Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, 10 to 150 chars) | question (string, 21 to 64.2k chars) | answer (string, 19 to 59.4k chars) | tags (string, 5 to 112 chars) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---|
1,907,400 | 70,656,424 |
Is there a way to disable some function in a Python class so that it cannot be used except inside its class?
|
<p>For example, I have a <em>myClassFile.py</em> file with code as follows:</p>
<pre><code>class myClass:
    def first(self):
        return 'tea'

    def second(self):
        print(f'drink {self.first()}')
</code></pre>
<p>Then I have a <em>run.py</em> file with the following code:</p>
<pre><code>from myClassFile import myClass
class_ = myClass()
class_.second()
</code></pre>
<p>which, when I run it, will output:</p>
<pre><code>>>> 'drink tea'
</code></pre>
<p>How do I prevent someone from writing the code below in the <em>run.py</em> file, or anywhere outside <em>myClass</em>?</p>
<pre><code>class_.first()
</code></pre>
<p>so that if they use that method outside the <em>myClass</em> class it raises an error of some sort?</p>
|
<p>You can add a level of protection around methods and attributes by prefixing them with <code>__</code>.</p>
<p>But you can't make them totally private (as far as I know), there's always a way around, as shown in example below.</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
def __init__(self):
self.__a = 1
def __method(self):
return 2
obj = MyClass()
# obj.__a # raise an exception
# obj.__method() # raise an exception
print(dir(obj)) # you can see the method and attributes have been renamed !
print(obj._MyClass__a) # 1
print(obj._MyClass__method()) # 2
</code></pre>
|
python|python-class|python-object
| 3 |
1,907,401 | 69,875,328 |
Can I create 2 modules for a single page using tkinter?
|
<p>I want to make a single page using Tkinter, and I want to split it into 2 modules; that way, I can simplify the code.</p>
<p>The code that I've made is:</p>
<p><strong>module 1</strong> (a1.py)</p>
<pre><code>from tkinter import *
from a2 import frame
root=Tk()
root.title("voccabulary journel")
root.geometry("700x450")
root.configure(bg='#ff8000')
frame()
root.mainloop()
</code></pre>
<p><strong>module 2</strong>(a2.py)</p>
<pre><code>from tkinter import *

def frame():
    Grid.rowconfigure(root,0,weight=1)
    Grid.columnconfigure(root,0,weight=1)
    Grid.columnconfigure(root,1,weight=3)

    frame1=Frame(root,background='black')
    frame1.grid(row=0,column=0,sticky='nsew')
    frame2=Frame(root,background='white')
    frame2.grid(row=0,column=1,sticky='nsew')

    labeltry1=Label(frame1, text='testing')
    labeltry1.pack()
    labeltry2=Label(frame2,text='tersting')
    labeltry2.pack()
</code></pre>
<p>I could have written it in one module, but I just want to simplify it.</p>
<p>I will attach an image of the terminal anyway.</p>
<p><a href="https://i.stack.imgur.com/BJ0lc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BJ0lc.png" alt="enter image description here" /></a></p>
|
<p>There is a good rule: send all values explicitly as arguments.</p>
<p>And this is your problem - in <code>frame()</code> you use <code>root</code>, which you didn't send as an argument.</p>
<p>Use:</p>
<p><strong>a2.py</strong></p>
<pre><code>def frame(root):
    # code
</code></pre>
<p><strong>a1.py</strong></p>
<pre><code>frame(root)
</code></pre>
<p>This resolves your problem and makes the code more readable and simpler to debug.</p>
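<p>A minimal sketch of the two files with <code>root</code> passed explicitly; the widget calls and window settings follow the question, and the exact layout is only illustrative:</p>
<pre><code># a2.py
from tkinter import *

def frame(root):
    Grid.rowconfigure(root, 0, weight=1)
    Grid.columnconfigure(root, 0, weight=1)
    Grid.columnconfigure(root, 1, weight=3)
    frame1 = Frame(root, background='black')
    frame1.grid(row=0, column=0, sticky='nsew')
    frame2 = Frame(root, background='white')
    frame2.grid(row=0, column=1, sticky='nsew')
    Label(frame1, text='testing').pack()
    Label(frame2, text='testing').pack()

# a1.py
from tkinter import *
from a2 import frame

root = Tk()
root.title("vocabulary journal")
root.geometry("700x450")
root.configure(bg='#ff8000')
frame(root)        # root is now passed in explicitly
root.mainloop()
</code></pre>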
|
python|python-3.x|tkinter|software-design|tkinter-canvas
| 1 |
1,907,402 | 73,137,807 |
error : ", ".join(_PIL_INTERPOLATION_METHODS.keys())))
|
<p>I'm training a model for image classification and I keep getting this error while training.
<a href="https://i.stack.imgur.com/okw7u.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>I think you wrote "billinear". Change it to "bilinear"</p>
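<p>For example, if the typo is in the <code>interpolation</code> argument of a Keras image loading call (an assumption, since the question only shows a screenshot of the traceback), the fix would look like this:</p>
<pre><code># hypothetical call site; the question does not show the actual code
from tensorflow.keras.preprocessing.image import load_img

img = load_img('example.jpg', target_size=(224, 224),
               interpolation='bilinear')   # was misspelled 'billinear'
</code></pre>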
|
python
| 0 |
1,907,403 | 73,000,879 |
pandas read_excel every kth column
|
<p>I am trying to read every kth column of an Excel file using <code>pandas.read_excel</code>. From the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">documentation</a> <code>usecols</code> option with a callable seems useful:</p>
<blockquote>
<p>If callable, then evaluate each column name against it and parse the
column if the callable returns True.</p>
</blockquote>
<p>Is there a way for the callable to take in the column number rather than column name? Something like:</p>
<pre><code>pd.read_excel('file.xls', usecols=lambda col_number: not col_number % k)
</code></pre>
|
<p>This is the best way that I know of: read the first row, figure out the number of columns, then create an array with every kth column's integer index. (And the callable only receives the column name.)</p>
<pre><code>import pandas as pd
import numpy as np

dfe = pd.read_excel(r'D:\jchfiles\excel\jch\house\Rats.xlsx', nrows=1)
k = 3
print(dfe.shape)
# (1, 9)
nb_sel_cols = dfe.shape[1]//k
print(nb_sel_cols)
# 3
sel_cols = np.arange(nb_sel_cols)*k
print(sel_cols)
# [0 3 6]
df_rats = pd.read_excel(r'D:\jchfiles\excel\jch\house\Rats.xlsx', usecols=sel_cols)
df_rats.info()
# <class 'pandas.core.frame.DataFrame'>
# RangeIndex: 16 entries, 0 to 15
# Data columns (total 3 columns):
#  #   Column      Non-Null Count  Dtype
# ---  ------      --------------  -----
#  0   Unnamed: 0  3 non-null      datetime64[ns]
#  1   Inside      3 non-null      float64
#  2   Where       3 non-null      object
# dtypes: datetime64[ns](1), float64(1), object(1)
# memory usage: 512.0+ bytes
</code></pre>
|
excel|pandas
| 1 |
1,907,404 | 73,051,429 |
Why is Python requests returning a different text value to what I get when I navigate to the webpage by hand?
|
<p>I am trying to build a simple 'stock-checker' for a T-shirt I want to buy. Here is the link: <a href="https://yesfriends.co/products/mens-t-shirt-black?variant=40840532689069" rel="nofollow noreferrer">https://yesfriends.co/products/mens-t-shirt-black?variant=40840532689069</a></p>
<p><a href="https://i.stack.imgur.com/usSJN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/usSJN.jpg" alt="t-shirt" /></a></p>
<p>As you can see, I am presented with 'Coming Soon' text, whereas usually if an item is in stock, it will show 'Add To Cart'.</p>
<p>I thought the simplest way would be to use <code>requests</code> and <code>beautifulsoup</code> to isolate this <code><button></code> tag and read the value of its text. If it eventually says 'Add To Cart', then I will write the code to email/message myself that it's back in stock.</p>
<p><a href="https://i.stack.imgur.com/yHzY7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yHzY7.png" alt="chrome inspect" /></a></p>
<p>However, here's the code I have so far, and you'll see that the response says the text contains 'Add To Cart', which is not what the website actually shows.</p>
<pre class="lang-py prettyprint-override"><code>import requests
import bs4
URL = 'https://yesfriends.co/products/mens-t-shirt-black?variant=40840532689069'
def check_stock(url):
page = requests.get(url)
soup = bs4.BeautifulSoup(page.content, "html.parser")
buttons = soup.find_all('button', {'name': 'add'})
return buttons
if __name__ == '__main__':
buttons = check_stock(URL)
print(buttons[0].text)
</code></pre>
|
<p>All the data is available in a <code><script></code> tag as JSON, so we need to get it and extract the information we need. Let's use a simple slice by indexes to get clean JSON:</p>
<pre><code>import requests
import json

url = 'https://yesfriends.co/products/mens-t-shirt-black'
response = requests.get(url)
index_start = response.text.index('product:', 0) + len('product:')
index_finish = response.text.index(', }', index_start)
json_obj = json.loads(response.text[index_start:index_finish])

for variant in json_obj['variants']:
    available = 'IN STOCK' if variant['available'] else 'OUT OF STOCK'
    print(variant['id'], variant['option1'], available)
</code></pre>
<p>OUTPUT:</p>
<pre><code>40840532623533 XXS OUT OF STOCK
40840532656301 XS OUT OF STOCK
40840532689069 S OUT OF STOCK
40840532721837 M OUT OF STOCK
40840532754605 L OUT OF STOCK
40840532787373 XL OUT OF STOCK
40840532820141 XXL OUT OF STOCK
40840532852909 3XL IN STOCK
40840532885677 4XL OUT OF STOCK
</code></pre>
|
python|html|beautifulsoup|python-requests
| 1 |
1,907,405 | 55,932,167 |
Python math is correct in command line while painfully wrong in py file?
|
<p>I was working on a project from r/dailyprogrammer. I copied one of the code samples for reference and tried to run it, but it did the math wrongly.</p>
<pre><code>def N_queens_validator(n):
</code></pre>
<p>(...) in this part I try to illustrate the board</p>
<pre><code>    if len(set(n))!=len(n):
        return print(f'{n} =>False same row')
    else:
        origin=[(ind,val) for ind,val in enumerate(n)]
        a=origin[:]
        for m in range (len(n)):
            root=a.pop(0)
            for i in range(m+1,len(n)):
                result=root[0]-origin[i][0]/root[1]-origin[i][1]
                print(str(root[0]-origin[i][0])+'/'+str(root[1]-origin[i][0])+'the result is: '+str(result))
                if np.abs(result)==1:
                    return print(f'{n} =>False same diagonal')
        return print(f'{n} =>True')

N_queens_validator([8, 6, 4, 2, 7, 1, 3, 5])
</code></pre>
<p>And here is the result, which is just nonsense. Obviously the math was done wrongly:
<a href="https://i.stack.imgur.com/cgHSD.png" rel="nofollow noreferrer">the result of the program</a></p>
|
<p>You have two problems with your code:</p>
<p>1) You need, as user2357112 says, to put parentheses around the two subtractions so that the subtractions are done before the division.</p>
<p>2) You had a typo in your print statement. <code>str(root[1]-origin[i][0])</code> should be <code>str(root[1]-origin[i][1])</code></p>
<p>Here's the fixed version of your code:</p>
<pre><code>def N_queens_validator(n):
    if len(set(n))!=len(n):
        return print(f'{n} =>False same row')
    else:
        origin=[(ind,val) for ind,val in enumerate(n)]
        a=origin[:]
        for m in range (len(n)):
            root=a.pop(0)
            for i in range(m+1,len(n)):
                result=(root[0]-origin[i][0])/(root[1]-origin[i][1])
                print(str(root[0]-origin[i][0])+'/'+str(root[1]-origin[i][1])+' the result is: '+str(result))
                if abs(result)==1:
                    return print(f'{n} =>False same diagonal')
        return print(f'{n} =>True')

N_queens_validator([8, 6, 4, 2, 7, 1, 3, 5])
</code></pre>
<p>Result:</p>
<pre><code>-1/2 the result is: -0.5
-2/4 the result is: -0.5
-3/6 the result is: -0.5
-4/1 the result is: -4.0
-5/7 the result is: -0.7142857142857143
-6/5 the result is: -1.2
-7/3 the result is: -2.3333333333333335
-1/2 the result is: -0.5
-2/4 the result is: -0.5
-3/-1 the result is: 3.0
-4/5 the result is: -0.8
-5/3 the result is: -1.6666666666666667
-6/1 the result is: -6.0
-1/2 the result is: -0.5
-2/-3 the result is: 0.6666666666666666
-3/3 the result is: -1.0
[8, 6, 4, 2, 7, 1, 3, 5] =>False same diagonal
</code></pre>
|
python
| 2 |
1,907,406 | 55,689,719 |
Unable to download blobs from azure blob storage
|
<p>I'm trying to download blobs from a sub-directory of an Azure blob container. I'm only able to download a few files; for the remaining ones it throws "HTTP status code=416, Exception=The range specified is invalid for the current size of the resource. ErrorCode: InvalidRange". I am able to download the blobs directly from Azure, but programmatically only a few were downloaded.</p>
|
<p>I got this error, when I tried to download an empty blob. Using Peter Pan's code, you need to add a check for emptiness.</p>
<pre><code>blob_info = blob_service.get_blob_properties(container_name, blob_name)
if blob_info.properties.content_length == 0:
    return "empty blob"
</code></pre>
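<p>A minimal sketch of how that check could fit into a download loop, assuming the legacy <code>azure-storage-blob</code> 2.x <code>BlockBlobService</code> client that the snippet above implies; the account, container, and prefix names are placeholders:</p>
<pre><code>from azure.storage.blob import BlockBlobService  # legacy 2.x SDK

blob_service = BlockBlobService(account_name='myaccount', account_key='mykey')
container_name = 'mycontainer'

for blob in blob_service.list_blobs(container_name, prefix='sub-directory/'):
    props = blob_service.get_blob_properties(container_name, blob.name)
    if props.properties.content_length == 0:
        print('skipping empty blob:', blob.name)
        continue
    # download non-empty blobs next to the script, keeping only the file name
    blob_service.get_blob_to_path(container_name, blob.name, blob.name.split('/')[-1])
</code></pre>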
|
python|azure-blob-storage
| 2 |
1,907,407 | 49,923,765 |
python pause infinite loop for code outside loop to be executed
|
<p>I am wondering if this is possible. I want to have an infinite while loop but still want the rest of the code outside the while loop to continue running while the loop is on, like a way to break out of the while loop after each iteration, for instance. I need to find a way for the second print statement to be executed without breaking the while loop completely.</p>
<pre><code>while True:
    print('i will loop forever')

print('this code will never be executed because of the while loop')
</code></pre>
|
<p>There are a number of ways to accomplish this, such as threading. However, it looks like you may wish to serially loop for a while, break, then continue. Generators excel at this.</p>
<p>for example:</p>
<pre><code>def some_loop():
    i=0
    while True:
        yield i
        i+=1

my_loop=some_loop()

#loop for a while
for i in range(20):
    print(next(my_loop))

#do other stuff
do_other_stuff()

#loop some more, picking up where we left off
for i in range(10):
    print(next(my_loop))
</code></pre>
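<p>If the loop really must keep running at the same time as the rest of the script, a minimal threading sketch looks like this; the loop body and the follow-up work are placeholders:</p>
<pre><code>import threading
import time

def forever():
    while True:
        print('i will loop forever')
        time.sleep(1)

# daemon=True so the background loop does not keep the process alive on exit
t = threading.Thread(target=forever, daemon=True)
t.start()

print('this code now runs while the loop is running')
time.sleep(5)  # the main thread keeps doing its own work here
</code></pre>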
|
python
| 1 |
1,907,408 | 49,936,845 |
sys.stdout.close() running forever
|
<p>I want to create a file that will contain all the log messages from my Python script: print commands, how much time each cell needs to run, etc. I use the following:</p>
<pre><code>import sys
import logging

sys.stdout = open('my/directory' + "log.txt", 'w')
print("here is my print message" + '\n')
sys.stdout.close()
</code></pre>
<p>I'm adding this last command to check and see when my script finishes</p>
<pre><code>print('finish!!!')
</code></pre>
<p>I tested on Windows (where I use Spyder) and on Linux (where I use Jupyter); in both environments the last cell never stops and I never get the 'finish' message. Any ideas, or any alternatives for what I need to do?</p>
|
<p>I had the same problem. Let's say you have this in a test.py file:</p>
<pre><code>import sys

def print_hello():
    with open('log.txt', 'w') as f:
        sys.stdout = f
        print('hello')
</code></pre>
<p>If you run this from the command line with <code>python test.py</code>, you won't have any problem. But if you run this in a Jupyter notebook, you have to add a stdout_obj like so:</p>
<pre><code>stdout_obj = sys.stdout   # store original stdout
print_hello()
sys.stdout = stdout_obj   # restore print commands to the interactive prompt
</code></pre>
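<p>An alternative sketch that avoids restoring <code>sys.stdout</code> by hand is <code>contextlib.redirect_stdout</code> from the standard library; the file name is a placeholder:</p>
<pre><code>import contextlib

with open('log.txt', 'w') as f, contextlib.redirect_stdout(f):
    print("here is my print message")  # goes to log.txt

print('finish!!!')  # stdout is automatically back to normal here
</code></pre>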
|
python|linux|windows|logging
| 0 |
1,907,409 | 66,416,018 |
delete CSV rows with conditional in python
|
<p>I have a csv file with the following:</p>
<pre><code>storeNumber, sale1, sale2
1, 1, 1
2, 0, 0
3, 1, 0
4, 0, 1
...
25, 0, 0
26, 1, 0
27, 0, 1
28, 0,0
</code></pre>
<p>I need to delete rows where sale1 and sale2 are both equal to 0.</p>
<p>I have the following code setup:</p>
<pre><code>import pandas as pd
df = pd.read_csv('sales.csv', index_col=0)
df_new = df[df.sale1 != 0] and df[df.sale2 != 0]
print(df_new)
</code></pre>
<p>The code works if I only filter on one column's 0 values:</p>
<pre><code>df_new = df[df.sale1 != 0]
</code></pre>
<p>or</p>
<pre><code>df_new = df[df.sale2 != 0]
</code></pre>
<p>However, when I combine the two conditions with "and" as above, I get an error that says:</p>
<pre><code>ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>what is the right code for deleting rows that have 0 value for both sale1 and sale2?</p>
|
<p>The operator you need to use to combine the two logical conditions is <code>&</code> instead of <code>and</code>. This is explained in detail <a href="https://stackoverflow.com/questions/21415661/logical-operators-for-boolean-indexing-in-pandas">here</a>. So, what you need is:</p>
<pre><code>df_new = df[(df.sale1 != 0) & (df.sale2 != 0)]
</code></pre>
<p>Notice that both conditions must be in parentheses, since <code>&</code> binds more tightly than <code>!=</code>.</p>
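<p>As a side note, an equivalent sketch with <code>DataFrame.query</code>, where a plain <code>and</code> is allowed inside the expression string:</p>
<pre><code># same filter as above, written as a query expression
df_new = df.query("sale1 != 0 and sale2 != 0")
</code></pre>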
|
python|pandas|dataframe|delete-row
| 1 |
1,907,410 | 66,361,452 |
Find the target difference in a pair with recursion
|
<p>Given a list of unsorted integers and a target integer, find out if any pair's difference in the list is equal to the target integer with recursion.</p>
<pre><code>>>> aList = [5, 4, 8, -3, 6]
>>> target = 9
return True
>>> aList = [-1, 5, 4]
>>> target = 3
return False
</code></pre>
<ul>
<li>For and while loops are not allowed.</li>
<li>No imports allowed.</li>
<li>.sort() is not allowed.</li>
</ul>
<p>I tried this and it didn't work.</p>
<pre><code>def calculate(aList, target):
    if len(aList) == 0 and diff != 0:
        return False
    startIndex = 0
    endIndex = len(aList) - 1
    return resursive_sum(aList, target, startIndex, endIndex)

def resursive_sum(aList, targ, start, end):
    print(f'Start: {start}')
    print(f'End: {end}')
    if start == end:
        return False
    elif aList[end] - aList[start] == targ:
        return True
    elif aList[end] - aList[start] < targ:
        return resursive_sum(values, targ, start, end - 1)
    return resursive_sum(aList, targ, start + 1, end)
</code></pre>
<p>I'm unsure of how this problem could be solved if we aren't able to use loops to sort the list. Even if we could use recursion to sort the list, how should the recursion look so that it can scan every pair's difference?</p>
|
<p>So I actually implemented it, but for educational purposes I'm not gonna post it until a bit later (I'll update it in a few hours) as I assume this is for a class or some other setting where you should figure it out on your own.</p>
<p>Assume you are trying to hit a difference target <code>t = 5</code> and you are evaluating an arbitrary element <code>8</code>. There are only two values that would allow <code>8</code> to have a complement in the set: <code>8 + 5 = 13</code> and <code>8 - 5 = 3</code>.</p>
<p>If <code>3</code> or <code>13</code> had been in any previous elements, you would know that the set has a pair of complements. Otherwise, you'd want to record the fact that <code>8</code> had been seen. Thereby, if <code>3</code> was found later, <code>8</code> would be queried as <code>3 + 5 = 8</code> would be considered.</p>
<p>In other words, I am proposing a method where you recursively traverse the list and either</p>
<ul>
<li>(base case) Are at the end of the list</li>
<li>Have a current element <code>a</code> such that <code>a + t</code> or <code>a - t</code> has been seen</li>
<li>Record that the current element has been seen and go to the next element</li>
</ul>
<p>Ideally, this should have O(n) time complexity and O(n) space complexity in the worst case (assuming efficient implementation with pass-by-reference or similar, and also amortized constant-time set query). It can also be implemented using a basic array, but I'm not going to say that's better (in python).</p>
<p>I'll post my solution in a few hours. Good luck!</p>
<hr />
<p><strong>EDIT 1:</strong> Hopefully, you had enough time to get it to work. The method I described can be done as follows:</p>
<pre><code>def hasDiffRecur(L, t, i, C):
    """
    Recursive version to see if list has difference
    :param L: List to be considered
    :param t: Target difference
    :param i: Current index to consider
    :param C: Cache set
    """
    # We've reached the end. Give up
    if i >= len(L):
        return False

    print(f" > L[{i}] = {L[i]:2}; is {L[i]-t:3} or {L[i]+t:2} in {C}")

    # Has the complement been cached?
    if L[i] - t in C:
        print(f"! Difference between {L[i]} and {L[i]-t} is {t}")
        return True
    if L[i] + t in C:
        print(f"! Difference between {L[i]} and {L[i]+t} is {t}")
        return True

    # Complement not seen yet. Cache element and go to next element
    C.add(L[i])
    return hasDiffRecur(L, t, i+1, C)

###################################################################

def hasDiff(L, t):
    """
    Initialized call for hasDiffRecur. Also prints intro message.
    See hasDiffRecur for param info
    """
    print(f"\nIs a difference of {t} present in {L}?")
    return hasDiffRecur(L, t, 0, set())

###################################################################

hasDiff([5, 4, 8, -3, 6], 9)
hasDiff([-1, 5, 4], 3)
hasDiff([-1, 5, 4, -1, 7], 0) # If concerned about set non-duplicity
</code></pre>
<p><strong>OUTPUT:</strong></p>
<pre><code>Is a difference of 9 present in [5, 4, 8, -3, 6]?
> L[0] = 5; is -4 or 14 in set()
> L[1] = 4; is -5 or 13 in {5}
> L[2] = 8; is -1 or 17 in {4, 5}
> L[3] = -3; is -12 or 6 in {8, 4, 5}
> L[4] = 6; is -3 or 15 in {8, -3, 4, 5}
! Difference between 6 and -3 is 9
Is a difference of 3 present in [-1, 5, 4]?
> L[0] = -1; is -4 or 2 in set()
> L[1] = 5; is 2 or 8 in {-1}
> L[2] = 4; is 1 or 7 in {5, -1}
Is a difference of 0 present in [-1, 5, 4, -1, 7]?
> L[0] = -1; is -1 or -1 in set()
> L[1] = 5; is 5 or 5 in {-1}
> L[2] = 4; is 4 or 4 in {5, -1}
> L[3] = -1; is -1 or -1 in {4, 5, -1}
! Difference between -1 and -1 is 0
</code></pre>
<hr />
<p><strong>EDIT 2:</strong></p>
<p>This is a pretty clever and efficient solution. I do realize that maybe it is the intention to not allow any traversal at all (i.e. no existence querying for the set). If that is the case, the above approach can be done with a constant-size list that is pre-allocated to a size equal to the range of the values of the list.</p>
<p>If the notion of pre-allocating to the size of the range of the list is still too much iteration, I can think of the exhaustive approach implemented recursively. There is likely a more efficient approach for this, but you could boil the problem down to a double-for-loop-like problem (O(n^2) time complexity). This is a trivial algorithm and I think you can understand it without documentation, so I'll just throw it in there to be complete:</p>
<pre><code>def hasDiffRecur(L, t, i = 0, j = 1):
    if i >= len(L): return False
    if j >= len(L): return hasDiffRecur(L, t, i+1, i+2)
    if abs(L[i] - L[j]) == t: return True
    return hasDiffRecur(L, t, i, j+1)

###################################################################

print(hasDiffRecur([5, 4, 8, -3, 6], 9))     # True
print(hasDiffRecur([-1, 5, 4], 3))           # False
print(hasDiffRecur([-1, 5, 4, -1, 7], 0))    # True
</code></pre>
|
python|recursion
| 1 |
1,907,411 | 64,903,507 |
(Python) If statement to always search for specific class name to be true and present and only then will it run the code below it
|
<p>I have been struggling on this for a while.</p>
<p>I want it to always be searching for this class name and trigger the code below it when the class is found. I don't want the script below it to be active until the class name is present (until a pop-up box appears, for example).</p>
<p>I am using selenium and webdriver.</p>
<p>I am trying to make this work:</p>
<pre><code>while true:
    time.sleep(random.randint(2,3))
    if driver.find_element_by_class_name("recaptcha-checkbox-border"):
        #driver.refresh()
        #time.sleep(random.randint(2,3))

        #switch to recaptcha frame
        driver = webdriver.Chrome(os.getcwd()+"\\webdriver\\chromedriver.exe")
        delay()
        frames=driver.find_elements_by_tag_name("iframe")
        driver.switch_to.frame(frames[0]);
        delay()

        #click on checkbox to activate recaptcha
        driver.find_element_by_class_name("recaptcha-checkbox-border").click()

        #switch to recaptcha audio control frame
        driver.switch_to.default_content()
        frames=driver.find_element_by_xpath("/html/body/div[2]/div[4]").find_elements_by_tag_name("iframe")
        driver.switch_to.frame(frames[0])
        delay()

        #click on audio challenge
        driver.find_element_by_id("recaptcha-audio-button").click()

        #switch to recaptcha audio challenge frame
        driver.switch_to.default_content()
        frames= driver.find_elements_by_tag_name("iframe")
        driver.switch_to.frame(frames[-1])
        delay()
</code></pre>
|
<pre><code>while True:
    sleep(10)
    #driver.refresh()
    #time.sleep(random.randint(2,3))
    if driver.find_element_by_class_name("rc-anchor-pt"):
        driver.find_element_by_xpath("/html/body/div/div/div[3]/div/button").click()

        #get the mp3 audio file
        src = driver.find_element_by_id("audio-source").get_attribute("src")
        print("[INFO] Audio src: %s"%src)

        #download the mp3 audio file from the source
        urllib.request.urlretrieve(src, os.getcwd()+"\\sample.mp3")
        sound = pydub.AudioSegment.from_mp3(os.getcwd()+"\\sample.mp3")
        sound.export(os.getcwd()+"\\sample.wav", format="wav")
        sample_audio = sr.AudioFile(os.getcwd()+"\\sample.wav")
        r= sr.Recognizer()
        break
</code></pre>
|
python|class|if-statement|xpath
| 0 |
1,907,412 | 53,067,197 |
Python 2.7: This code isn't working. Any Ideas?
|
<p>It's supposed to be a simple, fun password cracker, but whenever I run it, it does nothing. There is apparently no error. Any ideas on what is wrong?</p>
<pre><code>number = 0
password = 200
i = 10

while i == 10:
    if number != password:
        number = float(number) + float(1)

while i == 10:
    if number == password:
        print("Password found, Password is: {1}".format (number))
</code></pre>
<p>Thanks!</p>
|
<p>Your code is entering an infinite loop since <code>i==10</code> will always be true.</p>
<pre><code>while i == 10:
    if number != password:
        number = float(number) + float(1)
</code></pre>
<p>This loop runs forever since you never redefine <code>i</code> within it</p>
<p>Additionally, the string formatting in the print is incorrect since there is no element of index <code>{1}</code>. Try <code>{0}</code> instead.</p>
<pre><code>print("Password found, Password is: {0}".format (number))
</code></pre>
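<p>Putting both fixes together, a minimal sketch of the search loop; replacing the <code>i == 10</code> flag with a direct exit condition is an assumption about the intent:</p>
<pre><code>number = 0
password = 200

while number != password:   # loop until the password is matched
    number += 1

print("Password found, Password is: {0}".format(number))
</code></pre>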
|
python
| 2 |
1,907,413 | 53,033,636 |
Why are the data I want to extract from a web page not in my soup page?
|
<p><a href="https://www.questdiagnostics.com/testcenter/TestDetail.action?tabName=OrderingInfo&ntc=36127&searchString=36127" rel="nofollow noreferrer">The web page I am attempting to extract data from.</a></p>
<p><a href="https://i.stack.imgur.com/bB8Bp.png" rel="nofollow noreferrer">Picture of the data I am trying to extract.</a> I want to extract the Test Code, CPT Code(s), Preferred Specimen(s), Minimum Volume, Transport Container, and Transport Temperature.</p>
<p>When I print the soup page, it does not contain the data I need. Therefore, I cannot extract it. Here is how I print the soup page:</p>
<pre><code>soup_page = soup(html_page, "html.parser")
result = soup_page
print(result)
</code></pre>
<p>But when I inspect the elements of interest from the web page, I can see the HTML contains the data of interest. Here is some of the HTML:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><h4>Test Code</h4><p>36127</p><span class="LegacyOrder" style="word-wrap:break-word;visibility:hidden"></span><input id="primaryTestCode" value="36127" type="hidden"><input id="searchStringValue" value="36127" type="hidden"><span class="LisTranslatableVerbiage" style="word-wrap:break-word;visibility:hidden"></span></code></pre>
</div>
</div>
</p>
|
<p>It appears that in order to find the desired page, you must first select the test region from a dropdown menu, and press the button "Go". To do so in Python, you will have to use <code>selenium</code>:</p>
<pre><code>from selenium import webdriver
from bs4 import BeautifulSoup as soup
import re, time

def get_page_data(_source):
    headers = ['Test Code', 'CPT Code(s)', 'Preferred Specimen(s)', 'Minimum Volume', 'Transport Container', 'Transport Temperature']
    d1 = list(filter(None, [i.text for i in _source.find('div', {'id':'labDetail'}).find_all(re.compile('h4|p'))]))
    return {d1[i]:d1[i+1] for i in range(len(d1)-1) if d1[i].rstrip() in headers}

d = webdriver.Chrome('/path/to/chromedriver')
d.get('https://www.questdiagnostics.com/testcenter/TestDetail.action?tabName=OrderingInfo&ntc=36127&searchString=36127')
_d = soup(d.page_source, 'html.parser')
_options = [i.text for i in _d.find_all('option', {'value':re.compile('[A-Z]+')})]
_lab_regions = {}

for _op in _options:
    d.find_element_by_xpath(f"//select[@id='labs']/option[text()='{_op}']").click()
    try:
        d.find_element_by_xpath("//button[@class='confirm go']").click()
    except:
        d.find_element_by_xpath("//button[@class='confirm update']").click()
    _lab_regions[_op] = get_page_data(soup(d.page_source, 'html.parser'))
    time.sleep(2)

print(_lab_regions)
</code></pre>
<p>Output:</p>
<pre><code>{'AL - Solstas Birmingham 2732 7th Ave South (866)281-9838 (SKB)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'AZ - Tempe 1255 W Washington St (800)766-6721 (QSO)': {}, 'CA - Quest Diagnostics Infectious Disease, Inc. 33608 Ortega Hwy (800) 445-4032 (FDX)': {}, 'CA - Sacramento 3714 Northgate Blvd (866)697-8378 (MET)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'CA - San Jose 967 Mabury Rd (866)697-8378 (MET)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'CA - San Juan Capistrano 33608 Ortega Hwy (800) 642-4657 (SJC)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Temperature ': 'Room temperature'}, 'CA - Valencia 27027 Tourney Road (800) 421-7110 (SLI)': {}, 'CA - West Hills 8401 Fallbrook Ave (866)697-8378 (MET)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'CO - Midwest 695 S Broadway (866) 697-8378 (STL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'CT - Wallingford 3 Sterling Dr (866)697-8378 (NEL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'FL - Miramar 10200 Commerce Pkwy (866)697-8378 (TMP)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'FL - Tampa 4225 E Fowler Ave (866)697-8378 (TMP)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'GA - Tucker 1777 Montreal Cir (866)697-8378 (SKB)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'IL - Wood Dale 1355 Mittel Blvd (866)697-8378 (WDL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'IN - Indianapolis 2560 North Shadeland Avenue (317)803-1010 (MJV)': {'Test Code': '36127', 'CPT Code(s) ': '84443'}, 'KS - Lenexa 10101 Renner Blvd (866)697-8378 (STL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport 
Temperature ': 'Room temperature'}, 'MA - Marlborough 200 Forest Street (866) 697-8378 (NEL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'MD - East Region - Baltimore, 1901 Sulphur Spring Rd (866) 697-8378) (PHP)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'MD - Baltimore 1901 Sulphur Spring Rd (866)697-8378 (QBA)': {'Test Code': '36127X', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL (0.7 mL minimum) serumPlasma is no longer acceptable', 'Minimum Volume ': '0.7 mL'}, 'MI - Troy 1947 Technology Drive (866)697-8378 (WDL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'MO - Maryland Heights 11636 Administration Dr (866)697-8378 (STL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'NC - Greensboro 4380 Federal Drive (866)697-8378 (SKB)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'NJ - Teterboro 1 Malcolm Ave (866)697-8378 (QTE)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'NM - Albuquerque 5601 Office Blvd ABQ (866) 697-8378 (DAL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'NV - Las Vegas 4230 Burnham Ave (866)697-8378 (MET)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'NY - Melville 50 Republic Rd Suite 200 - (516) 677-3800 (QTE)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'OH - Cincinnati 6700 Steger Dr (866)697-8378 (WDL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'OH - Dayton 2308 Sandridge Dr. (937) 297 - 8305 (DAP)': {'Test Code': '36127'}, 'OK - Oklahoma City 225 NE 97th Street (800)891-2917 (DLO)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': 'PATIENT PREPARATION:SPECIMEN COLLECTION AFTER FLUORESCEIN DYE ANGIOGRAPHY SHOULDBE DELAYED FOR AT LEAST 3 DAYS. FOR PATIENTS ONHEMODIALYSIS, SPECIMEN COLLECTION SHOULD BE DELAYED FOR 2WEEKS. 
ACCORDING TO THE ASSAY MANUFACTURER SIEMENS:"SAMPLES CONTAINING FLUORESCEIN CAN PRODUCE FALSELYDEPRESSED VALUES WHEN TESTED WITH THE ADVIA CENTAUR TSH3ULTRA ASSAY."1 ML SERUMINSTRUCTIONS:THIS ASSAY SHOULD ONLY BE ORDERED ON PATIENTS 1 YEAR OF AGEOR OLDER. ORDERS ON PATIENTS YOUNGER THAN 1 YEAR WILL HAVEA TSH ONLY PERFORMED.', 'Minimum Volume ': '0.7 ML', 'Transport Container ': 'SERUM SEPARATOR TUBE (SST)', 'Transport Temperature ': 'ROOM TEMPERATURE'}, 'OR - Portland 6600 SW Hampton Street (800)222-7941 (SEA)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'PA - Erie 1526 Peach St (814)461-2400 (QER)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': 'Preferred Specimen Volume: 1.0 mlSpecimen Type SERUMSpecimen State Room temperaturePatient preparation: Specimen collection after fluoresceindye angiography should be delayed for at least 3 days. Forpatients on hemodialysis, specimen collection should bedelayed for 2 weeks. According to the assay manufacturerSiemens: Samples containing fluorescein can produce falselydepressed values when tested with the ADVIA Centaur TSH3Ultra Assay.STABILITYSerum:Room temperature: 7 daysRefrigerated: 7 daysFrozen: 28 days', 'Minimum Volume ': '0.7 ml', 'Transport Container ': 'Serum Separator'}, 'PA - Horsham 900 Business Center Dr (866)697-8378 (PHP)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'PA - Pittsburgh 875 Greentree Rd (866)697-8378 (QPT)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'TN - Knoxville, 501 19th St, Trustee Towers – Ste 300 & 301 (866)MY-QUEST (SKB)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'TN - Nashville 525 Mainstream Dr (866)697-8378 (SKB)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'TX - Houston 5850 Rogerdale Road (866)697-8378 (DAL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'TX - Irving 4770 Regent Blvd (866)697-8378 (DAL)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}, 'VA - Chantilly 14225 Newbrook Dr (703)802-6900 (AMD)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Temperature ': 'Room temperature'}, 'WA - Seattle 1737 Airport Way S (866)697-8378 (SEA)': {'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator 
Tube (SST®)', 'Transport Temperature ': 'Room temperature'}}
</code></pre>
<hr>
<p>Specifically, for the <code>"WA - Seattle 1737 Airport Way S (866)697-8378 (SEA)"</code> laboratory:</p>
<pre><code>print(_lab_regions["WA - Seattle 1737 Airport Way S (866)697-8378 (SEA)"])
</code></pre>
<p>Output:</p>
<pre><code>{'Test Code': '36127', 'CPT Code(s) ': '84443', 'Preferred Specimen(s) ': '1 mL serum', 'Minimum Volume ': '0.7 mL', 'Transport Container ': 'Serum Separator Tube (SST®)', 'Transport Temperature ': 'Room temperature'}
</code></pre>
|
python|web-scraping|beautifulsoup
| 0 |
1,907,414 | 68,814,471 |
Using pandas.melt to get all values for all variables
|
<p>I have the following df</p>
<pre><code>print(df1)
var0 var1 var2 var3
0 Andr HP 20 132
1 Valr Hone 21 542
2 Kor Star 12 623
</code></pre>
<p>I want to get all values for every variable in a 2 row table like so:</p>
<pre><code> var value
0 var0 Andr
1 var1 HP
2 var2 20
3 var3 132
4 var0 Valr
5 var1 Hone
...
</code></pre>
<p>I've tried using pandas.melt like this:</p>
<p><code>pd.melt(df1, id_vars =['var0'], value_vars =['var1'])</code></p>
<p>but this requires defining the columns used.</p>
<p>Is there any way to get the desired output without strictly defining which columns to use with <strong>pandas.melt</strong> or <strong>pandas.wide_to_long</strong>?</p>
|
<p>see <a href="https://pandas.pydata.org/docs/reference/api/pandas.melt.html" rel="nofollow noreferrer">pd.melt</a> doc</p>
<pre><code>pd.melt(df, var_name='var', ignore_index=False).sort_index().reset_index(drop=True)
</code></pre>
<p>Output:</p>
<pre><code> var value
0 var0 Andr
1 var1 HP
2 var2 20
3 var3 132
4 var0 Valr
5 var1 Hone
6 var2 21
7 var3 542
8 var0 Kor
9 var1 Star
10 var2 12
11 var3 623
</code></pre>
|
python|pandas
| 2 |
1,907,415 | 71,481,447 |
tensorflow.saved_model.load() takes extremely long time to execute
|
<p>I have the following function to download, store and load models from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">tensorflow model zoo</a>:</p>
<pre><code>def load_object_detection_model(model_name: str):
    models = load_model_zoo_list()
    model_url = models[model_name]['url']
    model_filename = models[model_name]['filename']

    pretrained_path = os.path.join(os.path.dirname(__file__), "pretrained_models")
    os.makedirs(pretrained_path, exist_ok=True)
    get_file(fname=model_filename, origin=model_url, cache_dir=pretrained_path, cache_subdir='cptr', extract=True)

    loaded_model = tf.saved_model.load(os.path.join(pretrained_path, 'cptr', model_name, "saved_model"))
    return loaded_model


def load_model_zoo_list():
    """
    :return:
    """
    path = os.path.join(os.path.dirname(__file__), "model_zoo.json")
    with open(path, 'r') as f:
        model_zoo_json = json.load(f)
    return model_zoo_json
</code></pre>
<p><strong>model_zoo.json</strong></p>
<pre><code>{
    "ssd_mobilenet_v2_320x320_coco17_tpu-8": {
        "url": "http://download.tensorflow.org/models/object_detection/tf2/20200711/ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz",
        "filename": "ssd_mobilenet_v2_320x320_coco17_tpu-8.tar.gz"
    }
}
</code></pre>
<p>The idea is to simply add more models to the JSON later; <code>ssd_mobilenet_v2_320x320_coco17_tpu-8</code> was simply chosen for testing at the moment.</p>
<p>The problem is the following. The line <code> loaded_model = tf.saved_model.load(os.path.join(pretrained_path, 'cptr', model_name, "saved_model"))</code> takes around 25-30 seconds to execute. The model is already downloaded at this point and the <code>saved_model</code> folder has a size of around 32Mb. I also tested with bigger models, which took even longer. Inference seems to be much too slow as well (compared to the speeds listed on the model zoo page).</p>
<p>Apart from that, the model seems to work.</p>
<p>What could be the reason for these models being so slow?</p>
|
<p>Got it! On the first model call, the graph is built, so the first call to the model is always slow. I tried your code on google colab using a GPU:</p>
<pre><code>model = load_object_detection_model("ssd_mobilenet_v2_320x320_coco17_tpu-8")
</code></pre>
<pre><code>%%time
a= model(np.random.randint(0, 255, size=(1, 320, 320, 3)).astype("uint8"))
</code></pre>
<p>CPU times: user 4.32 s, sys: 425 ms, total: 4.75 s<br />
Wall time: 4.71 s</p>
<pre><code>%%time
a= model(np.random.randint(0, 255, size=(1, 320, 320, 3)).astype("uint8"))
</code></pre>
<p>CPU times: user 124 ms, sys: 18.4 ms, total: 143 ms<br />
Wall time: 85.4 ms</p>
<p>In the document, they say 22 ms for this model, but maybe they got a faster GPU.</p>
|
python|tensorflow|keras
| 1 |
1,907,416 | 10,838,596 |
Python class--Super variable
|
<p>The piece of code below is giving me an error for some reason. Can someone tell me what the problem is?</p>
<p>Basically, I create 2 classes, Point & Circle. The Circle is trying to inherit from the Point class.</p>
<p>Code:</p>
<pre><code>class Point():
    x = 0.0
    y = 0.0

    def __init__(self, x, y):
        self.x = x
        self.y = y
        print("Point constructor")

    def ToString(self):
        return "{X:" + str(self.x) + ",Y:" + str(self.y) + "}"


class Circle(Point):
    radius = 0.0

    def __init__(self, x, y, radius):
        super(Point,self).__init__(x,y)
        self.radius = radius
        print("Circle constructor")

    def ToString(self):
        return super().ToString() + \
               ",{RADIUS=" + str(self.radius) + "}"


if __name__=='__main__':
    newpoint = Point(10,20)
    newcircle = Circle(10,20,0)
</code></pre>
<p>Error:</p>
<pre><code>C:\Python27>python Point.py
Point constructor
Traceback (most recent call last):
File "Point.py", line 29, in <module>
newcircle = Circle(10,20,0)
File "Point.py", line 18, in __init__
super().__init__(x,y)
TypeError: super() takes at least 1 argument (0 given)
</code></pre>
|
<p>It looks like you already may have fixed the original error, which was caused by <code>super().__init__(x,y)</code> as the error message indicates, although your fix was slightly incorrect, instead of <code>super(Point, self)</code> from the <code>Circle</code> class you should use <code>super(Circle, self)</code>.</p>
<p>Note that there is another place that calls <code>super()</code> incorrectly, inside of <code>Circle</code>'s <code>ToString()</code> method:</p>
<pre><code>    return super().ToString() + \
           ",{RADIUS=" + str(self.radius) + "}"
</code></pre>
<p>This is valid code on Python 3, but on Python 2 <code>super()</code> requires arguments, rewrite this as the following:</p>
<pre><code>    return super(Circle, self).ToString() + \
           ",{RADIUS=" + str(self.radius) + "}"
</code></pre>
<p>I would also recommend getting rid of the line continuation, see the <a href="http://www.python.org/dev/peps/pep-0008/#maximum-line-length">Maximum Line Length section of PEP 8</a> for the recommended way of fixing this.</p>
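<p>For reference, a sketch of both classes with the Python 2 style <code>super</code> calls and the line continuations removed; note that on Python 2 <code>super()</code> only works with new-style classes, so <code>Point</code> should also inherit from <code>object</code>:</p>
<pre><code>class Point(object):          # new-style class: required for super() on Python 2
    def __init__(self, x, y):
        self.x = x
        self.y = y
        print("Point constructor")

    def ToString(self):
        return "{X:" + str(self.x) + ",Y:" + str(self.y) + "}"


class Circle(Point):
    def __init__(self, x, y, radius):
        super(Circle, self).__init__(x, y)   # Circle, not Point
        self.radius = radius
        print("Circle constructor")

    def ToString(self):
        return (super(Circle, self).ToString() +
                ",{RADIUS=" + str(self.radius) + "}")


if __name__ == '__main__':
    newpoint = Point(10, 20)
    newcircle = Circle(10, 20, 0)
</code></pre>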
|
python
| 13 |
1,907,417 | 5,059,337 |
startup script to check if external monitor is attached
|
<p>My first question here; normally I can find answers with a couple searches, but not this time.</p>
<p>I want to write a script that will run on start up to check if an external monitor is attached to a laptop. </p>
<p>I would like to write the script in Python.</p>
<p>I'm using 32-bit Ubuntu 10.04. I've searched around but can't really find anything useful. Any recommendations? Thanks </p>
|
<p>The command <code>xrandr</code> should do the trick:</p>
<pre><code>$ xrandr -q
...
VGA-0 connected 1280x1024+0+26 ...
...
LVDS connected 1400x1050+1280+0 ...
...
DVI-0 disconnected ...
</code></pre>
<p>Inside your python script, run this command using the <a href="http://docs.python.org/library/subprocess.html" rel="nofollow">subprocess</a> module, then search the output string for the screen identifier you're looking for, e.g. "VGA-0 connected".</p>
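<p>A minimal sketch of that idea, assuming Python 2.7+ for <code>subprocess.check_output</code>; the connector name <code>VGA-0</code> depends on your hardware:</p>
<pre><code>import subprocess

# run xrandr and capture its output as a string
output = subprocess.check_output(['xrandr', '-q']).decode()

external_attached = any(
    line.startswith('VGA-0 connected')   # adjust to your connector name
    for line in output.splitlines()
)
print('External monitor attached:', external_attached)
</code></pre>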
|
python|hardware|detection
| 4 |
1,907,418 | 62,596,380 |
How does tensorflow-keras calculate the cost during training in each epoch?
|
<p>When training a model with tensorflow-keras using the <strong>fit</strong> function, I get the cost value in each epoch.</p>
<p>Let's say the training set contains 100 observations and the batch size is 20, therefore, the model's weights are updated 5 times in each epoch. When the epoch is finished, it prints the cost value of that epoch. My question is the following: Is the cost value the average of 5 batches cost, or is it the cost value of the last batch?</p>
|
<p>I believe you can read more on this topic on <a href="https://github.com/keras-team/keras/issues/10426" rel="nofollow noreferrer">this Github issue</a>. Insofar as I read it, it is the running average over the 5 batches.</p>
|
python|tensorflow|keras
| 1 |
1,907,419 | 62,516,866 |
How can I draw 3-d plot with this dataframe
|
<p>I edited the question :</p>
<p>my data looks like this</p>
<pre><code> m/z 300 301 302 303 … 1249
Rt
7.01 0 0 0 2.34 … 0
7.23 0 19.29 0 0 … 0
7.34 2.43 0 0 0 … 2.34
7.46 0 10.32 2.31 0 … 0
.
.
33.1314 0 0 24242.23 0 0
</code></pre>
<p>I would like to draw a 3D plot where x = m/z (300 to 1249), y = Rt (7.01 to 33.1314), and z = the non-zero values in the table (intensity), with z as the height on the 3D plot.</p>
|
<p>Try this</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
data=[[300, 7.0034],
[300, 10.232],
[0, 999.9],
[301, 7.00023],
[301, 7.00046]],
columns=['wavelength', 'm/z'],
)
non_zero_df = df[df.wavelength!=0]
</code></pre>
<p>Using <a href="https://matplotlib.org/3.2.1/gallery/shapes_and_collections/scatter.html" rel="nofollow noreferrer">matplotlib</a></p>
<pre><code>import matplotlib.pyplot as plt
plt.scatter(non_zero_df['wavelength'], non_zero_df['m/z'])
plt.show()
</code></pre>
<p>Using <a href="https://plotly.com/python/plotly-express/" rel="nofollow noreferrer">plotly</a></p>
<pre><code>import plotly_express as px
px.scatter(non_zero_df, x='wavelength', y='m/z')
</code></pre>
|
python|r|pandas|reshape
| 0 |
1,907,420 | 62,514,600 |
Can i run Selenium WebDriver visible browser window?
|
<p>Can I run Selenium WebDriver in a visible browser window?
I mean, all the code will work seamlessly for the user.</p>
|
<p>Yes, you can simply import time and insert a time.sleep(10) call to sleep 10 seconds while you manually interact with the open browser window. Selenium should resume where it left off once the sleep timer is expired.</p>
<pre><code>import time
time.sleep(10)
</code></pre>
|
python|selenium|selenium-webdriver
| 2 |
1,907,421 | 62,505,148 |
Scraping dropdown lists with Scrapy
|
<p>I am trying to scrape a dropdown list that has the following source code format using Scrapy.</p>
<pre><code> - ul>
- li>
- a> text=header_1
- nested_ul>
- nested_li> value_1
- li>
- a> text=header_2
- nested_ul>
- nested_li> value_2
- nested_li> value_3
- nested_li> value_4
- li>
- a> text=header_3
- nested_ul>
- nested_li> value_5
- nested_li> value_6
</code></pre>
<p>I am able to scrape all the headers to one list and all the values to one list, however I am unsure how to scrape the values nested, as seen below. My problem is related to python syntax rather than scraping the data, which is why I didn't include classes/ids for lists. I appreciate the help.</p>
<pre><code># Desired Output
headers_list = [h1, h2, h3]
value_list = [[v1], [v2,v3,v4], [v5,v6]]
</code></pre>
|
<p>You can loop through the selector of <code><li></code> tag and use it to get the data from there.</p>
<pre><code>headers_list = list()
value_list = list()

for li in response.xpath('//li'):
    headers_list.append(li.xpath('./a/text()').get())
    value_list.append(li.xpath('./ul/li/text()').getall())
</code></pre>
|
python|loops|scrapy
| 0 |
1,907,422 | 61,797,439 |
Joining duplicates rows
|
<p>Suppose I have a dataframe like this one:</p>
<pre><code> Date Issuer Ticker Duplicate Value
0 05/14/20 00:00:00 BARCLAYS SQ 0 NaN
1 05/11/20 00:00:00 BARCLAYS SQ 0 1
2 05/11/20 00:00:00 ARGUS TTD 0 NaN
3 05/11/20 00:00:00 ARGUS TTD 0 1
4 05/11/20 00:00:00 BARCLAYS SQ 0 NaN
</code></pre>
<p>And I want to give 'Duplicate' a value of '1' whenever an event happens twice on the same date, such as 05/11/20 BARCLAYS SQ (occurring twice), and join the two rows so that if 'Value' exists it overrides the NaNs in the other row.</p>
<p>I'll be very thankful of some help guys!</p>
<p>THX!!!</p>
<p>edit:
expected output after joining:</p>
<pre><code> Date Issuer Ticker Duplicate Value
0 05/14/20 00:00:00 BARCLAYS SQ 0 NaN
1 05/11/20 00:00:00 BARCLAYS SQ 0 1
3 05/11/20 00:00:00 ARGUS TTD 0 1
</code></pre>
|
<p>If you need to remove rows with missing values only among rows duplicated by the 3 column names, use:</p>
<pre><code>mask1 = df.duplicated(['Date','Issuer','Ticker'], keep=False)
mask2 = df['Value'].notna()
df = df[~mask1 | mask2]
print (df)
Date Issuer Ticker Duplicate Value
0 05/14/20 00:00:00 BARCLAYS SQ 0 NaN
1 05/11/20 00:00:00 BARCLAYS SQ 0 1.0
3 05/11/20 00:00:00 ARGUS TTD 0 1.0
</code></pre>
|
python|pandas|dataframe|join
| 0 |
1,907,423 | 61,716,209 |
How to find the probability from a normal probability density function in python?
|
<p>Basically, I have plotted a normal curve by using the values of mean and standard deviation. The y-axis gives the probability density. </p>
<p>How do I find the probability at a certain value "x" on the x-axis? Is there any Python function for it or how do I code it?</p>
|
<p>Not very sure if you mean the probability density function, which is:</p>
<p><a href="https://i.stack.imgur.com/cjlOn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cjlOn.png" alt="enter image description here"></a></p>
<p>given a certain mean and standard deviation. In Python you can use <code>stats.norm.fit</code> to estimate those parameters and <code>stats.norm.pdf</code> to get the density. For example, we have some data to which we fit a normal distribution:</p>
<pre><code>from scipy import stats
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt

data = stats.norm.rvs(10,2,1000)
x = np.linspace(min(data),max(data),1000)
mu, std = stats.norm.fit(data)   # norm.fit returns (loc, scale), i.e. the mean and standard deviation
p = stats.norm.pdf(x, mu, std)
</code></pre>
<p>Now we have estimated the mean and standard deviation, we use the pdf to estimate probability at for example 12.5:</p>
<pre><code>xval = 12.5
p_at_x = stats.norm.pdf(xval,mu,std)
</code></pre>
<p>We can plot to see if it is what you want:</p>
<pre><code>fig, ax = plt.subplots(1,1)
sns.distplot(data,bins=50,ax=ax)
plt.plot(x,p)
ax.hlines(p_at_x,0,xval,linestyle ="dotted")
ax.vlines(xval,0,p_at_x,linestyle ="dotted")
</code></pre>
<p><a href="https://i.stack.imgur.com/Zept7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zept7.png" alt="enter image description here"></a></p>
|
python|statistics|probability|probability-density|probability-distribution
| 4 |
1,907,424 | 67,563,041 |
Creating a list using the last element of each line in a text file
|
<p>I'm a bit of a newbie in programming. Currently I'm trying to create lists by reading a text file. Also without importing any modules.</p>
<p>I basically want to put each category into their own list.</p>
<p>This is how I started:</p>
<pre><code>def lecturer():
    lecturer_info = open('timetables.txt', 'r')
    line = lecturer_info.readline().split(',')
    while line != '':
        lecturer_info = line.split(',')
        full_name = lecturer_info[-1].split()
        print('full_name')
</code></pre>
<p>But I'm currently getting an error on the line <code>lecturer_info = line.split(',')</code>: <code>AttributeError: 'list' object has no attribute 'split'</code>.</p>
<p>Any ideas on how to fix this, or an alternative way to do it? Thanks.</p>
|
<p>The error comes from trying to use the split method on a list; split can only be used on strings. The variable "line" is assigned a list because split() returns a list.</p>
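<p>A minimal sketch of the reading loop with the split applied only once per line; the file name and the assumption that the last comma-separated field is the full name both come from the question:</p>
<pre><code>def lecturer():
    full_names = []
    with open('timetables.txt', 'r') as lecturer_info:
        for line in lecturer_info:          # read the file line by line
            line = line.strip()
            if not line:
                continue                    # skip blank lines
            fields = line.split(',')        # split the string once
            full_names.append(fields[-1])   # last field assumed to be the full name
    return full_names

print(lecturer())
</code></pre>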
|
python
| 1 |
1,907,425 | 64,490,785 |
open_mfdataset with xarray failing to find coordinates
|
<p>I'm attempting to download a bunch of GOES-16 radiance data and open it all together in xarray to analyze with the <a href="http://xarray.pydata.org/en/stable/generated/xarray.open_mfdataset.html" rel="nofollow noreferrer"><code>xr.open_mfdataset()</code></a> function. These netcdf files have a coordinate <code>t</code> that is the time stamp that I'm trying to use as a joining but I'm getting the error <code>ValueError: Could not find any dimension coordinates to use to order the datasets for concatenation</code> when I try to do this. Here is my code along with links to download two example .nc files.</p>
<p>Download two files with:</p>
<pre><code>wget https://noaa-goes16.s3.amazonaws.com/ABI-L1b-RadF/2019/141/02/OR_ABI-L1b-RadF-M6C14_G16_s20191410240370_e20191410250078_c20191410250143.nc
wget https://noaa-goes16.s3.amazonaws.com/ABI-L1b-RadF/2019/141/03/OR_ABI-L1b-RadF-M6C14_G16_s20191410310370_e20191410320078_c20191410320142.nc
</code></pre>
<p>And the code:</p>
<pre><code>import xarray as xr
ds_sst = xr.open_mfdataset("OR_ABI-L1b-RadF*nc", concat_dim='t',combine='by_coords')
</code></pre>
<p>Is there anything I can do to make this work so I can open a couple dozen of these files together?</p>
|
<p>Use <code>combine='nested'</code> instead.</p>
<p>From the Xarray <a href="http://xarray.pydata.org/en/stable/generated/xarray.combine_by_coords.html?highlight=combine_by_coords" rel="noreferrer">documentation</a> on combining by coords:</p>
<blockquote>
<p>Attempt to auto-magically combine the given datasets into one by using
dimension coordinates.</p>
</blockquote>
<p>'t' is not a dimension coordinate, so the xarray magic doesn't work in this case, because xarray's <code>combine_by_coords</code> looks for matching dimension coordinates between the imported netcdfs.</p>
<p>In this case you need to be more specific: use <code>combine = 'nested'</code> and specify the new dimension name with <code>concat_dim='t'</code>. As there is already a coordinate named 't' xarray will automatically promote it to dimension coordinate.</p>
<p><code>ds_sst = xr.open_mfdataset("OR_ABI-L1b-RadF*nc", concat_dim='t', combine='nested')</code></p>
<p>The resulting dataset looks like this.</p>
<pre><code><xarray.Dataset>
Dimensions: (band: 1, num_star_looks: 24, number_of_image_bounds: 2, number_of_time_bounds: 2, t: 2, x: 5424, y: 5424)
Coordinates:
band_wavelength_star_look (num_star_looks) float32 dask.array<chunksize=(24,), meta=np.ndarray>
x_image float32 0.0
y_image float32 0.0
band_wavelength (band) float32 dask.array<chunksize=(1,), meta=np.ndarray>
band_id (band) int8 dask.array<chunksize=(1,), meta=np.ndarray>
t_star_look (num_star_looks) datetime64[ns] dask.array<chunksize=(24,), meta=np.ndarray>
* y (y) float32 0.151844 ... -0.151844
* x (x) float32 -0.151844 ... 0.151844
* t (t) datetime64[ns] 2019-05-21T02:45:22.400760064 2019-05-21T03:15:22.406056960
Dimensions without coordinates: band, num_star_looks, number_of_image_bounds, number_of_time_bounds
Data variables:
Rad (t, y, x) float32 dask.array<chunksize=(1, 5424, 5424), meta=np.ndarray>
DQF (t, y, x) float32 dask.array<chunksize=(1, 5424, 5424), meta=np.ndarray>
time_bounds (t, number_of_time_bounds) datetime64[ns] dask.array<chunksize=(1, 2), meta=np.ndarray>
goes_imager_projection (t) int32 -2147483647 -2147483647
y_image_bounds (t, number_of_image_bounds) float32 dask.array<chunksize=(1, 2), meta=np.ndarray>
x_image_bounds (t, number_of_image_bounds) float32 dask.array<chunksize=(1, 2), meta=np.ndarray>
nominal_satellite_subpoint_lat (t) float64 0.0 0.0
nominal_satellite_subpoint_lon (t) float64 -75.2 -75.2
nominal_satellite_height (t) float64 3.579e+04 3.579e+04
geospatial_lat_lon_extent (t) float32 9.96921e+36 9.96921e+36
yaw_flip_flag (t) float64 0.0 0.0
esun (t) float64 nan nan
kappa0 (t) float64 nan nan
planck_fk1 (t) float64 8.51e+03 8.51e+03
planck_fk2 (t) float64 1.286e+03 1.286e+03
planck_bc1 (t) float64 0.2252 0.2252
planck_bc2 (t) float64 0.9992 0.9992
valid_pixel_count (t) float64 2.305e+07 2.305e+07
missing_pixel_count (t) float64 268.0 290.0
saturated_pixel_count (t) float64 0.0 0.0
undersaturated_pixel_count (t) float64 0.0 0.0
focal_plane_temperature_threshold_exceeded_count (t) float64 0.0 0.0
min_radiance_value_of_valid_pixels (t) float64 8.217 8.472
max_radiance_value_of_valid_pixels (t) float64 125.5 123.2
mean_radiance_value_of_valid_pixels (t) float64 82.01 81.96
std_dev_radiance_value_of_valid_pixels (t) float64 24.64 24.53
maximum_focal_plane_temperature (t) float64 62.12 62.12
focal_plane_temperature_threshold_increasing (t) float64 81.0 81.0
focal_plane_temperature_threshold_decreasing (t) float64 81.0 81.0
percent_uncorrectable_L0_errors (t) float64 0.0 0.0
earth_sun_distance_anomaly_in_AU (t) float64 1.012 1.012
algorithm_dynamic_input_data_container (t) int32 -2147483647 -2147483647
processing_parm_version_container (t) int32 -2147483647 -2147483647
algorithm_product_version_container (t) int32 -2147483647 -2147483647
star_id (t, num_star_looks) float32 dask.array<chunksize=(1, 24), meta=np.ndarray>
Attributes:
naming_authority: gov.nesdis.noaa
Conventions: CF-1.7
Metadata_Conventions: Unidata Dataset Discovery v1.0
standard_name_vocabulary: CF Standard Name Table (v35, 20 July 2016)
institution: DOC/NOAA/NESDIS > U.S. Department of Commerce,...
project: GOES
production_site: WCDAS
production_environment: OE
spatial_resolution: 2km at nadir
orbital_slot: GOES-East
platform_ID: G16
instrument_type: GOES R Series Advanced Baseline Imager
scene_id: Full Disk
instrument_ID: FM1
title: ABI L1b Radiances
summary: Single emissive band ABI L1b Radiance Products...
keywords: SPECTRAL/ENGINEERING > INFRARED WAVELENGTHS > ...
keywords_vocabulary: NASA Global Change Master Directory (GCMD) Ear...
iso_series_metadata_id: a70be540-c38b-11e0-962b-0800200c9a66
license: Unclassified data. Access is restricted to ap...
processing_level: National Aeronautics and Space Administration ...
cdm_data_type: Image
dataset_name: OR_ABI-L1b-RadF-M6C14_G16_s20191410240370_e201...
production_data_source: Realtime
timeline_id: ABI Mode 6
date_created: 2019-05-21T02:50:14.3Z
time_coverage_start: 2019-05-21T02:40:37.0Z
time_coverage_end: 2019-05-21T02:50:07.8Z
id: abb3657a-03c0-47a9-a1ba-f3196c07c5a9
</code></pre>
<p>Alternatively, you can define a function that promotes the coordinate 't' to a dimension coordinate and pass it to the <code>preprocess</code> argument in <code>open_mfdataset</code>. This function is applied to every imported NetCDF before it's concatenated with the others.</p>
<pre><code>def preprocessing(ds):
return ds.expand_dims(dim='t')
ds_sst = xr.open_mfdataset("OR_ABI-L1b-RadF*nc", concat_dim='t',combine='by_coords', preprocess = preprocessing)
</code></pre>
<p>The result is the same as above.</p>
|
python|netcdf|python-xarray
| 5 |
1,907,426 | 11,441,323 |
Python install packages
|
<p>How do I install packages, specifically <a href="http://klappnase.bubble.org/TkinterTreectrl" rel="nofollow">http://klappnase.bubble.org/TkinterTreectrl</a>? Do I drag the file somewhere or install using the terminal? Also, how do I install from the terminal if that's what I need to do?</p>
|
<p>If you're on Ubuntu you should just use the Software Center. </p>
<pre><code>sudo apt-get install {package-name}
</code></pre>
<p>Should be easy as that. Also yes, this should be from the terminal.</p>
|
python|installation|package
| 1 |
1,907,427 | 70,582,934 |
'NoneType' object has no attribute 'send'?
|
<p>The code is below</p>
<pre><code>@client.event
async def on_member_join(Member: discord.member):
# Welcome message I guess
channel = discord.utils.get(Member.guild.channels, name='☭│chat')
embed = discord.Embed(
title="A new member has joined the server :grin:",
description=f"Thanks for joining {Member.mention} \n Hope you brought pizza!!",
)
embed.set_image(url=Member.avatar_url)
try:
await channel.send(embed=embed)
finally:
await channel.send(":skull:")
</code></pre>
<p>The error is described in the title, the welcome message really never seems to work because the channel is apparently a "None" object, is the issue the channel declaration?</p>
|
<p>I am assuming you are using the <code>discordpy</code> module.
According to its <a href="https://discordpy.readthedocs.io/en/stable/api.html#discord.utils.get" rel="nofollow noreferrer">documentation</a>, the <code>discord.utils.get()</code> function will return <code>None</code> if "nothing is found that matches the attributes passed".</p>
<p>This is occurring in your code, and so the later <code>channel.send()</code> fails. To solve this, consider adding an <code>except AttributeError as e</code> clause to your (conveniently placed) try-finally statement, and within it check whether <code>channel</code> is None.</p>
<pre class="lang-py prettyprint-override"><code>try:
await channel.send(embed=embed)
except AttributeError as e:
if channel is None: # None is a singleton (unique) object, so all none instances satisfy this 'is' expression.
... you can now know that no channel could be attained
</code></pre>
<p>@Barmar correctly said that the <code>finally</code> statement will be executed even after the exception is caught (and so an error will still occur). This can be circumvented by encapsulating the contents of the <code>finally</code> clause in another <code>try-except</code> clause pair [though this is somewhat unattractive].</p>
|
python|discord.py
| 1 |
1,907,428 | 63,718,450 |
Compare 2 dfs and append values in df1 based on query in df2
|
<p>I have 2 dataframes. For each iteration of fetching data from the db (into df2), I want to take the values of price_1, price_2, price_3, price_4 from df2 and append them to df1 where df1.id = df2.id and df1.name = df2.name.</p>
<p>df1:</p>
<pre><code>id name tag price_1 price_2 price_3 price_4
1 a a1
1 b b1
1 c c1
2 x d1
2 y e1
2 z a1
</code></pre>
<p>df2(results form db):</p>
<pre><code>1st iteration
id name tag price_1 price_2 price_3 price_4 discount
1 a x1 10 11 12 11 Y
1 b x2 11 44 22 55 Y
1 c x3 76 56 45 34 N
2nd iteration
id name tag price_1 price_2 price_3 price_4 discount
2 x x2 10 11 12 11 N
2 y x5 11 44 22 55 Y
2 z x6 76 56 45 34 N
</code></pre>
<p>output:</p>
<pre><code>df1 (after 1st iteration)
id name tag price_1 price_2 price_3 price_4
1 a a1 10 11 12 11
1 b b1 11 44 22 55
1 c c1 76 56 45 34
2 x
2 y
2 z
df1 (after 2nd iteration)
id name tag price_1 price_2 price_3 price_4
1 a a1 10 11 12 11
1 b b1 11 44 22 55
1 c c1 76 56 45 34
2 x d1 10 11 12 11
2 y e1 11 44 22 55
2 z a1 76 56 45 34
</code></pre>
<p>loop:</p>
<pre><code>grouped = df1.groupby('id')
for i,groups in grouped:
df2 = sql(i) #goes to sql to fetch details for df1.id
sql_df = df2.name.unique()
dd = groups.name
if (set(sql_df) == set(sql_df) & set(dd)) & (set(dd) == set(sql_df) & set(dd)):
print ("ID:", i, "Names Match: Y")
for df2 in iter:
df4 = pd.DataFrame()
df_temp = df1[['id', 'name']].merge(df2, on = ['id', 'name'])
df4 = df4.append(df_temp, ignore_index = True)
else:
print("ID:", i, "Names Match: N")
</code></pre>
<p>I don't need the <code>tag</code> and <code>discount</code> columns from <code>df2</code>; I just need to compare whether <code>name</code> is equal in both df1 and df2. If yes, then take all of price_1/2/3/4.</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html" rel="nofollow noreferrer"><code>DataFrame.append</code></a> in each iteration to append only the merged rows (merge performs an inner join by default):</p>
<pre><code>df4 = pd.DataFrame()
grouped = df1.groupby('id')
for i,groups in grouped:
df2 = sql(i) #goes to sql to fetch details for df1.id
sql_df = df2.name.unique()
dd = groups.name
if (set(sql_df) == set(sql_df) & set(dd)) & (set(dd) == set(sql_df) & set(dd)):
print ("ID:", i, "Names Match: Y")
df_temp = (df1[['id', 'name', 'tag']].merge(df2.drop('tag', axis=1),
on = ['id', 'name']))
df4 = df4.append(df_temp, ignore_index = True)
else:
print("ID:", i, "Names Match: N")
</code></pre>
|
python|pandas|dataframe
| 0 |
1,907,429 | 63,656,759 |
How to extract only contours that look like handwritting?
|
<p>I would like to <strong>extract only those contours that belong to the handwriting</strong> on this image:</p>
<p><a href="https://i.stack.imgur.com/lE1Pt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lE1Pt.jpg" alt="enter image description here" /></a></p>
<p>Here is a code to extract the contours:</p>
<pre><code>import cv2
import matplotlib.pyplot as plt
img = cv2.imread(PATH TO FRAME)
print("img shape=", img.shape)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imshow("image", gray)
cv2.waitKey(1)
#### extract all contours
# Find Canny edges
edged = cv2.Canny(gray, 30, 200)
cv2.waitKey(0)
# Finding Contours
# Use a copy of the image e.g. edged.copy()
# since findContours alters the image
contours, hierarchy = cv2.findContours(edged,
cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cv2.imshow('Canny Edges After Contouring', edged)
cv2.waitKey(0)
print("Number of Contours found = " + str(len(contours)))
# Draw all contours
# -1 signifies drawing all contours
cv2.drawContours(img, contours, -1, (0, 255, 0), 3)
cv2.imshow('Contours', img)
cv2.waitKey(0)
</code></pre>
<p><a href="https://i.stack.imgur.com/nxE6i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nxE6i.png" alt="enter image description here" /></a></p>
|
<p>Are you asking for this image specifically or for a generic image (of which this happens to be one example) - do you have the image before the working was written?</p>
<p>For this image, you could determine where the handwriting is likely to be (the large lower-left rectangle) and then extract only those contours that fall inside that area.
You could determine this rectangle by processing the image before and after any writing, or by knowing that the bottom-left area contains the working.</p>
<p>Another option: you could subtract the before image from the after image to leave only what has been added.</p>
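<p>A rough sketch of the subtraction idea (assuming <code>before</code> and <code>after</code> are grayscale images of the same size; the threshold value of 30 is just a starting point to tune):</p>
<pre><code>import cv2

# the difference highlights only what was added (the handwriting)
diff = cv2.absdiff(after, before)
_, writing_mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(writing_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
</code></pre>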
<p>I realise that whilst they don't actually detect handwriting from the contours, these may achieve the same goal - the more copperplate the handwriting the harder it would be to detect. Pre-processing or obtaining the Region of Interest beforehand may make the contour problem much simpler.</p>
|
python|image-processing
| 0 |
1,907,430 | 63,625,592 |
insert/append value to pandas column at specific location without replacing existing values
|
<p>I want conditional add to "remark" column without overwriting it.
I have following dataframe.</p>
<pre><code>remark Rule1 Rule2
Banana False False
Apple True False
Orange False True
Kiwi True True
</code></pre>
<p>if Rule1 == True then Red; if Rule2 == True then Yellow</p>
<p>I have written below code but it is overwriting existing value.</p>
<pre><code>data.loc[data['Rule1']==True,"remark"] = "Red"
data.loc[data['Rule2']==True,"remark"] = "Yellow"
</code></pre>
<p>The expected output should be something like this:</p>
<pre><code>remark Rule1 Rule2
Banana False False
Apple, Red True False
Orange,Yellow False True
Kiwi, Red, Yellow True True
</code></pre>
|
<p>Let us try <code>dot</code></p>
<pre><code>df.remark = df.remark + ',' + df[['Rule1','Rule2']].dot(pd.Index(['Red,','Yellow,']))
df.remark = df.remark.str[:-1]
df
Out[88]:
remark Rule1 Rule2
0 Banana False False
1 Apple,Red True False
2 Orange,Yellow False True
3 Kiwi,Red,Yellow True True
</code></pre>
|
python|pandas|dataframe
| 3 |
1,907,431 | 56,489,472 |
What tool in spaCy to use for recognising company name from ticker symbol?
|
<p>I'm trying to do sentiment analysis on financial news, and I want to be able to recognise companies based on the ticker symbol. Eg. recognise Spotify from SPOT. The final objective would be to generate sentiment models of each company.
spaCy is pretty good at named entity recognition out of the box but it falls short when comparing ticker symbol and company. I have a list of ticker symbol and company names (from NASDAQ, NYSE, AMEX) in csv format.</p>
<p>Based on using the similarity() function in spaCy, the results aren't good so far. The table below shows a sample of a few companies which have a low similarity score, even though the names are similar visually. I want to train the model using the list of company names/ticker symbols, and have a higher similarity score after this training process.</p>
<pre><code>+------------+-------------------------+------------+
| Stock | Name | Similarity |
+------------+-------------------------+------------+
| CSPI stock | CSP Inc. | 0.072 |
| CHGG stock | Chegg, Inc. | 0.071 |
| QADA stock | QAD Inc. | 0.065 |
| SPOT stock | Spotify Technology S.A. | 0.064 |
+------------+-------------------------+------------+
</code></pre>
<p>Based on spaCy's documentation, some tools include using <a href="https://spacy.io/api/phrasematcher/" rel="nofollow noreferrer">PhraseMatcher</a>, <a href="https://spacy.io/api/entityruler" rel="nofollow noreferrer">EntityRuler</a>, <a href="https://spacy.io/api/matcher" rel="nofollow noreferrer">Rule-based matching</a>, Token Matcher. Which one would be most suited for this use case?</p>
|
<p>My recommendation would be to not try to match the ticker symbol to the company name, but the company name in the text to the company name you have in the CSV. You will get <strong>MUCH</strong> better results.</p>
<p>For the fuzzy match, I would recommend using the Levenshtein algorithm; an example here:
<a href="https://stackoverflow.com/questions/8518695/t-sql-get-percentage-of-character-match-of-2-strings">T-SQL Get percentage of character match of 2 strings</a></p>
<p>For a Python Levenshtein, I would recommend this:
<a href="https://github.com/ztane/python-Levenshtein/#documentation" rel="nofollow noreferrer">https://github.com/ztane/python-Levenshtein/#documentation</a></p>
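<p>A quick sketch of how that library could be used for the fuzzy match (the names are just examples from your table):</p>
<pre><code># pip install python-Levenshtein
import Levenshtein

# ratio() returns a similarity score between 0 and 1
print(Levenshtein.ratio("Spotify Technology S.A.", "Spotify Technology SA"))  # near 1.0 for near-identical names
print(Levenshtein.ratio("CSP Inc.", "Chegg, Inc."))                           # noticeably lower for different companies
</code></pre>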
<p>I've personally used <code>EntityRuler</code> with a combination of JSONL rule sets.</p>
<p>But you will have to bring your own data.
You need a DB with ticker symbols and company names.</p>
<pre><code>nlp = spacy.load('en_core_web_lg')
stock_symbol_shapes_ruler = EntityRuler(nlp)
stock_symbol_shapes_ruler.name="stock_symbol_shapes_ruler"
patterns_stock_symbol_shapes = [
{"label": "ORG", "pattern": "NASDAQ"},
{"label": "STOCK_SYMBOL", "pattern": [{"SHAPE": "XXX.X"}]},
{"label": "STOCK_SYMBOL", "pattern": [{"SHAPE": "XXXX.X"}]},
]
stock_symbol_shapes_ruler.add_patterns(patterns_stock_symbol_shapes)
nlp.add_pipe(stock_symbol_shapes_ruler, before='ner')
stock_symbol_ruler = EntityRuler(nlp).from_disk("./stock_symbol_pattern.jsonl")
stock_symbol_ruler.name = 'stock_symbol_ruler'
nlp.add_pipe(stock_symbol_ruler, before='ner')
company_name_ruler = EntityRuler(nlp).from_disk("./company_name_patterns.jsonl")
company_name_ruler.name="company_name_ruler"
nlp.add_pipe(company_name_ruler, before='ner')
doc = nlp(test_text)
</code></pre>
<p>The files are generated using SQL</p>
<pre><code>{"label": "STOCK_SYMBOL", "pattern": "AAON"}
{"label": "STOCK_SYMBOL", "pattern": "AAP"}
{"label": "STOCK_SYMBOL", "pattern": "AAPL"}
{"label": "STOCK_SYMBOL", "pattern": "AAVL"}
{"label": "STOCK_SYMBOL", "pattern": "AAWW"}
{"label": "ORG", "pattern": "AMAG Pharmaceuticals"}
{"label": "ORG", "pattern": "AMAG Pharmaceuticals Inc"}
{"label": "ORG", "pattern": "AMAG Pharmaceuticals Inc."}
{"label": "ORG", "pattern": "AMAG Pharmaceuticals, Inc."}
{"label": "ORG", "pattern": "Amarin"}
{"label": "ORG", "pattern": "Amarin Corporation plc"}
{"label": "ORG", "pattern": "Amazon.com Inc."}
{"label": "ORG", "pattern": "Amazon Inc"}
{"label": "ORG", "pattern": "Amazonm"}
</code></pre>
|
python|spacy
| 3 |
1,907,432 | 56,656,349 |
Compare two dataframes based off of key
|
<p>I have two dataframes, df1 and df2, which have the exact same columns and most of the time the same values for each key.</p>
<blockquote>
<pre><code>Country A B C D E F G H Key
Argentina xylo 262 4632 0 0 26.12 2 0 Argentinaxylo
Argentina phone 6860 155811 48 0 4375.87 202 0 Argentinaphone
Argentina land 507 1803728 2 117 7165.810566 3 154 Argentinaland
Australia xylo 7650 139472 69 0 16858.42 184 0 Australiaxylo
Australia mink 1284 2342788 1 0 39287.71 53 0 Australiamink
Country A B C D E F G H Key
Argentina xylo 262 4632 0 0 26.12 2 0 Argentinaxylo
Argentina phone 6860 155811 48 0 4375.87 202 0 Argentinaphone
Argentina land 507 1803728 2 117 7165.810566 3 154 Argentinaland
Australia xylo 7650 139472 69 0 16858.42 184 0 Australiaxylo
Australia mink 1284 2342788 1 0 39287.71 53 0 Australiamink
</code></pre>
</blockquote>
<p>I want a snippet that compares the keys (key = column Country + column A) in each dataframe against each other and calculates the percent difference for each column B-H, if there is any. If there isn't, output nothing.</p>
|
<p>I hope the code given below helps you solve your problem. I compare both datasets based on the Key column and generate the difference of their (B-H) columns. Then, with that difference expressed as a percentage, I merge the two datasets on the Key column, compare the differences and produce the final output in the df3diff column of the df3 dataset.</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame([['Argentina', 'xylo', 262 ,4632, 0 , 0 , 26.12 , 2 , 0 , 'Argentinaxylo']
,['Argentina', 'phone',6860,155811 , 48 , 0 ,4375.87 ,202, 0 , 'Argentinaphone']
,['Argentina', 'land', 507 ,1803728, 2 , 117 ,7165.810,566, 3 , '154 Argentinaland']
,['Australia', 'xylo', 7650,139472 , 69 , 0 ,16858.42,184, 0 , 'Australiaxylo']
,['Australia', 'mink', 1284,2342788, 1 , 0 ,39287.71, 53, 0 , 'Australiamink']]
,columns=['Country', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'Key'])
df1['df1BH'] = (df1['B']-df1['H'])/100.00
print(df1)
df2 = pd.DataFrame([['Argentina', 'xylo', 262 ,4632 , 0 , 0 ,26.12 ,2 , 0 ,'Argentinaxylo']
,['Argentina', 'phone',6860,155811 , 48, 0 ,4375.87 ,202, 0 ,'Argentinaphone']
,['Argentina', 'land', 507 ,1803728, 2 , 117 ,7165.810,566, 3 ,'154 Argentinaland']
,['Australia', 'xylo', 97650,139472 , 69, 0 ,96858.42,184, 0 ,'Australiaxylo']
,['Australia', 'mink', 1284,2342788, 1 , 0 ,39287.71, 53, 0 ,'Australiamink']]
,columns=['Country', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'Key'])
df2['df2BH'] = (df2['B']-df2['H'])/100.00
print(df2)
df3 = pd.merge(df1[['Key','df1BH']],df2[['Key','df2BH']], on=['Key'],how='outer')
df3['df3diff'] = df3['df1BH'] - df3['df2BH']
print(df3)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> Key df1BH df2BH df3diff
0 Argentinaxylo 2.62 2.62 0.0
1 Argentinaphone 68.60 68.60 0.0
2 154 Argentinaland 5.04 5.04 0.0
3 Australiaxylo 76.50 976.50 -900.0
4 Australiamink 12.84 12.84 0.0
</code></pre>
|
python|python-3.x|pandas|dataframe|analytics
| 0 |
1,907,433 | 69,681,830 |
PyTest and controlled test failure for GitHub Workflow/Automation
|
<p>Although I've been using Python for a number of years now, I realised that working predominantly on personal projects, I never needed to do Unit testing before, so apologies for the obvious questions or wrong assumptions I might make.
My goal is to understand how I can make tests and possibly combine everything with the GitHub workflow to create some automation.
I've seen Failures/Errors (which are conceptually different) thrown locally are not treated differently once online. But before I go, I have some doubts that I want to clarify.</p>
<p>From reading online, my initial understanding seems to be that a test should always SUCCEED, even if it contains errors or failures.
But if it succeeds, how can I then record a failure or an error? So I'm tempted to say I'm capturing this in the wrong way?
I appreciate that in an Agile environment, some would like to say it's a controlled process, and errors can be intercepted while looking into the code. But I'm not sure this is the best approach.<br>And this leads me to the second question.</p>
<p>Say I have a function accepting dates, and I know that it cannot accept anything else than that.</p>
<ol>
<li>Would it make sense to do a test to say pass in strings (and get
a failure)?</li>
<li>Or should I test only for the expected circumstances?</li>
</ol>
<p>Say case 1) is a best practice; what should I do in the context of running these tests? Should I let the test fail and get a long list of errors? Or should I decorate functions with a <code>@pytest.mark.xfail()</code> (a sort of Soft fail, where I can use a try ... catch)?</p>
<p>And last question (for now): would an xfail decorator let the workflow automation consider the test as "passed". Probably not, but at this stage, I've so much confusion in my head that any clarity from experienced users could help.</p>
<p>Thanks for your patience in reading.</p>
|
<p>The question is a bit fuzzy, but I will have a shot.</p>
<ol>
<li><p>The notion that tests should always succeed even if they have errors is probably a misunderstanding. Failing tests are errors and should be shown as such (with the exception of tests known to fail, but that is a special case, see below). From the comment I guess what was actually meant was that other tests shall continue to run, even if one test failed - that certainly makes sense, especially in CI tests, where you want to get the whole picture.</p>
</li>
<li><p>If you have a function accepting dates and nothing else, it shall be tested that it indeed only accepts dates, and raises an exception or something in the case an invalid date is given. What I meant in the comment is if your software ensures that only a date can be passed to that function, and this is also ensured via tests, it would not be needed to test this again, but in general - yes, this should be tested.</p>
</li>
</ol>
<p>So, to give a few examples: if your function is specified to raise an exception on invalid input, this has to be tested using something like <code>pytest.raises</code> - it would fail, if no exception is raised. If your function shall handle invalid dates by logging an error, the test shall verify that the error is logged. If an invalid input should just be ignored, the test shall ensure that no exception is raised and the state does not change.</p>
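<p>A minimal sketch of the first case (the function name <code>process_date</code> and the <code>ValueError</code> are just assumptions about your code):</p>
<pre class="lang-py prettyprint-override"><code>import pytest

def test_rejects_invalid_input():
    # passes exactly when the exception is raised, fails otherwise
    with pytest.raises(ValueError):
        process_date("not a date")
</code></pre>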
<ol start="3">
<li>For <code>xfail</code>, I just refer you to the <a href="https://docs.pytest.org/en/6.2.x/skipping.html" rel="nofollow noreferrer">pytest documentation</a>, where this is described nicely:</li>
</ol>
<blockquote>
<p>An xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it’s an xpass and will be reported in the test summary.</p>
</blockquote>
<p>So a passing <code>xfail</code> test will be shown as passed indeed. You can easily test this yourself:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.mark.xfail
def test_fails():
assert False
@pytest.mark.xfail
def test_succeeds():
assert True
</code></pre>
<p>gives something like:</p>
<pre><code>============================= test session starts =============================
collecting ... collected 2 items
test_xfail.py::test_fails
test_xfail.py::test_succeeds
======================== 1 xfailed, 1 xpassed in 0.35s ========================
</code></pre>
<p>and the test is considered passed (e.g. has the exit code 0).</p>
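<p>If, for CI automation, you would rather have an unexpectedly passing <code>xfail</code> test count as a failure, you can use the <code>strict</code> flag:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.mark.xfail(strict=True)
def test_succeeds():
    assert True  # reported as XPASS(strict) and makes the run fail (non-zero exit code)
</code></pre>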
|
python|unit-testing|pytest
| 1 |
1,907,434 | 61,178,521 |
What is the proper way to benchmark part of tensorflow graph?
|
<p>I want to benchmark some part of a graph; for simplicity I use a <code>conv_block</code> here that is just a 3x3 convolution.</p>
<ol>
<li>Is it ok that the <code>x_np</code> used in the loop is the same, or do I need to regenerate it each time?</li>
<li>Do I need to do some 'warm up' run before running the actual benchmark (this seems to be needed when benchmarking on GPU)? How do I do it properly? Is <code>sess.run(tf.global_variables_initializer())</code> enough?</li>
<li>What is the proper way of measuring time in Python, i.e. a more precise method?</li>
<li>Do I need to reset some system cache on Linux before running the script (maybe disabling np.random.seed is sufficient)?</li>
</ol>
<p>Example code:</p>
<pre><code>import os
import time
import numpy as np
import tensorflow as tf
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
np.random.seed(2020)
def conv_block(x, kernel_size=3):
# Define some part of graph here
bs, h, w, c = x.shape
in_channels = c
out_channels = c
with tf.variable_scope('var_scope'):
w_0 = tf.get_variable('w_0', [kernel_size, kernel_size, in_channels, out_channels], initializer=tf.contrib.layers.xavier_initializer())
x = tf.nn.conv2d(x, w_0, [1, 1, 1, 1], 'SAME')
return x
def get_data_batch(spatial_size, n_channels):
bs = 1
h = spatial_size
w = spatial_size
c = n_channels
x_np = np.random.rand(bs, h, w, c)
x_np = x_np.astype(np.float32)
#print('x_np.shape', x_np.shape)
return x_np
def run_graph_part(f_name, spatial_size, n_channels, n_iter=100):
print('=' * 60)
print(f_name.__name__)
tf.reset_default_graph()
with tf.Session() as sess:
x_tf = tf.placeholder(tf.float32, [1, spatial_size, spatial_size, n_channels], name='input')
z_tf = f_name(x_tf)
sess.run(tf.global_variables_initializer())
x_np = get_data_batch(spatial_size, n_channels)
start_time = time.time()
for _ in range(n_iter):
z_np = sess.run(fetches=[z_tf], feed_dict={x_tf: x_np})[0]
avr_time = (time.time() - start_time) / n_iter
print('z_np.shape', z_np.shape)
print('avr_time', round(avr_time, 3))
n_total_params = 0
for v in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='var_scope'):
n_total_params += np.prod(v.get_shape().as_list())
print('Number of parameters:', format(n_total_params, ',d'))
if __name__ == '__main__':
run_graph_part(conv_block, spatial_size=128, n_channels=32, n_iter=100)
</code></pre>
|
<p>Adding to Steve's excellently explained <a href="https://stackoverflow.com/a/61432059/2478346">answer</a>, the following worked for me on TensorFlow-GPU v2.3:</p>
<pre><code>import tensorflow as tf
tf.config.experimental.set_memory_growth(tf.config.experimental.list_physical_devices('GPU')[0], True)
import os
import time
import numpy as np
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
np.random.seed(2020)
def conv_block(x, kernel_size=3):
# Define some part of graph here
bs, h, w, c = x.shape
in_channels = c
out_channels = c
with tf.compat.v1.variable_scope('var_scope'):
w_0 = tf.compat.v1.get_variable('w_0', [kernel_size, kernel_size, in_channels, out_channels], initializer=tf.keras.initializers.glorot_normal())
x = tf.nn.conv2d(x, w_0, [1, 1, 1, 1], 'SAME')
return x
def get_data_batch(spatial_size, n_channels):
bs = 1
h = spatial_size
w = spatial_size
c = n_channels
x_np = np.random.rand(bs, h, w, c)
x_np = x_np.astype(np.float32)
#print('x_np.shape', x_np.shape)
return x_np
def run_graph_part(f_name, spatial_size, n_channels, n_iter=100):
print('=' * 60)
print(f_name.__name__)
# tf.reset_default_graph()
tf.compat.v1.reset_default_graph()
with tf.compat.v1.Session() as sess:
x_tf = tf.compat.v1.placeholder(tf.float32, [1, spatial_size, spatial_size, n_channels], name='input')
z_tf = f_name(x_tf)
sess.run(tf.compat.v1.global_variables_initializer())
x_np = get_data_batch(spatial_size, n_channels)
start_time = time.time()
for _ in range(n_iter):
z_np = sess.run(fetches=[z_tf], feed_dict={x_tf: x_np})[0]
avr_time = (time.time() - start_time) / n_iter
print('z_np.shape', z_np.shape)
print('avr_time', round(avr_time, 3))
n_total_params = 0
for v in tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES, scope='var_scope'):
n_total_params += np.prod(v.get_shape().as_list())
print('Number of parameters:', format(n_total_params, ',d'))
# USING TENSORFLOW BENCHMARK
benchmark = tf.test.Benchmark()
results = benchmark.run_op_benchmark(sess=sess, op_or_tensor=z_tf,
feed_dict={x_tf: x_np}, burn_iters=2, min_iters=n_iter,
store_memory_usage=False, name='example')
return results
if __name__ == '__main__':
results = run_graph_part(conv_block, spatial_size=512, n_channels=32, n_iter=100)
</code></pre>
<p>Which in my case will output something like -</p>
<pre><code>============================================================
conv_block
z_np.shape (1, 512, 512, 32)
avr_time 0.072
Number of parameters: 9,216
entry {
name: "TensorFlowBenchmark.example"
iters: 100
wall_time: 0.049364686012268066
}
</code></pre>
|
python|linux|tensorflow|time|benchmarking
| 7 |
1,907,435 | 66,036,216 |
Regular expression to match all Python definitions in a module
|
<p>I have a Python module and I want to catch all its public methods, but no private or child methods, for example:</p>
<pre><code>def Func1(c, v1=None, v2=None, v3=None):
def childFunc(c): # I don't want to catch this
</code></pre>
<p>This is my regular expression:</p>
<pre><code>/def ([a-zA-Z]\w+\([\w, =]+\)):/
</code></pre>
<p>However, this also matches the child methods, and it doesn't work on definitions that span multiple lines:</p>
<pre><code>def Func2(c, v1=None, v2=None, v3=None, \
v4=None, v5=None):
</code></pre>
|
<p>Use</p>
<pre class="lang-py prettyprint-override"><code>(?m)^def ([a-zA-Z]\w*)\(([^()]*)\):
</code></pre>
<p>See <a href="https://regex101.com/r/4xHy7Q/1" rel="nofollow noreferrer">proof</a></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>NODE</th>
<th>EXPLANATION</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>^</code></td>
<td>the beginning of the string</td>
</tr>
<tr>
<td><code>def<SPACE></code></td>
<td>'def '</td>
</tr>
<tr>
<td><code>(</code></td>
<td>group and capture to \1:</td>
</tr>
<tr>
<td><code>[a-zA-Z]</code></td>
<td>any character of: 'a' to 'z', 'A' to 'Z'</td>
</tr>
<tr>
<td><code>\w*</code></td>
<td>word characters (a-z, A-Z, 0-9, _) (0 or more times (matching the most amount possible))</td>
</tr>
<tr>
<td><code>)</code></td>
<td>end of \1</td>
</tr>
<tr>
<td><code>\(</code></td>
<td>'('</td>
</tr>
<tr>
<td><code>(</code></td>
<td>group and capture to \2:</td>
</tr>
<tr>
<td><code>[^()]*</code></td>
<td>any character except: '(', ')' (0 or more times (matching the most amount possible))</td>
</tr>
<tr>
<td><code>)</code></td>
<td>end of \2</td>
</tr>
<tr>
<td><code>\)</code></td>
<td>')'</td>
</tr>
<tr>
<td><code>:</code></td>
<td>':'</td>
</tr>
</tbody>
</table>
</div>
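<p>A small usage sketch (the module path is hypothetical):</p>
<pre><code>import re

pattern = re.compile(r'(?m)^def ([a-zA-Z]\w*)\(([^()]*)\):')
with open("my_module.py") as f:
    source = f.read()
for match in pattern.finditer(source):
    print(match.group(1), "->", match.group(2))  # function name and its parameter list
</code></pre>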
|
python|regex
| 2 |
1,907,436 | 66,211,842 |
unable to contact snap store error message when trying to install PyCharm on Ubuntu
|
<p>My Ubuntu 20.04 is connected to the Internet, and I used the update and upgrade commands on the machine, so it's up to date now. But when I type <code>sudo snap install pycharm-community --classic</code> to install the PyCharm Community version, it returns this error: <em>error: unable to contact snap store</em>.<br />
What should I do next, please?</p>
<p>I also searched for the program on Ubuntu Software, but it can't find the app either!</p>
|
<p>After struggling too much with snap and Ubuntu Software and facing failure with each, it was possible and very easy to download and install PyCharm Community using the <a href="https://www.jetbrains.com/help/pycharm/installation-guide.html#toolbox" rel="nofollow noreferrer">Toolbox App</a>.</p>
|
python|ubuntu|pycharm
| 0 |
1,907,437 | 66,046,297 |
Pandas weekly schedule including holidays
|
<p>I would like to make a weekly schedule/calendar to list every Friday between two dates.
If the Friday falls on a holiday, then the date shown should be the previous working day (a Thursday, unless that is also a holiday, in which case the Wednesday...).</p>
<p>I have tried a few things:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
local_holidays = ['2021-01-08']
my_bank_holidays = pd.tseries.offsets.CustomBusinessDay(
holidays=local_holidays,
weekmask='Fri')
reporting_date = pd.date_range('2021-01-01', '2021-01-31', freq=my_bank_holidays)
pd.DataFrame(reporting_date)
</code></pre>
<p>Results (below) - This skips Friday the 8th (in this example I would like to see Thursday 7th):</p>
<ul>
<li>2021-01-01</li>
<li>2021-01-15</li>
<li>2021-01-22</li>
<li>2021-01-29</li>
</ul>
<p>This next attempt doesn't include holidays:</p>
<pre class="lang-py prettyprint-override"><code>s = pd.date_range('2021-01-01', '2021-01-31', freq='W-FRI')
pd.DataFrame(s, columns=['Date'])
</code></pre>
<p>I have tried a monthly frequency, and that works fine for month ends - I just can't convert it to week ends:</p>
<pre class="lang-py prettyprint-override"><code>local_holidays = ['2020-01-31']
my_bank_holidays = pd.tseries.offsets.CustomBusinessMonthEnd(holidays=local_holidays)
reporting_date = pd.date_range('2021-01-01', '2021-03-31', freq=my_bank_holidays)
pd.DataFrame(reporting_date)
</code></pre>
<p>Any ideas would be appreciated :-)</p>
|
<pre><code>import numpy as np
import pandas as pd

holidays = np.datetime64('2021-01-08')
date_range = np.arange('2021-01-01', '2021-01-31', step=7, dtype='datetime64[D]')
# roll='backward' moves a holiday Friday back to the previous business day
adjusted = np.busday_offset(dates=date_range, offsets=0, holidays=[holidays], roll='backward')
print(adjusted)
reporting_date = pd.DataFrame(adjusted)
</code></pre>
<p>Output:</p>
<pre><code>['2021-01-01' '2021-01-07' '2021-01-15' '2021-01-22' '2021-01-29']
</code></pre>
|
python|pandas
| 1 |
1,907,438 | 65,958,022 |
GZip and output file
|
<p>I'm having difficulty with the following code (which is simplified from a larger application I'm working on in Python).</p>
<pre><code>from io import StringIO
import gzip
jsonString = 'JSON encoded string here created by a previous process in the application'
out = StringIO()
with gzip.GzipFile(fileobj=out, mode="w") as f:
f.write(str.encode(jsonString))
# Write the file once finished rather than streaming it - uncomment the next line to see file locally.
with open("out_" + currenttimestamp + ".json.gz", "a", encoding="utf-8") as f:
f.write(out.getvalue())
</code></pre>
<p>When this runs I get the following error:</p>
<pre><code>File "d:\Development\AWS\TwitterCompetitionsStreaming.py", line 61, in on_status
with gzip.GzipFile(fileobj=out, mode="w") as f:
File "C:\Python38\lib\gzip.py", line 204, in __init__
self._write_gzip_header(compresslevel)
File "C:\Python38\lib\gzip.py", line 232, in _write_gzip_header
self.fileobj.write(b'\037\213') # magic header
TypeError: string argument expected, got 'bytes'
</code></pre>
<p>PS ignore the rubbish indenting here...I know it doesn't look right.</p>
<p>What I'm wanting to do is to create a json file and gzip it in place in memory before saving the gzipped file to the filesystem (windows). I know I've gone about this the wrong way and could do with a pointer. Many thanks in advance.</p>
|
<p>You have to use bytes everywhere when working with gzip instead of strings and text. First, use BytesIO instead of StringIO. Second, the mode should be <code>'wb'</code> for bytes instead of <code>'w'</code> (the latter is for text), and similarly <code>'ab'</code> instead of <code>'a'</code> when appending; here the <code>'b'</code> character means "bytes". Full corrected code below:</p>
<p><a href="https://tio.run/##bY8/T8MwEMV3f4pTGJqobdSWCSQWBhAMdOjAiJz0krjyP9mXhvTLh3NaiQE8WGff3Xvv50fqnL2fpiY4A8qBMt4FgueRML7txe3ZXpQX4hSdPVBQtoUnWLwf9h@AtnZHPEK8fncYEOqAkvivGkGCD3hWro9cuBpjBGWBOgTpvVa1JOXsQgjXE0veTPNCDIq62bR85etFacwbvlx14jEeXoFh25RiqBYFyAjNowA@TTkERZhznvKaLf9NXRTzjKj7ENASKYORpPFJZ7fZbdeb7Xr3wHHu4DOpzEGTLzhbp8qq2DFYkNwI3JU2gaM0iV0RrKFnT2NYfN61@E2glWUlBxFvYtrVUuuxvFI6jzbPmOkrgyX8ibaErEwEZXvJVpANVfYfLq@XLdJZ6h5zxpymHw" rel="nofollow noreferrer" title="Python 3 – Try It Online">Try it online!</a></p>
<pre><code>from io import BytesIO
import gzip
jsonString = 'JSON encoded string here created by a previous process in the application'
out = BytesIO()
with gzip.GzipFile(fileobj = out, mode = 'wb') as f:
f.write(str.encode(jsonString))
currenttimestamp = '2021-01-29'
# Write the file once finished rather than streaming it - uncomment the next line to see file locally.
with open("out_" + currenttimestamp + ".json.gz", "wb") as f:
f.write(out.getvalue())
</code></pre>
|
python|json|gzip
| 1 |
1,907,439 | 72,769,082 |
Repeated rows filled with NaNs after convertion to wide format in pandas
|
<p>I have a dataframe which I had to convert between long and wide formats and perform certain calculations.</p>
<p>After the convertions I have ended up with a dataframe, as below:</p>
<pre><code>df={'Time':['0', '1', '0', '1','0', '1','0', '1'],
'Parameter':['down_A', 'down_A', 'up_A','up_A','down_A', 'down_A', 'up_A','up_A'],
'NodeA':['2.56', '0.06', '0.14', '1.005','NaN', 'NaN', 'NaN','NaN'],
'NodeB':['NaN', 'NaN','NaN', 'NaN', '1.44', '1.11','0.56','1.98'],}
</code></pre>
<p>and I would like to multiply the values of NodeA and NodeB for the same Time-Parameter pairs.</p>
<p>The output I would ultimately like to have is:</p>
<pre><code>df = {'Time':['0', '1', '0', '1'],
'Parameter':['down_A', 'down_A', 'up_A','up_A'],
'NodeA':['2.56', '0.06', '0.14', '1.005'],
'NodeB':['1.44', '1.11','0.56','1.98'],
'Multiplied':['3.6864','0.0666','0.0784','1.9899']}
</code></pre>
<p>How can I remove the duplicated rows and have a single row for each Time-Parameter pair?</p>
|
<p>First convert the values to floats, then aggregate with <code>first</code> (or <code>sum</code>, <code>min</code>, <code>max</code>, ...), and finally multiply the columns:</p>
<pre><code>df[['NodeA','NodeB']] = df[['NodeA','NodeB']].astype(float)
df= df.groupby(['Time','Parameter'], as_index=False, sort=False).first()
df['Multiplied'] = df['NodeA'].mul(df['NodeB'])
print (df)
Time Parameter NodeA NodeB Multiplied
0 0 down_A 2.560 1.44 3.6864
1 1 down_A 0.060 1.11 0.0666
2 0 up_A 0.140 0.56 0.0784
3 1 up_A 1.005 1.98 1.9899
</code></pre>
|
python|pandas|dataframe
| 2 |
1,907,440 | 68,159,257 |
Question on adding a ranking column to csv in python
|
<p>I am working on a csv file and I want to add a ranking column that resets when the value of the <code>name</code> column changes. This is my original code:</p>
<pre><code>import pandas as pd
l = [{'sku': 'WD-0215', 'name': 'Sofa', 'price': '$1,299.00'},
{'sku': 'WD-1345', 'name': 'Sofa', 'price': '$1,399.00'},
{'sku': 'WD-0416', 'name': 'Sofa', 'price': '$1,199.00'},
{'sku': 'sfr20', 'name': 'TV', 'price': '$1,861.00'},
{'sku': 'sfr40', 'name': 'TV', 'price': '$1,561.00'},
{'sku': 'sfr30', 'name': 'TV', 'price': '$1,961.00'}
]
df = pd.DataFrame(l)
df["rank"]=""
for i in range(len(df.values)):
df.iloc[i,3]=i+1
i+=1
df
</code></pre>
<p>This will create a <code>rank</code> column with values from 1 to 6,
but my expected output should be like this:</p>
<pre><code> sku name price rank
WD-0215 Sofa $1299.00 1
WD-1345 Sofa $1399.00 2
WD-0416 Sofa $1199.00 3
sfr20 TV $1861.00 1
sfr40 TV $1561.00 2
sfr30 TV $1961.00 3
</code></pre>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> together with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a> to get a cumulative count for each value of <code>name</code>:</p>
<pre><code>df['rank'] = df.groupby('name').cumcount() + 1
</code></pre>
<p>Result:</p>
<pre><code> sku name price rank
0 WD-0215 Sofa $1,299.00 1
1 WD-1345 Sofa $1,399.00 2
2 WD-0416 Sofa $1,199.00 3
3 sfr20 TV $1,861.00 1
4 sfr40 TV $1,561.00 2
5 sfr30 TV $1,961.00 3
</code></pre>
|
python|pandas|csv
| 3 |
1,907,441 | 68,335,570 |
How to delete empty space from a text file using python?
|
<p><a href="https://i.stack.imgur.com/21pdA.png" rel="nofollow noreferrer">Image of the text file</a></p>
<p>Now what I want in this text file is the complete text written in a single line so that it looks like this one:<a href="https://i.stack.imgur.com/QN1o5.png" rel="nofollow noreferrer">What I want it to be like</a>
How do I do this using Python? I tried the strip function, the replace function, etc. It just isn't working.</p>
|
<p>You just need to read the file in, remove the new lines and write it again.</p>
<pre class="lang-py prettyprint-override"><code>with open("./foo.txt", "r") as f:
    formatted = ""
    for line in f.readlines():
        formatted += line.replace('\n', " ")  # Remove the newlines, add spaces instead
    formatted = formatted.replace("  ", " ")  # Replace double spaces with a single space (note the assignment)

# Write the single line to a text file
with open("./bar.txt", "w") as out:
    out.write(formatted)
</code></pre>
|
python|flask|text
| 0 |
1,907,442 | 59,427,019 |
How to initialize a kernel with a tensor
|
<p>I have created a custom layer in Keras which simply performs a dot product between the input and a kernel. But for the kernel I wanted to use the mean of the batch as the kernel initialization, meaning taking the mean of the batch and producing a kernel whose initial value is that mean. To do so I have created a custom kernel initializer as follows:</p>
<pre><code> class Tensor_Init(Initializer):
"""Initializer that generates tensors initialized to a given tensor.
# Arguments
Tensor: the generator tensors.
"""
def __init__(self, Tensor=None):
self.Tensor = Tensor
def __call__(self, shape, dtype=None):
return tf.Variable(self.Tensor)
def get_config(self):
return {'Tensor': self.Tensor}
</code></pre>
<p>This is the call method of the custom layer in Keras. I simply compute the mean of the batch and use it with the above initializer class to produce a kernel. I use it as follows in the custom layer:</p>
<pre><code> def call(self, inputs):
data_format = conv_utils.convert_data_format(self.data_format, self.rank + 2)
inputs = tf.extract_image_patches(
inputs,
ksizes=(1,) + self.kernel_size + (1,),
strides=(1,) + self.strides + (1,),
rates=(1,) + self.dilation_rate + (1,),
padding=self.padding.upper(),
)
inputs = K.reshape(inputs,[-1,inputs.get_shape().as_list()[1],inputs.get_shape().as_list()
[2],self.kernel_size[0]*self.kernel_size[1] ,self.output_dim])
self.kernel = self.add_weight(name='kernel',shape=(),initializer=Tensor_Init(Tensor=tf.reduce_mean(inputs, 0)),trainable=True)
outputs = (tf.einsum('NHWKC,HWKC->NHWC',inputs,self.kernel)+self.c)**self.p
if self.data_format == 'channels_first':
outputs = K.permute_dimensions(outputs, (0, 3, 1, 2))
return outputs
</code></pre>
<p>The model is created and compiled normally, but when I start training I get this error:</p>
<pre><code>InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_1_input' with dtype float and shape [?,48,48,3]
[[node conv2d_1_input (defined at C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py:736) ]]
Original stack trace for 'conv2d_1_input':
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelapp.py", line 563, in start
self.io_loop.start()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\asyncio\base_events.py", line 438, in run_forever
self._run_once()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\asyncio\base_events.py", line 1451, in _run_once
handle._run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 378, in dispatch_queue
yield self.process_one()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 225, in wrapper
runner = Runner(result, future, yielded)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 714, in __init__
self.run()
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-35eda01d200a>", line 75, in <module>
model = create_vgg16()
File "<ipython-input-2-35eda01d200a>", line 12, in create_vgg16
model.add(Conv2D(64, (5, 5), input_shape=(48,48,3), padding='same'))
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\sequential.py", line 162, in add
name=layer.name + '_input')
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\input_layer.py", line 178, in Input
input_tensor=tensor)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\engine\input_layer.py", line 87, in __init__
name=self.name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\keras\backend\tensorflow_backend.py", line 736, in placeholder
shape=shape, ndim=ndim, dtype=dtype, sparse=sparse, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\keras\backend.py", line 998, in placeholder
x = array_ops.placeholder(dtype, shape=shape, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\array_ops.py", line 2143, in placeholder
return gen_array_ops.placeholder(dtype=dtype, shape=shape, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 7401, in placeholder
"Placeholder", dtype=dtype, shape=shape, name=name)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 3616, in create_op
op_def=op_def)
File "C:\ProgramData\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
</code></pre>
|
<p>I was able to pass the mean of the batch to the kernel by simply creating a zero-initialized kernel and then assigning the mean value to it, without even creating a custom initializer. I modified the custom layer as follows:</p>
<pre><code>
def call(self, inputs):
data_format = conv_utils.convert_data_format(self.data_format, self.rank + 2)
inputs = tf.extract_image_patches(
inputs,
ksizes=(1,) + self.kernel_size + (1,),
strides=(1,) + self.strides + (1,),
rates=(1,) + self.dilation_rate + (1,),
padding=self.padding.upper(),
)
inputs = K.reshape(inputs,[-1,inputs.get_shape().as_list()[1],inputs.get_shape().as_list()
[2],self.kernel_size[0]*self.kernel_size[1] ,self.output_dim])
weights = tf.reduce_mean(inputs, 0)
self.kernel = self.add_weight(name='kernel',
shape=(weights.get_shape().as_list()[0],weights.get_shape().as_list()
[1],weights.get_shape().as_list()[2],weights.get_shape().as_list()[3]),
initializer='zeros',
trainable=True)
tf.compat.v1.assign(self.kernel, weights)
outputs = (tf.einsum('NHWKC,HWKC->NHWC',inputs,self.kernel)+self.c)**self.p
if self.data_format == 'channels_first':
outputs = K.permute_dimensions(outputs, (0, 3, 1, 2))
return outputs
</code></pre>
|
python|tensorflow|keras
| 0 |
1,907,443 | 59,310,990 |
Django get PK dynamically from views.py
|
<p>I have a view that renders a comment form along with a template:</p>
<p><strong>views.py</strong></p>
<pre><code>def news(request):
if request.method == "POST":
form = CommentForm(request.POST)
if form.is_valid():
comment = form.save(commit=False)
comment.post = Article.objects.get(pk=2)
print(comment.post)
comment.author = request.user.username
comment.save()
return HttpResponseRedirect('')
else:
form = CommentForm()
return render(request, '../templates/news.html', context={"form": form})
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>class Comment(models.Model):
post = models.ForeignKey(Article, on_delete=models.CASCADE, related_name='comments', blank=True)
author = models.TextField()
text = models.TextField()
def __str__(self):
return self.text
</code></pre>
<p><strong>forms.py</strong></p>
<pre><code>class CommentForm(ModelForm):
class Meta:
model = Comment
fields = ('text',)
</code></pre>
<p>In views.py, where <strong>comment.post</strong> is getting assigned to Article objects, I want the pk to be applied dynamically. I tried doing it in the templates, where putting {{ article.pk }} in the templates output the right pk for the Article object but I wasn't sure how I'd go about applying it to my form. </p>
<p>The templates look simple: Article object, below it a comment form.</p>
<p>The problem is simple, I want the news(request) function to dynamically apply the pk of the current Article object in order to make the comment go to the right post.</p>
|
<p>You can either use the path if it's unique or you can just add a hidden field and set the article pk as value:</p>
<pre><code><input name="post" type="hidden" value={{ article.pk }} />
</code></pre>
<p>And your form:</p>
<pre><code>class CommentForm(ModelForm):
class Meta:
model = Comment
fields = ('text', 'post')
</code></pre>
<p>and you can access it from the validated data in the view.</p>
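<p>A sketch of the resulting view (since <code>post</code> is now a form field, the ModelForm populates it from the hidden input automatically):</p>
<pre><code>def news(request):
    if request.method == "POST":
        form = CommentForm(request.POST)
        if form.is_valid():
            comment = form.save(commit=False)  # comment.post is already set from the hidden field
            comment.author = request.user.username
            comment.save()
            return HttpResponseRedirect('')
    else:
        form = CommentForm()
    return render(request, '../templates/news.html', context={"form": form})
</code></pre>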
|
python|django
| 0 |
1,907,444 | 59,109,708 |
UnboundLocalError in the same function - Python
|
<p>This piece of code in python is outputting an error "UnboundLocalError: local variable 'accelY' referenced before assignment ". But accelY has already been assigned in the same function. Does anyone know why this is happening?</p>
<pre><code> while True:
for event in pygame.event.get():
if event.type == pygame.QUIT:
pygame.quit()
quit()
if event.type == pygame.KEYDOWN:
if event.key == pygame.K_SPACE:
accelY = -2
y += accelY
</code></pre>
|
<p><code>accelY</code> isn't <em>necessarily</em> set earlier in the function; if there is no <code>event</code> with the specified type & key, it won't be.</p>
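<p>One common way to avoid the error (assuming a default of no vertical acceleration is acceptable) is to give the variable a value before the event loop:</p>
<pre><code>accelY = 0  # default, so the name always exists
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            quit()
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_SPACE:
                accelY = -2
        y += accelY
</code></pre>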
|
python|python-3.x|pygame
| 0 |
1,907,445 | 59,188,704 |
How to design a custom callback in Keras?
|
<p>I was trying to design a custom callback in keras with the following information:</p>
<pre><code>
- Number of samples processed in the current batch
- Current epoch
- Time taken to execute the current epoch
- Any metric supplied to the model in its compilation stage
- Number of epochs remaining
</code></pre>
<p>A progress bar should be displayed for the number of epochs and current epoch. I tried to use tqdm but could not figure out how to implement it. Any help will be appreciated </p>
|
<p>Here is the permalink for the Keras ProgBar callback mentioned earlier by @Dr. Snoopy (his original link was broken):
<a href="https://github.com/keras-team/keras/blob/461c7b1dc1797d2b05110837f5a4cd12886cc390/keras/callbacks.py#L926" rel="nofollow noreferrer">https://github.com/keras-team/keras/blob/461c7b1dc1797d2b05110837f5a4cd12886cc390/keras/callbacks.py#L926</a></p>
<p>It is quite interesting, I was personally looking for this to improve features of my own callbacks:</p>
<pre><code>def set_params(self, params):
super().set_params(params)
self.epochs = params['epochs']
</code></pre>
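<p>For what it's worth, a minimal sketch of such a callback using <code>tqdm</code> and <code>tf.keras</code> could look like this (the exact fields displayed are up to you; the metrics supplied at compile time show up in <code>logs</code>):</p>
<pre><code>import time
import tensorflow as tf
from tqdm import tqdm

class ProgressCallback(tf.keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.epochs = self.params['epochs']        # total number of epochs
        self.pbar = tqdm(total=self.epochs, desc='Epochs')

    def on_epoch_begin(self, epoch, logs=None):
        self.epoch_start = time.time()

    def on_epoch_end(self, epoch, logs=None):
        elapsed = time.time() - self.epoch_start   # time taken by the current epoch
        remaining = self.epochs - (epoch + 1)      # epochs still to run
        self.pbar.set_postfix(epoch_time='%.1fs' % elapsed,
                              remaining=remaining, **(logs or {}))
        self.pbar.update(1)

    def on_train_end(self, logs=None):
        self.pbar.close()
</code></pre>
<p>It would then be passed in via <code>model.fit(..., callbacks=[ProgressCallback()])</code>.</p>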
|
tensorflow|keras|callback|tqdm
| 0 |
1,907,446 | 72,896,626 |
FileNotFoundError raceback (most recent call last) while using os.listdir()
|
<p>I am facing a file-not-found problem. The os.listdir() method should be able to list the folder. Why doesn't it work correctly? Any advice or suggestions would be appreciated. Thank you.</p>
<pre><code>scene = 'scene1'
folders = os.listdir("graph_state_list/" + scene + "/")
for folder in folders:
try:
activity_directory = "graph_state_list/" + scene + "/" + folder
directories = os.listdir(activity_directory)
program_discription_list = []
for directory in directories:
program_description_path = "graph_state_list/" + scene + "/" + folder + "/" + directory + "/program-description.txt"
program_description = {}
input_file = open(program_description_path, "r")
name_desc = []
for line in input_file:
name_desc.append(line.strip())
input_file.close()
program_description = {
"name": name_desc[0],
"description": name_desc[1]
}
program_discription_list.append(program_description)
activity_program = get_activity_program("graph_state_list/" + scene + "/" + folder + "/" + directory + "/activityList-program.txt")
graph_state_list = get_graph_state_list("graph_state_list/" + scene + "/" + folder + "/" + directory + "/activityList-graph-state-*.json")
create_rdf(graph_state_list, program_description, activity_program, scene, directory)
except Exception as e:
print(e.args)
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [66], in <cell line: 2>()
1 scene = 'scene1'
----> 2 folders = os.listdir("graph_state_list/" + scene + "/")
3 for folder in folders:
4 try:
FileNotFoundError: [Errno 2] No such file or directory: 'graph_state_list/scene1/'
</code></pre>
|
<p>The problem is with the slash character you have in your path.</p>
<p>Try to use this instead:</p>
<pre><code>activity_directory = "graph_state_list\\" + scene + "\\" + folder
</code></pre>
<p>EDIT: the error is raised in line 2, so please change that one as well to:</p>
<pre><code>folders = os.listdir("graph_state_list\\" + scene + "\\")
</code></pre>
|
python|ubuntu|jupyter-notebook|windows-subsystem-for-linux
| 1 |
1,907,447 | 63,019,205 |
Group By in Django ORM?
|
<p>I have two models</p>
<pre><code>class ShareType(models.Model):
code = models.CharField(max_length=10, null=False, blank=False)
type = models.CharField(max_length=56, null=False, blank=False)
class ShareAllocation(models.Model):
session_key = models.CharField(max_length=40, null=True)
share_type = models.ForeignKey(ShareType, on_delete=models.CASCADE, null=True)
share_holder = models.ForeignKey(ShareHolder, on_delete=models.CASCADE, null=True)
number_of_shares = models.IntegerField(null=False, blank=False)
amount_paid = models.IntegerField(null=False, blank=False)
amount_unpaid = models.IntegerField(null=False, blank=False)
is_beneficial_owner = models.BooleanField(null=False, blank=False)
</code></pre>
<p>i am using this query, which is obviously incorrect.</p>
<pre><code>share_structure = ShareType.objects.annotate(Sum('shareallocation__number_of_shares')).\
filter(shareallocation__session_key=self.storage.request.session.session_key)
</code></pre>
<p>sample data for shareallocation table</p>
<pre><code>12,5,2,1,false,9,9,098fiy6m9f0tkkmnk8jr715pyjlzcai6
13,10,2,1,false,10,12,098fiy6m9f0tkkmnk8jr715pyjlzcai6
14,15,2,1,false,12,12,098fiy6m9f0tkkmnk8jr715pyjlzcai6
</code></pre>
<p>sample data for share type table</p>
<pre><code>9,ORD,(ORD) Ordinary
10,A,(A) A
11,B,(B) B
12,MAN,(MAN) Management
13,LG,(LG) Life Governors
14,EMP,(EMP) Employees
15,FOU,(FOU) Founders
16,PRF,(PRF) Preference
17,CUMP,(CUMP) Cumulative Preference
18,NCP,(NCP) Non Cumulative Preference
19,REDP,(REDP) Redeemable Preference
20,NRP,(NRP) Non Redeemable Preference
21,NCRP,(NCRP) Non Cum. Redeemable Preference
22,PARP,(PARP) Participative Preference
23,RED,(RED) Redeemable
24,INI,(INI) Initial
25,SPE,(SPE) Special
</code></pre>
<p>required output:</p>
<ul>
<li>Code</li>
<li>Number Of Shares(total)</li>
<li>Amount Paid(total)</li>
<li>Amount Unpaid(total)</li>
</ul>
|
<p>I think this is what is called change! I have a feeling there is going to be a lot of it that I am going to see... that aside,</p>
<p>here is the solution:</p>
<pre><code> share_structure = ShareAllocation.objects.values('share_type__code','share_type__type').\
annotate(share_count=Sum('share_type')). \
annotate(number_of_shares=Sum('number_of_shares')).\
annotate(amount_paid=Sum('amount_paid')).\
annotate(amount_unpaid=Sum('amount_unpaid'))
</code></pre>
<p>The challenge was in understanding that all of my output column names come from the values() call.</p>
<p>The major learning from this exercise is that columns from the referenced table go into values() as tablename__fieldname,
and the summed columns come from the annotate() calls.</p>
|
python|django|django-orm
| 0 |
1,907,448 | 62,090,960 |
Is it possible to call/use instance attributes or global variables from a custom loss function using Keras?
|
<p>I would like to define a loss function like the following:</p>
<pre><code>def custom_loss_function(y_true, y_pred):
calculate loss based on y_true, y_pred and self.list_of_values
</code></pre>
<p>where variable self.list_of_values is modified outside this function every iteration and therefore, will have different values each time the custom_loss_function is "called". I know from <a href="https://stackoverflow.com/questions/55653015/how-do-i-call-a-global-variable-in-keras-custom-loss-function-to-change-the-ret">this post</a> that the loss function is called only once, and then "a session iteratively evaluates the loss".</p>
<p>My doubt is whether it is possible to work with global/external variables (with dynamic values) from the loss function that is then used like this:</p>
<pre><code>model.compile(loss=custom_loss_function, optimizer=Adam(lr=LEARNING_RATE), metrics=['accuracy'])
</code></pre>
|
<p>Specifying the solution here (Answer Section) even though it is present in Comments Section, for the <strong>benefit of the Community</strong>.</p>
<p>The Variable, <code>list_of_values</code> can be considered as an <code>Input Variable</code> like </p>
<p><code>list_of_values = Input(shape=(1,), name='list_of_values')</code> and define the <code>Custom Loss function</code> as shown below:</p>
<pre><code>def sample_loss( y_true, y_pred, list_of_values ) :
return list_of_values * categorical_crossentropy( y_true, y_pred )
</code></pre>
<p>Also, the same <code>Global Variable</code> can be passed as an Input to the Model like:</p>
<pre><code>model = Model( inputs=[x, y_true, list_of_values], outputs=y_pred, name='train_only' )
</code></pre>
<p>Complete code for an example is shown below:</p>
<pre><code>from keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten
from keras.models import Model
from keras.losses import categorical_crossentropy
def sample_loss( y_true, y_pred, list_of_values ) :
return list_of_values * categorical_crossentropy( y_true, y_pred )
x = Input(shape=(32,32,3), name='image_in')
y_true = Input( shape=(10,), name='y_true' )
list_of_values = Input(shape=(1,), name='list_of_values')
f = Conv2D(16,(3,3),padding='same')(x)
f = MaxPool2D((2,2),padding='same')(f)
f = Conv2D(32,(3,3),padding='same')(f)
f = MaxPool2D((2,2),padding='same')(f)
f = Conv2D(64,(3,3),padding='same')(f)
f = MaxPool2D((2,2),padding='same')(f)
f = Flatten()(f)
y_pred = Dense(10, activation='softmax', name='y_pred' )(f)
model = Model( inputs=[x, y_true, list_of_values], outputs=y_pred, name='train_only' )
model.add_loss( sample_loss( y_true, y_pred, list_of_values ) )
model.compile( loss=None, optimizer='sgd' )
print(model.summary())
</code></pre>
<p>For more information, please refer this <a href="https://stackoverflow.com/a/50127646/13465258">Stack Overflow Answer</a>.</p>
<p>Hope this helps. Happy Learning!</p>
|
python-3.x|tensorflow|keras
| 1 |
1,907,449 | 62,190,021 |
Why can't I use '+' to merge dictionaries in Python?
|
<p>I am a new Python user and I have some doubts.</p>
<p>I know that the <code>+</code> operator not only performs addition between numbers but also concatenation between strings or lists. Why is this not allowed for dictionaries?</p>
|
<p>How would the <code>+</code> operator for dicts handle duplicate keys? e.g.</p>
<pre><code>>>> {'d': 2} + {'d': 1}
</code></pre>
<p>Perhaps like a <code>Counter</code>?</p>
<pre><code>>>> from collections import Counter
>>> Counter({'d': 2}) + Counter({'d': 1})
Counter({'d': 3})
</code></pre>
<p>Or like a <code>defaultdict</code>?</p>
<pre><code>{'d': [2, 1]}
</code></pre>
<p>Or overwriting the 1st key like <code>dict.update</code>?</p>
<pre><code>>>> d = {'d': 2}
>>> d.update({'d':1})
>>> d
{'d': 1}
</code></pre>
<p>Or leaving only the 1st key?</p>
<pre><code>{'d': 2}
</code></pre>
<p>It's frankly ambiguous!</p>
<p>See also <a href="https://www.python.org/dev/peps/pep-0584/#use-the-addition-operator" rel="nofollow noreferrer">PEP 0584</a>:</p>
<blockquote>
<p><strong>Use The Addition Operator</strong></p>
<p>This PEP originally started life as a
proposal for dict addition, using the + and += operator. That choice
proved to be exceedingly controversial, with many people having
serious objections to the choice of operator. For details, see
previous versions of the PEP and the mailing list discussions.</p>
</blockquote>
<p>Note Guido himself did <a href="https://mail.python.org/archives/list/python-ideas@python.org/thread/BHIJX6MHGMMD3S6D7GVTPZQL4N5V7T42/" rel="nofollow noreferrer">consider and discuss this</a>; see also <a href="https://bugs.python.org/issue36144" rel="nofollow noreferrer">issue36144</a>.</p>
|
python|dictionary|concatenation|addition
| 3 |
1,907,450 | 35,517,038 |
Increasing Frequency of x-axis labels for dates on DataFrame plot
|
<p>I have a pandas DataFrame with two columns: <code>month_of_sale</code> which is a date, and <code>number_of_gizmos_sold</code> which is a number.</p>
<p>I'm trying to increase the frequency of the labels on the x-axis so it's easier to read, but I can't!</p>
<p>Here is the <code>df.head()</code> of my table:</p>
<p><a href="https://i.stack.imgur.com/AGToT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AGToT.png" alt="DataFrame.head()"></a></p>
<p>and this is what it plots:
<code>df.plot(y='number_of_gizmos_sold', figsize=(15,5))</code></p>
<p><a href="https://i.stack.imgur.com/26dyd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/26dyd.png" alt="DataFrame.plot()"></a></p>
<p>I'd like to increase the frequency of the labels, because there's a big space in between them.</p>
<h2>What I've tried</h2>
<p><code>plot.xaxis.set_major_locator(MonthLocator())</code> but that seems to increase the distance between the labels even more.</p>
<p><a href="https://i.stack.imgur.com/y2Ny6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y2Ny6.png" alt="enter image description here"></a></p>
<p><code>plot.xaxis.set_major_formatter(DateFormatter('%Y-%m-%d'))</code></p>
<p>Strangely, I end up with this:
<a href="https://i.stack.imgur.com/G1niT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G1niT.png" alt="enter image description here"></a></p>
<p>The questions that last plot raises for me are:</p>
<ul>
<li>What's up with 0002 as the year?</li>
<li>And why do I still have the old <code>Jul</code> labels there too?</li>
</ul>
|
<p>I haven't traced the problem back to its source, but per <a href="https://stackoverflow.com/a/13674286/190597">bmu's
solution</a>, if you call <code>ax.plot</code>
instead of <code>df.plot</code>, then you can configure the result using
<code>ax.xaxis.set_major_locator</code> and <code>ax.xaxis.set_major_formatter</code>. </p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
np.random.seed(2016)
dates = pd.date_range('2013-03-01', '2016-02-01', freq='M')
nums = (np.random.random(len(dates))-0.5).cumsum()
df = pd.DataFrame({'months': dates, 'gizmos': nums})
df['months'] = pd.to_datetime(df['months'])
df = df.set_index('months')
fig, ax = plt.subplots()
ax.plot(df.index, df['gizmos'])
# df.plot(y='gizmos', ax=ax)
ax.xaxis.set_major_locator(mdates.MonthLocator(interval=2))
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
fig.autofmt_xdate()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/t6lAt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/t6lAt.png" alt="enter image description here"></a></p>
|
pandas|matplotlib|plot|axis-labels
| 8 |
1,907,451 | 58,802,781 |
Two identical AutomationID using xpath python with appium
|
<p>In in our mobile app, there are two boxes with the same AutomationId.
For automated testing i need to find the first of the two elements by xpath.
I tried following code, bt it didn't work:</p>
<pre><code>self.driver.find_element_by_xpath(
"xpath=(//[@contentDescription='Cards'])[1]").click()
time.sleep(0.5)
self.assertEqual('Angle x:',
self.driver.find_element_by_accessibility_id('MovementsTitle').text)
time.sleep(0.5)
</code></pre>
<p>Thanks!</p>
|
<p>If they have the same id, you can use the find_elements method (note the plural) and pick the one you want by index; with the Python client that would look like:</p>
<pre><code>driver.find_elements_by_accessibility_id("Cards")[0].click()
</code></pre>
<p>Just specify the index of the element; try 0 or 1.</p>
|
python|testing|xpath|automation|appium
| 0 |
1,907,452 | 58,903,554 |
Using MIT Indoor scene database in CNN
|
<p>I'm an engineering student and kind of a noob to programming. I'm taking an AI course, currently trying to do my final project. </p>
<p>I have to create a CNN net, I have to use the MIT Indoor scene database (it can be found here: <a href="http://web.mit.edu/torralba/www/indoor.html" rel="nofollow noreferrer">http://web.mit.edu/torralba/www/indoor.html</a>). I don't have a problem doing the CNN since I've done a few before in the semester using CIFAR10, but I'm having trouble with this one since I don't know how to use this set of images.<br>
I think I need to create a dataset of my own, I've tried with PyTorch using <a href="https://pytorch.org/tutorials/beginner/data_loading_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/data_loading_tutorial.html</a>, but I get confused because I don't have a .csv with features, I have a lot of .xml files with several features for each picture. Also, I don't have a file just saying "bedroom, bar, etc" as I've seen in other tutorials.</p>
<p>I would rather use PyTorch since I can use the "train_test_split" function, but if anyone could help me understand how to make those 15620 images my input to the net, I would really appreciate it. </p>
|
<p>you can generate your own csv files, although, you might not need it.
There is a good tutorial on pytorch website <a href="https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#load-data" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html#load-data</a>, which is very similar or easily applicable to your case.</p>
<p>MIT Indoor dataset has the images one folder per class, and the txt files mentioned on the website are the train / test splits.</p>
<p>So, if you create the following folder structure:</p>
<pre><code>train
|- class 1
|- class 2
...
|- class n
</code></pre>
<p>and the same for val / test; it should be straightforward to use (adapt) the datasets.ImageFolder example for your case.</p>
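<p>Once the folders are arranged that way, a minimal sketch of loading them (paths, transforms and batch size here are placeholders of my own, not from the tutorial):</p>
<pre><code>import torch
from torchvision import datasets, transforms

tfm = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("indoor/train", transform=tfm)  # one sub-folder per class
val_set = datasets.ImageFolder("indoor/val", transform=tfm)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32)

# train_set.classes holds the folder names (the scene labels);
# each batch yields (images, label_indices) ready to feed the CNN.
</code></pre>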
|
machine-learning|artificial-intelligence|conv-neural-network|pytorch
| 1 |
1,907,453 | 58,822,230 |
docker-compose ignores installed django, virtualenv
|
<p>docker-compose ignores installed virtual environment</p>
<p>I am trying to dockerize my existing django app. Also I am new to docker, so please forgive me, if this is some sort of "just read the instructions" thing. Question 51490937 seems to be somewhat related, but I am not sure about that. My app runs apache with a pip install mod-wsgi and it already deploys well to a native ubuntu 18 and 19 vm.
I run win10 and I am using a linux container with Docker version 19.03.4, build 9013bf5 and I run a python:3.7 base image.</p>
<p>My problem: Running my installed image fails when I use docker-compose up, but succeeds when I run the same manually. I first assumed, I have a caching problem, but trying all the things as suggested by <a href="https://forums.docker.com/t/how-to-delete-cache/5753" rel="nofollow noreferrer">https://forums.docker.com/t/how-to-delete-cache/5753</a> did not help. Also the time stamp of my local project folder is identical to the time stamp of the manually run container and the one returned by the Dockerfile. So now I assume there is something wrong with my docker-compose.yml, or I activated some branching, that I am not aware of. (files are attached at the end) For debugging I removed the postgress service and now run the default sqlite db.</p>
<p><code>docker-compose up (ls -la && source venv/bin/activate)</code> returns:</p>
<pre><code>some other files...
302_game_container | -rwxr-xr-x 1 root root 400 Nov 11 16:49 test.py
302_game_container | drwxrwxrwx 2 root root 0 Nov 12 11:55 web_project
302_game_container | bash: venv/bin/activate: No such file or directory <---- The ERROR
302_game_container exited with code 1
</code></pre>
<p>Note: The time stamp shown here is (11:55 web_project) and venv folder is missing. It looks like the venv was not properly installed.</p>
<p>However, when I now run the same image manually, by typing:</p>
<pre><code>docker run -it -d 302_chess_game_2019_11_12_15_20_33_354574
docker exec -it myDockerContainer bash
ls -la && source venv/bin/activate
</code></pre>
<p>I get the desired result:</p>
<pre><code>some other files...
-rwxr-xr-x 1 root root 400 Nov 11 16:49 test.py
drwxr-xr-x 5 root root 4096 Nov 12 11:57 venv
drwxrwxrwx 1 root www-data 4096 Nov 12 11:59 web_project
(venv) root@d98fde4a6316:/302_chess_game#
</code></pre>
<p>I can then successfully run python web_project/manage.py runserver and even sudo /etc/wsgi-port-8000/apachectl restart. So the image contains the installed venv. I can then view the page under 0.0.0.0:8000.</p>
<p>my setup:
The Dockerfile runs a script.sh which installs all prod programs, creates my venv, installs requirements.txt, activates the venv and then RUNs makemigrations, migrate and collectstatic. All tasks run successfully. The last step in
<code>Dockerfile RUNs ls -la</code> returns:</p>
<pre><code>some other files...
-rwxr-xr-x 1 root root 400 Nov 11 16:49 test.py
drwxr-xr-x 5 root root 4096 Nov 12 11:57 venv
drwxrwxrwx 1 root www-data 4096 Nov 12 11:59 web_project
</code></pre>
<p>As you can see, the venv is installed inside the image and the time stamp for the web_project folder is (11:59 web_project).</p>
<p>Here is my reduced <code>docker-compose.yml</code> (project=302_chess_game, timeStamp=_2019_11_12_15_20_33_354574, port=8000, image=python:3.7)</p>
<pre><code>version: '3'
services:
python:
image: {{ project }}{{ timeStamp }}
container_name: 302_game_container
volumes:
- .:/{{ project }}
ports:
- {{ port }}:{{ port }}
command: >
bash -c "ls -la && source venv/bin/activate"
</code></pre>
<p>Here is my Dockerfile</p>
<pre><code>FROM {{ image }}
# install ubuntu stuff
RUN apt -y update
RUN apt -y upgrade
RUN apt-get -y install sudo
# copy files
RUN sudo mkdir /{{ project }}
WORKDIR /{{ project }}
COPY . /{{ project }}/
# set default evn variables
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
#ENV DEBIAN_FRONTEND=noninteractive
# set environment variables
ENV PORT={{ port }}
ENV PRODUCTION=1
ENV SERVER={{ server }}
ENV DEBUG={{ debug }}
ENV DATABASE={{ database }}
# install shell script
RUN chmod 777 ./resources/{{ install_sh }}
RUN sed -i -e 's/\r$//' ./resources/{{ install_sh }}
RUN ./resources/{{ install_sh }}
RUN ls -la
EXPOSE {{ port }}
# add user implemented later
#RUN useradd -ms /bin/bash {{ username }}
#RUN echo '{{ username }}:test321' | chpasswd
#RUN adduser {{ username }} sudo
#RUN chown -R {{ username }}:sudo /{{ project }}
#USER {{ username }}
</code></pre>
<p>Here is the shell script:</p>
<pre><code>#!/bin/bash
sudo apt install vim -y
sudo apt install net-tools
sudo apt install -y python3-pip
sudo apt install build-essential libssl-dev libffi-dev python3-dev -y
sudo apt install apache2 -y
sudo apt install apache2-dev -y
sudo apt install -y python3-venv
pip3 install mod_wsgi
python3 -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip
pip install --upgrade setuptools
pip install -r requirements.txt
pip install mod_wsgi
sudo mkdir /etc/wsgi-port-{{ port }}
sudo chown -R {{ username }}:{{ username }} /etc/wsgi-port-{{ port }}
sudo groupadd www-data
sudo adduser www-data www-data
sudo chown -R :www-data web_project/media/
sudo chmod -R 775 web_project/media/
sudo chown -R :www-data web_project
sudo chmod 777 web_project
sudo chown :www-data web_project/{{ project }}.sqlite3
sudo chmod 664 web_project/{{ project }}.sqlite3
python web_project/manage.py makemigrations
python web_project/manage.py migrate
python web_project/manage.py collectstatic
python web_project/manage.py runmodwsgi --server-root /etc/wsgi-port-{{ port }} --user www-data --group www-data --port {{ port }} --url-alias /static static --url-alias /media media --setup-only
#
# sudo cp conf_files/sshd_config /etc/ssh/sshd_config
# sudo systemctl restart sshd
# sudo apt install ufw -y
# sudo ufw default allow outgoing
# sudo ufw default deny incoming
# sudo ufw allow ssh
# sudo ufw allow http/tcp
# sudo ufw allow https/tcp
# sudo ufw allow 3389
# sudo ufw enable
# sudo passwd {{ username }}
# netstat -nat | grep LISTEN
# sudo ufw status
# sudo /etc/wsgi-port-{{ port }}/apachectl restart
# sudo apt install xrdp
# sudo apt remove lightdm
# sudo apt install xfce4
# sudo apt-get install xfce4-terminal tango-icon-theme
# echo xfce4-session > ~/.xsession
# sudo apt install libexo-1-0
# sudo apt install firefox
# sudo service xrdp restart
# sudo /etc/wsgi-port-{{ port }}/apachectl restart
</code></pre>
<p>To reproduce run the following docker-compose.yml:</p>
<pre><code>version: '3'
services:
python:
image: lmielke/302_chess_game_2019_11_12_15_49_04_892687:testcontainer
container_name: 302_game_container
volumes:
- .:/302_chess_game
ports:
- 8000:8000
command: >
bash -c "source venv/bin/activate &&
sudo apachectl stop &&
sudo /etc/wsgi-port-8000/apachectl start"
</code></pre>
<p>This will pull the image but container build will fail as described. Then run:</p>
<pre><code>docker run -it -d (imgId),
docker exec -it (containerId),
source venv/bin/activate,
sudo /etc/wsgi-port-8000/apachectl start,
</code></pre>
<p>It will show some alias warning but run ok. You can add <code>ls -la</code> to the bash cmds to see that there is no venv folder.</p>
|
<p>Ok, so eventually it was a no-brainer. I intended to activate the venv inside my image's project folder, which is of course impossible when docker-compose first bind-mounts my local folder over that same path: the mount hides the venv that was created during the image build.</p>
|
python|django|docker|docker-compose|mod-wsgi
| 0 |
1,907,454 | 59,864,508 |
How to change the cellsize, xllcorner and yllcorner of an ASCII file?
|
<p>I need to convert the <code>cellsize</code>, <code>xllcorner</code> and <code>yllcorner</code> of multiple ASCII files from <code>m</code> into <code>km</code>.
I've been trying to overwrite them in the header of the ASCII files like I would with a regular text file, like this:</p>
<pre><code>for rw_file in os.listdir(r"C:\Users\Marie\Test"):
rw_file_path = os.path.join(r"C:\Users\Marie\Test", rw_file)
with open(rw_file_path, 'r+') as f:
# skip the first two lines of the header
f.readline()
f.readline()
# convert the values of cellsize, xllcorner and yllcorner into km
line3 = f.readline()
header_x, xllcorner = line3.split()
xllcorner_new = int(xllcorner) / 1000
f.seek(2)
f.write(re.sub(header_x, xllcorner_new)) #third argument??
line4 = f.readline()
header_y, yllcorner = line4.split()
yllcorner_new = int(yllcorner) / 1000
f.seek(3)
f.write(re.sub(header_y, yllcorner_new))
line5 = f.readline()
header_size, cellsize = line5.split()
cellsize_new = int(cellsize) / 1000
f.seek(4)
f.write(re.sub(header_size, cellsize_new))
</code></pre>
<p>But of course the function re.sub needs three arguments. I am not sure how else to do this. I'm still a beginner so I'm sure there is an easy way, but I can't find it.
Can I overwrite these lines in the header somehow, or is there another way?</p>
|
<p>You should write into a new file (and then maybe copy that new file over the old one) instead of changing the old file in-place. Since the new values you're writing into the file are of a different size in bytes than the old ones, changing it in-place will produce garbage content.</p>
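<p>A minimal sketch of that approach, assuming the header layout from your snippet (lines 3-5 hold xllcorner, yllcorner and cellsize); the temporary-file name and the use of <code>float</code> are my own choices:</p>
<pre><code>import os
import shutil

folder = r"C:\Users\Marie\Test"
for name in os.listdir(folder):
    src = os.path.join(folder, name)
    tmp = src + ".tmp"
    with open(src) as fin, open(tmp, "w") as fout:
        for i, line in enumerate(fin):
            if 2 <= i <= 4:                      # xllcorner, yllcorner, cellsize
                key, value = line.split()
                line = f"{key} {float(value) / 1000}\n"
            fout.write(line)
    shutil.move(tmp, src)                        # replace the original file
</code></pre>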
<p>As for your specific question about <code>re.sub</code>, you can find its documentation here: <a href="https://docs.python.org/3/library/re.html#re.sub" rel="nofollow noreferrer">https://docs.python.org/3/library/re.html#re.sub</a>. It requires the string to be changed as its 3rd parameter.</p>
|
python|python-3.x|ascii
| 0 |
1,907,455 | 59,683,726 |
Python API response - GCP
|
<p>I'm calling google API from python requests, I'm able to get response from the API also able to extract status code if request is failing like 404. But how do I get success response as 200? I do not see any attribute with that status.</p>
<p>For example:</p>
<pre><code>request = service.disks().get(project=project, zone=zone, disk=disk).execute()
</code></pre>
<p><code>response.status</code> or <code>response.status_code</code> does not help.</p>
|
<p>The <code>service.disks().get(project=project, zone=zone, disk=disk).execute()</code> returns the dictionary.</p>
<p>Check the <code>request['status']</code>, it will return <code>READY</code>.</p>
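<p>For example, reusing the call from your question:</p>
<pre><code>request = service.disks().get(project=project, zone=zone, disk=disk).execute()
print(request['status'])   # e.g. 'READY'
</code></pre>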
<p><a href="https://cloud.google.com/compute/docs/reference/rest/v1/disks/get?hl=en-GB" rel="nofollow noreferrer">Document reference</a></p>
|
python|google-cloud-platform
| 2 |
1,907,456 | 59,635,516 |
Errors when using SMTPLIB SSL email with a 365 email address
|
<pre><code>context = ssl.create_default_context()
with smtplib.SMTP_SSL("smtp.office365.com", 587, context=context) as server:
</code></pre>
<p>(587) When I run this I get an SSL error: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:1056).</p>
<p>(465) I get a timeout error.</p>
<p>I tried using ports 465 and 587. I get different errors when I use different ports. I did try 995 just for the heck of it and still no luck. If I use my gmail account, I have no issues. </p>
<p>Is there something I need to do to my email account so it works. I also tried .SMTP() and still no luck.</p>
<pre><code>smtp = smtplib.SMTP("smtp.office365.com",587)
context = ssl.create_default_context()
with smtp.starttls(context=context) as server:
server.login(from_address, password)
for i, r in newhire[mask].iterrows():
server.sendmail(
from_address,
r["Email"],
message.format(Employee=r["Employee Name"],
StartDate=r["StartDate"],
PC=r["PC"],
Title=r["Title"],
Email=r["Email"],
)
)
</code></pre>
|
<p>From <a href="https://docs.python.org/3/library/smtplib.html#smtplib.SMTP_SSL" rel="nofollow noreferrer">the documentation of SMTP_SSL</a>:</p>
<blockquote>
<p>SMTP_SSL should be used for situations where SSL is required from the beginning of the connection and using starttls() is not appropriate.</p>
</blockquote>
<p>Thus, SMTP_SSL is for implicit SMTP and the common port for this is 465. Port 587 is instead used for explicit SMTP where a plain connect is done and later an upgrade to SSL with the STARTTLS command. </p>
<p>What happens here is that the client tries to speak SSL/TLS to a server which does not expect SSL/TLS at this stage and thus replies with non-TLS data. These get interpreted as TlS nonetheless which results in this strange <code>[SSL: WRONG_VERSION_NUMBER]</code>.</p>
<p>To fix this either use port 465 (and not 587) with SMTP_SSL (not supported by Office365) or use port 587 but with <a href="https://docs.python.org/3/library/smtplib.html#smtplib.SMTP.starttls" rel="nofollow noreferrer">starttls</a>:</p>
<pre><code>with smtplib.SMTP("smtp.office365.com", 587) as server:
server.starttls(context=context)
</code></pre>
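<p>A slightly fuller sketch with the login and send added back in (the addresses, password and message are placeholders taken from your own code):</p>
<pre><code>import smtplib
import ssl

context = ssl.create_default_context()
with smtplib.SMTP("smtp.office365.com", 587) as server:
    server.starttls(context=context)           # upgrade the plain connection to TLS
    server.login(from_address, password)
    server.sendmail(from_address, to_address, message)
</code></pre>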
|
python|email|ssl|office365|smtplib
| 1 |
1,907,457 | 49,050,287 |
Reading numpy arrays from disk in tensorflow
|
<p>I am a tensorflow beginner, trying to read numpy arrays stored on disk into TF using the TextLineReader. But when I read the arrays in TF, I see values different from the original array. Could someone please point to the mistake I am making here? Please see a sample code below. Thanks </p>
<pre><code>import tensorflow as tf
import numpy as np
import csv
#Write two numpy arrays to disk
a = np.arange(15).reshape(3, 5)
np.save("a.npy",a,allow_pickle=False)
b = np.arange(30).reshape(5, 6)
np.save("b.npy",b,allow_pickle=False)
with open('files.csv', 'w') as csvfile:
filewriter = csv.writer(csvfile, delimiter=',')
filewriter.writerow(['a.npy', 'b.npy'])
# Load a csv with the two array filenames
csv_filename = "files.csv"
filename_queue = tf.train.string_input_producer([csv_filename])
reader = tf.TextLineReader()
_, csv_filename_tf = reader.read(filename_queue)
record_defaults = [tf.constant([], dtype=tf.string), tf.constant([], dtype=tf.string)]
filename_i,filename_j = tf.decode_csv(
csv_filename_tf, record_defaults=record_defaults)
file_contents_i = tf.read_file(filename_i)
file_contents_j = tf.read_file(filename_j)
bytes_i = tf.decode_raw(file_contents_i, tf.int16)
array_i = tf.reshape(tf.cast(tf.slice(bytes_i, [0], [3*5]), tf.int16), [3, 5])
bytes_j = tf.decode_raw(file_contents_j, tf.int16)
array_j = tf.reshape(tf.cast(tf.slice(bytes_j, [0], [5*6]), tf.int16), [5, 6])
with tf.Session() as sess:
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
a_out, b_out = (sess.run([array_i, array_j]))
print(a)
print(a_out)
coord.request_stop()
coord.join(threads)
</code></pre>
<p>Here is the output that I get:</p>
<p>Expected output (a)</p>
<pre><code>[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]]
</code></pre>
<p>Received output: (a_out)</p>
<pre><code>[[20115 19797 22864 1 118]
[10107 25956 25459 10098 8250]
[15399 14441 11303 10016 28518]]
</code></pre>
|
<p>In order to figure out what was happening I printed <code>bytes_i</code> instead of <code>array_i</code>.</p>
<pre><code>a_out, b_out = (sess.run([bytes_i, bytes_j]))
print(a_out)
</code></pre>
<p>And I obtained the following list:</p>
<pre><code>[20115 19797 22864 1 118 10107 25956 25459 10098 8250 15399 14441
11303 10016 28518 29810 24946 24430 29295 25956 10098 8250 24902 29548
11365 10016 26739 28769 10085 8250 13096 8236 10549 8236 8317 8224
8224 8224 8224 8224 8224 8224 8224 8224 8224 8224 8224 8224
8224 8224 8224 8224 8224 8224 8224 8224 8224 8224 8224 8224
8224 8224 8224 2592 0 0 0 0 1 0 0 0
2 0 0 0 3 0 0 0 4 0 0 0
5 0 0 0 6 0 0 0 7 0 0 0
8 0 0 0 9 0 0 0 10 0 0 0
11 0 0 0 12 0 0 0 13 0 0 0
14 0 0 0]
</code></pre>
<p>It appears that there is a header in front of the data stored in the numpy file. Moreover it seems that data values are saved as <code>int64</code> and not as <code>int16</code>.
<br></p>
<h1>Solution</h1>
<p>First specify the type of the values in the array:</p>
<pre><code>a = np.arange(15).reshape(3, 5).astype(np.int16)
b = np.arange(30).reshape(5, 6).astype(np.int16)
</code></pre>
<p>Then read the last bytes of the file:</p>
<pre><code>array_i = tf.reshape(tf.cast(tf.slice(bytes_i,
begin=[tf.size(bytes_i) - (3*5)],
size=[3*5]), tf.int16), [3, 5])
array_j = tf.reshape(tf.cast(tf.slice(bytes_j,
begin=[tf.size(bytes_j) - (5*6)],
size=[5*6]), tf.int16), [5, 6])
</code></pre>
|
python|numpy|tensorflow
| 0 |
1,907,458 | 70,810,161 |
How to verify if a button from a page is really disabled in a chromedriver window on Python? Selenium related
|
<p>I'm trying to automate a process on <a href="https://opensea.io/" rel="nofollow noreferrer">this page</a>, and after several attempts, I still have not figured out why the code below print <code>True</code> when checking if the <code>first_wallet_option</code> button is enabled, here:</p>
<pre><code>driver.get('https://opensea.io/') #go to the opensea main page.
WebDriverWait(driver, 3).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="__next"]/div/div[1]/nav/ul/div[2]/li/button'))) #wait for the wallet button to be enabled for clicking
wallet_button = driver.find_element(By.XPATH, '//*[@id="__next"]/div/div[1]/nav/ul/div[2]/li/button')
wallet_button.click() #click that wallet button
first_wallet_option = driver.find_element(By.XPATH, "/html/body/div[1]/div/aside[2]/div[2]/div/div[2]/ul/li[1]/button")
print(first_wallet_option.is_enabled())
</code></pre>
<p>What happens is that after the page has been loaded, the program clicks the wallet icon located at the top right corner of this page, and according to its html source, those buttons are clearly <code>disabled</code> at the moment it is done, as shown down below:</p>
<p><a href="https://i.stack.imgur.com/haDAk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/haDAk.png" alt="problem_preview1" /></a></p>
<p>But, if I click twice the same wallet icon (one to close the aside element, and other to open it again), they get enabled:</p>
<p><a href="https://i.stack.imgur.com/d7XZI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d7XZI.png" alt="problem_preview2" /></a></p>
<p>So, I would like to learn how could I improve the code above to make sure it really detects when all of those 4 buttons (Metamask and the other 3) are enabled.</p>
|
<p>Ok, here it is:</p>
<ol>
<li>The locators are not relative enough, so I took the liberty of refactoring them.</li>
<li>MetaMask is not a button as per the <a href="https://i.stack.imgur.com/iwWMF.png" rel="nofollow noreferrer">DOM Snapshot</a>, but the rest are. However, instead of relying on the buttons, I rely on the <code>li</code> elements (which are common to all) and check whether they are enabled or not.</li>
<li>If you want to find out whether all 4 elements are enabled, you must use <code>find_elements</code> instead of <code>find_element</code> and then loop through the elements to check each one.</li>
</ol>
<p>Please check if this works for you:</p>
<pre><code>driver.get('https://opensea.io/') #go to the opensea main page.
WebDriverWait(driver, 3).until(EC.element_to_be_clickable((By.XPATH, "//*[@title='Wallet']"))) #wait for the wallet button to be enabled for clicking
wallet_button = driver.find_element(By.XPATH, "//*[@title='Wallet']")
wallet_button.click() #click that wallet button
wallet_options = driver.find_elements(By.XPATH, "//*[@data-testid='WalletSidebar--body']//li")
for wallet in wallet_options:
if wallet.is_enabled():
print(f"{wallet.text} is enabled")
else:
print(f"f{wallet.text} is not enabled")
</code></pre>
|
python-3.x|selenium|debugging|selenium-chromedriver|isenabled
| 1 |
1,907,459 | 2,345,217 |
How can I capture and print packets from the internet on Windows?
|
<p>How can I capture them?
Is there any module/lib to do it?</p>
<p>Please if it do, post an example</p>
|
<p>If you can install <a href="http://www.wireshark.org/" rel="nofollow noreferrer">Wireshark</a>, you can <a href="http://wiki.wireshark.org/Python" rel="nofollow noreferrer">use it programaticaly from Python</a>. (This isn't <em>yet</em> supported on Windows, as per <a href="https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=3500" rel="nofollow noreferrer">bug 3500</a>.)</p>
<hr>
<p>You also have <a href="http://pycap.sourceforge.net/" rel="nofollow noreferrer">PyCap</a>, a Python Packet Capture and Injection Library that seems to be platform independent.</p>
<hr>
<p>Yet another packet sniffing module is <a href="http://trac.secdev.org/scapy/wiki/WindowsInstallationGuide" rel="nofollow noreferrer">Scapy</a>, which I thought didn't work on Windows, but I was fortunately mistaken.</p>
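<p>For instance, with Scapy a capture-and-print can be as short as this sketch (assuming administrator rights and WinPcap installed):</p>
<pre><code>from scapy.all import sniff

# print a one-line summary of each of the next 10 packets
sniff(count=10, prn=lambda pkt: pkt.summary())
</code></pre>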
|
python|windows|packets
| 1 |
1,907,460 | 2,363,483 |
Python: slicing a very large binary file
|
<p>Say I have a binary file of 12GB and I want to slice 8GB out of the middle of it. I know the position indices I want to cut between.</p>
<p>How do I do this? Obviously 12GB won't fit into memory, that's fine, but 8GB won't either... Which I thought was fine, but it appears binary doesn't seem to like it if you do it in chunks! I was appending 10MB at a time to a new binary file and there are discontinuities on the edges of each 10MB chunk in the new file.</p>
<p>Is there a Pythonic way of doing this easily? </p>
|
<p>Here's a quick example. Adapt as needed:</p>
<pre><code>def copypart(src,dest,start,length,bufsize=1024*1024):
with open(src,'rb') as f1:
f1.seek(start)
with open(dest,'wb') as f2:
while length:
chunk = min(bufsize,length)
data = f1.read(chunk)
f2.write(data)
length -= chunk
if __name__ == '__main__':
GIG = 2**30
copypart('test.bin','test2.bin',1*GIG,8*GIG)
</code></pre>
|
python|binary|large-files
| 8 |
1,907,461 | 2,846,416 |
how to re-invoke python script within itself
|
<p>I am trying to find the best way of re-invoking a Python script within itself. Currently it is working like <a href="http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L285" rel="nofollow noreferrer">http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L285</a>. The <code>START_CTX</code> is created at <a href="http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L82-86" rel="nofollow noreferrer">http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L82-86</a>.</p>
<p>The code is relying on <code>sys.argv[0]</code> as the "caller". However, this fails in cases where it is invoked with:</p>
<pre><code>python script.py ...
</code></pre>
<p>This case does work:</p>
<pre><code>python ./script.py ...
</code></pre>
<p>because the code uses <code>os.chdir</code> before running <code>os.execlp</code>.</p>
<p>I did notice <code>os.environ["_"]</code>, but I am not sure how reliable that would be. Another possible case is to check if <code>sys.argv[0]</code> is not on <code>PATH</code> and is not executable and use <code>sys.executable</code> when calling <code>os.execlp</code>.</p>
<p>Any thoughts on a better approach solving this issue?</p>
|
<p>I think the real issue here is that the gunicorn/arbiter.py code wants to execute the Python script with the exact same environment every time. This is important because the Python script being invoked is an unknown and it is important for it to be called <strong>exactly</strong> the same way every time.</p>
<p>My feeling is that the problem you are experiencing has to do with the environment having changed between invocations of the Python script by the arbiter.</p>
<ol>
<li><p>In <a href="http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L85-89" rel="nofollow noreferrer">http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L85-89</a>, we see that the python executable and the args are being stored by the arbiter into self.START_CTX. </p></li>
<li><p>Then in <a href="http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L303-305" rel="nofollow noreferrer">http://github.com/benoitc/gunicorn/blob/master/gunicorn/arbiter.py#L303-305</a>, we see that the execvpe is called with the sys.executable, the modified args and then os.environ.</p></li>
</ol>
<p>If os.environ had changed somewhere else (e.g. the PWD variable), then your executable will fail to be called properly (because you're no longer in the correct folder). The arbiter seems to take care of that possibility by storing the cwd in START_CTX. So the question remains: why is the invocation failing for you?</p>
<p>I tried some test code out which I wrote as follows:</p>
<pre><code>#!/usr/bin/env python
import sys
import os
def main():
"""Execute twice"""
cwd = os.getcwd()
print cwd
print sys.argv
if os.path.exists("/tmp/started.txt"):
os.unlink("/tmp/started.txt")
print "Deleted /tmp/started.txt"
print
return
args = [sys.executable] + sys.argv[:]
os.system("touch /tmp/started.txt")
print "Created /tmp/started.txt"
print
os.execvpe(sys.executable, args, os.environ)
if __name__ == '__main__':
main()
</code></pre>
<p>When I execute this code from the command line, it works just fine:</p>
<pre><code>guest@desktop:~/Python/Test$ python selfreferential.py
/Users/guest/Python/Test
['selfreferential.py']
Created /tmp/started.txt
/Users/guest/Python/Test
['selfreferential.py']
Deleted /tmp/started.txt
guest@desktop:~/Python/Test$ python ./selfreferential.py
/Users/guest/Python/Test
['./selfreferential.py']
Created /tmp/started.txt
/Users/guest/Python/Test
['./selfreferential.py']
Deleted /tmp/started.txt
guest@desktop:~/Python/Test$ cd
guest@desktop:~$ python Python/Test/selfreferential.py
/Users/guest
['Python/Test/selfreferential.py']
Created /tmp/started.txt
/Users/guest
['Python/Test/selfreferential.py']
Deleted /tmp/started.txt
guest@desktop:~$ python /Users/guest/Python/Test/selfreferential.py
/Users/guest
['/Users/guest/Python/Test/selfreferential.py']
Created /tmp/started.txt
/Users/guest
['/Users/guest/Python/Test/selfreferential.py']
Deleted /tmp/started.txt
guest@desktop:~$
</code></pre>
<p>As you can see, there was no problem doing what gunicorn was doing. So, maybe your problem has something to do with the environment variable. Or maybe it has something to do with the way your operating system executes things.</p>
|
python
| 1 |
1,907,462 | 5,614,741 |
Can't use a list of methods in a Python class, it breaks deepcopy. Workaround?
|
<p>I'm trying to learn more about Python by implementing a k-Nearest Neighbor classifier. KNN works by labeling the new data based on what existing data its most similar to. So for a given table of data, you try to determine the 3 most similar points (if k = 3) and pick whatever label is more frequent. There's different ways you determine "similarity", some kind of distance function. So you can implement various distance functions (cosine distance, manhattan, euclidean, etc) and pick whichever you want. </p>
<p>I'm trying to make something that lets me swap distance functions in and out easily without doing cases, and the solution I have so far is to just store a list of references to methods. This is great, but it breaks on deepcopy and I want to figure out how to either fix my implementation or come up with a compromise between not needing to do cases and getting deepcopy to work.</p>
<p>Here's my pared-down class:</p>
<pre><code>class DataTable:
def __init__(self, filename,TrueSymbol,FalseSymbol):
self.t = self.parseCSV(filename)
self.TrueSymbol = TrueSymbol
self.FalseSymbol = FalseSymbol
# This is the problem line of code
self.distList = [self.euclideanDistance,
self.manhattanDistance,
self.cosineDistance]
def nearestNeighbors(self,entry,k=None,distanceMetric=None):
"""
distanceMetrics you can choose from:
0 = euclideanDistance
1 = manhattanDistance
2 = cosineDistance
"""
if distanceMetric == None:
distanceFunction = self.euclideanDistance
else:
self.distList[distanceMetric]
# etc..
def euclideanDistance(self,entry):
pass
def manhattanDistance(self,entry):
pass
def cosineDistance(self,entry):
pass
# open up that csv
def parseCSV(self,filename):
pass
</code></pre>
<p>And here's the code that calls it:</p>
<pre><code>import deepcopytestDS
import copy

data = deepcopytestDS.DataTable("ionosphere.data","g","b")
deepCopy = copy.deepcopy(data) # crash.
</code></pre>
<p>Here's the callstack</p>
<pre><code>>>> deepCopy = copy.deepcopy(data)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.6/copy.py", line 162, in deepcopy
y = copier(x, memo)
File "/usr/lib/python2.6/copy.py", line 292, in _deepcopy_inst
state = deepcopy(state, memo)
File "/usr/lib/python2.6/copy.py", line 162, in deepcopy
y = copier(x, memo)
File "/usr/lib/python2.6/copy.py", line 255, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib/python2.6/copy.py", line 162, in deepcopy
y = copier(x, memo)
File "/usr/lib/python2.6/copy.py", line 228, in _deepcopy_list
y.append(deepcopy(a, memo))
File "/usr/lib/python2.6/copy.py", line 189, in deepcopy
y = _reconstruct(x, rv, 1, memo)
File "/usr/lib/python2.6/copy.py", line 323, in _reconstruct
y = callable(*args)
File "/usr/lib/python2.6/copy_reg.py", line 93, in __newobj__
return cls.__new__(cls, *args)
TypeError: instancemethod expected at least 2 arguments, got 0
</code></pre>
<p>What does this crash mean, and is there some way to make deepcopy work without getting rid of my shortcut for swapping distance functions?</p>
|
<p>It was a bug: <a href="http://bugs.python.org/issue1515" rel="nofollow">http://bugs.python.org/issue1515</a></p>
<p>You can put this at the top of your file to make it work:</p>
<pre><code>import copy
import types
def _deepcopy_method(x, memo):
return type(x)(x.im_func, copy.deepcopy(x.im_self, memo), x.im_class)
copy._deepcopy_dispatch[types.MethodType] = _deepcopy_method
</code></pre>
|
python|design-patterns|deep-copy
| 4 |
1,907,463 | 67,624,180 |
how to include "else" statement in "for" loop with "if" statement?
|
<p>I know how to make if statement inside for loop, but I don't know how to also add "else" there.</p>
<p>For example:</p>
<pre><code>check_state = 1
for v in (v for v in range(0,10 +1, 1) if check_state == 1):
print v
</code></pre>
<p>Output: It will print from 0 to 10</p>
<p>And I want to add "else" statement there, something like this:</p>
<pre><code>check_state = 0
for v in (v for v in range(0,10 +1, 1) if check_state == 1, else v for v in range(1)):
print v
</code></pre>
<p>Hoping for this output: prints 0</p>
<p>I don't know how to put it in correct syntax. Can somebody help?</p>
<p>Thank you!</p>
|
<p>I think you want to use the if expression to pick between two range calls:</p>
<pre><code>for v in (range(0,10 +1, 1) if check_state == 1 else range(1)):
</code></pre>
|
python
| 6 |
1,907,464 | 67,921,522 |
Write struct columns to parquet with pyarrow
|
<p>I have the following dataframe and schema:</p>
<pre><code>df = pd.DataFrame([[1,2,3], [4,5,6], [7,8,9]], columns=['a', 'b', 'c'])
SCHEMA = pa.schema([("a_and_b", pa.struct([('a', pa.int64()), ('b', pa.int64())])), ('c', pa.int64())])
</code></pre>
<p>Then I want to create a pyarrow table from df and save it to parquet with this schema. However, I could not find a way to create a proper type in pandas that would correspond to a struct type in pyarrow. Is there a way to do this?</p>
|
<p>For <code>pa.struct</code> conversion from pandas you can use tuples (e.g. <code>[(1, 4), (2, 5), (3, 6)]</code>):</p>
<pre><code>df_with_tuples = pd.DataFrame({
"a_and_b": zip(df["a"], df["b"]),
"c": df["c"]
})
pa.Table.from_pandas(df_with_tuples, SCHEMA)
</code></pre>
<p>or dict <code>[{'a': 1, 'b': 2}, {'a': 4, 'b': 5}, {'a': 7, 'b': 8}]</code>:</p>
<pre><code>df_with_dict = pd.DataFrame({
"a_and_b": df.apply(lambda x: {"a": x["a"], "b": x["b"] }, axis=1),
"c": df["c"]
})
pa.Table.from_pandas(df_with_dict , SCHEMA)
</code></pre>
<p>When converting back from arrow to pandas, structs are represented as dicts:</p>
<pre><code>pa.Table.from_pandas(df_with_dict , SCHEMA).to_pandas()['a_and_b']
| a_and_b |
|:-----------------|
| {'a': 1, 'b': 2} |
| {'a': 4, 'b': 5} |
| {'a': 7, 'b': 8} |
</code></pre>
|
python|pandas|dataframe|parquet|pyarrow
| 2 |
1,907,465 | 67,678,357 |
How can I plot figures from pandas series in different windows?
|
<p>I am new in python and I create pandas series in a for-loop. Each time I want to plot the series in a figure. I use <code>ax = series.plot(title = str(i)+'.jpg')</code> but all figures are plotted in same window. How can I plot them in different windows?</p>
|
<p>If you are using matplotlib, use</p>
<pre><code>plt.figure()
</code></pre>
<p>for every new figure you want to plot.</p>
<p>You then show all figures with</p>
<pre><code>plt.show()
</code></pre>
<p>also, you should check out what <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.subplots.html" rel="nofollow noreferrer">subplots</a> does.</p>
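<p>Put together for your loop, a minimal sketch (<code>series_list</code> stands in for however you build the series):</p>
<pre><code>import matplotlib.pyplot as plt

for i, series in enumerate(series_list):
    plt.figure()                        # open a new figure (window) for each series
    series.plot(title=str(i) + '.jpg')

plt.show()                              # display all figures
</code></pre>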
|
python|pandas
| 0 |
1,907,466 | 30,684,679 |
Can matplotlib produce a log bar-chart with X==0 as a valid data-point?
|
<p>I'd like to create a <strong>bar-chart</strong>, where the <em>X axis would include hundreds of thousands of data points.</em></p>
<p>Thus, I need to employ the <strong>logarithmic scale</strong>. Alas, <code>X == 0</code> is a valid data-point.<br>
BTW, the Y axis should employ the linear scale (where y are distributions, <code>0 < Y <= 1</code>).</p>
<p>Following is <strong>minimal</strong> <em>demonstration code</em>:</p>
<pre><code>$ cat stack_example.py
#!/usr/bin/env python
def test_plot3():
import pylab as pl
_graph = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
epsilon = 0.00000000001
x = [ pl.log(k) if k > 0 else pl.log(epsilon) for k in _graph ]
y = [ _graph[k] for k in _graph ]
lx = pl.xlabel("in degree (logarithmic scale)")
ly = pl.ylabel("normalized distribution (0 to 1)")
tl = pl.title("graph in-degree normalized distribution")
_width = 1.0 / (len(x) * 5.0)
pl.bar(x, y, width=_width, log=True)
pl.xscale('log')
pl.yscale('linear')
pl.show()
if __name__ == "__main__":
test_plot3()
</code></pre>
<p>Which produced the following <em>invalid</em> graph (the large blue rectangle on the left seems to be a bug):</p>
<p><img src="https://i.stack.imgur.com/U0JFP.png" alt="semi-log bar chart"></p>
<p>Can you suggest a way to produce a <em>correct bar-chart plot</em> from Python, that will employ the <em>logarithmic scale</em> on the <code>X axis,</code> and the <em>Linear scale</em> on the <code>Y axis,</code> and will accept <code>0</code> as a valid <code>x</code> point?</p>
<h2>EDIT 1</h2>
<p>Based on @Ed's comment, I amended my code to:</p>
<pre><code>#!/usr/bin/env python
def test_plot3():
import pylab as pl
_graph = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
epsilon = 0.1
x = [ pl.log(k) if k > 0 else pl.log(epsilon) for k in _graph ]
y = [ _graph[k] for k in _graph ]
lx = pl.xlabel("in degree (logarithmic scale)")
ly = pl.ylabel("normalized distribution (0 to 1)")
tl = pl.title("graph in-degree normalized distribution")
_width = 1.0 / (len(x) * 5.0)
pl.bar(x, y, width=_width, color="blue", log=True)
pl.xscale('symlog', linthreshx=2)
pl.yscale('linear')
pl.show()
if __name__ == "__main__":
test_plot3()
if __name__ == "__main__":
test_plot3()
</code></pre>
<p>but the resulting graph still doesn't seem right:</p>
<p><img src="https://i.stack.imgur.com/Yw8TW.png" alt="amended graph"></p>
|
<p>You can use <code>symlog</code> instead of log, which includes negaive numbers and a small linear region near zero. For your example,</p>
<pre><code>#!/usr/bin/env python
def test_plot3():
import pylab as pl
_graph = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
epsilon = 0.00000000001
x = [ pl.log(k) if k > 0 else pl.log(epsilon) for k in _graph ]
y = [ _graph[k] for k in _graph ]
lx = pl.xlabel("in degree (logarithmic scale)")
ly = pl.ylabel("normalized distribution (0 to 1)")
tl = pl.title("graph in-degree normalized distribution")
_width = 1.0 / (len(x) * 5.0)
pl.bar(x, y, width=_width, log=True)
pl.xscale('symlog')
pl.yscale('linear')
pl.show()
if __name__ == "__main__":
test_plot3()
</code></pre>
<p>You can tune the size of the linear region with <code>linthreshx</code> argument to <code>xscale</code>.
Check out this <a href="https://stackoverflow.com/questions/3305865/what-is-the-difference-between-log-and-symlog/3513150#3513150">question</a> for details on how to use it. </p>
|
python|matplotlib|plot|bar-chart|logarithm
| 4 |
1,907,467 | 67,180,063 |
if an item in a list doesn't match a column name in a data frame, produce exception statement
|
<p>I have the following code which creates a list, takes inputs of column names the user wants, then a for loop applies each list attribute individually to check in the if statement if the user input matches the columns in the data frame.</p>
<p>Currently this produces an exception handling statement if all inputs to the list are unmatching, but if item in the list matches the column in the dataframe but others do not, then jupyter will produce its own error message "KeyError: "['testcolumnname'] not in index", because it is trying to move onto the else part of my statement and create the new dataframe with this but it cant (because those columns do not exist)</p>
<p>I want it to be able to produce this error message 'Attribute does not exist in Dataframe. Make sure you have entered arguments correctly.' if even 1 inputted list attribute does not match the dataframe and all other do. But Ive been struggling to get it to do that, and it produces this KeyError instead.</p>
<p>My code:</p>
<pre><code> lst = []
lst = [item for item in str(input("Enter your attributes here: ")).lower().split()]
for i in lst:
if i not in df.columns:
print('Attribute does not exist in Dataframe. Make sure you have entered arguments correctly.')
break
else:
df_new = df[lst]
# do other stuff
</code></pre>
<p>for example if i have a dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>NA</td>
<td>yes</td>
<td>yes</td>
</tr>
<tr>
<td>yes</td>
<td>no</td>
<td>yes</td>
</tr>
</tbody>
</table>
</div>
<p>and my list contains:</p>
<pre><code>['A','B','C']
</code></pre>
<p>It works correctly, and follows the else statement, because all list items match the dataframes columns so it has no problem.</p>
<p>Or if it has this:</p>
<pre><code>['x','y','z']
</code></pre>
<p>It will give the error message I have, correctly. Because no items match the data frames items so it doesn't continue.</p>
<p>But if it is like this, where one attribute is matching the dataframe, and others not...</p>
<pre><code>['A','D','EE']
</code></pre>
<p>it gives the jupyter KeyError message but I want it to bring back the print message i created ('Attribute does not exist in Dataframe. Make sure you have entered arguments correctly.').</p>
<p>The KeyError appears on the line of my else statement: 'df_new = df[lst]'</p>
<p>Can anyone spot an issue i have here that will stop it from going this? Thank you all</p>
|
<p>Try raising an exception instead of printing. You also need to fix your indentation:</p>
<pre class="lang-py prettyprint-override"><code>lst = []
lst = [item for item in str(input("Enter your attributes here: ")).lower().split()]
for i in lst:
if i not in df.columns:
raise ValueError('Attribute does not exist in Dataframe. Make sure you have entered arguments correctly.')
df_new = df[lst]
# do other stuff
</code></pre>
|
python|dataframe|for-loop|if-statement
| 1 |
1,907,468 | 67,164,170 |
Artifact in matplotlib.pyplot.imshow
|
<p>I'm trying to make a colorplot of a function with matplotlob.pyplot.imshow. However, depending on the size of the plot, I get a vertical line as an artifact.</p>
<p>The code to generate the plot is:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
from matplotlib import cm
def double_vortex(X,Y):
return np.angle((X + 25)+1j*Y) - np.angle((X - 25)+1j*Y)
X = np.arange(-50,50)
Y = np.arange(-50,50)
X, Y = np.meshgrid(X, Y)
phi0_vortex = double_vortex(X,Y)
fig = plt.figure(figsize=(16,8))
gs = gridspec.GridSpec(1, 3, width_ratios=[2.5, 1.5,1])
for i in range(3):
ax = plt.subplot(gs[i])
ax.imshow(phi0_vortex % (2*np.pi), cmap=cm.hsv, vmin=0, vmax=2*np.pi)
</code></pre>
<p>The resulting plot is this:
<img src="https://i.stack.imgur.com/kbOB1.png" alt="enter image description here" /></p>
<p>You can see that the two smaller plots exhibit a vertical line as an artefact. Is this a bug in matplotlib or somehow actually to be expected?</p>
|
<p>This is a consequence of matplotlib's downsampling algorithm, which happens in data space: in your case a pair of pixels that has [359, 1] in them gets averaged to 180, and you get the cyan line. This is <a href="https://github.com/matplotlib/matplotlib/issues/18735" rel="nofollow noreferrer">https://github.com/matplotlib/matplotlib/issues/18735</a>, for which we are working on a solution to allow RGB-space downsampling (as well).</p>
<p>What can you do about this until that is improved in Matplotlib? Don't downsample in Matplotlib is the simple answer - make a big png, and then resample in post-processing software like imagemagick.</p>
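<p>A sketch of that route, reusing the <code>fig</code> from your snippet (the dpi value is arbitrary):</p>
<pre><code>fig.savefig("vortex_full.png", dpi=300)   # big PNG, enough pixels that imshow does not downsample
# then shrink it externally, e.g.: convert vortex_full.png -resize 50% vortex_small.png
</code></pre>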
|
python|matplotlib
| 2 |
1,907,469 | 66,740,400 |
How to reuse code from parent class efficiently? (python)
|
<p>Is there any nice way to reuse code from parent class other than just copy paste all code from parent class and replace some code if I want to change few lines of code are in the middle of the code?</p>
<ol>
<li>I can't change code in parent class directly.</li>
<li>Most of codes are same but only few lines of code are different.</li>
<li>Real code is hundreds lines of code and only few lines of code is different between parent and child class.</li>
</ol>
<p>parent class:</p>
<pre><code>class Parent():
def example(self):
doSomething1()
doSomething2()
...
doSomething100()
doSomething101()
...
doSomething200()
return
</code></pre>
<p>child class:</p>
<pre><code>class Child(Parent):
def example(self):
doSomething1()
doSomething2()
...
doSomething100()
doSomethingNew() #add a new line
#doSomething101() #delete a line
doSomethingReplace() #replace a line
...
doSomething200()
return
</code></pre>
|
<p>You can invoke the parent version of an overloaded method from the child using the <code>super()</code> syntax then perform additional operations on the returned value. That is not exactly the same thing as what you are asking, but I believe that is the best you can do as a practical matter.</p>
<p>Example:</p>
<pre><code>class Parent:
def perform_task(self, x):
x += 10
return x
class Child(Parent):
def perform_task(self, x):
x = super().perform_task(x)
x *= 2
return x
c = Child()
print(c.perform_task(5))
</code></pre>
<p>Output:</p>
<pre><code>30
</code></pre>
|
python|python-3.x|oop|software-design
| 0 |
1,907,470 | 65,906,695 |
How do I test Python with test files and expected output files
|
<p>I have a Python script.<br />
A particular method takes a text file and may creates0-3 files.<br />
I have sample text files, and the expected output files for each.</p>
<p>How would I set up this script for testing?<br />
I use unittest for testing functions that do not have file i/o currently.</p>
<ul>
<li>SampleInputFile1 -> expect 0 files generated.</li>
<li>SampleInputFile2 -> expect 1 files generated with specific output.</li>
<li>SampleInputFile3 -> expect 3 files generated, each with specific output.</li>
</ul>
<p>I want to ensure all three sample files, expected files are generated with expected content. Testing script question</p>
|
<p>Maybe I don't understand the question, but you use a program to divide a text into different subtexts?</p>
<p>Why don't you use train_test_split from sklearn to get the test and training files:</p>
<p>sklearn.model_selection.train_test_split(*arrays, test_size=None, train_size=None, random_state=None, shuffle=True, stratify=None)</p>
|
python|testing|io
| 0 |
1,907,471 | 65,647,941 |
"Invert" Axis on Matplotlib / Seaborn
|
<p>Good evening/morning/evening!
I am only a leisure programmer, so I apologise if this question has been answered on here before under a different title; I didn't know what to search for.</p>
<p>If you see below I have plotted two graphs, one a 2d and the other a 3d using matplotlib. <a href="https://i.stack.imgur.com/NP179.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NP179.png" alt="enter image description here" /></a></p>
<p>My issue is that I wish for (0,0) to be in the bottom left corner and a step to the right to be +1 and a step upwards to be -1. Instead of having x increase and y decrease. If it is needed I will post the entire code for these plots but they have both been done conventionally with seaborn.heatmap(z) and ax.plot_surface(x,y,z).</p>
<p>Also I am using the following line I found on here: ax = fig.add_subplot(2, 1, 1)
Could someone please explain the parameters of this function to me I am struggling to understand what they mean.</p>
<p>Any help is greatly appreciated and again I apologise if this has been posted before :)</p>
|
<p>In matplotlib:</p>
<p>If you want to invert the x-axis:</p>
<pre><code>ax.invert_xaxis()
</code></pre>
<p>If you want to invert the y-axis:</p>
<pre><code>ax.invert_yaxis()
</code></pre>
<p>If you want to invert the z-axis:</p>
<pre><code>ax.invert_zaxis()
</code></pre>
<p>I'm pretty sure that these functions will work in seaborn as well, since it is built on top of matplotlib!</p>
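<p>Not sure what your exact plotting code looks like, but here is a minimal, self-contained sketch (the plotted data is just a placeholder) that also shows what the <code>add_subplot</code> arguments mean:</p>
<pre><code>import matplotlib.pyplot as plt

fig = plt.figure()
# add_subplot(2, 1, 1) means: a grid with 2 rows and 1 column,
# and this axes goes in cell number 1 (counted left-to-right, top-to-bottom).
ax = fig.add_subplot(2, 1, 1)
ax.plot([0, 1, 2], [0, 1, 4])
ax.invert_yaxis()   # flip the y-axis direction
plt.show()
</code></pre>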
|
python|matplotlib|seaborn
| 3 |
1,907,472 | 50,987,691 |
Linear search the given array to give required element index
|
<p>I have coded for linear search in Python3 but i am not getting required output.Below is the Problem:</p>
<blockquote>
<p>You have been given an array of size N consisting of integers. In addition you have been given an element M you need to find and </p>
<p>print the index of the last occurrence of this element M in the array if it exists in it, otherwise print -1. Consider this array to be 1 indexed.</p>
<p>Input Format: The first line consists of 2 integers N and M denoting the size of the array and the element to be searched for in the array respectively . </p>
<p>The next line contains N space separated integers denoting the elements of the array.
Output Format Print a single integer denoting the index of the last occurrence of integer M in the array if it exists, otherwise print -1.</p>
<p>SAMPLE INPUT<br>
5 1 </p>
<p>1 2 3 4 1</p>
<p>SAMPLE OUTPUT<br>
5</p>
</blockquote>
<pre><code>arr_len , num = input("Enter length & no to be search: ").split()
#num = int(input("Enter number to search: "))
list_of_elements = list(map(int, input("Enter array to search: ").split()))
found = False
for i in range(len(list_of_elements)):
temp = list_of_elements[i]
if(temp == num):
print('--IF cond working--')
found = True
print("%d found at %dth position"%(num,i+1))
break
if(found == False):
print("-1")
</code></pre>
<p>Check here for my code (<a href="https://ide.geeksforgeeks.org/FSYpglmfnz" rel="nofollow noreferrer">https://ide.geeksforgeeks.org/FSYpglmfnz</a>)</p>
<p>I didn't understand why <strong>if</strong> condition is not working inside <strong>for</strong> loop</p>
|
<p>To find the LAST position, you may search BACKWARDS and stop at the FIRST hit. (As to why your <strong>if</strong> never fires: <code>num</code> comes from <code>input().split()</code> and is therefore a string, so <code>temp == num</code> compares an <code>int</code> to a <code>str</code> and never matches; convert it with <code>int(num)</code> first.)</p>
<pre><code>arr_len, num = 6, 1 # Test Data
list_of_elements = [1, 2, 3, 4, 1, 6] # Test Data
pos = -1 # initial pos (not found)
for i in range(arr_len, 0, -1): # 6,5,4,3,2,1
temp = list_of_elements[i-1] # adjust for 0-based index
if(temp == num):
pos = i # Store position where num is found
break
print(pos)
</code></pre>
|
python|arrays|python-3.x|algorithm|linear-search
| 2 |
1,907,473 | 50,711,558 |
How to iterate character in csv reader python
|
<p>I have a number of text files (.txt), the first one is named 1_1_A_A and the last one is named 10_10_B_C. The first element in the names goes from 1 to 10, the second also goes from 1 to 10, the third can be A or B and the fourth can be A, B or C. It makes a total of 600 instances. I want python to read them with CSV reader. For the first two elements, I use <code>%s</code> in two loops and it works properly. But what should I do to iterate the characters in third and fourth place?</p>
<p>The code is something like this, iterating the firs two elements:</p>
<pre><code>for i in range (len(JobSize)):
for j in range(len(Iteration)):
with open('%s_%s_A_A.txt' % (JobSize[i], Iteration[j]), 'rt') as Data:
reader = csv.reader(Data, delimiter='\t')
</code></pre>
|
<p>You can iterate over any iterable the same way. It doesn't have to be a <code>range</code>; <code>for i in range(3):</code> does the same thing as <code>for i in [1, 2, 3]:</code>. And the values don't have to be ints—you can do <code>for i in ['A', 'B', 'C']</code>. Or, even more simply, a string is itself an iterable of characters, so <code>for i in 'ABC':</code>.</p>
<p>And, while we're at it, this means you can iterate over the lists <code>JobSize</code> and <code>Iteration</code> directly. Instead of using <code>for i in range(len(JobSize)):</code> and then <code>JobSize[i]</code>, just do <code>for i in JobSize:</code> and use <code>i</code> directly.</p>
<p>So:</p>
<pre><code>for i in JobSize:
for j in Iteration:
for k in 'AB':
for l in 'ABC':
with open('%s_%s_%s_%s.txt' % (i, j, k, l), 'rt') as Data:
</code></pre>
<hr>
<p>However, four levels of nesting is pretty ugly, and pushes your code off the right edge of the screen. You'll probably find this nicer with <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow noreferrer"><code>product</code></a>:</p>
<blockquote>
<p>Roughly equivalent to nested for-loops… The nested loops cycle like an odometer with the rightmost element advancing on every iteration.</p>
</blockquote>
<p>In other words, this does exactly the same thing as the above:</p>
<pre><code>import itertools
for i, j, k, l in itertools.product(JobSize, Iteration, 'AB', 'ABC'):
with open('%s_%s_%s_%s.txt' % (i, j, k, l), 'rt') as Data:
</code></pre>
<hr>
<p>Or, even more simply: instead of using tuple unpacking to get separate <code>i, j, k, l</code> variables, just keep it as a single <code>ijkl</code> and pass it to <a href="https://docs.python.org/3/library/stdtypes.html#str.join" rel="nofollow noreferrer"><code>join</code></a>:</p>
<pre><code>for ijkl in itertools.product(JobSize, Iteration, 'AB', 'ABC'):
with open('_'.join(ijkl), 'rt') as Data:
</code></pre>
<hr>
<p>If the elements of <code>JobSize</code> and <code>Iteration</code> aren't strings, that last version won't work—but if you <a href="https://docs.python.org/3/library/functions.html#map" rel="nofollow noreferrer"><code>map</code></a> them all to strings, it works just as well as <code>%s</code>:</p>
<pre><code>for ijkl in itertools.product(JobSize, Iteration, 'AB', 'ABC'):
with open('_'.join(map(str, ijkl)), 'rt') as Data:
</code></pre>
<hr>
<p>Of course you probably want to come up with a better name than <code>ijkl</code>. Maybe <code>name_components</code>?</p>
|
python|loops|csv|character
| 2 |
1,907,474 | 50,487,655 |
django 1.11 no changes detected
|
<p>Hello, I am stuck on a point with Django migrations. I have created a model class in the "connection" app's model.py, but my Ubuntu 14.04 terminal still says "no changes detected" while running "python manage.py makemigrations". I have also specified the app name in the above command, but it didn't work. Please help me out. Thanks.
Here are my migrations:</p>
<pre><code>admin
[X] 0001_initial
[X] 0002_logentry_remove_auto_add
analytics
(no migrations)
auth
[X] 0001_initial
[X] 0002_alter_permission_name_max_length
[X] 0003_alter_user_email_max_length
[X] 0004_alter_user_username_opts
[X] 0005_alter_user_last_login_null
[X] 0006_require_contenttypes_0002
[X] 0007_alter_validators_add_error_messages
[X] 0008_alter_user_username_max_length
connection
[X] 0001_initial
contenttypes
[X] 0001_initial
[X] 0002_remove_content_type_name
reports
(no migrations)
sessions
[X] 0001_initial
</code></pre>
<p>is there anything else is required please ask in comment.</p>
<p>And here is the model which I want to add to the db via the model.py file of the connection app.</p>
<pre><code>class BModel(models.Model):
user = models.ForeignKey(User, related_name="reports_user")
dataset = models.ForeignKey(DatasetRecord, related_name="dataset_record")
name = models.CharField(max_length=100)
created_date = models.DateTimeField(default=timezone.now, blank=True, null=True)
updated_date = models.DateTimeField(default=timezone.now)
def __unicode__(self):
return '%s' % self.id
class Meta:
app_label = 'daas'
</code></pre>
<p>And here is the model.py file code:</p>
<pre><code># -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.utils import timezone
from django.db import models
from daas.models import *
# Create your models here.
#on_delete=models.DO_NOTHING,
class Dbconnection(models.Model):
user = models.ForeignKey(User, related_name="db_user")
hostname = models.CharField(max_length=50)
port = models.CharField(max_length=10,default=3306)
dbname = models.CharField(max_length=50)
dbuser = models.CharField(max_length=50)
dbpassword = models.CharField(max_length=50)
connection_type = models.CharField(max_length=50)
connection_name = models.CharField(max_length=50)
def __unicode__(self):
return '%s' % self.id
class Meta:
app_label = 'daas'
class DatasetRecord(models.Model):
user = models.ForeignKey(User, related_name="dataset_user")
dataset = models.CharField(max_length=50)
sqlquery = models.TextField()
connection = models.ForeignKey(Dbconnection, related_name="db_connection")
created_date = models.DateTimeField(default=timezone.now, blank=True, null=True)
updated_date = models.DateTimeField(default=timezone.now)
def __unicode__(self):
return '%s' % self.id
class Meta:
app_label = 'daas'
class Reports(models.Model):
user = models.ForeignKey(User, related_name="reports_user")
dataset = models.ForeignKey(DatasetRecord, related_name="dataset_record")
row_dimension = models.TextField()
col_metrics = models.TextField()
report_type = models.CharField(max_length=50)
status = models.CharField(max_length=50)
report_name = models.CharField(max_length=100)
custom_data = models.TextField()
query_json = models.TextField()
created_date = models.DateTimeField(default=timezone.now, blank=True, null=True)
updated_date = models.DateTimeField(default=timezone.now)
def __unicode__(self):
return '%s' % self.id
class Meta:
app_label = 'daas'
class CustomDashboard(models.Model):
user = models.ForeignKey(User, related_name="daas_userdashboard")
dashboard_name = models.TextField()
created_date = models.DateTimeField(default=timezone.now, blank=True, null=True)
updated_date = models.DateTimeField(default=timezone.now)
def __unicode__(self):
return '%s' % self.id
class Meta:
app_label = 'daas'
class CustomDashboardReport(models.Model):
customdashboard = models.ForeignKey(CustomDashboard, related_name="daas_customdashboardreport")
report = models.ForeignKey(Reports, related_name="Reports_daas_customdashboardreport")
report_title = models.TextField()
custom_data = models.TextField()
created_date = models.DateTimeField(default=timezone.now, blank=True, null=True)
updated_date = models.DateTimeField(default=timezone.now)
class Meta:
app_label = 'daas'
class Dashboard_Drill_Report(models.Model):
user = models.ForeignKey(User, related_name="daas_dashboarddrillreport")
customdashboard = models.ForeignKey(CustomDashboard, related_name="daas_dashboarddrillreport")
parent_report = models.ForeignKey(Reports, related_name="parentreports_daas_dashboarddrillreport")
child_report = models.ForeignKey(Reports, related_name="childreports_daas_dashboarddrillreport")
canvas_id = models.TextField()
created_date = models.DateTimeField(default=timezone.now, blank=True, null=True)
updated_date = models.DateTimeField(default=timezone.now)
class Meta:
app_label = 'daas'
class SalesMetadata(models.Model):
id = models.AutoField(primary_key=True)
column_name = models.CharField(max_length=255)
data_type = models.CharField(max_length=255, blank=True, null=True)
identifier = models.CharField(max_length=45, blank=True, null=True)
definition = models.CharField(max_length=255, blank=True, null=True)
aggregation = models.CharField(max_length=50, blank=True, null=True)
table_name = models.CharField(max_length=50, blank=True, null=True)
type = models.CharField(max_length=45, blank=True, null=True)
class Meta:
managed = False
db_table = 'sales_metadata'
class SalesMetadataDemo(models.Model):
id = models.AutoField(primary_key=True)
column_name = models.CharField(max_length=255)
data_type = models.CharField(max_length=255, blank=True, null=True)
identifier = models.CharField(max_length=45, blank=True, null=True)
definition = models.CharField(max_length=255, blank=True, null=True)
aggregation = models.CharField(max_length=50, blank=True, null=True)
table_name = models.CharField(max_length=50, blank=True, null=True)
type = models.CharField(max_length=45, blank=True, null=True)
class Meta:
managed = False
db_table = 'sales_metadata_demo'
class FactOrderDemo(models.Model):
business_date = models.IntegerField(db_column='Business_Date', blank=True, null=True) # Field name made lowercase.
product = models.CharField(db_column='Product', max_length=250, blank=True, null=True) # Field name made lowercase.
brand = models.CharField(db_column='Brand', max_length=50, blank=True, null=True) # Field name made lowercase.
category = models.CharField(db_column='Category', max_length=100, blank=True, null=True)
channel = models.CharField(db_column='Channel', max_length=50, blank=True, null=True) # Field name made lowercase.
payment_method = models.CharField(db_column='Payment_Method', max_length=50, blank=True, null=True) # Field name made lowercase.
city = models.CharField(db_column='City', max_length=200, blank=True, null=True) # Field name made lowercase.
state = models.CharField(db_column='State', max_length=200, blank=True, null=True) # Field name made lowercase.
sales = models.DecimalField(db_column='Sales', max_digits=34, decimal_places=3, blank=True, null=True) # Field name made lowercase.
discount = models.DecimalField(db_column='Discount', max_digits=34, decimal_places=4, blank=True, null=True) # Field name made lowercase.
cancellations = models.DecimalField(db_column='Cancellations', max_digits=34, decimal_places=4) # Field name made lowercase.
total_orders = models.BigIntegerField(db_column='Total_Orders') # Field name made lowercase.
returns = models.CharField(db_column='Returns', max_length=200, blank=True, null=True) # Field name made lowercase.
class Meta:
managed = False
db_table = 'fact_order_demo'
#unique_together = (('business_date', 'product', 'category', 'promotion', 'channel', 'payment_method', 'city', 'state', 'country'),)
</code></pre>
<p>Please help me; I am using Python version 3.4.3 with Django version 1.11.</p>
|
<p>You need to change the app_label to the name of the app that you added to INSTALLED_APPS inside settings.py:</p>
<p>app_label = 'connection'</p>
<p>Reference : <a href="https://docs.djangoproject.com/en/2.1/ref/models/options/" rel="nofollow noreferrer">app_label django docs</a></p>
<p>And the answer to "but i am still in confusion that i everything was working file with app_label = 'daas' and created others table as well, but why this time it is not working" is maybe this "If a model is defined outside of an application in INSTALLED_APPS, it must declare which app it belongs to" not sure until i see actual apps.</p>
|
python|django|django-models
| 0 |
1,907,475 | 26,585,153 |
How to get all groups that specific user is member of - python, Active Directory
|
<p>I'm trying to set up a filter to get all groups that a specific user is a member of.
I'm using Python. Currently:</p>
<pre><code>import traceback
import ldap
try:
l = ldap.open("192.168.1.1")
.
.
.
l.simple_bind_s(username, password)
#######################################################################
f_filterStr = '(objectclass=group)' # Would like to modify this, so I'll not have to make the next loop ...
#######################################################################
# the next command take some seconds
results = l.search_s(dn_recs, ldap.SCOPE_SUBTREE, f_filterStr)
for i in results:
if dict == type(i[1]):
group_name = i[1].get('name')
if list == type(group_name):
group_name = group_name[0];
search_str = "CN=%s," % username_bare
if -1 != ("%s" % i[1].get('member')).find (search_str):
print "User belong to this group! %s" % group_name
except Exception,e :
pass # handle as you wish
</code></pre>
|
<p>I think you are making this much too hard.</p>
<p>No python expert, but you can easily query Microsoft Active Directory <a href="https://ldapwiki.com/wiki/Active%20Directory%20User%20Related%20Searches#section-Active+Directory+User+Related+Searches-AllGroupsAUserIsAMemberOfIncludingNestedGroups" rel="nofollow">for all groups a user is a member of</a> using a filter like:</p>
<pre><code>(member:1.2.840.113556.1.4.1941:=CN=UserName,CN=Users,DC=YOURDOMAIN,DC=NET)
</code></pre>
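<p>For example, a rough python-ldap sketch using that filter (the server address, bind credentials and DNs below are placeholders you would replace with your own values):</p>
<pre><code>import ldap

con = ldap.initialize('ldap://192.168.1.1')
con.simple_bind_s('bind_user@yourdomain.net', 'password')

user_dn = 'CN=UserName,CN=Users,DC=YOURDOMAIN,DC=NET'
# LDAP_MATCHING_RULE_IN_CHAIN: matches every group whose member chain contains user_dn
search_filter = '(member:1.2.840.113556.1.4.1941:=%s)' % user_dn

results = con.search_s('DC=YOURDOMAIN,DC=NET', ldap.SCOPE_SUBTREE, search_filter, ['cn'])
for dn, attrs in results:
    print(dn, attrs.get('cn'))
</code></pre>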
<p>-jim</p>
|
python-2.7|active-directory|ldap
| 4 |
1,907,476 | 26,617,865 |
Django CAS and TGT(Ticket Granting Tickets) and service ticket validation
|
<p>I'm using CAS to provide authentication for a number of secure services in my stack. The authentication front-end is implemented using Django 1.6 and the django-cas module. However, I'm reading around and I don't seem to get information on how django-cas handles Ticket Granting Tickets and also validation of service tickets.</p>
<p>Does anyone know how the aspects mentioned are handled by django-cas?</p>
|
<p>It turns out django-cas handles the TGT using Django sessions. However, for validation of the service ticket, you have to manually make a validation request including the ST (service ticket) granted after login and the service being accessed.</p>
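<p>For reference, a rough sketch of such a manual validation request against the CAS 2.0 <code>serviceValidate</code> endpoint (the CAS server URL and service URL are placeholders):</p>
<pre><code>import requests

CAS_SERVER = 'https://cas.example.com/cas'
SERVICE_URL = 'https://myapp.example.com/accounts/login/'

def validate_service_ticket(ticket):
    resp = requests.get(CAS_SERVER + '/serviceValidate',
                        params={'ticket': ticket, 'service': SERVICE_URL})
    # The XML response contains <cas:authenticationSuccess> with the username,
    # or <cas:authenticationFailure> if the ticket is invalid or expired.
    return resp.text
</code></pre>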
|
python|django|cas
| 0 |
1,907,477 | 61,300,295 |
How much time does it take for a TSNE Plot or any other plot to run?
|
<p>I am using the Amazon food reviews data set and trying to do a TSNE plot. It is taking a lot of time to run. I am using only 5000 rows of the data set, and I am trying to run it in Google Colab.</p>
<p>It has been running for the past half an hour and no output yet.</p>
<p>Does anyone know how much time it takes to run?</p>
|
<p>It's hard to say without seeing what the data looks like or what you're running on.
I understand that you're working with some big data. I encourage you to take a look at this function I found; the link below explains how it works.
Credit for this function:
<a href="https://www.mikulskibartosz.name/how-to-reduce-memory-usage-in-pandas/" rel="nofollow noreferrer">credit</a></p>
<pre><code>def reduce_mem_usage(df):
start_mem = df.memory_usage().sum() / 1024**2
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.uint8).min and c_max < np.iinfo(np.uint8).max:
df[col] = df[col].astype(np.uint8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.uint16).min and c_max < np.iinfo(np.uint16).max:
df[col] = df[col].astype(np.uint16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.uint32).min and c_max < np.iinfo(np.uint32).max:
df[col] = df[col].astype(np.uint32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
elif c_min > np.iinfo(np.uint64).min and c_max < np.iinfo(np.uint64).max:
df[col] = df[col].astype(np.uint64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
</code></pre>
<p>This will make sure your dataframe uses as little memory as possible while you're working with it.</p>
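<p>Usage is then simply something like this (the file name is a placeholder, and the function above needs <code>numpy</code> imported as <code>np</code> and <code>pandas</code> as <code>pd</code>):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.read_csv('Reviews.csv', nrows=5000)  # your 5000-row sample
df = reduce_mem_usage(df)                    # shrink dtypes before running t-SNE
</code></pre>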
|
pandas|data-visualization|visualization
| 0 |
1,907,478 | 61,215,371 |
Reading csv file as a table with Dictreader
|
<p>If I have a CSV file which is organized as below:</p>
<p><a href="https://i.stack.imgur.com/5oPNt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5oPNt.png" alt="enter image description here"></a></p>
<p>I am trying to use Dictreader for csv module:</p>
<pre><code>file = open('file.csv')
csv_file = csv.DictReader(file)
data = {row['model']: row for row in csv_file}
</code></pre>
<p>I am trying to read this as a dictionary in which there is a separate dictionary for each model and its coefficients like this:</p>
<pre><code>{
first:{a:1,b:2,c:3,d:4},
second:{a:1,b:2,c:3,d:4},
third:{a:1,b:2,c:3,d:4},
}
</code></pre>
<p>But I still have the model name in the inner dictionary. How can I remove it?</p>
|
<p>You can pop the <code>model</code> key from each <code>row</code> dict instead:</p>
<pre><code>data = {row.pop('model'): row for row in csv_file}
</code></pre>
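<p>A small end-to-end sketch, assuming the CSV is laid out like in the screenshot (a <code>model</code> column followed by the coefficient columns):</p>
<pre><code>import csv

with open('file.csv') as f:
    csv_file = csv.DictReader(f)
    data = {row.pop('model'): dict(row) for row in csv_file}

print(data)
# {'first': {'a': '1', 'b': '2', 'c': '3', 'd': '4'}, ...}
# (the values are strings; wrap them with int()/float() if you need numbers)
</code></pre>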
|
python|csv|dictionary
| 3 |
1,907,479 | 57,739,450 |
Call a dataframe from a list with the names of dataframes
|
<p>I have a list with all the names of my dataframes, e.g. list = ['df1','df2','df3','df4']. I would like to extract specifically df4 by using something like list[3], meaning instead of getting the string 'df4' I want to get the df4 dataframe itself. Help?</p>
|
<p>It sounds like you have this in pseudocode:</p>
<pre><code>df1 = DataFrame()
df2 = DataFrame()
df3 = DataFrame()
df4 = DataFrame()
your_list = ["df1", "df2", "df3", "df4"]
</code></pre>
<p>And your goal is to get <code>df4</code> from <code>your_list['df4']</code></p>
<p>You could, instead, put all the dataframes in the list in the first place, rather than strings.</p>
<pre><code>your_list = [df1, df2, df3, df4]
</code></pre>
<p>Or even better, a dictionary with names:</p>
<pre><code>list_but_really_a_dict = {"df1": df1, "df2": df2, "df3": df3, "df4": df4}
</code></pre>
<p>And then you can do <code>list_but_really_a_dict['df1']</code> and get <code>df1</code></p>
|
python|list
| 1 |
1,907,480 | 57,914,830 |
Sorting a list using argsort in Numpy?
|
<p>I have a list in python and the first numbers are <code>[[29.046875, 1], [33.65625, 1], [18.359375, 1], [11.296875, 1], [36.671875, 1], [23.578125, 1],.........,[34.5625, 1]]</code></p>
<p>The above list is given an id of <code>listNumber</code>. I'm trying to use numpy.argsort to sort it based on the float elements:</p>
<pre><code>listNumber = np.array(listNumber)
print(np.argsort(listNumber))
</code></pre>
<p>But this gives me the following but not sure why:</p>
<pre><code>[[1 0]
[1 0]
[1 0]
...
[1 0]
[1 0]
[1 0]]
</code></pre>
<p>Why is this returning this? and is there another way to approach this?</p>
|
<p>OK, so I think there are two things going on here:</p>
<p>1- Your list is a list of lists</p>
<p>2- The 'argsort' function:</p>
<blockquote>
<p>returns the indices that <em>would</em> sort an array.</p>
</blockquote>
<p>According to the documentation.</p>
<p>So what is happening is the function reads through each item of the list, which in itself is a list, say index 0 is:</p>
<blockquote>
<p>[29.046875, 1]</p>
</blockquote>
<p>Then it is saying, okay this is another list so let me sort it and then return a number based on <em>where it would go if it was the new index</em>:</p>
<blockquote>
<p>[29.046875, 1] -> [1, 0] </p>
</blockquote>
<p>Because 1 would come before 29 if it was sorted in ascending order.
It does this for every nested list then gives you a final list containing all these 1's and 0's. </p>
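<p>To make that concrete, here is a small reproduction (and, in case it helps, one way to order the rows by the float in the first column instead):</p>
<pre><code>import numpy as np

listNumber = np.array([[29.046875, 1], [33.65625, 1], [18.359375, 1]])

# argsort works along the last axis by default, i.e. within each row,
# which is why every row comes back as [1 0]: the 1 is smaller than the float.
print(np.argsort(listNumber))
# [[1 0]
#  [1 0]
#  [1 0]]

# To sort the rows by the first column, argsort that column and index the rows:
order = np.argsort(listNumber[:, 0])
print(listNumber[order])
</code></pre>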
<p>This answers the first question. Another user was able to answer the second :)</p>
|
python
| 3 |
1,907,481 | 56,423,904 |
Stop mainloop in turtle by check variable
|
<p>I am trying to write a program with Python turtle that asks the user for a number and lets them click on the screen that same number of times.</p>
<pre><code>import turtle
t = turtle.Turtle()
count = 0
def up_count(x,y):
global count
count = count + 1
print count
return
def start():
num1=int(raw_input("enter number"))
print "in the gaeme you need to enter number and click on button",num1,"times"
s = t.getscreen()
if not num1 == count:
s.onclick(up_count)
else:
t.mainloop()
start()
</code></pre>
<p>The problem is that I can not get out of mainloop when num1 == count.</p>
<p>How can I get out of mainloop?</p>
<p>I use <a href="https://repl.it/@eliadchoen/BrightShinyKernel" rel="nofollow noreferrer">https://repl.it/@eliadchoen/BrightShinyKernel</a> for the program.</p>
|
<p>You don't <em>get out</em> of <code>mainloop()</code> until you're done with your use of turtle graphics. For example:</p>
<pre><code>from turtle import Screen, Turtle, mainloop
count = 0
number = -1
def up_count(x, y):
global count
count += 1
if number == count:
screen.bye()
def start():
global number
number = int(raw_input("Enter number: "))
print "You need to click on window", number, "times"
screen.onclick(up_count)
screen = Screen()
turtle = Turtle()
start()
mainloop()
print "Game over!"
</code></pre>
<p>My guess is this isn't your goal, so explain what it is you want to happen, in your question.</p>
|
python|turtle-graphics
| 0 |
1,907,482 | 56,286,625 |
Programmatic Checkout from Amazon.com ( & Amazon.in )
|
<p>I am trying to automate the purchase of products listed on Amazon.com in Python. We would like to make a 1-click purchase with the saved payment details. </p>
<p>I understand <a href="https://zincapi.com" rel="noreferrer">Zinc</a> enables automated product purchases through their API. Does Amazon have a sandbox account to test this? Any idea how this can be done?</p>
<p>Any suggestions or pointers in this regard will be really helpful. </p>
|
<p>I found many sources online that stated much the same thing; that as a buyer you can not make an Amazon sandbox, but as a seller, you can (Amazon Pay)</p>
<p><a href="https://www.quora.com/How-do-I-create-a-Sandbox-account-in-Amazon" rel="nofollow noreferrer">https://www.quora.com/How-do-I-create-a-Sandbox-account-in-Amazon</a></p>
<p>The instructions for how to make a seller virtual account can be found <a href="https://developer.amazon.com/docs/amazon-pay-hosted/set-up-test-account.html" rel="nofollow noreferrer">here</a>.</p>
<p>From your question, it seems like you know how to do the 1-click order thing, but <a href="https://www.geeksforgeeks.org/mouse-keyboard-automation-using-python/" rel="nofollow noreferrer">here</a> it is, just in case (basically you just use the <code>pyautogui</code> module).</p>
|
python|amazon-web-services|automation|amazon-product-api
| 0 |
1,907,483 | 56,032,077 |
How can I scrape data from an HTML table into a Python list/dict?
|
<p>I'm trying to import data from <a href="https://legacy.baseballprospectus.com/card/70917/trea-turner" rel="nofollow noreferrer">Baseball Prospectus</a> into a Python table / dictionary (which would be better?). </p>
<p>Below is what I have, based on following along to Automate The Boring Stuff with Python.</p>
<p>I get that my method isn't properly using these functions, but I can't figure out what tools I should be using.</p>
<pre><code>import requests
import webbrowser
import bs4
res = requests.get('https://legacy.baseballprospectus.com/card/70917/trea-turner')
res.raise_for_status()
webpage = bs4.BeautifulSoup(res.text)
table = webpage.select('newstat_career_log_datagrid')
list = []
for item in table:
list.append(item)
print(list)
</code></pre>
|
<p>Use a pandas DataFrame to fetch the <code>MLB Statistics</code> table first and then convert the dataframe into a dictionary object. If you don't have pandas installed, you can do it in a single command:</p>
<pre><code> pip install pandas
</code></pre>
<p>Then use the below code.</p>
<pre><code>import pandas as pd
df=pd.read_html('https://legacy.baseballprospectus.com/card/70917/trea-turner')
data_dict = df[5].to_dict()
print(data_dict)
</code></pre>
<p>Output:</p>
<pre><code>{'PA': {0: 44, 1: 324, 2: 447, 3: 740, 4: 15, 5: 1570}, '2B': {0: 1, 1: 14, 2: 24, 3: 27, 4: 1, 5: 67}, 'TEAM': {0: 'WAS', 1: 'WAS', 2: 'WAS', 3: 'WAS', 4: 'WAS', 5: 'Career'}, 'SB': {0: 2, 1: 33, 2: 46, 3: 43, 4: 4, 5: 128}, 'G': {0: 27, 1: 73, 2: 98, 3: 162, 4: 4, 5: 364}, 'HR': {0: 1, 1: 13, 2: 11, 3: 19, 4: 2, 5: 46}, 'FRAA': {0: 0.5, 1: -3.2, 2: 0.2, 3: 7.1, 4: -0.1, 5: 4.5}, 'BWARP': {0: 0.1, 1: 2.4, 2: 2.7, 3: 5.0, 4: 0.1, 5: 10.4}, 'CS': {0: 2, 1: 6, 2: 8, 3: 9, 4: 0, 5: 25}, '3B': {0: 0, 1: 8, 2: 6, 3: 6, 4: 0, 5: 20}, 'H': {0: 9, 1: 105, 2: 117, 3: 180, 4: 5, 5: 416}, 'AGE': {0: '22', 1: '23', 2: '24', 3: '25', 4: '26', 5: 'Career'}, 'OBP': {0: 0.295, 1: 0.37, 2: 0.33799999999999997, 3: 0.344, 4: 0.4, 5: 0.34700000000000003}, 'AVG': {0: 0.225, 1: 0.342, 2: 0.284, 3: 0.271, 4: 0.35700000000000004, 5: 0.289}, 'DRC+': {0: 77, 1: 128, 2: 99, 3: 107, 4: 103, 5: 108}, 'SO': {0: 12, 1: 59, 2: 80, 3: 132, 4: 5, 5: 288}, 'YEAR': {0: '2015', 1: '2016', 2: '2017', 3: '2018', 4: '2019', 5: 'Career'}, 'SLG': {0: 0.325, 1: 0.5670000000000001, 2: 0.451, 3: 0.41600000000000004, 4: 0.857, 5: 0.46}, 'DRAA': {0: -1.0, 1: 11.4, 2: 1.0, 3: 8.5, 4: 0.1, 5: 20.0}, 'HBP': {0: 0, 1: 1, 2: 4, 3: 5, 4: 0, 5: 10}, 'BRR': {0: 0.1, 1: 5.9, 2: 6.8, 3: 2.7, 4: 0.2, 5: 15.7}, 'BB': {0: 4, 1: 14, 2: 30, 3: 69, 4: 1, 5: 118}}
</code></pre>
|
python|web|beautifulsoup|screen-scraping
| 0 |
1,907,484 | 18,452,186 |
S3 and Filepicker.io - Multi-file, zipped download on the fly
|
<p>I am using Ink (Filepicker.io) to perform multi-file uploads and it is working brilliantly.</p>
<p>However, a quick look around the internet shows multi-file downloads are more complicated. While I know it is possible to spin up an EC2 instance to zip on the fly, this would also entail some wait-time on the user's part, and the newly created file would also not be available on my Cloudfront immediately. </p>
<p>Has anybody done this before, and what are the practical UX implications - is the wait time significant enough to negatively affect the user experience?</p>
<p>The obvious solution would be to create the zipped files ahead of time, but this would result in some (unnecessary?) redundancy.</p>
<p>What is the best way to avoid redundant storage while reducing wait times for on-the-fly folder compression?</p>
|
<p>You can create the ZIP archive on the client side using JavaScript. Check out:</p>
<p><a href="http://stuk.github.io/jszip/" rel="nofollow">http://stuk.github.io/jszip/</a></p>
|
python|django|amazon-s3|amazon-cloudfront|filepicker.io
| 0 |
1,907,485 | 69,514,653 |
How to properly unescape the STDOUT from the PostgreSQL COPY syntax via python/psycopg2's copy_expert
|
<p>I stream jsonb objects from a postgres table as a large <code>application/jsonl</code> http response. This works really well:</p>
<pre><code>def stream_json_from_copy_syntax():
conn = psycopg2.connect(...)
cur = conn.cursor()
file = BytesIO()
cur.copy_expert(
sql="COPY (select my_jsonb from large_table) TO STDOUT WITH (FORMAT text)",
file=file
)
file.seek(0)
for line in file.readlines():
line = unescape(line)
yield line
</code></pre>
<p>The above is pushed into a Flask response. This is fast, and has a very low memory footprint.</p>
<p>The jsonb field is initially properly escaped, but it seems the <code>(FORMAT text)</code> is adding some extra escaping.</p>
<p>This turns <code>My \ string</code> into the proper json <code>"My \\ string"</code> and then into <code>"My \\\\ string"</code>. So there is extra escaping going on. I've handled this for now with:</p>
<pre><code>def unescape(line):
return line.replace(b'\\\\', b'\\')
</code></pre>
<p>I think this solves most issues, if not all, but I'm a litte unsure if there are other things the <code>(FORMAT text)</code> is escaping which I haven't handled.</p>
<p>How do I properly unescape what <code>(FORMAT text)</code> does? Or is there a way not to let it escape anything at all, because the initial json escaping works fine.</p>
<p>Note that using <code>(ESCAPE 'escape_character')</code> only applies to <code>(FORMAT csv)</code>: <a href="https://www.postgresql.org/docs/current/sql-copy.html" rel="nofollow noreferrer">https://www.postgresql.org/docs/current/sql-copy.html</a></p>
|
<p>You are exporting data in the format that Postgres COPY...FROM would understand. Maybe you need a different format, such as CSV.</p>
|
python|json|postgresql|psycopg2
| 0 |
1,907,486 | 69,538,178 |
Medical imaging with tcia data
|
<p>I'm trying to build CNN network for MRI data from tcia dataset.</p>
<p>However, transferring DICOM format to nifti format is not working after using dicom2nifti.</p>
<p>The code did run, but did not produce a file.</p>
<p>Is this because the data format from the TCIA dataset ends with .tcia?</p>
<p>Thanks a lot. Below are the data format and the code I used for dicom2nifti:<a href="https://i.stack.imgur.com/2vDX1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2vDX1.png" alt="enter image description here" /></a></p>
<p><img src="https://i.stack.imgur.com/EALVK.png" alt="1" /></p>
|
<p>The .tcia file extension stands for "The Cancer Imaging Archive"; such a file is essentially a collection of hyperlinks.</p>
<p>You will need to use the TCIA downloader client ("NBIA Retriever") to get the actual DICOM files you want.</p>
<p>You can download it using the following guide:
<a href="https://wiki.cancerimagingarchive.net/display/NBIA/Downloading+TCIA+Images" rel="nofollow noreferrer">https://wiki.cancerimagingarchive.net/display/NBIA/Downloading+TCIA+Images</a></p>
<p>Once installed, a simple double-click on a .tcia file should open the NBIA retriever client.</p>
|
python|image|medical|medical-imaging|nifti
| 2 |
1,907,487 | 55,402,792 |
QLineEdit set title case
|
<p>Is it possible to use QValidator to set the text of a QLineEdit while the input is being typed in? If so, can someone provide a push in the right direction on how to accomplish it? Thank you.</p>
|
<p>You just have to overwrite the validate method:</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
class TitleValidator(QtGui.QValidator):
def validate(self, _input, pos):
return QtGui.QValidator.Acceptable , _input.title(), pos
if __name__ == '__main__':
import sys
app = QtWidgets.QApplication(sys.argv)
w = QtWidgets.QLineEdit()
validator = TitleValidator(w)
w.setValidator(validator)
w.show()
sys.exit(app.exec_())
</code></pre>
|
python|python-3.x|pyqt|pyqt5|qlineedit
| 2 |
1,907,488 | 55,184,516 |
Adding a key/value pair once I have recursively searched a dict
|
<p>I have searched a nested dict for certain keys and have succeeded in locating the keys I am looking for, but I am not sure how I can now add a key/value pair at the location where the key was found. Is there a way to tell Python to append the data entry at the location it is currently looking at?</p>
<p>Code:</p>
<pre><code>import os
import json
import shutil
import re
import fileinput
from collections import OrderedDict
#Finds and lists the folders that have been provided
d='.'
folders = list(filter (lambda x: os.path.isdir(os.path.join(d, x)), os.listdir(d)))
print("Folders found: ")
print(folders)
print("\n")
def processModelFolder(inFolder):
#Creating the file names
fileName = os.path.join(d, inFolder, inFolder + ".mdl")
fileNameTwo = os.path.join(d, inFolder, inFolder + ".vg2.json")
fileNameThree = os.path.join(d, inFolder, inFolder + "APPENDED.vg2.json")
#copying the json file so the new copy can be appended
shutil.copyfile(fileNameTwo, fileNameThree)
#assigning IDs and properties to search for in the mdl file
IDs = ["7f034e5c-24df-4145-bab8-601f49b43b50"]
Properties = ["IDSU_FX[0]"]
#Basic check to see if IDs and Properties are valid
for i in IDs:
if len(i) != 36:
print("ID may not have been valid and might not return the results you expect, check to ensure the characters are correct: ")
print(i)
print("\n")
if len(IDs) == 0:
print("No IDs were given!")
elif len(Properties) == 0:
print("No Properties were given!")
#Reads code untill an ID is found
else:
with open(fileName , "r") as in_file:
IDCO = None
for n, line in enumerate(in_file, 1):
if line.startswith('IDCO_IDENTIFICATION'):
#Checks if the second part of each line is a ID tag in IDs
if line.split('"')[1] in IDs:
#If ID found it is stored as IDCO
IDCO = line.split('"')[1]
else:
if IDCO:
pass
IDCO = None
#Checks if the first part of each line is a Prop in Propterties
elif IDCO and line.split(' ')[0] in Properties:
print('Found! ID:{} Prop:{} Value: {}'.format(IDCO, line.split('=')[0][:-1], line.split('=')[1][:-1]))
print("\n")
#Stores the property name and value
name = str(line.split(' ')[0])
value = str(line.split(' ')[2])
#creates the entry to be appended to the dict
#json file editing
with open(fileNameThree , "r+") as json_data:
python_obj = json.load(json_data)
#calling recursive search
get_recursively(python_obj, IDCO, name, value)
with open(fileNameThree , "w") as json_data:
json.dump(python_obj, json_data, indent = 1)
print('Processed {} lines in file: {}'.format(n , fileName))
def get_recursively(search_dict, IDCO, name, value):
"""
Takes a dict with nested lists and dicts,
and searches all dicts for a key of the field
provided, when key "id" is found it checks to,
see if its value is the current IDCO tag, if so it appends the new data.
"""
fields_found = []
for key, value in search_dict.iteritems():
if key == "id":
if value == IDCO:
print("FOUND IDCO IN JSON: " + value +"\n")
elif isinstance(value, dict):
results = get_recursively(value, IDCO, name, value)
for result in results:
x = 1
elif isinstance(value, list):
for item in value:
if isinstance(item, dict):
more_results = get_recursively(item, IDCO, name, value)
for another_result in more_results:
x=1
return fields_found
for modelFolder in folders:
processModelFolder(modelFolder)
</code></pre>
<p>In short, once it finds a key/id value pair that I want, can I tell it to append name/value to that location directly and then continue?</p>
<p>nested dict:</p>
<pre><code>{
"id": "79cb20b0-02be-42c7-9b45-96407c888dc2",
"tenantId": "00000000-0000-0000-0000-000000000000",
"name": "2-stufiges Stirnradgetriebe",
"description": null,
"visibility": "None",
"method": "IDM_CALCULATE_GEAR_COUPLED",
"created": "2018-10-16T10:25:20.874Z",
"createdBy": "00000000-0000-0000-0000-000000000000",
"lastModified": "2018-10-16T10:25:28.226Z",
"lastModifiedBy": "00000000-0000-0000-0000-000000000000",
"client": "STRING_BEARINX_ONLINE",
"project": {
"id": "10c37dcc-0e4e-4c4d-a6d6-12cf65cceaf9",
"name": "proj 2",
"isBookmarked": false
},
"rootObject": {
"id": "6ff0010c-00fe-485b-b695-4ddd6aca4dcd",
"type": "IDO_GEAR",
"children": [
{
"id": "1dd94d1a-e52d-40b3-a82b-6db02a8fbbab",
"type": "IDO_SYSTEM_LOADCASE",
"children": [],
"childList": "SYSTEMLOADCASE",
"properties": [
{
"name": "IDCO_IDENTIFICATION",
"value": "1dd94d1a-e52d-40b3-a82b-6db02a8fbbab"
},
{
"name": "IDCO_DESIGNATION",
"value": "Lastfall 1"
},
{
"name": "IDSLC_TIME_PORTION",
"value": 100
},
{
"name": "IDSLC_DISTANCE_PORTION",
"value": 100
},
{
"name": "IDSLC_OPERATING_TIME_IN_HOURS",
"value": 1
},
{
"name": "IDSLC_OPERATING_TIME_IN_SECONDS",
"value": 3600
},
{
"name": "IDSLC_OPERATING_REVOLUTIONS",
"value": 1
},
{
"name": "IDSLC_OPERATING_DISTANCE",
"value": 1
},
{
"name": "IDSLC_ACCELERATION",
"value": 9.81
},
{
"name": "IDSLC_EPSILON_X",
"value": 0
},
{
"name": "IDSLC_EPSILON_Y",
"value": 0
},
{
"name": "IDSLC_EPSILON_Z",
"value": 0
},
{
"name": "IDSLC_CALCULATION_WITH_OWN_WEIGHT",
"value": "CO_CALCULATION_WITHOUT_OWN_WEIGHT"
},
{
"name": "IDSLC_CALCULATION_WITH_TEMPERATURE",
"value": "CO_CALCULATION_WITH_TEMPERATURE"
},
{
"name": "IDSLC_FLAG_FOR_LOADCASE_CALCULATION",
"value": "LB_CALCULATE_LOADCASE"
},
{
"name": "IDSLC_STATUS_OF_LOADCASE_CALCULATION",
"value": false
}
],
"position": 1,
"order": 1,
"support_vector": {
"x": 0,
"y": 0,
"z": 0
},
"u_axis_vector": {
"x": 1,
"y": 0,
"z": 0
},
"w_axis_vector": {
"x": 0,
"y": 0,
"z": 1
},
"role": "_none_"
},
{
"id": "ab7fbf37-17bb-4e60-a543-634571a0fd73",
"type": "IDO_SHAFT_SYSTEM",
"children": [
{
"id": "7f034e5c-24df-4145-bab8-601f49b43b50",
"type": "IDO_RADIAL_ROLLER_BEARING",
"children": [
{
"id": "0b3e695b-6028-43af-874d-4826ab60dd3f",
"type": "IDO_RADIAL_BEARING_INNER_RING",
"children": [
{
"id": "330aa09d-60fb-40d7-a190-64264b3d44b7",
"type": "IDO_LOADCONTAINER",
"children": [
{
"id": "03036040-fc1a-4e52-8a69-d658e18a8d4a",
"type": "IDO_DISPLACEMENT",
"children": [],
"childList": "DISPLACEMENT",
"properties": [
{
"name": "IDCO_IDENTIFICATION",
"value": "03036040-fc1a-4e52-8a69-d658e18a8d4a"
},
{
"name": "IDCO_DESIGNATION",
"value": "Displacement 1"
}
],
"position": 1,
"order": 1,
"support_vector": {
"x": -201.3,
"y": 0,
"z": -229.8
},
"u_axis_vector": {
"x": 1,
"y": 0,
"z": 0
},
"w_axis_vector": {
"x": 0,
"y": 0,
"z": 1
},
"shaftSystemId": "ab7fbf37-17bb-4e60-a543-634571a0fd73",
"role": "_none_"
},
{
"id": "485f5bf4-fb97-415b-8b42-b46e9be080da",
"type": "IDO_CUMULATED_LOAD",
"children": [],
"childList": "CUMULATEDLOAD",
"properties": [
{
"name": "IDCO_IDENTIFICATION",
"value": "485f5bf4-fb97-415b-8b42-b46e9be080da"
},
{
"name": "IDCO_DESIGNATION",
"value": "Cumulated load 1"
},
{
"name": "IDCO_X",
"value": 0
},
{
"name": "IDCO_Y",
"value": 0
},
{
"name": "IDCO_Z",
"value": 0
}
],
"position": 2,
"order": 1,
"support_vector": {
"x": -201.3,
"y": 0,
"z": -229.8
},
"u_axis_vector": {
"x": 1,
"y": 0,
"z": 0
},
"w_axis_vector": {
"x": 0,
"y": 0,
"z": 1
},
"shaftSystemId": "ab7fbf37-17bb-4e60-a543-634571a0fd73",
"role": "_none_"
}
],
"childList": "LOADCONTAINER",
"properties": [
{
"name": "IDCO_IDENTIFICATION",
"value": "330aa09d-60fb-40d7-a190-64264b3d44b7"
},
{
"name": "IDCO_DESIGNATION",
"value": "Load container 1"
},
{
"name": "IDLC_LOAD_DISPLACEMENT_COMBINATION",
"value": "LOAD_MOMENT"
},
{
"name": "IDLC_TYPE_OF_MOVEMENT",
"value": "LB_ROTATING"
},
{
"name": "IDLC_NUMBER_OF_ARRAY_ELEMENTS",
"value": 20
}
],
"position": 1,
"order": 1,
"support_vector": {
"x": -201.3,
"y": 0,
"z": -229.8
},
"u_axis_vector": {
"x": 1,
"y": 0,
"z": 0
},
"w_axis_vector": {
"x": 0,
"y": 0,
"z": 1
},
"shaftSystemId": "ab7fbf37-17bb-4e60-a543-634571a0fd73",
"role": "_none_"
},
{
"id": "3258d217-e6e4-4a5c-8677-ae1fca26f21e",
"type": "IDO_RACEWAY",
"children": [],
"childList": "RACEWAY",
"properties": [
{
"name": "IDCO_IDENTIFICATION",
"value": "3258d217-e6e4-4a5c-8677-ae1fca26f21e"
},
{
"name": "IDCO_DESIGNATION",
"value": "Raceway 1"
},
{
"name": "IDRCW_UPPER_DEVIATION_RACEWAY_DIAMETER",
"value": 0
},
{
"name": "IDRCW_LOWER_DEVIATION_RACEWAY_DIAMETER",
"value": 0
},
{
"name": "IDRCW_PROFILE_OFFSET",
"value": 0
},
{
"name": "IDRCW_PROFILE_ANGLE",
"value": 0
},
{
"name": "IDRCW_PROFILE_CURVATURE_RADIUS",
"value": 0
},
{
"name": "IDRCW_PROFILE_CENTER_POINT_OFFSET",
"value": 0
},
{
"name": "IDRCW_PROFILE_NUMBER_OF_WAVES",
"value": 0
},
{
"name": "IDRCW_PROFILE_AMPLITUDE",
"value": 0
},
{
"name": "IDRCW_PROFILE_POSITION_OF_FIRST_WAVE",
"value": 0
},
</code></pre>
|
<h2>Bug</h2>
<p>First of all, replace the <code>value</code> variable's name by something else, because you have a <code>value</code> variable as the method argument and another <code>value</code> variable with the same name when iterating over the dictionary:</p>
<pre class="lang-py prettyprint-override"><code>for key, value in search_dict.iteritems(): # <-- REPLACE value TO SOMETHING ELSE LIKE val
</code></pre>
<p>Otherwise you will have bugs, because the <code>value</code> from the dictionary is the new value which you will insert. But if you iterate like <code>for key, val in</code> then you can actually use the outer <code>value</code> variable.</p>
<h2>Adding The Value Pair</h2>
<p>It seems <code>id</code> is a key inside your <code>search_dict</code>, but reading your JSON file your <code>search_dict</code> may have several nested lists like <code>properties</code> and/or <code>children</code>, so it depends on where you want to add the new pair.</p>
<h3>If you want to add it to the same dictionary where your <code>id</code> is:</h3>
<pre class="lang-py prettyprint-override"><code>if key == "id":
if value == IDCO:
print("FOUND IDCO IN JSON: " + value +"\n")
search_dict[name] = value
</code></pre>
<p>Result:</p>
<pre><code>{
"id": "3258d217-e6e4-4a5c-8677-ae1fca26f21e",
"type": "IDO_RACEWAY",
"children": [],
"childList": "RACEWAY",
"<new name>": "<new value>",
"properties": [
{
"name": "IDCO_IDENTIFICATION",
"value": "3258d217-e6e4-4a5c-8677-ae1fca26f21e"
},
</code></pre>
<h3>If you want to add it to the <code>children</code> or <code>properties</code> <em>list</em> inside the dictionary where <code>id</code> is:</h3>
<pre class="lang-py prettyprint-override"><code>if key == "id":
if value == IDCO:
print("FOUND IDCO IN JSON: " + value +"\n")
if search_dict.has_key("properties"): # you can swap "properties" to "children", depends on your use case
search_dict["properties"].append({"name": name, "value": value}) # a new dictionary with 'name' and 'value' keys
</code></pre>
<p>Result:</p>
<pre><code>{
"id": "3258d217-e6e4-4a5c-8677-ae1fca26f21e",
"type": "IDO_RACEWAY",
"children": [],
"childList": "RACEWAY",
"properties": [
{
"name": "IDCO_IDENTIFICATION",
"value": "3258d217-e6e4-4a5c-8677-ae1fca26f21e"
},
{
"name": "<new name>",
"value": "<new value>"
},
</code></pre>
|
python|dictionary|recursion
| 0 |
1,907,489 | 57,639,736 |
Python - Append line under another line that contains a certain string
|
<p>I want to append the string <code>"$basetexturetransform" "center .5 .5 scale 4 4 rotate 0 translate 0 0"</code> (including the quotation marks) as a new line under every line that contains the string <code>$basetexture</code>.</p>
<p>So that, for example, the file</p>
<pre><code>"LightmappedGeneric"
{
"$basetexture" "Concrete/concrete_modular_floor001a"
"$surfaceprop" "concrete"
"%keywords" "portal"
}
</code></pre>
<p>turns into</p>
<pre><code>"LightmappedGeneric"
{
"$basetexture" "Concrete/concrete_modular_floor001a"
"$basetexturetransform" "center .5 .5 scale 4 4 rotate 0 translate 0 0"
"$surfaceprop" "concrete"
"%keywords" "portal"
}
</code></pre>
<p>and I want to do it for every file that has the file extension ".vmt" in a folder (including sub folders). </p>
<p>Is there an easy way to do this in Python? I have like 400 <code>.vmt</code> files in a folder that I need to modify and it would be a real pain to have to do it manually.</p>
|
<p>This expression will likely do that with <code>re.sub</code>:</p>
<pre><code>import re
regex = r"(\"\$basetexture\".*)"
test_str = """
"LightmappedGeneric"
{
"$basetexture" "Concrete/concrete_modular_floor001a"
"$surfaceprop" "concrete"
"%keywords" "portal"
}
"LightmappedGeneric"
{
"$nobasetexture" "Concrete/concrete_modular_floor001a"
"$surfaceprop" "concrete"
"%keywords" "portal"
}
"""
subst = "\\1\\n\\t\"$basetexturetransform\" \"center .5 .5 scale 4 4 rotate 0 translate 0 0\""
print(re.sub(regex, subst, test_str, 0, re.MULTILINE))
</code></pre>
<hr />
<h3>Output</h3>
<pre><code>"LightmappedGeneric"
{
"$basetexture" "Concrete/concrete_modular_floor001a"
"$basetexturetransform" "center .5 .5 scale 4 4 rotate 0 translate 0 0"
"$surfaceprop" "concrete"
"%keywords" "portal"
}
"LightmappedGeneric"
{
"$nobasetexture" "Concrete/concrete_modular_floor001a"
"$surfaceprop" "concrete"
"%keywords" "portal"
}
</code></pre>
<hr />
<blockquote>
<p>If you wish to explore/simplify/modify the expression, it's been
explained on the top right panel of
<a href="https://regex101.com/r/ZHZhc4/1/" rel="nofollow noreferrer">regex101.com</a>. If you'd like, you
can also watch in <a href="https://regex101.com/r/ZHZhc4/1/debugger" rel="nofollow noreferrer">this
link</a>, how it would match
against some sample inputs.</p>
</blockquote>
<hr />
<h3>Reference</h3>
<p><a href="https://stackoverflow.com/questions/3964681/find-all-files-in-a-directory-with-extension-txt-in-python">Find all files in a directory with extension .txt in Python</a></p>
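<p>To apply it to every <code>.vmt</code> file in a folder (including sub folders), a rough sketch along these lines should do (the folder name is a placeholder, and you may want to back the files up first):</p>
<pre><code>import os
import re

regex = r"(\"\$basetexture\".*)"
subst = "\\1\\n\\t\"$basetexturetransform\" \"center .5 .5 scale 4 4 rotate 0 translate 0 0\""

root_folder = "materials"  # placeholder: folder containing your .vmt files
for dirpath, dirnames, filenames in os.walk(root_folder):
    for filename in filenames:
        if filename.lower().endswith(".vmt"):
            path = os.path.join(dirpath, filename)
            with open(path) as f:
                contents = f.read()
            with open(path, "w") as f:
                f.write(re.sub(regex, subst, contents, 0, re.MULTILINE))
</code></pre>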
|
python|regex|find|append|line
| 2 |
1,907,490 | 57,699,583 |
is it possible to run spark udf functions (mainly python) under docker?
|
<p>I'm using <code>pyspark</code> on <code>emr</code>. To simplify the setup of <code>python</code> libraries and dependencies, we're using <code>docker</code> images.</p>
<p>This works fine for general python applications (non spark), and for the spark driver (calling spark submit from within a docker image)</p>
<p>However, I couldn't find a method to make the workers run within a docker image (either the "full" worker, or just the <code>UDF</code> functions)</p>
<p><strong>EDIT</strong>
Found a solution with the beta EMR version; if there's an alternative with current (5.*) EMR versions, it's still relevant.</p>
|
<p>Apparently yarn 3.2 supports this feature: <a href="https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/DockerContainers.html" rel="nofollow noreferrer">https://hadoop.apache.org/docs/r3.2.0/hadoop-yarn/hadoop-yarn-site/DockerContainers.html</a></p>
<p>and it expected to be available with EMR 6 (now in beta) <a href="https://aws.amazon.com/blogs/big-data/run-spark-applications-with-docker-using-amazon-emr-6-0-0-beta/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/big-data/run-spark-applications-with-docker-using-amazon-emr-6-0-0-beta/</a></p>
|
python|docker|apache-spark|pyspark|amazon-emr
| 1 |
1,907,491 | 42,289,686 |
How to switch back and forth between (general) pyenv python and system python in Ubuntu?
|
<p>I used to work with python installed under <code>anaconda3</code> in my Ubuntu. But for some reason, I needed to also create a <code>pyenv</code> and generalize it for all users. To run python scripts, I learned that unlike <code>anaconda3</code>, I have to build <code>pyenv</code> with all the needed python packages as I was receiving errors saying that modules are not defined. For this reason, after installing <code>pyenv</code>, I installed required modules using <code>pip install <package_name></code> in <code>(general) pyenv</code> shell. And now I am able to run the scripts. Is there a way to switch back and forth between <code>anaconda3</code> system python and <code>pyenv</code> python? </p>
<p>(just from the prompt <em>(general) username@username-Rev-1-0:~$,</em> I know that I am in <code>pyenv</code> right now.)</p>
<p>Here is the relevant portion of <code>.bashrc</code> file:</p>
<pre><code># added by Anaconda3 4.3.0 installer
export PATH="/home/username/anaconda3/bin:$PATH"
# Load pyenv automatically by adding
# the following to ~/.bash_profile:
export PATH="/home/username/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
</code></pre>
|
<p>Inspired by answers, thank you. I have used a similar approach on MacOs:</p>
<pre><code># in my ~/.bash_profile
# Anaconda app is installed and initiated at the terminal start
# path to Anaconda: /Users/<USER>/opt/anaconda3/
switch_pyenv(){
conda deactivate
conda deactivate # in case you're not in base env
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"
}
switch_conda(){
conda activate base
export PATH="/Users/<USER>/opt/anaconda3/bin:$PATH"
}
# quick check which python, pip
w(){
which python
which pip
py -V
}
</code></pre>
<p>When I switch to an environment, I check 'where am i' with the shorthand <code>w</code>.</p>
|
python-3.x|anaconda|pyenv
| 0 |
1,907,492 | 42,516,701 |
MrJob multi step job execution time
|
<p>What is the best way to display execution time of a multi step map reduce job?</p>
<p>I tried to set a self variable in mapper init of <strong>step1</strong> of of the job</p>
<pre><code> def mapper_init_timer(self):
self.start= time.process_time()
</code></pre>
<p>But when I try to read this in reducer_final of <strong>Step2</strong> </p>
<pre><code>def reducer_final_timmer(self):
#self.start is None here
MRJob.set_status(self,"total time")
</code></pre>
<p>I can't figure out why the self variable is lost between steps.
And if that is by design, then how can we calculate the execution time of an MrJob script in a way that also gives the correct result when run with -r hadoop?</p>
|
<p>The simplest way would be to get the time before and after invoking <code>run()</code> and find the difference:</p>
<pre><code>from datetime import datetime
import sys

if __name__ == '__main__':
    start_time = datetime.now()
    MRJobClass.run()
    end_time = datetime.now()
    elapsed_time = end_time - start_time
    sys.stderr.write(str(elapsed_time))  # write() needs a string, not a timedelta
</code></pre>
|
python|hadoop|mrjob
| 1 |
1,907,493 | 54,136,563 |
How to list all children of a tag which shares its name with another sibling in beautifulsoup?
|
<p>I am trying to get a list of children tags of a particular tag. The tag is div. However, it has another sibling called div which is 2nd in the list of its siblings.</p>
<pre><code>enter code here
print(len(soup.body.div.main.div.section))
8
for i in range(8):
print(soup.body.div.main.div.section.contents[i].name)
None
a
div
None
script
None
input
div
print(soup.body.div.main.div.section.contents[7].name)
div
print(soup.body.div.main.div.section.div)
<div class="front-end-breadcrumb"></div>
print(len(soup.body.div.main.div.section.div))
0
print(len(soup.body.div.main.div.section.contents[2]))
0
print(len(soup.body.div.main.div.section.contents[7]))
2
print(soup.body.div.main.div.section[7])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/bs4/element.py", line 1016, in __getitem__
return self.attrs[key]
KeyError: 7
</code></pre>
<p>What I want is to be able to get the length of the second div tag. By using <code>...contents[7]</code> I was able to find the length. However, I may not always know where the second div tag is in the list of children of section. </p>
<p>I would like to be able to get a list of all children tags of the second div tag in the above code.</p>
<p>Also, if the second div has a child main, then I want to be able to call contents.div[2].main. However it doesn't work because of the KeyError. What is a workaround for it?</p>
<p>This is the webpage I am working on:</p>
<p><a href="https://www.indiatoday.in/magazine/cover-story/story/20071231-a-lost-cause-734888-2007-12-21" rel="nofollow noreferrer">https://www.indiatoday.in/magazine/cover-story/story/20071231-a-lost-cause-734888-2007-12-21</a></p>
<p>There is a lot of html content so I don't think I can post everything.</p>
|
<p>You're using "non-standard" way to select the element, if the DOM tree changed it will fail. Use <code>find()</code>, <code>findAll()</code>, <code>select()</code>, <code>select_one()</code> or read the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow noreferrer">Docs here</a>.</p>
<p><code>contents.div[2].main</code> is not valid because <code>contents</code> is a list <strong>not</strong> DOM tree.</p>
<p>You want to select <code><div class="story-section"></code> and all <code>div</code> inside it?</p>
<pre><code># select first element
story_section = soup.find('div', class_='story-section')
# or
story_section= soup.select_one('div.story-section')
print(story_section)
# get all "div" inside ".story-section"
div_in_aricle = story_section.findAll('div')
for div in story_section:
print(div)
#To get article body
article = soup.select_one('div.description')
# or
article = soup.find('div', class_='description')
print(article.text)
# 60 REVOLUTIONS — KHALISTAN(from left) Kanwar Pal, Zaffarwal,.....
</code></pre>
|
python|beautifulsoup
| 0 |
1,907,494 | 58,290,341 |
Dynamically create an inner class in a class in Python 3?
|
<p>To automate a Django API I'd like to dynamically generate ModelSerializer classes, such as:</p>
<pre><code>class AppleSerializer(serializers.ModelSerializer):
class Meta:
model = Apple
fields = '__all__'
class BananaSerializer(serializers.ModelSerializer):
class Meta:
model = Banana
fields = '__all__'
</code></pre>
<p>Creating a regular class is fairly simple (see below). How do you dynamically create the inner Meta class as well??</p>
<pre><code>for serialmodel in ['Apple', 'Banana']:
class_name = serialmodel + 'Serializer'
globals()[class_name] = type(class_name, (serializers.ModelSerializer,), {})
</code></pre>
|
<p>Found the answer:</p>
<pre><code>for serialmodel in ['Apple', 'Banana']:
class_name = serialmodel + 'Serializer'
Meta = type('Meta', (object, ), {'model': globals()[serialmodel], 'fields': '__all__'})
globals()[class_name] = type(class_name, (serializers.ModelSerializer,), {'Meta':Meta, })
</code></pre>
<p>Very useful if you have a lot of generic models in Django :)</p>
<p>Or, when you want to do this for all models automatically:</p>
<pre><code>models = dict(apps.all_models['your_django_app'])
for name, model in models.items():
class_name = name[:1].upper() + name[1:] + 'Serializer'
Meta = type('Meta', (object, ), {'model': model, 'fields': '__all__'})
globals()[class_name] = type(class_name, (serializers.ModelSerializer,), {'Meta':Meta, })
</code></pre>
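<p>A small design note (my own suggestion, not part of the original question): if you'd rather not write into <code>globals()</code>, the same idea works with an ordinary dict acting as a registry, which is easier to introspect and test. A sketch:</p>
<pre><code>serializer_registry = {}
for name, model in models.items():
    class_name = name[:1].upper() + name[1:] + 'Serializer'
    Meta = type('Meta', (object,), {'model': model, 'fields': '__all__'})
    serializer_registry[class_name] = type(class_name, (serializers.ModelSerializer,), {'Meta': Meta})

# look one up later, e.g. serializer_registry['AppleSerializer']
</code></pre>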
|
python-3.x|django-models
| 0 |
1,907,495 | 22,774,640 |
NameError in Python, name is not defined
|
<p>I'm trying to create a script such that when debugmode = 1 you can break the script by pressing the "UP" key on the LCD. On the other hand, when debugmode = 0 it goes back to the main menu. However, I'm getting this error:</p>
<pre><code>NameError: name 'debugmode' is not defined
</code></pre>
<p>This is where debugmode is set:</p>
<pre><code>if lcd.buttonPressed(lcd.LEFT):
lcd.clear()
lcd.message('Debug mode is enabled.')
sleep(3)
lcd.clear
debugmode = 1
elif lcd.buttonPressed(lcd.RIGHT):
lcd.clear()
lcd.message('Debug mode is disabled.')
sleep(3)
lcd.clear
debugmode = 0
</code></pre>
<p>And this is where debugmode is called:</p>
<pre><code>if debugmode == 1:
break
else:
subprocess.Popen("/home/fakepath/mainmenu.py")
break
</code></pre>
<p><strong>UPDATE</strong>: Ignacio's reply fixed my name error and kindall's comment fixed my problem with the variable not being set. Thanks Ignacio and kindall!</p>
|
<p>Bind the name first, then rebind it after.</p>
<pre><code>debugmode = 0
if lcd....
...
</code></pre>
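<p>A minimal, self-contained illustration of the same fix (the boolean flag below is just a stand-in for the <code>lcd.buttonPressed(...)</code> check, so it runs without the hardware):</p>
<pre><code>debugmode = 0                 # bind the name once, before any branch runs
left_pressed = True           # stand-in for lcd.buttonPressed(lcd.LEFT)
if left_pressed:
    debugmode = 1             # rebind inside the branch
print(debugmode)              # the name now exists on every code path
</code></pre>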
|
python|raspberry-pi
| 3 |
1,907,496 | 45,471,427 |
Replace values in a column based on a dictionary (Pandas)
|
<p>I have a text file (subject_ID_dict.csv) that contains subject # mappings, like this:</p>
<pre><code>30704 703
30705 849
30714 682
30720 699
30727 105
30729 708
30739 707
30757 854
30758 710
30763 724
30771 715
30773 99
30777 719
30779 717
30798 728
30805 732
30809 727
30831 734
30838 736
30868 735
30908 742
30929 115
30942 747
30944 743
30993 745
31006 116
31018 113
31040 758
31055 756
31057 755
31058 754
31068 760
31091 885
31147 764
31193 765
31196 767
31202 766
31209 117
31235 118
31268 772
31275 771
40017 -88
40018 542
40021 557
40023 28
</code></pre>
<p>I want to load this as a dictionary and use it to replace values in the first column in data.csv. So 40023 would become 28, for example.</p>
<p>Here is the code I have:</p>
<pre><code>import pandas as pd
from collections import defaultdict
# load text file where we want to replace things
df = pd.read_csv('data.csv', header=0)
# make dictionary
d = defaultdict(list)
with open('subject_ID_dict.csv') as f:
for line in f:
line=str(line)
k, v = map(int, line.split())
d[k].append(v)
print df['subid'].replace(d, inplace=True)
</code></pre>
<p>when I print d, I get this (snippet because it's quite long):</p>
<pre><code>defaultdict(<type 'list'>, {30720: [699], 30727: [105], 30729: [708], 30739: [707], 70319: [7066], 30757: [854], 30758: [710], 30763: [724], 30771: [715], 30773: [99], 70514: [7052], 30777: [719], 30779: [717], 70721: [-88], 70405: [-88], 30798: [728], 50331: [503310], 30805: [732], 30809: [727], 70674: [7080], 30831: [734], 30838: [736],
</code></pre>
<p>How can I remap the "subid" column of data.csv using my dictionary, d, built from subject_ID_dict.csv?</p>
|
<p>First, to facilitate quick replacement, create a flat dictionary of scalars (old id → new id). Don't use a <code>defaultdict</code> of lists — the mapping needs a single value per key.</p>
<pre><code>d = {}
with open('subject_ID_dict.csv') as f:
for line in f:
k, v = map(int, line.split())
d[k] = v
</code></pre>
<p>Next, use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="noreferrer"><code>Series.map</code></a> to transform your <code>subid</code> column.</p>
<pre><code>df['subid'] = df['subid'].map(d)
</code></pre>
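<p>One caveat worth adding (my note, not part of the original question): <code>map</code> turns any id that is missing from <code>d</code> into <code>NaN</code>. If some ids may be absent from <code>subject_ID_dict.csv</code> and you want to keep them unchanged, a sketch:</p>
<pre><code># fall back to the original subid when it has no mapping
mapped = df['subid'].map(d)
df['subid'] = mapped.fillna(df['subid']).astype(int)
</code></pre>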
|
python|pandas|dictionary|dataframe|replace
| 6 |
1,907,497 | 45,547,318 |
Building TensorBoard from source
|
<p>I built TensorFlow from source and built TensorBoard via
<code>bazel build tensorflow/contrib/tensorboard:tensorboard</code>.</p>
<p>The build process finished, but there is no tensorboard folder in
<code>./bazel-bin/tensorflow</code> or <code>./bazel-bin/tensorflow/contrib</code>.</p>
<p>Any ideas how to find the binary so I can run
<code>./bazel-bin/tensorflow/contrib/tensorboard/tensorboard --logdir=path/to/logs</code>?</p>
<p>OS: Ubuntu 16.04
Tensorflow for Python3 with GPU support
Thanks in advance</p>
|
<p>Tensorboard is in <a href="https://github.com/tensorflow/tensorboard" rel="nofollow noreferrer">its own repository</a> now. Try:</p>
<pre><code>git clone https://github.com/tensorflow/tensorboard
cd tensorboard
bazel build //tensorboard
...
Target //tensorboard:tensorboard up-to-date:
bazel-bin/tensorboard/tensorboard
INFO: Elapsed time: 87.181s, Critical Path: 50.59s
</code></pre>
<p>Then you can find it at <code>bazel-bin/tensorboard/tensorboard</code>.</p>
|
tensorflow|tensorboard|bazel
| 1 |
1,907,498 | 45,412,269 |
How to run only a specific method from a test suite
|
<p>I have a test suite with multiple methods in it. Is it possible to run only one method from the test suite?</p>
<pre><code>class TestSuite()
def setUp():
...
def test_one():
...
def test_two():
...
</code></pre>
<p><strike>I tried the following</p>
<pre><code>python testSuite.py.test_one
</code></pre>
<p>with no luck.</strike></p>
<p><strong>UPDATE</strong></p>
<p>To be more precise about the context, I am trying to launch Selenium functional automated tests written in Python against a website.
To execute a given test suite, I run (from a virtual environment):</p>
<pre><code>test.py testSuite.py
</code></pre>
<p>Is it possible to launch only a specific method declared in the testSuite.py file ?</p>
|
<p>You also need to pass the class name:</p>
<pre><code>$ python -m unittest test_module.TestClass.test_method
</code></pre>
<p>In your case (the module is <code>testSuite.py</code>), it would be:</p>
<pre><code>$ python -m unittest testSuite.TestSuite.test_one
</code></pre>
<p>Alternatively, add <code>@unittest.skip("reason")</code> to skip a particular test case.</p>
<p>In the example below, <code>test_one</code> will not execute:</p>
<pre><code>@unittest.skip("not needed for this run")
def test_one():
...
def test_two():
...
</code></pre>
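<p>If the suite is (or can be) run with pytest instead of the plain unittest runner — an assumption, since the question's <code>test.py</code> wrapper isn't shown — a single method can also be selected with the <code>::</code> syntax:</p>
<pre><code>pytest testSuite.py::TestSuite::test_one
</code></pre>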
|
python|selenium
| 0 |
1,907,499 | 45,503,470 |
Can't subtract datetime and NaT if in columns
|
<p>When I try the code below, I can compute everything in the loop, but I can't subtract the series. Why?</p>
<pre><code>a = pd.Series(pd.date_range(start='1/1/1980', periods=10, freq='1m'))
b = pd.Series([pd.NaT] * 10)
for i in range(10):
a[i] - b[i]
a - b
</code></pre>
<p>TypeError: data type "datetime" not understood</p>
|
<p>I could find two related (closed) bugs:</p>
<ul>
<li><a href="https://github.com/pandas-dev/pandas/issues/16726" rel="nofollow noreferrer">Different behaviour on two different environments. TypeError: data type "datetime" not understood</a></li>
<li><a href="https://github.com/facebookincubator/prophet/issues/225" rel="nofollow noreferrer">TypeError: data type "datetime" not understood When fitting Non-daily data</a></li>
</ul>
<p>Upgrading to Pandas 0.20.1+ usually helps</p>
|
python|pandas|datetime
| 2 |