Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64) |
---|---|---|---|---|---|---|
1,908,100 | 61,403,051 |
Why is Selenium giving me a NoSuchElementException error
|
<p>I have this piece of code:</p>
<pre><code>path_web = "D:/chromedriver.exe"
insta = webdriver.Chrome(path_web)
insta.get("https://www.instagram.com/")
wait = WebDriverWait(insta, 20)
insta.find_element(By.TAG_NAME, "input")
</code></pre>
<p>When I run this I get a NoSuchElementException error. But when I run the same code from cmd, I don't get the error; I find the element easily. What am I doing wrong here? Why does this code only work in cmd?</p>
|
<p>You are not using the explicit wait correctly.
Here is the working code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
insta = webdriver.Chrome()
insta.get("https://www.instagram.com")
WebDriverWait(insta, 20).until(EC.element_to_be_clickable(
    (By.TAG_NAME, "input")))
</code></pre>
<p>PS: FYI, there are 2 inputs on the page. You need to locate them differently:</p>
<pre><code>...
inputs = WebDriverWait(insta, 20).until(EC.visibility_of_all_elements_located(
    (By.TAG_NAME, "input")))
print(inputs[0]) # Username input.
print(inputs[1]) # Password input.
</code></pre>
<p>I hope it helps you!</p>
|
python-3.x|selenium|web-scraping
| 1 |
1,908,101 | 18,475,321 |
Digits in Eclipse's console after HTTP status codes
|
<p>I am a programming newbie, and I recently installed Python + Django, and successfully created a very small web app. Everything works fine, but I am puzzled about 4 digits that appear after HTTP status codes in Eclipse's console following any request I make to my server.</p>
<p>Example: [27/Aug/2013 22:53:32] "GET / HTTP/1.1" 200 1305</p>
<p>What does 1305 represent here and in every other request?</p>
|
<p>It's the size of the response, in bytes.</p>
<p>Note that this has nothing to do with Eclipse, it's just the way Django's <code>runserver</code> formats its output.</p>
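<p>For illustration, the fields of that log line can be pulled apart with plain string operations (a quick stand-alone sketch, not Django API):</p>

```python
line = '[27/Aug/2013 22:53:32] "GET / HTTP/1.1" 200 1305'

# split off the timestamp, then take the last two space-separated fields
timestamp, rest = line.split('] ', 1)
request, status, size = rest.rsplit(' ', 2)
print(status, size)  # 200 1305
```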
|
python|django|eclipse|http|pydev
| 1 |
1,908,102 | 55,387,198 |
Passing arguments to URL object
|
<p>Using the URL object in kivy, <a href="https://kivy.org/doc/stable/api-kivy.network.urlrequest.html" rel="nofollow noreferrer">https://kivy.org/doc/stable/api-kivy.network.urlrequest.html</a>, if I wanted to change the on_success function to take in another parameter, how would I pass the value to it?</p>
<pre><code>def generate_images(sensor_id):
req = UrlRequest(URL, on_success=url_success)
</code></pre>
<p>And then in the on_success having something like this</p>
<pre><code>def url_success(req, result, sensor_id):
</code></pre>
|
<p>One solution is to use <code>functools.partial()</code>:</p>
<pre><code>from functools import partial
# ...
def generate_images(sensor_id):
req = UrlRequest(URL, on_success=partial(url_success, sensor_id))
# ...
def url_success(sensor_id, req, result):
print(sensor_id, req, result)
</code></pre>
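<p>The argument order can be verified without kivy: <code>partial()</code> prepends its bound arguments, which is why <code>sensor_id</code> comes first in the callback signature above (a minimal stand-alone sketch; the callback here is a stand-in):</p>

```python
from functools import partial

def url_success(sensor_id, req, result):
    # the bound argument arrives first, then the arguments the caller passes
    return (sensor_id, req, result)

callback = partial(url_success, 42)
print(callback("req", "result"))  # (42, 'req', 'result')
```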
<p>Another solution is to use a <code>lambda</code> function:</p>
<pre><code>def generate_images(sensor_id):
req = UrlRequest(URL, on_success= lambda req, result, sensor_id=sensor_id : url_success(req, result, sensor_id))
</code></pre>
|
python|kivy|kivy-language
| 0 |
1,908,103 | 57,321,328 |
Copy & automate data from multiple workbook into an existing Master workbook without losing formatting using python
|
<p>I have multiple Excel workbooks with the same format but different monthly data. I want to copy this data into an existing worksheet in an existing Master workbook (same data format as the other workbooks) without losing the formatting in the Master file, using Python.</p>
<p>I have tried the xlwings and pywin32 libraries. The xlwings code below was able to copy the contents of a source workbook into the Result workbook, but into a separate sheet. I want the data to be copied into a specified sheet of the Master workbook! (Both libraries produced the same result.)</p>
<pre><code>#Using xlwings
import xlwings as xw
path1='C:\\Users\\G852589\\data transfer\\data1.xlsx'
#path0 = 'C:\\Users\\G852589\\data transfer\\data2.xlsx'
path2='C:\\Users\\G852589\\data transfer\\Result.xlsx'
wb1 = xw.Book(path1)
wb2 = xw.Book(path2)
ws1 = wb1.sheets(1)
ws1.api.Copy(Before=wb2.sheets(1).api)
wb2.save()
wb2.app.quit()
</code></pre>
<hr>
<pre><code>#Using pywin32
import os
import win32com.client as win32
from win32com.client import Dispatch
path1='C:\\Users\\G852589\\data transfer\\data1.xlsx'
#path0 = 'C:\\Users\\G852589\\data transfer\\data2.xlsx'
path2='C:\\Users\\G852589\\data transfer\\Result.xlsx'
xl=Dispatch('Excel.Application')
xl.Visible = True
wb1= xl.Workbooks.Open(Filename=path1)
wb2= xl.Workbooks.Open(Filename=path2)
ws1 =wb1.Worksheets(1)
ws1.Copy(Before=wb2.Worksheets(1))
wb2.Close(SaveChanges=True)
xl.Quit()
</code></pre>
<p>I need to be able to copy data from several workbook sheets into specified existing sheets in the Result workbook.</p>
<p>I have attached a screenshot to show a visual representation of what I am trying to achieve. data 1 and 2 are the original data files; the result sheet is how I want my Master workbook to look after the files have been copied.</p>
<p><a href="https://i.stack.imgur.com/0G4lM.png" rel="nofollow noreferrer">https://i.stack.imgur.com/0G4lM.png</a></p>
|
<p>Use <a href="https://pandas.pydata.org/" rel="nofollow noreferrer">pandas</a> library for this:</p>
<pre><code>import pandas as pd
import os
# collect files names
files_list = os.listdir('files_folder')
# collect data frames from each file
data_list = []
for file in files_list:
    df = pd.read_excel('files_folder/'+file)
    data_list.append(df)
# concat all data frames into one
result = pd.concat(data_list, sort=True)
result.to_excel('final_data.xlsx')
</code></pre>
|
python|excel
| 0 |
1,908,104 | 54,229,581 |
How can I check the existence of tags in XML before parsing using ElementTree?
|
<p>I'm relatively new to Python and currently trying to parse XML and convert it to CSV. My code works if the parent and child tags exist, but I receive this error message:</p>
<p>Phone = element[3][0].text
IndexError: child index out of range</p>
<p>when a tag exists in the first <code>Person</code> element but not the second.</p>
<p>I tried to put in an if statement, but it didn't quite work. This is what the xml and my original code looks like. If anyone can point me in the right the direction, I would appreciate it! </p>
<p>XML File</p>
<pre><code> <Member>
<Person>
<FirstName>JOHN</FirstName>
<LastName>DOE</LastName>
<Address>
<Address1>1234 TEST DR</Address1>
<Address2></Address2>
<City>SIMCITY</City>
<State>TD</State>
<ZipCode>12345 </ZipCode>
</Address>
<Phone>
<AreaCode>212</AreaCode>
<PhoneNumber>2223333</PhoneNumber>
</Phone>
</Person>
<Person>
<FirstName>JANE</FirstName>
<LastName>DOE</LastName>
<Address>
<Address1>1234 DEE ST</Address1>
<Address2></Address2>
<City>LCITY</City>
<State>TD</State>
<ZipCode>12345 </ZipCode>
</Address>
</Person>
</Member>
</code></pre>
<p>My Code:</p>
<pre><code> import csv
import xml.etree.ElementTree as ET
tree = ET.parse("Stack.xml")
root = tree.getroot()
xml_data_to_csv =open('Out.csv','w')
Csv_writer=csv.writer(xml_data_to_csv)
list_head=[]
count=0
for element in root.findall('Person'):
person = []
address_list = []
phone_list = []
#get head node
if count == 0:
FirstName = element.find('FirstName').tag
list_head.append(FirstName)
LastName = element.find('LastName').tag
list_head.append(LastName)
Address = element[2].tag
list_head.append(Address)
Phone = element[3].tag
list_head.append(Phone)
Csv_writer.writerow(list_head)
count = count +1
#get child node
FirstName = element.find('FirstName').text
person.append(FirstName)
LastName = element.find('LastName').text
person.append(LastName)
Address = element[2][0].text
address_list.append(Address)
Address2 = element[2][1].text
address_list.append(Address2)
City = element[2][2].text
address_list.append(City)
State = element[2][3].text
address_list.append(State)
ZipCode = element[2][4].text
address_list.append(ZipCode)
person.append(address_list)
Phone = element[3][0].text
phone_list.append(Phone)
AreaCode = element[3][1].text
phone_list.append(AreaCode)
person.append(phone_list)
#Write List_nodes to csv
Csv_writer.writerow(person)
xml_data_to_csv.close()
</code></pre>
|
<p>Try using <a href="https://docs.python.org/2/library/xml.etree.elementtree.html#xpath-support" rel="nofollow noreferrer">xpath</a> to find the tags you need, for example, you can replace this code:</p>
<pre><code>Phone = element[3][0].text
phone_list.append(Phone)
AreaCode = element[3][1].text
phone_list.append(AreaCode)
person.append(phone_list)
</code></pre>
<p>something like this:</p>
<pre><code>phone_list = [e.text for e in element.findall('Phone//')]
person.append(phone_list)
</code></pre>
<p>or, as a one-liner (in my opinion the best option):</p>
<pre><code>person.append([e.text for e in element.findall('Phone//')])
</code></pre>
<p>Thus, you will be able to bypass the error and significantly reduce the amount of code :)</p>
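<p>Here is a self-contained check on a trimmed version of the XML above, using the child selector <code>Phone/*</code>; note that it simply yields an empty list when the <code>Phone</code> tag is absent, which is what avoids the IndexError:</p>

```python
import xml.etree.ElementTree as ET

xml = """<Member>
  <Person>
    <FirstName>JOHN</FirstName>
    <Phone><AreaCode>212</AreaCode><PhoneNumber>2223333</PhoneNumber></Phone>
  </Person>
  <Person>
    <FirstName>JANE</FirstName>
  </Person>
</Member>"""

root = ET.fromstring(xml)
for person in root.findall('Person'):
    # text of every child of Phone; empty list if there is no Phone element
    phone_list = [e.text for e in person.findall('Phone/*')]
    print(person.find('FirstName').text, phone_list)
```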
|
python|xml|csv|parsing|elementtree
| 0 |
1,908,105 | 56,921,944 |
Why are both class instances changed?
|
<p>I was reading <a href="https://kite.com/blog/python/functional-programming" rel="nofollow noreferrer">this article</a> about functional programming in Python (3).</p>
<p>However I don't understand this example in the text:</p>
<pre><code>class Bus(object):
passengers = set()
def add_passenger(self, person):
self.passengers.add(person)
bus1 = Bus()
bus2 = Bus()
bus1.add_passenger('abe')
bus2.add_passenger('bertha')
bus1.passengers # returns ['abe', 'bertha']
bus2.passengers # also ['abe', 'bertha']
</code></pre>
<p>Why would calling add_passenger() on bus1 instance of the class change the passenger set of bus2?</p>
<p>And what would be the correct way to do it when you don't want this behaviour?</p>
|
<blockquote>
<p>Why would calling add_passenger() on bus1 instance of the class change the passenger set of bus2?</p>
</blockquote>
<p>Because there <em>is</em> no "passenger set <em>of <code>bus2</code></em>" (and no passenger set <em>of <code>bus1</code></em>). In this code:</p>
<pre><code>class Bus(object):
passengers = set()
</code></pre>
<p>...<code>passengers</code> is a class variable which is shared among all instances of this class, yet <em>belongs</em> not to these instances, but to the <em>class itself</em>, so when you change <code>self.passengers</code>, you actually change <code>Bus.passengers</code>, and since <code>bus1.passengers</code> and <code>bus2.passengers</code> <em>refer to</em> <code>Bus.passengers</code>, <code>bus1.passengers == bus2.passengers</code> is always true.</p>
<p>If you do not want this behaviour, implement an <code>__init__</code> method:</p>
<pre><code>class Bus:
def __init__(self):
self.passengers = set()
</code></pre>
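<p>Both behaviours can be demonstrated side by side (a minimal sketch; the class names are made up for the comparison):</p>

```python
class SharedBus:
    passengers = set()              # class attribute: one set shared by every instance

    def add_passenger(self, person):
        self.passengers.add(person)

class OwnBus:
    def __init__(self):
        self.passengers = set()     # instance attribute: a fresh set per bus

    def add_passenger(self, person):
        self.passengers.add(person)

a, b = SharedBus(), SharedBus()
a.add_passenger('abe')
print(b.passengers)  # {'abe'} -- the addition leaked into the other instance

c, d = OwnBus(), OwnBus()
c.add_passenger('abe')
print(d.passengers)  # set() -- instances stay independent
```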
|
python|python-3.x
| 5 |
1,908,106 | 25,773,816 |
clicking on a link with the same href value using selenium python
|
<p>I have some HTML that contains two links. Both links have the same href value, but the onclick and the text are different.
I wasn't sure how to access the second link.
I tried using driver.find_element_by_link_text('text'), but I get a "no such element" error.</p>
<pre><code><div id="member">
<"a href="#" onclick="add_member("abc"); return false;">run abc<"/a>
<br>
<"a href="#" onclick="add_member("def"); return false;">run def<"/a>
</div>
</code></pre>
|
<p>There are multiple options to get the desired link.</p>
<p>One option would be to get use <a href="http://selenium-python.readthedocs.org/api.html#selenium.webdriver.remote.webdriver.WebDriver.find_element_by_xpath" rel="nofollow"><code>find_element_by_xpath()</code></a> and check <code>onclick</code> attribute value:</p>
<pre><code>link = driver.find_element_by_xpath('//div[@id="member"]/a[contains(@onclick, "add_member(\"def\")")]')
link.click()
</code></pre>
<hr>
<p>Another one would be to simply find both links and get the desired one by index:</p>
<pre><code>div = driver.find_element_by_id('member')
links = div.find_elements_by_tag_name('a')
links[1].click()
</code></pre>
<p>Which option to choose depends on the whole HTML content. Hope at least one of two suggested solutions solves the issue.</p>
|
python|html|selenium|selenium-webdriver
| 1 |
1,908,107 | 23,747,054 |
Insert html into document
|
<p>I successfully created a Python script to create a document and output as PDF using UNO interface to LibreOffice Headless.</p>
<p>Now I have an HTML string I need to convert and insert into the document.</p>
<p>What I'm using right now is this:</p>
<pre><code>document.Text.insertString(cursor, "<h1>Title</h1><p>Lorem ipsum...</p>" , False)
</code></pre>
<p>But of course it is written as is, I would like to convert the HTML styles to LibreOffice Writer.</p>
<p>Is this possible?.</p>
<p>Edit:</p>
<p>I want to get the same result as when I do</p>
<pre><code>soffice --headless --convert-to pdf ipsum.html
</code></pre>
<p><em>The file ipsum.html is just the Kitchen Sink example I copied from <a href="http://html-ipsum.com/" rel="nofollow">http://html-ipsum.com/</a>.</em></p>
<p>I cannot use this because I need to add a header and footer programmatically.</p>
|
<p>You have to insert the text with the paragraph style "Heading 1" for it to be interpreted as an H1.</p>
|
python|uno
| 0 |
1,908,108 | 49,355,561 |
Cast binary to enum in python for redis conversion
|
<p>I am storing an enum in redis. When I load it, the value comes back as bytes. How can I cast it to a Python enum?</p>
<p>Example code:</p>
<pre><code>class Position(Enum):
LEFT = 10
RIGHT = 11
current_position = Position.LEFT
r.set('current_position', Position.LEFT)
loaded_current_position = r.get('current_position_side')
print(current_position) # Position.LEFT
print(loaded_current_position) # b'Position.LEFT'
</code></pre>
<p>In this example, I'd like to get <code>loaded_current_position</code> to equal <code>Position.LEFT</code> not <code>b'Position.LEFT'</code></p>
|
<p>What's being stored is the name of the Enum member. There may be a redis specific answer, but in general you can mix your Enum with another data type, such as <code>int</code>, and then cast it back to an Enum upon retrieval; something like:</p>
<pre><code>class Position(int, Enum):
LEFT = 10
RIGHT = 11
current_position = Position.LEFT
r.set('current_position', Position.LEFT)
loaded_current_position = Position(r.get('current_position_side'))
print(current_position) # Position.LEFT
print(loaded_current_position) # Position.LEFT
</code></pre>
<p>Note: for the above to work, redis must save and return an <code>int</code></p>
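<p>The cast itself can be exercised without redis. Since <code>r.get()</code> returns bytes, one robust pattern is to convert the raw value back through <code>int</code> (a sketch; the raw value here is a hypothetical stand-in for what redis would return):</p>

```python
from enum import Enum

class Position(int, Enum):
    LEFT = 10
    RIGHT = 11

raw = b"10"  # stand-in for r.get('current_position')
loaded = Position(int(raw))  # bytes -> int -> enum member
print(loaded is Position.LEFT)  # True
```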
|
python|enums|redis
| 0 |
1,908,109 | 52,164,882 |
Understanding Weights shape of an LSTM cell with 2-D input tensor
|
<p>I am building a simple LSTM model as follows:</p>
<pre><code>model = Sequential()
model.add(LSTM(10, return_sequences = False, input_shape = (8, 8)))
model.add(Activation('softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
</code></pre>
<p>Here, my input is a ndarray of shape (8,8). From the trained model out of this network, when I dump out the weights, I get the values as: </p>
<pre><code>print(model.layers.layer[0].get_weights[0].shape) # W [W_i, W_f, W_c, W_o]
print(model.layers.layer[0].get_weights[1].shape) # U
print(model.layers.layer[0].get_weights[2].shape) # b
</code></pre>
<p>Outputs:</p>
<pre><code>(8, 40)
(10, 40)
(40,)
</code></pre>
<p>W is a combined matrix of <code>W_i</code>, <code>W_f</code>, <code>W_c</code> and <code>W_o</code> each with <code>(8, 10)</code>. But this doesn't match with the equation:</p>
<pre><code>f_t = sigmoid( W_f * x + U_f * h_{t-1} + b_f )
</code></pre>
<p>If I take just the matrix dimension of the above equation, it goes like this:</p>
<pre><code>W_f' * x + U_f' * h_{t-1} + b_f
--> [10, 8] x [8, 8] + [10, 10] x [10, 1] + [10, 1]
--> [10, 8] + [10, 1] + [10, 1]
</code></pre>
<p>So looking at the above equation, it seems the shape of <code>X(input_tensor)</code> is incorrect. Only vector input shape seems to be fitting the above equation. Can someone help me understand the above equation with input shape as 2-D?</p>
<p>TIA</p>
|
<p>The equation you mentioned is for computing the output for <code>t</code>-th timestep. Therefore, only the input at timestep <code>t</code> is used (i.e. <code>x_t</code>) and not all the inputs (i.e. <code>x</code>):</p>
<pre><code>f_t = sigmoid( W_f * x_{t} + U_f * h_{t-1} + b_f )
</code></pre>
<p>As a result we would have:</p>
<pre><code>W_f' * x + U_f' * h_{t-1} + b_f
--> [10, 8] x [8, 1] + [10, 10] x [10, 1] + [10, 1]
--> [10, 1] + [10, 1] + [10, 1]
--> [10, 1] # output at timestep t
</code></pre>
<p>And this is in harmony with what LSTM layers are meant to do: they get the input at timestep <code>t</code> and give an output based on that input and the state resulting from processing the first through <code>(t-1)</code>-th timesteps.</p>
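<p>As a sanity check, the reported weight shapes follow directly from <code>units = 10</code> and <code>input_dim = 8</code>, with the four gate matrices concatenated along the last axis (plain bookkeeping, no Keras required):</p>

```python
units, input_dim, n_gates = 10, 8, 4    # the four gates: i, f, c, o

W_shape = (input_dim, units * n_gates)  # kernel: [W_i | W_f | W_c | W_o]
U_shape = (units, units * n_gates)      # recurrent kernel: [U_i | U_f | U_c | U_o]
b_shape = (units * n_gates,)            # bias

print(W_shape, U_shape, b_shape)  # (8, 40) (10, 40) (40,)
```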
|
python|keras|lstm|rnn|mnist
| 1 |
1,908,110 | 51,685,606 |
conversion of elements in dataframe to string
|
<p>I want to convert bytes to string in dataframe.</p>
<pre><code>data['CleanedText'].head()
0 b'witti littl book make son laugh loud recit c...
1 b'grew read sendak book watch realli rosi movi...
2 b'fun way children learn month year learn poem...
3 b'great littl book read nice rhythm well good ...
4 b'book poetri month year goe month cute littl ...
Name: CleanedText, dtype: object
</code></pre>
<p>I am using normal <strong>for loop</strong> to do this but it is taking too much time to convert.</p>
<pre><code>for i,j in enumerate(text_data):
data['newtext'][i] = text_data[i].decode('utf-8')
</code></pre>
<p>Is there anyway to convert bytes to string using <strong>numpy</strong> as it is computationally fast?</p>
|
<p>You can use <code>apply()</code> plus a <a href="http://book.pythontips.com/en/latest/lambdas.html" rel="nofollow noreferrer">lambda function</a>:</p>
<pre><code>data['newtext'] = data['CleanedText'].apply(lambda x: x.decode('utf-8'))
</code></pre>
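<p>The underlying operation is just a per-element decode; without pandas it is a plain comprehension (stand-alone sketch with made-up sample data):</p>

```python
text_data = [b'witti littl book', b'grew read sendak book']

# decode each bytes object into a str
decoded = [b.decode('utf-8') for b in text_data]
print(decoded)  # ['witti littl book', 'grew read sendak book']
```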
|
python-3.x|pandas|numpy|dataframe
| 0 |
1,908,111 | 19,079,859 |
Python Tkinter: AttributeError: Checkbutton instance has no attribute 'get'
|
<p>I am new to Tkinter. I am trying to create a GUI that has two spinboxes and two checkboxes whose values will be printed when a user clicks a particular button ("Start"). Here is my code so far:</p>
<pre><code>from Tkinter import *
from tkFileDialog import askopenfilename
from PIL import Image
#create TK frame
root = Tk()
#identify the dimensions of the TK frame
root.geometry("360x150")
#title the TK frame
root.title("Literature Online API")
#create a function that will return the filepath for a file provided by the user
def selectfile():
    user_defined_filepath['filename'] = askopenfilename(filetypes=[("Text","*.txt")]) # user_defined_filepath['filename'] may now be accessed in the global scope.
#create a function that will allow the "start" button to begin further processes
def startapi(event = "<Button>"):
    print windowlengthspinbox.get()
    print slideintervalspinbox.get()
    print fuzzyspellingbutton.get()
    print lemmatizedsearchbutton.get()
#create variables for the checkbuttons -- default = 0, checked = 1
fuzzyspellingvariable = IntVar()
lemmatizedsearchvariable = IntVar()
#create a caption that will appear as the first line of the grid
firstlinelabel = Label(root, text = "Please select any desired search options:")
firstlinelabel.grid(row = 0, column = 0, sticky = W)
#create a button that allows users to employ Literature Online's fuzzy spelling feature. Add the object.grid() method on new line because appending .grid() to the line in which one defines object causes Python to give the object attribute "NoneType." http://stackoverflow.com/questions/1101750/python-tkinter-attributeerror-nonetype-object-has-no-attribute-get
fuzzyspellingbutton = Checkbutton(root, text="Fuzzy Spelling", variable=fuzzyspellingvariable)
fuzzyspellingbutton.grid(row = 1, column = 0, sticky = W)
#create a button that allows users to employ Literature Online's lemmatized search feature
lemmatizedsearchbutton = Checkbutton(root, text="Lemmatized Search", variable=lemmatizedsearchvariable)
lemmatizedsearchbutton.grid(row = 2, column = 0, sticky = W)
#create a spinbox that allows users to identify desired window length
windowlengthspinbox = Spinbox(root, from_=1, to=10)
windowlengthspinbox.grid(row = 3, column = 1, sticky = W)
windowlengthspinboxlabel = Label(root, text = "Please select window size")
windowlengthspinboxlabel.grid(row = 3, column = 0, sticky = W)
#create a spinbox that allows users to identify desired window length
slideintervalspinbox = Spinbox(root, from_=1, to=10)
slideintervalspinbox.grid(row = 4, column = 1, sticky = W)
slideintervalspinboxlabel = Label(root, text = "Please select window slide interval")
slideintervalspinboxlabel.grid(row = 4, column = 0, sticky = W)
#create a button that allows users to find a file for analysis
selectfilebutton = Button(root,text="Select File",command=selectfile)
selectfilebutton.grid(row = 5, column = 0, sticky = W)
#create a start button that allows users to submit selected parameters and file
startbutton = Button(root, text="Start", command = startapi, width = 8)
startbutton.grid(row = 5, column = 1, sticky = E)
startbutton.bind("<Button>", startapi)
#startbutton.focus()
#instantiate the Tk window
root.mainloop()
</code></pre>
<p>When I click the "Start" button, the GUI prints the values of the spinboxes, then tells me that Checkbutton instance has no attribute 'get':</p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Python27\lib\lib-tk\Tkinter.py", line 1410, in __call__
    return self.func(*args)
  File "groundupgui.py", line 20, in startapi
    print fuzzyspellingbutton.get()
AttributeError: Checkbutton instance has no attribute 'get'
</code></pre>
<p>Does anyone know how I can print the values of the checkboxes? I would be grateful for any suggestions or insight.</p>
|
<p>To find the state of the Checkbutton use the <code>fuzzyspellingvariable</code> that you've attached to the Checkbutton:</p>
<pre><code>fuzzyspellingvariable.get()
</code></pre>
<hr>
<p>By the way, since the checkbutton only has two states, you could use a BooleanVar instead of an IntVar here:</p>
<pre><code>fuzzyspellingvariable = BooleanVar()
</code></pre>
|
python|user-interface|checkbox|tkinter
| 4 |
1,908,112 | 36,552,029 |
Properly formatting numbers in pygal
|
<p>I'm using <code>Pygal</code> (with <code>Python</code> / <code>Flask</code>) relatively successfully in regards to loading data, formatting colors, min/max, etc., but can't figure out how to format a number in <code>Pygal</code> using dollar signs and commas. </p>
<p>I'm getting 265763.557372895 and instead want $265,763.</p>
<p>This goes for both the pop-up boxes when hovering over a data point, as well as the y-axes. </p>
<p>I've looked through pygal.org's documentation to no avail. Does anyone know how to properly format those numbers?</p>
<p>UPDATE:</p>
<p>I'm not quite ready to mark this question "answered" as I still can't get the separating commas. However, I did find the following native formatting option in pygal. This eliminates trailing decimals (without using Python's int()) and adds a dollar sign:</p>
<p><code>graph.value_formatter = lambda y: "$%.0f" % y</code></p>
<p>Change the <code>0f</code> to <code>2f</code> if you prefer two decimals, etc.</p>
|
<p><code>graph.value_formatter = lambda y: "{:,}".format(y)</code> will get you the commas.</p>
<p><code>graph.value_formatter = lambda y: "${:,}".format(y)</code> will get you the commas and the dollar sign.
Note that this formatting is valid for Python 2.7 but does not work on 2.6.</p>
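<p>The comma separator and the fixed decimal count combine into a single format spec, and the lambda can be checked outside pygal (note that <code>.0f</code> rounds rather than truncates):</p>

```python
# same shape as pygal's value_formatter: a callable taking the raw value
value_formatter = lambda y: "${:,.0f}".format(y)
print(value_formatter(265763.557372895))  # $265,764
```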
|
python|flask|formatting|pygal
| 3 |
1,908,113 | 54,621,921 |
How to iterate through a list and use .isdigit to verify if a list of strings is made up of only digits or not
|
<p>Write a function that takes, as an argument, a list, identified by the variable aList. If the list only contains elements containing digits (either as strings as integers), return the string formed by concatenating all of the elements in the list (see the example that follows). Otherwise, return a string indicating the length of the list, as specified in the examples that follow.</p>
<p>I am just starting to learn how to code and this is my first CS class. </p>
<pre><code>def amIDigits(aList):
for element in range(aList):
if element in aList.isdigit():
bList=[]
bList.append(aList)
return str(bList)
</code></pre>
<p><code>amIDigits([“hello”, 23])</code> should return the string <code>“The length of the input is 2.”</code>
<code>amIDigits ([“10”, “111”])</code> should return the string <code>“10111”</code></p>
|
<p>First, I highly recommend having someone literally sit down and have you walk them through your thought process. It is more useful as a learner to debug your thought process than to have someone give you the answer.</p>
<p>One thing I noticed is that you created your empty list, <code>bList</code>, inside the <code>for</code> block. This will not work. You need to create an empty list to store things into before you begin <code>for</code> looping through the old list, otherwise you will be over-writing your new list every time it loops. So right now, your <code>bList.append()</code> statement is appending an element onto an empty list every time it runs. (You will get only the very last element in the <code>aList</code> stored into your <code>bList</code>.)</p>
<p>Another problem is that you use the <code>range()</code> function, but you don't need to. You want to look at each element inside the list. Range creates a sequence of numbers from 0 to whatever number is inside the parentheses: <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&uact=8&ved=2ahUKEwjwo63So7LgAhUEX60KHaUiCocQFjABegQIBBAB&url=https%3A%2F%2Fwww.w3schools.com%2Fpython%2Fref_func_range.asp&usg=AOvVaw0FRoMhazzFHuErzuq8wPRk" rel="nofollow noreferrer">range() documentation</a>. Your code tries to pass a list into <code>range()</code>, so it is invalid.</p>
<p>The "<code>for</code> blank <code>in</code> blank" statement breaks up whatever list is in the second blank and goes through each of its elements one at a time. For the duration of the <code>for</code> statement, the first blank is the name of the variable that refers to the element being looked at. so for example:</p>
<pre><code>apples = ["Granny Smith","Red Delicious","Green"]
for apple in apples:
eat(apple) #yum!
</code></pre>
<p>The <code>for</code> <code>in</code> statement is more naturally spoken as "for each blank in blank:"</p>
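<p>One more building block worth checking on its own: <code>str.isdigit()</code> works on a single string, not on a list, and integers must pass through <code>str()</code> first (a quick sketch):</p>

```python
print("123".isdigit())    # True
print("12a".isdigit())    # False
print(str(23).isdigit())  # True: an integer element can be tested after str()
```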
|
python
| 0 |
1,908,114 | 39,493,368 |
Python RSA Decryption is throwing TypeErrors
|
<p>My RSA decrypt function:</p>
<pre><code>def decrypt(ctext,private_key):
key,n = private_key
text = [chr(pow(char,key)%n) for char in ctext]
return "".join(text)
</code></pre>
<p>is sometimes throwing a <code>TypeError</code> saying that <code>pow(char,key)%n</code> produced a <code>float</code>. Why is that? I can't explain it myself, and it would be interesting to know why.</p>
<p>For example it happens when:</p>
<blockquote>
<p>ctext = [513300, 369218, 473524, 473524, 500307, 509880, 264366, 500307, 337068, 473524, 264834]<br />
private_key = [-159317, 540767]</p>
</blockquote>
|
<p>It's hard to figure out much from a tiny fragment of code. You are getting float results because your variable <code>key</code> is negative number. It is clear from the description of <a href="https://docs.python.org/3/library/functions.html#pow" rel="nofollow">pow</a> that you will get float results, which is not what you want. You really should be using the 3-argument form of pow, and your exponent should always be positive. By using standard <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)#Key_generation" rel="nofollow">RSA math</a> you can always make your exponent positive by adding an appropriate multiple of Φ(n). In your particular case, Φ(n) = (631 - 1) * (857 - 1) = 539280, so key = 379963 mod Φ(n). </p>
<p>You should correct the code that gives you a negative exponent.</p>
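<p>To make the fix concrete, here is three-argument <code>pow</code> with a positive exponent on the classic textbook parameters p = 61, q = 53 (not the question's key pair):</p>

```python
p, q = 61, 53
n = p * q                 # 3233
phi = (p - 1) * (q - 1)   # 3120
e, d = 17, 2753           # e * d == 1 (mod phi); a negative d would be reduced with d % phi

m = ord('A')              # 65
c = pow(m, e, n)          # encrypt: 3-arg pow stays in the integers, no floats
print(pow(c, d, n) == m)  # True: decryption round-trips
```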
|
python|python-3.x|rsa|typeerror|precision
| 0 |
1,908,115 | 40,738,047 |
python ThreadPoolExecutor: after submit a task, will the thread die if task failed?
|
<p>I have the following code:</p>
<p>My question is: if check_running_job raises an exception and doesn't catch it, will that cause the thread running check_running_job to die? If so, with max_workers of 3, after one thread dies, will only 2 threads serve future requests?</p>
<pre><code>with futures.ThreadPoolExecutor(max_workers = setting.parallelism_of_job_checking) as te:
while True:
cursor.execute(sql)
result = fetch_rows_as_dict(cursor)
for x in result:
id = x["id"]
te.submit(check_running_job, id,)
time.sleep(10)
</code></pre>
|
<p>ThreadPoolExecutor threads complete cleanly <em>either</em> by finishing their task or by raising an exception; a raised exception won't block the thread or prevent another worker from being assigned to it, and will cleanly set <code>.done()</code> to True just as if the task had finished correctly.</p>
<p>(You're probably aware of this, but if you call the <code>.result()</code> method of a task that has failed, its exception will be re-raised; so accessing the return value should always be done in a <code>try ... except</code> structure. If your code needs to know whether a task completed successfully or failed, this is one way of doing so.)</p>
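<p>This is easy to verify with a single worker; the <code>check_running_job</code> below is a stand-in that always raises (a minimal sketch):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def check_running_job(job_id):
    raise RuntimeError("job %s failed" % job_id)

with ThreadPoolExecutor(max_workers=1) as te:
    failed = te.submit(check_running_job, 1)
    alive = te.submit(lambda: "still alive")  # queued on the same single worker

    print(alive.result())                     # still alive: the worker survived the exception
    print(type(failed.exception()).__name__)  # RuntimeError
```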
|
python|multithreading
| 1 |
1,908,116 | 26,323,241 |
Pygtk Adjustment & Vertical Scale: How to invert curser max/min
|
<p>I have translated this <a href="http://ruby-gnome2.sourceforge.jp/hiki.cgi?tut-gtk2-numtxt-hvscales" rel="nofollow noreferrer">code</a> to Python 2.7, and it works.
But now I have a problem I can't solve by myself:
I would like to have the value 100 at the top and 0 at the bottom. How can I do this?</p>
<p>Here's my code:</p>
<pre><code>#!/usr/bin/env python
# coding: utf8
import pygtk
pygtk.require('2.0')
import gtk
class Fan():
def destroy(self, widget, donnees=None):
gtk.main_quit()
exit()
def __init__(self):
window = gtk.Window(gtk.WINDOW_TOPLEVEL)
window.set_title('Vertical Scales')
window.set_border_width(10)
window.connect("destroy", self.destroy)
window.set_size_request(-1, 150)
# Récupération des informations de surface d'affichage
self.screen = window.get_screen()
self.largeur = int(self.screen.get_width())
self.hauteur = int(self.screen.get_height())
window.move(self.largeur/2-150,self.hauteur/2-40)
window.set_default_size(100, 200)
#Adjustments
#value, min, max, step, pg-incr,pg-size
adj1 = gtk.Adjustment(50, 0, 100, 1.0, 1.0, 0.0)
adj2 = gtk.Adjustment(50, 0, 100, 1.0, 1.0, 0.0)
scale1 = gtk.VScale(adj1)
scale2 = gtk.VScale(adj2)
scale1.value_pos = gtk.POS_RIGHT
scale2.value_pos = gtk.POS_LEFT
hbox = gtk.HBox(True, 5)
hbox.pack_start_defaults(scale1)
hbox.pack_start_defaults(scale2)
window.add(hbox)
hbox.show()
window.show_all()
if __name__ == "__main__":
base = Fan()
gtk.main()
</code></pre>
<p>Here is a screenshot; I would like 100 when the cursor is at the top of the window, and 0 when the cursor is at the bottom:</p>
<p><img src="https://i.stack.imgur.com/qjwV6.jpg" alt="gtk.Adjutment example"></p>
<p>Thanks</p>
|
<p>gtk.Scale objects inherit from <a href="http://www.pygtk.org/pygtk2reference/class-gtkrange.html#method-gtkrange--set-inverted" rel="nofollow">gtk.Range</a>; you need to set the Range property "inverted" to True.</p>
<p>So put this somewhere after the statements that create your VScales:</p>
<pre><code>scale1.set_inverted(True)
scale2.set_inverted(True)
</code></pre>
<hr>
<p>PS. To run your code on my system I had to change your coding statement to </p>
<p><code># -*- coding: latin-1 -*-</code> </p>
<p>but I <em>think</em> that's because of changes the Stackoverflow formatting software does to text posted here. </p>
<hr>
<p>The official GTK+ tutorials are good (although they contain some examples that use deprecated code) but GTK is so big that it's not easy for the tutorials to cover everything. You need to get familiar with the GTK reference manual, and how it <s>hides</s> explains things. :)</p>
<p>So if you look at the page for gtk.HScale, it has a link to various properties, including gtk.Range properties. That link mentions "inverted" as one of the properties, and elsewhere on the page it explicitly describes the <code>set_inverted</code> and <code>get_inverted</code> methods. Note that functions named like that are standard in GTK for setting & getting properties, so if you know (or can guess :) ) a property name, then you also know how to set or get it.</p>
<p>The gtk.HScale page also mentions that it inherits from gtk.Scale, so that page is also worth looking at for extra info.</p>
<hr>
<p>I just noticed that you have </p>
<pre><code>self.largeur = int(self.screen.get_width())
self.hauteur = int(self.screen.get_height())
</code></pre>
<p>But you can just do </p>
<pre><code>self.largeur = self.screen.get_width()
self.hauteur = self.screen.get_height()
</code></pre>
<p>since those get_width() & get_height() methods both return <code>int</code>.</p>
<p>Rather than using window.move(), you should take a look at <code>gtk.Window.set_position()</code> and the <a href="http://www.pygtk.org/pygtk2reference/gtk-constants.html#gtk-window-position-constants" rel="nofollow">GTK Window Position Constants</a>.</p>
|
python|gtk|adjustment
| 3 |
1,908,117 | 44,314,161 |
How to Change a Dictionary's Key Based on the Value Assigned to the Same Key in Another Dictionary?
|
<p>I am trying to update a dictionary key using key value pairs in another dictionary. The two dictionaries I am trying to combine are both nested into lists:</p>
<pre><code>dictionary1 = [{ '32639': {'78549': {'526' : { 'sum': 8930.40, 'min' : 2380, 'max': 74839}}}} , {'89304': {'20384': {'152' : { 'sum': 51235.20, 'min' : 4512, 'max': 362.69}}}}, { '41526': {'45315': {'364' : { 'sum': 8985.65, 'min' : 3632.32, 'max': 4558.15}}}}]
dictionary2 = [{'32639':'90283'}, {'49034': '89203'}, {'28942': '39024'}, {'41526':'24903'} ]
</code></pre>
<p>I want the resulting dictionary to look exactly like dictionary1 however if the key of the dictionaries in dictionary1 is in the keys in the dictionaries of dictionary2 they should be changed. </p>
<p>Resulting dictionary:</p>
<pre><code>new_dictionary = [{ '90283': {'78549': {'526' : { 'sum': 8930.40, 'min' : 2380, 'max': 74839}}}} , {'89304': {'20384': {'152' : { 'sum': 51235.20, 'min' : 4512, 'max': 362.69}}}}, { '24903': {'45315': {'364' : { 'sum': 8985.65, 'min' : 3632.32, 'max': 4558.15}}}}]
</code></pre>
<p>I have attempted:</p>
<pre><code>list1 = []
for d1, d2 in zip(dictionary1, dictionary2):
for key, value in d1.iteritems():
new_dict = {}
if key in d2:
new_dict[d2[key]] = value
list1.append(new_dict)
else:
new_dict[key] = value
list1.append(new_dict)
</code></pre>
<p>However, it is not working correctly. It works on this sample data, but the program only iterates through dictionary1 for as many elements as dictionary2 has. So when I run this with a list of 841 dictionaries (dictionary 1) and a list of 53 dictionaries (dictionary 2), it only converts the first 53 keys in dictionary 1 before it quits. </p>
|
<p>Try something like this:</p>
<pre><code>ds1 = [{ '32639': {'78549': {'526' : { 'sum': 8930.40, 'min' : 2380, 'max': 74839}}}} , {'89304': {'20384': {'152' : { 'sum': 51235.20, 'min' : 4512, 'max': 362.69}}}}, { '41526': {'45315': {'364' : { 'sum': 8985.65, 'min' : 3632.32, 'max': 4558.15}}}}]
ds2 = [{'32639':'90283'}, {'49034': '89203'}, {'28942': '39024'}, {'41526':'24903'} ]
def trans(ds1, ds2):
for d1 in ds1:
for d2 in ds2:
ckeys = d1.keys() & d2.keys()
ukeys = d1.keys() - d2.keys()
for ck in ckeys:
yield (d2[ck], d1[ck])
for uk in ukeys:
yield (uk, d1[uk])
print(dict(trans(ds1, ds2)))
</code></pre>
<p>I got output:</p>
<p><code>{'90283': {'78549': {'526': {'sum': 8930.4, 'min': 2380, 'max': 74839}}}, '32639': {'78549': {'526': {'sum': 8930.4, 'min': 2380, 'max': 74839}}}, '89304': {'20384': {'152': {'sum': 51235.2, 'min': 4512, 'max': 362.69}}}, '41526': {'45315': {'364': {'sum': 8985.65, 'min': 3632.32, 'max': 4558.15}}}, '24903': {'45315': {'364': {'sum': 8985.65, 'min': 3632.32, 'max': 4558.15}}}}
</code></p>
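<p>An alternative that avoids the nested looping is to first flatten dictionary2 into a single lookup dict and then rebuild dictionary1 with comprehensions; this preserves the list-of-dicts shape the question asks for and renames only the matching top-level keys. A sketch (the variable names other than the two input lists are mine):</p>

```python
dictionary1 = [{'32639': {'78549': {'526': {'sum': 8930.40, 'min': 2380, 'max': 74839}}}},
               {'89304': {'20384': {'152': {'sum': 51235.20, 'min': 4512, 'max': 362.69}}}},
               {'41526': {'45315': {'364': {'sum': 8985.65, 'min': 3632.32, 'max': 4558.15}}}}]
dictionary2 = [{'32639': '90283'}, {'49034': '89203'}, {'28942': '39024'}, {'41526': '24903'}]

# Flatten the list of single-pair dicts into one lookup table.
mapping = {k: v for d in dictionary2 for k, v in d.items()}

# Rebuild dictionary1, swapping each top-level key for its mapped value
# (or keeping it unchanged when there is no mapping).
new_dictionary = [{mapping.get(k, k): v for k, v in d.items()} for d in dictionary1]

print([list(d) for d in new_dictionary])  # [['90283'], ['89304'], ['24903']]
```

<p>Because the lookup table is built once, this stays linear in the sizes of the two lists instead of comparing every pair of dictionaries.</p>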
|
python|python-2.7|list|dictionary|iteration
| 0 |
1,908,118 | 32,641,391 |
Static Methods in Python
|
<p>I am trying to simulate the rolling of a die and have used this code</p>
<pre><code>class dicesimulator:
def __init__(self, list = [0,0,0,0,0,0]):
self.list = list
@staticmethod
def diceroller():
outcome = random.randit(0,5)
print outcome + 1
mydice = dicesimulator()
print mydice.diceroller
</code></pre>
<p>However, when I run the code it prints the function's string representation rather than a number. Why is this happening? Also, as far as I am aware, I should be able to call a static method on the class itself, i.e. dicesimulator.diceroller. However, that also returns the function representation rather than a number.</p>
|
<p>So there are a couple of issues. Firstly, there's not really a good reason to use a static method here; it's essentially no different from just making a plain function. Secondly, you aren't actually returning the number from within your diceroller() method. Thirdly, you aren't actually calling diceroller: you forgot the parentheses, so you're printing the function object itself (its string representation) instead.</p>
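<p>Putting those fixes together (and also correcting the <code>random.randit</code> typo and the missing <code>import random</code>, which the answer above doesn't mention), a working sketch might look like this:</p>

```python
import random

class DiceSimulator:
    @staticmethod
    def diceroller():
        # random.randint is inclusive on both ends, so this yields 1..6
        return random.randint(0, 5) + 1

# Call with parentheses, either on an instance or on the class itself
print(DiceSimulator().diceroller())
print(DiceSimulator.diceroller())
```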
|
python|static-methods
| 0 |
1,908,119 | 32,606,631 |
Python - condition based on query result
|
<p>I have code which saves query results to a csv file. Is it possible to make a condition based on the query results? For example:</p>
<p><strong>if query results contains string "Test" then execute another query and save it to existing file.</strong></p>
<p>Should I use "fetchone" method?</p>
<p>My code:</p>
<pre><code>try:
connection = cx_Oracle.connect('TAGATE','TAGATE','GPSPL')
cursor = connection.cursor()
print "connected"
try:
query = """select example1 , example 2 from dbo1""".format(line_name)
tmp = cursor.execute(query)
columns = [i[0] for i in cursor.description]
results = tmp.fetchall()
cursor.close()
except:
pass
except:
print IOError
filename='{0}.csv'.format(line_name)
if results:
csv_file = open(filename,'wb')
myFile = csv.writer(csv_file)
myFile.writerow(columns)
myFile.writerows(results)
#i think another statment should be here
</code></pre>
|
<p>I'd do the following:</p>
<pre><code>...
if results:
    for ex1, ex2 in results:
        # I assumed the string was to be found in field 'example1'
        if 'Test' in ex1:
            # Save your results as you do with csv_file = open(filename,'wb') and so on
            pass
</code></pre>
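<p>To address the "save it to the existing file" part: once the first result set is written, you can test it for the string and append the second query's rows by reopening the CSV in append mode. A sketch with made-up in-memory rows standing in for the two <code>fetchall()</code> results (no database involved, and written in Python 3 style with <code>newline=''</code> instead of <code>'wb'</code>):</p>

```python
import csv

results = [('Test case', 10), ('other', 20)]   # pretend rows from the first query
second_results = [('extra row', 30)]           # pretend rows from the second query

# Write the first result set.
with open('out.csv', 'w', newline='') as f:
    csv.writer(f).writerows(results)

# If any field of the first result set contains "Test", run the second
# query (simulated here) and append its rows to the same file.
if any('Test' in str(field) for row in results for field in row):
    with open('out.csv', 'a', newline='') as f:
        csv.writer(f).writerows(second_results)
```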
|
python|oracle|if-statement
| 0 |
1,908,120 | 43,097,347 |
delete occurrence in list with regex in python
|
<p>So I'm reading an .avi file and storing its index in a list.
Each element corresponds to an indexed movie frame.
I'm working on a script to delete all the occurrences in the list that start with:</p>
<blockquote>
<p>00dc\x10\x00\x00\x00</p>
</blockquote>
<p>Here's a short version of the code</p>
<pre><code> list = ['00dc\x00\x00\x00\x00\x025,\x00\x08\x19\x00\x00',
'00dc\x00\x00\x00\x00\x12N,\x00\x0b6\x00\x00',
'00dc\x10\x00\x00\x00&\x84,\x00\x95D\x01\x00',
'00dc\x00\x00\x00\x00\xc4\xc8-\x00\xe0l\x00\x00',
'00dc\x00\x00\x00\x00\xac5.\x00t.\x00\x00']
regex1 = b'00dc\x10\x00\x00\x00.{8}'
newlist = [x for x in list if x != regex1]
</code></pre>
<p>Aaand it doesn't do anything, the list stays the same when I expected the third element to be popped out.</p>
<p>I don't think it matches anything because even when I set the regex1 to :</p>
<blockquote>
<p>b'.*'</p>
</blockquote>
<p>The list stays the same. Having trouble figuring out where the issue is coming from. Thanks</p>
|
<p>Regular-expression matching isn't built into Python's syntax; you need to import the <code>re</code> module and match with <code>re.match</code> instead of comparing the string against the pattern with <code>!=</code>. </p>
<pre><code>import re
list = ['00dc\x00\x00\x00\x00\x025,\x00\x08\x19\x00\x00',
'00dc\x00\x00\x00\x00\x12N,\x00\x0b6\x00\x00',
'00dc\x10\x00\x00\x00&\x84,\x00\x95D\x01\x00',
'00dc\x00\x00\x00\x00\xc4\xc8-\x00\xe0l\x00\x00',
'00dc\x00\x00\x00\x00\xac5.\x00t.\x00\x00']
pattern = re.compile(b'00dc\x10\x00\x00\x00.{8}')
newlist = [x for x in list if not re.match(pattern,x)]
</code></pre>
<p>Output: </p>
<pre><code>['00dc\x00\x00\x00\x00\x025,\x00\x08\x19\x00\x00', '00dc\x00\x00\x00\x00\x12N,\x00\x0b6\x00\x00', '00dc\x00\x00\x00\x00\xc4\xc8-\x00\xe0l\x00\x00', '00dc\x00\x00\x00\x00\xac5.\x00t.\x00\x00']
</code></pre>
|
python|regex
| 1 |
1,908,121 | 37,117,798 |
TensorFlow missing CPU Op for FFT (InvalidArgumentError: No OpKernel was registered to support Op 'FFT' with these attrs)
|
<p>I am new to tensorflow and want to create a graph which performs fft on real data, similar to numpy's rfft function:</p>
<pre><code>def rfftOp(_in, name='rfft', graph=tf.get_default_graph()):
with graph.as_default():
with tf.device('/cpu:0'):
with tf.name_scope(name):
cast = tf.complex(tf.cast(_in, tf.float32, name='cast_to_float32'), tf.constant(0.0, dtype=tf.float32), name='cast_to_complex')
fftOp = tf.fft(cast, name='fft')
half, _ = tf.split(0, 2, fftOp, name='split')
double = tf.mul(tf.constant(2.0, dtype=tf.complex64), half)
return double
sess = tf.InteractiveSession()
inp = tf.placeholder(np.float64, shape=(256,), name='input')
fftOp = rfftOp(inp)
print(sess.run(fftOp, feed_dict={inp: d}))
</code></pre>
<p>However, I am getting the following error message:</p>
<pre><code>---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-18-0f6d789c912c> in <module>()
6 inp = tf.placeholder(np.float64, shape=(256,), name='input')
7 fftOp = rfftOp(inp)
----> 8 print(sess.run(fftOp, feed_dict={inp: d}))
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
338 try:
339 result = self._run(None, fetches, feed_dict, options_ptr,
--> 340 run_metadata_ptr)
341 if run_metadata:
342 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
562 try:
563 results = self._do_run(handle, target_list, unique_fetches,
--> 564 feed_dict_string, options, run_metadata)
565 finally:
566 # The movers are no longer used. Delete them.
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
635 if handle is None:
636 return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
--> 637 target_list, options, run_metadata)
638 else:
639 return self._do_call(_prun_fn, self._session, handle, feed_dict,
/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.pyc in _do_call(self, fn, *args)
657 # pylint: disable=protected-access
658 raise errors._make_specific_exception(node_def, op, error_message,
--> 659 e.code)
660 # pylint: enable=protected-access
661
InvalidArgumentError: No OpKernel was registered to support Op 'FFT' with these attrs
[[Node: rfft_4/fft = FFT[_device="/device:CPU:0"](rfft_4/cast_to_complex)]]
Caused by op u'rfft_4/fft', defined at:
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/usr/local/lib/python2.7/dist-packages/ipykernel/__main__.py", line 3, in <module>
app.launch_new_instance()
File "/usr/local/lib/python2.7/dist-packages/traitlets/config/application.py", line 596, in launch_instance
app.start()
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelapp.py", line 442, in start
ioloop.IOLoop.instance().start()
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/ioloop.py", line 162, in start
super(ZMQIOLoop, self).start()
File "/usr/local/lib/python2.7/dist-packages/tornado/ioloop.py", line 883, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python2.7/dist-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tornado/stack_context.py", line 275, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 276, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 228, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/kernelbase.py", line 391, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python2.7/dist-packages/ipykernel/ipkernel.py", line 199, in do_execute
shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2723, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2825, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 2885, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-18-0f6d789c912c>", line 7, in <module>
fftOp = rfftOp(inp)
File "<ipython-input-17-e44d5219afe4>", line 6, in rfftOp
fftOp = tf.fft(cast, name='fft')
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 518, in fft
return _op_def_lib.apply_op("FFT", in_=in_, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 655, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2154, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1154, in __init__
self._traceback = _extract_stack()
</code></pre>
<p>indicating that the Op for tensorflows fft is missing.
I've found a similar <a href="https://github.com/tensorflow/tensorflow/issues/386" rel="nofollow">issue</a>, but it focus on the GPU Op.
I am using the <a href="https://hub.docker.com/r/tensorflow/tensorflow/" rel="nofollow">tensorflow/tensorflow</a> docker image.</p>
<p>So, is there anything missing in the docker image or do I have to use tensorflows fft another way?</p>
|
<p>You are forcing TensorFlow to try to run the FFT operation on CPU by calling <code>with tf.device('/cpu:0')</code>. However the FFT operations are currently only implemented for GPU, which is why you end up with an error message.</p>
<p>If you have a GPU available you can simply remove the call to tf.device(). TensorFlow will then automatically run the FFT operation on GPU.</p>
|
python|tensorflow
| 2 |
1,908,122 | 19,959,556 |
How to open an M-JPEG video in an AVI container using Python 2.7.5 with OpenCV 2.4.6?
|
<p>I'm trying to open a *.avi file with OpenCV using Python(x, y) and every time I run the following code, the file fails to open.</p>
<pre><code>import cv2
from cv2 import cv
# Set up
fn = "./video.avi"
v = cv2.VideoCapture()
# Configure codec:
# From ffprobe, codec is M-JPEG ==> MJPG;
codec = cv.CV_FOURCC("M", "J", "P", "G")
v.set(cv.CV_CAP_PROP_FOURCC, codec)
# Test
v.open(fn)
print fn
if not v.isOpened():
print "Video failed to open"
else:
print "Video opened!"
</code></pre>
<p>The following is the output of running</p>
<pre><code>ffprobe video.avi
</code></pre>
<p>Output:</p>
<pre><code>libavutil 52. 52.100 / 52. 52.100
libavcodec 55. 41.100 / 55. 41.100
libavformat 55. 21.100 / 55. 21.100
libavdevice 55. 5.100 / 55. 5.100
libavfilter 3. 90.102 / 3. 90.102
libswscale 2. 5.101 / 2. 5.101
libswresample 0. 17.104 / 0. 17.104
libpostproc 52. 3.100 / 52. 3.100
Input #0, avi, from 'L4R01CA_T05032300.avi':
Duration: 00:00:06.01, start: 0.000000, bitrate: 16136 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj420p(pc), 720x480, 29.97 tbr, 29.97 tbn, 29.97 tbc
Stream #0:1: Audio: ac3 ([0] [0][0] / 0x2000), 48000 Hz, stereo, fltp, 256 kb/s
</code></pre>
<p>Anyone have any ideas as to why OpenCV cannot open this file? I have also tried to open the file without setting cv.CV_CAP_PROP_FOURCC to MJPG with the same results.</p>
<p>Thank you.</p>
|
<p>Setting the FourCC is not necessary. It is enough to just use</p>
<pre><code>c = cv2.VideoCapture(fn)
</code></pre>
<p>Most probable causes why it doesn't work:</p>
<ol>
<li>You don't have the correct codecs installed (Try installing a codec pack).</li>
<li>Make sure the video file is at the correct location.</li>
<li>Make sure you have the correct opencv_ffmpeg_xx.dll and the program can find it.</li>
</ol>
|
python|opencv|video
| 1 |
1,908,123 | 69,839,958 |
Display a list of products in a custom Shopify app
|
<p>I have updated the code to the following:</p>
<pre><code>@app.route('/products_linking', methods=['GET'])
# @helpers.verify_web_call
def products_linking():
shop = request.args.get('shop')
global ACCESS_TOKEN
if ACCESS_TOKEN:
url = f'https://{shop}.myshopify.com/admin/api/2021-10/'
def get_products():
endpoint = 'products.json'
r = requests.get(url + endpoint)
return r.json()
products = get_products()
print(products)
print(shop)
return render_template('products_linking.html', shop=shop, products=products)
</code></pre>
<p>When I click on the link for /products_linking in my app, I get {'errors': '[API] Invalid API key or access token (unrecognized login or wrong password)'} for products and None for the shop variable. If the code inside the if ACCESS_TOKEN: block is being executed, why am I getting an invalid access token error? And why is the value of the shop variable coming back as None?</p>
|
<p>You're hitting a wrong URL. The Shopify API endpoint to get products is</p>
<p><code>https://{shop}.myshopify.com/admin/api/2021-10/products.json</code></p>
<p>You also need to include an access token in the <code>X-Shopify-Access-Token</code> header.</p>
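<p>Put together, the request can be built like this sketch, which uses only the standard library and stops short of sending anything over the network so the final URL and header are visible. The shop name and token below are placeholders, not real credentials:</p>

```python
from urllib.request import Request

shop = "example-shop"               # hypothetical shop name
access_token = "shpat_placeholder"  # hypothetical access token

url = f"https://{shop}.myshopify.com/admin/api/2021-10/products.json"
req = Request(url, headers={"X-Shopify-Access-Token": access_token})

print(req.full_url)
print(req.header_items())
```

<p>Sending the same URL and header with <code>requests.get</code> (as the question's code does) should then return the product list instead of the invalid-token error, provided the token itself is valid.</p>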
|
python|json|flask|python-requests|shopify
| 1 |
1,908,124 | 55,900,401 |
django.core.exceptions.ImproperlyConfigured. When I tried running a python program on my local machine, I encountered this error
|
<p>When I tried running a python program on my local machine, I encountered this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Thomas\AppData\Local\Programs\Python\Python37\lib\threading.py", line 917, in _bootstrap_inner
self.run()
File "C:\Users\Thomas\AppData\Local\Programs\Python\Python37\lib\threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Thomas\AppData\Local\Programs\Python\Python37\lib\site-packages\django\utils\autoreload.py", line 54, in wrapper
fn(*args, **kwargs)
File "C:\Users\Thomas\AppData\Local\Programs\Python\Python37\lib\site-packages\django\core\management\commands\runserver.py", line 137, in inner_run
handler = self.get_handler(*args, **options)
File "C:\Users\Thomas\AppData\Local\Programs\Python\Python37\lib\site-packages\django\contrib\staticfiles\management\commands\runserver.py", line 27, in get_handler
handler = super().get_handler(*args, **options)
File "C:\Users\Thomas\AppData\Local\Programs\Python\Python37\lib\site-packages\django\core\management\commands\runserver.py", line 64, in get_handler
return get_internal_wsgi_application()
File "C:\Users\Thomas\AppData\Local\Programs\Python\Python37\lib\site-packages\django\core\servers\basehttp.py", line 50, in get_internal_wsgi_application
) from err
django.core.exceptions.ImproperlyConfigured: WSGI application 'mysite.wsgi.application' could not be loaded; Error importing module.
</code></pre>
<p>There are also similar posts online, and I tried the suggested solution from each post, but none of them could solve my error.</p>
<p>Here is the content of my settings.py.</p>
<p>My Python version: 3.7.3.
My Django version: 2.2.</p>
<pre><code>"""
Django settings for mysite project.
Generated by 'django-admin startproject' using Django 2.0.7.
For more information on this file, see
https://docs.djangoproject.com/en/2.0/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/2.0/ref/settings/
"""
import os
import django_heroku
import dj_database_url
config = dj_database_url.config
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.0/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
DEBUG = False
# Experimenting here
ALLOWED_HOSTS = ["localhost", "https://stormy-stream-43261.herokuapp.com/"]
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'blog',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'mysite.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'mysite.wsgi.application'
# WSGI_APPLICATION = 'mysite.wsgi.application'
# from django.core.exceptions import ImproperlyConfigured
# def get_env_variable(var_name):
# try:
# return os.environ[var_name]
# except KeyError:
# error_msg = "Set the %s environment variable" % var_name
# raise ImproperlyConfigured(error_msg)
# Database
# https://docs.djangoproject.com/en/2.0/ref/settings/#databases
# one for development (maybe move that to local postgres) and another for running and development
DATABASES = {
# 'default': {
# 'ENGINE': 'django.db.backends.postgresql',
# 'NAME': 'xsboediz',
# 'USER': 'xsboediz',
# 'PASSWORD': 'Qcn3tmpnSgp1QHgk_dsCvukFFPJAofm0',
# 'HOST': 'baasu.db.elephantsql.com',
# 'PORT': '5432'
# }
#
# 'default': dj_database_url.config(
# default=config('DATABASE_URL')
# )
#
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'mydatabase',
}
}
# Password validation
# https://docs.djangoproject.com/en/2.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.0/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.0/howto/static-files/
#STATIC_URL = './blog/staticfiles/'
STATIC_URL = '/static/'
# STATICFILES_DIRS = (
# #'/Diversity_Policy_Site/blog/static',
# os.path.join(BASE_DIR, '/static/'),
# )
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
django_heroku.settings(locals())
</code></pre>
<p>Any help or suggestion is appreciated.</p>
|
<p>Try commenting out or removing <code>'django.contrib.auth.middleware.AuthenticationMiddleware',</code> from the <code>MIDDLEWARE</code> list in settings.py.</p>
|
python|django|python-3.x
| 0 |
1,908,125 | 55,772,915 |
Interpreters of AIML 2.0
|
<p>I am new to AIML 2.0. Can anyone suggest the available interpreters that support AIML 2.0, and some useful resources?</p>
<p>Thanks in Advance.</p>
|
<p>Program AB also fully supports AIML 2 and was written by Dr Richard Wallace who invented AIML
<a href="https://code.google.com/archive/p/program-ab/" rel="nofollow noreferrer">https://code.google.com/archive/p/program-ab/</a></p>
<p>A good resource for learning AIML is this free course:
<a href="https://www.udemy.com/course/artificial-intelligence-markup-language/" rel="nofollow noreferrer">https://www.udemy.com/course/artificial-intelligence-markup-language/</a></p>
|
python-3.7|aiml
| 1 |
1,908,126 | 55,682,577 |
How to solve the "name 'Synset' is not defined" error in Python
|
<p>I am facing a syntax problem: it shows the error that the name Synset is not defined.</p>
<p>code in python:</p>
<pre><code>Synset('cookbook.n.01')
wordnet.synsets('cooking')[0].examples()
['cooking can be a great art', 'people are needed who have experiencein
cookery', 'he left the preparation of meals to his wife']
syn.hypernyms()
wordnet.synsets(word)
[Synset('reference_book.n.01')]
syn.hypernyms()[0].hyponyms()
[Synset('annual.n.02'), Synset('atlas.n.02'), Synset('cookbook.n.01'),
Synset('directory.n.01'), Synset('encyclopedia.n.01'),
Synset('handbook.n.01'), Synset('instruction_book.n.01'),
Synset('source_book.n.01'), Synset('wordbook.n.01')]
syn.root_hypernyms()
[Synset('entity.n.01')]
</code></pre>
<p>Compiling error in Spyder:</p>
<blockquote>
<p>File "C:/Users/atiqpc/Documents/spyder/firstprogram.py", line 68, in
Synset('cookbook.n.01') NameError: name 'Synset' is not
defined</p>
</blockquote>
|
<p>Looking at your code, you seem to have copied both the input and output lines from an online example, which is what leads to this error.</p>
<p>The line <code>>>> wordnet.synsets('cookbook')[0]</code> gives the output <code>Synset('cookbook.n.01')</code>.</p>
<p>The rest of the lines in the code you posted follow the same format: a function call on one line, followed by the output it would print, without differentiating between the two.</p>
<p>Also, you can't use <code>Synset</code> by itself, since your code has no idea what it is. It does know <code>wordnet.synsets</code>, because you imported wordnet. (I assume you have used <code>from nltk.corpus import wordnet</code>, since you are using wordnet in other lines.)</p>
|
python-3.x
| 0 |
1,908,127 | 73,181,818 |
Why can't I connect the clicked event of a QPushButton to a function inside a Python class?
|
<p>I'm trying to connect the clicked event of a QPushButton inside a class (MyButton) to a function inside the same class (print_hello_world) in PyQt5. I'm expecting to print "Hello World" when the user clicks on the button. Can anyone explain why the following code does not work? (i.e. clicking on the button does not print anything)</p>
<pre><code>import sys
from PyQt5.QtWidgets import *
class MyButton:
def __init__(self, parent):
self.parent = parent
self.push_button = QPushButton('Print', parent)
self.push_button.clicked.connect(self.print_hello_world)
def print_hello_world(self):
print("Hello World")
class Window(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
button = MyButton(parent=self)
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
</code></pre>
<p>The above code will work if I add <code>button.push_button.clicked.connect(lambda:button)</code> after I instantiate the MyButton object in the Window class. Can anyone explain to me why the first code does not work and the following code works?</p>
<pre><code>import sys
from PyQt5.QtWidgets import *
class MyButton:
def __init__(self, parent):
self.parent = parent
self.push_button = QPushButton('Print', parent)
self.push_button.clicked.connect(self.print_hello_world)
def print_hello_world(self):
print("Hello World")
class Window(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
button = MyButton(parent=self)
button.push_button.clicked.connect(lambda:button)
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
</code></pre>
<p>What is an alternative way to make the first code work by modifying the <code>MyButton</code> class without having to add extra lines of code after instantiating the class?</p>
|
<p>'button' is destroyed by garbage collection inside Window's __init__ method: because it is not stored as an instance variable, 'button' goes out of scope (and its reference count drops to zero) as soon as __init__ finishes.</p>
<p>So this version works as intended:</p>
<pre><code>import sys
from PyQt5.QtWidgets import *
class MyButton:
def __init__(self, parent):
self.parent = parent
self.push_button = QPushButton('Print', parent)
self.push_button.clicked.connect(self.print_hello_world)
def print_hello_world(self):
print("Hello World")
class Window(QMainWindow):
def __init__(self):
QMainWindow.__init__(self)
self.button = MyButton(parent=self)
App = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(App.exec())
</code></pre>
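<p>The lifetime rule at work here is plain Python and can be seen without Qt at all: an object held only by a local variable becomes collectable the moment __init__ returns, while an instance attribute keeps it alive for as long as its owner lives. A minimal illustration using weakref (the class names are made up for the demo):</p>

```python
import gc
import weakref

class Widget:
    pass

class WindowLocal:
    def __init__(self):
        w = Widget()                 # local only: reference dropped when __init__ returns
        self.probe = weakref.ref(w)

class WindowAttr:
    def __init__(self):
        self.w = Widget()            # instance attribute: kept alive with the window
        self.probe = weakref.ref(self.w)

a = WindowLocal()
b = WindowAttr()
gc.collect()
print(a.probe() is None)   # True  - the local Widget was collected
print(b.probe() is None)   # False - still referenced via b.w
```

<p>In the Qt case, the collected object is the MyButton wrapper, which is why its signal connection dies with it; storing it as <code>self.button</code> avoids that.</p>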
|
python|pyqt
| 0 |
1,908,128 | 50,055,285 |
Sum dataframe by conditionally defined groups
|
<p>I have a pandas DataFrame and I want to sum across different rows, separated by the number 0. E.g. I have this DF here:</p>
<pre><code>data= DataFrame({'A':['a','b','c','d','e','f','g','h','i'],'B':[1,2,0,3,2,0,0,3,4]})
</code></pre>
<p>I want to generate this DF:</p>
<pre><code>data2 = DataFrame({'AA': ['a','d','h'], 'BB': [3,5,7]})
</code></pre>
|
<p>One possible approach is to define some groups using the function cumsum:</p>
<pre><code>data = pd.DataFrame({'A':['a','b','c','d','e','f','g','h','i'],'B':[1,2,0,3,2,0,0,3,4]})
data['groups'] = (data['B'] == 0).cumsum()
# Out
# A B groups
# 0 a 1 0
# 1 b 2 0
# 2 c 0 1
# 3 d 3 1
# 4 e 2 1
# 5 f 0 2
# 6 g 0 3
# 7 h 3 3
# 8 i 4 3
</code></pre>
<p>Then, define an array with the output indices, which except for the first are one below the first occurrence of each group:</p>
<pre><code>indexes = data.loc[data.drop_duplicates('groups').index.values+1]['A'].values
indexes[0] = data['A'].values[0]
</code></pre>
<p>And eventually, grouping by, summing column a for each group and assigning the new AA column.</p>
<pre><code>sum_data = data.groupby('groups').sum().assign(AA=indexes).reset_index(drop=True)
# Out
# B AA
# 0 3 a
# 1 5 d
# 2 0 g
# 3 7 h
</code></pre>
<p>if having the row [2, 0, g] present is a nuisance, this last line can be added:</p>
<pre><code>sum_data = sum_data[sum_data['B'] != 0]
# Out
# B AA
# 0 3 a
# 1 5 d
# 3 7 h
</code></pre>
|
python|pandas|dataframe|sum|rows
| 1 |
1,908,129 | 66,723,606 |
Python dataframe select rows if any of the columns is greater than a value
|
<p>I want to consider only rows which have one or more columns greater than a value. My actual df has 26 columns. I wanted an iterative solution. Below I am giving an example with three columns.</p>
<p>My code:</p>
<pre><code>df = pd.DataFrame(np.random.randint(5, 15, (10, 3)), columns=list('abc'))
# In this dataframe I want to select rows that have one or more columns greater than 10.
# solution dataframe
sdf = df[(df['a']>10)|(df['b']>10)|(df['c']>10)]
</code></pre>
<p>How do I apply the same solution with 26 columns? I don't think writing all 26 columns inside <code>[]</code> is a pythonic way.</p>
|
<p>You can use the apply function to calculate the max over all the columns of each row. A row qualifies when that max is greater than 10, i.e. when at least one column exceeds 10.</p>
<pre><code>df[df.apply(lambda x: max(x), axis=1) > 10]
</code></pre>
|
python|pandas|dataframe|numpy
| 2 |
1,908,130 | 63,798,920 |
How do I make a plotly timeline with int nr of months on the x-axis
|
<p>I am trying to make a plotly timeline where the x_start and x_end arguments are such that the x-axis is an integer nr of months instead of a date format. I have tried to use integers or arrays with integers with no luck.</p>
<p>Example code that fails:</p>
<pre><code>df1 = pd.DataFrame(data = {'Task':['T1.1'], 'Start':np.array((1,)), 'Finish':np.array((5,)), 'text':[""], 'WorkPackage':['WP1']})
fig = px.timeline(df1, x_start="Start", x_end="Finish", y="Task", text = "text", color = 'WorkPackage', width = None, height = None)
</code></pre>
|
<p>If you still need help with this, it looks like your question was answered here: <a href="https://stackoverflow.com/questions/66078893/plotly-express-timeline-for-gantt-chart-with-integer-xaxis">Plotly Express timeline for Gantt Chart with integer xaxis?</a></p>
<p>Using what was suggested in the link, I made the following code and it worked.</p>
<pre><code># Import libraries
import numpy as np
import pandas as pd
import plotly.express as px
# Create some data in a dictionary
mydict = {'Task': ['Task 1', 'Task 2', 'Task 3', 'Task 4'],
'Start': np.arange(1, 5),
'Finish': np.arange(6, 10)
}
# Make a DataFrame object from the dictionary
df = pd.DataFrame(mydict)
# Add a column for the delta
df['Delta'] = df['Finish'] - df['Start']
# Create Figure
fig = px.timeline(
df,
x_start='Start',
x_end='Finish',
y='Task',
color='Task',
)
# Update the layout
fig.layout.xaxis.type = 'linear'
# I found I needed to do this to get the tasks to show up as different colors
for i in np.arange(len(df)):
fig.data[i].x = df['Delta'].tolist()
# Show the figure
fig.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/l3CJe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l3CJe.png" alt="enter image description here" /></a></p>
|
pandas|plotly|timeline|gantt-chart
| 0 |
1,908,131 | 52,909,718 |
Create an Array with the numbers of post of an other array :
|
<p>So here is my problem :</p>
<pre><code>toto = [1,4,5,2,4,7]
</code></pre>
<p>this array has 6 items</p>
<p>So I would like to obtain an array with 6 items going from 0 to 5 (result below) from the toto array</p>
<pre><code>print(result)
..[0,1,2,3,4,5]
</code></pre>
<p>Something more elegant than :</p>
<pre><code>result = []
i = 0
for t in toto :
    result.append(i)
    i = i + 1
</code></pre>
|
<p>You could use <code>range</code> between <code>0</code> and the length of <code>toto</code>:</p>
<pre><code>result = list(range(0, len(toto)))
</code></pre>
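<p>As a quick check (the <code>0</code> start is the default, so it can even be omitted):</p>

```python
toto = [1, 4, 5, 2, 4, 7]

# range(len(toto)) already starts at 0, so the explicit 0 is optional
result = list(range(len(toto)))
print(result)
```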
|
python
| 1 |
1,908,132 | 65,446,572 |
append row values from one df to another if no duplicates in pandas
|
<p>I have theses two dfs</p>
<pre><code>
df1 = pd.DataFrame({'pupil': ["sarah", "john", "fred"],
'class': ["1a", "1a", "1a"]})
df2 = pd.DataFrame({'pupil_mixed': ["sarah", "john", "lex"],
'class': ["1a", "1c", "1a"]})
</code></pre>
<p>I want to append the row values from the column "pupil_mixed" from df2
to the column "pupil" in df1 if the values are no duplicates</p>
<p>desired outcome:</p>
<pre><code>df1 = pd.DataFrame({'pupil': ["sarah", "john", "fred", 'lex'],
'class': ["1a", "1a", "1a", NaN]})
</code></pre>
<p>I used <code>append</code> with <code>loc</code></p>
<p><code>df1 = df1.append(df2.loc[df2['pupil_mixed'] != df1['pupil'] ])</code></p>
<p>which just appended the other column to the df with the matching row value and changed the non matching row values to NaN</p>
<pre><code> pupil class pupil_mixed
0 sarah 1a NaN
1 john 1a NaN
2 fred 1a NaN
2 NaN 1a lex
</code></pre>
|
<p>You could use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="noreferrer">concat</a> + <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="noreferrer">drop_duplicates</a>:</p>
<pre><code>res = pd.concat((df1, df2['pupil_mixed'].to_frame('pupil'))).drop_duplicates('pupil')
print(res)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> pupil class
0 sarah 1a
1 john 1a
2 fred 1a
2 lex NaN
</code></pre>
<p>As an alternative you could filter first (with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="noreferrer">isin</a>) and then concat:</p>
<pre><code># filter the rows in df2, rename the column pupil_mixed
filtered = df2.loc[~df2['pupil_mixed'].isin(df1['pupil'])]
# create a new single column DataFrame with the pupil column
res = pd.concat((df1, filtered['pupil_mixed'].to_frame('pupil')))
print(res)
</code></pre>
<p>Both solutions use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.to_frame.html" rel="noreferrer">to_frame</a>, with the name parameter, effectively changing the <em>column</em> name.</p>
|
python|pandas|dataframe|append
| 6 |
1,908,133 | 72,136,155 |
Getting Error with UI for TikTok algorithm
|
<p>So I basically made a algorithm which takes the users input and decided on a rank from 1-100 and puts it on a UI.</p>
<pre><code>import tkinter, os, webbrowser, random
#algorithm
def tiktok_algorithm(share_count, like_count, comment_count, posts):
if (share_count >= 10000 and like_count >= 5000 and posts >20 and comment_count.find("amazing")!= -1):
rank = 1
elif (share_count >= 7000 and like_count >= 2000 and posts >25 and comment_count.find("amazing")!= -1):
rank = random.randint(2,10)
elif (share_count >= 6000 and like_count >= 1500 and posts >25 and comment_count.find("amazing")!= -1):
rank = random.randint(11,20)
elif (share_count >= 5000 and like_count >= 1300 and posts >20 and comment_count.find("amazing")!= -1):
rank = random.randint(21,40)
elif (share_count >= 4000 and like_count >= 1000 and posts >15 and comment_count.find("nice")!= -1):
rank = random.randint(41,55)
elif (share_count >= 3000 and like_count >= 500 and posts >11 and comment_count.find("cool")!= -1):
rank = random.randint(56,70)
elif (share_count >= 2000 and like_count >= 300 and posts >11 and comment_count.find("ok")!= -1):
rank = random.randint(71,80)
elif (share_count >= 1000 and like_count >= 150 and posts >11 and comment_count.find("ok")!= -1):
rank = random.randint(81,90)
elif (share_count >= 400 and like_count >= 100 and posts >11 and comment_count.find("ok")!= -1):
rank = random.randint(91,97)
else:
rank = random.randint(98,100)
return(rank)
def calculate_ranking():
# read in the entry values
like_count = int(likes_entry.get())
comment_count = comment_entry.get()
posts = int(posts_entry.get())
shares = int(share_entry.get())
rank = tiktok_algorithm(like_count, comment_count, shares, posts)
# "print" result to result label
result_label.config(text="Your tiktok ranking is: " + str(rank))
root = tkinter.Tk()
root.title("Magic Tiktok Algorithm")
root.geometry("400x600")
# likes
likes_label = tkinter.Label(root, text="Number of Likes:")
likes_label.pack()
likes_entry = tkinter.Entry(root)
likes_entry.pack()
# comment
comment_label = tkinter.Label(root, text="Comment:")
comment_label.pack()
comment_entry = tkinter.Entry(root)
comment_entry.pack()
#shares
share_label = tkinter.Label(root, text="number of shares:")
share_label.pack()
share_entry = tkinter.Entry(root)
share_entry.pack()
#posts
posts_label = tkinter.Label(root, text="Posts")
posts_label.pack()
posts_entry = tkinter.Entry(root)
posts_entry.pack()
#image
img_path = os.path.dirname(__file__) + "\\tiktok-tricks-09.PNG"
tiktok_image = tkinter.PhotoImage(file=img_path)
resized_tiktok_img = tiktok_image.subsample(4,4)
image_label = tkinter.Label(root, image=resized_tiktok_img)
image_label.pack()
# rank button
rank_button = tkinter.Button(root, text="Get Ranking", command=calculate_ranking)
rank_button.pack()
# result
result_label = tkinter.Label(root, text = "Your tiktok ranking is:")
result_label.pack()
# keep the UI running
root.mainloop()
</code></pre>
<p>I keep getting this error, I have tried casting it to an integer, but it still doesn't work can anyone help: Do I need to change the top to str(tiktok_algorithm)</p>
<p>typeError: '>=' not supported between instances of 'str' and 'int'</p>
|
<p>Lets look at the error: <code>TypeError: '>=' not supported between instances of 'str' and 'int'</code>.</p>
<p>This error occurs when you try to see if a string is >= a number. Examples of this would be "test" >= 42, or "42" >= 42. Even though the second case seems like it may work, "42" is a string and not an integer.</p>
<p>Looking at <code>calculate_ranking</code>, the arguments are passed in the wrong order: you call <code>tiktok_algorithm(like_count, comment_count, shares, posts)</code>, but the signature is <code>tiktok_algorithm(share_count, like_count, comment_count, posts)</code>. So the comment string ends up bound to <code>like_count</code>, and <code>like_count >= 5000</code> compares a string to an integer. Since the function calls <code>comment_count.find(...)</code>, the comment must stay a string — the fix is to call <code>tiktok_algorithm(shares, like_count, comment_count, posts)</code>.</p>
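<p>The error itself is easy to reproduce in isolation, independent of tkinter (a minimal sketch):</p>

```python
# Comparing a str to an int raises exactly the reported TypeError
error_name = None
try:
    "amazing" >= 5000
except TypeError as exc:
    error_name = type(exc).__name__
    print(exc)  # '>=' not supported between instances of 'str' and 'int'
```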
|
python|algorithm|user-interface|typeerror|tiktok
| 1 |
1,908,134 | 10,343,313 |
How can I redirect to a view in another application and still pass arguments (Django)
|
<p>Another question like this one was asked, but I don't 'like' the answer (in reality it didn't really answer the question in the first place): <a href="https://stackoverflow.com/questions/6581351/can-any-one-explain-how-can-i-pass-arg-or-kwargs-from-redirect-to-another-view">Can any one explain how can i pass arg or kwargs from redirect to another view?</a>)</p>
<p>I have a view and need to redirect to another view (in another application) and still send an argument along. Currently I have:</p>
<pre><code>return redirect('/projects/', login_error=error)
</code></pre>
<p>Which doesn't work (the redirect happens but the argument doesn't go through). Is it even possible to do this using <code>redirect()</code>? The documentation doesn't have anything on it.</p>
<p>However, I also tried referring to the view without using a URL:</p>
<pre><code>return redirect('projects.views.list_all', login_error=error)
</code></pre>
<p>But that doesn't work either.</p>
|
<p><a href="https://docs.djangoproject.com/en/dev/topics/http/shortcuts/#redirect" rel="nofollow"><code>redirect</code></a> returns an HTTP redirect to the supplied URL - that is, the browser receives a
30x response and initiates a new request.</p>
<p>To preserve state between the two requests, you either need to set a session variable (as per the other answer) or provide a query parameter, eg:</p>
<pre><code>return redirect('/projects/?login_error=error')
</code></pre>
<p>You will then need to process the incoming <code>request.GET</code> parameter in the other view.</p>
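<p>If the error text may contain spaces or special characters, the query string can be built safely with the standard library (a sketch; the path and parameter name are just examples):</p>

```python
from urllib.parse import urlencode

# build the redirect target with a properly escaped query parameter
error = "invalid credentials"
url = "/projects/?" + urlencode({"login_error": error})
print(url)  # /projects/?login_error=invalid+credentials
```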
|
python|django|url|redirect|view
| 1 |
1,908,135 | 62,566,651 |
How to merge output results from lambda in s3
|
<p>I have some files in my s3 bucket and i use boto3 with lambda to look inside the files and count the frequency of a specific word in all files. I also added the date and a text. Until here, everything is fine. Now i got the output in 3 different lines. When i tried to put this output in s3, only the last line is uploaded in a file. But when i print the output in my lambda console all lines are there. I do not know what is wrong.</p>
<pre><code>from datetime import datetime
from datetime import timedelta
import boto3
import json
from json import dumps
s3 = boto3.resource('s3')
bucket = s3.Bucket('metrics')
def lambda_handler(event, context):
for obj in bucket.objects.all():
if 'pinpoint' in obj.key:
body = obj.get()['Body'].read().decode('utf-8')
response = ('Number of sessions:' + str(body.count('session')) + ' , ' + 'date:' + str(obj.last_modified.date()) + ' , ' + 'Unit:Count')
splitcontent = response.splitlines()
d = []
for lines in splitcontent:
pipesplit = lines.split(" , ")
d.append(dict(s.split(":",1) for s in pipesplit))
object = s3.Object('metrics', 'TESTS/tests')
object.put(Body=json.dumps(d, default=str).encode())
</code></pre>
<p>The output in my lambda console when i print(d) is:</p>
<pre><code>[{'Number of sessions': '3', 'date': '2020-05-22', 'Unit': 'Count'}]
[{'Number of sessions': '1', 'date': '2020-05-22', 'Unit': 'Count'}]
[{'Number of sessions': '1', 'date': '2020-06-25', 'Unit': 'Count'}]
</code></pre>
<p>But when i check the file in s3, only the last line is there.</p>
|
<p>Local variable is inside a for statement.</p>
<pre><code>from datetime import datetime
from datetime import timedelta
import boto3
import json
from json import dumps
s3 = boto3.resource('s3')
bucket = s3.Bucket('metrics')
def lambda_handler(event, context):
d = []
for obj in bucket.objects.all():
if 'pinpoint' in obj.key:
body = obj.get()['Body'].read().decode('utf-8')
response = ('Number of sessions:' + str(body.count('session')) + ' , ' + 'date:' + str(obj.last_modified.date()) + ' , ' + 'Unit:Count')
splitcontent = response.splitlines()
# d = []
for lines in splitcontent:
pipesplit = lines.split(" , ")
d.append(dict(s.split(":",1) for s in pipesplit))
object = s3.Object('metrics', 'TESTS/tests')
object.put(Body=json.dumps(d, default=str).encode())
</code></pre>
|
python|amazon-web-services|amazon-s3
| 1 |
1,908,136 | 62,834,483 |
how to change format of my error bars in matplotlib module
|
<p>I am trying to figure out how to draw only the upper error bar with a "-" cap in my plot, which I already achieved with the <code>lolims</code> argument. Unfortunately, that error bar is marked with an arrow, and I would prefer the default cap. The upper subplot shows the error bar I want (upper only), and the lower one shows the cap style I would like to have on my upper subplot.</p>
<p>My code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
sheet = pd.read_excel(load file with my data)
fig, axs = plt.subplots(2, 1)
fig.suptitle('Gene expression', fontsize=16)
axs[0].bar(sheet.columns, sheet.iloc[17],hatch="/", color="white", edgecolor="black")
axs[0].errorbar(sheet.columns, sheet.iloc[17], yerr=sheet.iloc[22], capsize=3, ecolor='black', fmt=' ', elinewidth=1,
lolims=True)
axs[1].bar(sheet.columns, sheet.iloc[17],hatch="-", color="white", edgecolor="black")
axs[1].errorbar(sheet.columns, sheet.iloc[17], yerr=sheet.iloc[22], capsize=3, ecolor='black', fmt=' ', elinewidth=1,)
plt.show()
</code></pre>
<p>and picture of my plots:
<a href="https://i.stack.imgur.com/GodN1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GodN1.png" alt="enter image description here" /></a></p>
<p>How can I achieve it?</p>
|
<p>You have three possibilites depending on whether you'd like to see the lower caps:</p>
<ol>
<li>Specify an (2,N)-shaped yerr or</li>
<li>Plot the error bars <strong>behind</strong> the bars</li>
<li><a href="https://stackoverflow.com/a/45754199/3944322">Change the caps afterwards</a> (as commented below by BigBen)</li>
</ol>
<br>
<pre><code>import matplotlib.pyplot as plt
fig,ax = plt.subplots(3)
x = range(1,4)
ax[0].bar(x, x, fc='w', ec='k')
ax[0].errorbar(x, x, yerr=[[0,0,0], [.1,.2,.3]], fmt='none', capsize=3)
ax[1].bar(x, x, fc='w', ec='k', zorder=1 )
ax[1].errorbar(x, x, yerr=[.1,.2,.3], fmt='none', capsize=3, zorder=0)
ax[2].bar(x, x, fc='w', ec='k')
_, cl, _ = ax[2].errorbar(x, x, yerr=[.1,.2,.3], lolims=True, fmt='none', capsize=3)
cl[0].set_marker('_')
#cl[1].set_marker('') # to remove the lower cap
</code></pre>
<p><a href="https://i.stack.imgur.com/SW0Ro.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SW0Ro.png" alt="enter image description here" /></a></p>
|
python|matplotlib|errorbar
| 2 |
1,908,137 | 67,319,883 |
In databricks using python, dbutils.fs.mount gives java.lang.NullPointerException: authEndpoint trying to mount using abfss. wasbs works fine
|
<p>When using db.fs.mount in databricks to connect to azure gen2 data lake a authEndpoint error is received when attempting to connect to "abfss://theDir@theDataLake.blob.core.windows.net/" HOWEVER, connecting to "wasbs://theDir@theDataLake.blob.core.windows.net/" works fine. I am trying to understand why abfss causes the authEndpoint error while wasbs does not.</p>
<pre><code>enter code here
#fails
endpoint = "abfss://theDir@theDataLake.blob.core.windows.net/";
dbutils.fs.mount(
source = endpoint,
mount_point = "/mnt/test",
extra_configs = {"fs.azure.account.key.theDataLake.blob.core.windows.net" : "xxxxxx"})
#works
endpoint = "wasbs://theDir@theDataLake.blob.core.windows.net/";
dbutils.fs.mount(
source = endpoint,
mount_point = "/mnt/test",
extra_configs = {"fs.azure.account.key.theDataLake.blob.core.windows.net" : "xxxxxx"})
</code></pre>
|
<p>You can't mount the ABFSS protocol using the storage key. You can mount with ABFSS only when using Service Principal (<a href="https://docs.microsoft.com/en-us/azure/databricks/data/data-sources/azure/adls-gen2/azure-datalake-gen2-sp-access#---mount-adls-gen2-storage" rel="nofollow noreferrer">docs</a>), and it requires another set of parameters for <code>extra_configs</code>:</p>
<pre><code>{"fs.azure.account.auth.type": "OAuth",
"fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
"fs.azure.account.oauth2.client.id": "<application-id>",
"fs.azure.account.oauth2.client.secret": dbutils.secrets.get(scope="<scope-name>",key="<service-credential-key-name>"),
"fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<directory-id>/oauth2/token"}
</code></pre>
|
python|databricks|azure-databricks|azure-data-lake-gen2
| 1 |
1,908,138 | 71,402,907 |
How to load and preprocess a dataset by chunks?
|
<p>I have a large data frame to which I would like to apply a set of functions to one of its columns using <code>pipeline</code> and <code>progress_apply()</code>.</p>
<p>Here is my code snippet.</p>
<pre class="lang-py prettyprint-override"><code>df = # a dataFrame object with multiple columns where df.columns[-1] == 'text'
from tqdm.auto import tqdm
tqdm.pandas()
pipeline = # list of pre-defined methods
def prepare(text, pipeline):
"""
a method that cleanup and remove stop words from text input
"""
return # list of clean tokens
# MemoryError! when reaching 50% of cleaning progress
df = df['text'].progress_apply(prepare, pipeline=pipeline)
</code></pre>
<p>I am trying to solve the issue of <code>MemoryError</code> using <code>progress_apply()</code> but loading data by <strong>chunks</strong>. I have no idea of how I can do this with <code>progress_apply()</code>.
I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(0, df.shape[0], 47):
df = df['text'][i:i+47].progress_apply(prepare, pipeline=pipeline)
</code></pre>
<p>What I have tried doesn't preserve the results from the previous ranges (each iteration reassigns <code>df</code>, discarding the earlier work).</p>
|
<pre class="lang-py prettyprint-override"><code>n, step = df.shape[0], 47
for i in range(0, n, step):
df[i:i+step] = df['text'][i:i+step].progress_apply(prepare, pipeline=pipeline)
</code></pre>
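<p>The slice-assignment pattern itself can be checked with plain lists (a pure-Python sketch with a stand-in <code>prepare</code>):</p>

```python
data = list(range(10))
step = 4

def prepare(x):
    # stand-in for the real cleaning function
    return x * 2

# process the data chunk by chunk, writing each result back into its slice
for i in range(0, len(data), step):
    data[i:i + step] = [prepare(v) for v in data[i:i + step]]

print(data)
```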
|
python|python-3.x|pandas|nlp
| 1 |
1,908,139 | 71,401,616 |
Chinese characters not showing up properly and changing JSON
|
<p>While writing a program to help myself study, I run into a problem with my program not displaying the Chinese characters properly.</p>
<p>The Chinese characters are loaded in from a .JSON file, and are then printed using a python program.</p>
<p>The JSON entries look like this.</p>
<pre><code>{
"symbol": "我",
"reading": "wo",
"meaning": "I",
"streak": 0
},
</code></pre>
<p>The output in the console looks like this
<a href="https://i.stack.imgur.com/UwPRi.png" rel="nofollow noreferrer">VS Code console output</a></p>
<p>And once the program has finished and dumps the info pack into the JSON, it looks like this.</p>
<pre><code>{
"symbol": "\u00e6\u02c6\u2018",
"reading": "wo",
"meaning": "I",
"streak": 0
}
</code></pre>
<p>Changing Language for non-Unicode programs to Chinese (simplified) didn't fix.</p>
<p>Using chcp 936 didn't fix the issue.</p>
<p>The program is a .py file that is not being hosted online. The IDE is Visual Studio Code.</p>
<p>The Python program is:</p>
<pre><code> import json
#Import JSON file as an object
with open('cards.json') as f:
data = json.load(f)
def main():
for card in data['cards']:
#Store Current meaning reading and kanji in a local varialbe
currentKanji = card['symbol']
currentReading = card['reading']
currentMeaning = card['meaning']
#Ask the user the meaning of the kanji
inputMeaning = input(f'What is the meaning of {currentKanji}\n')
#Check if The user's answer is correct
if inputMeaning == currentMeaning:
print("Was correct")
else:
print("Was incorrect")
#Ask the User the reading of the kanji
inputReading = input(f'What is the reading of {currentKanji}\n')
#Check if the User's input is correct
if inputReading == currentReading:
print("Was Correct")
else:
print("Was incorrect")
#If both Answers correct, update the streak by one
if (inputMeaning == currentMeaning) and (inputReading == currentReading):
card['streak'] = card['streak'] + 1
print(card['streak'])
#If one of the answers is incorrect, decrease the streak by one
if not (inputMeaning == currentMeaning) or not (inputReading == currentReading):
card['streak'] = card['streak'] - 1
main()
#Reopen the JSON file an write new info into it.
with open('cards.json', 'w') as f:
json.dump(data,f,indent=2)
</code></pre>
|
<p>I think there are two problems with your code that are leading you to getting mojibake on your screen, and escaped nonsense in your file.</p>
<p>The first issue is some kind of encoding mismatch between your file and your program, or between your program and your console. I think it's the former, but I'm not sure. The best way to fix this is to specify the encoding you want to be using when you open your file at the beginning and end of the program, rather than using the default (which may not be what you expect).</p>
<p>Change:</p>
<pre><code>with open('cards.json') as f:
data = json.load(f)
</code></pre>
<p>To:</p>
<pre><code>with open('cards.json', encoding="utf-8") as f: # specify whatever actual
data = json.load(f) # encoding you're using
</code></pre>
<p>And do a similar change when you open the file to rewrite the contents at the end.</p>
<p>The second issue is that non-ASCII characters are not making the round trip from JSON into your program and then back to JSON. This problem isn't as big of an issue as the encoding problem, because an encoded JSON string will decode to the right character (if the character was correctly read in the first place, which due to the encoding issue above, it was not). That is, if you only fix the encoding issue, you might end up with <code>"\u6211"</code> in your JSON, which correctly decodes to <code>"我"</code> when you load the file again.</p>
<p>But if you want the Chinese character to be human-readable in the file, you just need to tell Python's JSON module not to escape it. Just pass <code>False</code> as the value of the <code>ensure_ascii</code> argument to <code>json.dump</code>:</p>
<pre><code>with open('cards.json', 'w', encoding="utf-8") as f: # encoding fix, from above
json.dump(data, f, indent=2, ensure_ascii=False) # new fix, to avoid escapes
</code></pre>
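<p>The round-trip behaviour is easy to verify with the <code>json</code> module alone:</p>

```python
import json

# default behaviour escapes non-ASCII characters
escaped = json.dumps({"symbol": "我"})
# ensure_ascii=False keeps the character human-readable
readable = json.dumps({"symbol": "我"}, ensure_ascii=False)
print(escaped)
print(readable)

# both forms decode back to the same character
assert json.loads(escaped) == json.loads(readable)
```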
|
python|json
| 1 |
1,908,140 | 71,200,693 |
Change values in column based on condition- Python
|
<p>I have a dataset containing a thousand zeros, five hundred ones, and so on.
I want to change the first 400 zeros to 0.3, the next 600 zeros to 0.6.
Then, I want to change the first 200 ones to 1.4, the next 300 ones to 1.8.
And so on.</p>
<p>The whole point being I want to change the integer value to some fractions based on the frequency specified.</p>
<p>Ex: Dataset: 0,0,0,0,0,1,1,1,1,1,1<br>
Output: 0.2,0.2,0.2,0.2,0.8,1.2,1.2,1.2,1.4,1.4,1.4<br>
Input: Frequency, Dataset<br>
Frequency=[4,1] for 0 & [3,3] for 1<br>
New dataset=[0.2,0.8] for 0 & [1.2,1.4] for 1</p>
|
<p>Assuming your datapoints are sorted, a simple solution would be</p>
<pre><code>df = pd.DataFrame({'col':[0,0,0,0,0,1,1,1,1,1,1]})
frequencies = [
[4, 1],
[3, 3],
]
new_values = [
[0.2, 0.8],
[1.2, 1.4],
]
a = np.concatenate(frequencies)
df['new_col'] = np.concatenate(new_values)[np.arange(len(a)).repeat(a)]
</code></pre>
<p>Output:</p>
<pre><code>>>> df
col new_col
0 0 0.2
1 0 0.2
2 0 0.2
3 0 0.2
4 0 0.8
5 1 1.2
6 1 1.2
7 1 1.2
8 1 1.4
9 1 1.4
10 1 1.4
</code></pre>
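<p>The repeat/expand logic can also be written without numpy, which makes the intent easy to verify (a pure-Python sketch using the same inputs):</p>

```python
frequencies = [[4, 1], [3, 3]]
new_values = [[0.2, 0.8], [1.2, 1.4]]

# repeat each new value as many times as its frequency says
expanded = []
for freqs, vals in zip(frequencies, new_values):
    for count, value in zip(freqs, vals):
        expanded.extend([value] * count)

print(expanded)
```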
|
python|dataframe|numpy
| 0 |
1,908,141 | 71,225,952 |
Try each function of a class with functools.wraps decorator
|
<p>I'm trying to define a <strong>decorator</strong> in order to execute a class method, <strong>try it first</strong> and, if an error is detected, raise it mentioning the method in which failed, so as to the user could see in which method is the error.</p>
<p>Here I show a <strong>MRE</strong> (Minimal, Reproducible Example) of my code.</p>
<pre><code>from functools import wraps
def trier(func):
"""Decorator for trying A-class methods"""
@wraps(func)
def inner_func(self, name, *args):
try:
func(self, *args)
except:
print(f"An error apeared while {name}")
return inner_func
class A:
def __init__(self):
self._animals = 2
self._humans = 5
@trier('getting animals')
def animals(self, num):
return self._animals + num
@trier('getting humans')
def humans(self):
return self._humans
A().animals
</code></pre>
<p>Many <strong>errors</strong> are raising, like:</p>
<blockquote>
<p>TypeError: inner_func() missing 1 required positional argument: 'name'</p>
</blockquote>
<p>or misunderstanding self class with self function.</p>
|
<p>As an alternative to Stefan's answer, the following simply uses <code>@trier</code> without any parameters to decorate functions, and then when printing out the error message we can get the name with <code>func.__name__</code>.</p>
<pre><code>from functools import wraps
def trier(func):
"""Decorator for trying A-class methods"""
@wraps(func)
def inner_func(self, *args, **kwargs):
try:
return func(self, *args, **kwargs)
except:
print(f"An error apeared in {func.__name__}")
return inner_func
class A:
def __init__(self):
self._animals = 2
self._humans = 5
@trier
def animals(self, num):
return self._animals + num
@trier
def humans(self):
return self._humans
print(A().animals(1))
</code></pre>
<p>I also fixed a couple of bugs in the code: In <code>trier</code>'s try and except the result of calling <code>func</code> was never returned, and you need to include <code>**kwargs</code> in addition to <code>*args</code> so you can use named parameters. I.e. <code>A().animals(num=1)</code> only works when you handle <code>kwargs</code>.</p>
|
python|decorator|wrapper|functools
| 2 |
1,908,142 | 70,333,098 |
Create and send an e-mail with a scheduled time in outlook with python
|
<p>I would like to create and send an e-mail with a scheduled time (with delay delivery in the options tab) in outlook with python.
The script below simply sends an email without a delivery time option:</p>
<pre><code>import win32com.client as win32
def Emailer(text, subject, recipient):
outlook = win32.Dispatch('outlook.application')
mail = outlook.CreateItem(0)
mail.To = recipient
mail.Subject = subject
mail.HtmlBody = text
mail.send
</code></pre>
|
<p>I added the line below (note that it requires <code>import datetime</code>), as an example, and it worked:</p>
<pre><code>mail.DeferredDeliveryTime = datetime.datetime(2022, 4, 1, 17, 29, 25, tzinfo=datetime.timezone.utc)
</code></pre>
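<p>If a fixed timestamp is inconvenient, the send time can be computed relative to now with the standard library (a sketch; the 10-minute offset is just an example):</p>

```python
import datetime

# hypothetical offset: deliver 10 minutes from now (local time)
send_at = datetime.datetime.now() + datetime.timedelta(minutes=10)
# mail.DeferredDeliveryTime = send_at  # would then be set on the Outlook item
print(send_at)
```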
|
python|win32com
| 0 |
1,908,143 | 11,372,033 |
Python urllib2 not found
|
<p>I'm getting an error when testing a python script which is installed on my Android Emulator running SDK 2.2</p>
<p>I have installed "Python_for_android_r1.apk" and "sl4a_r5.apk" in my emulator. It seems that my code is trying to import the following:</p>
<pre><code>from urllib import urlencode
from urllib2 import urlopen
</code></pre>
<p>And from what I can tell urllib2 is not found based on the error below.</p>
<pre><code>( FILE "/home/manuel/A;tanaStudio3Workspace/python-for-android/python-build/output/usr/lib/python2.6/urllib2.py, line 124 in urlopen )
</code></pre>
<p>Any ideas how I can fix this problem??</p>
|
<p>Your urllib module seems to be found. If the module is not found, python will return you an error at the import. </p>
<p>Looking at the error, it appears that you are having problems with urlopen. Is the url you are trying to open valid? Line 124 in urllib2 refers to the opener that you are using to get your response.</p>
|
python|urllib2
| 1 |
1,908,144 | 56,854,176 |
Is there a way i can detect the image orientation and rotate the image to the right angle?
|
<p>I am making a script that repairs scanned documents and i now need a way to detect the image orientation and rotate the image so its rotation is correct.</p>
<p>Right now my script is unreliable and isn't that precise. </p>
<p>right now I look for a line and it rotates the first line it sees correctly but this barely works except for a few images</p>
<pre class="lang-py prettyprint-override"><code>img_before = cv2.imread('rotated_377.jpg')
img_gray = cv2.cvtColor(img_before, cv2.COLOR_BGR2GRAY)
img_edges = cv2.Canny(img_gray, 100, 100, apertureSize=3)
lines = cv2.HoughLinesP(img_edges, 1, math.pi / 180.0, 100, minLineLength=100, maxLineGap=5)
angles = []
for x1,y1,x2,y2 in lines[0]:
angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
angles.append(angle)
median_angle = np.median(angles)
img_rotated = ndimage.rotate(img_before, median_angle)
print("Angle is {}".format(median_angle))
cv2.imwrite('rotated.jpg', img_rotated)
</code></pre>
<p>I want to make a script that gets an image like this one (don't mind the image, it's for testing purposes)
<a href="https://i.stack.imgur.com/i6rqL.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/i6rqL.jpg" alt="rotated image"></a></p>
<p>and rotates it in the right way so I get a correctly orientated image.</p>
|
<p>This is an interesting problem, i have tried with many approaches to correct orientation of document images but all of them have got different exceptions.
I am sharing one of the approaches based on text orientation. For text region detection i am using gradient map of input image.</p>
<p>All other implementation details are commented in the code.</p>
<p><strong>Please note that this only works if all the text present in image have same orientation.</strong></p>
<pre><code>#Document image orientation correction
#This approach is based on text orientation
#Assumption: Document image contains all text in same orientation
import cv2
import numpy as np
debug = True
#Display image
def display(img, frameName="OpenCV Image"):
if not debug:
return
h, w = img.shape[0:2]
neww = 800
newh = int(neww*(h/w))
img = cv2.resize(img, (neww, newh))
cv2.imshow(frameName, img)
cv2.waitKey(0)
#rotate the image with given theta value
def rotate(img, theta):
rows, cols = img.shape[0], img.shape[1]
image_center = (cols/2, rows/2)
M = cv2.getRotationMatrix2D(image_center,theta,1)
abs_cos = abs(M[0,0])
abs_sin = abs(M[0,1])
bound_w = int(rows * abs_sin + cols * abs_cos)
bound_h = int(rows * abs_cos + cols * abs_sin)
M[0, 2] += bound_w/2 - image_center[0]
M[1, 2] += bound_h/2 - image_center[1]
# rotate orignal image to show transformation
rotated = cv2.warpAffine(img,M,(bound_w,bound_h),borderValue=(255,255,255))
return rotated
def slope(x1, y1, x2, y2):
if x1 == x2:
return 0
slope = (y2-y1)/(x2-x1)
theta = np.rad2deg(np.arctan(slope))
return theta
def main(filePath):
img = cv2.imread(filePath)
textImg = img.copy()
small = cv2.cvtColor(textImg, cv2.COLOR_BGR2GRAY)
#find the gradient map
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
grad = cv2.morphologyEx(small, cv2.MORPH_GRADIENT, kernel)
display(grad)
#Binarize the gradient image
_, bw = cv2.threshold(grad, 0.0, 255.0, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
display(bw)
#connect horizontally oriented regions
#kernal value (9,1) can be changed to improved the text detection
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 1))
connected = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, kernel)
display(connected)
# using RETR_EXTERNAL instead of RETR_CCOMP
# _ , contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours, hierarchy = cv2.findContours(connected.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) #opencv >= 4.0
mask = np.zeros(bw.shape, dtype=np.uint8)
#display(mask)
#cumulative theta value
cummTheta = 0
#number of detected text regions
ct = 0
for idx in range(len(contours)):
x, y, w, h = cv2.boundingRect(contours[idx])
mask[y:y+h, x:x+w] = 0
#fill the contour
cv2.drawContours(mask, contours, idx, (255, 255, 255), -1)
#display(mask)
#ratio of non-zero pixels in the filled region
r = float(cv2.countNonZero(mask[y:y+h, x:x+w])) / (w * h)
#assume at least 45% of the area is filled if it contains text
if r > 0.45 and w > 8 and h > 8:
#cv2.rectangle(textImg, (x1, y), (x+w-1, y+h-1), (0, 255, 0), 2)
rect = cv2.minAreaRect(contours[idx])
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(textImg,[box],0,(0,0,255),2)
#we can filter theta as outlier based on other theta values
#this will help in excluding the rare text regions with a different orientation from the usual values
theta = slope(box[0][0], box[0][1], box[1][0], box[1][1])
cummTheta += theta
ct +=1
#print("Theta", theta)
#find the average of all cumulative theta value
orientation = cummTheta/ct
print("Image orientation in degrees: ", orientation)
finalImage = rotate(img, orientation)
display(textImg, "Detected Text minimum bounding box")
display(finalImage, "Deskewed Image")
if __name__ == "__main__":
filePath = 'D:\data\img6.jpg'
main(filePath)
</code></pre>
<p>Here is the image with detected text regions; from this we can see that some of the text regions are missing. Text orientation detection plays the key role in overall document orientation detection, so based on the document type a few small tweaks should be made to the text detection algorithm to make this approach work better.</p>
<p><a href="https://i.stack.imgur.com/jj8J2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jj8J2.png" alt="Image with detected text regions" /></a></p>
<p>Here is the final image with correct orientation
<a href="https://i.stack.imgur.com/slvdz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/slvdz.png" alt="Final deskewed image" /></a></p>
<p>Please suggest modifications in this approaches to make it more robust.</p>
|
python|image-processing
| 15 |
1,908,145 | 68,154,571 |
Converting an OpenCV program into a Module reduces frame rate drastically
|
<p>I wrote code for pose estimation using OpenCV and the mediapipe library. The program was working well and I was getting around 30-35 fps. When I tried to convert the same program into a module so that I could use it easily in future projects, the fps of the new code (the module) dropped drastically to 3-4 fps.
My original Program:</p>
<pre><code>import cv2
import mediapipe as mp
import time
cap = cv2.VideoCapture(1)
pTime = 0
cTime = 0
mpDraw = mp.solutions.drawing_utils
mpPose = mp.solutions.pose
pose = mpPose.Pose()
while True:
success, img1 = cap.read()
img = cv2.flip(img1, 1)
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
results = pose.process(imgRGB)
if results.pose_landmarks:
mpDraw.draw_landmarks(img, results.pose_landmarks, mpPose.POSE_CONNECTIONS)
for id, lm in enumerate(results.pose_landmarks.landmark):
h, w, c = img.shape
cx, cy = int(lm.x*w), int(lm.y*h)
cv2.circle(img, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
cTime = time.time()
fps = 1/(cTime - pTime)
pTime = cTime
cv2.putText(img, "FPS : " + str(int(fps)), (10, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 8), 2)
cv2.imshow("Live Feed", img)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
</code></pre>
<p>My attempt at converting it into a module :</p>
<pre><code>import cv2
import mediapipe as mp
import time
class poseDetector():
def __init__(self, mode=False, upBody=False, smooth=True, detectionCon = 0.5, trackingCon=0.5):
self.mode = mode
self.upBody = upBody
self.smooth = smooth
self.detectionCon = detectionCon
self.trackingCon = trackingCon
self.mpDraw = mp.solutions.drawing_utils
self.mpPose = mp.solutions.pose
self.pose =self.mpPose.Pose(self.mode, self.upBody, self.smooth, self.detectionCon, self.trackingCon)
def findPose(self, img, draw=True):
imgRGB = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
self.results = self.pose.process(imgRGB)
if self.results.pose_landmarks:
if draw:
self.mpDraw.draw_landmarks(img, self.results.pose_landmarks, self.mpPose.POSE_CONNECTIONS)
return img
def findPosition(self, img, draw=True):
lmList = []
if self.results.pose_landmarks:
for id, lm in enumerate(self.results.pose_landmarks.landmark):
h, w, c = img.shape
cx, cy = int(lm.x*w), int(lm.y*h)
lmList.append([id, cx, cy])
if draw:
cv2.circle(img, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
return lmList
def main():
cap = cv2.VideoCapture(1)
pTime = 0
cTime = 0
while True:
success, img1 = cap.read()
img = cv2.flip(img1, 1)
detector = poseDetector()
img = detector.findPose(img)
lmList = detector.findPosition(img)
cTime = time.time()
fps = 1/(cTime - pTime)
pTime = cTime
cv2.putText(img, "FPS : " + str(int(fps)), (10, 50), cv2.FONT_HERSHEY_COMPLEX, 1, (255, 0, 8), 2)
cv2.imshow("Live Feed", img)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
if __name__ == '__main__':
main()
</code></pre>
<p>As far as I can tell, both versions of the code should work the same way, but they do not. Can anyone tell me where I am making a mistake?</p>
|
<p>You need to place <code>detector = poseDetector()</code> to be before the <code>while True:</code>:</p>
<pre><code>detector = poseDetector()
while True:
success, img1 = cap.read()
...
</code></pre>
<hr />
<p>Your "module" implementation creates a new <code>poseDetector</code> object every iteration of the main loop.<br />
Each execution of <code>detector = poseDetector()</code> includes a call to <code>poseDetector.__init__</code> that calls <code>self.pose =self.mpPose.Pose</code>...<br />
There is a lot of overhead...</p>
<pre><code>while True:
success, img1 = cap.read()
img = cv2.flip(img1, 1)
detector = poseDetector()
...
</code></pre>
<hr />
<p>In your original ("non-module") implementation, you are executing <code>pose = mpPose.Pose()</code> only once (before the loop).</p>
<pre><code>pose = mpPose.Pose()
while True:
success, img1 = cap.read()
...
</code></pre>
<hr />
<p>I have tested your code before and after moving <code>detector = poseDetector()</code> outside the loop.<br />
After moving the line above the loop, the frame rate is the same as the "non-module" implementation.</p>
|
python|opencv|pose-estimation
| 0 |
1,908,146 | 59,447,411 |
I'm trying to dynamically select an image using Python and use it on a website; is there a way to make this work?
|
<p>This is my python code-</p>
<pre><code>file1 = '2D.jpg'
file2 = '3D.jpg'
s = f"""
<img src="{{url_for('static', filename='{file1}')}}" />
<img src="{{url_for('static', filename='{file2}')}}" />
"""
return render_template("index.html", s = s)
</code></pre>
<p>and my HTML using that variable:</p>
<pre><code><p>
{{ s }}
</p>
</code></pre>
<p>Instead of showing the image, as I would like, it just shows the HTML code I wrote. Any ideas for fixes? (I'm using Jinja2, if that helps.)</p>
|
<p>Generating HTML entities in your views and then passing them to your templates is a Flask/Jinja anti-pattern.</p>
<p>This means you need to separate the logic between your backend (your views) and your rendered templates (the Jinja part).</p>
<p>This is why, when you apply the <code>safe</code> filter in your template, you end up telling Jinja <code>Just show me this string as HTML and don't process it</code>. So your entities that hold Jinja variables and methods are not rendered.</p>
<p>So, an easy solution to your problem is to pass only the list of pictures you want to render and do the template logic in the template itself.</p>
<p>Here is a working example:</p>
<p><strong>app.py:</strong></p>
<pre><code>from flask import Flask, render_template
from flask.views import MethodView
application = Flask(__name__)
class IndexView(MethodView):
def get(self):
filenames = ['file1.jpg', 'file2.jpg']
return render_template('index.html', filenames=filenames)
application.add_url_rule('/', view_func=IndexView.as_view('index'))
if __name__ == '__main__':
application.run(debug=True)
</code></pre>
<p><strong>index.html:</strong></p>
<pre><code><html>
<body>
{% for filename in filenames %}
<img src="{{url_for('static', filename=filename)}}" />
{% endfor %}
</body>
</html>
</code></pre>
<p>And when you load the app's website you'll have this HTML block:</p>
<pre><code><html>
<body>
<img src="/static/file1.jpg" />
<img src="/static/file2.jpg" />
</body>
</html>
</code></pre>
|
python|html|flask
| 0 |
1,908,147 | 62,285,404 |
Read data in chunks from large request_body
|
<p>I am new to Tornado. Currently I want to read from a POST request body. The data in the POST request is large, so I want to implement it with <code>stream_request_body</code> in Tornado, but I am not able to implement it. How can I read this 'img_data' in chunks?</p>
<pre><code>@stream_request_body
class MyReportPDF(BaseHandler):
async def post(self):
data = escape.json_decode(self.get_body_argument('img_data'))#This data is an base_64_array
for i in range(len(data)):
# Clean the base_64 Image
data[i].replace('data:image/jpeg;base64,', '')
decode_image.append(base64.b64decode(data[i]))
</code></pre>
|
<p>When you use <a href="https://www.tornadoweb.org/en/stable/web.html?highlight=stream_request_body#tornado.web.stream_request_body" rel="nofollow noreferrer"><code>@stream_request_body</code></a>, your RequestHandler class should define a method <code>data_received</code> which will be called with data as it comes in. The <code>post</code> method is called at the end after all of the data is received. </p>
<p>Note that it is not currently possible to use the argument-related methods with <code>@stream_request_body</code>; you'll need to parse the data in <code>data_received</code> as it comes in. This means that if possible you'll want to structure your API to receive the image as a plain HTTP PUT instead of being wrapped in a form-style POST. </p>
|
python-3.x|request|tornado|chunks
| 0 |
1,908,148 | 62,245,425 |
Converting time-series values into a python list
|
<p>I am retrieving time-series values using the <strong>odeint</strong> function. It solves a system of differential equations.</p>
<pre><code>measurement_times = np.arange(0, 12,.1)
init = [.1,.1,.1,.1,.1]
def tar(y, measurement_times):
T, U, V,W,I = y
dT = 0.9*I*10.24 - T*0.0012
dU = V*T*0.0154 - U*1*0.81
dV = W*0.1*0.12 + U*1*0.81 - V*1.64 - V*T*0.015
dW= V*1.64 + 0.7*1*0.47 - W*0.1*0.12 - W*U*1591.5*1
dI= T*0.0012 + 0.8*U*1410.79*1- 0.9*I*10.24 - I*1*1934.77*1
return dT, dU, dV, dW, dI
targetmodel= sp.integrate.odeint(tar, init, measurement_times)
</code></pre>
<p>If I print the values of <strong>dU</strong>, it gives me values like those shown below.</p>
<pre><code>for g in targetmodel:
print(str(g[1]));
--------------------------------
0.1
0.09223727996210558
0.0850835704105759
0.07849011448256649
</code></pre>
<p>What I want is to convert the values into a list and assign that list to the variable <strong>data</strong>. Currently, I am doing it manually by copying the values and assigning them to the variable <strong>data</strong>:</p>
<pre><code> Manual way
data = [0.1,0.09223727996210558,0.0850835704105759,0.078490114482]
</code></pre>
<p>I would like to find a way to do it automatically, without assigning the values to the variable <strong>data</strong> manually. Thanks</p>
|
<p>I hope that helps you</p>
<pre class="lang-py prettyprint-override"><code>data = []
for g in targetmodel:
data.append(g[1])
</code></pre>
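<p>Since <strong>odeint</strong> returns a 2-D NumPy array (one row per time point, one column per variable), the whole column can also be converted in one step with slicing — a small sketch, using a stand-in array in place of the real <strong>odeint</strong> output:</p>

```python
import numpy as np

# Stand-in for the odeint result: rows are time points, columns are the variables
targetmodel = np.array([[0.1, 0.1],
                        [0.2, 0.0922],
                        [0.3, 0.0851]])

# Column 1 holds the U values; .tolist() turns it into a plain Python list
data = targetmodel[:, 1].tolist()
print(data)  # [0.1, 0.0922, 0.0851]
```

<p>With the real result from the question this would simply be <code>data = targetmodel[:, 1].tolist()</code>.</p>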
|
python|list|time-series
| 1 |
1,908,149 | 35,441,980 |
Possible to add numpy arrays to python sets?
|
<p>I know that in order to add an element to a set it must be hashable, and numpy arrays seemingly are not. This is causing me some problems because I have the following bit of code:</p>
<pre><code>fill_set = set()
for i in list_of_np_1D:
vecs = i + np_2D
for j in range(N):
tup = tuple(vecs[j,:])
fill_set.add(tup)
# list_of_np_1D is a list of 1D numpy arrays
# np_2D is a 2D numpy array
# np_2D could also be converted to a list of 1D arrays if it helped.
</code></pre>
<p>I need to get this running faster and nearly 50% of the run-time is spent converting slices of the 2D numpy array to tuples so they can be added to the set.</p>
<p>so I've been trying to find out the following</p>
<ul>
<li>Is there any way to make numpy arrays, or something that functions like numpy arrays (has vector addition) hashable so they can be added to sets?</li>
<li>If not, is there a way I can speed up the process of making the tuple conversion?</li>
</ul>
<p>Thanks for any help!</p>
|
<p>Create some data first:</p>
<pre><code>import numpy as np
np.random.seed(1)
list_of_np_1D = np.random.randint(0, 5, size=(500, 6))
np_2D = np.random.randint(0, 5, size=(20, 6))
</code></pre>
<p>run your code:</p>
<pre><code>%%time
fill_set = set()
for i in list_of_np_1D:
vecs = i + np_2D
for v in vecs:
tup = tuple(v)
fill_set.add(tup)
res1 = np.array(list(fill_set))
</code></pre>
<p>output:</p>
<pre><code>CPU times: user 161 ms, sys: 2 ms, total: 163 ms
Wall time: 167 ms
</code></pre>
<p>Here is a speedup version, it use broadcast, <code>.view()</code> method to convert dtype to string, after calling <code>set()</code> convert the string back to array:</p>
<pre><code>%%time
r = list_of_np_1D[:, None, :] + np_2D[None, :, :]
stype = "S%d" % (r.itemsize * np_2D.shape[1])
fill_set2 = set(r.ravel().view(stype).tolist())
res2 = np.zeros(len(fill_set2), dtype=stype)
res2[:] = list(fill_set2)
res2 = res2.view(r.dtype).reshape(-1, np_2D.shape[1])
</code></pre>
<p>output:</p>
<pre><code>CPU times: user 13 ms, sys: 1 ms, total: 14 ms
Wall time: 14.6 ms
</code></pre>
<p>To check the result:</p>
<pre><code>np.all(res1[np.lexsort(res1.T), :] == res2[np.lexsort(res2.T), :])
</code></pre>
<p>You can also use <code>lexsort()</code> to remove duplicated data:</p>
<pre><code>%%time
r = list_of_np_1D[:, None, :] + np_2D[None, :, :]
r = r.reshape(-1, r.shape[-1])
r = r[np.lexsort(r.T)]
idx = np.where(np.all(np.diff(r, axis=0) == 0, axis=1))[0] + 1
res3 = np.delete(r, idx, axis=0)
</code></pre>
<p>output:</p>
<pre><code>CPU times: user 13 ms, sys: 3 ms, total: 16 ms
Wall time: 16.1 ms
</code></pre>
<p>To check the result:</p>
<pre><code>np.all(res1[np.lexsort(res1.T), :] == res3)
</code></pre>
|
python|numpy|casting|set|tuples
| 2 |
1,908,150 | 59,854,377 |
ValueError when executing os.popen(...)
|
<p>I want to get the width of a terminal, to print a certain element up to the end of the line. But I get a ValueError when executing this command on Windows:</p>
<pre><code>rows, columns = os.popen('stty size', 'r').read().split()
ValueError: not enough values to unpack (expected 2, got 0)
</code></pre>
<p>In Ubuntu, this is working perfectly, but not in Windows.
What can I do there?</p>
|
<p>You can use the lines below to get the buffer size in the Windows command prompt:</p>
<pre><code>rows = int(os.popen('mode con | findstr Lines','r').read().strip('\n').strip(' ').split(':')[1].strip(' '))
columns = int(os.popen('mode con | findstr Columns','r').read().strip('\n').strip(' ').split(':')[1].strip(' '))
</code></pre>
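<p>On Python 3.3+ there is also a cross-platform alternative in the standard library that avoids shelling out entirely — <code>shutil.get_terminal_size()</code> works on both Windows and Linux:</p>

```python
import shutil

# Falls back to the COLUMNS/LINES environment variables, and then to the
# given default, when the program is not attached to a real terminal.
size = shutil.get_terminal_size(fallback=(80, 24))
columns, rows = size.columns, size.lines
print(columns, rows)
```
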
|
python|python-3.x|windows|operating-system|system
| 0 |
1,908,151 | 59,827,311 |
Python regex to return all matching strings, comma separated
|
<p>I'm looking for a regex to extract all the matching strings in a Dataframe column.</p>
<p>Ex:</p>
<pre><code>Test_col_Name <br>
R2T.20.98.12 / New T7Y.10.35.10 <br>
G2O.16.18.02 / Use T7K.11.15.03<br>
A2U.10.18.15<br>
Test<br>
nan<br>
K9I.78.34.20<br>
test text P2I.67.78.99<br>
</code></pre>
<p>Can anyone please help me with regex to get all the matching items in new column as below?</p>
<pre><code>Test_col_Name<br>
R2T.20.98.12,T7Y.10.35.10<br>
G2O.16.18.02,T7K.11.15.03<br>
A2U.10.18.15<br>
nan<br>
nan<br>
K9I.78.34.20<br>
P2I.67.78.99<br>
</code></pre>
|
<p>I don't full see the logic your going with but this could be a starting point</p>
<p><code>df['col'].str.replace('/|New|Use|[T-t]est|text|', '').str.replace(' ', ',')</code></p>
<pre><code>0 R2T.20.98.12, T7Y.10.35.10 <br>
1 G2O.16.18.02, T7K.11.15.03<br>
2 A2U.10.18.15<br>
3 <br>
4 nan<br>
5 K9I.78.34.20<br>
6 P2I.67.78.99<br>
</code></pre>
|
python|regex|dataframe
| 0 |
1,908,152 | 49,240,964 |
How to print all of the variables and their type inside a function
|
<p>I know that I could use <code>locals()</code> or <code>globals()</code> to either get all of the local or global variables used in a Python script environment, but does Python have a keyword that I can use to call only the variables inside a function?</p>
<p>For example: </p>
<pre><code>def function():
a = 3;
b = 4;
c = float(b+a)
>> keyword(function())
>> [a : <class 'int'>, b : <class 'int'>, c : <class 'float'>]
</code></pre>
|
<p>Are you looking for something like this:</p>
<pre><code># define these outside the scope of the function
x = 10
y = 20
def function():
a = 3;
b = 4;
c = float(b+a)
l = locals()
print(", ".join(["{var}: {type}".format(var=v, type=type(l[v])) for v in l]))
function()
#a: <type 'int'>, c: <type 'float'>, b: <type 'int'>
</code></pre>
|
python|function|variables
| 3 |
1,908,153 | 48,892,101 |
Exporting variable values to a separate (.txt) file
|
<p>I have written code for a project and I am trying to get the final output (a/b) written to an external text document. I have tried a lot, but nothing is currently working.</p>
<pre><code>#SUITABLE GREETING FOR THE USER
print ("Hi Jim, welcome to the bear factory wage calculator program.")
#INPUT bears made
bears_made = int(input("Enter the number of bears you made: "))
#INPUT hours worked
hours_worked = int(input("Enter the number of hours you worked: "))
#ABILITY TO CHANGE THE WAGE PER BEAR AND HOUR
z = str(input("Would you like to change the wage? "))
if z =="yes":
x = int(input("Enter how much the user should get per bear: "))
a = bears_made*(x)
e = int(input("Enter how much the user should get an hour: "))
b = hours_worked*(e)
else:
#BEARSMADE x 2 = a
a = (bears_made *2)
#HOURSWORKED x 5 = b
b = (hours_worked *5)
#OUTPUT BIGGER VALUE
if a > b:
#OUTPUT A
print ("You earned", a," pounds")
else:
#OUTPUT B
print ("You earned", b," pounds")
#f = open("exported.txt",'w') ###This is the area trying to troubleshoot
#text = (a) (b)
#f.write (text)
#f.close()
#END OF PROGRAM
</code></pre>
<p>Edit : Sorry, just realised I should add more. I am getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\samho\OneDrive\Desktop\plannedcode.py", line 37, in <module>
text = (a) (b)
TypeError: 'int' object is not callable
</code></pre>
|
<p>In your code, you assign values to <code>a</code> and <code>b</code>; these two variables hold numbers and are of type <code>int</code>.</p>
<p>When you run<br>
<code>text = (a)(b)</code><br>
You're essentially calling this: <code>1(2)</code>, which is invalid.</p>
<p>I assume you want to output both of those to the text file so replace that statement with this:<br>
<code>text = "({})({})".format(a, b)</code></p>
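<p>Putting it together, the commented-out section at the end of the program could then look like this — a sketch with example wage values standing in for the computed <code>a</code> and <code>b</code>:</p>

```python
a, b = 30, 45  # example values standing in for the computed wages

# write() only accepts a string, so format the numbers into one first
text = "({})({})".format(a, b)
with open("exported.txt", "w") as f:
    f.write(text)

with open("exported.txt") as f:
    print(f.read())  # (30)(45)
```
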
|
python
| 1 |
1,908,154 | 25,199,812 |
How can I get words after and before a specific token?
|
<p>I currently work on a project which simply creates basic corpus databases and tokenizes texts. But it seems I am stuck on an issue. Assume that we have these things:</p>
<pre class="lang-python prettyprint-override"><code>import os, re
texts = []
for i in os.listdir(somedir): # Somedir contains text files which contain very large plain texts.
with open(i, 'r') as f:
texts.append(f.read())
</code></pre>
<p>Now I want to find the word before and after a token.</p>
<pre><code>myToken = 'blue'
found = []
for i in texts:
fnd = re.findall('[a-zA-Z0-9]+ %s [a-zA-Z0-9]+|\. %s [a-zA-Z0-9]+|[a-zA-Z0-9]+ %s\.' %(myToken, myToken, myToken), i, re.IGNORECASE|re.UNICODE)
found.extend(fnd)
print myToken
for i in found:
print '\t\t%s' %(i)
</code></pre>
<p>I thought there would be three possibilities: the token might start a sentence, the token might end a sentence, or the token might appear somewhere in the sentence, so I used the regex rule above. When I run it, I come across these things:</p>
<pre><code>blue
My blue car # What I exactly want.
he blue jac # That's not what I want. That must be "the blue jacket."
eir blue phone # Wrong! > their
a blue ali # Wrong! > alien
. Blue is # Okay.
is blue. # Okay.
...
</code></pre>
<p>I also tried \b\w\b or \b\W\b variants, but unfortunately those returned no results at all instead of returning wrong results. I tried:</p>
<pre><code>'\b\w\b%s\b[a-zA-Z0-9]+|\.\b%s\b\w\b|\b\w\b%s\.'
'\b\W\b%s\b[a-zA-Z0-9]+|\.\b%s\b\W\b|\b\W\b%s\.'
</code></pre>
<p>I hope the question is not too unclear.</p>
|
<p>I think what you want is:</p>
<ol>
<li>(Optionally) a word and a space;</li>
<li>(Always) <code>'blue'</code>;</li>
<li>(Optionally) a space and a word.</li>
</ol>
<p>Therefore one appropriate regex would be:</p>
<pre><code>r'(?i)((?:\w+\s)?blue(?:\s\w+)?)'
</code></pre>
<p>For example:</p>
<pre><code>>>> import re
>>> text = """My blue car
the blue jacket
their blue phone
a blue alien
End sentence. Blue is
is blue."""
>>> re.findall(r'(?i)((?:\w+\s)?{0}(?:\s\w+)?)'.format('blue'), text)
['My blue car', 'the blue jacket', 'their blue phone', 'a blue alien', 'Blue is', 'is blue']
</code></pre>
<p>See demo and token-by-token explanation <a href="http://regex101.com/r/mB1rU7/2" rel="nofollow">here</a>.</p>
|
python|regex|nlp|text-processing|trigram
| 3 |
1,908,155 | 70,904,401 |
error Object of type ImageFieldFile is not JSON serializable in django
|
<p>I'm trying to learn AJAX in Django, but when I ran this simple test I got this error and I can't find the reason. My Django version is 4.0.</p>
<pre><code>TypeError at /Customer/
Object of type ImageFieldFile is not JSON serializable
Request Method: POST
Request URL: http://127.0.0.1:8000/Customer/
Django Version: 4.0.1
Exception Type: TypeError
</code></pre>
<p>This is my js file code:</p>
<pre><code>$(document).ready(function() {
$("#button_create_customer").click(function() {
var serializData = $("#create_customer").serialize();
$.ajax({
url: $("create_customer").data('url'),
data: serializData,
dateType: 'json',
type: 'post',
success: function(response) {
$("#Customer_List_View").append('<tr role="row" class="odd" >\n' +
'<td><a href="Profile/' + response.list.id + '/">' + response.list.Fullname + '</a></td>\n' +
'<td>' + response.list.Phone + '</td>\n' +
'<td>' + response.list.Address + '</td>\n' +
'</tr>')
const sText = 'ثبت مشخصات بیمار : " ' + response.list.Fullname + ' " با موفقیت انجام گردید'
handelAlert('success', sText)
setTimeout(() => {
alertBox.innerHTML = ""
}, 3000)
},
error: function(error) {
handelAlert('danger', 'خطا در ثبت');
setTimeout(() => {
alertBox.innerHTML = ""
}, 3000);
},
})
$("#create_customer")[0].reset();
});
});
</code></pre>
<p>This is my HTML file code:</p>
<pre><code><form id="create_customer" enctype="multipart/form-data" method="post" style="padding: 10px;"> {% csrf_token %}{{form|crispy}} <input type="submit" id="button_create_customer" value="ثبت" class="btn btn-success"> </form>
</code></pre>
<p>This is my views.py file code:</p>
<pre><code>class List(View):
def get(self, request):
list_customer = CustomerModel.objects.all()
form = CustomerForm()
return render(request, 'customer/index.html', context={
'list_customer': list_customer,
'form': form,
})
def post(self, request):
form = CustomerForm(request.POST,request.FILES)
if form.is_valid():
new_customer = form.save()
return JsonResponse({'list': model_to_dict(new_customer)}, status=200)
else:
return redirect('Customer_List')
</code></pre>
|
<p>The <code>HttpRequest.is_ajax()</code> method is deprecated since Django 3.1 and was removed in Django 4.0, as <a href="https://docs.djangoproject.com/en/4.0/releases/4.0/#features-removed-in-4-0" rel="nofollow noreferrer">documented</a>.</p>
<p>Instead, you can inspect the Accept header as per the cleanup <a href="https://code.djangoproject.com/ticket/30997" rel="nofollow noreferrer">ticket</a>.</p>
<p>If you still want to replicate the old method's functionality, you can write your own based on the <a href="https://github.com/django/django/blob/stable/3.1.x/django/http/request.py#L260" rel="nofollow noreferrer">source</a>:</p>
<pre><code>def is_ajax(request):
return request.headers.get('x-requested-with') == 'XMLHttpRequest'
</code></pre>
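<p>A quick sanity check of the helper with a hypothetical stand-in request object (Django itself is not required just to exercise the header logic; note that Django's real <code>request.headers</code> is case-insensitive, while the plain dict used here is not):</p>

```python
def is_ajax(request):
    return request.headers.get('x-requested-with') == 'XMLHttpRequest'

class FakeRequest:
    """Hypothetical stand-in for django.http.HttpRequest, for illustration only."""
    def __init__(self, headers):
        self.headers = headers

print(is_ajax(FakeRequest({'x-requested-with': 'XMLHttpRequest'})))  # True
print(is_ajax(FakeRequest({})))                                      # False
```
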
|
javascript|python|django|ajax
| 1 |
1,908,156 | 60,117,479 |
How to run a python script based on pip installation in docker?
|
<p>My Dockerfile is as follows:</p>
<pre><code>#Use python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
#install required packages
RUN apt-get update
RUN apt-get install libsasl2-dev libldap2-dev libssl-dev python3-dev psmisc -y
#install a pip package
#Note: This pip package has a completely configured django project in it
RUN pip install <pip-packge>
#add a configuration file required for setup
ADD appAdd.json /
#Run a script
#Note: Here appmanage.py is a file inside the pip installed location, but it will be accesible directly without cd to the folder
RUN appmanage.py appAdd.json
#The <pip-packge> installed comes with a built in django package, so running it with following CMD
#Note: Here manage.py is present inside the pip package folder but it is accesible directly
CMD ["manage.py","runserver","0.0.0.0:8000"]
</code></pre>
<p>When i run :</p>
<pre><code>sudo docker build -t test-app .
</code></pre>
<p>The Python script step succeeds in terms of functionality, but the image is not getting created because at this point the build exits with the following error:</p>
<pre><code>The command '/bin/sh -c appmanage.py appAdd.json' returned a non-zero code: 137
</code></pre>
<p>Is it treating it as a shell script rather than a Python script? How can I overcome this and run the Django project successfully?</p>
<p>Note: In a local environment I could execute the steps on my machine and set everything up successfully, so there are no issues with the code of the Django project that comes with the pip package.</p>
<h1>Update</h1>
<p>The script appmanage.py runs the Django project on port 9999, performs some tests, and then kills the process on port 9999. Is the kill operation in the script causing the error (137) mentioned above?</p>
|
<p>You need to use the full path to <code>appmanage.py</code>. To find that, you can use the site packages directory.</p>
<p>Replace </p>
<pre><code>RUN appmanage.py appAdd.json
</code></pre>
<p>with</p>
<pre><code>RUN PYTHON_SITE_PACKAGES="$(python -c 'import site; print(site.getsitepackages()[0])')" \
&& python $PYTHON_SITE_PACKAGES/<package>/appmanage.py
</code></pre>
|
python|django|docker|dockerfile
| 0 |
1,908,157 | 60,112,809 |
DNS Python, use 'relativize', but prevent replacing full names with '@'
|
<p>I am using DNS Python in a plugin for DirectAdmin.
While processing the bind file and using <code>relativize=False</code> in the <code>from_file()</code> function of dnspython, each record name is displayed fully (e.g. 'smtp.example.com.').</p>
<p>I prefer an overview which shows only "smtp" in case of that subdomain. This could be realized by using <code>relativize=True</code> in the <code>from_file()</code> function. However, in that case, all full dns names ('example.com') are replaced by an <code>@</code> (at symbol)</p>
<p>Can I configure dnspython to relativize subdomains, but still show the full DNS name instead of only the <code>@</code>?</p>
|
<p>Symbol <code>@</code> is used by DNS data format. See <a href="https://www.ietf.org/rfc/rfc1035.txt" rel="nofollow noreferrer">RFC1035</a> paragraph 5.1:</p>
<blockquote>
<p>A free standing @ is used to denote the current origin.</p>
</blockquote>
<p>So it is standard to use <code>@</code> for the current zone name. You can replace <code>@</code> with the zone (domain) name if needed, though it is unusual to mix relative and full domain names.</p>
|
dnspython
| 0 |
1,908,158 | 3,035,028 |
Can't import matplotlib
|
<p>I installed matplotlib using the Mac disk image installer for MacOS 10.5 and Python 2.5. I installed numpy, then tried to import matplotlib, but got this error: <code>ImportError: numpy 1.1 or later is required; you have 2.0.0.dev8462</code>. It seems to me that version 2.0.0.dev8462 would be later than version 1.1, but I am guessing that matplotlib got confused by the ".dev8462" in the version. Is there any workaround for this?</p>
|
<p>Here is the troublesome code located in <code>Lib/site-packages/matplotlib/__init__.py</code> in my python distribution on Windows</p>
<pre><code>nn = numpy.__version__.split('.')
if not (int(nn[0]) >= 1 and int(nn[1]) >= 1):
raise ImportError(
'numpy 1.1 or later is required; you have %s' % numpy.__version__)
</code></pre>
<p>The problem is that it requires both of the first two digits (separated by periods) to be greater than or equal to 1, and in your case the second digit is a 0. You can get around this in a number of ways, but one way is to change the if statement to</p>
<pre><code>if not ((int(nn[0]) >= 1 and int(nn[1]) >= 1) or int(nn[0]) >= 2):
</code></pre>
<p>or you could just change it to:</p>
<pre><code>if not (float('.'.join(nn[:2])) >= 1.1):
</code></pre>
<p>which might be better.</p>
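<p>A more defensive variant (not from the original answer) is to keep only the leading numeric parts of the version string before comparing, so suffixes like ".dev8462" can never break the conversion:</p>

```python
import re

def version_tuple(version):
    """Turn '2.0.0.dev8462' into (2, 0) using only the leading numeric parts."""
    parts = []
    for piece in version.split("."):
        m = re.match(r"\d+", piece)
        if m is None:
            break  # stop at the first non-numeric piece, e.g. 'dev8462'
        parts.append(int(m.group()))
    return tuple(parts[:2])

print(version_tuple("2.0.0.dev8462") >= (1, 1))  # True
print(version_tuple("1.0.1") >= (1, 1))          # False
```

<p>Tuple comparison handles the "major 2, minor 0" case correctly, which the original digit-by-digit check does not.</p>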
|
python|installation|numpy|matplotlib
| 1 |
1,908,159 | 3,187,064 |
Organizing multiple Python applications and shared library packages
|
<p>Suppose that I am writing two applications for my employer, we'll call them <strong>App1</strong> and <strong>App2</strong>. These applications depend on some packages containing code needed by both. Let's say that <strong>App1</strong> depends on <strong>PackageA</strong> and <strong>PackageB</strong>. <strong>App2</strong> depends on <strong>PackageB</strong> and <strong>PackageC</strong>. The organizing strategy that seems natural to me would be to check everything into version control like this:</p>
<pre><code>repo_root
+--- App1
| +--- App1.py
| +--- ... and so on
+--- App2
| +--- ... files for App2
+--- PackageA
| +--- __init__.py
| +--- ... and more files
+--- PackageB
| +--- ... files for PackageB
+--- PackageC
+--- ... files for PackageC
</code></pre>
<p>The problem comes with importing the packages. For example, <strong>App1</strong> and <strong>App2</strong> both need to import <strong>PackageB</strong>, but I can't just put "import PackageB" into the main file for each of these applications. Python doesn't search the parent directory for packages to import.</p>
<p>I know a couple of options to do this, but they both seem a little ugly. One strategy that I've used before is to put the main file for <strong>App1</strong> and <strong>App2</strong> into the "repo_root" directory. Then the two main files can import the packages without any problems. Another option is to use sys.path.append and <code>__file__</code> to figure out what the parent directory is and add it to the path that Python searches for modules.</p>
<p>Is there a clean, elegant way to do something like this? Thanks for your help.</p>
<p><strong>Update:</strong> While the virtualenv solution can help a great deal when it comes to dealing with packages and dependencies, it almost seems like overkill for a problem that could be solved by a relative import. Carrying out a relative import seems to be fiendishly complicated, however. There is <a href="https://stackoverflow.com/questions/2943847/nightmare-with-relative-imports-how-does-pep-366-work">PEP 366</a>, but that is quite complicated and probably wouldn't allow importing outside of a package anyway. I spent some time looking at <a href="http://pypi.python.org/pypi/importlib/1.0.1" rel="nofollow noreferrer">importlib</a>, but I'm pretty sure that doesn't allow importing outside of a package either. Many people seem to use munging of the sys.path, of which <a href="http://coding.derkeiler.com/Archive/Python/comp.lang.python/2009-03/msg03072.html" rel="nofollow noreferrer">this</a> seems to be the best example that I've found. But, as I mentioned, this seems a rather hackish way to do things. I've spent nearly all day on investigating this, and I don't think that there is an answer. Correct me if I'm wrong, but I now believe there is no clean, non-hackish way to do a relative import without bringing in a heavy-hitter like virtualenv and some .pth files. Anyway, thanks again for your help. I'll mark this as answered since virtualenv is the only option.</p>
|
<p>One solution you can use for this is to have a <a href="http://pypi.python.org/pypi/virtualenv" rel="nofollow noreferrer" title="virtualenv">virtualenv</a> for each of your apps, and then use a relative <a href="http://bob.pythonmac.org/archives/2005/02/06/using-pth-files-for-python-development/" rel="nofollow noreferrer" title=".pth">.pth</a> file to point to the Packages. This gives you fine control over the environment each of the apps is being developed in and avoids the "but I've got package_x on my machine!" problems in testing.</p>
|
python|package
| 3 |
1,908,160 | 5,871,621 |
How to populate a table while creating the Django model?
|
<p>I have a list of cities (a simple CSV file) and I want to populate the cities table while creating the City model.
Class description:</p>
<pre><code>class City(models.Model):
    city = models.CharField('city_name', max_length=50)
class Meta:
        verbose_name = ....
...........
def __unicode__(self):
return self.city
</code></pre>
<p>Now, what I am looking for is how to do it only once, while creating the model (DB table).</p>
<p>I'm trying to do it here because it sounds logical to me (like running a SQL script in MS SQL and other databases).</p>
<p>EDIT: OK, I guess I am asking the wrong thing... maybe this: how do I create a Python function that will take the CSV file and transform it to JSON (again, in the model itself, while it is being built), and should I do it at all?</p>
<p>Can any one please help me with this?</p>
|
<p>We do something like this, usually.</p>
<pre><code>import csv
from my_django_app.forms import CityForm
with open( "my file", "rb" ) as source:
rdr = csv.DictReader( source )
for row in rdr:
        form = CityForm(row)
if form.is_valid():
form.save()
else:
print form.errors
</code></pre>
<p>This validates and loads the data.</p>
<p>After the data is loaded, you can use <code>django-admin dumpdata</code> to preserve a JSON fixture from the loaded model.</p>
|
python|django|django-models
| 5 |
1,908,161 | 67,873,549 |
Two equally distant points in a tree
|
<p>I'm solving an example probation problem in Python and have had only partial success so far (passed 30 test cases, wrong answer in the 31st). The test cases themselves are not disclosed.</p>
<p><strong>Description.</strong></p>
<p>A network of n nodes connected by n-1 links is given. Find two nodes so that the distance from the farthest node to the nearer of those two would be minimal. If several answers are possible, any of them will be accepted.</p>
<p>The input is given as a list of pairs of numbers. Each number represents node, pair is the connection between two nodes. The result should be the list of two nodes.</p>
<p><em>Example 1</em>
in = [[1, 2], [2, 3]]
result = [3, 1]</p>
<p><em>Example 2</em>
in = [[1, 2], [3, 2], [2, 4], [4, 5], [4, 6]]
result = [2, 4]</p>
<p>My solution.</p>
<p>The net will always be a tree, not a graph. For the above examples the corresponding trees will be:</p>
<p><em>example 1</em></p>
<pre><code>1-2-3
</code></pre>
<p><em>example 2</em></p>
<pre><code>1 5
\ /
2-4
/ \
3 6
</code></pre>
<p>My solution is based on going from the leaves to the middle step by step, removing nodes from the leaves -- one level after another. Finally I'll end up with one or two nodes in the middle.</p>
<p><em>Example A.</em></p>
<pre><code>1-2-3 8-9-10-14 2-3 8-9-10 3 8-9
\ / \ / \ /
7 >>> 7 >>> 7 >>> 7-8
/ \ / \ / \
4-5-6 11-12-13 5-6 11-12 6 11
</code></pre>
<p>result: [7, 8]</p>
<p><em>Example B.</em></p>
<pre><code>1-2-3 8-9-10 2-3 8-9 3 8
\ / \ / \ /
7 >>> 7 >>> 7 >>> 7
/ \ / \ / \
4-5-6 11-12-13 5-6 11-12 6 11
</code></pre>
<p>result: [7, ...]</p>
<p><em>Example C.</em></p>
<pre><code>11 13
\ /
1-2-3-4-5-6-7-8-9-10 >> ... >> 3-4-5-6-7-8 >> ... >> 5-6
/ \
12 14
</code></pre>
<p>result: [3, 8]</p>
<p>If we have more than two nodes after each step but the last (<em>example B</em>), then the answer will be the single middle node and any other node.
As to other cases: if we have n steps to the middle point(s), and have two ends after half of n steps toward the center (<em>example C</em>), those two ends will be equidistant from the ends of the original tree and from its middle (two middles are possible if the segment length is even). Those two ends will be the answer. If we still don't have two ends after half of n steps (<em>example A</em>), then we will continue to move toward the middle, and once the number of ends is narrowed down to two, those two ends will be the answer.</p>
<p>Possibly there is a flaw in the above reasoning and someone already can notice it at this point.</p>
<p>Now to the implementation.
I represent the tree as a dictionary. Keys are numbers representing nodes and values are sets of numbers representing neighbour nodes. Removing ends from the tree comes down to removing keys that have only a single neighbour and removing the corresponding number from all of their parents.</p>
<p>Below are the original tree representation and its first modification from the <em>example A</em>.</p>
<pre><code>{1: {2},
2: {1, 3},
3: {2, 7},
4: {5},
5: {4, 6},
6: {5, 7},
7: {3, 6, 8, 11},
8: {7, 9},
9: {8, 10},
10: {9, 14},
11: {7, 12},
12: {11, 13},
13: {12},
14: {10}}
{2: {3},
3: {2, 7},
5: {6},
6: {5, 7},
7: {3, 6, 8, 11},
8: {7, 9},
9: {8, 10},
10: {9},
11: {7, 12},
12: {11}}
</code></pre>
<p>The implementation itself.</p>
<pre><code>def f(pairs):
if len(pairs) == 1:
return pairs[0]
tree = {}
for a, b in pairs:
if a not in tree:
tree[a] = set()
if b not in tree:
tree[b] = set()
tree[a].add(b)
tree[b].add(a)
ends = {e for e in tree if len(tree[e]) == 1}
slices = [ends.copy()]
while len(tree.keys()) > 2:
nx_ends = set()
for end in ends:
pars = tree.pop(end, set())
nx_ends.update(pars)
for par in pars:
tree[par].remove(end)
nx_ends = {e for e in nx_ends if len(tree[e]) < 2}
ends = nx_ends.copy()
if ends:
slices.append(ends)
steps = len(slices) // 2
for ends in slices[steps:]:
ends = list(ends)
if len(ends) == 2:
return ends
if len(ends) == 1:
end = ends[0]
for el in pairs[0]:
if el != end:
return [end, el]
</code></pre>
<p>My solution seems rather efficient, yet, once again, it gives a wrong answer in the (unfortunately closed) 31st test case.</p>
<p><strong>UPDATE</strong></p>
<p>I've solved the task. The solution is based on the above one with some additional steps.</p>
<p>After I -- according to the former algorithm -- find the two candidate points from the longer ends, I do an additional check: whether they are really equidistant. If the number of steps to cover all the nodes from the two candidate points is more than the number of steps to reach the two candidate points from the longer ends, an additional shift to the center is performed.</p>
<p>The value of the shift is half the difference of the steps, since every step to the center not only increases the distance to the longer ends by one, but also decreases the distance to the other end(s).</p>
<pre><code>def get_slices(slic, dic, from_ends=False):
visited = slic.copy()
slices = []
while slic:
slices.append(slic.copy())
nx_slice = set()
for end in slic:
neibs = dic[end]
for neib in neibs:
if neib not in visited:
nx_slice.add(neib)
if from_ends:
nx_slice = {e for e in nx_slice if len(dic[e] - visited) < 2}
slic = nx_slice
visited.update(slic)
return slices
def get_pair(node, pairs):
for pair in pairs:
if node in pair:
return pair
def f(pairs):
dic = {}
if len(pairs) == 1:
return pairs[0]
for a, b in pairs:
if a not in dic:
dic[a] = set()
if b not in dic:
dic[b] = set()
dic[a].add(b)
dic[b].add(a)
ends = {e for e in dic if len(dic[e]) == 1}
slices = get_slices(ends, dic, from_ends=True)
steps = len(slices) // 2
slices = slices[steps:]
while len(slices[0]) > 2:
slices.pop(0)
steps += 1
slic = slices.pop(0)
slices_center = get_slices(slic, dic)
steps_center = len(slices_center) - 1
slices_center.clear()
diff = steps_center - steps
if diff > 1:
slic = slices[diff // 2 - 1]
if len(slic) == 1:
[r] = slic
return get_pair(r, pairs)
return slic
</code></pre>
<p>My question now is whether my solution is overcomplicated? Could you give a tip on a simpler one?</p>
|
<p>I’ve come to a solution that I’m more content with and that is, supposedly, closer to the problem creator’s intention. The solution deals with such notions as <em>tree height-balancing</em> and <em>reparenting</em>. Unfortunately it, as before, contains some subtle error that shows up as a runtime exception in one of the later closed cases.</p>
<p>The algorithm is as follows.
First, let’s suppose some arbitrary node is the root. Then we check each branch's depth and find the longest branch. After that we perform reparenting: we make that branch's top node the new root, and the rest of the tree becomes a branch of the new tree.
In this way we continue reparenting until we come to two equally long branches, or branches that differ in depth by one node. At this point we consider the tree <em>height-balanced</em>.
Then we consider (one of) the longest branches as a separate tree and the rest of the original tree as another separate tree, and again balance those two trees by reparenting. The roots of those balanced trees will be the answer.</p>
<p>Let’s illustrate with an example. Suppose we have the following tree.</p>
<pre><code> 52
|
53
|
24-23-22-01-02-03-04-33-32-31-42-43-44
</code></pre>
<p>Suppose the root is 23 node. So we eventually reparent.</p>
<pre><code>24
| 23
23 |__
| | |
22 22 24
| |
01 01
| |
02 02
| |
03 03
| |
04 04 04
|__ >> |__ >> … >> |_____
| | | | | | |
33 53 33 53 33 03 53
| | | | | | |
32 52 32 52 32 02 52
| | | |
31 31 31 01
| | | |
42 42 42 22
| | | |
43 43 43 23
| | | |
44 44 44 24
</code></pre>
<p>Then we consider one of the longest branches (33-32-31-42-43-44) as one tree and the rest of the tree as another tree.</p>
<pre><code>04
|__
| |
03 53 33
| | |
02 52 … 32
| |
01 31
| |
22 42
| |
23 43
| |
24 44
</code></pre>
<p>Then, in turn, we balance those two trees. After balancing they will be as follows.</p>
<pre><code>02
|__
| | 31
01 03 |__
| | | |
22 04 42 32
| | | |
23 53 43 33
| | |
24 52 44
</code></pre>
<p>So the answer will be nodes 02 and 31.</p>
<p>To make the implementation efficient enough, we need to cache the tree depth for every node. This way we don’t need to recalculate all depths after every reparenting – it’s enough to recalculate the depth only for the new root and one new branch after every reparenting.
So, here is the implementation.</p>
<pre><code>class Node:
def __init__(self, val):
self.val = val
self.children = []
def construct_tree(pairs):
pairs = [tuple(list(sorted(pair))) for pair in pairs]
pairs.sort(reverse=True)
p, c = pairs.pop()
root = Node(p)
child = Node(c)
root.children.append(child)
nodes_dic = {p: root, c: child}
while pairs:
pair = pairs.pop()
p, c = pair
if c in nodes_dic:
p, c = c, p
if p in nodes_dic:
parent = nodes_dic[p]
node = Node(c)
parent.children.append(node)
nodes_dic[c] = node
else:
pairs.insert(0, pair)
return root
def get_depth(node, depths):
if node.val not in depths:
if not node.children:
depths[node.val] = 0
else:
depths[node.val] = 1 + max([get_depth(child, depths) for child in node.children])
return depths[node.val]
def reparent_tree(root, new_root, depths):
children = [child for child in root.children if child != new_root]
val = root.val
if val in depths:
del depths[val]
if new_root.val in depths:
del depths[new_root.val]
root = new_root
child = Node(val)
child.children = children
root.children.append(child)
return root
def balance_tree(root):
depths = {}
if len(root.children) == 0:
return root, None
if len(root.children) == 1:
root = reparent_tree(root, root.children[0], depths)
if get_depth(root, depths) < 2:
return root, root.children[0]
while True:
ls = [get_depth(node, depths) for node in root.children]
l1, l2 = list(sorted(ls))[-2:]
ind = ls.index(l2)
candidate_root = root.children[ind]
if l2 - l1 <= 1:
break
root = reparent_tree(root, candidate_root, depths)
return root, candidate_root
def f(pairs):
if len(pairs) <= 2:
return pairs[0]
root = construct_tree(pairs)
root, tree1 = balance_tree(root)
root.children.remove(tree1)
tree2 = root
tree1, _ = balance_tree(tree1)
tree2, _ = balance_tree(tree2)
return tree1.val, tree2.val
</code></pre>
<p>The solution passes my test cases – both handmade ones and a number of randomly generated ones – without a runtime error, so it’s hard to figure out what goes wrong.</p>
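<p>For a simpler alternative, here is a sketch of the textbook route to this problem (a Handler-style tree 2-center): find a diameter with two BFS passes, cut the tree at the middle edge of that diameter, and return the centre node of each half's own diameter. I haven't been able to run it against the closed test cases, so treat it as a starting point rather than a verified replacement:</p>

```python
from collections import deque

def build_adj(pairs):
    adj = {}
    for a, b in pairs:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def farthest(adj, start, blocked=frozenset()):
    # BFS that never crosses a blocked edge; returns a farthest node and the parent map
    parent = {start: None}
    queue = deque([start])
    last = start
    while queue:
        node = queue.popleft()
        last = node
        for neib in adj[node]:
            if neib not in parent and (node, neib) not in blocked:
                parent[neib] = node
                queue.append(neib)
    return last, parent

def diameter(adj, start, blocked=frozenset()):
    # Longest path in the component containing start, as a list of nodes
    u, _ = farthest(adj, start, blocked)
    v, parent = farthest(adj, u, blocked)
    path = []
    while v is not None:
        path.append(v)
        v = parent[v]
    return path

def two_centers(pairs):
    adj = build_adj(pairs)
    diam = diameter(adj, pairs[0][0])
    mid = len(diam) // 2
    a, b = diam[mid - 1], diam[mid]        # middle edge of the diameter
    blocked = frozenset({(a, b), (b, a)})  # cut the tree on that edge
    half_a = diameter(adj, a, blocked)
    half_b = diameter(adj, b, blocked)
    return [half_a[len(half_a) // 2], half_b[len(half_b) // 2]]

print(two_centers([[1, 2], [3, 2], [2, 4], [4, 5], [4, 6]]))  # [2, 4]
```

<p>On the question's second example this yields [2, 4]; on <em>example A</em> it picks node 7 plus a node from the 8-9-10-14 arm, which attains the same covering radius as [7, 8].</p>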
|
python|tree
| 0 |
1,908,162 | 67,953,241 |
mlflow run git-uri clone to specific directory
|
<p>I am using <code>mlflow run</code> with a GitHub URI.</p>
<p>When I run using the below command</p>
<pre><code>mlflow run <git-uri>
</code></pre>
<p>The command sets up a conda environment and then <em>clones the Git repo into a <strong>temp</strong> directory, but I need it set up in a <strong>specific</strong> directory</em>.</p>
<p>I checked the entire documentation, but I can't find it. Is there no option to do this in one shot?</p>
|
<p>For non-local URIs, MLflow uses Python's <code>tempfile.mkdtemp</code> function (<a href="https://github.com/mlflow/mlflow/blob/1c43176cefb5531fbb243975b9c8c5bfb9775e66/mlflow/projects/utils.py#L140" rel="nofollow noreferrer">source code</a>), which creates a temporary directory. You may have some control over it by setting the <code>TMPDIR</code> environment variable as described in the <a href="https://docs.python.org/3/library/tempfile.html#tempfile.mkstemp" rel="nofollow noreferrer">Python docs</a> (it lists <code>TMP</code> & <code>TEMP</code> as well, but they didn't work for me on MacOS) - but it will only set the "base path" for temporary directories and files; the directory/file names themselves will still be random.</p>
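<p>A quick way to see the effect in pure Python (mlflow just inherits whatever base directory <code>tempfile.mkdtemp</code> resolves; the <code>mlflow-work</code> directory name below is made up):</p>

```python
import os
import tempfile

base = os.path.join(tempfile.gettempdir(), "mlflow-work")  # stand-in base dir
os.makedirs(base, exist_ok=True)

os.environ["TMPDIR"] = base
tempfile.tempdir = None  # drop the cached default so TMPDIR is re-read

workdir = tempfile.mkdtemp()
print(workdir)  # e.g. .../mlflow-work/tmpab12cd34; random name, controlled base
```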
|
python-3.x|mlflow
| 1 |
1,908,163 | 30,431,287 |
Django model query truncate column's value
|
<p>I have a query defined like this</p>
<pre><code>SELECT DISTINCT substring(date::text from 1 for 8)||'000' AS date FROM my_table;
</code></pre>
<p>How can I transform it into a Django model query?</p>
<p><code>date</code> is like this: <code>20150403000</code>.
The query is supposed to return <code>20150403</code> without the trailing <code>000</code>.</p>
<p>What I have right now is like this:</p>
<pre><code>query.distinct('date')
</code></pre>
<hr>
<p><strong>EDITED</strong></p>
<p>Ok the question was vague. I redefined my question again since not even psql can do what I wanted(lost in translation here). LOL</p>
<pre><code>select to_date(CAST(date as TEXT), 'YYYYMMDD') from my_table
Where to_date(CAST(date as TEXT), 'YYYYMMDD')<=DATE '20141127' ;
</code></pre>
<p>date is of type bigint. Can the Django model ORM handle this?</p>
|
<p>You can use Django model's <a href="https://docs.djangoproject.com/en/1.4/howto/custom-model-fields/#django.db.models.Field.to_python" rel="nofollow"><code>to_python</code></a>.</p>
<p><code>Converts a value as returned by your database (or a serializer) to a Python object.</code></p>
<p>For example:</p>
<pre><code>class CustomDateTimeField(models.DateTimeField):
def to_python(self, value):
if isinstance(value, DateTimeField):
return value.rstrip("0")
</code></pre>
<p>You need to then use the custom date time filed for your model. </p>
<p>Say <code>my_date = CustomDateTimeField()</code></p>
<p>There is also, <a href="https://docs.djangoproject.com/en/1.4/howto/custom-model-fields/#django.db.models.Field.get_prep_value" rel="nofollow"><code>get_prep_value</code></a>, this method is for converting Python objects to query values.</p>
<p><strong>Update</strong>:
For the query with filter, it would try to perform a lookup, and so you should handle the lookup using <a href="https://docs.djangoproject.com/en/1.8/howto/custom-model-fields/#preparing-values-for-use-in-database-lookups" rel="nofollow">get_prep_lookup</a></p>
<p>Something like this:</p>
<pre><code>def get_db_prep_value(self, value, *args, **kwargs):
if value is not None:
        return datetime.datetime.strptime(value[0:4] + "-" + value[4:6] + "-" + value[6:], "%Y-%m-%d")
def get_prep_lookup(self, lookup_type, value):
return self.get_db_prep_value(value)
</code></pre>
<p><strong>Sample Example</strong>:</p>
<pre><code>from django.db import models
import datetime
# Create your models here.
class CustomDateTimeField(models.Field):
__metaclass__ = models.SubfieldBase
def db_type(self, connection):
return 'datetime'
def to_python(self, value):
if not value is None:
if isinstance(value, datetime.datetime):
return value.strftime("%Y-%m-%d")
else:
return value.rstrip("0")
def get_db_prep_value(self, value, *args, **kwargs):
if value is not None:
return datetime.datetime.strptime(value[0:4] + "-" + value[4:6] + "-" + value[6:], '%Y-%m-%d')
return None
def get_prep_lookup(self, lookup_type, value):
return self.get_db_prep_value(value.rstrip("0"))
class Person(models.Model):
name = models.CharField(max_length=25)
my_date = CustomDateTimeField()
</code></pre>
|
django|python-2.7|django-models
| 0 |
1,908,164 | 30,347,335 |
How to get hhh:mm datetime format with Python?
|
<p>Is there any way to get the following datetime format in Python?</p>
<pre><code>hhh:mm
</code></pre>
<p>For example:</p>
<pre><code>126:00
</code></pre>
<p>I'd like to be able to do the following operation:</p>
<pre><code>126:00 - 67:56 - 12:00 = 46:04
</code></pre>
<p>I tried with...</p>
<pre><code>hours_d = datetime.datetime.strptime('126:00','%H:%M')
</code></pre>
<p>...but obviously it returns <code>hh:mm</code>.</p>
<p><strong>SOLVED:</strong></p>
<p>Thanks to @Bluehorn and @lenz for your help!!!!!</p>
<pre><code> # h1=126:00
h1_h, h1_m = [int(x) for x in h1.split(":")]
# h2=67:56
h2_h, h2_m = [int(x) for x in h2.split(":")]
# h3=12:00
h3_h, h3_m = [int(x) for x in h3.split(":")]
h1_d = timedelta(hours=h1_h)+ timedelta(minutes=h1_m)
# 5 days, 6:00:00
h2_d = timedelta(hours=h2_h)+ timedelta(minutes=h2_m)
# 2 days, 19:56:00
h3_d = timedelta(hours=h3_h)+ timedelta(minutes=h3_m)
# 12:00:00
result = h1_d - h2_d -h3_d
# 1 day 22:04:00
</code></pre>
<p>I'll update my answer when I get the result in hh:mm.</p>
|
<p>Looking at <a href="https://hg.python.org/cpython/file/648dcafa7e5f/Lib/_strptime.py#l194" rel="nofollow">https://hg.python.org/cpython/file/648dcafa7e5f/Lib/_strptime.py#l194</a> this is not supported - as expected.</p>
<p>The documentation described <code>%H</code> as <em>Hour (24-hour clock) as a decimal number [00,23]</em>, see <a href="https://docs.python.org/2.7/library/time.html?highlight=strptime#time.strftime" rel="nofollow">https://docs.python.org/2.7/library/time.html?highlight=strptime#time.strftime</a></p>
<p>Therefore it even fails for two-digit hours:</p>
<pre><code>>>> import datetime
>>> hours_d = datetime.datetime.strptime('26:00','%H:%M')
Traceback (most recent call last):
[...]
ValueError: time data '26:00' does not match format '%H:%M'
</code></pre>
<p>You will have to parse the pair of <code>HOURS:MINUTES</code> yourself, as in</p>
<pre><code>hours, minutes = [int(x) for x in "126:00".split(":")]
</code></pre>
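<p>For the subtraction itself, and for formatting the result back to hhh:mm, <code>timedelta</code> works well once the fields are parsed. A sketch using the question's example values:</p>

```python
from datetime import timedelta

def parse_hm(s):
    # "126:00" -> timedelta of 126 hours
    hours, minutes = (int(x) for x in s.split(":"))
    return timedelta(hours=hours, minutes=minutes)

def format_hm(td):
    # timedelta -> "hhh:mm", hours not capped at 24
    total_minutes = int(td.total_seconds()) // 60
    return "%d:%02d" % divmod(total_minutes, 60)

result = parse_hm("126:00") - parse_hm("67:56") - parse_hm("12:00")
print(format_hm(result))  # 46:04
```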
|
python|datetime
| 3 |
1,908,165 | 66,922,152 |
Sorting a linked list based on two values
|
<p>I have a singly linked list, where the data is a custom object I've created, named 'Patient', and I sort the linked list based on the 'Patient.points'.</p>
<p>However, I want to also sort the list by 'Patient.bmi', as a secondary filter, if the two patients being compared have the same number of points.</p>
<p>The way I want the patients to be sorted by the BMI filter is to have the patients with the worst BMI at the top. I have four 'conditions': 'obese', 'overweight', 'underweight' and 'normal'. I want <strong>obese, overweight and normal</strong> in descending order, and <strong>underweight</strong> in ascending order.</p>
<p>this is my linked list:</p>
<pre><code>class Node:
def __init__(self):
self.data = None
self.next = None
class LinkedList:
def __init__(self):
self.head = None
def addNode(self, data):
curr = self.head
if curr is None:
n = Node()
n.data = data
self.head = n
return
if curr.data.points == data.points and curr.data.condition == 'underweight':
if curr.data.bmi > data.bmi:
n = Node()
n.data = data
n.next = curr
self.head = n
return
elif curr.data.points == data.points and curr.data.condition != 'underweight':
if curr.data.bmi < data.bmi:
n = Node()
n.data = data
n.next = curr
self.head = n
return
if curr.data.points < data.points:
n = Node()
n.data = data
n.next = curr
self.head = n
return
while curr.next is not None:
if curr.next.data.points < data.points:
break
curr = curr.next
n = Node()
n.data = data
n.next = curr.next
curr.next = n
return
def __str__(self):
data = []
curr = self.head
while curr is not None:
data.append(curr.data)
curr = curr.next
return "[%s]" %(', '.join(str(i) for i in data))
def __repr__(self):
return self.__str__()
</code></pre>
<p>and this is the data that gets returned:</p>
<pre><code>Paitent name: BDF237 age: 79 Years 2 Months 14 Days BMI: 37.960398071201396 weight classification: obese
Paitent name: ABE127 age: 20 Years 6 Months 20 Days BMI: 34.54054238911118 weight classification: obese
Paitent name: ABD721 age: 73 Years 4 Months 25 Days BMI: 31.217481789802285 weight classification: obese
Paitent name: DED567 age: 58 Years 5 Months 5 Days BMI: 32.74416118146279 weight classification: obese
Paitent name: DEF444 age: 24 Years 11 Months 25 Days BMI: 31.462672275229988 weight classification: obese
Paitent name: BDF777 age: 93 Years 10 Months 11 Days BMI: 14.705367459122582 weight classification: underweight
Paitent name: BDF098 age: 86 Years 0 Months 13 Days BMI: 13.863739248528747 weight classification: underweight
Paitent name: DDC345 age: 54 Years 3 Months 19 Days BMI: 16.45600644235146 weight classification: underweight
Paitent name: DDD222 age: 73 Years 9 Months 0 Days BMI: 18.166204254194742 weight classification: underweight
Paitent name: DDD555 age: 66 Years 10 Months 0 Days BMI: 17.632653061224488 weight classification: underweight
Paitent name: DEF666 age: 65 Years 10 Months 22 Days BMI: 17.68978885166924 weight classification: underweight
Paitent name: ABE123 age: 69 Years 6 Months 19 Days BMI: 28.010411951109102 weight classification: overweight
Paitent name: ABD111 age: 72 Years 8 Months 0 Days BMI: 26.122448979591837 weight classification: overweight
Paitent name: ABE165 age: 23 Years 3 Months 9 Days BMI: 26.491508201480624 weight classification: overweight
Paitent name: ABE329 age: 82 Years 0 Months 7 Days BMI: 25.46401086464464 weight classification: overweight
Paitent name: DDD124 age: 44 Years 6 Months 27 Days BMI: 27.15271058955713 weight classification: overweight
Paitent name: DED675 age: 45 Years 6 Months 29 Days BMI: 28.735632183908045 weight classification: overweight
Paitent name: DED879 age: 22 Years 9 Months 1 Days BMI: 28.73469387755102 weight classification: overweight
Paitent name: DEF555 age: 27 Years 1 Months 24 Days BMI: 25.847768218818715 weight classification: overweight
Paitent name: ABC234 age: 19 Years 7 Months 19 Days BMI: 23.057725694444446 weight classification: normal
Paitent name: ABD221 age: 50 Years 11 Months 0 Days BMI: 21.872422819032593 weight classification: normal
Paitent name: ABD176 age: 55 Years 11 Months 3 Days BMI: 21.132713440405748 weight classification: normal
Paitent name: ABD231 age: 55 Years 11 Months 29 Days BMI: 20.9572742022715 weight classification: normal
Paitent name: ABD321 age: 59 Years 11 Months 9 Days BMI: 20.429418362441915 weight classification: normal
Paitent name: ABD444 age: 57 Years 5 Months 6 Days BMI: 20.820939916716245 weight classification: normal
Paitent name: ABD401 age: 63 Years 9 Months 22 Days BMI: 21.513858510523864 weight classification: normal
Paitent name: ABD007 age: 22 Years 2 Months 17 Days BMI: 21.62964876033058 weight classification: normal
Paitent name: ABC008 age: 83 Years 1 Months 19 Days BMI: 19.77769866698311 weight classification: normal
Paitent name: ABC101 age: 86 Years 0 Months 13 Days BMI: 18.556773222226116 weight classification: normal
Paitent name: ABC201 age: 26 Years 10 Months 7 Days BMI: 20.303697560696612 weight classification: normal
</code></pre>
<p>I believe the issue is in my linked list class, in the while loop:</p>
<pre><code>while curr.next is not None:
if curr.next.data.points < data.points:
break
</code></pre>
<p>However, I'm not sure what I would need to adjust it to, although I do know it needs adjusting</p>
<p>Many thanks in advance!</p>
|
<p>You don't want your linked list to worry about how to compare the patients. Simply override the < operator for the patient class, so that comparing two patients will automatically give the order you want.</p>
<p>Now your linked list class doesn't care about all those fancy rules, because they are handled by the Patient class, as they should be. The linked list just checks</p>
<pre><code>while curr.next is not None:
if curr.next.data < data:
break
</code></pre>
<p>The way you do this is you define a function named __lt__ on the Patient class. The letters "lt" are short for "less than" and will be used when you use "<" in your code. Similarly, it will automatically be referenced in a sort.</p>
<p>see also <a href="https://stackoverflow.com/questions/17197953/is-it-possible-to-overload-the-multiple-comparison-syntax-in-python">Is it possible to overload the multiple comparison syntax in python?</a></p>
<p>Here's a quick demo:</p>
<pre><code>import random
class Patient:
def __init__(self, points, bmi):
self.points = points
self.bmi = bmi
def __str__(self):
return '{}, {}'.format(self.points, self.bmi)
def __lt__(self, other):
if self.points < other.points:
return True
if self.bmi < 18.5:
if self.bmi > other.bmi:
return True
else:
if self.bmi < other.bmi:
return True
return False
patients = [
Patient(32, 25),
Patient(32, 24),
Patient(32, 18),
Patient(32, 15),
Patient(20, 25),
Patient(20, 24),
Patient(20, 18),
Patient(20, 15),
]
random.shuffle(patients)
patients.sort()
for patient in patients:
print(patient)
</code></pre>
<blockquote>
<p>20, 18<br />
20, 15<br />
20, 24<br />
20, 25<br />
32, 18<br />
32, 24<br />
32, 15<br />
32, 25</p>
</blockquote>
|
python|linked-list
| 1 |
1,908,166 | 42,867,080 |
Split pandas dataframe into mutually exclusive subsets
|
<p>I am using regression tree analysis on data contained in a pandas dataframe. In order to perform V-fold cross validation, I need to split my data into V random, mutually exclusive subsets</p>
<p>Here is what I've worked out so far where I add a new column V = 10 to the dataframe to denote which subset each sample is a member of:</p>
<pre><code>def Vfold_Subsets(Data,V):
subs = Data
Data['V'] = V
N = Data.shape[0]
n = N//V
for v in range(1,V):
sample = subs.sample(n = n)
Data['V'][Data.index.isin(sample.index)] = v
subs.drop(sample.index)
return Data
</code></pre>
<p>This method works, but I have a feeling there is a better way to do it? A downside of this method is if N = 108, then</p>
<pre><code>for v in range(1,V+1):
print (v,': ',Data['V'][Data['V']==v].count())
</code></pre>
<p>returns:</p>
<pre><code>1 : 10
2 : 10
3 : 10
4 : 10
5 : 10
6 : 10
7 : 10
8 : 10
9 : 10
10 : 18
</code></pre>
<p>And I think it would be better if I could achieve something like this</p>
<pre><code>1 : 10
2 : 11
3 : 11
4 : 11
5 : 11
6 : 11
7 : 11
8 : 11
9 : 10
10 : 10
</code></pre>
<p>So that I don't lump all the remaining samples into the last bin. </p>
|
<p>Define your function</p>
<pre><code>def Vfold_Subsets(Data, V):
return Data.assign(
V=np.random.permutation(np.arange(len(Data))) % V)
</code></pre>
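<p>The modulo assignment keeps fold sizes within one of each other. A quick pure-Python check of the sizes this produces for N = 108, V = 10 (shuffling doesn't change the counts, so <code>random.shuffle</code> stands in for <code>np.random.permutation</code> here):</p>

```python
import random
from collections import Counter

N, V = 108, 10
labels = [i % V for i in range(N)]  # same multiset of labels as permutation(arange(N)) % V
random.shuffle(labels)              # randomises which row gets which fold

sizes = sorted(Counter(labels).values())
print(sizes)  # [10, 10, 11, 11, 11, 11, 11, 11, 11, 11]
```

<p>Note the folds come out numbered 0 to V-1 rather than 1 to V; add 1 to the column if the old numbering matters.</p>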
|
python|pandas|random
| 4 |
1,908,167 | 65,750,118 |
ValueError in NumPy: shapes not aligned
|
<p>The code below raises the error "ValueError: shapes (400,16,1) and (16,16) not aligned: 1 (dim 2) != 16 (dim 0)". How can I solve this problem? I want to create an image recognition algorithm using numpy only. Test images are 20*20 px. (Sorry for my English, I speak Russian.)</p>
<pre><code>from numpy import exp, array, random, dot, squeeze, asarray
from PIL import Image
images = []
for k in range(8):
im = Image.open(f'learn\\yes\\{k + 1}.png', 'r')
a = list(im.getdata())
pixel_values = []
for i in a:
pixel_values.append((i[0] + i[1] + i[2] / 3) / 1000)
images.append(pixel_values)
im = Image.open(f'learn\\no\\{k + 1}.png', 'r')
a = list(im.getdata())
pixel_values = []
for i in a:
pixel_values.append((i[0] + i[1] + i[2] / 3) / 1000)
images.append(pixel_values)
im = Image.open(f'test\\1.png', 'r')
a = list(im.getdata())
pixel_values = []
for i in a:
pixel_values.append((i[0] + i[1] + i[2] / 3) / 1000)
print(*images, sep='\n', end='\n\n')
print(pixel_values)
# print(pixel_values3)
training_set_inputs = array([images])
training_set_outputs = array([[1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0]]).T
random.seed(1)
print('processing...')
synaptic_weights = squeeze(asarray(3 * random.random((400, 1)) - 1))
for iteration in range(2):
print(f'starting iteration {iteration + 1}')
output = 1 / (1 + exp(-(dot(training_set_inputs, synaptic_weights))))
synaptic_weights += dot(training_set_inputs.T, (training_set_outputs - output) * output * (1 - output))
print('done!')
a = 1 / (1 + exp(-(dot(array(pixel_values), synaptic_weights))))[0]
print(a)
if a > 0.6:
print('yes')
else:
print('no')
</code></pre>
|
<p>I've solved the problem. The problem was here:</p>
<pre><code>training_set_inputs = array([images])
</code></pre>
<p>instead of</p>
<pre><code>training_set_inputs = array(images)
</code></pre>
|
python|numpy|python-imaging-library
| 0 |
1,908,168 | 50,761,015 |
calculate the days between two times in python with precision value
|
<p>I have two times like <code>t1= "2018-04-28 10:32:32+00:00"</code> and <code>t2= "2018-04-29 20:32:32+00:00"</code>.
I need to find the number of days between these two times. In the above example there is a 36-hour difference (meaning one and a half days), so I need the output as 1.5 (which means one day and a half day). How is this possible with Python?</p>
|
<p>You can use <code>timedelta</code> from the <code>datetime</code> module. I used dateutil's <code>parse</code> for more robust date parsing.</p>
<pre><code>In [2]: from dateutil.parser import parse
In [3]: d1 = parse("2018-04-28 10:32:32+00:00")
In [4]: d2 = parse("2018-04-29 20:32:32+00:00")
In [5]: de = d2 - d1
In [6]: de.total_seconds()/ (60 * 60 * 24)
Out[6]: 1.4166666666666667
</code></pre>
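<p>If you'd rather avoid the third-party <code>dateutil</code> dependency, the standard library parses this format too (Python 3.7+, where <code>%z</code> accepts a colon inside the UTC offset):</p>

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S%z"
t1 = datetime.strptime("2018-04-28 10:32:32+00:00", fmt)
t2 = datetime.strptime("2018-04-29 20:32:32+00:00", fmt)

days = (t2 - t1).total_seconds() / (60 * 60 * 24)
print(days)  # 1.4166666666666667 (34 hours)
```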
|
python|date
| 0 |
1,908,169 | 50,912,007 |
PyObject_Call segfaults when invoked with bound method
|
<p><code>PyObject_Call</code> segfaults when it is called with an instance of a bound method, but works fine when invoked with regular (unbound) procedures or with an instance of a class that implements <code>__call__</code>, or with subclasses of <code>type</code>.</p>
<p>Any reason it should work like this, or is this a bug? The behavior is consistent between v 3.5 and 3.6. Didn't try earlier versions.</p>
<p>PS. The stacktrace produced in this scenario doesn't even contain my code. However, if it matters, the crash happens inside</p>
<pre><code>method_dealloc () at Objects/classobject.c:194
</code></pre>
<p>which looks like this: <a href="https://github.com/python/cpython/blob/master/Objects/classobject.c#L194" rel="nofollow noreferrer">https://github.com/python/cpython/blob/master/Objects/classobject.c#L194</a></p>
<p>To preempt the immediate question: yes, I call <code>Py_INCREF(callable)</code> before calling this procedure.</p>
<hr>
<p>More info</p>
<p>When I try to look at what is being sent into this call, I see something like this:</p>
<pre><code>found method: <bound method DefParser.parse of <bound method DefParser.update_definition of <NULL>>>
</code></pre>
<p>The <code>DefParser.parse</code> and <code>DefParser.update_definition</code> are not exactly random, but also not exactly relevant: they are methods that have been called recently. I.e. I suspect that <code>PyObject_Call</code> itself isn't guilty, it's just the way method objects are represented... for some reason I seem to lose the reference, and instead hold on to garbage...</p>
|
<p>A lot of investigation found the actual error. It wasn't related to <code>PyObject_Call</code> in the end, but it may help others who might run into this situation.</p>
<p><code>PyObject_Call</code> is one of the Python C API which allocates memory. Python's memory allocator will opportunistically call GC. I'm not sure on specifics of when it decides to do it, but eventually it will happen. GC will then try to free memory allocated by Python objects.</p>
<p>What happened in my case: there was a string allocated using regular <code>malloc</code>, where I miscalculated the position of the terminating null byte (in some cases it would land one position past the memory I had requested). This memory was then used to create a Python <code>bytes</code> object. Later on, when GC de-allocated this object, it would segfault.</p>
|
python|python-3.x|cpython|python-c-api|python-extensions
| -1 |
1,908,170 | 44,961,116 |
Going from a for loop to arrays in Python
|
<pre><code>for i in range(2,Nx-2):
    lyr1[i]=lyr0[i]-coef*(lyr0[i]*(lyr0[i+1]-lyr0[i-1])/2+(dsqr/(deltax)**2)*(lyr0[i+2]-2*lyr0[i+1]+2*lyr0[i-1]-lyr0[i-2]))

lyr1=lyr0[0:Nx]-coef*(lyr0[0:Nx]*(lyr0[2:Nx]-lyr0[0:Nx-2])/2+(dsqr/(deltax)**2)*(lyr0[1:Nx+1]-2*lyr0[0:Nx]+2*lyr0[2:Nx-2]-lyr0[3:Nx-3]))
</code></pre>
<p>I'm trying to change my for loop (above) into an operation on arrays. I am currently getting a broadcast error, but I need to be able to only select portions of the array to match my numerical integration scheme. Any help is greatly appreciated. </p>
|
<p>The problem is that you use vectors of different dimensions (due to your slicing), e.g., <code>len(lyr0[0:Nx]) == 2 + len(lyr0[2:Nx])</code>. Numpy tries to <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer">broadcast</a> these operations to match the vector sizes, but they have incompatible dimensions (not being 1, not being multiples of each other etc.).</p>
<p>However, broadcasting is not what you want to do in the first place (judging from your loop implementation). It seems you rather want to do a look-ahead / look-back. Try shifting the values in <code>lyr0</code> explicitly by using numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.roll.html" rel="nofollow noreferrer">roll</a> function instead.</p>
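<p>To make that concrete, here is a sketch of the vectorized interior update (the grid size and the constants <code>coef</code>, <code>dsqr</code>, <code>deltax</code> are assumed sample values, not from the question). The key is to shift every neighbour term by the same offset so that all slices have the same length, <code>Nx - 4</code>:</p>

```python
import numpy as np

Nx = 12                                  # assumed grid size
coef, dsqr, deltax = 0.1, 0.5, 1.0       # assumed sample constants
lyr0 = np.linspace(0.0, 1.0, Nx)

# Reference: the original loop over the interior points i = 2 .. Nx-3
lyr1 = lyr0.copy()
for i in range(2, Nx - 2):
    lyr1[i] = lyr0[i] - coef * (
        lyr0[i] * (lyr0[i + 1] - lyr0[i - 1]) / 2
        + (dsqr / deltax**2) * (lyr0[i + 2] - 2 * lyr0[i + 1]
                                + 2 * lyr0[i - 1] - lyr0[i - 2])
    )

# Vectorized: every slice below has the same length, Nx - 4
vec = lyr0.copy()
c = lyr0[2:Nx - 2]                       # plays the role of lyr0[i]
vec[2:Nx - 2] = c - coef * (
    c * (lyr0[3:Nx - 1] - lyr0[1:Nx - 3]) / 2
    + (dsqr / deltax**2) * (lyr0[4:Nx] - 2 * lyr0[3:Nx - 1]
                            + 2 * lyr0[1:Nx - 3] - lyr0[0:Nx - 4])
)

assert np.allclose(lyr1, vec)
```

<p>Note that <code>np.roll(lyr0, -1)</code> likewise gives the <code>i+1</code> neighbour for every point at once, but it wraps around at the array boundaries, so you would still restrict the update to the interior points.</p>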
|
python|python-3.x|numpy
| 0 |
1,908,171 | 64,788,163 |
Scrapy spider pauses unexpectedly to continue after several hours
|
<p>I am running a Scrapy spider on 400 webpages. For the first days it was running as expected, scraping about 500 pages every minute. After that, however, the spider started to show some unexpected behavior; the log files showed that there were periods of longer than an hour (and often a couple of hours, see terminal output below) in which no pages were crawled. I am a bit puzzled about the reason for this behavior. Possible reasons I have ruled out:</p>
<ul>
<li>Internet slowdown: If there would be a break down of my internet connection, Scrapy would throw errors and would still be updating me on the number of pages crawled every minute (being 0 pages /minute in this case).</li>
<li>Throttling by websites: It couldn't be all websites at the same time, Scrapy would still update me on the number of pages crawled every minute, and it does not explain why crawling resumes afterwards.</li>
<li>CPU slowdown: Could be, but why does it continue afterwards then?</li>
</ul>
<p>What other reasons could explain the Scraper to pause for hours to continue afterwards again?</p>
<pre><code>2020-11-11 05:03:38 [scrapy.extensions.logstats] INFO: Crawled 1043749 pages (at 487 pages/min), scraped 940521 items (at 427 items/min)
2020-11-11 06:27:49 [scrapy.extensions.logstats] INFO: Crawled 1043771 pages (at 22 pages/min), scraped 940592 items (at 71 items/min)
2020-11-11 06:28:49 [scrapy.extensions.logstats] INFO: Crawled 1044370 pages (at 599 pages/min), scraped 941141 items (at 549 items/min)
</code></pre>
|
<p>Following @Gallaecio's suggestion that my system might just be struggling to print the INFO logs, I investigated the RAM consumption of my scraper using Task Manager. It soon turned out that after a day or so I was consuming most of my RAM. Inspecting the number of queued requests in the Telnet console showed that I had too many requests to keep in RAM.</p>
<p>I have tried to address this in two ways:</p>
<ol>
<li>I added a middleware that aimed to reduce the number of requests per domain (to prevent the spider from getting stuck on a few large domains). Following <a href="https://www.reddit.com/r/scrapy/comments/ikf2me/how_can_i_stop_scrapy_crawlspider_when_it_reaches/" rel="nofollow noreferrer">this post</a> I added this to my middlewares.py:</li>
</ol>
<blockquote>
<pre><code>from urllib.parse import urlparse
from threading import Lock

from scrapy.exceptions import IgnoreRequest, NotConfigured

class DomainlimitMiddleware:
    def __init__(self, settings):
        self.lock = Lock()
        self.domain_data = {}
        self.max_requests_per_domain = settings.getint('MAX_REQUESTS_PER_DOMAIN')
        if self.max_requests_per_domain < 1:
            raise NotConfigured()

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler.settings)

    def process_request(self, request, spider):
        parsed = urlparse(request.url)
        num_requests = 0
        with self.lock:
            num_requests = self.domain_data.get(parsed.netloc, 0)
            if num_requests == 0:
                self.domain_data[parsed.netloc] = 1
            else:
                self.domain_data[parsed.netloc] = num_requests + 1
        if num_requests > self.max_requests_per_domain:
            raise IgnoreRequest('Domain has hit the maximum number of requests processed')
        return None
</code></pre>
</blockquote>
<p>And activated it by adding this to my settings.py:</p>
<pre><code>MAX_REQUESTS_PER_DOMAIN = 50000
DOWNLOADER_MIDDLEWARES = {'<myproject>.middlewares.DomainlimitMiddleware': 543, }
</code></pre>
<ol start="2">
<li>Following <a href="https://stackoverflow.com/questions/30441191/memory-leak-in-scrapy">this post</a> I queued the requests on disk instead of in memory by adding this to my settings.py:</li>
</ol>
<pre><code>SCHEDULER_DISK_QUEUE = 'scrapy.squeue.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeue.FifoMemoryQueue'
</code></pre>
<p>Subsequently I ran my scraper in the commandline using:</p>
<pre><code>scrapy crawl {spidername} -s JOBDIR=crawls/{spidername}
</code></pre>
<p>The advantage of saving the requests to disk is that it also allows the scraper to pause and resume afterwards.</p>
|
python|scrapy
| 1 |
1,908,172 | 64,822,062 |
for loop: running a function for each element in a list, determining a range of 7 days
|
<p>I have two lists, one with dates (<code>list_dates</code>) in the format mm/dd/yyyy and another list (<code>my_list</code>) with strings.
Example:</p>
<pre><code>my_list=['my_string1', 'my_string2', 'my_string3']
list_dates=['11/20/2020','12/12/2019','07/12/2019']
</code></pre>
<p>I would need to run a function for all the items in <code>my_list</code> in a range of 7 days from its start_date, specifically:</p>
<ul>
<li>three days before its start_date;</li>
<li>three days after its start_date.</li>
</ul>
<p>In the example:</p>
<ul>
<li>'my_string1' has <code>start_date= 11/20/2020</code></li>
<li>'my_string2' has <code>start_date= 12/12/2019</code></li>
<li>'my_string3' has <code>start_date= 07/12/2019</code></li>
</ul>
<p>The structure of code that I would need should be:</p>
<ul>
<li>select the first item in <code>my_list</code>;</li>
<li>select its <code>start_date</code> in <code>list_dates</code>;</li>
<li>calculate the week range as follows: take <code>start_date</code> as <code>x</code> and calculate <code>x+3</code> and <code>x-3</code>. This will give its week range;</li>
<li>run a function <code>f</code>, where <code>start=start_date</code> and <code>end=start</code>;</li>
<li>collect results;</li>
<li>then run the function f where <code>start=start_date - 1</code> and <code>end=start</code>;</li>
<li>collect results;
...</li>
<li>run the function <code>f</code> where <code>start=start_date-7</code> and <code>end=start</code>;</li>
<li>collect results;</li>
<li>select the second item in <code>my_list</code></li>
<li>select its <code>start_date</code> in <code>list_dates</code>;</li>
<li>calculate the week range as follows: take <code>start_date</code> as <code>x</code> and calculate <code>x+3</code> and <code>x-3</code>. This will give its week range;</li>
<li>run a function <code>f</code>, where <code>start=start_date</code> and <code>end=start</code>;</li>
<li>collect results;</li>
<li>then run the function <code>f</code> where <code>start=start_date - 1</code> and <code>end=start</code>;</li>
<li>collect results;
...</li>
<li>run the function <em>f</em> where <code>start=start_date-7</code> and <code>end=start</code>;</li>
<li>collect results
... and repeat per all the elements in the list.</li>
</ul>
<p>I tried to do it as follows:</p>
<pre><code>from datetime import datetime, timedelta

date_range = 3
for i in my_list:
    for t in list_dates:
        t = datetime.strptime(t, '%m/%d/%Y')
        before_dates = [t - timedelta(i) for i in range(1, date_range+1)]
        after_dates = [t + timedelta(i) for i in range(1, date_range+1)]
        start = t
        end = start
        f(start, end, i)
</code></pre>
<p>but the loop does not work as I would expect, i.e. it does not do what I described in the bullets above.</p>
|
<p>Is this what you are looking for? This is your code, but it chooses the full seven-day range for each <code>my_list</code> entry, with its corresponding date from <code>list_dates</code> as the central date (I added the function <code>fun</code> just to print for clarification):</p>
<pre><code>from datetime import datetime, timedelta

date_range = 3
my_list = ['my_string1', 'my_string2', 'my_string3']
list_dates = ['11/20/2020', '12/12/2019', '07/12/2019']

def fun(start_date, end_date, my_string):
    print(start_date, end_date, my_string)
    return

for i in range(len(my_list)):
    print(my_list[i])
    #for t in list_dates:
    t = list_dates[i]  # assume my_list and list_dates always correspond index-wise
    start_date = t
    end_date = t
    t = datetime.strptime(t, '%m/%d/%Y')
    #before_dates = [t - timedelta(i) for i in range(date_range, 0, -1)]
    #after_dates = [t + timedelta(i) for i in range(1, date_range+1)]
    ba_dates = [t - timedelta(i) for i in range(date_range, -date_range-1, -1)]
    for isd in range(len(ba_dates)):
        #print(ba_dates[isd])
        #print(ba_dates[isd].strftime('%Y/%m/%d'))
        fun(ba_dates[isd].strftime('%Y/%m/%d'), ba_dates[isd].strftime('%Y/%m/%d'), my_list[i])
</code></pre>
<p>for an output of</p>
<pre><code>my_string1
2020/11/17 2020/11/17 my_string1
2020/11/18 2020/11/18 my_string1
2020/11/19 2020/11/19 my_string1
2020/11/20 2020/11/20 my_string1
2020/11/21 2020/11/21 my_string1
2020/11/22 2020/11/22 my_string1
2020/11/23 2020/11/23 my_string1
my_string2
2019/12/09 2019/12/09 my_string2
2019/12/10 2019/12/10 my_string2
2019/12/11 2019/12/11 my_string2
2019/12/12 2019/12/12 my_string2
2019/12/13 2019/12/13 my_string2
2019/12/14 2019/12/14 my_string2
2019/12/15 2019/12/15 my_string2
my_string3
2019/07/09 2019/07/09 my_string3
2019/07/10 2019/07/10 my_string3
2019/07/11 2019/07/11 my_string3
2019/07/12 2019/07/12 my_string3
2019/07/13 2019/07/13 my_string3
2019/07/14 2019/07/14 my_string3
2019/07/15 2019/07/15 my_string3
</code></pre>
|
python|for-loop
| 1 |
1,908,173 | 61,493,359 |
Write to a csv file with a blank element. Python / CSV
|
<p>I'm trying to scrape elements from a page. If the element (always the Middle name) doesn't exist, I can easily use a try/except to get past it in the script... until it tries to save to csv. I'll get a writerow error: <code>NameError: name 'Middle' is not defined</code> How can I just save 'NA' or a blank field to the csv file?</p>
<pre><code>import csv

First = #site element for first name
Last = #site element for last name

try:
    Middle = #site element for middle name
    print(Middle)
except:
    print('NA')

with open('test.csv', 'a', newline="") as f:
    writer = csv.writer(f)
    writer.writerow([First, Last, Middle])
</code></pre>
|
<p><code>Middle</code> is never assigned if the element doesn't exist, because the assignment inside your try/except block is what fails (you should also consider catching the specific exception rather than using a bare <code>except</code>).</p>
<pre><code>try:
    Middle = #site element for middle name
    print(Middle)
except:
    print('NA')
</code></pre>
<p>So you try to set <code>Middle</code> to whatever element is there, and if the element doesn't exist you just ignore it. In your except block, assign <code>Middle = 'N/A'</code> instead of just printing 'NA':</p>
<pre><code>try:
    Middle = #site element for middle name
    print(Middle)
except:
    Middle = "N/A"
    print('NA')
</code></pre>
<p>This causes middle to actually have something assigned and thus it won't throw the error you're seeing.</p>
<p>As noted in the comments, you should define <code>Middle</code> before the try/except block so that it always has a value:</p>
<pre><code>import csv

First = #site element for first name
Last = #site element for last name
Middle = 'N/A'  # this will be overwritten if the element exists in the try/except below

try:
    Middle = #site element for middle name
    print(Middle)
except:
    print('NA')

with open('test.csv', 'a', newline="") as f:
    writer = csv.writer(f)
    writer.writerow([First, Last, Middle])
</code></pre>
|
python-3.x|csv|export-to-csv
| 0 |
1,908,174 | 61,404,673 |
Django: What is the best way to import my custom user in views.py?
|
<p>Actually i have created my own custom-user <strong>(MyUser)</strong> in models.py</p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractUser

class MyUser(AbstractUser):
    country = models.CharField(max_length=20)
</code></pre>
<p>settings.py</p>
<pre><code>AUTH_USER_MODEL = 'new_user.MyUser'
</code></pre>
<p>I created a simple function in views.py to create new users.</p>
<pre><code>from django.shortcuts import render, redirect
from django.http import HttpResponse
from django.contrib.auth import get_user_model
from django.conf import settings

def save_account(request):
    MyUser = get_user_model()
    if request.method == "POST":
        name = request.POST.get("name")
        email = request.POST.get("email")
        password1 = request.POST.get("password1")
        password2 = request.POST.get("confirm-password")
        country = request.POST.get("country")
        new_user = MyUser.objects.create_user(username=name, email=email, password=password1, country=country)
        new_user.save()
        return redirect('/')
    else:
        return HttpResponse("404 Not Found")
</code></pre>
<p><strong>1.</strong> Here I use the get_user_model() function to get my current custom user.</p>
<p><strong>2.</strong> I have a second option: import the MyUser model from models.py and then use it.</p>
<pre><code>from .models import MyUser
</code></pre>
<p><strong>3.</strong> I have a third option: import <strong>AUTH_USER_MODEL</strong> directly from settings.py and use it like this:</p>
<pre><code>from django.conf import settings
MyUser=settings.AUTH_USER_MODEL
</code></pre>
<p>but the third option returns a string instead of my custom user. Why?</p>
<p>I just want to know which option is best for importing the custom user in views.py, and why.</p>
|
<p>Your first two options are both ok. Using <code>get_user_model</code> makes it easier to reuse the app without changing the import. However many Django projects are not meant to be re-used, in which case the explicit import makes it clearer where the user model is imported from.</p>
<p><code>settings.AUTH_USER_MODEL</code> is meant to be a string. You set it to a string when you did <code>AUTH_USER_MODEL = 'new_user.MyUser'</code>, and Django doesn't automatically change it to the model instance when you access it. It's useful in foreign keys because it means you don't need to import the model. For example:</p>
<pre><code>class MyModel(models.Model):
    author = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        on_delete=models.CASCADE,
    )
</code></pre>
|
python|django|django-custom-user
| 3 |
1,908,175 | 61,447,819 |
Notepad++ Pythonscript Can I import a module from the same folder as my pythonscript program?
|
<p>I have a pythonscript TestOnly.py running successfully in Notepad++ (using a shortcut Shift-F9 to run it). TestOnly.py is in
C:\Users\User\AppData\Roaming\Notepad++\plugins\config\PythonScript\scripts\myfolder</p>
<pre><code>console.clear()
console.show()
from myDict import *
print months
</code></pre>
<p>This works fine when myDict.py is in
C:\Users\User\AppData\Roaming\Notepad++\plugins\config\PythonScript\scripts</p>
<p>But I want myDict.py to be in the same folder as TestOnly.py i.e.
C:\Users\User\AppData\Roaming\Notepad++\plugins\config\PythonScript\scripts\myfolder</p>
<p>When I run TestOnly.py I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\User\AppData\Roaming\Notepad++\plugins\Config\PythonScript\scripts\myFolder\TestOnly.py", line 3, in <module>
from myDict import *
ImportError: No module named myDict
</code></pre>
<p>I have put an empty __init__.py file in both folders but they don't seem to have any effect. Can anyone explain a simple way of getting around this?</p>
|
<p>The <strong>simple</strong> answer is:</p>
<p>Place an empty <code>__init__.py</code> in myfolder:</p>
<pre><code>console.clear()
console.show()
from myfolder.myDict import *
print months
</code></pre>
|
python|module|notepad++
| 0 |
1,908,176 | 61,278,373 |
Is there a way to use fewer "if" statements and shorten this code?
|
<pre><code>h = int(input("Enter your working hours in a week:"))
rate = 8
if ((h < 0) or (h > 168)):
    print("INVALID")
elif h <= 40:
    print("YOU MADE", rate*h, "DOLLARS THIS WEEK")
elif 41 <= h <= 50:
    print("YOU MADE", int(40 * rate + (h - 40) * (1.129 * rate)), "DOLLARS THIS WEEK")
else:
    print("YOU MADE", int(40 * rate + (h - 40) * 1.20373 * rate), "DOLLARS THIS WEEK")
</code></pre>
|
<p>First (easy solution): the problem can be solved easily by just replacing the 'and' operator with the 'or' operator in line 3.</p>
<p>You can also force the user to keep entering a number until it satisfies your condition, and add an extra safety check that the input is an integer, using the code below:</p>
<pre><code>good_Input = False
while not good_Input:
    try:
        number = int(input('Enter a number: '))
        if number > 0 and number < 168:
            good_Input = True
            print("that's a good number. Well done!")
        else:
            print("that's not a good number. Try again: ")
    except ValueError:
        print("that's not an integer. Try again: ")
</code></pre>
<p>Just add this snippet at the start of your code and bingo!</p>
|
python|python-3.x|printing|format
| 1 |
1,908,177 | 60,621,326 |
Reversing the order of a nested array in a 3D matrix
|
<p>I have a 3D matrix, or rather, an array of a 2D matrix, and I would like to reverse the order of the deepest dataset.</p>
<p>So say I have:</p>
<pre><code>array([[[ 1 , 2 , 3 , ...,
7 , 8 , 9 ],
...,
[ 10 , 11 , 12 , ...,
16 , 17 , 18 ]],
[[ 19 , 20 , 21 , ...,
25 , 26 , 27 ],
...,
[ 28 , 29 , 30 , ...,
34 , 35 , 36 ]]])
</code></pre>
<p>I would want it to be</p>
<pre><code>array([[[ 9 , 8 , 7 , ...,
3 , 2 , 1 ],
...,
[ 18 , 17 , 16 , ...,
12 , 11 , 10 ]],
[[ 27 , 26 , 25 , ...,
21 , 20 , 19 ],
...,
[ 36 , 35 , 34 , ...,
30 , 29 , 28 ]]])
</code></pre>
<p>I am currently achieving this result by using the following:</p>
<pre><code>reordered_list = []
for i in range(ts):
    inner_list = []
    for j in range(M_y):
        inner_list.append(original_array[i][j][::-1])
    reordered_list.append(inner_list)
reordered_array = np.array(reordered_list)
</code></pre>
<p>but wondered if there was a more efficient route to follow.</p>
<p>Thanks in advance.</p>
|
<p>you can use:</p>
<pre><code>import numpy as np

a = np.array([[[ 1,  2,  3,  7,  8,  9],
               [10, 11, 12, 16, 17, 18]],
              [[19, 20, 21, 25, 26, 27],
               [28, 29, 30, 34, 35, 36]]])
a[:, :, ::-1]
</code></pre>
<p>output:</p>
<pre><code>array([[[ 9,  8,  7,  3,  2,  1],
        [18, 17, 16, 12, 11, 10]],

       [[27, 26, 25, 21, 20, 19],
        [36, 35, 34, 30, 29, 28]]])
</code></pre>
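<p>As a side note, the same reversal can be written with <code>numpy.flip</code>, which some find more readable than the slice notation; for the last axis the two are equivalent:</p>

```python
import numpy as np

a = np.array([[[1, 2, 3],
               [4, 5, 6]],
              [[7, 8, 9],
               [10, 11, 12]]])

# Reverse the deepest (last) axis; equivalent to a[:, :, ::-1]
flipped = np.flip(a, axis=-1)

assert np.array_equal(flipped, a[:, :, ::-1])
print(flipped[0, 0])  # [3 2 1]
```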
|
python|arrays|multidimensional-array|reverse
| 2 |
1,908,178 | 58,040,020 |
How to run python code without the tensorflow warning
|
<p>I am using python 3.6.9 and tensorflow 1.14.0 on windows 10 and have installed and updated all libraries on python 3.6. When I run my code I expect it to just print out the json data but instead I get a big tensorflow/tflearn error and I don't know why. </p>
<p>My code :</p>
<pre><code>import nltk
import numpy
import tflearn
import tensorflow
import random
import json
from nltk.stem.lancaster import LancasterStemmer

stemmer = LancasterStemmer()

with open("intents.json") as file:
    data = json.load(file)

print(data)
</code></pre>
<p>Error I get :</p>
<pre><code>C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorflow\python\framework\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tensorboard\compat\tensorflow_stub\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
WARNING:tensorflow:From C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tflearn\helpers\summarizer.py:9: The name tf.summary.merge is deprecated. Please use tf.compat.v1.summary.merge instead.
WARNING:tensorflow:From C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tflearn\helpers\trainer.py:25: The name tf.summary.FileWriter is deprecated. Please use tf.compat.v1.summary.FileWriter instead.
WARNING:tensorflow:From C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tflearn\collections.py:13: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.
WARNING:tensorflow:From C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tflearn\config.py:123: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.
WARNING:tensorflow:From C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tflearn\config.py:129: The name tf.add_to_collection is deprecated. Please use tf.compat.v1.add_to_collection instead.
WARNING:tensorflow:From C:\Users\PC\PycharmProjects\Machine Learning\venv\lib\site-packages\tflearn\config.py:131: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.
</code></pre>
<p>What do I do to get rid of that error and run my code properly?</p>
|
<p>Try these commands:</p>
<pre><code>pip install tensorflow==1.14.0
pip install numpy==1.16.4
</code></pre>
|
python-3.x|tensorflow|tflearn
| 1 |
1,908,179 | 57,886,220 |
Function that takes n rows as input and returns column names if sum in column equals n
|
<p>I have a large <code>DataFrame</code> that structures as follows:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({'name1': [1, 0, 1, 1],
                   'name2': [0, 0, 0, 1],
                   'name3': [1, 1, 1, 1],
                   'namen': [0, 0, 0, 0]},
                  index=['label1', 'label2', 'label3', 'labeln'])

>>> df
        name1  name2  name3  name4
label1      1      0      1      1
label2      0      0      0      1
label3      1      1      1      1
label4      0      0      0      0
</code></pre>
<p>I am trying to build a function that takes <strong>n</strong> row names as arguments, sums up the values in all columns, and returns the column names whose sum equals <strong>n</strong>.</p>
<p>For instance, using label1, label2 and label3 as inputs I would like to obtain the following output:</p>
<pre class="lang-py prettyprint-override"><code>def common_terms(*nargs):
the function...
>>> common_terms(label1, label2, label3)
(name4)
</code></pre>
<p>or</p>
<pre><code>>>> common_terms(label1, label3)
(name1, name3)
</code></pre>
<p>I have little knowledge of building functions in Python, but got my head really stuck on this. Could you kindly help me to progress?</p>
|
<p>Filter the rows with <code>loc</code>, test whether each column is all <code>1</code>s, then filter the <code>index</code> of the resulting boolean <code>Series</code>:</p>
<pre><code>def common_terms(*nargs):
    i = df.loc[list(nargs)].all()
    return i.index[i].tolist()

print(common_terms('label1', 'label2', 'label3'))
['namen']

print(common_terms('label1', 'label3'))
['name1', 'namen']
</code></pre>
|
python|pandas|pandas-loc
| 1 |
1,908,180 | 56,420,528 |
Create a function that uses a list as an argument to extract rows from a CSV
|
<p>I'm trying to pass a list as an argument to a function that will grab a row from a csv if it contains a string in the list provided. I can't get the index to change on itemA. It only prints the last item of the list!</p>
<pre class="lang-py prettyprint-override"><code>GAS = [
    "SUNOCO",
    "CUMBERLAND",
    "MOBIL"]

gasLength = len(GAS)
print(gasLength)

def parseData(csvToParse=transactionsCSV, itemA="", itemB=""):
    #For loop to append to CSV
    for row in csvToParse:
        if itemA in row[3]:
            csv_personA.writerow([row[0],row[1],row[2],row[3],row[4],row[5]])
            print(row[3])
            print(itemA)
        elif itemB in row[3]:
            csv_personB.writerow([row[0],row[1],row[2],row[3],row[4],row[5]])

#This was suggested but still only returns the GAS index of 0
for counter, _ in enumerate(range(gasLength)):
    parseData(csvToParse=transactionsCSV, itemA=GAS[counter], itemB="")

for _ in range(gasLength):
    x = gasLength-1
    parseData(csvToParse=transactionsCSV, itemA=GAS[x], itemB="")

# My first attempt is below!!!
#Get gas purchases
def parseGasStations():
    x = 0
    itemsToCheck = row_count*gasLength
    print(itemsToCheck)
    #while x is less than the total of items in the main csv times the number of items in the gas array
    while x < itemsToCheck:
        a = 0
        y = 0
        #while a is less than the total number of rows in the main csv
        while a < row_count:
            print(GAS[y])
            for _ in range(gasLength):
                parseData(csvToParse=transactionsCSV, itemA=GAS[gasLength-1], itemB="")
            if y != gasLength-1:
                y += 1
            elif y == gasLength-1:
                y = 0
            a += 1
        x += 1

parseGasStations()
</code></pre>
<p><a href="https://i.stack.imgur.com/zQso7.png" rel="nofollow noreferrer">csv output</a></p>
<p>The output is only appending the MOBIL stations to the CSV and not indexing through the list like I thought it would.</p>
|
<p>Thanks to Fluxens I was able to figure this out!
Here's a function that takes a list as a parameter and indexes through all the items!</p>
<pre class="lang-py prettyprint-override"><code>GAS = (
    "SUNOCO",
    "CUMBERLAND",
    "MOBIL",
    "BESTWAY",
    "AMORE FUEL")

gasLength = len(GAS)

def parseData(csvToParse="", catagory=(), catagorySize=""):
    """[Create an invite code then kicks the user]"""
    #For loop to check each row in master csv
    for row in csvToParse:
        #For loop to index through catagory items to look for in each row
        for counter, _ in enumerate(range(catagorySize)):
            if catagory[counter] in row[3]:
                csv_mark.writerow([row[0],row[1],row[2],row[3],row[4],row[5]])
                print(row[3])
                print(catagory)

parseData(csvToParse=transactionsCSV, catagory=GAS, catagorySize=gasLength)
</code></pre>
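<p>A slightly more idiomatic sketch of the same idea (a hypothetical rewrite, not the code above): <code>any()</code> replaces the manual counter loop, and <code>len()</code> makes the separate <code>catagorySize</code> parameter unnecessary. The sample rows below are made-up stand-ins for the transactions CSV:</p>

```python
GAS = ("SUNOCO", "CUMBERLAND", "MOBIL", "BESTWAY", "AMORE FUEL")

def parse_data(rows, category):
    """Return the rows whose 4th column mentions any category item."""
    return [row for row in rows
            if any(name in row[3] for name in category)]

sample = [
    ["01/01", "x", "y", "MOBIL STATION 42", "a", "-20.00"],
    ["01/02", "x", "y", "GROCERY STORE", "a", "-55.10"],
    ["01/03", "x", "y", "SUNOCO 0001", "a", "-31.50"],
]

matches = parse_data(sample, GAS)
# matches could then be written out with csv.writer(f).writerows(matches)
print([row[3] for row in matches])  # ['MOBIL STATION 42', 'SUNOCO 0001']
```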
|
python|list|function|parameters
| 0 |
1,908,181 | 71,636,721 |
Edit a value in a dict
|
<p>I have the following dict:</p>
<pre><code>dict = {'1:': ('Mercedes,', '200,', '10000USD'),
'2:': ('BMW,', '150,', '12000USD'),
'3:': ('Jeep,', '30,', '8000USD')}
</code></pre>
<p>What is the right function to edit values (name, quantity, price)</p>
<p>I want the user to input the ID, and then let him edit the information of the mentioned ID</p>
|
<p>The values are tuples, which are immutable, so you can't edit their elements in place. You can, however, replace the whole tuple stored under an ID with a new one (or store lists instead of tuples if you want to edit individual fields).</p>
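<p>For example, a minimal sketch (data taken from the question; the dictionary is renamed <code>cars</code> to avoid shadowing the built-in <code>dict</code>, and the interactive <code>input()</code> calls are replaced by plain arguments here):</p>

```python
cars = {'1:': ('Mercedes,', '200,', '10000USD'),
        '2:': ('BMW,', '150,', '12000USD'),
        '3:': ('Jeep,', '30,', '8000USD')}

def edit_entry(data, car_id, name, quantity, price):
    """Replace the whole tuple stored under car_id with a new one."""
    if car_id not in data:
        raise KeyError(f"No entry with ID {car_id!r}")
    data[car_id] = (name, quantity, price)

# Interactively you would gather these with input("ID: "), input("Name: "), ...
edit_entry(cars, '2:', 'BMW,', '160,', '11500USD')
print(cars['2:'])  # ('BMW,', '160,', '11500USD')
```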
|
python
| 1 |
1,908,182 | 69,359,633 |
Python Pandas Dataframe: Divide values in two rows based on column values
|
<p>I have a pandas dataframe:</p>
<pre><code>A B C D
1 1 0 32
1 4
2 0 43
1 12
3 0 58
1 34
2 1 0 37
1 5
[..]
</code></pre>
<p>where A, B and C are index columns. What I want to compute is for every group of rows with unique values for A and B: D WHERE C=1 / D WHERE C=0.</p>
<p>The result should look like this:</p>
<pre><code>A B NEW
1 1 4/32
2 12/43
3 58/34
2 1 37/5
[..]
</code></pre>
<p>Can you help me?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a> first, so that columns <code>0</code> and <code>1</code> can be divided:</p>
<pre><code>new = df['D'].unstack()
new = new[1].div(new[0]).to_frame('NEW')
print (new)

          NEW
A B
1 1  0.125000
  2  0.279070
  3  0.586207
2 2  0.135135
</code></pre>
python|pandas|dataframe
| 3 |
1,908,183 | 55,448,185 |
Discord.py Bot Kick/DM Command
|
<p>I'm learning how to create a discord bot using python and I'm having trouble with this one command. What I'm trying to do is kick a specific user and then dm them a invite back to the discord server using the bot. It is a silly idea but I really want to make it work.</p>
<p>What I am specifically having trouble with is how to kick a specific user (by user ID) and then DM that user.</p>
<p>Thanks!</p>
<p>Here is the code:</p>
<pre><code>if message.content == '!kickjohn':
if "527290609166188554" in [role.id for role in message.author.roles]:
<KICK JOHN COMMAND>
await client.send_message(message.channel, "_**Bye Bye John!**_")
await client.send_message(<JOHN>, 'https://discord.gg/XXXXXXX')
else:
await client.send_message(message.channel, "sorry you can't do that")
</code></pre>
<p>The goal of this is that if someone of the appropriate role types <code>!kickjohn</code> a specific discord user id (<code>john</code>) gets kicked and the bot automatically dm's john an invite to the server.</p>
|
<p>I think you should use a command to make it easier. If you have an <code>on_message</code> function, add <code>await bot.process_commands(message)</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>@bot.event
async def on_message(message):
await bot.process_commands(message)
</code></pre>
<pre class="lang-py prettyprint-override"><code>@commands.has_role("role_name_here")#makes it so that only works with specific role
@bot.command(pass_context=True)
async def kick(msg,user:discord.Member): #this converts the member you mention into a user object, you can also do it by user id or server nickname if you don't want to mention them
"""[Create an invite code then kicks the user]
"""
code=await bot.create_invite(msg.message.channel) #create the invite code
await bot.send_message(user,f'{code}') #Send the invite code first because you must have common server to send message to user
await bot.kick(user) #kick the user
</code></pre>
|
python|discord|discord.py
| 0 |
1,908,184 | 55,213,959 |
How to generate POST and GET request by python twisted HTTPClient?
|
<p>I am writing an HTTP Client. It is a simple mock bank site. It needs to be able to send two kinds of request out:</p>
<ol>
<li><p>When the user logs in:</p>
<pre><code>POST /login?user=bob&pass=abc123 HTTP/1.1
Host: bank.com
</code></pre></li>
<li><p>When the user transfers money:</p>
<pre><code>GET /transfer?to=badguy&amt=100 HTTP/1.1
Host: bank.com
Cookie: login=fde874
</code></pre></li>
</ol>
<p>I am implementing it by python twisted, I write a subclass of HTTPClient:</p>
<pre><code>from twisted.internet import defer, task
from twisted.internet.protocol import ClientFactory
from twisted.web.http import HTTPClient

class BankClient(HTTPClient):
    def genReq(self):
        # How to write code to generate and send the two requests?
        pass

    def connectionMade(self):
        self.genReq()

class BankClientFactory(ClientFactory):
    protocol = BankClient
    def __init__(self):
        self.done = defer.Deferred()

def main(reactor):
    factory = BankClientFactory()
    reactor.connectTCP('localhost', 8080, factory)
    return factory.done

if __name__ == '__main__':
    task.react(main)
</code></pre>
|
<p>You want to stop using <code>HTTPClient</code>. Instead, use <a href="https://twistedmatrix.com/documents/current/web/howto/client.html#the-agent" rel="nofollow noreferrer"><code>Agent</code></a> or the third-party <a href="https://treq.readthedocs.io/en/release-17.8.0/" rel="nofollow noreferrer"><code>treq</code></a>.</p>
<p>To generate <code>GET</code> and <code>POST</code> with <code>Agent</code>:</p>
<pre><code>from twisted.web.client import Agent
from twisted.internet import reactor
agent = Agent(reactor)
d_get = agent.request(b"GET", uri)
d_post = agent.request(b"POST", uri)
</code></pre>
<p>To generate <code>GET</code> and <code>POST</code> with <code>treq</code>:</p>
<pre><code>import treq
d_get = treq.get(url)
d_post = treq.post(url)
</code></pre>
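<p>The query strings from the question can be built safely with the standard library before handing the URL to <code>Agent</code> or <code>treq</code> (the host name here is just the one from the question):</p>

```python
from urllib.parse import urlencode

# urlencode handles escaping, so values with special characters stay valid.
login_url = "http://bank.com/login?" + urlencode({"user": "bob", "pass": "abc123"})
transfer_url = "http://bank.com/transfer?" + urlencode({"to": "badguy", "amt": 100})
# e.g. treq.post(login_url) or agent.request(b"GET", transfer_url.encode())
```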
|
python|http|web|client|twisted
| 0 |
1,908,185 | 55,303,718 |
Converting Pandas to Numpy
|
<p>I am trying to implement the following code, which was written with pandas, into a more generic version using only Numpy. The code is also <a href="https://medium.com/@rakendd/decision-tree-from-scratch-9e23bcfb4928" rel="nofollow noreferrer">found here</a>:</p>
<pre><code>attribute = 'Taste'
target_variables = df.Eat.unique() #This gives all 'Yes' and 'No'
variables = df[attribute].unique() #This gives different features in that attribute (like 'Sweet')
entropy_attribute = 0
for variable in variables:
entropy_each_feature = 0
for target_variable in target_variables:
num = len(df[attribute][df[attribute]==variable][df.Eat ==target_variable]) #numerator
den = len(df[attribute][df[attribute]==variable]) #denominator
fraction = num/(den+eps) #pi
entropy_each_feature += -fraction*log(fraction+eps) #This calculates entropy for one feature like 'Sweet'
fraction2 = den/len(df)
entropy_attribute += -fraction2*entropy_each_feature #Sums up all the entropy ETaste
</code></pre>
<p>Here is my attempt so far:</p>
<pre><code>def entropy_by_attribute(dataset, feature):
    attribute = dataset[:, feature]
target_variables = numpy.unique(dataset[:,-1])
variables = numpy.unique(attribute)
entropy_attribute = 0
for variable in variables:
entropy_each_feature = 0
for target_variable in target_variables:
num =
den =
fraction = num / (den + eps)
entropy_each_feature = entropy_each_feature + (-fraction*log(fraction+eps))
fraction2 = den/len(dataset)
entropy_attribute = entropy_attribute + (-fraction2*entropy_each_feature)
return abs(entropy_attribute)
</code></pre>
<p><strong>What I am confused about is how to convert the numerator and denominator lines. I don't understand what <code>len(df[attribute][df[attribute]==variable][df.Eat ==target_variable])</code> is doing.</strong> </p>
<p>For reference, here is the dataset the pandas example is using:</p>
<pre><code>dataset = {'Taste':['Salty','Spicy','Spicy','Spicy','Spicy','Sweet','Salty','Sweet','Spicy','Salty'],
'Temperature':['Hot','Hot','Hot','Cold','Hot','Cold','Cold','Hot','Cold','Hot'],
'Texture':['Soft','Soft','Hard','Hard','Hard','Soft','Soft','Soft','Soft','Hard'],
'Eat':['No','No','Yes','No','Yes','Yes','No','Yes','Yes','Yes']}
</code></pre>
<p>Can someone help me understand the <code>num</code> and <code>den</code> declarations so I can continue this conversion? I do not understand what they represent in this instance, or what <code>eps</code> is. </p>
<p>Thank you</p>
|
<p><code>num</code> calculates the amount of times there is a row that both contains the target variable of Eat ('Yes' or 'No') and the variable attribute .</p>
<p><code>den</code> calculates the amount of times the variable attribute is present.</p>
<p>Let's take 'Salty' as Taste attribute and 'No' as target variable of Eat. Now the numerator will be 2 since we have two ('No', 'Salty') pairs and the denominator will be three since there are three appearances of 'Salty'.</p>
<p>If you look at the link you provided, you'll find the meaning of eps.
<code>eps = np.finfo(float).eps</code></p>
<p>I quote: </p>
<blockquote>
<p>‘eps’ here is the smallest representable number. At times we get log(0) or 0 in the denominator, to avoid that we are going to use this.</p>
</blockquote>
<p>Eps is used to avoid dividing by zero by using the smallest possible number in that case.</p>
<p>Calculating <code>num</code> and <code>den</code> without using pandas can be done like this:</p>
<pre><code>attribute = 'Taste'
variables = numpy.unique(dataset[attribute])
target_variables = numpy.unique(dataset["Eat"])
entropy_attribute = 0
for variable in variables:
entropy_each_feature = 0
for target_variable in target_variables:
num = 0
den = 0
for index in range(0,len(dataset[attribute])):
if dataset[attribute][index] == variable and dataset["Eat"][index] == target_variable:
num += 1
if dataset[attribute][index] == variable:
den += 1
</code></pre>
<p>There might be a faster/cleaner way using numpy operations instead of index based comparisons but I'm not very familiar with it.</p>
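<p>One such vectorized version, using boolean masks over plain numpy arrays built from the question's dataset:</p>

```python
import numpy as np

taste = np.array(['Salty', 'Spicy', 'Spicy', 'Spicy', 'Spicy',
                  'Sweet', 'Salty', 'Sweet', 'Spicy', 'Salty'])
eat = np.array(['No', 'No', 'Yes', 'No', 'Yes',
                'Yes', 'No', 'Yes', 'Yes', 'Yes'])

# num: rows matching both the attribute value and the target value;
# den: rows matching just the attribute value.
num = int(np.sum((taste == 'Salty') & (eat == 'No')))
den = int(np.sum(taste == 'Salty'))
```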
|
python|pandas|numpy
| 0 |
1,908,186 | 55,405,725 |
How do I execute a Python program from within a Flask server?
|
<p>I'm creating a Flask server that will include various Python programs. I want the Administrator to be able to execute them from WITHIN the Flask app. For example, if they click a button, it would execute the Python script.</p>
<p>Currently, the Python file and Flask server execute sequentially when I run the server. As in, the Python file executes first and THEN the Flask server runs (but only AFTER I've terminated the Python program).</p>
<p>For reference, the Python file allows the user to plot points on an image by double clicking on it in a window.</p>
<p>routes.py</p>
<pre><code>from app import app, db
from flask import Flask, request, render_template, session
import datasets
app.secret_key = app.config['SECRET_KEY']
# -------------- IRRELEVANT CODE BEGINS ---------------
@app.route('/')
@app.route('/index')
def index():
#Create connection session between SQLAlchemy database and server
#Select ALL records in tables Lot
#Store queries in data and push to INDEX template
data = db.session.execute("SELECT * FROM Lot").fetchall()
return render_template('index.html', data=data)
@app.route('/info/<lot_id>')
def info(lot_id):
lotid = lot_id
#Create connection session between SQLAlchemy database and server
#Select records in table Spot based on LOT_ID parameter
#Store queries in data and push to INFO template
data = db.session.execute("SELECT * FROM Spot WHERE lot_id = :lotid;", {"lotid": lotid}).fetchall()
return render_template('info.html', data=data)
# ------------ IRRELEVANT CODE ENDS --------------
@app.route('/test')
def test():
return datasets.click_and_crop()
if __name__ == '__main__':
app.run(host='0.0.0.0', port='8000', debug=True)
</code></pre>
<p>datasets.py</p>
<pre><code>import cv2
import yaml
import numpy as np
file_path = 'parking_spots.yml'
img = cv2.imread('test1.jpg')
refPt = []
data = []
cropping = False
def yaml_loader(file_path):
with open(file_path, "r") as file_descr:
data = yaml.load(file_descr)
return data
def yaml_dump(file_path, data):
with open(file_path, "a") as file_descr:
yaml.dump(data, file_descr)
def click_and_crop(event, x, y, flags, param):
info = {'id': 0, 'points': []}
global refPt, cropping
if event == cv2.EVENT_LBUTTONDBLCLK:
refPt.append((x,y))
cropping = False
if len(refPt) == 4:
if data == []:
if yaml_loader(file_path) != None:
new_data = len(yaml_loader(file_path))
else:
new_data = 0
else:
if yaml_loader(file_path) != None:
new_data = len(data) + len(yaml_loader(file_path))
else:
new_data = len(data)
cv2.line(image, refPt[0], refPt[1], (0, 0, 255), 2)
cv2.line(image, refPt[1], refPt[2], (0, 0, 255), 2)
cv2.line(image, refPt[2], refPt[3], (0, 0, 255), 2)
cv2.line(image, refPt[3], refPt[0], (0, 0, 255), 2)
corner_1 = list(refPt[2])
corner_2 = list(refPt[3])
corner_3 = list(refPt[0])
corner_4 = list(refPt[1])
info['points'] = [corner_1, corner_2, corner_3, corner_4]
info['id'] = new_data + 1
data.append(info)
refPt = []
image = cv2.resize(img, None, fx=0.6, fy=0.6)
clone = image.copy()
cv2.namedWindow("Click to mark points")
cv2.imshow("Click to mark points", image)
cv2.setMouseCallback("Click to mark points", click_and_crop)
while True:
cv2.imshow("Click to mark points", image)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# data list into yaml file
if data != []:
yaml_dump(file_path, data)
cv2.destroyAllWindows()
</code></pre>
<p>In this case, I'd expect the Python program to run only when you go the <code>localhost/test</code> url. The Flask server would run and then, when a button that takes you to that url is clicked, the Python program runs concurrently until its terminated.</p>
|
<p>For anybody that finds themselves here, I figured out the solution to this particular problem.</p>
<p>As @roganjosh mentioned, the <code>import</code> statement was executing the functions in <code>datasets</code> as soon as it was interpreted. To prevent this, you need to include <code>if __name__ == "__main__"</code> in the file, which tells the interpreter to execute that code only when the file is run directly, NOT when it is pulled in via <code>import</code>.</p>
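<p>A throwaway demonstration of that guard (the module name <code>demo_mod</code> is made up for the example):</p>

```python
import os
import subprocess
import sys
import tempfile

# A module with one top-level print and one guarded print.
src = 'print("top level")\nif __name__ == "__main__":\n    print("main only")\n'
folder = tempfile.mkdtemp()
path = os.path.join(folder, "demo_mod.py")
with open(path, "w") as f:
    f.write(src)

# Run directly: the guarded block executes too.
ran = subprocess.check_output([sys.executable, path], text=True)

# Import it instead: only the top-level code executes.
imported = subprocess.check_output(
    [sys.executable, "-c",
     "import sys; sys.path.insert(0, %r); import demo_mod" % folder],
    text=True,
)
```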
<p>Another "issue" to mention with my <code>datasets.py</code> code in particular is that calling <code>click_and_crop</code> from within the server didn't work because that function required arguments. I alleviated this by creating a <code>main()</code> method and calling that instead. The full solution is as follows:</p>
<p>datasets.py</p>
<pre><code>import cv2
import yaml
import numpy as np
img = cv2.imread('test1.jpg')
image = cv2.resize(img, None, fx=0.6, fy=0.6)
file_path = 'parking_spots.yml'
refPt = []
data = []
def yaml_loader(file_path):
    ...  # contents of function unchanged

def yaml_dump(file_path, data):
    ...  # contents of function unchanged

def click_and_crop(event, x, y, flags, param):
    ...  # contents of function unchanged
def main():
cropping = False
clone = image.copy()
cv2.namedWindow("Click to mark points")
cv2.imshow("Click to mark points", image)
cv2.setMouseCallback("Click to mark points", click_and_crop)
while True:
cv2.imshow("Click to mark points", image)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# data list into yaml file
if data != []:
yaml_dump(file_path, data)
cv2.destroyAllWindows()
hi = "done"
return hi
if __name__ == "__main__":
# stuff only to run when not called via 'import' here
main()
</code></pre>
<p>routes.py</p>
<pre><code>from app import app, db
from flask import Flask, request, render_template, session
import datasets
app.secret_key = app.config['SECRET_KEY']
@app.route('/')
@app.route('/index')
def index():
    ...  # contents of index() unchanged
@app.route('/info/<lot_id>')
def info(lot_id):
    ...  # contents of info() unchanged
@app.route('/test')
def test():
return datasets.main()
if __name__ == '__main__':
app.run(host='0.0.0.0', port='8000', debug=True)
</code></pre>
|
python|flask|import
| 0 |
1,908,187 | 57,522,806 |
constants in Pytorch Linear Module Class Definition
|
<p>What is <code>__constants__</code> in pytorch <code>class Linear(Module):</code> defined in <a href="https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html" rel="noreferrer">https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html</a>? </p>
<p>What is its functionality and why is it used?</p>
<p>I have been searching around, but did not find any documentation. Please note that this does not mean the <code>__constants__</code> in torch script.</p>
|
<p>The <code>__constants__</code> you're talking about is, in fact, the one related to TorchScript. You can confirm it by using <code>git blame</code> <em>(when it was added and by whom)</em> on GitHub. For example, for <a href="https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/linear.py" rel="nofollow noreferrer"><code>torch/nn/modules/linear.py</code></a>, check its <a href="https://github.com/pytorch/pytorch/blame/master/torch/nn/modules/linear.py" rel="nofollow noreferrer">git blame</a>.</p>
<blockquote>
<p><a href="https://pytorch.org/docs/stable/jit.html#python-defined-constants" rel="nofollow noreferrer">TorchScript</a> also provides a way to use constants that are defined in Python. These can be used to hard-code hyper-parameters into the function, or to define universal constants.</p>
<p>-- Attributes of a ScriptModule can be marked constant by listing them as a member of the <strong>constants</strong> property of the class:</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>class Foo(torch.jit.ScriptModule):
__constants__ = ['a']
def __init__(self):
super(Foo, self).__init__(False)
self.a = 1 + 4
@torch.jit.script_method
def forward(self, input):
return self.a + input
</code></pre>
|
pytorch
| 7 |
1,908,188 | 57,554,065 |
Iterate 2D numpy array using for and while loops
|
<h1>The objective of the code is to do the following:</h1>
<ol>
<li><p>Take an integer user input</p></li>
<li><p>Create a numpy array of 1s of that many number of rows and columns</p></li>
<li><p>Create a 1D array which using arange function which has the number of elements equal to the size of the array.</p></li>
<li><p>Multiply each of the elements of the 2D array with the elements of 1D array sequentially.</p></li>
<li><p>Print the final array.</p></li>
</ol>
<p>I tried using iterating over different positions of the matrix using for and while loops.</p>
<pre><code>size = int(input("Enter the matrix size:"))
one_matrix = np.ones((size, size), dtype=int)
y = np.size(one_matrix)
range_matrix = np.arange(1, y + 1)
i = 0
for i in range(size):
j = 0
while j > 2:
one_matrix[i][j] = range_matrix[i + j]
j += 1
i += 1
</code></pre>
<h2>I am getting output as:</h2>
<p>All 1s instead of 1,2,3,....9</p>
|
<p>The while condition is never satisfied as </p>
<blockquote>
<p>j = 0</p>
</blockquote>
<p>and after that </p>
<blockquote>
<p>while j > 2</p>
</blockquote>
<p>You can consider this solution:</p>
<blockquote>
<pre><code>size = int(input("Enter the matrix size:"))
one_matrix = np.ones((size, size), dtype=int)
y = np.size(one_matrix)
range_matrix = np.arange(1, y + 1)
for i in range(y):
r = i // size
c = i % size
one_matrix[r][c] = range_matrix[i]
</code></pre>
</blockquote>
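<p>If the goal is just the filled matrix, numpy can also produce it without any explicit loops, by reshaping the same <code>arange</code> range:</p>

```python
import numpy as np

size = 3
# arange builds 1..size*size, reshape folds it into a size x size matrix.
matrix = np.arange(1, size * size + 1).reshape(size, size)
```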
|
python|numpy
| 0 |
1,908,189 | 42,536,556 |
Running Alexnet using MxNet
|
<p>I am trying to play with the alexnet code in the /mxnet/example/image-classification/symbols directory using MxNet Framework. I am not an expert in AI. Can some explain explain how to run it using GPUs? I have tried the following for single GPU:</p>
<p>python alexnet.py --network resnet --num-layers 110 --batch-size 128 --gpus 0</p>
<p>It didn't do anything. I have HPC background. I want to test the scalability of this framework per node and across the nodes ( distributed ). Any help would be appreciated.</p>
<p>Thanks,</p>
|
<p>the alexnet.py (along with the other Python files in examples/image-classification/symbols folder) only returns symbols that represent the network.</p>
<p>First download and unarchive your dataset:</p>
<pre><code>/mxnet/example/image-classification/data# wget http://www.image-net.org/image/whatever-zip-or-tar-file
/mxnet/example/image-classification/data# unzip whatever-zip-or-tar-file
</code></pre>
<p>Convert data format to RecordIO:</p>
<pre><code>/mxnet/example/image-classification/data# python ../../../tools/im2rec.py --list True --recursive True --train-ratio 0.95 mydata tiny-imagenet-200
/mxnet/example/image-classification/data# python ../../../tools/im2rec.py --num-thread 16 mydata tiny-imagenet-200
</code></pre>
<p>Use train_imagenet.py script to train on alexnet (you may switch to any of the other symbols if you wish):</p>
<pre><code>/mxnet/example/image-classification/data# cd ..
/mxnet/example/image-classification# python train_imagenet.py --network alexnet --data-train /mxnet/example/image-classification/data/mydata_train.rec --data-val /mxnet/example/image-classification/data/mydata_val.rec --num-layers 110 --batch-size 64 --gpus 0
</code></pre>
<p>Take a look at the <a href="https://github.com/dmlc/mxnet/tree/master/example/image-classification" rel="nofollow noreferrer">README</a> for more details.</p>
|
python|mxnet
| 2 |
1,908,190 | 42,240,971 |
How can I get exit code of a twisted spawn process which may be terminated unexpected?
|
<p>I'm using twisted to spawn a local process, which may be terminated in some condition. </p>
<p>I have a custom <code>twisted.internet.protocol.ProcessProtocol</code> class for the reactor. If the local process abruptly terminated, I can't get the return value in <code>processEnded</code>. The <code>exitCode</code> is set to <code>None</code>. </p>
<p>A mcv example is like this:</p>
<pre><code>from twisted.internet import error,protocol,reactor
class MyPP(protocol.ProcessProtocol):
def processEnded(self, reason):
if reason.check(error.ProcessTerminated):
err_info = "wrong termination: %s; exitCode: %s; signal: %s" % \
(reason, reason.value.exitCode, reason.value.signal)
print(err_info)
else:
print("processEnded, status %d" % (reason.value.exitCode,))
print("quitting")
reactor.stop()
pp = MyPP()
reactor.spawnProcess(pp, "throw_exception", ["throw_exception"], {})
reactor.run()
</code></pre>
<p>And the <code>throw_exception</code> executable could be compiled from:</p>
<pre><code>#include <stdexcept>

int main() {
    throw std::runtime_error("some exception");
    return 0;
}
</code></pre>
<p>Execute the python example will print </p>
<blockquote>
<p>wrong termination: [Failure instance: Traceback (failure with no
frames): : A process
has ended with a probable error condition: process ended by signal 6.
]; <strong>exitCode: None</strong>; signal: 6</p>
</blockquote>
<p>The C++ example will have a return value of 134 if run in a shell, which means <code>SIGABRT</code>(6) sent. (I've also tested sending <code>SIGINT</code> to terminate and still getting no exit code.) </p>
<p>How can I get it in the <code>ProcessProtocal</code> instance? Or it is impossible?</p>
|
<p>In your example, 134 is the "wait status" (or, in the man pages, the "wstatus"). It encodes a few pieces of information about a state transition undergone by the process being <code>wait()</code>ed on.</p>
<p>The possible transitions are:</p>
<ul>
<li>exited with a code (ie, called <code>exit(2)</code>)</li>
<li>was killed by a signal</li>
<li>whether a core dump was produced</li>
<li>was stopped (eg by <code>SIGSTOP</code>)</li>
<li>was un-stopped (eg with <code>SIGCONT</code>)</li>
</ul>
<p>POSIX provides macros for extracting the details from the wait status: <code>WIFEXITED</code>, <code>WIFSIGNALED</code>, <code>WTERMSIG</code>, <code>WEXITSTATUS</code>, etc.</p>
<p>Python exposes these through the <code>os</code> module: <code>os.WIFEXITED</code>, etc.</p>
<p><code>os.WIFEXITED(134)</code> evaluates to <code>False</code>. <code>os.WIFSIGNALED(134)</code> evaluates to true and <code>os.WTERMSIG(134)</code> evaluates to <code>6</code>.</p>
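<p>On a POSIX system (these macros exist only on Unix-like platforms), those values can be checked directly against the wait status from the example:</p>

```python
import os

status = 134  # the wait status from the example: signal 6 plus the core-dump bit

exited = os.WIFEXITED(status)      # not a normal exit
signaled = os.WIFSIGNALED(status)  # killed by a signal
signum = os.WTERMSIG(status)       # the signal number (SIGABRT is 6)
```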
<p><code>ProcessDone</code> and <code>ProcessTerminated</code> use these macros to extract the information and present it in a slightly more useful form - the <code>exitCode</code> and <code>signal</code> attributes.</p>
<p>POSIX doesn't provide a way to go the <em>other</em> way, though. There's no standard API for constructing a "wait status" representing, say, "the process was killed by signal 6".</p>
<p>Fortunately, partially because there's no way to go backwards, Twisted preserves the original wait status on the exception as the <code>status</code> attribute. If you check that attribute on the exception wrapped in the <code>Failure</code> passed to your <code>ProcessProtocol</code>, you should find the wait status you're looking for.</p>
|
python|twisted|twisted.internet
| 1 |
1,908,191 | 54,129,452 |
Using opensubtitles api Node.js/Python wrapper behind a proxy
|
<p>I'm trying to use <a href="https://github.com/vankasteelj/opensubtitles-api" rel="nofollow noreferrer">opensubtitles-api</a> node.js wrapper of opensubtitles api behind a proxy. Unfortunately there is no proxy option available. The library in turn uses <a href="https://github.com/baalexander/node-xmlrpc" rel="nofollow noreferrer">node-xmlrpc</a> to make RPC calls. But the underlying <code>node-xmlrpc</code> library also doesn't support proxy tunneling. My project also benefits from some <code>python</code> libraries and code. But <a href="https://github.com/agonzalezro/python-opensubtitles" rel="nofollow noreferrer">python</a> wrapper also seems not to handle proxies. What are my options?</p>
|
<p>Since <code>node-xmlrpc</code> is using <a href="https://nodejs.org/api/http.html" rel="nofollow noreferrer">http</a>/<a href="https://nodejs.org/api/https.html" rel="nofollow noreferrer">https</a> you can specify a proxy like this.</p>
<pre><code>const xmlrpc = require('xmlrpc');
const options = {
host: "proxy_url",
port: 8080, // proxy port
path: "http://opensubtitles_url",
headers: {
Host: "opensubtitles_domain",
"Proxy-Authorization": "Basic bXl1c2VyOm15cGFzc3dvcmQ=" // if needed
}
};
const client = xmlrpc.createClient(options);
</code></pre>
|
python|node.js|proxy|xml-rpc|subtitle
| 1 |
1,908,192 | 53,971,227 |
Micropython: How to free up all RAM memory on startup
|
<p>I'm using a library that doesn't free up all memory when the program is stopped. As a result I get an ENOMEM exception when restarting the program. Is there a way to free all RAM memory upon program restart?</p>
<p>For now I'm using hard-reset as a work-around, but I'd like to be able to just stop and restart the program and have it clean up the RAM at startup. Something like: <code>clean_ram()</code>. <code>clean_ram()</code> would force the garbage collector to free all allocated memory.</p>
|
<p>The best way is to restart the MicroPython interpreter itself on the C level. Exit the Python program and do the whole interpreter setup again.</p>
<p>On Python level, you can manually "unimport" all modules and then call GC:</p>
<pre class="lang-py prettyprint-override"><code># main.py
# do not use any global imports here, except builtins
import gc, sys

while True:
    import program
    del program
    for key in list(sys.modules):
        del sys.modules[key]
    gc.collect()
</code></pre>
<p>then in <code>program.py</code> do whatever you wanted to do in <code>main</code>.</p>
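<p>The same snapshot-before-delete pattern works in regular CPython too; mutating <code>sys.modules</code> while iterating it directly raises a <code>RuntimeError</code>. Here <code>json</code> stands in for the modules to unload:</p>

```python
import gc
import sys

def unimport(prefix):
    # Snapshot the matching keys first, then delete and collect.
    for name in [n for n in sys.modules if n.startswith(prefix)]:
        del sys.modules[name]
    gc.collect()

import json
unimport("json")  # "json" and its submodules are gone from the cache now
```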
|
esp8266|micropython
| 0 |
1,908,193 | 53,855,530 |
What are these lists in sklearn tree visualisation
|
<p>I'm using sklearn.tree.export_graphviz to visualize a decision tree. </p>
<p><a href="https://scikit-learn.org/stable/modules/generated/sklearn.tree.export_graphviz.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.tree.export_graphviz.html</a></p>
<p>The nodes all have these lists of lists in them and I can't for the life of me figure out what they are or how to get rid of them. First I figured they must be samples. But all the lists are the same size and samples can't be represented as length-2 lists. Then I thought they would be either a representation of the class names, or a representation of impurity, but I've disabled both to no effect. I've also disabled the ID, labels and impurity. It's a multi-class multi-label text classification.</p>
<p>Here's the tree code:</p>
<pre><code>def _create_classifier():
decision_tree_classifier = DecisionTreeClassifier(
criterion=CRITERION, # Gini
splitter=SPLITTER, # best
min_samples_split=MIN_SAMPLES_SPLIT, # 4
#max_features=MAX_FEATURES, # 50%
max_depth=MAX_DEPTH, # 68
presort=PRESORT # True
)
return decision_tree_classifier
</code></pre>
<p>Here's the train and export. Notice everything set to False:</p>
<pre><code>classifier.fit(X_train, y_train)
from sklearn.tree import export_graphviz
import os
path = 'dtree.dot'
with open(path, 'w') as dotfile:
export_graphviz(classifier, out_file = dotfile, feature_names=all_features, filled=True, rounded=True, label=False, class_names=False, node_ids=False, impurity=False, proportion=True)
print("EXPORTED")
os.system('dot -Tpng dtree.dot -o tree.png')
</code></pre>
<p>And here be my tree:</p>
<p><a href="https://i.stack.imgur.com/q8N4C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q8N4C.png" alt="enter image description here"></a></p>
|
<p>Found it. It is the samples. The representation is the first two components of PCA dimensionality reduction.</p>
<p><a href="https://scikit-learn.org/stable/auto_examples/plot_multilabel.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/auto_examples/plot_multilabel.html</a></p>
|
python|scikit-learn|graphviz|decision-tree
| 0 |
1,908,194 | 58,546,313 |
How to locate Instagram Follow button with selenium (Python)
|
<p>I'm trying to locate the 'follow' button element on an instagram page (using selenium).</p>
<p>I've found the main 'follow' button of a user's page (<a href="https://www.instagram.com/USERNAME/" rel="nofollow noreferrer">https://www.instagram.com/USERNAME/</a>) with the following code:</p>
<pre><code>follow_button = self.driver.find_element_by_css_selector('button')
</code></pre>
<p>Although after clicking the above element ^, now I'm trying to locate 'follow' buttons visible when you view a user's followers.
<a href="https://i.stack.imgur.com/iYCZ1.jpg" rel="nofollow noreferrer">Click here to see which buttons I'm referring to.</a></p>
<p>I've tried the following code but it doesn't work:</p>
<pre><code>acc_follow_buttons = self.driver.find_elements_by_css_selector('button')
for acc in acc_follow_buttons[:15]:
acc.click()
time.sleep(1)
</code></pre>
<p>I've also tried searching with Xpath, with no luck.</p>
<p>Could anyone with experience in Selenium help me with the code to locate the follow buttons on this page.</p>
|
<p>You have to search for the button by button text:</p>
<pre><code>Follow_Button = driver.find_element_by_xpath("//*[text()='Follow']")
</code></pre>
|
python|selenium|web-scraping|instagram
| 4 |
1,908,195 | 58,589,589 |
Dynamic module import doesn't work in python - why not?
|
<p>I'm trying to dynamically import a python script <code>foo.py</code> into another executable script, which is in a deeply nested folder. I'm using </p>
<pre><code>import sys
sys.path.insert(0, '../../../../.')
from foo import Bar
</code></pre>
<p>this works, and I can use <code>Bar</code> happily.</p>
<p>I would like to make the script dynamically determine the folder depth e.g.</p>
<pre><code>import os, sys
root_path = os.path.relpath(os.popen("git rev-parse --show-toplevel").read()).replace("../reponame", ".")
print(root_path) # prints '../../../../.'
sys.path.insert(0, root_path)
from foo import Bar
</code></pre>
<p>However this doesn't work, the script complains it can't find Bar when it is run.</p>
<p>Why is this?</p>
|
<p>If you debug you see <code>root_path</code> is actually <code>'../../../../.\n'</code>. Remove the <code>\n</code></p>
<pre><code>root_path.strip()
</code></pre>
|
python|python-import
| 2 |
1,908,196 | 45,479,155 |
SQLAlchemy with Postgres: Property returning JSONElement instead of result
|
<p>Goal: Object has two attributes: First attribute points to a Python list backed by a JSON array. Second attribute points to a specific index in that first attribute's list.</p>
<p>Problem: Second attribute listed above currently doesn't work.</p>
<p>Background: SQLAlchemy attribute with a <a href="http://docs.sqlalchemy.org/en/latest/core/custom_types.html#sqlalchemy.types.TypeDecorator" rel="nofollow noreferrer"><code>TypeDecorator</code></a> derived from <a href="http://docs.sqlalchemy.org/en/latest/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSONB" rel="nofollow noreferrer"><code>JSONB</code></a> works as expected. When trying to index the attribute (extract the second value in the array, for instance), I get a <a href="http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSONElement" rel="nofollow noreferrer"><code>JSONElement</code></a> (derived from <a href="http://docs.sqlalchemy.org/en/rel_0_9/core/sqlelement.html#sqlalchemy.sql.expression.BinaryExpression" rel="nofollow noreferrer"><code>BinaryExpression</code></a>) instead of an actual result.</p>
<p>Attributes in my class are defined as follows:</p>
<pre><code>class MyClass...
values_to_hold = Column(MyArr)
MyClass.second_val_in_arr = MyClass.values_to_hold[1]
</code></pre>
<p><code>second_val_in_arr</code> returns a <code>JSONElement</code> instead of the result as expected. Explicitly adding <a href="http://docs.sqlalchemy.org/en/rel_0_9/dialects/postgresql.html#sqlalchemy.dialects.postgresql.JSONElement.astext" rel="nofollow noreferrer"><code>astext</code></a> (<code>MyClass.values_to_hold[1].astext</code>) does not help either.</p>
<p>If I set:</p>
<pre><code>MyClass.second_val_in_arr = MyClass.values_to_hold
</code></pre>
<p>then <code>second_val_in_arr</code> returns the actual array as expected. But if I try to perform an index operation on the array (as above), then it suddenly returns a <code>JSONElement</code>.</p>
<p>Additional info:</p>
<p><code>MyArr</code> is a <code>TypeDecorator</code> as below:</p>
<pre><code>class MyArr(TypeDecorator):
impl = JSONB
def coerce_compared_value(self, op, value):
return self.impl.coerce_compared_value(op, value)
def process_result_value(self, value, dialect):
if value:
return list(value)
else:
return list()
</code></pre>
<p>(Note that <code>coerce_compared_value</code> is explicitly overridden because of the special logic required for JSON as per the <a href="http://docs.sqlalchemy.org/en/latest/core/custom_types.html#sqlalchemy.types.TypeDecorator" rel="nofollow noreferrer">docs</a>).</p>
|
<p>The assignment</p>
<pre><code>MyClass.second_val_in_arr = MyClass.values_to_hold[1]
</code></pre>
<p>will just add the <code>JSONElement</code> as a regular attribute to the class. SQLAlchemy will not treat it in any special way, so instances of that class will look the attribute up from the class, resulting in the original <code>JSONElement</code>. Instead define a <a href="http://docs.sqlalchemy.org/en/latest/orm/extensions/hybrid.html" rel="nofollow noreferrer">hybrid attribute</a>:</p>
<pre><code>from sqlalchemy.ext.hybrid import hybrid_property
class MyClass(...):
values_to_hold = Column(MyArr)
@hybrid_property
def second_val_in_arr(self):
return self.values_to_hold[1]
</code></pre>
<p>The hybrid will act as <code>JSONElement</code>/<code>BinaryExpression</code> in query context, when accessed through the class. It will return the 2nd item in <code>values_to_hold</code> when accessed on an instance of the class.</p>
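<p>A minimal sketch of that hybrid on a mapped class (the class and table names here are illustrative; instance access works without any database connection):</p>

```python
from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Thing(Base):
    __tablename__ = "thing"
    id = Column(Integer, primary_key=True)
    values_to_hold = Column(JSONB)

    @hybrid_property
    def second_val_in_arr(self):
        return self.values_to_hold[1]

t = Thing(values_to_hold=["a", "b", "c"])
# On an instance this is plain Python indexing; accessed through the class,
# Thing.second_val_in_arr is the SQL JSON index expression, usable in queries.
```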
|
python|postgresql|sqlalchemy
| 1 |
1,908,197 | 45,389,049 |
Error code while trying to change directory
|
<p>I tried to make a simple program that tells me information about an image. But when I try to change the image folder with os.chdir():</p>
<pre><code>from PIL import Image
import os
os.chdir('C:\Users\Yonatan\PycharmProjects\Python Programs\Color')
</code></pre>
<p>I get this error:
<code>SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape</code></p>
<p>Please help.
~Yonatan Cohen</p>
|
<p>Prefix the string with <code>r</code> to make it a raw string:</p>
<pre><code>r'C:\Users\kallz\Desktop\alice.txt'
</code></pre>
<p>That way, everything in the string is interpreted as a literal character, and you don't have to escape every backslash. Without the prefix, Python treats <code>\U</code> in <code>C:\Users</code> as the start of a Unicode escape sequence, which is exactly what the error message is complaining about.</p>
<p>In your code:</p>
<pre><code>from PIL import Image
import os
os.chdir(r'C:\Users\Yonatan\PycharmProjects\Python Programs\Color')
</code></pre>
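<p>For completeness, a raw string, manually escaped backslashes, and forward slashes are all equivalent ways to spell the same Windows path (the directory is the one from the question, so <code>os.chdir</code> is only shown commented out):</p>
<pre><code>raw_path = r'C:\Users\Yonatan\PycharmProjects\Python Programs\Color'     # raw string
escaped = 'C:\\Users\\Yonatan\\PycharmProjects\\Python Programs\\Color'  # escaped backslashes
forward = 'C:/Users/Yonatan/PycharmProjects/Python Programs/Color'       # forward slashes

# All three spell the same path:
print(raw_path == escaped)                     # True
print(raw_path.replace('\\', '/') == forward)  # True

# import os
# os.chdir(raw_path)  # works once the directory exists
</code></pre>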
|
python|windows|operating-system
| 0 |
1,908,198 | 14,852,948 |
Trying to figure out mixed results of call to win32com.client.Dispatch
|
<p>I saw this post <a href="https://stackoverflow.com/a/5169864/2065006">https://stackoverflow.com/a/5169864/2065006</a></p>
<p>So I thought I would experiment. Can someone with a little more experience explain these results?</p>
<pre><code>>>> import win32com.client
>>> shellobject = win32com.client.Dispatch("Wscript.Shell")
>>> print (shellobject.SpecialFolders("ProgramFiles"))
>>> print (shellobject.SpecialFolders("Common AppData"))
>>> print (shellobject.SpecialFolders("AppData"))
F:\Documents and Settings\Randy1\Application Data
>>> print (shellobject.SpecialFolders("My Music"))
>>> print (shellobject.SpecialFolders("MyMusic"))
>>> print (shellobject.SpecialFolders("AppData"))
F:\Documents and Settings\Randy1\Application Data
</code></pre>
|
<p>According to MSDN's <a href="http://msdn.microsoft.com/en-us/library/0ea7b5xe.aspx" rel="nofollow">SpecialFolders property</a> documentation, the following special folders are available:</p>
<ul>
<li>AllUsersDesktop</li>
<li>AllUsersStartMenu</li>
<li>AllUsersPrograms</li>
<li>AllUsersStartup</li>
<li>Desktop</li>
<li>Favorites</li>
<li>Fonts</li>
<li>MyDocuments</li>
<li>NetHood</li>
<li>PrintHood</li>
<li>Programs</li>
<li>Recent</li>
<li>SendTo</li>
<li>StartMenu</li>
<li>Startup</li>
<li>Templates</li>
</ul>
<p>Though it seems the above list is incomplete (e.g. <code>AppData</code> is also available), we can still conclude that some of the special folders are not available.</p>
<p>We can experiment with the <code>WshShell</code> object in <em>Windows Script Host</em>, which is more reliable than <code>win32com</code>.</p>
<pre><code>var shell = new ActiveXObject("WScript.Shell");
WScript.Echo(shell.SpecialFolders("ProgramFiles"));
WScript.Echo(shell.SpecialFolders("AppData"));
</code></pre>
<p><code>shell.SpecialFolders("ProgramFiles")</code> is also an empty string.</p>
|
python|python-3.x
| 1 |
1,908,199 | 68,638,167 |
Run detectron2 training on multiple GPUs
|
<p>I have a problem to run modified train_net.py script on multiple GPUs.</p>
<h2>Instructions To Reproduce the Issue:</h2>
<p>I'm using <a href="https://public.roboflow.com/object-detection/bccd" rel="nofollow noreferrer">this dataset</a> as an experiment to test how to run detectron2 training on multiple GPUs with Slurm.</p>
<ol>
<li>Full runnable code or full changes you made (tools/train_net.py modified) :</li>
</ol>
<pre><code>#!/usr/bin/env python
# Copyright (c) Facebook, Inc. and its affiliates.
"""
A main training script.
This scripts reads a given config file and runs the training or evaluation.
It is an entry point that is made to train standard models in detectron2.
In order to let one script support training of many models,
this script contains logic that are specific to these built-in models and therefore
may not be suitable for your own project.
For example, your research project perhaps only needs a single "evaluator".
Therefore, we recommend you to use detectron2 as an library and take
this file as an example of how to use the library.
You may want to write your own script with your datasets and other customizations.
"""
import logging
import os
from collections import OrderedDict
import torch
import detectron2.utils.comm as comm
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog
from detectron2.engine import DefaultPredictor, DefaultTrainer, default_argument_parser, default_setup, hooks, launch
from detectron2.evaluation import (
CityscapesInstanceEvaluator,
CityscapesSemSegEvaluator,
COCOEvaluator,
COCOPanopticEvaluator,
DatasetEvaluators,
LVISEvaluator,
PascalVOCDetectionEvaluator,
SemSegEvaluator,
verify_results,
)
from detectron2.modeling import GeneralizedRCNNWithTTA
from detectron2 import model_zoo
from detectron2.data.datasets import register_coco_instances
from detectron2.utils.visualizer import Visualizer
import glob
import cv2
# Dodato za SLURM
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
cfg = ""
trainer = ""
class Trainer(DefaultTrainer):
"""
We use the "DefaultTrainer" which contains pre-defined default logic for
standard training workflow. They may not work for you, especially if you
are working on a new research project. In that case you can write your
own training loop. You can use "tools/plain_train_net.py" as an example.
"""
@classmethod
def build_evaluator(cls, cfg, dataset_name, output_folder=None):
if output_folder is None:
os.makedirs("coco_eval", exist_ok=True)
output_folder = "coco_eval"
return COCOEvaluator(dataset_name, cfg, False, output_folder)
@classmethod
def test_with_TTA(cls, cfg, model):
logger = logging.getLogger("detectron2.trainer")
# In the end of training, run an evaluation with TTA
# Only support some R-CNN models.
logger.info("Running inference with test-time augmentation ...")
model = GeneralizedRCNNWithTTA(cfg, model)
evaluators = [
cls.build_evaluator(
cfg, name, output_folder=os.path.join(cfg.OUTPUT_DIR, "inference_TTA")
)
for name in cfg.DATASETS.TEST
]
res = cls.test(cfg, model, evaluators)
res = OrderedDict({k + "_TTA": v for k, v in res.items()})
return res
def setup():
global cfg
"""
Create configs and perform basic setups.
"""
print("START SETUP")
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("my_dataset_train",)
cfg.DATASETS.TEST = ("my_dataset_val",)
cfg.DATALOADER.NUM_WORKERS = 4
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml") # Let training initialize from model zoo
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.001
cfg.SOLVER.WARMUP_ITERS = 50
cfg.SOLVER.MAX_ITER = 500 #adjust up if val mAP is still rising, adjust down if overfit
cfg.SOLVER.STEPS = (50, 450)
cfg.SOLVER.GAMMA = 0.05
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 32
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4 #your number of classes + 1
cfg.TEST.EVAL_PERIOD = 500
# cfg.merge_from_list(args.opts)
# cfg.freeze()
default_setup(cfg, args)
print("END SETUP")
return cfg
def main():
global cfg, trainer
cfg = setup()
if args.eval_only:
print("EVAL_ONLY")
model = Trainer.build_model(cfg)
DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load(cfg.MODEL.WEIGHTS, resume=args.resume)
res = Trainer.test(cfg, model)
if cfg.TEST.AUG.ENABLED:
res.update(Trainer.test_with_TTA(cfg, model))
if comm.is_main_process():
verify_results(cfg, res)
return res
print("BEFORE TRAINER")
trainer = Trainer(cfg)
trainer.resume_or_load(resume=args.resume)
if cfg.TEST.AUG.ENABLED:
print("TEST AUG ENABLED")
trainer.register_hooks([hooks.EvalHook(0, lambda: trainer.test_with_TTA(cfg, trainer.model))])
print("BEFORE MAIN END")
return trainer.train()
if __name__ == "__main__":
from datetime import datetime
args = default_argument_parser().parse_args()
print("Command Line Args:", args)
register_coco_instances("my_dataset_train", {}, "./data/train/_annotations.coco.json", "./data/train")
register_coco_instances("my_dataset_val", {}, "./data/valid/_annotations.coco.json", "./data/valid")
register_coco_instances("my_dataset_test", {}, "./data/test/_annotations.coco.json", "./data/test")
now = datetime.now()
print("before launch =", now)
launch(main, num_gpus_per_machine = args.num_gpus, dist_url = "auto")
now = datetime.now()
print("after launch =", now)
# Evaluate
from detectron2.data import DatasetCatalog, build_detection_test_loader
from detectron2.evaluation import inference_on_dataset
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.85
predictor = DefaultPredictor(cfg)
evaluator = COCOEvaluator("my_dataset_test", cfg, False, output_dir="./output/")
val_loader = build_detection_test_loader(cfg, "my_dataset_test")
inference_on_dataset(trainer.model, val_loader, evaluator)
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.DATASETS.TEST = ("my_dataset_test", )
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.7 # set the testing threshold for this model
predictor = DefaultPredictor(cfg)
test_metadata = MetadataCatalog.get("my_dataset_test")
with open("results.txt", mode="a") as f:
for imageName in glob.glob('data/test/*jpg'):
im = cv2.imread(imageName)
outputs = predictor(im)
f.write(f"Instances:{outputs['instances']}\n")
v = Visualizer(im[:, :, ::-1], metadata=test_metadata)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2.imwrite(f"images/{imageName}",out.get_image()[:, :, ::-1])
</code></pre>
<p>On the other hand, I have this slurm script to run an experiment on 2 GPUs:</p>
<pre><code>#!/bin/bash -l
#SBATCH --account=Account
#SBATCH --partition=gpu # gpu partition
#SBATCH --nodes=1 # 1 node, 4 GPUs per node
#SBATCH --time=24:00:00
#SBATCH --job-name=detectron2_demo4 # job name
module load Python/3.9.5-GCCcore-10.3.0
module load CUDA/11.1.1-GCC-10.2.0
cd /experiment_path
srun python main.py --num-gpus 2
</code></pre>
<p>When I ran this script I faced an error (cat slurm-xxx.out), and no error file:</p>
<pre><code>The following have been reloaded with a version change:
1) GCCcore/10.3.0 => GCCcore/10.2.0
2) binutils/2.36.1-GCCcore-10.3.0 => binutils/2.35-GCCcore-10.2.0
3) zlib/1.2.11-GCCcore-10.3.0 => zlib/1.2.11-GCCcore-10.2.0
Command Line Args: Namespace(config_file='', resume=False, eval_only=False, num_gpus=2, num_machines=1, machine_rank=0, dist_url='tcp://127.0.0.1:54190', opts=[])
before launch = 2021-08-03 21:26:48.061817
[W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1569] Rank 1 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
</code></pre>
<h2>Expected behavior:</h2>
<p>To run training on 2 GPUs</p>
<h2>Environment:</h2>
<p>Paste the output of the following command:</p>
<pre><code>No CUDA runtime is found, using CUDA_HOME='/usr/local/software/CUDAcore/11.1.1'
--------------------- --------------------------------------------------------------------------------
sys.platform linux
Python 3.9.5 (default, Jul 9 2021, 09:35:24) [GCC 10.3.0]
numpy 1.21.1
detectron2 0.5 @/home/users/aimhigh/detectron2/detectron2
Compiler GCC 10.2
CUDA compiler CUDA 11.1
DETECTRON2_ENV_MODULE <not set>
PyTorch 1.9.0+cu102 @/home/users/aimhigh/.local/lib/python3.9/site-packages/torch
PyTorch debug build False
GPU available No: torch.cuda.is_available() == False
Pillow 8.3.1
torchvision 0.10.0+cu102 @/home/users/aimhigh/.local/lib/python3.9/site-packages/torchvision
fvcore 0.1.5.post20210727
iopath 0.1.9
cv2 4.5.3
--------------------- --------------------------------------------------------------------------------
PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
</code></pre>
<p>Hints (more details):</p>
<p>Github detectron2: <a href="https://github.com/facebookresearch/detectron2/issues/3319" rel="nofollow noreferrer">link</a></p>
<p>Github pyTorch: <a href="https://github.com/pytorch/pytorch/issues/62802" rel="nofollow noreferrer">link</a></p>
<p>Github NCCL: <a href="https://github.com/NVIDIA/nccl/issues/541" rel="nofollow noreferrer">link</a></p>
|
<p>It was a problem with the Slurm script.
I solved it by adding these lines to the script:</p>
<pre class="lang-sh prettyprint-override"><code>export NCCL_SOCKET_IFNAME=bond0
export NCCL_IB_DISABLE=1
export NCCL_P2P_DISABLE=1
</code></pre>
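<p>If you would rather not touch the Slurm script, the same NCCL settings can be applied from Python, as long as they are set before <code>launch()</code> initializes <code>torch.distributed</code>. Note that <code>bond0</code> is the network interface on my cluster; yours may differ (check <code>ip link</code>):</p>
<pre><code>import os

# Equivalent to the shell exports above; must run before NCCL is initialized.
os.environ["NCCL_SOCKET_IFNAME"] = "bond0"  # network interface to use (cluster-specific)
os.environ["NCCL_IB_DISABLE"] = "1"         # disable the InfiniBand transport
os.environ["NCCL_P2P_DISABLE"] = "1"        # disable GPU peer-to-peer transport
</code></pre>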
|
python|deep-learning|pytorch|slurm
| 0 |