Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, lengths 10 to 150) | question (string, lengths 21 to 64.2k) | answer (string, lengths 19 to 59.4k) | tags (string, lengths 5 to 112) | score (int64, -10 to 17.3k) |
---|---|---|---|---|---|---|
1,901,700 | 42,031,528 |
Python requests call not handling authentication as expected
|
<p>I have a list in which I am putting the url and authentication parameters for a requests get call. If I try the call this way I get an authentication error (401), but if I break out the auth by itself explicitly on the call, things work. Why can't I include auth in this manner and expect it to properly "explode" in the call?</p>
<pre><code>parms = []
parms.append(url)
parms.append('auth=(''UID'', ''PWD'')')
response = requests.get(*parms)
</code></pre>
<p>This results in 404 because it is not recognizing the authentication. BUT, if I do this it works. To me these seem the same. The single quotes are only different to get the string to append in the list properly. I thought the first way would result in 2 parameters - url and auth</p>
<pre><code>parms = []
parms.append(url)
response = requests.get(*parms, auth=('UID', 'PWD'))
</code></pre>
|
<p>The first one is equivalent to the following:</p>
<pre><code>requests.get(url, "auth=('UID', 'PWD')")
</code></pre>
<p>When you really want:</p>
<pre><code>requests.get(url, auth=('UID', 'PWD'))
</code></pre>
<p>You must use this instead:</p>
<pre><code>args = []
kwargs = {}
args.append(url)
kwargs['auth'] = ('UID', 'PWD')
requests.get(*args, **kwargs)
</code></pre>
<p>The rule is:</p>
<ul>
<li>when you have <code>function(foo, bar)</code> you are using positional parameters, they go into <code>*args</code>.</li>
<li>when you have <code>function(foo=1, bar=2)</code> you are using named parameters, they go into <code>**kwargs</code>.</li>
<li>you can mix both; positional parameters must be placed before named parameters.</li>
</ul>
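<p>A quick demonstration of the rule (a minimal sketch with a throwaway function <code>f</code>):</p>
<pre><code>def f(*args, **kwargs):
    print(args, kwargs)

f('http://example.com', auth=('UID', 'PWD'))
# ('http://example.com',) {'auth': ('UID', 'PWD')}

f(*['http://example.com'], **{'auth': ('UID', 'PWD')})
# same output: the list unpacks into args, the dict into kwargs
</code></pre>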
|
python|python-requests
| 2 |
1,901,701 | 47,163,820 |
Getting "Queue objects should only be shared between processes through inheritance" but I'm not using a Queue
|
<p>I am trying to use a ProcessPoolExecutor, and I am getting the error "Queue objects should only be shared between processes through inheritance", yet I am not using a Queue (at least not explicitly). I can't find anything that explains what I am doing wrong.</p>
<p>Here is some code that demonstrates the issue (not my actual code):</p>
<pre><code>from concurrent.futures import ProcessPoolExecutor, as_completed
class WhyDoesntThisWork:
def __init__(self):
self.executor = ProcessPoolExecutor(4)
def execute_something(self, starting_letter):
futures = [self.executor.submit(self.something, starting_letter, d) for d in range(4)]
letter = None
for future in as_completed(futures):
letter = future.result()
print(letter)
    def something(self, letter, d):
        # do something pointless for the example
        for x in range(d):
            letter = chr(ord(letter) + 1)
        return letter  # return so future.result() has a value

if __name__ == '__main__':
    WhyDoesntThisWork().execute_something('A')
</code></pre>
<p>El Ruso has pointed out that making something() a staticmethod or classmethod makes the error go away. Unfortunately, my actual code needs to call other methods using self.</p>
|
<p>This can be solved without making the method static.</p>
<p>When using processes, each process runs in an independent memory space. That's unlike using threads, which run under the same process and share its memory space. That's why the error doesn't occur with
<code>ThreadPoolExecutor</code> but does occur with <code>ProcessPoolExecutor</code>.</p>
<p>When a method of a class instance is submitted to a separate sub-process, the multiprocessing machinery pickles the bound method (and with it the instance) so it can be passed to the sub-process as an independent copy, and the result is unpickled and delivered back when the sub-process finishes. Here the instance holds a <code>ProcessPoolExecutor</code>, which is backed by a queue and cannot be pickled, hence the error.</p>
<p>To make it work, just add <code>__getstate__()</code> and <code>__setstate__()</code> methods to the class to tell pickle how to serialize and restore the instance; the unpicklable field is excluded during pickling, as shown by <code>del state['executor']</code>.</p>
<pre><code>import multiprocessing
import time
from concurrent.futures import ProcessPoolExecutor, as_completed
class GuessItWorksNow():
def __init__(self):
self.executor = ProcessPoolExecutor(4)
def __getstate__(self):
state = self.__dict__.copy()
del state['executor']
return state
def __setstate__(self, state):
self.__dict__.update(state)
def something(self, letter, d):
# do something pointless for the example
p = multiprocessing.current_process()
time.sleep(1)
for x in range(d):
letter = chr(ord(letter) + 1)
return (f'[{p.pid}] ({p.name}) ({letter})')
def execute_something(self, starting_letter):
futures = [self.executor.submit(self.something, starting_letter, d) for d in range(10)]
for future in as_completed(futures):
print(future.result())
if __name__ == '__main__':
obj = GuessItWorksNow()
obj.execute_something('A')
</code></pre>
|
python|python-multiprocessing|concurrent.futures
| 7 |
1,901,702 | 47,447,272 |
Does tensorflow have the function similar to pytorch's "masked_fill_"
|
<p>I want to set an INF value in a matrix according to a mask matrix, just like this PyTorch code:</p>
<pre><code>scores.data.masked_fill_(y_mask.data, -float('inf'))
</code></pre>
<p>I tried to use <code>tf.map_fn</code> to implement that, but the performance is too slow. So does TensorFlow have any efficient function to implement that?</p>
|
<p>I used a mathematical workaround instead. It's valid and much faster.</p>
<pre><code>import math

def mask_fill_inf(matrix, mask):
    # mask == 1 marks positions to keep; positions where mask == 0 are filled with -inf
    negmask = 1 - mask
    num = 3.4 * math.pow(10, 38)  # close to the float32 maximum, stands in for infinity
    return (matrix * mask) + (-((negmask * num + num) - num))
</code></pre>
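<p>To mirror the PyTorch call from the question (a sketch, assuming <code>scores</code> and <code>y_mask</code> are float tensors of the same shape, with <code>y_mask</code> equal to 1 at the positions to fill):</p>
<pre><code>masked_scores = mask_fill_inf(scores, 1 - y_mask)
</code></pre>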
<p>Does anyone have a better method?</p>
|
tensorflow|pytorch
| 3 |
1,901,703 | 11,695,467 |
Integer Overflow in Summing Multiples of 3 and 5
|
<p>I am sure this question has been asked a lot, but I have checked other forums and have tried addressing the issue, which doesn't seem to help. I am thinking there is an overflow problem, but I can't remember how to fix it. I took a long break from coding (my fault there), so I am trying some problems to help get me back in the swing of things. So, I am just wondering what is going wrong. When I try <code>n = 1000</code> the answer is wrong, but numbers smaller than that seem to work out right. Since large numbers won't work, I think it's an integer overflow.</p>
<pre><code>def n_number():
n = raw_input("Enter a max number: ")
try:
int(n)
return n
except ValueError:
print 'Value is not an integer'
exit(1)
# 'function that will add multiples of 3 and 5 that are less than the given value, n.'
def sum_multiplies(n):
sum = long(0)
counter3, counter5 = int(1),int(1)
value3 = 3*counter3
value5 = 5*counter5
while True:
# 'sums of multiples of 5\'s less than n'
if value5<int(n):
sum+= value5
counter5+=1
value5 = 5*counter5
# 'sums of multiples of 3\'s less than n'
if value3<int(n):
sum+= value3
counter3+=1
value3 = 3*counter3
else:
break
print "sum: %s" %sum
print "counter3: %s" %counter3
print "counter5: %s" %counter5
def main():
'max number is in n'
n = n_number()
sum_multiplies(n)
if __name__ == "__main__":
main()
</code></pre>
|
<p>The problem is that you're counting numbers which are multiples of both 3 and 5 (like 15) twice.</p>
<p>One way to solve it would be to guard the line that adds the multiples of 3 (a bare <code>continue</code> would stall the loop, since the counter would never advance):</p>
<pre><code>if value3 % 5 != 0:  # multiples of both 3 and 5 were already counted in the 5s branch
    sum += value3
</code></pre>
<p>to skip the double counting.</p>
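<p>As a cross-check by inclusion-exclusion for <code>n = 1000</code>: the multiples of 3 below 1000 sum to 166833, the multiples of 5 below 1000 sum to 99500, and the multiples of 15 below 1000 sum to 33165, so the correct total is 166833 + 99500 - 33165 = 233168.</p>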
|
python
| 3 |
1,901,704 | 11,727,701 |
ERROR: invalid mode ('r') - Cannot conduct Cognate Analysis using LingPy in Python 2.7
|
<p>I'm using the LingPy 1.0.1 library for Python 2.7, attempting to conduct Cognate Analysis on a single tab-delimited list of Spanish-English words I created. The list is named SE.lxs and looks like this:</p>
<pre><code>ID Words Spanish
1 dog pero
2 cat gato
3 water agua
4 table meza
5 hand mano
6 red rojo
7 blue azul
8 green verde
9 person persona
10 girl chica
</code></pre>
<p>Which I believe is the appropriate format as defined here: <a href="http://lingulist.de/lingpy/docu/lingpy.lexstat.LexStat.html" rel="nofollow">http://lingulist.de/lingpy/docu/lingpy.lexstat.LexStat.html</a></p>
<p>However, when I run the commands:</p>
<pre><code>lex = LexStat(get_file('C:\Python27\SE.lxs'))
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
lex = LexStat(get_file('C:\Python27\SE.lxs'))
File "C:\Python27\lib\site-packages\lingpy-1.0.1-py2.7-win32.egg\lingpy\lexstat.py", line 62, in __init__
self._init_lxs(infile)
File "C:\Python27\lib\site-packages\lingpy-1.0.1-py2.7-win32.egg\lingpy\lexstat.py", line 278, in _init_lxs
txt = array(loadtxt(infile),dtype="str")
File "C:\Python27\lib\site-packages\lingpy-1.0.1-py2.7-win32.egg\lingpy\algorithm\misc.py", line 454, in loadtxt
f = open(infile)
IOError: [Errno 22] invalid mode ('r') or filename: 'C:\\Python27\\lib\\site-packages\\lingpy-1.0.1-py2.7-win32.egg\\lingpy\\test/tests/lxs/C:\\Python27\\SE.lxs'
</code></pre>
<p>A picture of the problem can be found here: <a href="http://i.imgur.com/XdLig.png" rel="nofollow">http://i.imgur.com/XdLig.png</a></p>
|
<p>Actually, the get_file (blame my bad documentation on this) is a simple shortcut function which helps me get access to some test modules residing in the test folder. So if you want to run an analysis on some languages, you do not need the get_file wrapper. Just make sure that the lxs-file is in the folder from which you loaded the library. I am not sure about Windows, but on Linux this usually works.</p>
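<p>A minimal sketch of what that looks like (assuming the constructor accepts a plain path, as the traceback's <code>self._init_lxs(infile)</code> call suggests):</p>
<pre><code>lex = LexStat('SE.lxs')                # file in the current working directory
lex = LexStat(r'C:\Python27\SE.lxs')   # or try an absolute path, without get_file()
</code></pre>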
|
python|compiler-errors|linguistics
| 1 |
1,901,705 | 11,735,802 |
web2py doesn't recognize modules in the modules directory
|
<p>I copied a module to the "modules" folder of the application in web2py. When I tried to import the module in any function inside of any of the controllers, I get the following error:</p>
<pre><code><type 'exceptions.ImportError'> No module named module_name
</code></pre>
<p>I get that error irrespective of any module used. But if I copied the same module to the "site-packages" and import it, it works perfectly.</p>
<p>I found out that the "sys.path" doesn't contain the modules folder but contains the "site-packages" folder. </p>
<p>How do I add the modules folder to "sys.path" specific for web2py or are there any other ways to solve this problem?</p>
|
<p>Finally found a solution to my problem.</p>
<p>Seems like in order to import a module from the modules directory using <code>import module_name</code>, the name of the application must be in all lower case.</p>
<p>Previously, the name of my application was <code>projectX</code>, but when I changed it to <code>projectx</code> the <code>import module_name</code> worked like it should.</p>
<p>I also found out that explicitly stating the application name while importing the module works too, no matter what the application name is: <code>import applications.projectX.modules.module_name</code> also worked for me when the name of the application was <code>projectX</code>.</p>
|
python|web2py
| 4 |
1,901,706 | 46,713,024 |
Server not submitting form when posting a request - Python
|
<p>I am having a problem getting a response from a webserver when posting a request. The webserver is NanoDLP and I am trying to write a script that will load a file for 3D printing when I submit the form. I've spent hours reading forums and "posts" that cover the topic and I cannot see what I am doing wrong. Can someone please take a look and see if they can help me? Code is as follows:</p>
<pre><code>import requests
machineAddr = "http://192.168.0.234"
# Get printable files from USB
urlUSBFiles = machineAddr + "/json/usb"
usbFiles = requests.get(urlUSBFiles).json()
print(usbFiles)
fileUploadName = input('What do you want to name your file?')
fileUploadData = {
'USBfile': usbFiles[1],
'Path': fileUploadName,
'ProfileID': '3',
'AutoCenter': '0',
'StopLayers': '',
'LowQualityLayerNumber': '0',
'MaskFile': '',
'MaskEffect': '',
'ImageRotate': '0'
}
print(fileUploadData)
urlAddUSBFiles = machineAddr + "/plate/add-usb"
r = requests.post(urlAddUSBFiles, data=fileUploadData)
print(r)
</code></pre>
<p>Here is the response when the code is run:</p>
<pre><code>['/media/usb0/DriveSleeve.stl', '/media/usb0/TestCube100um.zip']
</code></pre>
<p><code>What do you want to name your file?turbo
{'USBfile': '/media/usb0/TestCube100um.zip', 'Path': 'turbo', 'ProfileID': '3', 'AutoCenter': '0', 'StopLayers': '', 'LowQualityLayerNumber': '0', 'MaskFile': '', 'MaskEffect': '', 'ImageRotate': '0'}
<Response [200]>
Process finished with exit code 0</code></p>
<p>Thanks,</p>
<p>Dylan</p>
|
<p>For completeness' sake, I found my issue. I was not defining the upload data as a dict, so the fix is:</p>
<pre><code>fileUploadData = dict(
USBfile = usbFiles[1],
Path = fileUploadName,
ProfileID = 3,
AutoCenter = 0,
StopLayers = '',
    LowQualityLayerNumber = 0,
MaskFile = '',
MaskEffect = '',
ImageRotate = 0
)
</code></pre>
<p>I hope this helps someone else with similar issues! :)</p>
|
python|post|python-requests
| 1 |
1,901,707 | 68,003,826 |
docker doesn't install package environs
|
<p>I am trying to install the package "environs" in Docker using the command <code>docker-compose exec web pipenv install 'environs [django] == 8.0.0'</code>; however, nothing happens in the terminal and the package is not installed in the container. What is the reason?
docker-compose.yml</p>
<pre><code>version: '3.8'
services:
web:
build: .
command: python /code/manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- 8000:8000
depends_on:
- db
environment:
- "DJANGO_SECRET_KEY=django-insecure-c@p4-@$$@#0deu2p5&-59#-1kv&@(ayfu*b+a+wt(i9j5p7&=p3"
db:
image: postgres:11
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
- "POSTGRES_HOST_AUTH_METHOD=trust"
volumes:
postgres_data:
</code></pre>
<p>dockerfile</p>
<pre><code>FROM python:3.8
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY Pipfile Pipfile.lock /code/
RUN pip install pipenv && pipenv install --system
# Copy project
COPY . /code/
</code></pre>
|
<p>Try specifying the working directory.</p>
<p><code>docker-compose exec --workdir /code web pipenv install 'environs [django] == 8.0.0'</code></p>
|
python|django|docker
| 0 |
1,901,708 | 67,962,018 |
How do I link a title with a url in Python and BeautifulSoup?
|
<p><br><br>So I'm working on a script to download videos for me automatically; however, it would seem I have stumbled upon a problem (insufficient experience).</p>
<p>How do I link a category title with a bunch of URLs?
<br>
<br><strong>Expected output:</strong>
<br>->CATEGORY 1
<br>->https://example.com/part/of/link/x
<br>->https://example.com/part/of/link/x
<br>->https://example.com/part/of/link/x
<br>.....
<br>->CATEGORY 2
<br>->https://example.com/part/of/link/x</p>
<pre><code>for category_title in soup.findAll(class_='section-title'):
title = category_title.get_text().lstrip()
print(title)
for onelink in soup.findAll('a'):
link = onelink.get('href')
print(f'https://example.com{link}')
</code></pre>
<p><strong>What this does is:</strong>
<br>Lists all the titles(10 titles)
<br>Lists all the links(100 links)</p>
<pre><code><div class="section-title">
CATEGORY 1
<li class="section-item next-random">
<a class="item" href="/part/of/link/x">
<div class="title-container">
<div class="btn-primary btn-sm pull-right">
BUTTON
</div>
<span class="random-name">
URL TITLE
</span>
</div>
</a>
</li>
</ul>
</code></pre>
|
<p>Depending on how the HTML structure of your page is made, you could check the "section-title" parent object and then list all the links of that particular section, giving you all the links for each category.</p>
<p><a href="https://www.codegrepper.com/code-examples/python/beautifulsoup+parent" rel="nofollow noreferrer">Here is some help</a></p>
|
python|for-loop|beautifulsoup
| 1 |
1,901,709 | 65,563,193 |
Efficiently slice triangular sparse matrix
|
<p>I have a sparse, triangular matrix (e.g. a distance matrix). In reality this would be a > 1M x 1M distance matrix with high sparsity.</p>
<pre class="lang-py prettyprint-override"><code>from scipy.sparse import csr_matrix
X = csr_matrix([
[1, 2, 3, 3, 1],
[0, 1, 3, 3, 2],
[0, 0, 1, 1, 3],
[0, 0, 0, 1, 3],
[0, 0, 0, 0, 1],
])
</code></pre>
<p>I want to subset this matrix to another triangular distance matrix.
The indexes may be ordered differently and/or duplicated.</p>
<pre class="lang-py prettyprint-override"><code>idx = np.matrix([1,2,4,2])
X2 = X[idx.T, idx]
</code></pre>
<p>The result may no longer be triangular, with some values missing from the upper triangle and some values duplicated in the lower triangle.</p>
<pre class="lang-py prettyprint-override"><code>>>> X2.toarray()
array([[1, 3, 2, 3],
[0, 1, 3, 1],
[0, 0, 1, 0],
[0, 1, 3, 1]])
</code></pre>
<p>How can I get the correct upper triangle matrix as efficiently as possible?
Currently, I mirror the matrix before subsetting, and subset it to the triangle afterwards, but this doesn't feel particularly efficient, as it requires, at least, duplication of all entries.</p>
<pre class="lang-py prettyprint-override"><code># use transpose method, see https://stackoverflow.com/a/58806735/2340703
X = X + X.T - scipy.sparse.diags(X.diagonal())
X2 = X[idx.T, idx]
X2 = scipy.sparse.triu(X2, k=0, format="csr")
</code></pre>
<pre class="lang-py prettyprint-override"><code>>>> X2.toarray()
array([[1., 3., 2., 3.],
[0., 1., 3., 1.],
[0., 0., 1., 3.],
[0., 0., 0., 1.]])
</code></pre>
|
<p>Here's a method that does not involve mirroring the data, and instead does manipulation of sparse indices to arrive at the desired result:</p>
<pre class="lang-python prettyprint-override"><code>import scipy.sparse as sp
X2 = X[idx.T, idx]
# Extract indices and data (this is essentially COO format)
i, j, data = sp.find(X2)
# Generate indices with elements moved to upper triangle
ij = np.vstack([
np.where(i > j, j, i),
np.where(i > j, i, j)
])
# Remove duplicate elements
ij, ind = np.unique(ij, axis=1, return_index=True)
# Re-build the matrix
X2 = sp.coo_matrix((data[ind], ij)).tocsr()
</code></pre>
|
python|numpy|scipy|sparse-matrix
| 2 |
1,901,710 | 72,215,512 |
How do I reorder a long string of concatenated date and timestamps seperated by commas using Python?
|
<p>I have a string type column called 'datetimes' that contains multiple dates with their timestamps, and I'm trying to extract the earliest and last dates (without the timestamps) into new columns called 'earliest_date' and 'last date'.</p>
<p>The problem, however, is that the dates are not in order, so it's not as straightforward as using a str.split() method to get the first and last dates in the string. I need to order them first in ascending order.</p>
<p>Here's an example of an entry for one of the rows: 2022-04-13 04:47:00,2022-04-07 01:58:00,2022-03-31 02:32:00,2022-03-25 11:59:00,2022-04-12 05:07:00,2022-03-29 01:46:00,2022-03-31 05:52:00,</p>
<p>As you can see, the order is randomized. I would like to firstly remove the timestamps which are fortunately in between a whitespace and comma, then order the dates in ascending order, and then finally get the max and min dates into two separate columns.</p>
<p>Can anyone please help me? Thanks in advance :)</p>
<pre><code>df['Campaign Interaction Dates'] = df['Campaign Interaction Dates'].str.replace('/','-')

def normalise(d):
    if len(t := d.split('-')) == 3:
        return d if len(t[0]) == 4 else '-'.join(reversed(t))
    return '9999-99-99'

out = sorted(normalise(t[:10]) for t in str(df[df['Campaign Interaction Dates']]).split(',') if t)

df['out'] = out[1]

print(display(df[df['Number of Campaign Codes registered']==3]))
</code></pre>
|
<p>You can use the following code if you are not sure that the date format will always be YYYY-MM-DD:</p>
<pre><code>import datetime
string= "2022-04-13 04:47:00,2022-04-07 01:58:00,2022-03-31 02:32:00,2022-03-25 11:59:00,2022-04-12 05:07:00,2022-03-29 01:46:00,2022-03-31 05:52:00"
dates_list = [date[:10] for date in string.split(',')]
dates_list.sort(key=lambda x: datetime.datetime.strptime(x, '%Y-%m-%d'))
min_date, max_date = dates_list[0], dates_list[-1]
</code></pre>
<p>You can easily change the date format here.</p>
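<p>Applied per row of a DataFrame column, this could look like the following sketch (the column and output names are taken from the question):</p>
<pre><code>import datetime
import pandas as pd

def min_max_dates(s):
    dates = sorted(
        datetime.datetime.strptime(d[:10], '%Y-%m-%d')
        for d in s.split(',') if d
    )
    return pd.Series([dates[0].date(), dates[-1].date()])

df[['earliest_date', 'last_date']] = df['Campaign Interaction Dates'].apply(min_max_dates)
</code></pre>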
|
python|pandas|dataframe|python-datetime
| 0 |
1,901,711 | 43,462,061 |
pycryptodome: OverflowError: The counter has wrapped around in CTR mode
|
<p>I am having difficulty with AES-CTR encryption using pycryptodome on Python 3. Data can be ~1000 bytes but when it gets long enough it breaks. I do not understand what this error supposedly means or how to get around it. </p>
<pre><code>from os import urandom
from Crypto.Cipher import AES
cipher = AES.new(urandom(16), AES.MODE_CTR, nonce=urandom(15))
cipher.encrypt(urandom(10000))
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-116-a48990362615> in <module>()
3
4 cipher = AES.new(urandom(16), AES.MODE_CTR, nonce=urandom(15))
----> 5 cipher.encrypt(urandom(10000))
6
/usr/local/lib/python3.5/dist-packages/Crypto/Cipher/_mode_ctr.py in encrypt(self, plaintext)
188 if result:
189 if result == 0x60002:
--> 190 raise OverflowError("The counter has wrapped around in"
191 " CTR mode")
192 raise ValueError("Error %X while encrypting in CTR mode" % result)
OverflowError: The counter has wrapped around in CTR mode
</code></pre>
|
<p>I figured it out. The nonce is only 1 byte short of the block size, so counter mode can only produce 256 blocks, which allows encrypting just 4096 bytes. If the nonce is a few bytes shorter, the problem goes away.</p>
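<p>For example (a sketch; with an 8-byte nonce the counter gets 8 bytes, enough for 2**64 blocks):</p>
<pre><code>from os import urandom
from Crypto.Cipher import AES

cipher = AES.new(urandom(16), AES.MODE_CTR, nonce=urandom(8))
ciphertext = cipher.encrypt(urandom(10000))  # no OverflowError now
</code></pre>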
|
python-3.x|encryption|counter|ctr-mode|pycryptodome
| 1 |
1,901,712 | 43,406,968 |
python "tail -f" like function's readline stuck on EOF
|
<p>I have a tail -f like code snippet that I found somewhere on the web. Unfortunately, I found out at some point that it seems to be stuck on EOF even when the EOF is no longer there, because something was appended to the file. It will still return 0. If I seek back, for example 10 bytes, it will read up to the previous EOF position and that's all. I can fix it by closing and reopening the file, but I don't understand the behaviour. Can someone help?</p>
<p>The code:</p>
<pre><code>def tail_f_nonblock(f):
while True:
where = f.tell()
line = f.readline()
if not line:
diff = f.tell()-where
f.seek(where)
# If there was some output, give -1
if diff!=0: return -1
else: return 0
else:
return line
</code></pre>
|
<p>I'm assuming you're on a Linux machine or similar. Did the inode number (use ls -i filename) change when the file was modified? If yes, essentially your old file was deleted, but its contents are still available through the file handle your program is using (i.e. it still points to the old inode). In such a case, reopening is the only possibility.</p>
<p>For more details see <a href="https://stackoverflow.com/questions/2028874/what-happens-to-an-open-file-handler-on-linux-if-the-pointed-file-gets-moved-de">What happens to an open file handler on Linux if the pointed file gets moved, delete </a></p>
|
python|readline|eof|tail
| 0 |
1,901,713 | 43,193,926 |
Variable "result" is not defined
|
<p>I have been working on this bit of code, and every time I run it, it says that result is not defined.</p>
<pre><code>Error: Traceback (most recent call last):
File "/Users/Bubba/Documents/Jamison's School Work/Programming/Python scripts/Ch9Lab2.py", line 24, in <module>
print(str(numberOne) + " " + operation + " " + str(numberTwo) + " = " + str(result))
NameError: name 'result' is not defined
</code></pre>
<p>Original Code:</p>
<pre><code>def performOperation(numberOne, numberTwo):
if operation == "+":
result = numberOne + numberTwo
if operation == "-":
result = numberOne - numberTwo
if operation == "*":
result = numberOne * numberTwo
if operation == "/":
result = numberOne / numberTwo
numberOne = int(input("Enter the first number: "))
numberTwo = int(input("Enter the second number: "))
operation = input("Enter an operator (+ - * /): ")
performOperation(numberOne, numberTwo)
print(str(numberOne) + " " + operation + " " + str(numberTwo) + " = " + str(result))
</code></pre>
|
<p>You'll need to use the <code>return</code> keyword to use the variable result outside the function</p>
<pre><code>def performOperation(numberOne, numberTwo):
...
return result
result = performOperation(numberOne, numberTwo)
</code></pre>
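<p>Putting it together with the question's code (a sketch; each branch sets <code>result</code> and the function returns it at the end):</p>
<pre><code>def performOperation(numberOne, numberTwo):
    if operation == "+":
        result = numberOne + numberTwo
    elif operation == "-":
        result = numberOne - numberTwo
    elif operation == "*":
        result = numberOne * numberTwo
    elif operation == "/":
        result = numberOne / numberTwo
    else:
        raise ValueError("unknown operator: " + operation)
    return result

result = performOperation(numberOne, numberTwo)
print(str(numberOne) + " " + operation + " " + str(numberTwo) + " = " + str(result))
</code></pre>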
|
python
| 1 |
1,901,714 | 36,732,188 |
How to fix an Empty output for STDERR?
|
<p>My program is working well and it's printing the correct STDOUT, but for STDERR I'm getting ''Empty output stream''.</p>
<p>Can anyone fix my code? I'm stuck here.</p>
<p>Input</p>
<pre><code>285 242 2053 260 310 450 10 682
</code></pre>
<p>Output</p>
<pre><code>207229
</code></pre>
<p>My code</p>
<pre><code>def sum_leaves(K, inputs, count=1):
A, B, M, L1, L2, L3, D, R = map(int, inputs)
x = (((A*K)+B) % M)
y = (((A*K)+2*B) % M)
if K < L1 or count == D:
my_list.append(K)
elif L1 <= K < L2:
sum_leaves(x, inputs, count + 1)
elif L2 <= K < L3:
sum_leaves(y, inputs, count + 1)
elif L3 <= K:
sum_leaves(x, inputs, count + 1)
sum_leaves(y, inputs, count + 1)
if count == 1:
return sum(my_list)
def read_input(input_string):
inputs = input_string.split()
A, B, M, L1, L2, L3, D, R = map(int, inputs)
x = (((A*R)+B) % M)
y = (((A*R)+2*B) % M)
if L1 <= R < L2:
return sum_leaves(x, inputs)
elif L2 <= R < L3:
return sum_leaves(y, inputs)
elif L3 <= R:
sum_leaves(x, inputs)
return sum_leaves(y, inputs)
my_list = []
if __name__ == '__main__':
print(read_input(input()))
</code></pre>
|
<p>You aren't sending anything to <code>stderr</code>. <code>print()</code> sends to <code>stdout</code>. Use <code>print("error", file=sys.stderr)</code> (after <code>import sys</code>) to send to <code>stderr</code>.</p>
|
python|stderr
| 2 |
1,901,715 | 48,612,210 |
grab signal from webcam button
|
<p>I have a webcam and it has a button. In Windows, when I push that button it takes an image snapshot. But now I am on Linux and I am using VLC to view video from my USB webcam "/dev/video0". I would like to use Python to get the signal from that button once pushed - I mean just to get this signal (not grab a snapshot automatically).
I tried to google it, but no luck. I don't want to use opencv or gstreamer to get video into a new window, I just need to grab the signal when the button on the webcam is pushed. Any idea how to get this signal please?</p>
|
<p>with <code>python-evdev</code> you can capture events triggered by input devices like mouse, keyboard,..., and also webcams</p>
<blockquote>
<p>evdev is a Linux-only generic protocol that the kernel uses to forward
information and events about input devices to userspace. It's not just
for mice and keyboards but any device that has any sort of axis, key
or button, including things like webcams and remote controls. Each
device is represented as a device node in the form of
/dev/input/event0, with the trailing number increasing as you add more
devices. The node numbers are re-used after you unplug a device, so
don't hardcode the device node into a script. The device nodes are
also only readable by root, thus you need to run any debugging tools
as root too.</p>
</blockquote>
<p><a href="http://who-t.blogspot.de/2016/09/understanding-evdev.html" rel="nofollow noreferrer">http://who-t.blogspot.de/2016/09/understanding-evdev.html</a></p>
<p>the webcam's button also has an event device assigned; you have to find out which node under <code>/dev/input/</code> it is</p>
<p>run the following code from <a href="http://python-evdev.readthedocs.io/en/latest/tutorial.html" rel="nofollow noreferrer">http://python-evdev.readthedocs.io/en/latest/tutorial.html</a> to get the <code>/dev/input</code> path of your webcam:</p>
<p><strong>Listing accessible event devices</strong></p>
<pre><code>import evdev

devices = [evdev.InputDevice(fn) for fn in evdev.list_devices()]
for device in devices:
    print(device.fn, device.name, device.phys)
</code></pre>
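<p>Once you have identified the right node, a sketch of listening for the button press could look like this (the node name is hypothetical, use the one from the listing above; the script must run as root):</p>
<pre><code>import evdev
from evdev import ecodes

device = evdev.InputDevice('/dev/input/event3')  # hypothetical node, use yours
for event in device.read_loop():
    if event.type == ecodes.EV_KEY and event.value == 1:  # key-down event
        print('webcam button pressed:', evdev.categorize(event))
</code></pre>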
|
python|python-2.7
| 1 |
1,901,716 | 48,594,673 |
google cloud platform for image fetching from url
|
<p>I need to download 1 million images (approx.) from different web sources using image URLs and store them in Google Cloud Platform using Python. Are there any specific services available in Google Cloud Platform for faster download with a storage option?</p>
|
<p>You can use <a href="https://cloud.google.com/storage/" rel="nofollow noreferrer">Google Cloud Storage</a>. Make sure you create buckets in the region closest to your users' location. Sample POST: </p>
<pre><code>https://www.googleapis.com/upload/storage/v1/b/myBucket/o?uploadType=media&name=myObject
</code></pre>
<p>1M images can potentially be a lot of bandwidth. <a href="https://cloud.google.com/storage/docs/access-control/create-signed-urls-program" rel="nofollow noreferrer">Signed URLs</a> on Cloud Storage can help you control access to the buckets, like giving time-limited read/write access to a specific resource.</p>
<p>Here is the official <a href="https://cloud.google.com/storage/docs/" rel="nofollow noreferrer">documentation</a>, the <a href="https://cloud.google.com/storage/docs/json_api/v1/" rel="nofollow noreferrer">API</a>, and <a href="https://cloud.google.com/storage/docs/best-practices#uploading" rel="nofollow noreferrer">best practices</a>. </p>
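<p>A minimal download-and-upload sketch with the Python client library (assuming the bucket already exists and credentials are configured; the bucket name, object name, and <code>image_url</code> are placeholders):</p>
<pre><code>from google.cloud import storage
import requests

client = storage.Client()
bucket = client.bucket('my-image-bucket')  # placeholder bucket name

resp = requests.get(image_url, timeout=30)  # image_url: one of your source URLs
blob = bucket.blob('images/img_0001.jpg')   # placeholder object name
blob.upload_from_string(resp.content, content_type='image/jpeg')
</code></pre>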
|
python|google-cloud-platform|imagedownload
| 0 |
1,901,717 | 51,354,616 |
Extracting HTML tag content with xpath from a specific website
|
<p>I am trying to extract the contents of a specific tag on a webpage by using lxml, namely on Indeed.com.</p>
<p>Example page: <a href="https://www.indeed.com/cmp/The-Habitat-Company/jobs/Janitor-eb00a371b5460027?sjdu=QwrRXKrqZ3CNX5W-O9jEvUx5EqoNSCDS7Hcvh7EMNpiZzZ9eiZPJIDkC3NDWbdUNmWvfZSevVvzi4YXBmkjFd8Bkdc4qgiTbLPuuF_N0vKU&tk=1cigks4msafthee0&vjs=3" rel="nofollow noreferrer">link</a></p>
<p>I am trying to extract the company name and position name. Chrome shows that the company name is located at </p>
<pre><code>"//*[@id='job-content']/tbody/tr/td[1]/div/span[1]"
</code></pre>
<p>and the position name is located at</p>
<pre><code>"//*[@id='job-content']/tbody/tr/td[1]/div/b/font"
</code></pre>
<p>This bit of code tries to extract those values from a locally saved and parsed copy of the page:</p>
<pre><code>import lxml.html as h
xslt_root = h.parse("Temp/IndeedPosition.html")
company = xslt_root.xpath("//*[@id='job-content']/tbody/tr/td[1]/div/span[1]/text()")
position = xslt_root.xpath("//*[@id='job-content']/tbody/tr/td[1]/div/b/font/text()")
print(company)
print(position)
</code></pre>
<p>However, the print commands return empty strings, meaning nothing was extracted!</p>
<p>What is going on? Am I using the right tags? I don't think these are dynamically generated since the page loads normally with javascript disabled.</p>
<p>I would really appreciate any help with getting those two values extracted.</p>
|
<p>Try it like this:</p>
<pre><code>company = xslt_root.xpath("//div[@data-tn-component='jobHeader']/span[@class='company']/text()")
position = xslt_root.xpath("//div[@data-tn-component='jobHeader']/b[@class='jobtitle']//text()")
['The Habitat Company']
['Janitor-A (Scattered Sites)']
</code></pre>
<p>Once we have the <code>//div[@data-tn-component='jobHeader']</code> path things become pretty straightforward: </p>
<ol>
<li>select the text of the child span <code>/span[@class='company']/text()</code> to get the <strong>company</strong> name</li>
<li><p><code>/b[@class='jobtitle']//text()</code> is a bit more convoluted: since the job title is embedded in a font tag. But we can just select any descendant text using <code>//text()</code> to get the <strong>position</strong>.</p>
<p>An alternative is to select the <code>b</code> or <code>font</code> node and use <code>text_content()</code> to get the text (recursively, if needed), e.g.<br>
<code>xslt_root.xpath("//div[@data-tn-component='jobHeader']/b[@class='jobtitle']")[0].text_content()</code></p></li>
</ol>
|
python|html|xpath|lxml
| 1 |
1,901,718 | 70,589,470 |
How to run the Calculate Field using ArcPy?
|
<p>I have an issue in executing the Calculate field command in Python (ArcPy). I couldn't find any related resources or helpful descriptions regarding this. I hope somebody could help me with this.</p>
<pre><code>inFeatures = r"H:\Python Projects\PycharmProjects\ArcPy\Test_Output\Trial_out.gdb\Trail_Data_A.shp"
arcpy.CalculateField_management(inFeatures, 'ObjArt_Ken', '!AttArt_Ken!'.split('_')[0])
arcpy.CalculateField_management(inFeatures, 'Wert', '!AttArt_Ken!'.split('_')[-1])
</code></pre>
<p>The error i am getting when I run the command is</p>
<pre><code>arcgisscripting.ExecuteError: Error during execution. Parameters are invalid.
ERROR 000989: The CalculateField tool cannot use VB expressions for services.
Error while executing (CalculateField).
</code></pre>
<p>I am using ArcGIS Pro 2.8 and Python 2.7</p>
|
<p><em>Since I can't comment yet, I will have to use an Answer to make several comments</em></p>
<p>You need to clarify, "I am using ArcGIS Pro 2.8 and Python 2.7." ArcGIS Pro has always come bundled with Python 3.x, and ArcGIS Desktop/ArcMap has always been bundled with Python 2.x. It is not possible to use ArcGIS Pro and Python 2.7. Given the error message, it appears you are running some version of ArcMap and Python 2.7.</p>
<p>The original error message, as you seemed to have figured out, was telling you exactly what wasn't working originally. Since you didn't pass <code>expression_type</code> to Calculate Field and you are using some version of ArcMap that defaults to <code>VB</code>, the function was interpreting your expression as VB code and erring because "CalculateField tool cannot use VB expressions for services."</p>
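<p>If you are on ArcMap, a sketch of the call that passes the whole expression as one string and forces the Python parser (<code>PYTHON_9.3</code> is ArcMap's Python expression type; the field names are from the question):</p>
<pre><code>arcpy.CalculateField_management(
    inFeatures,
    'ObjArt_Ken',
    "!AttArt_Ken!.split('_')[0]",
    "PYTHON_9.3"
)
</code></pre>
<p>Note that in the question's code the <code>.split('_')[0]</code> runs in your script on the literal string <code>'!AttArt_Ken!'</code> before the tool ever sees it; the whole expression must be sent as one string so the tool can evaluate it per row.</p>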
<p>Looking at the newer code, I am not sure why you are encoding <code>field_type</code>. Although I tested it and it works, it is unnecessary and confusing to someone looking at the code.</p>
<p>Unless you provide some samples of what the AttArt_Ken field contains, people can't really comment on the output you are seeing.</p>
|
python|arcgis|arcpy
| -1 |
1,901,719 | 73,440,112 |
MySql: Insert record doesn't exists or actually never being created?
|
<p>Recently I've encountered a strange problem which I can't explain based on my current knowledge.<br />
Backend: Python 3, SQLAlchemy.<br />
MySQL config: read-committed, auto-increment id, cluster with 3 nodes.<br />
Query: insert into xxx values(xxx...) and then db.session.commit()<br />
Expected result: new record id returned and MySQL successfully creates one record.<br />
Actual result: new record id returned but no MySQL record created and no binlog found.</p>
<p>I wonder: if something panicked, the transaction should've rolled back and no id should have been returned. What did I miss?</p>
|
<p>Possibly the session is being shared and overwritten across child processes; see
<a href="https://stackoverflow.com/questions/41279157/connection-problems-with-sqlalchemy-and-multiple-processes">Connection problems with SQLAlchemy and multiple processes</a></p>
|
mysql|python-3.x|sqlalchemy
| 0 |
1,901,720 | 50,158,647 |
Amazon web services and tensorflow
|
<p>As someone who has a laptop with insufficient processing power, I am having a hard time trying to train my neural network models. So, I thought of Amazon Web Services as a solution. But I have a few questions. First of all, as far as I know, Amazon SageMaker supports TensorFlow. I could not understand if the service is free or not though. I have heard some people say that it is free for a specific time, others say that it is free unless you surpass a limit. I would be more than happy if someone could clarify or put forward other alternatives that would help me out.<br>
Thanks a lot! </p>
|
<p>They have a free tier, and this is all well documented at <a href="https://aws.amazon.com/sagemaker/pricing/" rel="nofollow noreferrer">https://aws.amazon.com/sagemaker/pricing/</a></p>
|
amazon-web-services|tensorflow|amazon-sagemaker
| 1 |
1,901,721 | 65,009,681 |
how can I select rows in a Dataframe according to specified conditions in several columns?
|
<p>I'm new to pandas and trying to get some rows which match conditions on two columns.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
df = pd.read_csv('sp500.csv')
full_list = []
symbol = df['Symbol']
full_list.append(symbol)
name = df['Name']
full_list.append(name)
sector = df['Sector']
full_list.append(sector)
price = df['Price']
full_list.append(price)
book_value = df['Book Value']
full_list.append(book_value)
low = df['52 week low']
full_list.append(low)
high = df['52 week high']
full_list.append(high)
df = pd.DataFrame(full_list)
df = df.T
print(df.loc[df['Sector'].isin(['Financials','Energy']) and (df['52 week low'] < 80)])
</code></pre>
<p>I can't find the correct command in the documentation, and the problem is in the last line of code. Please help me to understand how it works</p>
|
<p>You're quite close. You need to use bit-wise operators and take care to consider the unintuitive operator precedence:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[
df['Sector'].isin(['Financials','Energy']) & # not "and"
(df['52 week low'] < 80) # these parenthesis are crucial
]
</code></pre>
<p>Side note, without seeing the text file that you're working from, I can't help but think you'd be better off selecting your columns directly instead of rebuilding your dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
cols_to_keep = ['Symbol', 'Name', 'Sector', 'Price', 'Book Value', '52 week low', '52 week high']
rows_to_keep = lambda df: df['Sector'].isin(['Financials','Energy']) & (df['52 week low'] < 80)
df = (
pd.read_csv('sp500.csv')
    .loc[rows_to_keep, cols_to_keep]
)
</code></pre>
|
python|pandas
| 1 |
1,901,722 | 64,774,027 |
Tkinter space between widgets keeps growing
|
<p>I'm trying to get a paned widget to grow with a form but whenever I pull the window down vertically, the gap between the paned widget and the status bar grows.</p>
<pre><code>from tkinter import *
root = Tk()
pw = PanedWindow(root, orient='horizontal')
red = LabelFrame(pw, text='red')
Label(red, text='something', anchor='w').grid(row=1, column=1)
blue = LabelFrame(pw, text='blue')
Label(blue, text='anything', anchor='w').grid(row=1, column=1)
pw.add(red, stretch='always')
pw.add(blue, stretch='always')
status = Label(root, text='Status', relief=SUNKEN)
pw.pack(expand=True, fill=BOTH)
status.pack(side=BOTTOM, expand=True, fill=X)
root.mainloop()
</code></pre>
<p>Is there a way of stopping the gap between the status bar and the bottom of the window and the gap between the status bar and the paned widget from growing?</p>
|
<p>To solve your issue I configured this line:</p>
<pre><code>status.pack(side=BOTTOM, expand=False, fill=X,anchor='n')
</code></pre>
<p>For more information <a href="https://stackoverflow.com/questions/63536505/how-do-i-organize-my-tkinter-appllication/63536506#63536506">see</a>.</p>
|
python|python-3.x|tkinter
| 3 |
1,901,723 | 63,793,969 |
Making table sortable in Dash html Table component
|
<p>I have a table in my app with Dash, but I made it using the html components and not with the DataTable of Dash. The app is quite big and already configured, so I would like to avoid rewriting it. In html there is the <code><table class="sortable"></code> which makes the table sortable. However, when I construct the table with dash, where should I write this attribute? Here is my table code:</p>
<pre><code>def generate_table(dataframe, max_rows=1000):
return html.Table([
html.Thead(
html.Tr([html.Th(col) for col in dataframe.columns])
),
html.Tbody([
html.Tr([
html.Td(dataframe.iloc[i][col]) for col in dataframe.columns
]) for i in range(min(len(dataframe), max_rows))
])
], style={
'margin-right': 'auto',
'margin-left': 'auto'
}
)
</code></pre>
|
<p>To add a class to a Dash component, simply pass it via the <code>className</code> keyword,</p>
<pre><code>... style={'margin-right': 'auto', 'margin-left': 'auto'}, className="sortable")
</code></pre>
<p>The class itself won't make the table sortable though; you'll need to load an <a href="https://stackoverflow.com/a/55730907/2428887">appropriate JavaScript library</a>. In Dash, scripts are <a href="https://dash.plotly.com/external-resources" rel="nofollow noreferrer">usually loaded on app initialization</a>. However, for this type of library to work, it must be loaded <em>after</em> the table has been rendered, which can be achieved using <a href="https://pypi.org/project/dash-defer-js-import/" rel="nofollow noreferrer">this library</a>. Here is a small example:</p>
<pre><code>import dash
import dash_html_components as html
import pandas as pd
import dash_defer_js_import as dji
def generate_table(dataframe, max_rows=1000):
return html.Table([
html.Thead(
html.Tr([html.Th(col) for col in dataframe.columns])
),
html.Tbody([
html.Tr([
html.Td(dataframe.iloc[i][col]) for col in dataframe.columns
]) for i in range(min(len(dataframe), max_rows))
])
], style={'margin-right': 'auto', 'margin-left': 'auto'}, className="sortable")
df = pd.DataFrame({'Fruits': ["Apple", "Banana"], 'Vegetables': ["Tomato", "Cucumber"]})
app = dash.Dash()
app.layout = html.Div([generate_table(df), dji.Import(src="https://www.kryogenix.org/code/browser/sorttable/sorttable.js")])
if __name__ == '__main__':
app.run_server()
</code></pre>
|
python|plotly-dash
| 2 |
1,901,724 | 53,283,766 |
Tkinter Optionmenu StringVar.get() returning blank
|
<p>I am creating a Tkinter window with a for loop so it can self-adjust if, later on, I decide to add more questions. My issue is that I can't save the value selected in the OptionMenu. So far all I got was <code>list1 = ['', '', '']</code> while <code>Strg_var = [StringVar, StringVar, StringVar]</code>, and it prints only the blanks and the automatic PY_VAR variable names.</p>
<pre><code>import tkinter as tk
from tkinter import *
LARGE_FONT = ("Arial", 12)
window=Tk()
def _save():
print(*list1, sep = ", ")
print(*Strg_var, sep = ", ")
Questionlist = ["A. Is A true? :", "B. Is B true? :", "C. Is C true? :"]
choices = ['-', 'Yes', 'No']
n = 0
Strg_var=[0]*len(Questionlist)
list1=[]
for n in range(len(Questionlist)):
Label(window, text=Questionlist[n], font = LARGE_FONT, background = "white").grid(row= n, column=0, columnspan=2, padx=10, pady = 10, sticky="W")
a = tk.StringVar(window)
OptionMenu(window, a, choices[0], *choices).grid(row = n, column=2, padx=10, sticky="WE")
list1.append(a.get())
tk.Button(window, text="Save", command = _save,width=18).grid(row=16, column=0, padx=10, pady=15, sticky="W")
window.mainloop()
</code></pre>
<p>Can someone help me to sort this out on how to save the optionmenu user selections into a list or any other way?</p>
|
<p>You can make a list of <code>StringVar</code> (do give them initial values; I haven't done that in my code). Every time an option is selected, the corresponding item will change. So I would do it like this.</p>
<pre><code>import tkinter as tk
LARGE_FONT = ("Arial", 12)
window=tk.Tk()
def _save():
print(list(map(lambda x: x.get(), a)))
Questionlist = ["A. Is A true? :", "B. Is B true? :", "C. Is C true? :"]
choices = ['-', 'Yes', 'No']
a = [tk.StringVar(window) for i in range(len(Questionlist))]
n = 0
for n in range(len(Questionlist)):
tk.Label(window, text=Questionlist[n], font = LARGE_FONT, background = "white").grid(row= n, column=0, columnspan=2, padx=10, pady = 10, sticky="W")
tk.OptionMenu(window, a[n], *choices).grid(row = n, column=2, padx=10, sticky="WE")
tk.Button(window, text="Save", command = _save,width=18).grid(row=16, column=0, padx=10, pady=15, sticky="W")
window.mainloop()
</code></pre>
<p><a href="https://i.stack.imgur.com/OOZeu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OOZeu.png" alt="enter image description here"></a> </p>
<p>Output:</p>
<p><code>['No', '-', 'Yes']</code></p>
|
python|tkinter|optionmenu
| 2 |
1,901,725 | 62,692,601 |
How would I access the objects of a website through Python?
|
<p>Okay, as far as I know those "Objects" are only accessible through the console, but if they're accessible through the console, why wouldn't they be accessible through Python? I haven't tried anything yet because I have literally NO idea of what I could do..... Any help would be appreciated. Is it possible to get the objects through requests? Also, I would appreciate the name that those "Objects" of websites are called :D Thanks.</p>
|
<p>You can check out Selenium for Python;
it has methods for executing scripts and finding DOM elements.</p>
<p>driver.execute_script("some javascript code here");</p>
<pre><code>from selenium import webdriver

driver = webdriver.Firefox()
driver.get("https://www.google.co.in")
# run JavaScript in the page context and return its result
title = driver.execute_script("return document.title;")
# or locate a DOM element through Selenium's own API
search_box = driver.find_element_by_xpath("//input[@name='q']")
</code></pre>
<p>Another method would be to use BeautifulSoup (bs4).</p>
<p>You can also use Scrapy. It's quite powerful, but it imposes a project structure that must be followed, so there's less freedom.</p>
|
python|web-scraping|request
| 1 |
1,901,726 | 62,039,496 |
I have checked out answers but couldn't solve the issue 'expected an indented block'
|
<p>I have code like this and I get the error "expected an indented block" for the line
<code>event = Events(int(splitted[0]), int(splitted[1]), int(splitted[2]), int(splitted[3]), splitted[4].lower())</code>. I'm new to Python, so I can't figure out what is really going on. Please help.</p>
<pre><code> class Events: ## arama ve sms classı (user arama ya da sms oluşturduğunda listeye girecek olan objeler buradan üretilecek)
callduration = 0
def __init__(self, PhoneNo, Year, Month, Day, Type):
self.PhoneNo = PhoneNo
self.Year = Year
self.Month = Month
self.Day = Day
self.Type = Type
class User:
EventsList = [] ## event objelerinin listede tutulması
def __init__(self, PhoneNo, FName, LName): ## user classından obje üretmek için construction
self.PhoneNo = PhoneNo
self.FName = FName
self.LName = LName
self.Credit = 100
self.Smscounter = 0
def Add(self): ## Add fonksiyonu
print("Add Event with comma for example : PhoneNo,Year,Month,Day,Type(SMS OR CALLING)")
print("PhoneNo,Year,Month,Day,Type(SMS OR CALLING)")
eventInput = input()
splitted = eventInput.split(",") ## Virgüllerin Split işlemi arraya atılması
if len(splitted) == 5:
try:
event = Events(int(splitted[0]), int(splitted[1]), int(splitted[2]), int(splitted[3]),splitted[4].lower())
if event.Type == "sms": ## SMS kontrolü
if(self.Credit != 0):
self.Credit = self.Credit - 1 ## Credittten Düşme
self.Smscounter = self.Smscounter + 1
print("\n ~~~~~~Call/Sms Added~~~~~ \n")
self.EventsList.append(event)
elif event.Type == "call": ## Arama Kontrolü
print("Call duration in minute ?")
duration = input()
event.callduration = int(duration) ## Arama dakikası inputu alma
print("\n ~~~~~~Call/Sms Added~~~~~ \n")
self.EventsList.append(event)
else:
print("Wrong Input!")
except Exception:
print("Wrong Input")
else:
print("Wrong Input!")
</code></pre>
|
<p>this is the correct version of your code:</p>
<pre><code>class Events: ## arama ve sms classı (user arama ya da sms oluşturduğunda listeye girecek olan objeler buradan üretilecek)
callduration = 0
def __init__(self, PhoneNo, Year, Month, Day, Type):
self.PhoneNo = PhoneNo
self.Year = Year
self.Month = Month
self.Day = Day
self.Type = Type
class User:
EventsList = [] ## event objelerinin listede tutulması
def __init__(self, PhoneNo, FName, LName): ## user classından obje üretmek için construction
self.PhoneNo = PhoneNo
self.FName = FName
self.LName = LName
self.Credit = 100
self.Smscounter = 0
def Add(self): ## Add fonksiyonu
print("Add Event with comma for example : PhoneNo,Year,Month,Day,Type(SMS OR CALLING)")
print("PhoneNo,Year,Month,Day,Type(SMS OR CALLING)")
eventInput = input()
splitted = eventInput.split(",") ## Virgüllerin Split işlemi arraya atılması
if len(splitted) == 5:
try:
event = Events(int(splitted[0]), int(splitted[1]), int(splitted[2]), int(splitted[3]), splitted[4].lower())
if event.Type == "sms": ## SMS kontrolü
if (self.Credit != 0):
self.Credit = self.Credit - 1 ## Credittten Düşme
self.Smscounter = self.Smscounter + 1
print("\n ~~~~~~Call/Sms Added~~~~~ \n")
self.EventsList.append(event)
elif event.Type == "call": ## Arama Kontrolü
print("Call duration in minute ?")
duration = input()
event.callduration = int(duration) ## Arama dakikası inputu alma
print("\n ~~~~~~Call/Sms Added~~~~~ \n")
self.EventsList.append(event)
except Exception:
print("Wrong Input")
else:
print("Wrong Input!")
else:
print("Wrong Input!")
</code></pre>
<p>Please pay attention to the edits I made in your code. A couple of things to remember:
you can't have outer-scope code between a <code>try</code> and its <code>except</code> block; that breaks their connection.
Print statements, like any other sub-scope code, must be indented if they are going to be part of a scope.</p>
|
python-3.x
| 1 |
1,901,727 | 62,033,256 |
Converting numerical date back to date in Python
|
<p>I created a dataframe with the first column as 'date', the rest our numerical columns. </p>
<pre><code>date transportation pm25 pm10 ozone no2 so2 co final_aqi label
719163 21 162 193 24 40 2 16 193 119.0
....
589 rows × 10 columns
</code></pre>
<p>Here I converted the dates to numeric form using</p>
<pre><code>import datetime as dt
df['date'] = pd.to_datetime(df['date'])
df['date'] = df['date'].map(dt.datetime.toordinal)
</code></pre>
<p>I wish to use linear regression in my code; will my predictions be altered if my date is numeric (the date column is a feature)? Also, how should I go back to the date format?</p>
|
<p>If date is not one of the input features, then there is no effect at all, because it will not be included in your feature matrix.
If date is an input to your model, i.e. one of the input features, then keep it numerical as it is.
If your model is predicting something that is related to a specific date, then it's better to include the distance from that date, i.e. (date - specific date).</p>
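<p>To go back from the ordinal numbers to dates, you can apply the inverse of <code>toordinal</code>:</p>
<pre><code>import datetime as dt
df['date'] = df['date'].map(dt.datetime.fromordinal)
</code></pre>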
|
python|datetime|linear-regression|datetime-format
| 0 |
1,901,728 | 61,769,000 |
How to convert an image to URL Discord Py
|
<p>Is there any way to convert an image to a URL?</p>
<p>Like, if an admin "x" does a <code>!Warn Simplezes Reason</code> (the reason should be text accompanied by an image that the admin will send),
when the admin sends the command, the bot should display something like <code>Simplezes has been warned. Reason:</code> and show a URL to the picture that the admin sent.</p>
|
<p>In discord.py the Message object has an attribute called "attachments"; it's a list of all the attachments of the message (the "Attachment" class<a href="https://discordpy.readthedocs.io/en/latest/api.html#discord.Attachment" rel="nofollow noreferrer"> [DISCORD.PY DOCS]</a>).</p>
<p>Now we know that this attribute contains a list of "Attachment" objects.
The "Attachment" class has an attribute called "url", and that's the URL of the attachment.</p>
<p>That what you was looking for ^^, now for example you can do:</p>
<pre><code>for attachment in message.attachments:
# Do what you want with attachment.url
print(attachment.url)
</code></pre>
<p>I also recommend keeping the <a href="https://discordpy.readthedocs.io/en/latest/" rel="nofollow noreferrer">discord.py docs</a> handy.</p>
<p>And join the discord.py Discord server for fast responses to your questions related to discord.py!</p>
|
python|discord.py
| 3 |
1,901,729 | 67,445,613 |
Selenium-wire does not intercept requests when connecting remotely
|
<p>I am using Selenoid on a dedicated computer to run browsers.<br />
The connection is as follows:</p>
<pre class="lang-py prettyprint-override"><code>from seleniumwire import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('disable-infobars')
chrome_options.add_argument('--disable-extensions')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_experimental_option('prefs', prefs)
capabilities = {
"browserName": "chrome",
"selenoid:options": {
"enableVNC": True
}
}
capabilities.update(chrome_options.to_capabilities())
driver = webdriver.Remote(
command_executor='http://<remote_ip>:4444/wd/hub',
desired_capabilities=capabilities,
seleniumwire_options={
'auto_config': False,
'addr': '0.0.0.0'
}
)
</code></pre>
<p>The connection is ok, browser control works too, but when I want to get the list of requests it is empty:</p>
<pre class="lang-py prettyprint-override"><code>driver.get('https://google.com')
print(driver.requests)
# []
</code></pre>
|
<p>Try the following patch:</p>
<pre><code># Go to the Google home page
driver.get('https://www.google.com')
# Access requests via the `requests` attribute
for request in driver.requests:
if request.response:
print(
request.url,
request.response.status_code,
request.response.headers['Content-Type']
)
</code></pre>
<p>You should get some result similar to this:</p>
<pre><code>https://www.google.com/ 200 text/html; charset=UTF-8
https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_120x44dp.png 200 image/png
https://consent.google.com/status?continue=https://www.google.com&pc=s&timestamp=1531511954&gl=GB 204 text/html; charset=utf-8
https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png 200 image/png
https://ssl.gstatic.com/gb/images/i2_2ec824b0.png 200 image/png
https://www.google.com/gen_204?s=webaft&t=aft&atyp=csi&ei=kgRJW7DBONKTlwTK77wQ&rt=wsrt.366,aft.58,prt.58 204 text/html; charset=UTF-8
...
</code></pre>
|
python-3.x|selenium|selenoid|seleniumwire
| -1 |
1,901,730 | 60,673,007 |
How to make a Card fill screen height?
|
<p>Using <code>dash</code> to create a simple application with a left menu and a plot as the main output.</p>
<p>I've adjusted widths using the <code>md</code> option of <code>dbc.Col</code>, and this is working ok as I resize the browser window.</p>
<p>However I'm having issues adjusting the height. I would like the <code>Card(GRAPHS)</code> below to fill the available screen height. How to I do that?</p>
<p>I have a callback retuning a <code>plotly</code> figure to the tag <code>my-graph</code>. If I use the figure height property, it will be fixed and won't resize.</p>
<p>The default height coming from <code>dash</code>/<code>plotly</code> is making the figure to short.</p>
<pre><code>import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_bootstrap_components as dbc
GRAPHS = [
dbc.CardHeader(html.H5('header')),
dbc.CardBody([dcc.Graph(id='my-graph')])
]
app = dash.Dash(__name__)
app.layout = html.Div([
dbc.Container([
dbc.Row([html.H1('title')]),
dbc.Row([
dbc.Col([dbc.Card(LEFT_MENU)]),
dbc.Col([dbc.Card(GRAPHS)]) # How to make this fill screen height?
])])])
</code></pre>
<p><a href="https://i.stack.imgur.com/NEzi2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NEzi2.png" alt="enter image description here"></a></p>
|
<p>Maybe someone with more experience using <code>dash</code> could give you a better answer, but I think you can set up a specific vertical size for your cards using the keyword <code>style</code>. Then you can pass a dictionary with the arguments for CSS stylesheet, in this case we want to change the height so <code>{'height':'100vh'}</code>. I would try this:</p>
<pre><code>app.layout = html.Div([
dbc.Container([
dbc.Row([html.H1('title')]),
dbc.Row([
dbc.Col([dbc.Card(LEFT_MENU, style={'height':'100vh'})]),
dbc.Col([dbc.Card(GRAPHS, style={'height':'100vh'})])
])])])
</code></pre>
<p>You can play around with different <code>vh</code> values to see what fits best</p>
<p>Alternatively you can fit the cards into a <code>CardGroup</code> as explained in the <a href="https://dash-bootstrap-components.opensource.faculty.ai/docs/components/card/" rel="nofollow noreferrer">documentation</a>. That should force all the cards to have the same height:</p>
<pre><code>app.layout = html.Div([
dbc.Container([
dbc.Row([html.H1('title')]),
dbc.Row([
dbc.CardGroup([
dbc.Card(LEFT_MENU),
dbc.Card(GRAPHS)
])
])
])
</code></pre>
|
python|plotly-dash
| 2 |
1,901,731 | 56,637,678 |
How to paste valid JS code as a string in Python, correctly escaping all possible characters and without inserting anything?
|
<p>I need to pass a string to an RPC call that will be received and compiled as JavaScript on the other side. The problem arises when my JavaScript code has 2.5M+ lines, UTF-8 chars and others that need to be escaped (', `, \', é, ^...).</p>
<p>I already tried using triple quotes and inserting it as a multiline string and replacing \n with '' afterwards, but I'm not sure it worked, as my terminal ran for 8+ minutes without producing anything (no error from the RPC side nor from my Python side).</p>
<pre><code>api = Savoir(rpcuser, rpcpasswd, rpchost, rpcport, chainname)
js = "This would be all my JS Code"
print(api.create('txfilter', 'nameOfTheFunction', {}, js))
</code></pre>
<p><a href="https://gist.githubusercontent.com/tloriato/d3aa5df63c12cfec05c9b55f39f7fcfc/raw/51e16c97bc57aa88eb8a77aa2ffc17df1c85dbd1/file.js" rel="nofollow noreferrer">Here's my JS code (smaller to be easier, ~30k lines)</a></p>
|
<p>Rather than pasting it all into your python file, have you considered just straight-up <em>reading it</em> from the file it's already in?</p>
<pre><code>with open("file.js", 'r') as js_file:
js = js_file.read()
# you can put a benchmark here to see how long loading the file takes - or to make sure it loaded correctly
...
api.create('txfilter', 'nameOfTheFunction', {}, js)
</code></pre>
<p>This removes the need to fuss over quote characters, escaped characters, newlines, etc. because they're all exactly how they were in the file - python doesn't <em>parse</em> this text, it just <em>copies</em> it.</p>
<p>I would also submit that, if your javascript file has any characters that would need to be omitted on the other end, you simply remove them in said file before putting them into your program. It's also possible that it's the API call that's taking a long time to execute, not your own code - you can verify this with <code>print()</code> statements placed between each line, or with a debugger.</p>
|
javascript|python|python-3.x
| 1 |
1,901,732 | 66,021,175 |
python regex include end-of-line char in capture group or not?
|
<p>When doing a search within a line that includes the end of the line should one include the $ into the capture group or not?</p>
<p>Example included:</p>
<pre><code>x=re.search("\S+(bla\S+bla$)",line)
</code></pre>
<p>Example excluded:</p>
<pre><code>x=re.search("\S+(bla\S+bla)$",line)
</code></pre>
<p>Are there significant advantages / risks of one over the other?</p>
<p>Thanks.</p>
|
<p>The <code>$</code> ending anchor is zero-width, meaning that even if you do capture it, it won't contribute anything to the capture group. So, I might actually consider the first version to be an anti-pattern, and would therefore always use the second version:</p>
<pre class="lang-py prettyprint-override"><code>x = re.search("\S+(bla\S+bla)$", line)
</code></pre>
<p>In both cases, the underlying pattern is the same, so I would expect the regex engine to take the same steps, and have the same performance.</p>
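<p>A quick check on a made-up input string (hypothetical, purely for illustration) shows the captured group is identical:</p>
<pre class="lang-py prettyprint-override"><code>import re

line = "xxblaYYbla"
print(re.search(r"\S+(bla\S+bla$)", line).group(1))  # blaYYbla
print(re.search(r"\S+(bla\S+bla)$", line).group(1))  # blaYYbla
</code></pre>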
|
python|regex
| 1 |
1,901,733 | 63,045,970 |
Python splitting column value with special delimeters
|
<p>I am trying to split a pandas column value without losing its delimiter. Here is the <a href="https://stackoverflow.com/questions/56961327/splitting-columns-values-in-pandas-by-delimiter-without-losing-delimiter">Stack Overflow answer</a> that I am following. It works well when I pass a plain string, but it doesn't work when I want to split by '/m'. I tried different regexes, but none seem to work either. Any suggestions?</p>
<pre><code>import pandas as pd
ls = [
{'ID': 'ABC',
'LongString': '/m/04abc3 1 1 1 1 /m/04ccc32 3 3 3 3'},
{'ID': 'CDE',
'LongString': '/m/04abc4 2 2 2 2 /m/04ccc12 4 4 4 4'}
]
df = pd.DataFrame(ls)
df['LongString'] = df['LongString'].str.split('(?<=/m)\s') # tried removing `/` and put in `m` for testing. Did not do the trick.
</code></pre>
<p>I am trying to get it to look like this. What am I doing wrong here?</p>
<pre><code>pandas dataframe format:
ID | LongString
ABC | ['/m/04abc3 1 1 1 1', '/m/04ccc32 3 3 3 3']
CDE | ['/m/04abc4 2 2 2 2', '/m/04ccc12 4 4 4 4']
</code></pre>
|
<p>It looks as if you want to split on a white space <strong>followed</strong> by <code>/m</code>. In regex language, you want a lookahead rather than a lookbehind.</p>
<p>Proposed solution:</p>
<pre><code>df['LongString'] = df['LongString'].str.split(r'\s(?=/m)')
</code></pre>
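<p>With the sample data this produces the lists from the question (printed column widths may differ):</p>
<pre><code>print(df)
#     ID                               LongString
# 0  ABC  [/m/04abc3 1 1 1 1, /m/04ccc32 3 3 3 3]
# 1  CDE  [/m/04abc4 2 2 2 2, /m/04ccc12 4 4 4 4]
</code></pre>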
|
python-3.x|pandas|split
| 3 |
1,901,734 | 58,708,216 |
Right click menu opens far away from clicked place
|
<p>I have a canvas. When I right click on it:</p>
<ol>
<li>It returns me coords where I've clicked, that's ok;</li>
<li>Opens popup menu with some functions, <strong>but far away (left - up) and that is a main problem</strong>. That's not ok;</li>
</ol>
<p>I want this menu to open next to the place I've just right clicked (Up-right from the place where mouse clicked). </p>
<p>How should I specify to it where to open this menu or maybe I need to change some attributes?</p>
<pre><code>from tkinter import *
from tkinter import ttk # for Treeview
class Main(Frame):
def __init__(self, main):
Frame.__init__(self, main)
# Vars that will be defined by user:
self.frame_width = 300
self.frame_height = 300
self.canvas_width = 600
self.canvas_height = 600
# Flexible widgets when window size alters:
main.columnconfigure(0, weight=1)
main.rowconfigure(0, weight=1)
# Some text
Label(text='Some text 1').grid(row=0, column=0)
Label(text='Some text 2').grid(row=1, column=0)
Label(text='Some text 3').grid(row=2, column=0)
Label(text='Some text 4').grid(row=3, column=0)
Label(text='Some text 5').grid(row=4, column=0)
# Canvas and frame
canvas_frame = Frame(main, width=self.frame_width, height=self.frame_height, relief='groove')
canvas_frame.grid(row=1, column=1)
canvas_frame.grid_propagate(False)
self.canvas = Canvas(canvas_frame, width = self.canvas_width, height = self.canvas_height,
bg='bisque')
self.canvas.grid(row=0, column=0)
# (!!!) RIGHT CLICK MENU
self.rmenu = Menu(self.canvas, tearoff=0, takefocus=0)
self.rmenu.add_command(label='Add', command=self.hello)
self.canvas.bind("<ButtonPress-3>", self.popup)
# Bind
self.canvas.bind("<ButtonPress-1>", self.scroll_start)
self.canvas.bind("<B1-Motion>", self.scroll_move)
# Functions
self.grid(self.canvas)
# End of init function.
# canvas popup menu: --------------------------------------------------
# (!!!) HERE IS THE POPUP RIGHTCLICK MENU FUNCTIONS
def popup(self, event):
self.rmenu.tk_popup(event.x + 40, event.y + 10, entry="0")
coords_1 = []
coords_1 += [[event.x, event.y]]
print(coords_1)
def hello(self):
print('hello')
# grid: --------------------------------------------------
def grid(self, canvas):
for l in range(0, self.canvas_width, 10):
canvas.create_line([(l, 0), (l, self.canvas_width)], fill='#d9d9d9')
for l in range(0, self.canvas_height, 10):
canvas.create_line([(0, l), (self.canvas_height, l)], fill='#d9d9d9')
# scrolling canvas: ----------------------------------------
def scroll_start(self, event):
self.canvas.scan_mark(event.x, event.y)
def scroll_move(self, event):
self.canvas.scan_dragto(event.x, event.y, gain=1)
if __name__ == '__main__':
main = Tk()
# Making window opens in the centre of the screen
w = 800
h = 650
ws = main.winfo_screenwidth()
hs = main.winfo_screenheight()
x = (ws / 2) - (w / 2)
y = (hs / 2) - (h / 2)
main.geometry('%dx%d+%d+%d' % (800, 600, x, y))
#
Main(main)
main.mainloop()
</code></pre>
<p>All necessary parts I pointed out with (!!!) and uppercase text</p>
|
<p>Use <code>event.x_root</code> and <code>event.y_root</code> instead of <code>event.x</code>, <code>event.y</code> to get absolute screen coordinates.</p>
<pre><code>self.rmenu.tk_popup(event.x_root + 40, event.y_root + 10, entry="0")
</code></pre>
<p>Doc: <a href="https://effbot.org/tkinterbook/tkinter-events-and-bindings.htm" rel="nofollow noreferrer">Events and Bindings</a></p>
<hr>
<p>BTW: there is also </p>
<pre><code>x = root.winfo_pointerx()
y = root.winfo_pointery()
</code></pre>
<p>which also give absolute screen coordinates for mouse. </p>
<p>(Stackoverflow: <a href="https://stackoverflow.com/a/22943296/1832058">Mouse Position Python Tkinter</a>)</p>
|
python|tkinter
| 2 |
1,901,735 | 58,984,474 |
How to get CORRECT feature importance plot in XGBOOST?
|
<p>Using two different methods in XGBOOST feature importance, gives me two different most important features, which one should be believed? </p>
<p>Which method should be used when? I am confused.</p>
<h1>Setup</h1>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import xgboost as xgb
df = sns.load_dataset('mpg')
df = df.drop(['name','origin'],axis=1)
X = df.iloc[:,1:]
y = df.iloc[:,0]
</code></pre>
<h1>Numpy arrays</h1>
<pre class="lang-py prettyprint-override"><code># fit the model
model_xgb_numpy = xgb.XGBRegressor(n_jobs=-1,objective='reg:squarederror')
model_xgb_numpy.fit(X.to_numpy(), y.to_numpy())
plt.bar(range(len(model_xgb_numpy.feature_importances_)), model_xgb_numpy.feature_importances_)
</code></pre>
<p><a href="https://i.stack.imgur.com/dvrNf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dvrNf.png" alt="enter image description here"></a></p>
<h1>Pandas dataframe</h1>
<pre class="lang-py prettyprint-override"><code># fit the model
model_xgb_pandas = xgb.XGBRegressor(n_jobs=-1,objective='reg:squarederror')
model_xgb_pandas.fit(X, y)
axsub = xgb.plot_importance(model_xgb_pandas)
</code></pre>
<p><a href="https://i.stack.imgur.com/MMvSu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MMvSu.png" alt="enter image description here"></a></p>
<h1>Problem</h1>
<p>Numpy method shows 0th feature cylinder is most important. Pandas method shows model year is most important. Which one is the CORRECT most important feature?</p>
<h1>References</h1>
<ul>
<li><a href="https://stackoverflow.com/questions/37627923/how-to-get-feature-importance-in-xgboost">How to get feature importance in xgboost?</a></li>
<li><a href="https://stackoverflow.com/questions/57360703/feature-importance-gain-in-xgboost">Feature importance 'gain' in XGBoost</a></li>
</ul>
|
<p>There are 3 ways to get feature importance from Xgboost:</p>
<ul>
<li>use built-in feature importance (I prefer <code>gain</code> type),</li>
<li>use permutation-based feature importance</li>
<li>use SHAP values to compute feature importance</li>
</ul>
<p>In my <a href="https://mljar.com/blog/feature-importance-xgboost/" rel="nofollow noreferrer">post</a> I wrote code examples for all 3 methods. Personally, I'm using permutation-based feature importance. In my opinion, the built-in feature importance can show features as important after overfitting to the data (this is just an opinion based on my experience). SHAP explanations are fantastic, but sometimes computing them can be time-consuming (and you need to downsample your data).</p>
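<p>For reference, a minimal sketch of the permutation-based approach with scikit-learn's <code>permutation_importance</code>, applied to the <code>model_xgb_pandas</code> fit from the question; the <code>scoring</code> and <code>n_repeats</code> values are my choices, and ideally you would compute this on a held-out set rather than the training data:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.inspection import permutation_importance

perm = permutation_importance(model_xgb_pandas, X, y,
                              scoring='neg_mean_squared_error',
                              n_repeats=10, random_state=0)
# print features from least to most important, with spread across repeats
for i in perm.importances_mean.argsort():
    print(f"{X.columns[i]}: {perm.importances_mean[i]:.4f} +/- {perm.importances_std[i]:.4f}")
</code></pre>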
|
python|xgboost
| 3 |
1,901,736 | 25,372,500 |
How to form a dictionary with different parameters?
|
<p>I have a <code>class Test</code> with class parameter </p>
<p><code>parameters = {'first': [1,2,3,4], 'second': [5,6,7]}</code>. I want to convert it into a dictionary so that it will be <code>"{'Test': 'first':1 'second':5}"</code></p>
<p>What I tried is:</p>
<pre><code>di = {}
di = dict(itertools.izip(name, vals))
</code></pre>
<p>where I'm getting Test i.e classnane in variable name i.e <code>name = Test</code> and</p>
<p><code>vals = {'first': [1,2,3,4], 'second': [5,6,7]}</code>
Though I want it as <code>"{'Test': 'first':1 'second':5}"</code>, shouldn't this print <code>"{'Test': 'first':[1,2,3,4] 'second':[5,6,7]}"</code>? Instead, what I get when printing <code>di</code> is <code>{'Test': 'first'}</code>. I can't see where my logic is going wrong.</p>
|
<p>Use a <a href="http://legacy.python.org/dev/peps/pep-0274/" rel="nofollow">dictionary comprehension</a>:</p>
<pre><code>subjects = ['Physics', 'Chemistry', 'Maths']
{subject: {} for subject in subjects}
</code></pre>
|
python|list|class|dictionary|set
| 4 |
1,901,737 | 59,952,284 |
How to link a Python script to React Native app
|
<p>I'm developing a react native application using Expo, it will display different data, preprocessed and cleaned with Python, along with sentiments analysis on tweets regarding a specific topic.</p>
<p>What is the best way to do that? I read about using RESTful API with Flask but after some reading I don't think it will serve this purpose.</p>
<p>Thank you in advance.</p>
|
<p>You could create an AWS lambda endpoint for sending data and retrieving the processed results. With the free tier of AWS lambda you get "...1M free requests per month and 400,000 GB-seconds of compute time per month."</p>
<p>Seems like it might suit your use-case.</p>
<p>Tutorial you can easily follow along with here: <a href="https://www.tutorialspoint.com/aws_lambda/aws_lambda_function_in_python.htm" rel="nofollow noreferrer">https://www.tutorialspoint.com/aws_lambda/aws_lambda_function_in_python.htm</a></p>
<p>Some more info on setting up a rest API using lambda:
<a href="https://blog.sourcerer.io/full-guide-to-developing-rest-apis-with-aws-api-gateway-and-aws-lambda-d254729d6992" rel="nofollow noreferrer">https://blog.sourcerer.io/full-guide-to-developing-rest-apis-with-aws-api-gateway-and-aws-lambda-d254729d6992</a></p>
<p>Reference for AWS lambda free tier here:
<a href="https://aws.amazon.com/lambda/pricing/" rel="nofollow noreferrer">https://aws.amazon.com/lambda/pricing/</a></p>
|
python|python-3.x|react-native|expo
| 3 |
1,901,738 | 59,971,036 |
How to change value by checking if row x+1-row x is greater than certain value(pandas)
|
<p>I have a table that looks like this:</p>
<pre><code>Dates, Minutes
1/24/2020 2:58:04 PM, 0
1/24/2020 3:13:04 PM, 0
1/27/2020 10:04:09 AM, 3
1/27/2020 10:19:09 AM, 0
1/27/2020 10:34:09 AM, 0
1/27/2020 10:49:10 AM, 1
1/27/2020 11:04:09 AM, 0
1/27/2020 11:19:09 AM, 1
1/27/2020 11:34:09 AM, 1
1/27/2020 11:49:09 AM, 0
1/27/2020 12:04:09 PM, 13
1/27/2020 12:19:09 PM, 0
1/27/2020 12:34:09 PM, 0
1/27/2020 12:49:09 PM, 0
1/27/2020 1:04:09 PM, 11
1/27/2020 1:19:09 PM, 26
1/27/2020 1:34:09 PM, 41
1/27/2020 1:49:09 PM, 0
1/27/2020 2:04:09 PM, 0
1/27/2020 2:19:09 PM, 12
1/27/2020 2:34:09 PM, 0
</code></pre>
<p>I am looking to check whether the difference between the current row and the previous row is greater than or equal to 15 and, if it is, to change the value to 15.
So the new table would look like:</p>
<pre><code>Dates, Minutes
1/24/2020 2:58:04 PM, 0
1/24/2020 3:13:04 PM, 0
1/27/2020 10:04:09 AM, 3
1/27/2020 10:19:09 AM, 0
1/27/2020 10:34:09 AM, 0
1/27/2020 10:49:10 AM, 1
1/27/2020 11:04:09 AM, 0
1/27/2020 11:19:09 AM, 1
1/27/2020 11:34:09 AM, 1
1/27/2020 11:49:09 AM, 0
1/27/2020 12:04:09 PM, 13
1/27/2020 12:19:09 PM, 0
1/27/2020 12:34:09 PM, 0
1/27/2020 12:49:09 PM, 0
1/27/2020 1:04:09 PM, 11
1/27/2020 1:19:09 PM, 15*
1/27/2020 1:34:09 PM, 15*
1/27/2020 1:49:09 PM, 0
1/27/2020 2:04:09 PM, 0
1/27/2020 2:19:09 PM, 12
1/27/2020 2:34:09 PM, 0
</code></pre>
<p>*=value changed</p>
|
<p><code>diff()</code> computes the difference between each row and the previous one, and <code>mask</code> replaces the values where that difference is greater than or equal to 15:</p>
<pre><code>df.Minutes = df.Minutes.mask(df.Minutes.diff().ge(15), 15)
</code></pre>
<p>On the sample data this turns the 26 and the 41 into 15, matching the expected output.</p>
|
python|pandas
| 2 |
1,901,739 | 66,774,003 |
Defaultdic sum does not print in python
|
<pre class="lang-py prettyprint-override"><code>defaultdict(<class 'float'>, {('03/24/21', 'BLUE', '390'): 176.0, ('03/24/21', 'BLUE', '391'): 182.0})
</code></pre>
<p>Instead of printing as per the above with the : indicating the sums, I want to print each item separately like:</p>
<pre class="lang-py prettyprint-override"><code>{('03/24/21', 'BLUE', '390'): 176.0 ........ new line
('03/24/21', 'BLUE', '391'): 182.0
</code></pre>
<p>However when I try</p>
<pre class="lang-py prettyprint-override"><code>for d in dictionary:
print(d)
</code></pre>
<p>output:</p>
<p>the sums are missing; I only get:</p>
<pre class="lang-py prettyprint-override"><code>{('03/24/21', 'BLUE', '390') ..........new line
('03/24/21', 'BLUE', '391')
</code></pre>
|
<p>Iterating over a dict yields only its keys; iterate over <code>.items()</code> to get the (key, sum) pairs:</p>
<pre><code>for d in dictionary.items():
    print(d)
</code></pre>
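<p>Or, to reproduce the <code>key: sum</code> formatting from the question, unpack each pair:</p>
<pre><code>for key, total in dictionary.items():
    print(f"{key}: {total}")
</code></pre>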
|
python
| 0 |
1,901,740 | 26,681,293 |
Two try statements with the same except clause
|
<p>Say I want to do something in Python 3 with a two character string like "A1" or "1A", but first check if it's in either form because the user may type it in both ways. (I already have a way to check if a char is a letter.)</p>
<p>How would I implement this in the shortest and easiest way?
I would like to try one thing, and if it fails, try another before raising the exception. Like so:</p>
<pre><code>x = 0
string = 'A1'
try:
x = int(string[1]) # Check that second char is a digit
# something else to check first char is a letter
else try:
x = int(string[0]) # Check that first char is a digit
# something else to check second char is a letter
except:
print('Was in neither form')
</code></pre>
|
<p>You could use a loop:</p>
<pre><code>for i in range(1, -1, -1):
try:
        x = int(string[i])  # strings are immutable, so store the result instead of assigning back
except ValueError:
pass
else:
break
else:
print('Was in neither form')
</code></pre>
<p>If either conversion succeeds, the loop is broken out of with <code>break</code>; if neither index holds a digit, the <code>else</code> suite on the <code>for</code> loop runs and prints the message.</p>
<p>Note that the converted value is stored in <code>x</code> rather than assigned back into the string: string indices cannot be assigned to, because strings are immutable. A cleaner option is to use a regular expression:</p>
<pre><code>import re
valid_pattern = re.compile(r'^(?:\d[A-Z]|[A-Z]\d)$')
def is_valid(string):
return valid_pattern.match(string) is not None
</code></pre>
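<p>For instance:</p>
<pre><code>>>> is_valid('A1')
True
>>> is_valid('1A')
True
>>> is_valid('AA')
False
</code></pre>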
|
python|try-catch
| 4 |
1,901,741 | 44,990,834 |
Getting messages sent from python KafkaProducer
|
<p>My goal is to get data from non-file sources (i.e. generated within a program or sent though an API) and have it sent to a spark stream. To accomplish this, I'm sending the data through a <a href="https://github.com/dpkp/kafka-python" rel="nofollow noreferrer">python-based</a> <code>KafkaProducer</code>:</p>
<pre><code>$ bin/zookeeper-server-start.sh config/zookeeper.properties &
$ bin/kafka-server-start.sh config/server.properties &
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic my-topic
$ python
Python 3.6.1| Anaconda custom (64-bit)
> from kafka import KafkaProducer
> import time
> import json
> producer = KafkaProducer(bootstrap_servers='localhost:9092', value_serializer=lambda v: json.dumps(v).encode('utf-8'))
> producer.send(topic = 'my-topic', value = 'MESSAGE ACKNOWLEDGED', timestamp_ms = time.time())
> producer.close()
> exit()
</code></pre>
<p>My issue is that nothing appears when checking the topic from the consumer shell script:</p>
<pre><code>$ bin/kafka-console-consumer.sh --bootstrap-server localhost:2181 --topic my-topic
^C$
</code></pre>
<p>Is something missing or wrong here? I'm new to spark/kafka/messaging systems, so anything will help. The Kafka version is 0.11.0.0 (Scala 2.11) and no changes are made to the config files.</p>
|
<p>If you start a consumer after sending messages to a topic, the consumer may skip those messages because it sets its offset (which could be considered a "starting point" to read from) to the topic's end. Also note that the console consumer must point at the Kafka broker (port 9092 by default), not at ZooKeeper (port 2181) as in your command. To read from the start, add the <code>--from-beginning</code> option:</p>
<pre><code>$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic my-topic --from-beginning
</code></pre>
<p>Also you can try <code>kafkacat</code>, which is more convenient than Kafka's console consumer and producer (imho). Reading messages from Kafka with <code>kafkacat</code> can be performed with the following command:</p>
<pre><code>kafkacat -C -b 'localhost:9092' -o beginning -e -D '\n' -t 'my-topic'
</code></pre>
<p>Hope it will help.</p>
|
python|apache-kafka|kafka-consumer-api|kafka-python
| 1 |
1,901,742 | 69,296,750 |
Pandas counting and suming specific conditions returns only nan
|
<p>I am trying to follow the otherwise excellent solution provided in the thread <a href="https://stackoverflow.com/questions/20995196/pandas-counting-and-summing-specific-conditions">pandas-counting-and-summing-specific-conditions</a>, but the code only ever outputs nan values, and with sum (not count), gives a future warning.</p>
<p>Basically, for each row in my df, I want to count how many dates in a column are within a range of +/- 1 days of other dates in the same column.</p>
<p>If I were doing it in excel, the following muti-conditional sumproduct or countifs are possible:</p>
<pre><code>= SUMPRODUCT(--(AN2>=$AN$2:$AN$35000-1),--(AN2<=$AN$2:$AN$35000+1)),
</code></pre>
<p>or</p>
<pre><code>=countifs($AN$2:$AN$35000,">="&AN2-1,$AN$2:$AN$35000,"<="&AN2+1)
</code></pre>
<p>In python, trying the approach in the <a href="https://stackoverflow.com/questions/20995196/pandas-counting-and-summing-specific-conditions">linked</a> thread, I believe the code would be:</p>
<pre><code>import pandas as pd
import datetime
df = pd.DataFrame({'datet': [pd.to_datetime("2020-03-04 00:00:00"), pd.to_datetime("2020-03-05 00:00:00"),\
pd.to_datetime("2020-03-09 00:00:00"), pd.to_datetime("2020-03-10 00:00:00"),\
pd.to_datetime("2020-03-11 00:00:00"), pd.to_datetime("2020-03-12 00:00:00")]})
df["caseIntensity"] = df[(df['datet'] <= df['datet'] + datetime.timedelta(days=1)) &\
(df['datet'] >= df['datet'] - datetime.timedelta(days=1))].sum()
</code></pre>
<p>The output should be: 2, 2, 2, 3, 3, 2.
Instead, it is entirely NaN!</p>
<p>Is it correct to assume that because I'm testing conditions, it doesn't matter if I sum or count? If I need to sum, I get a future warning about invalid columns (the columns are valid), which I don't understand. But mostly, my question is why am I only getting nan?</p>
|
<p>I think what you're trying to sum doesn't match the logic you're trying to apply.</p>
<p>Use the following code:</p>
<p>Create a function which counts the rows whose date falls in that range, then call it for each row and save the result as the value of the new column:</p>
<pre><code>import pandas as pd
import datetime
df = pd.DataFrame({'datet': [pd.to_datetime("2020-03-04 00:00:00"), pd.to_datetime("2020-03-05 00:00:00"),\
pd.to_datetime("2020-03-09 00:00:00"), pd.to_datetime("2020-03-10 00:00:00"),\
pd.to_datetime("2020-03-11 00:00:00"), pd.to_datetime("2020-03-12 00:00:00")]})
def get_dates_in_range(df_copy, row):
return df_copy[(df_copy['datet'] <= row['datet'] + datetime.timedelta(days=1)) &\
(df_copy['datet'] >= row['datet'] - datetime.timedelta(days=1))].shape[0]
df["caseIntensity"] = df.apply(lambda row: get_dates_in_range(df, row), axis=1)
datet caseIntensity
0 2020-03-04 2
1 2020-03-05 2
2 2020-03-09 2
3 2020-03-10 3
4 2020-03-11 3
5 2020-03-12 2
</code></pre>
|
python|excel|pandas|datetime|sumproduct
| 1 |
1,901,743 | 55,277,809 |
Q&A: Defining __getitem__ method or assigning __getitem__ in classes
|
<p>Which of the below is the recommended way to do <code>__getitem__</code>, when it maps to an internal sequence type?</p>
<pre class="lang-py prettyprint-override"><code>class A:
def __init__(self, ...):
...
self._internal_sequence = []
def __getitem__(self, key):
return self._internal_sequence[key]
class B:
def __init__(self, ...):
...
self._internal_sequence = []
def __getitem__(self, key):
# I'm pretty sure it won't be this one.
return self._internal_sequence.__getitem__(key)
class C:
_internal_sequence = []
__getitem__ = _internal_sequence.__getitem__
</code></pre>
|
<p>I realised my answer while writing the question, but I'll still post it here so others can benefit.</p>
<p>C) This appears to work fine, until you realise that <code>_internal_sequence</code> is a class variable and is shared between instances. In addition, rebinding <code>_internal_sequence = [...]</code> doesn't update <code>__getitem__</code>, which still points at the old list, so lookups can raise <code>IndexError</code>.</p>
<p>B) This doesn't look as nice as A) and for builtin types it will be slower - see <a href="https://stackoverflow.com/a/10826247/7349206">here</a>.</p>
<p>Hence, A is the recommended way of mapping a class's <code>__getitem__</code> to an internal sequence. </p>
|
python
| 2 |
1,901,744 | 54,173,351 |
How do I write a Django ORM query that searches for a value that is close but doesn't exceed a certain other value?
|
<p>I'm using Python 3.7 with Postgres 9.5. I would like to write a Django ORM query if possible. I have the below model</p>
<pre><code>class PageStat(models.Model):
article = models.ForeignKey(Article, on_delete=models.CASCADE, )
elapsed_time = models.IntegerField(default=0)
score = models.IntegerField(default=0)
</code></pre>
<p>What I would like to do is write a query that, for a given article and elapsed time (an integer in seconds), returns the PageStat object with the greatest elapsed time value that doesn't exceed the argument. So, for example, if my table had these values</p>
<pre><code>article_id elapsed_time
==================
1 10
1 20
1 30
2 15
</code></pre>
<p>If I did a search with article ID "1" and elapsed time "15", I would like to get back the first row (where article_id = 1 and elapsed_time = 10), since "10" is the greatest value that is still less than 15. I know how to write a query for just finding the stats for the article,</p>
<pre><code>PageStat.objects.filter(article=article)
</code></pre>
<p>but I don't know how to factor in the time value.</p>
|
<p>You could try:</p>
<pre><code>PageStat.objects.filter(elapsed_time__lte=20)
</code></pre>
<p>There are also:</p>
<pre><code>lt  - less than
lte - less than or equal
gt  - greater than
gte - greater than or equal
etc.
</code></pre>
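<p>To get the single <code>PageStat</code> with the greatest <code>elapsed_time</code> that doesn't exceed the argument, which is what the question ultimately asks for, you can combine the filter with ordering; a sketch assuming the models above (<code>.first()</code> returns <code>None</code> when nothing matches):</p>
<pre><code>(PageStat.objects
    .filter(article=article, elapsed_time__lte=15)
    .order_by('-elapsed_time')
    .first())
</code></pre>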
|
python|django|python-3.x|postgresql|orm
| 2 |
1,901,745 | 58,553,391 |
How to resolve this error 407 during login?
|
<p>I tried to narrow down the cause by printing values. It shows that user_id and password are received, but authentication doesn't proceed after that.</p>
<pre><code>def login_request(request):
if request.method == 'POST':
user_id = request.POST['user_id']
password = request.POST['password']
user = authenticate(user_id=user_id, password=password)
if user is not None:
login(request, user)
return redirect("/index")
else:
messages.info(request, 'Invalid Credentials')
return redirect('login_request')
else:
return render(request, 'login.html')
</code></pre>
<p>urls.py:</p>
<pre><code>path('login_request/', views.login_request, name='login_request'),
</code></pre>
|
<p>You need to authenticate the user with <em>username</em> and <em>password</em>, not with <em>user_id</em>; Django's default authentication backend doesn't recognise a <code>user_id</code> argument, so <code>authenticate()</code> returns <code>None</code>:</p>
<pre><code>if request.method == 'POST':
username = request.POST['username']
password = request.POST['password']
user = authenticate(request,username=username, password=password)
if user is not None:
login(request, user)
return redirect("/index")
else:
messages.info(request, 'Invalid Credentials')
return redirect('login_request')
else:
return render(request, 'login.html')
</code></pre>
|
python|django
| 1 |
1,901,746 | 45,504,761 |
Django 1.9 Cannot resolve keyword 'models' into field. Choices are: comm, id1, id2, id2_id
|
<p>Having these two models:</p>
<pre><code>class Models(models.Model):
id = models.IntegerField(primary_key=True)
name = models.TextField()
genes = models.TextField(blank=True, null=True)
class Meta:
db_table = 'models'
class ModelInteractions(models.Model):
id1 = models.IntegerField(primary_key=True)
id2 = models.ForeignKey('Models')
comm = models.TextField(blank=True, null=True)
class Meta:
unique_together = (('id1', 'id2'),)
</code></pre>
<p>I'm trying to select <code>comm</code> (from <code>ModelInteractions</code>) but also with <code>name</code> (from <code>Models</code>), for specific <code>request_id</code> (ID received with the request).</p>
<p>I'm using:</p>
<pre><code># request_genes example = "ab-2;cra-19"
genes = request_genes.split(';')
condition = Q(id2=request_id)
for field in genes:
condition &= Q(models__genes__icontains=field)
models = ModelInteractions.objects.filter(condition)
</code></pre>
<p>This returns:</p>
<pre><code>Cannot resolve keyword 'models' into field. Choices are: comm, id1, id2, id2_id.
</code></pre>
<p>Without for loop everything works fine, but I don't have <code>Models</code> data.</p>
<p>What am I missing?</p>
|
<p>Try,</p>
<pre><code>for field in genes:
condition &= Q(id2__genes__icontains=field)
models = ModelInteractions.objects.filter(condition)
</code></pre>
<p><code>Models</code> is the name of the model class, not a field on <code>ModelInteractions</code>. When traversing a relation in a lookup, use the field name (<code>id2</code> here) as the keyword-argument prefix for the manager method <code>filter</code>.</p>
|
python|django|postgresql|django-1.9
| 1 |
1,901,747 | 49,619,142 |
Python: Inserting command line arguments from a variable length list
|
<p>I'm joining up a few videos with VLC. However, the number of videos I'm joining varies. I've been able to get it to work with a fixed number of entries in my output file list with:</p>
<pre><code>p = sub.Popen(['C:\\Program Files\\VideoLAN\\vlc\\vlc.exe',
outputFileList[0],
outputFileList[1],
outputFileList[2],
'vlc://quit',
'--sout-keep',
'--sout=#gather:standard{access=file,dst=D:\\movies\\' + fileName + '.mov}',
'--sout-keep'],
stdout=sub.PIPE,
stderr=sub.PIPE)
</code></pre>
<p>However I'm having trouble figuring out how to provide a varying number of arguments. Sometimes I want to joint 2 videos, sometimes 3, etc. I can't simply loop through and add items in the command line itself (at least I gave it a go). And I can't just provide a list in place of the individual items since it's looking for a string path for each.</p>
<p>Any help would be appreciated.</p>
|
<p>Just concatenate your lists:</p>
<pre><code>p = sub.Popen(['C:\\Program Files\\VideoLAN\\vlc\\vlc.exe'] +
outputFileList +
['vlc://quit',
'--sout-keep',
'--sout=#gather:standard{access=file,dst=D:\\movies\\' + fileName + '.mov}',
'--sout-keep'],
stdout=sub.PIPE,
stderr=sub.PIPE)
</code></pre>
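<p>Equivalently, you can build the argument list up front before the call, which reads a little clearer when the middle part varies:</p>
<pre><code>cmd = ['C:\\Program Files\\VideoLAN\\vlc\\vlc.exe']
cmd.extend(outputFileList)
cmd.extend(['vlc://quit',
            '--sout-keep',
            '--sout=#gather:standard{access=file,dst=D:\\movies\\' + fileName + '.mov}',
            '--sout-keep'])
p = sub.Popen(cmd, stdout=sub.PIPE, stderr=sub.PIPE)
</code></pre>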
|
python|command-line|vlc|popen
| 2 |
1,901,748 | 53,740,911 |
Pandas: Merge dataframes and keep only the minimum value associated with merged unique pairs
|
<p>I'm having a tougher problem with pandas.</p>
<p>I am merging two dataframes on a column <code>V</code> which defines groups.</p>
<p>Both dataframes have also a unique <code>ID</code> column and a <code>Time</code> column.</p>
<p>After merging I compute the <code>Timedelta</code> between those two columns and filter out the negative values:</p>
<pre><code>import pandas as pd
L11 = ['V1','V1','V1','V2','V2','V3','V3','V3','V3']
L12 = [1,2,3,4,5,6,7,8,9]
L13 = [pd.Timestamp("1.1.1980 12:12:12"),
pd.Timestamp("1.1.1980 13:12:12"),
pd.Timestamp("1.2.1980 01:12:12"),
pd.Timestamp("1.1.1980 14:12:12"),
pd.Timestamp("1.1.1980 16:12:12"),
pd.Timestamp("1.1.1980 16:12:12"),
pd.Timestamp("1.1.1980 14:12:12"),
pd.Timestamp("1.1.1980 13:12:12"),
pd.Timestamp("1.2.1980 10:12:12")]
L21 = ['V1','V1','V2','V3','V3','V3','V3','V3','V3']
L22 = [11,12,13,14,15,16,17,18,19]
L23 = [pd.Timestamp("1.1.1980 12:12:12"),
pd.Timestamp("1.1.1980 13:12:12"),
pd.Timestamp("1.1.1980 14:12:12"),
pd.Timestamp("1.1.1980 14:12:12"),
pd.Timestamp("1.1.1980 16:12:12"),
pd.Timestamp("1.1.1980 18:12:12"),
pd.Timestamp("1.1.1980 11:12:12"),
pd.Timestamp("1.1.1980 12:12:12"),
pd.Timestamp("1.2.1980 10:12:12")]
df1 = pd.DataFrame({'V':L11,'ID1':L12,'Time1':L13})
df2 = pd.DataFrame({'V':L21,'ID2':L22,'Time2':L23})
df = pd.merge(df1,df2,on='V')
df["Delta"] = df.Time1-df.Time2
df = df[df.Delta>pd.Timedelta(0)].copy()
df = df.drop(["Time1","Time2"],axis=1)
</code></pre>
<p>Additionally I count how many entries per <code>V</code>-group there are in each dataframe and get the lower value which I'm calling <code>Max</code> because it will be the maximum allowed value of merged entries per group. This ensures that on both sides the <code>ID</code>-values per <code>V</code>-group can be unique.</p>
<pre><code>df1g = df1.groupby("V").ID1.count().reset_index().rename(columns={"ID1":"C1"})
df2g = df2.groupby("V").ID2.count().reset_index().rename(columns={"ID2":"C2"})
df12g = pd.merge(df1g,df2g,on='V')
df12g["Max"] = df12g[["C1","C2"]].min(axis=1)
df = pd.merge(df,df12g[['V','Max']],on='V')
df = df.sort_values(['V','Delta']).reset_index(drop=True)
</code></pre>
<p>This is my sorted example data:</p>
<pre><code> V ID1 ID2 Delta Max
0 V1 2 11 01:00:00 2
1 V1 3 12 12:00:00 2
2 V1 3 11 13:00:00 2
3 V2 5 13 02:00:00 1
4 V3 8 18 01:00:00 4
5 V3 6 14 02:00:00 4
6 V3 7 18 02:00:00 4
7 V3 8 17 02:00:00 4
8 V3 7 17 03:00:00 4
9 V3 6 18 04:00:00 4
10 V3 6 17 05:00:00 4
11 V3 9 16 16:00:00 4
12 V3 9 15 18:00:00 4
13 V3 9 14 20:00:00 4
14 V3 9 18 22:00:00 4
15 V3 9 17 23:00:00 4
</code></pre>
<ul>
<li>Group <code>V1</code> has 3 entries but is only allowed 2</li>
<li>Group <code>V2</code> has 1 entry and is only allowed 1</li>
<li>Group <code>V3</code> has 12 entries but is only allowed 4</li>
</ul>
<p>I now need to find for each <code>ID1</code> the <code>ID2</code> entry with the lowest <code>Delta</code> but the combinations must be unique.</p>
<p>That means because in line <code>4</code> <code>ID1 8</code> is paired with <code>ID2 18</code> in line <code>6</code> <code>ID1 7</code> must not be paired with <code>ID2 18</code>.</p>
<p>The result I want is essentially this:</p>
<pre><code> V ID1 ID2 Delta Max
0 V1 2 11 01:00:00 2
1 V1 3 12 12:00:00 2
3 V2 5 13 02:00:00 1
4 V3 8 18 01:00:00 4
5 V3 6 14 02:00:00 4
8 V3 7 17 03:00:00 4
11 V3 9 16 16:00:00 4
</code></pre>
<p>And I can't wrap my head around how to achieve this.</p>
<p>Simple approaches like</p>
<pre><code>df1 = df.drop_duplicates('ID1')
df2 = df.drop_duplicates('ID2')
result = pd.merge(df1,df2)
</code></pre>
<p>obviously don't work out properly.</p>
<p>Is it even possible to solve this without iterating over the sorted rows and building a memory of already occupied <code>ID2</code>-values?</p>
|
<p>Answering my own question with the <code>iterrows()</code> approach:</p>
<p>After the line</p>
<pre><code>df = df.sort_values(['V','Delta']).reset_index(drop=True)
</code></pre>
<p>this solves the problem:</p>
<pre><code>df["Keep"] = False
old_V = ''
for i,row in df.iterrows():
if row.V != old_V:
old_V = row.V
ID1_list = []
ID2_list = []
if row.ID1 not in ID1_list and row.ID2 not in ID2_list:
df.iloc[i,5] = True
ID1_list.append(row.ID1)
ID2_list.append(row.ID2)
df = df[df.Keep].drop("Keep",axis=1)
</code></pre>
|
python|pandas|dataframe|merge
| 1 |
1,901,749 | 40,984,516 |
Most efficient way of accessing non-zero values in row/column in scipy.sparse matrix
|
<p>What is the fastest or, failing that, least wordy way of accessing all non-zero values in a row <code>row</code> or column <code>col</code> of a <code>scipy.sparse</code> matrix <code>A</code> in <code>CSR</code> format?</p>
<p>Would doing it in another format (say, <code>COO</code>) be more efficient?</p>
<p>Right now, I use the following:</p>
<pre><code>A[row, A[row, :].nonzero()[1]]
</code></pre>
<p>or</p>
<pre><code>A[A[:, col].nonzero()[0], col]
</code></pre>
|
<p>For a problem like this it pays to understand the underlying data structures of the different formats:</p>
<pre><code>In [672]: A=sparse.csr_matrix(np.arange(24).reshape(4,6))
In [673]: A.data
Out[673]:
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23], dtype=int32)
In [674]: A.indices
Out[674]: array([1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5], dtype=int32)
In [675]: A.indptr
Out[675]: array([ 0, 5, 11, 17, 23], dtype=int32)
</code></pre>
<p>The <code>data</code> values for a row are a slice within <code>A.data</code>, but identifying that slice requires some knowledge of the <code>A.indptr</code> (see below)</p>
<p>For the <code>coo</code> format:</p>
<pre><code>In [676]: Ac=A.tocoo()
In [677]: Ac.data
Out[677]:
array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23], dtype=int32)
In [678]: Ac.row
Out[678]: array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3], dtype=int32)
In [679]: Ac.col
Out[679]: array([1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4, 5], dtype=int32)
</code></pre>
<p>Note that <code>A.nonzeros()</code> converts to <code>coo</code> and returns the <code>row</code> and <code>col</code> attributes (more or less - look at its code).</p>
<p>For the <code>lil</code> format, data is stored by row in lists:</p>
<pre><code>In [680]: Al=A.tolil()
In [681]: Al.data
Out[681]:
array([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17],
[18, 19, 20, 21, 22, 23]], dtype=object)
In [682]: Al.rows
Out[682]:
array([[1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5],
[0, 1, 2, 3, 4, 5]], dtype=object)
</code></pre>
<p>===============</p>
<p>Selecting a row of <code>A</code> works, though in my experience that tends to be a bit slow, in part because it has to create a new <code>csr</code> matrix. Also your expression seems wordier than needed.</p>
<p>Looking at my first row which has a 0 element (the others are too dense):</p>
<pre><code>In [691]: A[0, A[0,:].nonzero()[1]].A
Out[691]: array([[1, 2, 3, 4, 5]], dtype=int32)
</code></pre>
<p>The whole row, expressed as a dense array is:</p>
<pre><code>In [692]: A[0,:].A
Out[692]: array([[0, 1, 2, 3, 4, 5]], dtype=int32)
</code></pre>
<p>but the <code>data</code> attribute of that row is the same as your selection</p>
<pre><code>In [693]: A[0,:].data
Out[693]: array([1, 2, 3, 4, 5], dtype=int32)
</code></pre>
<p>and with the <code>lil</code> format</p>
<pre><code>In [694]: Al.data[0]
Out[694]: [1, 2, 3, 4, 5]
</code></pre>
<p><code>A[0,:].tocoo()</code> doesn't add anything.</p>
<p>Direct access to attributes of a <code>csr</code> and <code>lil</code> isn't that good when picking columns. For that <code>csc</code> is better, or <code>lil</code> of the transpose.</p>
<p>Direct access to the <code>csr</code> <code>data</code>, with the aid of <code>indptr</code>, would be:</p>
<pre><code>In [697]: i=0; A.data[A.indptr[i]:A.indptr[i+1]]
Out[697]: array([1, 2, 3, 4, 5], dtype=int32)
</code></pre>
<p>Calculations using the <code>csr</code> format routinely iterate through <code>indptr</code> like this, getting the values of each row - but they do this in compiled code.</p>
<p>A recent related topic, seeking the product of nonzero elements by row:
<a href="https://stackoverflow.com/questions/40918414/multiplying-column-elements-of-sparse-matrix">Multiplying column elements of sparse Matrix</a></p>
<p>There I found the <code>reduceat</code> using <code>indptr</code> was quite fast.</p>
<p>Another tool when dealing with sparse matrices is multiplication</p>
<pre><code>In [708]: (sparse.csr_matrix(np.array([1,0,0,0])[None,:])*A)
Out[708]:
<1x6 sparse matrix of type '<class 'numpy.int32'>'
with 5 stored elements in Compressed Sparse Row format>
</code></pre>
<p><code>csr</code> actually does <code>sum</code> with this kind of multiplication. And if my memory is correct, it actually performs <code>A[0,:]</code> this way </p>
<p><a href="https://stackoverflow.com/questions/39500649/sparse-matrix-slicing-using-list-of-int">Sparse matrix slicing using list of int</a></p>
|
python|scipy|sparse-matrix
| 11 |
1,901,750 | 38,162,852 |
Limiting the amount of times a User can make a comment
|
<p>Say I have django models like so </p>
<pre><code>class Comment(models.Model):
commentDescription = models.CharField(max_length=60)
commentOwner = models.ForeignKey(User1,null=True)
post = models.ForeignKey(Post, null=True)
class Post(models.Model):
postDescription = models.CharField(max_length=200)
postOwner = models.ForeignKey(User1,null=True)
class User1(models.Model):
user = models.OneToOneField(User)
def __unicode__(self):
return self.user.username
</code></pre>
<p>Now when someone creates a <code>Post</code> I have to limit the amount of <code>Comment</code>'s made by a specific <code>User1</code> so that the <code>Post</code> can have a lot of <code>Comment</code>'s but only allow three <code>Comment</code>'s per <code>User1</code> on the <code>Post</code></p>
<p>I am thinking of creating some kind of method that checks how many comments a user has already made on a specific post but I have no idea where to start with the complexity of the foreign keys.</p>
|
<p>Edit: Whoops, misunderstood your question. Thought you wanted to limit commenting time-based, not with an absolute count :).</p>
<p>I would add a <a href="https://docs.djangoproject.com/en/dev/ref/forms/fields/#datetimefield" rel="nofollow">DateTimeField</a> to the <code>Comment</code> model and then check in the <code>views.py</code> if he is allowed to create a new comment. (Or if you have a <code>CommentForm</code> you could check it there with a custom <a href="https://docs.djangoproject.com/en/1.9/ref/forms/api/#django.forms.Form.is_valid" rel="nofollow">is_valid</a>)</p>
<p>In pseudocode-like it would be something like this to check if a comment is allowed again.</p>
<pre><code>if (comment_date < timezone.now() - datetime.timedelta(minutes=min_comment_time)):
    # allow comment
</code></pre>
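<p>For the count-based limit the question actually asks about (at most three comments per <code>User1</code> per <code>Post</code>), a minimal sketch of the check could look like this, assuming you already have the <code>post</code> and <code>user1</code> objects and the comment text in a variable <code>text</code> (a name I made up):</p>
<pre><code># count this user's existing comments on this post
existing = Comment.objects.filter(post=post, commentOwner=user1).count()
if existing < 3:
    Comment.objects.create(post=post, commentOwner=user1,
                           commentDescription=text)
else:
    ...  # reject: the user already has three comments on this post
</code></pre>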
|
python|django
| 0 |
1,901,751 | 51,809,979 |
Analysis of Eye-Tracking data in python (Eye-link)
|
<p>I have data from eye-tracking (.edf file - from Eyelink by SR-research). I want to analyse it and get various measures such as fixation, saccade, duration, etc.
Is there an existing package to analyse Eye-Tracking data?
Thanks!</p>
|
<p>At least for importing the .edf-file into a pandas DF, you can use the following package by Niklas Wilming: <a href="https://github.com/nwilming/pyedfread/tree/master/pyedfread" rel="nofollow noreferrer">https://github.com/nwilming/pyedfread/tree/master/pyedfread</a> <br/>
This should already take care of saccades and fixations - have a look at the readme. Once they're in the data frame, you can apply whatever analysis you want to it.</p>
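<p>If I recall the README correctly, the entry point looks roughly like this; treat it as a sketch and check the repository for the exact signature:</p>
<pre><code>from pyedfread import edf

# returns pandas DataFrames: raw samples, fixation/saccade events, and messages
samples, events, messages = edf.pread('recording.edf')
</code></pre>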
|
python-3.x|psychopy|neuroscience|eye-tracking
| 3 |
1,901,752 | 39,080,288 |
How to send a "method" via socket
|
<p>I'm trying to send a packet made with the scapy library via sockets in Python 3.</p>
<p>That's the code:</p>
<pre><code>from scapy.all import *
import socket, threading
def loop():
global threads
for x in range(800):
sending().start()
class sending(threading.Thread):
def run(self):
self.connstart()
def connstart(self):
host = "ip" # this could be a proxy for example
port = port # the port of proxy
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((host, port))
s.send(self.spoofing)
def spoofing(self):
A = "ip" # spoofed source IP address
B = "ip" # destination IP address
C = RandShort() # source port
D = port # destination port
payload = "yada yada yada" # packet payload
spoofed_packet = IP(src=A, dst=B) / TCP(sport=C, dport=D) / payload
return spoofed_packet
loop()
</code></pre>
<p>Obviusly the script raises an error:</p>
<pre><code>Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "spoof.py", line 12, in run
self.connstart()
File "spoof.py", line 19, in connstart
s.send(self.spoofing)
TypeError: a bytes-like object is required, not 'method'
</code></pre>
<p>For you is there a way to bypass this? So to send this packet unchanged?</p>
<p>What I wanna do is to connect to a proxy, and then send to proxy a tcp packet that contains the spoofed source ip, and the destination (different from proxy, it will be another site/server)</p>
|
<p>The particular error you are seeing is because you aren't calling the <code>spoofing</code> method. Adding parentheses calls it; note that you then also need to serialise the resulting scapy packet to raw bytes, since <code>socket.send()</code> wants a bytes-like object, not a scapy <code>Packet</code>:</p>
<pre><code>s.send(bytes(self.spoofing()))
</code></pre>
<p>However you have rather more serious issues. <code>socket.socket(socket.AF_INET, socket.SOCK_STREAM)</code> returns a TCP socket, and the system will always insert the (correct) source and destination ports and addresses, as set by the <code>connect()</code> call.</p>
<p>If you want to do IP address spoofing you are going to have to find out how to use raw sockets and pass them directly to the data link driver - see <a href="https://gist.github.com/pklaus/856268" rel="nofollow">this example</a> for a few clues as to how to proceed (and pray you aren't working on Windows, which does everything in its power to prevent raw socket access).</p>
|
python|sockets|python-3.x|methods|byte
| 1 |
1,901,753 | 43,193,969 |
How to get an attribute value using BeautifulSoup and Python?
|
<p>I'm failing miserably to get an attribute value using BeautifulSoup and Python. Here is how the XML is structured:</p>
<pre><code>...
</total>
<tag>
<stat fail="0" pass="1">TR=111111 Sandbox=3000613</stat>
<stat fail="0" pass="1">TR=121212 Sandbox=3000618</stat>
...
<stat fail="0" pass="1">TR=999999 Sandbox=3000617</stat>
</tag>
<suite>
...
</code></pre>
<p>What I'm trying to get is the <code>pass</code> value, but for the life of me I just can't understand how to do it. I checked the <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/#attributes" rel="noreferrer">BeautifulSoup documentation</a> and it seems that I should be using something like <code>stat['pass']</code>, but that doesn't seem to work.</p>
<p>Here's my code:</p>
<pre><code>with open('../results/output.xml') as raw_resuls:
results = soup(raw_resuls, 'lxml')
for stat in results.find_all('tag'):
print stat['pass']
</code></pre>
<p>If I do <code>results.stat['pass']</code> it returns a value that is within another tag, way up in the XML blob.</p>
<p>If I print the <code>stat</code> variable I get the following:</p>
<pre><code><stat fail="0" pass="1">TR=787878 Sandbox=3000614</stat>
...
<stat fail="0" pass="1">TR=888888 Sandbox=3000610</stat>
</code></pre>
<p>Which seems to be ok.</p>
<p>I'm pretty sure that I'm missing something or doing something wrong. Where should I be looking at? Am I taking the wrong approach?</p>
<p>Any advice or guidance will be greatly appreciated! Thanks</p>
|
<p>Please consider this approach:</p>
<pre><code>from bs4 import BeautifulSoup
with open('test.xml') as raw_resuls:
results = BeautifulSoup(raw_resuls, 'lxml')
for element in results.find_all("tag"):
for stat in element.find_all("stat"):
print(stat['pass'])
</code></pre>
<p>The problem with your solution is that <em>pass</em> is contained in <em>stat</em> and not in <em>tag</em>, where you search for it.</p>
<p>This solution searches for all <em>tag</em> and in these <em>tag</em> it searches for <em>stat</em>. From these results it gets <em>pass</em>.</p>
<p>For the XML file</p>
<pre><code><tag>
<stat fail="0" pass="1">TR=111111 Sandbox=3000613</stat>
<stat fail="0" pass="1">TR=121212 Sandbox=3000618</stat>
<stat fail="0" pass="1">TR=999999 Sandbox=3000617</stat>
</tag>
</code></pre>
<p>the script above gets the output</p>
<pre><code>1
1
1
</code></pre>
<p><strong>Addition</strong></p>
<p>Since some details still seemed to be unclear (see comments), consider this complete example using <code>BeautifulSoup</code> to get everything you want. This solution, using dictionaries as list elements, might not be ideal if you face performance issues. But since you seem to have some trouble with Python and Soup, I kept this example as simple as possible by making all relevant information accessible by name rather than by index.</p>
<pre><code>from bs4 import BeautifulSoup
# Parses a string of form 'TR=abc123 Sandbox=abc123' and stores it in a dictionary with the following
# structure: {'TR': abc123, 'Sandbox': abc123}. Returns this dictionary.
def parseTestID(testid):
dict = {'TR': testid.split(" ")[0].split("=")[1], 'Sandbox': testid.split(" ")[1].split("=")[1]}
return dict
# Parses the XML content of 'rawdata' and stores pass value, TR-ID and Sandbox-ID in a dictionary of the
# following form: {'Pass': pasvalue, TR': TR-ID, 'Sandbox': Sandbox-ID}. This dictionary is appended to
# a list that is returned.
def getTestState(rawdata):
# initialize parser
soup = BeautifulSoup(rawdata,'lxml')
parsedData= []
# parse for tags
for tag in soup.find_all("tag"):
# parse tags for stat
for stat in tag.find_all("stat"):
# store everthing in a dictionary
dict = {'Pass': stat['pass'], 'TR': parseTestID(stat.string)['TR'], 'Sandbox': parseTestID(stat.string)['Sandbox']}
# append dictionary to list
parsedData.append(dict)
# return list
return parsedData
</code></pre>
<p>You can use the script above as follows to do whatever you want (e.g. just print out)</p>
<pre><code># open file
with open('test.xml') as raw_resuls:
# get list of parsed data
data = getTestState(raw_resuls)
# print parsed data
for element in data:
print("TR = {0}\tSandbox = {1}\tPass = {2}".format(element['TR'],element['Sandbox'],element['Pass']))
</code></pre>
<p>The output looks like this</p>
<pre><code>TR = 111111 Sandbox = 3000613 Pass = 1
TR = 121212 Sandbox = 3000618 Pass = 1
TR = 222222 Sandbox = 3000612 Pass = 1
TR = 232323 Sandbox = 3000618 Pass = 1
TR = 333333 Sandbox = 3000605 Pass = 1
TR = 343434 Sandbox = ZZZZZZ Pass = 1
TR = 444444 Sandbox = 3000604 Pass = 1
TR = 454545 Sandbox = 3000608 Pass = 1
TR = 545454 Sandbox = XXXXXX Pass = 1
TR = 555555 Sandbox = 3000617 Pass = 1
TR = 565656 Sandbox = 3000615 Pass = 1
TR = 626262 Sandbox = 3000602 Pass = 1
TR = 666666 Sandbox = 3000616 Pass = 1
TR = 676767 Sandbox = 3000599 Pass = 1
TR = 737373 Sandbox = 3000603 Pass = 1
TR = 777777 Sandbox = 3000611 Pass = 1
TR = 787878 Sandbox = 3000614 Pass = 1
TR = 828282 Sandbox = 3000600 Pass = 1
TR = 888888 Sandbox = 3000610 Pass = 1
TR = 999999 Sandbox = 3000617 Pass = 1
</code></pre>
<p>Let's summarize the core elements that are used:</p>
<p><strong>Finding XML tags</strong>
To find XML tags you use <code>soup.find("tag")</code> which returns the first matched tag or <code>soup.find_all("tag")</code> which finds all matching tags and stores them in a list. The single tags can easily be accessed by iterating over the list.</p>
<p><strong>Finding nested tags</strong>
To find nested tags you can use <code>find()</code> or <code>find_all()</code> again by applying it to the result of the first <code>find_all()</code>.</p>
<p><strong>Accessing the content of a tag</strong>
To access the content of a tag you apply <code>string</code> to a single tag. For example if <code>tag = <tag>I love Soup!</tag></code> <code>tag.string = "I love Soup!"</code>.</p>
<p><strong>Finding values of attributes</strong>
To get the values of attributes you can use the subscript notation. For example if <code>tag = <tag color=red>I love Soup!</tag></code> <code>tag['color']="red"</code>.</p>
<p>For parsing strings of form <code>"TR=abc123 Sandbox=abc123"</code> I used common Python string splitting. You can read more about it here: <a href="https://stackoverflow.com/questions/5749195/how-can-i-split-and-parse-a-string-in-python">How can I split and parse a string in Python?</a></p>
|
python|beautifulsoup
| 16 |
1,901,754 | 43,262,401 |
How to update a varchar field only when it ends with (.%) in mysql with python
|
<p>I want to loop and update all the fields that have entries like (abc.py) changed to (abc), or (def.exe) changed to (def), where id = 2.</p>
<pre><code>UPDATE column from table test
when column=abc.exe THEN column = abc
where id = 2;
</code></pre>
|
<p>This can be done with the <a href="https://dev.mysql.com/doc/refman/5.7/en/string-functions.html" rel="nofollow noreferrer">MySQL SUBSTRING_INDEX function</a>; this part isn't really <code>python</code>-specific.</p>
<pre><code>UPDATE table_name
SET column = SUBSTRING_INDEX(column, '.', 1)
WHERE id = 2;
</code></pre>
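<p>Note that <code>SUBSTRING_INDEX(col, '.', 1)</code> returns everything before the first dot, and returns the whole string unchanged when there is no dot, so rows without an extension are left as they are:</p>
<pre><code>SELECT SUBSTRING_INDEX('abc.exe', '.', 1);  -- 'abc'
SELECT SUBSTRING_INDEX('abc', '.', 1);      -- 'abc' (unchanged)
</code></pre>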
|
python|mysql
| 0 |
1,901,755 | 64,294,001 |
openSuse 15.2 no python38 pip
|
<p>I upgraded my openSUSE 15.1 to 15.2, which installed Python 3.6.x, but there was no pip. So I tried installing Python 3.8.5 from openSUSE; again, no pip.
<code>which pip</code> reports nothing. If I type <code>python3</code> I get Python 3.8.5.</p>
<p>How can I fix this issue. I need pip.</p>
|
<p>You need the python3-pip package:</p>
<p><a href="https://software.opensuse.org/package/python3-pip" rel="nofollow noreferrer">https://software.opensuse.org/package/python3-pip</a></p>
|
python|pip|opensuse
| 0 |
1,901,756 | 69,827,238 |
transpose data and make first column the header
|
<p>I'm trying to transpose the data that I obtained and make the column with Samples the header.
What I'm starting with:</p>
<pre><code>|Sample 1 | 0,4704 |
----------
|Sample 2 | 0,1501 |
----------
|Sample 3 | 0,4388 |
----------
|Sample 4 | 0,2957|
---------
</code></pre>
<p>What I'd like to have</p>
<pre><code>sample 1 |sample 2 |sample 3 |sample 4
----------
0,4704 | 0,1501 | 0,4388 | 0,2957
----------
</code></pre>
<p>What I really have</p>
<pre><code>1 |2 |3 |4
----------
sample 1| sample 2 |sample 3 |sample 4
----------
0,4704 | 0,1501 |0,4388 | 0,2957
</code></pre>
<p>What I want is to remove those numbers 1,2,3,4 and make sample 1, sample 2... as a header</p>
<p>here is also a link to the screenshot that I made [1]: <a href="https://i.stack.imgur.com/G526C.png" rel="nofollow noreferrer">https://i.stack.imgur.com/G526C.png</a></p>
<p>What I tried so far is</p>
<pre><code>df2=pd.read_csv('3.dat',sep='\t', header=None)
df2
df2=pd.read_csv('3.dat',sep='\t', header=None)
df2_transponed=df2.T
df2_transponed.to_csv('3_0T.dat', index=[1], header=None, sep='\t')
df2_transponed
</code></pre>
<p>Any suggestions would be helpful, thank you</p>
|
<p>Try setting column <code>0</code> as the index first:</p>
<pre><code>df2.set_index(0).T
</code></pre>
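<p>A full round-trip based on the code in the question might then look like this; writing with <code>index=False</code> drops the leftover numeric row label, so the sample names become the header:</p>
<pre><code>df2 = pd.read_csv('3.dat', sep='\t', header=None)
df2_transposed = df2.set_index(0).T
df2_transposed.to_csv('3_0T.dat', sep='\t', index=False)
</code></pre>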
|
python|pandas|transpose
| 0 |
1,901,757 | 69,668,677 |
Untraceable output appearing on console
|
<p>I have a function that allows the user to perform sums. It can be called with up to 4 arguments depending on type, but for some reason it seems to be putting output on the screen that I can't trace.</p>
<p>The program is as follows</p>
<pre><code>from typing import Union, Callable
def Sum(*args: Union[int, float, Callable[[Union[int, float]], Union[int, float]]]) -> Union[int, float]:
print("Sum called with args", args)
start, stop, step, func = __SumProdSetup("Sum", *args)
if len(args) == 4 and all([isinstance(arg, float) for arg in args[1:]]) and callable(args[0]):
        pass  # SUMMATION CODE
else:
print("Called", args, "space")
def __SumProdSetup(name: str, *args: Union[int, float, Callable[[Union[int, float]], Union[int, float]]]) -> tuple[Union[int, float], Union[int, float], Union[int, float], Callable[[Union[int, float]], Union[int, float]]]:
print("Setup called with args", args)
raiserr = False
# Add different args combinations
    raiserr = True  # added here for testing purposes
if raiserr:
raise ValueError(f"""((ADD USAGES))""")
else:
try:
return start, stop, step, func
except NameError as err:
err.args += f"Usage: {', '.join([str(arg) for arg in args])}",
raise err
Sum()
</code></pre>
<p>As expected, this gives a ValueError, as no arguments is an invalid usage.</p>
<p>However, I'm also getting output that can't be traced. Nowhere in the code should it print 'Called'. It should (with a different usage) print 'Called' then the args, then 'space'.
It should also never print 'None'</p>
<p>but yet, this is the full output: (Sum called and Setup called are expected)</p>
<pre><code>Called
None
Sum called with args ()
Setup called with args ()
Traceback (most recent call last):
File "C:\Users\maxcu\OneDrive\JetbrainsProjects\python\Games\engine\__init__.py", line 173, in <module>
Sum()
File "C:\Users\maxcu\OneDrive\JetbrainsProjects\python\Games\engine\__init__.py", line 79, in Sum
start, stop, step, func = __SumProdSetup("Sum", *args)
File "C:\Users\maxcu\OneDrive\JetbrainsProjects\python\Games\engine\__init__.py", line 154, in __SumProdSetup
raise TypeError(f"""
TypeError:
Available usages:
Sum(int)
Sum(int, int)
Sum(float, float)
Sum(float, float, float)
Sum(Callable, int)
Sum(Callable, int, int)
Sum(Callable, float, float)
Sum(Callable, float, float, float)
</code></pre>
<p>The 'Called' and 'None' outputs are unexpected and using a search on the code yields only three results for print; none of these should output 'None' or 'Called' on its own</p>
|
<p>Found the solution! One of my imports was a custom module (not included in the snippet, as it only exists on my machine and isn't used in the function causing the issue), and that module had a top-level line that printed the 'Called' and 'None' content.</p>
<p>TL;DR A custom module was accidentally causing the problem</p>
<p>This question can be closed now</p>
|
python-3.x
| 0 |
1,901,758 | 72,848,885 |
Compare variables through a range
|
<p>I've 2 DataFrames where one gives me 3 Dates and a slope, and the other one gives me a calendar with prices.</p>
<pre><code>slopes = {'slope': [-1.6541925161, -3.118229956, -1.607413448],
'Date1': ['2021-12-16', '2021-12-29', '2021-12-13'],
'Date2': ['2021-12-23', '2021-12-31', '2021-12-17'],
'Date3': ['2021-12-28', '2022-01-03', '2021-12-27'],
}
slope_and_dates = pd.DataFrame(slopes)
print(slope_and_dates)
Historical_prices = {
'Date': ['10/12/2021', '13/12/2021', '14/12/2021', '15/12/2021', '16/12/2021', '17/12/2021', '20/12/2021', '21/12/2021', '22/12/2021', '23/12/2021', '27/12/2021', '28/12/2021', '29/12/2021', '30/12/2021', '31/12/2021', '3/1/2022'],
'High': [1020.97998, 1005, 966.409973, 978.75, 994.97998, 960.659973, 921.690002, 939.5, 1015.659973, 1072.97998, 1117, 1119, 1104, 1095.550049, 1082, 1079],
}
Dates = pd.DataFrame(Historical_prices)
print(Dates)
</code></pre>
<p>I want to know if any High price in Dates breaks the line between Date1 and Date3 of the slope_and_dates DataFrame.</p>
<p>I expect an extra column in slope_and_dates called Break that records whether the line has been broken at any Date.</p>
<p>My pseudo code...</p>
<pre><code>#My pseudo-code:
# Need to fix this for start and end date (Date1 and Date3).
Dates['proj_price'] = Dates.High + slope_and_dates.slope
#If all projected prices are above the line, then return FALSE. If a projected price is lower than the High - 0.004%, then return TRUE.
slope_and_dates['Break'] = np.where(Dates['proj_price'] < (Dates.High * 0.996), True, False)
#print(slope_and_dates)
</code></pre>
<p>Obviously I'm failing to loop over the Dates rows for each row of slopes.</p>
<p>If this can help, I did a simulation in excel of the output (without tolerance).</p>
<p>The first row is yellow, along with the rows it is compared to.
The second row is orange, along with the rows it is compared to.
The blue data are my intermediate steps, and you can see the Break column in the slopes table.</p>
<p>Hope this helps.</p>
<p><a href="https://i.stack.imgur.com/Ebp80.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ebp80.png" alt="Simulation" /></a></p>
|
<p>I'm not quite sure whether you wanted a solution without a loop; I could only get it with one. Here it is:</p>
<pre><code>Dates['Date'] = pd.to_datetime(Dates['Date'], format='%d/%m/%Y')
slope_and_dates[['Date1', 'Date2', 'Date3']] = slope_and_dates[['Date1', 'Date2', 'Date3']].apply(pd.to_datetime)
Dates = Dates.set_index('Date')
slope_and_dates['Break'] = np.nan
for i in range(len(slope_and_dates)):
    # line value on Date1: that day's High plus one step of the slope
    start_val = Dates.loc[slope_and_dates.loc[i, 'Date1']] + slope_and_dates.loc[i, 'slope']
    line_vals = [start_val.values[0]]
    # trading days strictly after Date1, up to and including Date3
    window = Dates[(Dates.index > slope_and_dates.loc[i, 'Date1']) & (Dates.index <= slope_and_dates.loc[i, 'Date3'])]
    # extend the line by one slope step per remaining day in the window
    for _ in range(1, len(window)):
        line_vals.append(line_vals[-1] + slope_and_dates.loc[i, 'slope'])
    line_vals = np.array(line_vals, float)
    # the line is broken if any High in the window rises above it
    slope_and_dates.loc[i, 'Break'] = (line_vals < window.values[:, 0]).any()

print(slope_and_dates)
</code></pre>
|
python|pandas|loops
| 1 |
1,901,759 | 55,722,158 |
My python script can't see the attachments attached to an issue with jira-python
|
<p>I've made a python script that creates Jira issues and attach a file to them.</p>
<p>This part works fine and I can see the attachment directly in Jira. I can also see the attachment in my Python script if I reference the attachment ID directly (which I found on the Jira page of my issue) with this bit of code:</p>
<pre><code>att = jira.attachment(116328)
print att
</code></pre>
<p>But if I open the issue in my Python script, I won't see any attachment attached to it : </p>
<pre><code>issue = jira.issue('ARR-10')
print issue.fields.attachment
</code></pre>
<p>This returns: <code>AttributeError: type object 'PropertyHolder' has no attribute 'attachment'</code></p>
<p>I was using the version 1.0.10 and upgraded to 2.0.0, but it didn't make a difference. </p>
<p>I'm sure that the issue I'm looking up has an attachment, I just can't understand why there are no attachment attributes. I've checked the "Questions that may already have your answer" but none of them are helping.</p>
<p>Thanks !</p>
|
<p>The Jira REST API doesn't return fields that are hidden in the field configuration. Verify that the <code>Attachments</code> field is not hidden in the field configuration.</p>
<p>Source: <a href="https://confluence.atlassian.com/jirakb/fetching-an-issue-via-api-does-not-return-all-fields-278694956.html" rel="nofollow noreferrer">Fetching an Issue Via API Does Not Return All fields</a></p>
|
python-jira
| 1 |
1,901,760 | 66,381,199 |
Interactive Login to Azure from Jupyter to fetch Secret from key vault
|
<p>I would like to fetch a secret from an Azure key vault to use it for an API call. I would like to do this from Jupyter (Python).</p>
<p>I could log in to the Azure Portal, get the secret manually and store it locally, but I think that is bad... so I would like to "pop up" a login dialog from the Jupyter notebook to enter my credentials, and fetch the secret with the API.
Any hints on how to do this?</p>
<p>Thanks!</p>
|
<p>Got it working now...not sure if it's the best way, but it works.</p>
<pre><code># install the package first: pip install az.cli
from az.cli import az

az("login")  # interactive login is triggered

exit_code, result_dict, logs = az("keyvault secret show --name SECRETNAME --vault-name KEYVAULTNAME")
secret = result_dict["value"]
# call my API with the secret
</code></pre>
<p>No error handling in above code ;-)</p>
|
python|azure|jupyter-notebook|azure-keyvault
| 0 |
1,901,761 | 66,588,041 |
Read a specific value from a json file with Python
|
<p>I am getting a JSON file from a curl request and I want to read a specific value from it.
Suppose that I have a JSON file, like the following one. How can I insert the "result_count" value into a variable?</p>
<p><img src="https://i.stack.imgur.com/JKX7y.png" alt="An example of a JSON file" /></p>
<p>Currently, after getting the response from curl, I am writing the JSON objects into a txt file like this.</p>
<pre><code>json_response = connect_to_endpoint(url, headers)
f.write(json.dumps(json_response, indent=4, sort_keys=True))
</code></pre>
|
<p>Your <code>json_response</code> isn't JSON content (JSON is a formatted string) but a Python <code>dict</code>, so you can access it using its keys:</p>
<pre><code>res_count = json_response['meta']['result_count']
</code></pre>
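<p>If the keys might be missing (for example when the API returns an error payload), a small sketch using <code>dict.get</code> avoids a <code>KeyError</code>; the fallback values here are just placeholders:</p>
<pre><code>json_response = connect_to_endpoint(url, headers)  # returns a Python dict
# fall back to an empty dict / 0 if a key is absent
res_count = json_response.get('meta', {}).get('result_count', 0)
</code></pre>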
|
python|json|curl
| 1 |
1,901,762 | 53,097,804 |
Parse postgresql -pycparser.plyparser.ParseError before: pgwin32_signal_event
|
<p>I need to parse the open-source project PostgreSQL using pycparser.</p>
<p>While parsing its source-code the following error arises:</p>
<pre><code>Traceback (most recent call last):
File "examples\using_cpp_libc.py", line 48, in <module>
getAllFiles(projectName)
File "examples\using_cpp_libc.py", line 29, in getAllFiles
ast = parse_file(dirName+'\\'+fname, use_cpp = True, cpp_path = 'cpp',
cpp_args = [r'-nostdinc',r'-Iutils/fake_libc_include',r'-
Iprojects/postgresql/src/include'])
File "G:\python\pycparser-master\pycparser\__init__.py", line 92, in
parse_file
return parser.parse(text, filename)
File "G:\python\pycparser-master\pycparser\c_parser.py", line 152, in parse
debug=debuglevel)
File "G:\python\pycparser-master\pycparser\ply\yacc.py", line 334, in parse
return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
File "G:\python\pycparser-master\pycparser\ply\yacc.py", line 1204, in
parseopt_notrack
tok = call_errorfunc(self.errorfunc, errtoken, self)
File "G:\python\pycparser-master\pycparser\ply\yacc.py", line 193, in
call_errorfunc
r = errorfunc(token)
File "G:\python\pycparser-master\pycparser\c_parser.py", line 1838, in
p_error
column=self.clex.find_tok_column(p)))
File "G:\python\pycparser-master\pycparser\plyparser.py", line 67, in
_parse_error
raise ParseError("%s: %s" % (coord, msg))
pycparser.plyparser.ParseError:
projects/postgresql/src/include/pg_config_os.h:366:15: before:
pgwin32_signal_event
</code></pre>
<p>I am using postgresql-9.6.9, built with Visual Studio Express 2017 on Windows 10 (64-bit).</p>
|
<p>The blog post you quoted in the comment is the canonical resource. Parsing large C projects is not easy - they have their own quirks - so it takes work. I doubt it's resolvable within the confines of a Stack Overflow question.</p>
<p>You need to start tackling the issues one by one - for example look at the <code>pgwin32_signal_event</code> token in <code>pg_config_os.h</code> - why can't it be parsed? Perhaps its type is unparsable? Was it defined? Could it be added to a "fake" header, etc. Unfortunately, there's no easy way to do this except working through the issues one by one.</p>
<p>Be sure to preprocess the file you're parsing first, dumping the full preprocessed version into a single <code>.c</code> file - this gets all the types into a single file you can work with.</p>
|
python|postgresql-9.6|pycparser
| 0 |
1,901,763 | 71,998,770 |
Creating JSON string using two PySpark columns by GroupBy
|
<p>I have a Spark dataframe as below. I want to create a column 'new_col' which groups by all of the columns except 'Code' and 'Department' and assigns a JSON structure based on the columns 'Code' and 'Department'.</p>
<p>The dataframe needs to be sorted first. Rows 1-3 and 4-5 are duplicates except for the columns Code and Department. So for the first 3 rows, new_col would contain the JSON {"Code": "A", "Department": "Department Store"}, {"Code": "B", "Department": "All Other Suppliers"}, {"Code": "C", "Department": "Rest"}</p>
<p>My input dataframe:</p>
<p><a href="https://i.stack.imgur.com/UObN2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UObN2.png" alt="enter image description here" /></a></p>
<p>Expected output Spark dataframe:</p>
<p><a href="https://i.stack.imgur.com/QlPwI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QlPwI.png" alt="enter image description here" /></a></p>
|
<p>Something like this should do it:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F, Window as W
df = spark.createDataFrame(
[('XYZ', '324 NW', 'VA', 'A', 'Department Store', 'X', 'Y'),
('XYZ', '324 NW', 'VA', 'B', 'All Other Suppliers', 'X', 'Y'),
('XYZ', '324 NW', 'VA', 'C', 'Rest', 'X', 'Y'),
('ABC', '45 N Ave', 'MA', 'C', 'Rest', 'A', 'A'),
('ABC', '45 N Ave', 'MA', 'B', 'All Other Suppliers', 'A', 'A'),
('ZXC', '12 SW Street', 'NY', 'A', 'Department Store', 'B', 'Z')],
['Name', 'Address', 'State', 'Code', 'Department', 'col1', 'col2']
)
cols = [c for c in df.columns if c not in ['Code', 'Department']]  # grouping key
w1 = W.partitionBy(cols).orderBy('Code')          # numbers rows within each group
w2 = W.partitionBy(cols).orderBy(F.desc('Code'))  # accumulates the JSON strings

df = (df
      .withColumn('_rn', F.row_number().over(w1))
      # one JSON string per row, collected across the whole group
      .withColumn('new_col', F.collect_list(F.to_json(F.struct(['Code', 'Department']))).over(w2))
      .withColumn("new_col", F.array_join("new_col", ","))
      .filter('_rn=1')  # keep a single row per group
      .drop('_rn')
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>df.show(truncate=False)
# +----+------------+-----+----+-------------------+----+----+-----------------------------------------------------------------------------------------------------------------------------+
# |Name|Address |State|Code|Department |col1|col2|new_col |
# +----+------------+-----+----+-------------------+----+----+-----------------------------------------------------------------------------------------------------------------------------+
# |ABC |45 N Ave |MA |B |All Other Suppliers|A |A |{"Code":"C","Department":"Rest"},{"Code":"B","Department":"All Other Suppliers"} |
# |XYZ |324 NW |VA |A |Department Store |X |Y |{"Code":"C","Department":"Rest"},{"Code":"B","Department":"All Other Suppliers"},{"Code":"A","Department":"Department Store"}|
# |ZXC |12 SW Street|NY |A |Department Store |B |Z |{"Code":"A","Department":"Department Store"} |
# +----+------------+-----+----+-------------------+----+----+-----------------------------------------------------------------------------------------------------------------------------+
</code></pre>
|
python|json|dataframe|pyspark|group-by
| 2 |
1,901,764 | 68,869,492 |
How to save loaded model in localstorage or IndexedDB
|
<p>I'm using <a href="https://github.com/tensorflow/tfjs-models/tree/master/coco-ssd" rel="nofollow noreferrer">TFJS's sample program</a>. I'd like to save the loaded model in localStorage or IndexedDB, so I wrote this program.</p>
<pre><code><!-- Load TensorFlow.js. This is required to use coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"> </script>
<!-- Load the coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/coco-ssd"> </script>
<!-- Replace this with your image. Make sure CORS settings allow reading the image! -->
<img id="img" src="cat.jpg"/>
<!-- Place your code in the script tag below. You can also use an external .js file -->
<script>
// Notice there is no 'import' statement. 'cocoSsd' and 'tf' is
// available on the index-page because of the script tag above.
const img = document.getElementById('img');
// Load the model.
cocoSsd.load().then(model => {
// save to localstorage. <-- My code
model.save('localstorage://test')
// detect objects in the image.
model.detect(img).then(predictions => {
console.log('Predictions: ', predictions);
});
});
</script>
</code></pre>
<p>However, <code>model.save('localstorage://test')</code> throws the error <code>Uncaught (in promise) TypeError: model.save is not a function</code>.</p>
<p>How can I save the model in localStorage or IndexedDB?</p>
|
<p>Let's check whether you really can save the model by checking the methods that exist on it.</p>
<pre class="lang-js prettyprint-override"><code>function getMethods(o) {
return Object.getOwnPropertyNames(Object.getPrototypeOf(o))
.filter(m => 'function' === typeof o[m])
}
cocoSsd.load().then(model => {
console.log(getMethods(model));
})
</code></pre>
<p>Output:</p>
<pre class="lang-js prettyprint-override"><code>["constructor","getPrefix","load","infer","buildDetectedObjects","calculateMaxScores","detect","dispose"]
</code></pre>
<p>Looking at the methods, the error is in fact correct, that the method <code>save</code> does not exist on a <code>cocoSsd</code> model. So you cannot directly save the model to the local storage or to a database.</p>
<p>No worries, you can just download the original model from a list of all the available object detection models which can be found <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">TensorFlow 2 Detection Model Zoo</a>. All of them are trained on the same coco dataset so you can choose any of them.</p>
<p>For example I chose <a href="http://download.tensorflow.org/models/object_detection/tf2/20210210/centernet_mobilenetv2fpn_512x512_coco17_od.tar.gz" rel="nofollow noreferrer">centernet_mobilenetv2_fpn_od</a>.</p>
<p>Downloading the file and unzipping it will produce the following:</p>
<pre class="lang-bash prettyprint-override"><code>|-- checkpoint
| |-- checkpoint
| |-- ckpt-301.data-00000-of-00001
| `-- ckpt-301.index
|-- label_map.txt
|-- model.tflite
|-- pipeline.config
`-- saved_model
|-- assets
|-- saved_model.pb
`-- variables
|-- variables.data-00000-of-00001
`-- variables.index
</code></pre>
<p>What you want is <code>saved_model.pb</code> which you need to convert to a format that is usable by Tensorflow.js. To do this you can use the <code>tensorflowjs_wizard</code> which you run in the terminal:</p>
<pre class="lang-bash prettyprint-override"><code>>> tensorflowjs_wizard
Welcome to TensorFlow.js Converter.
? Please provide the path of model file or the directory that contains model files.
If you are converting TFHub module please provide the URL. .
? What is your input model format? (auto-detected format is marked with *) Tensorflow Saved Model *
? What is tags for the saved model? serve
? What is signature name of the model? signature name: serving_default
? Do you want to compress the model? (this will decrease the model precision.) No compression (Higher accuracy)
? Please enter shard size (in bytes) of the weight files? 4194304
? Do you want to skip op validation?
This will allow conversion of unsupported ops,
you can implement them as custom ops in tfjs-converter. No
? Do you want to strip debug ops?
This will improve model execution performance. Yes
? Do you want to enable Control Flow V2 ops?
This will improve branch and loop execution performance. Yes
? Do you want to provide metadata?
Provide your own metadata in the form:
metadata_key:path/metadata.json
Separate multiple metadata by comma.
? Which directory do you want to save the converted model in? /Users/yravindranath/Downloads/centernet_mobilenetv2_fpn_od/saved_model/model
converter command generated:
tensorflowjs_converter --control_flow_v2=True --input_format=tf_saved_model --metadata= --saved_model_tags=serve --signature_name=serving_default --strip_debug_ops=True --weight_shard_size_bytes=4194304 . /Users/yravindranath/Downloads/centernet_mobilenetv2_fpn_od/saved_model/model
</code></pre>
<p>This will produce the following in the specified save path:</p>
<pre class="lang-bash prettyprint-override"><code>./
|-- group1-shard1of3.bin
|-- group1-shard2of3.bin
|-- group1-shard3of3.bin
`-- model.json
</code></pre>
<p>Then you can load the converted model in your JavaScript application with a function like this one, pointing it at the local path (or URL) of the <code>model.json</code> file:</p>
<pre class="lang-js prettyprint-override"><code>const tf = require('@tensorflow/tfjs');
const tfnode = require('@tensorflow/tfjs-node');

async function loadModel(){
    const model = await tf.loadGraphModel("path to model.json");
    // You can even save it to a github repo and load it in using a GET request like so:
    // const model = await tf.loadGraphModel("https://raw.githubusercontent.com/hugozanini/TFJS-object-detection/master/models/web_model/model.json");
    console.log("Model loaded")
}

loadModel();
</code></pre>
|
javascript|tensorflow|tensorflow.js
| 0 |
1,901,765 | 68,670,447 |
How can I print a dictionary key while in a for loop in python
|
<p>I have a problem: I have a dictionary and I need to apply a function to each value of the dictionary, and I need my output to include each value's key plus the result of the function.
The program itself should identify open reading frames in given DNA sequences. I have this part working, but I need to print the name of each sequence together with its open reading frames.</p>
<p>I'm new to programming and I got stuck. All help will be greatly appreciated.</p>
<pre><code>#converts a fasta file into a dictionary.
import re
myfile = input("Enter a file name and directory:")
try:
f=open(myfile)
except IOError:
print("File doesn't exist!")
seqs = {}
for line in f:
line=line.rstrip()
    # Gets rid of trailing empty spaces.
if line[0]=='>':
words=line.split()
name=words[0][1:]
seqs[name] = ''
else:
seqs[name] = seqs[name] + line
print("Number of entries:",len(seqs.keys()))
length_seqs = {key:len(seq)for key, seq in seqs.items()}
sorted_length_seqs = sorted(length_seqs.items(), key=lambda kv:kv[1])
print("Entries by length:",sorted_length_seqs)
#finds the ORF in the dictionary sequences.
def find_ORFs(DNA):
ORFs = []
if 'ATG' in DNA:
for startMatch in re.finditer('ATG',DNA):
remaining = DNA[startMatch.start():]
for stopMatch in re.finditer('TAA|TGA|TAG',remaining):
substring = remaining[:stopMatch.end()]
if len(substring) % 3 == 0:
ORFs.append(substring)
break
else:
print("There are no ORFs in your sequence")
ORFs.sort(key=len, reverse=True)
print(ORFs)
for ORF in ORFs:
        print(ORF, 'ORF length', len(ORF))
# passes the function to the dictionary values.
# Here I need to apply the function to each of the values of the dictionary,
# but I can't manage to make it print the dictionary key of each value.
for seq in seqs:
DNA = seqs[seq]
ORF = find_ORFs(DNA)
</code></pre>
|
<p>You could pass the name of the sequence to the function as well:</p>
<pre class="lang-py prettyprint-override"><code>def find_ORFs(name, DNA):
ORFs = []
if 'ATG' in DNA:
for startMatch in re.finditer('ATG',DNA):
remaining = DNA[startMatch.start():]
for stopMatch in re.finditer('TAA|TGA|TAG',remaining):
substring = remaining[:stopMatch.end()]
if len(substring) % 3 == 0:
ORFs.append(substring)
break
else:
print("There are no ORFs in your sequence: {}".format(name))
ORFs.sort(key=len, reverse=True)
print(ORFs)
for ORF in ORFs:
print("{}, {}, {}".format(name, len(ORF), ORF))
for name, DNA in seqs.items():
ORF = find_ORFs(name, DNA)
</code></pre>
<hr />
<p>You should also take a look at some external libraries that can help you with parsing FASTA files, such as <a href="https://biopython.org/wiki/SeqIO" rel="nofollow noreferrer">SeqIO from BioPython</a>.</p>
|
python|dictionary|bioinformatics
| 0 |
1,901,766 | 60,719,587 |
How is it possible that the python program is utilizing multiple cores when running using Cpython?
|
<p>All documentation states that running a Python program using the <code>threading</code> library does not truly enable you to run the program on multiple cores under the CPython interpreter. However, the CPU usage shows that it's utilizing multiple cores. How is that possible?</p>
<p>I did verify that the python interpreter was Cpython using </p>
<pre><code>import platform
platform.python_implementation() # output-> 'Cpython'
</code></pre>
<p>Python version - 3.5.2 <br>
OS - ubuntu</p>
<p>Threading code</p>
<pre><code>import threading
import math
def fizz():
print ("start")
for i in range (1, 100000000):
math.sqrt(i)
print(" exit")
threads = []
n = 4
for _ in range (n):
t = threading.Thread(target=fizz)
threads.append(t)
for t in threads:
t.start()
for t in threads:
t.join()
print ("Done")
</code></pre>
<p>CPU usage before running the program (running <code>top</code>)</p>
<pre><code>top - 09:27:44 up 235 days, 11:41, 8 users, load average: 0.27, 0.23, 0.13
Tasks: 530 total, 1 running, 522 sleeping, 7 stopped, 0 zombie
%Cpu0 : 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 0.0 us, 0.3 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu2 : 0.3 us, 0.3 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 0.3 us, 0.0 sy, 0.0 ni, 99.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
</code></pre>
<p>CPU usage while running the program</p>
<pre><code>top - 09:29:29 up 235 days, 11:43, 8 users, load average: 0.39, 0.24, 0.14
Tasks: 530 total, 1 running, 522 sleeping, 7 stopped, 0 zombie
%Cpu0 : 26.0 us, 0.7 sy, 0.0 ni, 73.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.3 st
%Cpu1 : 24.5 us, 1.0 sy, 0.0 ni, 74.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.3 st
%Cpu2 : 25.0 us, 0.3 sy, 0.0 ni, 74.7 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu3 : 26.1 us, 0.0 sy, 0.0 ni, 73.9 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
</code></pre>
|
<p>I think the documentation means that using <code>threading</code> does not run multiple threads truly at the same time: because of the GIL, only one thread per process can execute Python bytecode at any moment, so you can only ever use about one core's worth of CPU, even if you run 4 or 8 threads. Notice that in your <code>top</code> output each core is only about 25% busy; the totals add up to roughly one full core, because the OS is free to schedule the single running thread on any core and keeps migrating it between them.</p>
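<p>If you actually want to saturate all four cores with CPU-bound work, the usual route in CPython is <code>multiprocessing</code>, since each process gets its own interpreter and its own GIL. A minimal sketch of the same workload:</p>
<pre><code>import math
from multiprocessing import Pool

def fizz(n):
    for i in range(1, n):
        math.sqrt(i)

if __name__ == '__main__':
    # four separate processes, each with its own interpreter and GIL
    with Pool(4) as p:
        p.map(fizz, [100000000] * 4)
</code></pre>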
|
python|python-3.x|python-multiprocessing|python-multithreading|gil
| 0 |
1,901,767 | 64,394,731 |
Why does str.capitalize() not work as I expect?
|
<p>Please, let me know if I'm not providing enough information. The goal of the program is to capitalize the first letter of every sentence.</p>
<pre><code>usr_str = input()
def fix_capitalization(usr_str):
list_of_sentences = usr_str.split(".")
list_of_sentences.pop() #remove last element: ""
new_str = ''
for sentence in list_of_sentences:
new_str += sentence.capitalize() + "."
return new_str
print(fix_capitalization(usr_str))
</code></pre>
<p>For instance, if I input "hi. hello. hey." I expect it to output "Hi. Hello. Hey." but instead, it outputs "Hi. hello. hey."</p>
|
<p>An alternative would be to build a list of strings then concatenate them:</p>
<pre><code>def fix_capitalization(usr_str):
list_of_sentences = usr_str.split(".")
output = []
for sentence in list_of_sentences:
new_sentence = sentence.strip().capitalize()
# If empty, don't bother
if new_sentence:
output.append(new_sentence)
# Finally, join everything
return ". ".join(output) +"."
</code></pre>
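<p>A quick check of the expected behaviour, assuming the same input as in the question:</p>
<pre><code>print(fix_capitalization("hi. hello. hey."))
# Hi. Hello. Hey.
</code></pre>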
|
python
| 1 |
1,901,768 | 64,192,354 |
how to display output in table from file using flask
|
<p>Hi, I need to display my output on a Flask web page in table form. Right now I get output in the format below, where <code>wraith</code> is my PC name, <code>connected / disconnected</code> is the <code>status</code>, and there is an <code>IP</code>. I save the client output in a <code>logs.log</code> file, then read the file and display it on the web page like this:</p>
<p><code>wraith Connected USB from IP: 127.0.0.1</code></p>
<p><code>wraith Disconnected USB from IP: 127.0.0.1</code></p>
<p><code>wraith Connected USB from IP: 127.0.0.1 </code></p>
<p>I need to display this output in table format, like this:</p>
<p><code>CLIENT</code> <code>IP</code> <code>STATUS</code></p>
<p>wraith 127.0.0.1 connected</p>
<p>wraith 127.0.0.1 disconnected</p>
<p>like this example</p>
<p><a href="https://i.stack.imgur.com/7brs6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7brs6.png" alt="enter image description here" /></a></p>
<p>this is my client code</p>
<pre><code>import requests
import subprocess, string, time
import os
url = 'http://127.0.0.1:5000/'
name = os.uname()[1]
def on_device_add():
requests.post(f'{url}/device_add?name={name}')
def on_device_remove():
requests.post(f'{url}/device_remove?name={name}')
def detect_device(previous):
total = subprocess.run('lsblk | grep disk | wc -l', shell=True, stdout=subprocess.PIPE).stdout
time.sleep(3)
# if condition if new device add
if total > previous:
on_device_add()
# if no new device add or remove
elif total == previous:
detect_device(previous)
# if device remove
else:
on_device_remove()
# Infinite loop to keep client running.
while True:
detect_device(subprocess.run(' lsblk | grep disk | wc -l', shell=True , stdout=subprocess.PIPE).stdout)
</code></pre>
<p>this is my flask app</p>
<pre><code>from flask import Flask, request, Response
app = Flask(__name__)
@app.route("/device_add", methods=['POST'])
def device_add():
name = request.args.get('name')
with open('logs.log', 'a') as f:
f.write(f'{name} Connected USB from IP: {request.remote_addr}\n')
return 'ok'
@app.route("/device_remove", methods=['POST'])
def device_remove():
name = request.args.get('name')
with open('logs.log', 'a') as f:
f.write(f'{name} Disconnected USB from IP: {request.remote_addr}\n')
return 'ok'
@app.route("/", methods=['GET'])
def device_list():
with open('logs.log', 'r') as f:
return ''.join(f'<div>{line}</div>' for line in f.readlines())
</code></pre>
|
<p>You need to actually provide some HTML to get what you want. Normally you'd have a separate HTML template inside a "templates" directory and you'd use <code>render_template</code>. In this case, I've built the template as a string inside the python script just to demo, but this is not sustainable in actual projects.</p>
<p>The syntax for looping the data rows and inserting individual points is based on <a href="https://flask.palletsprojects.com/en/1.1.x/templating/" rel="nofollow noreferrer">Jinja2</a></p>
<pre><code>from flask import Flask, request, render_template_string
app = Flask(__name__)
TABLE_TEMPLATE = """
<style>
table, th, td {
border: 1px solid black;
}
</style>
<table style="width: 100%">
<thead>
<th>Client</th>
<th>IP</th>
<th>Status</th>
</thead>
<tbody>
{% for row in data %}
<tr>
<td>{{ row.client }}</td>
<td>{{ row.ip }}</td>
<td>{{ row.status }}</td>
</tr>
{% endfor %}
</tbody>
</table>
"""
@app.route("/device_add", methods=['POST'])
def device_add():
name = request.args.get('name')
with open('logs.log', 'a') as f:
f.write(f'{name} Connected USB from IP: {request.remote_addr}\n')
return 'ok'
@app.route("/device_remove", methods=['POST'])
def device_remove():
name = request.args.get('name')
with open('logs.log', 'a') as f:
f.write(f'{name} Disconnected USB from IP: {request.remote_addr}\n')
return 'ok'
@app.route("/", methods=['GET'])
def device_list():
keys = ['client', 'ip', 'status']
data = []
with open('logs.log', 'r') as f:
for line in f:
row = line.split()
data.append(dict(zip(keys, [row[0], row[-1], row[1]])))
return render_template_string(TABLE_TEMPLATE,
data=data)
</code></pre>
|
python|flask
| 1 |
1,901,769 | 70,194,016 |
Merge new rows from one dataframe to another
|
<p>Say two dataframe:</p>
<pre><code>df1 = pd.DataFrame({'A': ['foo', 'bar', 'test'], 'b': [1, 2, 3], 'c': [3, 4, 5]})
df2 = pd.DataFrame({'A': ['foo', 'baz'], 'c': [2, 1]})
df1
A b c
0 foo 1 3
1 bar 2 4
2 test 3 5
df2
A c
0 foo 2
1 baz 1
</code></pre>
<p>After merging I want:</p>
<pre><code>df1
A b c
0 foo 1 3
1 bar 2 4
2 test 3 5
3 baz NaN 1
</code></pre>
<p>If <code>df1['A']</code> does not contain a value from <code>df2['A']</code>, only those rows from <code>df2</code> need to be added to <code>df1</code>. Ignore the other columns when there is a match in column <code>A</code>.</p>
<p>I tried <code>pd.merge(df1, df2, on=['A'], how='outer')</code>, but this does not give expected output.</p>
<p>Additionally, for future reference, I also want to know how to get the below output:</p>
<pre><code>df1
A b c
0 foo 1 2
1 bar 2 4
2 test 3 5
3 baz NaN 1
</code></pre>
<p>See the updated col <code>c</code> value from <code>df2</code> for <code>foo</code>.</p>
|
<p>Try with <code>combine_first</code></p>
<pre><code>out = df1.set_index('A').combine_first(df2.set_index('A')).reset_index()
A b c
0 bar 2.0 4
1 baz NaN 1
2 foo 1.0 3
3 test 3.0 5
</code></pre>
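<p>For the second expected output, where matching cells should take the value from <code>df2</code>, swap the order so <code>df2</code> has priority (foo's <code>c</code> then becomes 2):</p>
<pre><code>out = df2.set_index('A').combine_first(df1.set_index('A')).reset_index()
</code></pre>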
|
python|pandas|dataframe
| 4 |
1,901,770 | 70,552,787 |
i am trying to decode a Huffman tree but i am having some problems
|
<p>I have created and encoded a Huffman tree and put it in a binary file, but now when I try to decode it I am getting this error:</p>
<pre><code>  line 28, in <module>
    Huffman_Decoding(data, tree)
  File "/Users/georgesmac/Desktop/python_course/decompress.py", line 12, in Huffman_Decoding
    for x in encoded_data:
TypeError: 'node' object is not iterable
</code></pre>
<p>The decode code:</p>
<pre><code>
import pickle
with open("binary.bin", "rb") as file_handle:
data = pickle.load(file_handle)
file_handle.seek(0)
tree = pickle.load(file_handle)
def Huffman_Decoding(encoded_data, huffman_tree):
tree_head = huffman_tree
decoded_output = []
for x in encoded_data:
if x == '1':
huffman_tree = huffman_tree.right
elif x == '0':
huffman_tree = huffman_tree.left
try:
if huffman_tree.left.symbol == None and huffman_tree.right.symbol == None:
pass
except AttributeError:
decoded_output.append(huffman_tree.symbol)
huffman_tree = tree_head
string = ''.join([str(item) for item in decoded_output])
return string
Huffman_Decoding(data, tree)
</code></pre>
<p>the encode code:</p>
<pre><code>import pickle
q = {}
a_file = open("george.txt", 'r')
for line in a_file:
key, value = line.split()
q[key] = value
class node:
def __init__(self, freq, symbol, left=None, right=None):
self.freq = freq
self.symbol = symbol
self.left = left
self.right = right
self.huff = ''
def printNodes(node, val=''):
newVal = val + str(node.huff)
if(node.left):
printNodes(node.left, newVal)
if(node.right):
printNodes(node.right, newVal)
if(not node.left and not node.right):
print(f"{node.symbol} -> {newVal}")
chars = ['a', 'b', 'c', 'd', 'e', 'f']
# frequency of characters
freq = [q['a'], q['b'], q['c'], q['d'], q['e'], q['f']]
nodes = []
for x in range(len(chars)):
nodes.append(node(freq[x], chars[x]))
while len(nodes) > 1:
nodes = sorted(nodes, key=lambda x: x.freq)
left = nodes[0]
right = nodes[1]
left.huff = 0
right.huff = 1
newNode = node(left.freq+right.freq, left.symbol+right.symbol, left, right)
nodes.remove(left)
nodes.remove(right)
nodes.append(newNode)
printNodes(nodes[0])
with open('binary.bin', 'wb') as f:
b = pickle.dumps(nodes[0]) # bytes representation of your object
f.write(b)
</code></pre>
|
<p>You pickled objects of type <code>node</code>. To unpickle them back into objects, Python needs the <code>node</code> class. Since you pickled them with <code>node</code> defined in the main module, that is where Python looks for it.</p>
<p>You should either import the <code>node</code> class or define it in decompress.py.</p>
<p>A cleaner solution would be to create a module specifically for the <code>node</code> class and, in both the pickling and unpickling code, import it from there.</p>
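<p>A minimal sketch of that layout (the module name <code>huffman_node</code> is just a placeholder):</p>
<pre><code># huffman_node.py
class node:
    def __init__(self, freq, symbol, left=None, right=None):
        self.freq = freq
        self.symbol = symbol
        self.left = left
        self.right = right
        self.huff = ''

# at the top of both the encoding and the decoding script
from huffman_node import node
</code></pre>
<p>Note that pickle records the defining module of the class (here <code>__main__.node</code>, since it was defined in the top-level script), which is exactly why the decoder cannot reconstruct it; importing the class from a shared module makes that reference stable.</p>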
|
python|algorithm|huffman-code
| 0 |
1,901,771 | 63,618,393 |
To extend axes limits with custom axis tick labels in python
|
<p>I am trying to set up my plot with custom axis tick labels, and I want my x-axis and y-axis limit ranges to be the same. In the image the y-axis maximum value is 31622776 and I want to extend the y-axis to 100000000 (same as the x-axis). The code I used is</p>
<pre><code>ax = sns.scatterplot(x="Actual", y="Predict",hue="Flag",style="Flag",data=df)
x_labels = {31622 : 4.5 , 100000 : 5 ,316227 : 5.5 ,1000000 : 6 ,3162277 : 6.5 ,10000000 : 7 ,31622776 : 7.5, 100000000 : 8}
ax.set_xticklabels(x_labels)
y_labels = {31622 : 4.5 , 100000 : 5 ,316227 : 5.5 ,1000000 : 6 ,3162277 : 6.5 ,10000000 : 7 ,31622776 : 7.5 , 100000000 : 8}
ax.set_yticklabels(y_labels)
plt.title(r'Log-Log plot of Actual Sales Price Vs Predicted Sales Price')
plt.xlabel(r'Actual Sales Price in Dollars')
plt.ylabel(r'Predicted Sales Price in Dollars')
</code></pre>
<p><a href="https://i.stack.imgur.com/w9lWi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w9lWi.png" alt="enter image description here" /></a></p>
<p>I am confused about what change I should make to achieve it.</p>
|
<p>Use matplotlib.pyplot.xlim() / .ylim() in addition to setting tick labels. Tick labels won't automatically change the axis scaling.</p>
<p>See: <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.xlim.html" rel="nofollow noreferrer">https://matplotlib.org/api/_as_gen/matplotlib.pyplot.xlim.html</a></p>
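<p>A minimal sketch, reusing the tick values from the question:</p>
<pre><code>ax.set_xlim(31622, 100000000)
ax.set_ylim(31622, 100000000)  # extend y to match the x-axis range
</code></pre>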
|
python|matplotlib|data-visualization|axes
| 0 |
1,901,772 | 56,836,996 |
How can I create Frequency Matrix using all columns
|
<p>Let's say that I have a dataset that contains 5 binary columns for 2 rows.</p>
<p>It looks like this:</p>
<pre><code> c1 c2 c3 c4 c5
r1 0 1 0 1 0
r2 1 1 1 1 0
</code></pre>
<p>I want to create a matrix that gives the number of occurrences of a column, given that it also occurred in another column. Kinda like a confusion matrix</p>
<p>My desired output is:</p>
<pre><code> c1 c2 c3 c4 c5
c1 - 1 1 1 0
c2 1 - 1 2 0
c3 1 1 - 1 0
c4 1 2 1 - 0
</code></pre>
<p>I have used pandas crosstab but it only gives the desired output when using 2 columns. I want to use all of the columns</p>
|
<h3><code>dot</code></h3>
<pre><code>df.T.dot(df)
# same as
# df.T @ df
c1 c2 c3 c4 c5
c1 1 1 1 1 0
c2 1 2 1 2 0
c3 1 1 1 1 0
c4 1 2 1 2 0
c5 0 0 0 0 0
</code></pre>
<p>You can use <code>np.fill_diagonal</code> to make the diagonal zero</p>
<pre><code>d = df.T.dot(df)
np.fill_diagonal(d.to_numpy(), 0)
d
c1 c2 c3 c4 c5
c1 0 1 1 1 0
c2 1 0 1 2 0
c3 1 1 0 1 0
c4 1 2 1 0 0
c5 0 0 0 0 0
</code></pre>
<p>And as long as we're using Numpy, you could go all the way...</p>
<pre><code>a = df.to_numpy()
b = a.T @ a
np.fill_diagonal(b, 0)
pd.DataFrame(b, df.columns, df.columns)
c1 c2 c3 c4 c5
c1 0 1 1 1 0
c2 1 0 1 2 0
c3 1 1 0 1 0
c4 1 2 1 0 0
c5 0 0 0 0 0
</code></pre>
|
python-3.x|pandas
| 6 |
1,901,773 | 69,970,794 |
Comparing list and int in python
|
<p>I'm trying to run the following piece of code:</p>
<pre><code>TN = np.sum((1 - predict) * (1 - actual))
</code></pre>
<p>where I have <strong>predict</strong>, the variable that I cannot modify, which gets printed as follows:</p>
<pre><code>[False False False False False False]
</code></pre>
<p>without any comma, so I guess it is not a list.
Then, I have <strong>actual</strong> which is formatted as:</p>
<pre><code>[False, False, False, False, False, False]
</code></pre>
<p>They have the same length, but when I run the command above I get the error:</p>
<pre><code>TypeError: unsupported operand type(s) for -: 'int' and 'list'
</code></pre>
<p>How can I convert the variable <strong>actual</strong> so it can be compared to <strong>predict</strong>?</p>
|
<p><code>[False False False False False False]</code> is a numpy array:</p>
<pre><code>l = [False, False, False, False, False, False]
a = np.array(l)
print(a)
# Output
[False False False False False False]
</code></pre>
<p>I think you have to convert <code>actual</code> to a numpy array as well.</p>
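<p>A minimal sketch of the fix, reusing the names from the question:</p>
<pre><code>import numpy as np

actual = np.array(actual)  # booleans behave like 0/1 in arithmetic
TN = np.sum((1 - predict) * (1 - actual))
</code></pre>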
|
python|list|integer|compare
| 2 |
1,901,774 | 18,202,609 |
Quoted separator cursor copy_from in python
|
<p>Is there any way to provide a quoted separator, like this?</p>
<pre><code>import psycopg2
f_cm = open('cm.sql', 'r')
constr = "dbname='mydb' user= 'pgsql' host='127.0.0.1'"
db = psycopg2.connect(constr)
st = db.cursor()
#st.copy_from(f_cm, 'mytable', sep='","', columns = ('col1','col2', 'col3'))
#instead of
st.copy_from(f_cm, 'mytable', sep=',', columns = ('col1','col2', 'col3'))
</code></pre>
<p>The data format is:</p>
<pre><code>"54654","4454","45465"
"54546","4545","885dds45"
"54536","4546","885dd45"
</code></pre>
<p>I have searched and found good news in the psycopg release notes:
<a href="http://initd.org/psycopg/docs/news.html" rel="noreferrer">New in psycopg 2.0.9</a></p>
<p>Under the heading "What's new in psycopg 2.0.9" it states:
copy_from() and copy_to() can now use quoted separators.</p>
<p>tools:</p>
<pre><code>psycopg2 = 2.4.5
python = 2.7.3
</code></pre>
|
<p>It seems cursor.copy_from and copy_to do not support quoted separators. The solution is to use copy_expert:</p>
<pre><code>import psycopg2
f_cm = open('cm.sql', 'r')
constr = "dbname='mydb' user= 'pgsql' host='127.0.0.1'"
db = psycopg2.connect(constr)
st = db.cursor()
copy = "COPY mytable(col1,col2, col3) FROM STDIN with csv"
st.copy_expert(sql=copy, file=f_cm)
db.commit()
st.close()
db.close()
</code></pre>
|
python|sql|psycopg2
| 9 |
1,901,775 | 17,761,202 |
Is Python dict an Object?
|
<p>I have a <code>dict</code> like this:</p>
<pre><code>>>> my_dict = {u'2008': 6.57, u'2009': 4.89, u'2011': 7.74,
... u'2010': 7.44, u'2012': 7.44}
</code></pre>
<p>Output with <code>has_key</code>:</p>
<pre><code>>>> my_dict.has_key(unicode(2012))
True
</code></pre>
<p>Output with <code>hasattr</code>:</p>
<pre><code>>>> hasattr(my_dict, unicode(2012))
False
</code></pre>
<p>I couldn't understand why this behaves differently.
I googled and found out that it is because <code>dict</code> and objects are different. </p>
<p>But, still I couldn't understand the difference properly.</p>
<p>(BTW : I am using python 2.7)</p>
|
<p><code>dict</code> instances are objects too. But their keys are just not exposed as attributes.</p>
<p>Exposing the keys as attributes (too or instead of item access) would lead to namespace pollution; you'd never be able to use a <code>has_key</code> key, for example. <code>has_key</code> is <em>already</em> an attribute on dictionaries:</p>
<pre><code>>>> hasattr({}, 'has_key')
True
>>> {}.has_key
<built-in method has_key of dict object at 0x7fa2a8461940>
</code></pre>
<p>Attributes of objects and the contents of dictionaries are two <em>separate</em> things, and the separation is deliberate.</p>
<p>You can always subclass <code>dict</code> to add attribute access using the <a href="http://docs.python.org/2/reference/datamodel.html#object.__getattr__" rel="noreferrer"><code>__getattr__()</code> hook method</a>:</p>
<pre><code>class AttributeDict(dict):
def __getattr__(self, name):
if name in self:
return self[name]
raise AttributeError(name)
</code></pre>
<p>Demo:</p>
<pre><code>>>> demo = AttributeDict({'foo': 'bar'})
>>> demo.keys()
['foo']
>>> demo.foo
'bar'
</code></pre>
<p><em>Existing</em> attributes on the <code>dict</code> class take priority:</p>
<pre><code>>>> demo['has_key'] = 'monty'
>>> demo.has_key
<built-in method has_key of AttributeDict object at 0x7fa2a8464130>
</code></pre>
|
python|dictionary
| 31 |
1,901,776 | 61,025,959 |
How to load a SavedModel in a new Colab notebook?
|
<p>I am using Google Colab.
In a first notebook, I saved my model using: <code>model.save('my_model.h5')</code>.
In the same notebook, I can see that the model is stored.
Input: <code>ls -d $PWD/*</code>
Output: <code>/content/my_model.h5</code></p>
<p>In the same notebook, I can restore the saved model by using:
<code>model = tf.keras.models.load_model('my_model.h5')</code></p>
<p>But if I want to load the same model in a new notebook, I got an error message.
Input: </p>
<pre class="lang-py prettyprint-override"><code>from keras.models import load_model
new_model = tf.keras.models.load_model('/content/my_model.h5')
</code></pre>
<p>Error message: </p>
<blockquote>
<p>OSError: SavedModel file does not exist at: /content/my_model.h5/{saved_model.pbtxt|saved_model.pb}</p>
</blockquote>
<p>Thanks for your help,</p>
<p>Boris</p>
|
<p>Each notebook runs on a different machine, so files saved to the local filesystem are not shared between them. You need to save the model to Google Drive, using <code>drive.mount()</code>. Then you can share the model across different notebooks.</p>
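<p>A minimal sketch (the exact Drive path is an assumption):</p>
<pre><code>from google.colab import drive
drive.mount('/content/drive')

# notebook 1: save into the mounted drive
model.save('/content/drive/My Drive/my_model.h5')

# notebook 2: mount again, then load
new_model = tf.keras.models.load_model('/content/drive/My Drive/my_model.h5')
</code></pre>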
|
tensorflow|google-colaboratory
| 0 |
1,901,777 | 72,791,331 |
Compare two lists and get the indices where the values are different
|
<p>I would like help with the following situation
I have two lists:</p>
<p>Situation 1:</p>
<pre><code>a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
b = [0, 1, 2, 3, 5, 5, 6, 7, 8, 9]
</code></pre>
<p>I need key output: <strong>Key 4</strong> is different</p>
<p>Situation 2:</p>
<pre><code>a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
b = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p>I need key output: <strong>false</strong> -> no key is different</p>
<p>Situation 3:</p>
<pre><code>a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
b = [0, 9, 2, 3, 4, 5, 6, 7, 3, 9]
</code></pre>
<p>I need key output: <strong>Key 1 and Key 8</strong> is different</p>
<p>How could I resolve this? My array has 260 keys</p>
|
<p>You can use a list comprehension with <a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer"><code>zip</code></a>, and <a href="https://docs.python.org/3/library/functions.html#enumerate" rel="nofollow noreferrer"><code>enumerate</code></a> to get the indices. Use <a href="https://stackoverflow.com/questions/2580136/does-python-support-short-circuiting">short-circuiting</a> and the fact that an empty list is falsy to get <code>False</code>:</p>
<pre><code>a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
b = [0, 1, 2, 3, 5, 5, 6, 7, 8, 9]
out = [i for i,(e1,e2) in enumerate(zip(a,b)) if e1!=e2] or False
</code></pre>
<p>output:</p>
<pre><code>[4]
</code></pre>
<p>output for example #2: <code>False</code></p>
<p>output for example #3: <code>[1, 8]</code></p>
|
python|list
| 3 |
1,901,778 | 68,346,372 |
Multiply String in Dataframe?
|
<p>My desired output is the following:</p>
<pre><code> count tally
1 2 //
2 3 ///
3 5 /////
4 3 ///
5 2 //
</code></pre>
<p>My code:</p>
<pre><code>my_list = [1,1,2,2,2,3,3,3,3,3,4,4,4,5,5]
my_series = pd.Series(my_list)
values_counted = pd.Series(my_series.value_counts(),name='count')
# other calculated columns left out for SO simplicity
df = pd.concat([values_counted], axis=1).sort_index()
df['tally'] = values_counted * '/'
</code></pre>
<p>With the code above I get the following error:</p>
<pre><code>masked_arith_op
result[mask] = op(xrav[mask], y)
numpy.core._exceptions.UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U21'), dtype('<U21')) -> dtype('<U21')
</code></pre>
<p>In searching for solutions I found one on SO that said to try:</p>
<pre><code>values_counted * float('/')
</code></pre>
<p>But that did not work.</p>
<p>In 'normal' Python outside of Dataframes the following code works:</p>
<pre><code>10 * '/'
</code></pre>
<p>and returns</p>
<pre><code>///////////
</code></pre>
<p>How can I achieve the same functionality in a Dataframe?</p>
|
<p>Use a lambda function to repeat the character; your solution simplifies to:</p>
<pre><code>my_list = [1,1,2,2,2,3,3,3,3,3,4,4,4,5,5]
df1 = pd.Series(my_list).value_counts().to_frame('count').sort_index()
df1['tally'] = df1['count'].apply(lambda x: x * '/')
print (df1)
count tally
1 2 //
2 3 ///
3 5 /////
4 3 ///
5 2 //
</code></pre>
|
pandas|dataframe
| 1 |
1,901,779 | 68,142,236 |
pandas: condition of location of elements in columns
|
<p>Given a dataframe, I am trying to print out how many cells of one column with a specific value correspond to the same index of another column having another specific value.
In this instance the output should be '2', since the condition is df[z]=4 and df[x]=C and only cells 10 and 11 match this requirement.
My code does not output any result, only a warning message:</p>
<pre><code>:5: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
  if (df[df['z']== 4].index.values) == (df[df['x']== 'C'].index.values):
:5: DeprecationWarning: elementwise comparison failed; this will raise an error in the future.
</code></pre>
<p>Besides fixing this issue, is there a more 'pythonic' way of doing this without a for loop?</p>
<pre><code>import numpy as np
import pandas as pd
data=[['A', 1,2 ,5, 'blue'],
['A', 1,5,6, 'blue'],
['A', 4,4,7, 'blue']
,['B', 6,5,4,'yellow'],
['B',9,9,3, 'blue'],
['B', 7,9,1,'yellow']
,['B', 2,3,1,'yellow'],
['B', 5,1,2,'yellow'],
['C',2,10,9,'green']
,['C', 8,2,8,'green'],
['C', 5,4,3,'green'],
['C', 8,4 ,3,'green']]
df = pd.DataFrame(data, columns=['x','y','z','xy', 'color'])
k=0
print((df[df['z']==4].index.values))
print(df[df['x']== 'C'].index.values)
for i in (df['z']):
if (df[df['z']== 4].index.values) == (df[df['x']== 'C'].index.values):
k+=1
print(k)
</code></pre>
|
<p>try:</p>
<pre><code>c=df['z'].eq(4) & df['x'].eq('C')
#your condition
</code></pre>
<p>Finally:</p>
<pre><code>count=df[c].index.size
#OR
count=len(df[c].index)
</code></pre>
<p>output:</p>
<pre><code>print(count)
>>>2
</code></pre>
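<p>Since <code>c</code> is a boolean Series, summing it gives the count directly, without building the filtered frame:</p>
<pre><code>count = c.sum()
</code></pre>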
|
python|pandas|indexing|conditional-statements
| 2 |
1,901,780 | 68,079,012 |
ValueError: win_size exceeds image extent. If the input is a multichannel (color) image, set multichannel=True
|
<p>I am trying to calculate the SSIM value for two images, but I am getting an error.</p>
<ul>
<li><code>img_np.shape = (1, 256, 256)</code></li>
<li><code>out.detach()[0].cpu().numpy().shape = (1, 256, 256)</code></li>
<li><code>out</code> is the output image generated from the model</li>
</ul>
<p>When I try to find the SSIM value with <code>ssim_ = compare_ssim(img_np, out.detach().cpu().numpy()[0])</code> I get the error <code>ValueError: win_size exceeds image extent. If the input is a multichannel (color) image, set multichannel=True.</code></p>
<p>I have tried</p>
<ul>
<li><code>ssim_ = compare_ssim(img_np, out.detach().cpu().numpy()[0], full=True)</code>, but I get the same error</li>
<li><code>ssim_ = compare_ssim(img_np, out.detach().cpu().numpy()[0], full=True, win_size=1,use_sample_covariance=False)</code>, but then the output is an array, not a number</li>
</ul>
|
<p>The shape (1, 256, 256) is interpreted as an image with 1 row, 256 columns and 256 color channels.</p>
<p>You may use <a href="https://numpy.org/doc/stable/reference/generated/numpy.squeeze.html" rel="nofollow noreferrer">numpy.squeeze</a> for removing the redundant dimension:</p>
<pre><code>img_np = np.squeeze(img_np)
</code></pre>
<p>The new shape is going to be (256, 256) - shape of a valid Grayscale image.</p>
<hr />
<p>Most Python packages assumes "Channels-Last" image format.<br />
See: <a href="https://machinelearningmastery.com/a-gentle-introduction-to-channels-first-and-channels-last-image-formats-for-deep-learning/" rel="nofollow noreferrer">A Gentle Introduction to Channels-First and Channels-Last Image Formats</a>.</p>
|
python|image-processing|scikit-image|ssim
| 2 |
1,901,781 | 59,068,666 |
JAX: time to jit a function grows superlinear with memory accessed by function
|
<p>Here is a simple example, which numerically integrates the product of two Gaussian pdfs. One of the Gaussians is fixed, with mean always at 0. The other Gaussian varies in its mean:</p>
<pre class="lang-py prettyprint-override"><code>import time
import jax.numpy as np
from jax import jit
from jax.scipy.stats.norm import pdf
# set up evaluation points for numerical integration
integr_resolution = 6400
lower_bound = -100
upper_bound = 100
integr_grid = np.linspace(lower_bound, upper_bound, integr_resolution)
proba = pdf(integr_grid)
integration_weight = (upper_bound - lower_bound) / integr_resolution
# integrate with new mean
def integrate(mu_new):
x_new = integr_grid - mu_new
proba_new = pdf(x_new)
total_proba = sum(proba * proba_new * integration_weight)
return total_proba
print('starting jit')
start = time.perf_counter()
integrate = jit(integrate)
integrate(1)
stop = time.perf_counter()
print('took: ', stop - start)
</code></pre>
<p>The function looks simple, but it doesn't scale at all. The following list contains pairs of (value for integr_resolution, time it took to run the code):</p>
<ul>
<li>100 | 0.107s</li>
<li>200 | 0.23s</li>
<li>400 | 0.537s</li>
<li>800 | 1.52s</li>
<li>1600 | 5.2s</li>
<li>3200 | 19s</li>
<li>6400 | 134s</li>
</ul>
<p>For reference, the unjitted function, applied to <code>integr_resolution=6400</code> takes 0.02s.</p>
<p>I thought that this might be related to the fact that the function is accessing a global variable. But moving the code to set up the integration points inside of the function has no notable influence on the timing. The following code takes 5.36s to run. It corresponds to the table entry with 1600 which previously took 5.2s:</p>
<pre class="lang-py prettyprint-override"><code># integrate with new mean
def integrate(mu_new):
# set up evaluation points for numerical integration
integr_resolution = 1600
lower_bound = -100
upper_bound = 100
integr_grid = np.linspace(lower_bound, upper_bound, integr_resolution)
proba = pdf(integr_grid)
integration_weight = (upper_bound - lower_bound) / integr_resolution
x_new = integr_grid - mu_new
proba_new = pdf(x_new)
total_proba = sum(proba * proba_new * integration_weight)
return total_proba
</code></pre>
<p>What is happening here?</p>
|
<p>I also answered this at <a href="https://github.com/google/jax/issues/1776" rel="noreferrer">https://github.com/google/jax/issues/1776</a>, but adding the answer here too.</p>
<p>It's because the code uses <code>sum</code> where it should use <code>np.sum</code>.</p>
<p><code>sum</code> is a Python built-in that extracts each element of a sequence and sums them one by one using the <code>+</code> operator. This has the effect of building a large, unrolled chain of adds which XLA takes a long time to compile.</p>
<p>If you use <code>np.sum</code>, then JAX builds a single XLA reduction operator, which is much faster to compile.</p>
<p>And just to show how I figured this out: I used <code>jax.make_jaxpr</code>, which dumps JAX's internal trace representation of a function. Here, it shows:</p>
<pre><code>In [3]: import jax
In [4]: jax.make_jaxpr(integrate)(1)
Out[4]:
{ lambda b c ; ; a.
let d = convert_element_type[ new_dtype=float32
old_dtype=int32 ] a
e = sub c d
f = sub e 0.0
g = pow f 2.0
h = div g 1.0
i = add 1.8378770351409912 h
j = neg i
k = div j 2.0
l = exp k
m = mul b l
n = mul m 2.0
o = slice[ start_indices=(0,)
limit_indices=(1,)
strides=(1,)
operand_shape=(100,) ] n
p = reshape[ new_sizes=()
dimensions=None
old_sizes=(1,) ] o
q = add p 0.0
r = slice[ start_indices=(1,)
limit_indices=(2,)
strides=(1,)
operand_shape=(100,) ] n
s = reshape[ new_sizes=()
dimensions=None
old_sizes=(1,) ] r
t = add q s
u = slice[ start_indices=(2,)
limit_indices=(3,)
strides=(1,)
operand_shape=(100,) ] n
v = reshape[ new_sizes=()
dimensions=None
old_sizes=(1,) ] u
w = add t v
x = slice[ start_indices=(3,)
limit_indices=(4,)
strides=(1,)
operand_shape=(100,) ] n
y = reshape[ new_sizes=()
dimensions=None
old_sizes=(1,) ] x
z = add w y
... similarly ...
</code></pre>
<p>and it's then obvious why this is slow: the program is very big.</p>
<p>Contrast the <code>np.sum</code> version:</p>
<pre><code>In [5]: def integrate(mu_new):
...: x_new = integr_grid - mu_new
...:
...: proba_new = pdf(x_new)
...: total_proba = np.sum(proba * proba_new * integration_weight)
...:
...: return total_proba
...:
In [6]: jax.make_jaxpr(integrate)(1)
Out[6]:
{ lambda b c ; ; a.
let d = convert_element_type[ new_dtype=float32
old_dtype=int32 ] a
e = sub c d
f = sub e 0.0
g = pow f 2.0
h = div g 1.0
i = add 1.8378770351409912 h
j = neg i
k = div j 2.0
l = exp k
m = mul b l
n = mul m 2.0
o = reduce_sum[ axes=(0,)
input_shape=(100,) ] n
in [o] }
</code></pre>
<p>Hope that helps!</p>
|
python|jax
| 5 |
1,901,782 | 59,442,103 |
Pandas test if one column is a subset of another column? (i.e. column B contains column A)
|
<pre><code>data = {'COL1':['United States of America', 'United Kingdom'], 'COL2':['States of America', 'United States']}
df = pd.DataFrame(data)
df['IS_COL1_IN_COL2'] = df['COL1'].isin(df['COL2'])
display(df)
</code></pre>
<p>The code above is giving the result below. I expected the values to be False, but I'm getting True. Can you please let me know what's wrong in my code? Thanks.</p>
<p><a href="https://i.stack.imgur.com/LBPqX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LBPqX.png" alt="enter image description here" /></a></p>
|
<p>I think you need <code>in</code> with a lambda function:</p>
<pre><code>df['IS_COL1_IN_COL2'] = df.apply(lambda x:x['COL1'] in x['COL2'], axis=1)
df['IS_COL2_IN_COL1'] = df.apply(lambda x:x['COL2'] in x['COL1'], axis=1)
print (df)
COL1 COL2 IS_COL1_IN_COL2 \
0 United States of America States of America False
1 United Kingdom United States False
IS_COL2_IN_COL1
0 True
1 False
</code></pre>
<hr>
<p>The <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>Series.isin</code></a> function works differently: it compares each value of <code>COL1</code> against <em>all</em> values of <code>COL2</code>. Check the changed data sample:</p>
<pre><code>data = {'COL1':['United States of America', 'United Kingdom','USA','JAR'],
'COL2':['States of America', 'United States','UK', 'USA'],}
df = pd.DataFrame(data)
df['IS_COL1_IN_COL2_isin'] = df['COL1'].isin(df['COL2'])
df['IS_COL1_IN_COL2'] = df.apply(lambda x:x['COL1'] in x['COL2'], axis=1)
df['IS_COL2_IN_COL1'] = df.apply(lambda x:x['COL2'] in x['COL1'], axis=1)
print (df)
COL1 COL2 IS_COL1_IN_COL2_isin \
0 United States of America States of America False
1 United Kingdom United States False
2 USA UK True
3 JAR USA False
IS_COL1_IN_COL2 IS_COL2_IN_COL1
0 False True
1 False False
2 False False
3 False False
</code></pre>
|
python|pandas
| 2 |
1,901,783 | 63,184,926 |
How can I access layers in a pytorch module by index?
|
<p>I am trying to write a pytorch module with multiple layers. Since I need the intermediate outputs, I cannot put them all in a <code>Sequential</code> as usual. On the other hand, since there are many layers, what I have in mind is to put the layers in a list and access them by index in a loop. The code below describes what I am trying to achieve:</p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.layer_list = []
self.layer_list.append(nn.Linear(2,3))
self.layer_list.append(nn.Linear(3,4))
self.layer_list.append(nn.Linear(4,5))
def forward(self, x):
res_list = [x]
for i in range(len(self.layer_list)):
res_list.append(self.layer_list[i](res_list[-1]))
return res_list
model = MyModel()
x = torch.randn(4,2)
y = model(x)
print(y)
optimizer = optim.Adam(model.parameters())
</code></pre>
<p>The forward method works fine, but when I want to set an optimizer the program says</p>
<pre><code>ValueError: optimizer got an empty parameter list
</code></pre>
<p>It appears that the layers in the list are not registered here. What can I do?</p>
|
<p>If you put your layers in a python list, pytorch does not register them correctly. You have to do so using <code>ModuleList</code> (<a href="https://pytorch.org/docs/master/generated/torch.nn.ModuleList.html" rel="noreferrer">https://pytorch.org/docs/master/generated/torch.nn.ModuleList.html</a>).</p>
<blockquote>
<p>ModuleList can be indexed like a regular Python list, but modules it contains are properly registered, and will be visible by all Module methods.</p>
</blockquote>
<p>Your code should be something like:</p>
<pre class="lang-py prettyprint-override"><code>
import torch
import torch.nn as nn
import torch.optim as optim
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.layer_list = nn.ModuleList() # << the only changed line! <<
self.layer_list.append(nn.Linear(2,3))
self.layer_list.append(nn.Linear(3,4))
self.layer_list.append(nn.Linear(4,5))
def forward(self, x):
res_list = [x]
for i in range(len(self.layer_list)):
res_list.append(self.layer_list[i](res_list[-1]))
return res_list
</code></pre>
<p>By using <code>ModuleList</code> you make sure all layers are registered in the computational graph.</p>
<p>There is also a <code>ModuleDict</code> that you can use if you want to index your layers by name. You can check pytorch's containers here: <a href="https://pytorch.org/docs/master/nn.html#containers" rel="noreferrer">https://pytorch.org/docs/master/nn.html#containers</a></p>
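<p>A minimal sketch of <code>ModuleDict</code>, in case you prefer indexing layers by name (the layer names here are just placeholders):</p>
<pre class="lang-py prettyprint-override"><code>layers = nn.ModuleDict({
    'fc1': nn.Linear(2, 3),
    'fc2': nn.Linear(3, 4),
})
h = layers['fc1'](x)  # parameters are registered; lookup is by key
</code></pre>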
|
python|pytorch
| 4 |
1,901,784 | 63,155,887 |
Writing scraped data in rows with python
|
<p>I have a basic bs4 web scraper. There are no issues in getting my scraped data, but when I try to write it to a .csv file I run into a problem: I am unable to write my data to more than one column. In <a href="https://www.youtube.com/watch?v=XQgXKtPSzUI" rel="nofollow noreferrer">the tutorial</a> I loosely follow, the author separates the fields with "," easily, but when I open my CSV with Excel, neither the header nor the data is separated into columns. What am I missing?</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url="myurl"
page=requests.get(url)
soup=BeautifulSoup(page.content,'html.parser')
items=soup.find_all('a', class_='listing-card')
filename = 'data.csv'
f = open(filename, "w")
header = "name, price\n"
f.write(header)
for item in items:
title = item.find('span', class_='title').text
price = item.find('span', class_='price').text
f.write(title.replace(",","|") + ',' + price + "\n")
f.close()
</code></pre>
|
<p>I have found that the easiest way to get your data into a CSV file is to put the data into a pandas DataFrame then use the to_csv method to write the file.</p>
<p>Using your example the code would be as follows:</p>
<pre><code>import requests
import pandas as pd
from bs4 import BeautifulSoup
url="myurl"
page=requests.get(url)
soup=BeautifulSoup(page.content,'html.parser')
items=soup.find_all('a', class_='listing-card')
filename = 'data.csv'
#
# Create an empty list to store entries
mylist = []
for item in items:
title = item.find('span', class_='title').text
price = item.find('span', class_='price').text
#
# Create the dictionary item to be appended to the list
entry = {'name' : title, 'price' : price}
mylist.append(entry)
myDataframe = pd.DataFrame(mylist)
myDataframe.to_csv(filename, index=False)  # to_csv writes the header row for you
</code></pre>
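<p>Note that <code>to_csv</code> automatically quotes any field that contains a comma, so the <code>title.replace(",","|")</code> workaround from the question is no longer needed.</p>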
|
python|dataframe|csv|web-scraping|beautifulsoup
| 0 |
1,901,785 | 62,345,107 |
Finding, counting and replacing values in lists
|
<p>I am a new user, using python 2.7 with grasshopper. I have three lists (A,B,C) each with 8760 values ranging between 0 and 25000.
For each list I want to do the following.
For A to replace all the values "greater than 2000" with 2000.
For B to find the percentage of values that are between 100 and 2000.
For C to find the percentage of values greater than 300.
By percentage I mean, 'x' percent of the values in the list are greater than 'n'.
I could find such questions answered, but only with something called numpy and pandas, which I don't know and can't use. Any help is greatly appreciated. Thank you.</p>
|
<p>Not quite familiar with the Rhino APIs and how lists work there, but this would be the general way of doing it.</p>
<pre><code># assigning to the loop variable does not modify the list,
# so cap the values by index instead
for i in range(len(A)):
    if A[i] > 2000:
        A[i] = 2000

bCount = 0
for x in B:
    if 100 <= x <= 2000:
        bCount += 1
bPercentage = 100.0 * bCount / len(B)  # percent of B's values in [100, 2000]

cCount = 0
for x in C:
    if x > 300:
        cCount += 1
cPercentage = 100.0 * cCount / len(C)  # percent of C's values above 300
</code></pre>
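<p>The same three results as one-liners, if you prefer (still Python 2.7-compatible):</p>
<pre><code>A = [min(x, 2000) for x in A]
bPercentage = 100.0 * sum(1 for x in B if 100 <= x <= 2000) / len(B)
cPercentage = 100.0 * sum(1 for x in C if x > 300) / len(C)
</code></pre>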
|
python-2.7|grasshopper
| 0 |
1,901,786 | 62,440,231 |
How are __repr__ and __str__ implemented in print function
|
<p>The way I understand <code>__str__</code> and <code>__repr__</code> from googling is that <code>__str__</code> is called when <code>print</code> or the <code>str()</code> function is used and the output is always a string, while <code>__repr__</code> is called when <code>repr()</code> or <code>__repr__()</code> is used; <code>repr()</code> can only output a string, but I thought <code>__repr__()</code> could return any valid Python expression such as a tuple, dictionary, string, etc.</p>
<p>So I created two classes, Person and CircularQueue. The Person class contains the variables <strong>name</strong> and <strong>age</strong>, and I defined <code>__str__</code> and <code>__repr__</code> on it. The CircularQueue class is an ordinary circular queue except that it only accepts Person instances. </p>
<p>For <code>print(cg.dequeue())</code> it prints only the <strong>name</strong> as I expected with <code>__str__</code>. However, when I run <code>print(cg.multi_dequeue(3))</code>, it gives the output of <code>__repr__</code> and not <code>__str__</code>. I thought that since I used <code>print</code> it would use <code>__str__</code>.</p>
<ol>
<li><p>Why <code>print(cg.dequeue())</code> output using <code>__str__</code> while <code>print(cg.multi_dequeue(3))</code> output using <code>__repr__</code>? </p></li>
<li><p>This question is not really connected to the topic but why can't I use <code>return self</code> in <code>__repr__</code> even if I use <code>cg.multi_dequeue(3).__repr__()</code>? It gives <code>TypeError: __repr__ returned non-string (type Person)</code> error.</p></li>
</ol>
<pre><code>class Person:
name = ""
age = 0
def __init__(self, name, age):
self.name = name
self.age = age
def __repr__(self):
return ("[Person(name: " + self.name + ", age: " + str(self.age) + ")")
#return self
def __str__(self):
return self.name
class CircularQueue:
...
def enqueue(self, element):
check = isinstance(element,Person)
#check if CQ is full
if self.is_full():
print("Queue is full!")
return 0
#check if input is Person class
elif check != True:
print("Element is not a Person class element")
return 0
#add element to CQ
else:
self.queue[self.rear] = element
self.rear = (self.rear + 1) % self.M
def dequeue(self):
#check if CQ is empty
if self.is_empty():
print("Queue is empty")
return 0
#return element
else:
temp = self.queue[self.front]
self.queue[self.front] = None
self.front = (self.front +1) % self.M
return temp
def multi_dequeue(self, count):
temp = []
for i in range(count):
temp.append(self.dequeue())
return temp
def main():
cg = CircularQueue(10)
cg.enqueue(Person("hi",1))
cg.enqueue(Person("bye",1))
cg.enqueue(Person("dd",3))
cg.enqueue(Person("ss",3))
cg.enqueue(Person("aa",3))
print(cg.dequeue())
print(cg.multi_dequeue(3))
main()
</code></pre>
<p>output</p>
<pre><code>hi
[[Person(name: bye, age: 1), [Person(name: dd, age: 3), [Person(name: ss, age: 3)]
</code></pre>
|
<p>Your explanation is not quite right; let me try another one.</p>
<p>The difference between <code>__repr__</code> and <code>__str__</code> lies in their information value. The default implementation is <code>__repr__</code>: it returns the class and a unique id of the instance (the memory address in CPython), which the interpreter uses for unambiguous identification. It is the unique representation of an object instance, but it is not always helpful for the user. This is where <code>__str__</code> comes into play.</p>
<p>Inheritance can be pictured as a tree structure. Every time you inherit from a class or type, you add another child node to the tree, and the deeper you are in the tree, the more information may be available about the individual nodes.<br>
Within my implementation of a tree below, you can see that the unique identification is given by the class, the module path and the content. For the user, however, the actual content of the node is more interesting, and that can vary depending on the use or depth within the tree.</p>
<p>Sequences and other containers always build their representation from the <code>__repr__</code> of their elements: when you print a list, the list calls <code>repr()</code> on every item, never <code>str()</code>. That is why <code>print(cg.multi_dequeue(3))</code> shows the <code>__repr__</code> output even though you used <code>print</code>. In my implementation a <code>__str__</code> output for the whole tree would change constantly and go beyond any reasonable scope, so with <code>__str__</code> I only return the string of the node's real data. Remember, a node with the same content may appear at a different location in the tree; such nodes look the same but are different.</p>
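<p>A quick check with the <code>Person</code> class from the question (a sketch; the output matches the question's <code>__repr__</code> exactly as written):</p>
<pre><code>>>> p = Person('hi', 1)
>>> print(p)    # print uses __str__ on the object itself
hi
>>> print([p])  # the list builds its repr from repr(p)
[[Person(name: hi, age: 1)]
</code></pre>
<p>As for the second question: <code>__repr__</code> (like <code>__str__</code>) must return a <code>str</code>. <code>return self</code> hands back a <code>Person</code>, which is exactly why Python raises <code>TypeError: __repr__ returned non-string (type Person)</code>.</p>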
<p>Unfortunately I can't give you a better explanation. I hope it's not too weird and complicated.<br>
Think about it.<br>
I believe in you!</p>
<pre><code>from abc import ABC
from typing import List
class BaseNode(ABC):
def __init__(self) -> None:
self._children = []
self._parent = None
def __len__(self) -> int:
return len(self._children)
    def __repr__(self):
        # mimics Python's default __repr__: class path plus a unique id
        return f'<{self.__class__.__module__}.{self.__class__.__name__}' \
               f' object at {self.__hash__()}>'

    # Python's default __str__ simply falls back to __repr__
    __str__ = __repr__
def has_children(self) -> bool:
return len(self) > 0
def is_leaf(self) -> bool:
return not self.has_children()
def is_root(self) -> bool:
return self._parent is None
def add(self, child: 'BaseNode') -> 'BaseNode':
self._children.append(child)
child._parent = self
return self
def depth(self) -> int:
d,n = 0,self
while not n.is_root():
d,n = d+1,n._parent
return d
def descendants(self) -> List['Node']:
arr = []
if self.is_leaf(): return arr
else:
arr.extend(self._children)
for child in self._children:
arr.extend(child.descendants())
return arr
# abstract
def remove(self, child: 'BaseNode') -> 'BaseNode':
self._children.remove(child)
child._parent = None
return self
class RootNode(BaseNode):
pass
class Node(BaseNode):
def __init__(self, content):
super().__init__()
self._content = content
def __str__(self):
return self.content
@property
def content(self):
return self._content
def main():
node = RootNode()
node_a = Node('Node 1')
node_aa = Node('Node 1.1')
node_ab = Node('Node 1.2')
node_aba = Node('Node 1.2.1')
node_b = Node('Node 2')
node_ba = Node('Node 2.1')
node_ab.add(node_aba)
node_a.add(node_aa)
node_a.add(node_ab)
node.add(node_a)
node_b.add(node_ba)
node.add(node_b)
print(node)
print(str(node))
print(repr(node))
print(node.depth())
print(node.descendants())
print(node_aba)
print(str(node_aba))
print(repr(node_aba))
print(node_aba.depth())
print(node_aba.descendants())
if __name__ == '__main__':
main()
</code></pre>
|
python-3.x
| 0 |
1,901,787 | 35,508,089 |
Can I use braintree sdk with python3
|
<p>The braintree git repo says in the <a href="https://github.com/braintree/braintree_python" rel="nofollow">README</a> it is compatible with Python 3.4 but <code>caniusepython3</code> tells me it is not.</p>
<p>Can I safely use it?</p>
<pre><code>$ caniusepython3 -r requirements.txt
Finding and checking dependencies ...
[WARNING] Stale overrides: {'reportlab'}
You need 1 project to transition to Python 3.
Of that 1 project, 1 has no direct dependencies blocking its transition:
braintree
$
$ more requirements.txt
braintree==3.24.0
requests==2.9.1
</code></pre>
|
<p><sub>Full disclosure: I work at Braintree. If you have any further questions, feel free to contact <a href="https://support.braintreepayments.com/" rel="nofollow">support</a>.</sub></p>
<p>The Braintree Python library is compatible with Python 3.3 and higher.</p>
<p><code>caniusepython3</code> uses a <a href="https://pypi.python.org/pypi?%3Aaction=list_classifiers" rel="nofollow">list of classifiers</a> from PyPi to determine if a project is Python 3 compatible. Since <a href="https://pypi.python.org/pypi/braintree/3.24.0" rel="nofollow">braintree has no classifiers listed on PyPi</a>, it is not listed as
Python 3 compatible.</p>
<p>The only dependency of the Braintree library is <code>requests</code>, which is listed as compatible.</p>
<p>We'll work on updating PyPI to reflect the compatibility.</p>
|
python|python-3.x|braintree
| 2 |
1,901,788 | 35,590,090 |
Multiprocessing pool.join() hangs under some circumstances
|
<p>I am trying to create a simple producer / consumer pattern in Python using <code>multiprocessing</code>. It works, but it hangs on <code>p.join()</code>.</p>
<pre><code>from multiprocessing import Pool, Queue
que = Queue()
def consume():
while True:
element = que.get()
if element is None:
print('break')
break
print('Consumer closing')
def produce(nr):
que.put([nr] * 1000000)
print('Producer {} closing'.format(nr))
def main():
p = Pool(5)
p.apply_async(consume)
p.map(produce, range(5))
que.put(None)
print('None')
p.close()
p.join()
if __name__ == '__main__':
main()
</code></pre>
<p>Sample output:</p>
<pre><code>~/Python/Examples $ ./multip_prod_cons.py
Producer 1 closing
Producer 3 closing
Producer 0 closing
Producer 2 closing
Producer 4 closing
None
break
Consumer closing
</code></pre>
<p><strong>However</strong>, it works perfectly when I change one line:</p>
<pre><code>que.put([nr] * 100)
</code></pre>
<p>It is 100% reproducible on Linux system running Python 3.4.3 or Python 2.7.10. Am I missing something?</p>
|
<p>There is quite a lot of confusion here. What you are writing is not a producer/consumer scenario but a mess which is misusing another pattern usually referred to as "pool of workers".</p>
<p>The pool of workers pattern is an application of the producer/consumer one in which there is one producer which schedules the work and many consumers which consume it. In this pattern, the owner of the <code>Pool</code> ends up being the producer while the workers will be the consumers.</p>
<p>In your example instead you have a hybrid solution where one worker ends up being a consumer and the others act as a sort of middleware. The whole design is very inefficient, duplicates most of the logic already provided by the <code>Pool</code> and, more importantly, is very error prone. What you end up suffering from is a <a href="https://en.wikipedia.org/wiki/Deadlock" rel="nofollow">deadlock</a>. </p>
<p>Putting an object into a <code>multiprocessing.Queue</code> is an asynchronous operation. It blocks only if the <code>Queue</code> is full; your <code>Queue</code> has unbounded size, so it never blocks. </p>
<p>This means your <code>produce</code> function returns immediately, therefore the call to <code>p.map</code> is not blocking as you expect it to. The related worker processes instead wait until the actual message goes through the <code>Pipe</code> which the <code>Queue</code> uses as communication channel. </p>
<p>What happens next is that you terminate your consumer prematurely: you put the <code>None</code> "message" in the <code>Queue</code>, and it gets delivered before all the lists your <code>produce</code> function creates are properly pushed through the <code>Pipe</code>.</p>
<p>You notice the issue once you call <code>p.join</code>, but the real situation is the following.</p>
<ul>
<li>the <code>p.join</code> call is waiting for all the worker processes to terminate.</li>
<li>the worker processes are waiting for the big lists to go though the <code>Queue</code>'s <code>Pipe</code>.</li>
<li>as the consumer worker is long gone, nobody drains the <code>Pipe</code> which is obviously full.</li>
</ul>
<p>The issue does not show if your lists are small enough to go through before you actually send the termination message to the <code>consume</code> function.</p>
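<p>One minimal way to avoid the deadlock (a sketch, assuming a fork-based platform like the Linux system in the question): make the consumer drain exactly as many messages as the producers send, and wait for it before closing the pool.</p>
<pre><code>from multiprocessing import Pool, Queue

que = Queue()

def consume(n_items):
    for _ in range(n_items):
        que.get()            # drain every message so the Pipe never fills up
    print('Consumer closing')

def produce(nr):
    que.put([nr] * 1000000)
    print('Producer {} closing'.format(nr))

def main():
    p = Pool(5)
    result = p.apply_async(consume, (5,))
    p.map(produce, range(5))
    result.wait()            # block until the consumer has emptied the queue
    p.close()
    p.join()

if __name__ == '__main__':
    main()
</code></pre>
<p>No <code>None</code> sentinel is needed any more: the consumer knows how many items to expect, so nothing can overtake the data still sitting in the <code>Pipe</code>.</p>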
|
python|multiprocessing|producer-consumer|pool
| 1 |
1,901,789 | 35,689,678 |
Unittest of an encryption code
|
<p>OK, so I believe I am missing something really obvious. I have been trying for a long time to figure this out, and everyone who has tried to help me tells me that I pretty much have it set up correctly, but it doesn't work no matter what test I try. I have been through many; the most promising so far is:</p>
<pre><code>import unittest
from unittest import TestCase
from mock import patch
from encrdecrprog import encryption
class teststuff(TestCase):
def test_encryption(self):
with patch('__bulletin__.raw_input', return_value = 'x') as raw_input:
self.assertEqual(encryption(x), '78')
_raw_input.assert_called_once_with('x')
</code></pre>
<p>I stole this from <a href="https://stackoverflow.com/questions/21046717/python-mocking-raw-input-in-unittests/30875124#30875124">python mocking raw input in unittests</a> I just don't understand how it works, at all...</p>
<p>The code that i want to test is</p>
<pre><code>def enprint():
print(encryption(raw_input()))
def encryption(x):
pur = ("".join("{:02x}".format(ord(c)) for c in x))
#The code for this was shamelessly stolen from https://stackoverflow.com/questions/12214801/print-a-string-as-hex-bytes
return pur
def main():
userinput = raw_input()
if userinput == "1":
enprint()
</code></pre>
<p>I need to figure out how to get the unittest to work, properly. I have an input which is encryption(x), this is called in another method. This input is needed without calling the other method to test it with a unittest. I need to test if the output equals something i already figured out beforehand, that x = 78, so i hashed out this code basically as clear as i can, english is not my first language so sorry if its bad.</p>
<p>Here is the newest try:</p>
<pre><code> import unittest
from encrdecrprog import encryption
class TestStringMethods(unittest.TestCase):
def setup(self):
pass
def test_encryption(self):
self.assertEquals(encryption('x'), 78)
print self.test_encryption
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>Also, what I expect is a test that checks whether <code>encryption('x')</code> really equals '78'.
EDIT: I should add that I am using Python 2.7, and that I use Wing IDE with its built-in exception checker to help me spot what is wrong, just in case it matters.</p>
|
<p>Maybe you need just</p>
<pre><code>self.assertEquals(encryption('x'), "78")
</code></pre>
<p><code>encryption()</code> returns a string, not an integer.</p>
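<p>Putting it together, the full test from your "newest try" with the string comparison fixed (and <code>assertEqual</code> spelled without the deprecated trailing "s"):</p>
<pre><code>import unittest
from encrdecrprog import encryption

class TestStringMethods(unittest.TestCase):
    def test_encryption(self):
        self.assertEqual(encryption('x'), '78')

if __name__ == '__main__':
    unittest.main()
</code></pre>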
|
python|unit-testing|encryption
| 0 |
1,901,790 | 71,041,983 |
Using "x:" in an XML element name
|
<p>I'm trying to create an XML file that needs to be sent to a server, with this format:</p>
<pre><code><x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:sen2="http://www.some_url.com">
<x:Header>
</code></pre>
<p>I'm working with Python 3 and the lxml library, but when trying to create the element I get an error.</p>
<p>Test code:</p>
<pre><code>def authSendDataExtraccion():
envelope = etree.Element("x:Envelope")
debug_XML(envelope)
</code></pre>
<p>Result:</p>
<pre><code>ValueError: Invalid tag name 'x:Envelope'
</code></pre>
<p>How can I use the ":" character in the element and attribute names?</p>
|
<p>Creating the element in a namespace takes two things in lxml: the tag in Clark notation (<code>{namespace-uri}tag</code>), and an <code>nsmap</code> that maps your prefix to that URI so the serializer prints the <code>x:</code> prefix:</p>
<pre><code>envelope = etree.Element("{http://schemas.xmlsoap.org/soap/envelope/}Envelope",
                         nsmap={'x': 'http://schemas.xmlsoap.org/soap/envelope/'})
</code></pre>
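<p>For the full document in the question you also need the second namespace declaration and the <code>Header</code> child; a sketch (the sen2 URL is the placeholder from the question):</p>
<pre><code>from lxml import etree

SOAP_NS = 'http://schemas.xmlsoap.org/soap/envelope/'
NSMAP = {'x': SOAP_NS, 'sen2': 'http://www.some_url.com'}

envelope = etree.Element(etree.QName(SOAP_NS, 'Envelope'), nsmap=NSMAP)
etree.SubElement(envelope, etree.QName(SOAP_NS, 'Header'))
print(etree.tostring(envelope, pretty_print=True).decode())
# <x:Envelope xmlns:x="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sen2="http://www.some_url.com">
#   <x:Header/>
# </x:Envelope>
</code></pre>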
|
python|xml|xml-namespaces
| 1 |
1,901,791 | 60,109,279 |
A function that encrypts files taking too much time to finish
|
<p>I have a program that encrypts my files in a specific location. I built a function for this that loops through the length of a list that stores my files, so if I have 12 files, it will loop 12 times. and then I'm looping through my directory, opening each file for reading and writing bytes and encrypt their data and write it to the file.</p>
<p>The function is working fine, but my problem is that my function is taking a long time to finish, and I don't know why.</p>
<p>Is there any way to improve the performance of my function ?</p>
<p><strong>Encrypt Function:</strong></p>
<p>This function is taking a lot of time to finish.</p>
<pre><code>def encrypt(self):
for _ in range(0, len(files())):
for file in files():
try:
with open(file, 'rb+') as f:
plain_text = f.read()
cipher_text = self.token.encrypt(plain_text)
f.seek(0); f.truncate()
f.write(cipher_text)
except Exception as e:
print(f'{e}')
</code></pre>
<p><strong>Files Function:</strong></p>
<pre><code>def files(pattern='*'):
matches = []
for root, dirnames, filenames in chain(os.walk(desktop_path), os.walk(downloads_path), os.walk(documents_path), os.walk(pictures_path)):
for filename in filenames:
full_path = os.path.join(root, filename)
if filter([full_path], pattern):
matches.append(os.path.join(root, filename))
return matches
</code></pre>
|
<p>Why are you looping over the files nested?</p>
<pre><code>for _ in range(0, len(files())):
for file in files():
</code></pre>
<p>should be just </p>
<pre><code>for file in files():
</code></pre>
<p>If you had 12 files the old code would encrypt each file 12 times.</p>
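<p>For reference, the whole function with the nesting removed; each file is now read and encrypted exactly once per call:</p>
<pre><code>def encrypt(self):
    for file in files():  # files() is evaluated once; every file encrypted once
        try:
            with open(file, 'rb+') as f:
                plain_text = f.read()
                cipher_text = self.token.encrypt(plain_text)
                f.seek(0)
                f.truncate()
                f.write(cipher_text)
        except Exception as e:
            print(f'{e}')
</code></pre>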
|
python|encryption
| 5 |
1,901,792 | 67,676,574 |
Merging pandas.DataFrame
|
<p>I need help. I have to merge these DataFrames (examples below) by adding a new column and putting the percentages there.
If 'lev' < 5000 it's NaN, if 5000 < 'lev' <= 7000 it's 5%, if 7000 < 'lev' <= 10000 it's 7%, and so on.</p>
<pre><code>import pandas as pd
levels = pd.DataFrame({'lev':[5000,7000,10000],'proc':['5%','7%','10%']})
data = pd.DataFrame({'name':['A','B','C','D','E','F','G'],'sum':[6500,3000,15000,1400,8600,5409,9999]})
</code></pre>
<p><a href="https://i.stack.imgur.com/G5Pp2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G5Pp2.png" alt="Image with result which I need Click" /></a></p>
<p>My attempt to solve this is below. It doesn't work, and I don't understand why.</p>
<pre><code>temp = data[data['sum'] >= levels['lev'][2]]
temp['proc']=levels['proc'][2]
lev3 = temp
temp = data[levels['lev'][1]<=data['sum'] and data['sum']<=levels['lev'][2]]
temp['proc']=levels['proc'][1]
lev2 = temp
temp = data[levels['lev'][0]<=data['sum'] and data['sum']<=levels['lev'][1]]
temp['proc']=levels['proc'][0]
lev1 = temp
data = pd.concat([lev1,lev2,lev3,data])
</code></pre>
|
<p>You can apply a function to each row like this:</p>
<pre><code>import pandas as pd
def levels(s):
if 5000 < s <= 7000:
return '5%'
elif 7000 < s <= 10000:
return '7%'
elif s > 10000:
return '10%'
df = pd.DataFrame({'name':['A','B','C','D','E','F','G'],'sum':[6500,3000,15000,1400,8600,5409,9999]})
df['Percent'] = df.apply(lambda x: levels(x['sum']), axis=1)
print(df)
name sum Percent
0 A 6500 5%
1 B 3000 None
2 C 15000 10%
3 D 1400 None
4 E 8600 7%
5 F 5409 5%
6 G 9999 7%
</code></pre>
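<p>If you want the thresholds to come from the <code>levels</code> DataFrame instead of hard-coded constants, <code>pd.cut</code> does the same binning in one line (a sketch; values of 5000 or below fall outside the first bin and come out as NaN):</p>
<pre><code>import numpy as np

bins = list(levels['lev']) + [np.inf]  # [5000, 7000, 10000, inf]
df['Percent'] = pd.cut(df['sum'], bins=bins, labels=levels['proc'])
</code></pre>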
|
python|pandas
| 0 |
1,901,793 | 67,609,256 |
Defining friendships in a Data Frame
|
<p>I have a task in which I need to detect friendly connections. Let me explain: there is a checkpoint at work, and every time an employee passes through it, the passage time and the employee's name are recorded in a database. If an employee often passes through the checkpoint with the same person, we can assume with some probability that the two are friends. The time difference also matters: if two people passed far apart in time, they probably did not even see each other. As an example, I made a small time series:</p>
<pre><code> import pandas as pd
dict_df={
'Data':['2020-02-10 10:00:23','2020-02-10 10:01:23','2020-02-10 10:01:30','2020-02-10 10:01:43',
'2020-02-10 10:02:02','2020-02-10 10:02:30','2020-02-10 10:02:35','2020-02-10 10:02:50',
'2020-02-10 10:02:58','2020-02-10 10:03:02','2020-02-10 10:03:10','2020-02-10 10:03:15',
'2020-02-10 10:03:26','2020-02-10 10:03:32','2020-02-10 10:03:38','2020-02-10 10:03:40',
'2020-02-10 10:03:46','2020-02-10 10:03:50','2020-02-10 10:04:04','2020-02-10 10:04:12',
'2020-02-10 10:04:23','2020-02-10 10:04:27','2020-02-10 10:04:45','2020-02-10 10:04:50',
'2020-02-10 10:04:59','2020-02-10 10:05:20','2020-02-10 10:05:26','2020-02-10 10:05:40',
'2020-02-10 10:05:56','2020-02-10 10:06:12','2020-02-10 10:06:18','2020-02-10 10:06:30',
'2020-02-10 10:06:37'],
'Name':['Ann','Jhon','Chase','Bruce','Evan','Fred','Hugh','Gregory','Jack','Caleb','Eric','James',
'Ann','Gerld','Jess','Juan','Luke','Kyle','Neil','Owen','James','Eric','Jhon','Jess','Norman',
'Hugh','Fred','Gregory','Ryan','Angel','Cole','James','Eric']}
df=pd.DataFrame(dict_df)
</code></pre>
<p>This is what it looks like:</p>
<pre><code> Data Name
0 2020-02-10 10:00:23 Ann
1 2020-02-10 10:01:23 Jhon
2 2020-02-10 10:01:30 Chase
3 2020-02-10 10:01:43 Bruce
4 2020-02-10 10:02:02 Evan
5 2020-02-10 10:02:30 Fred
6 2020-02-10 10:02:35 Hugh
7 2020-02-10 10:02:50 Gregory
8 2020-02-10 10:02:58 Jack
9 2020-02-10 10:03:02 Caleb
10 2020-02-10 10:03:10 Eric
11 2020-02-10 10:03:15 James
12 2020-02-10 10:03:26 Ann
13 2020-02-10 10:03:32 Gerld
14 2020-02-10 10:03:38 Jess
15 2020-02-10 10:03:40 Juan
16 2020-02-10 10:03:46 Luke
17 2020-02-10 10:03:50 Kyle
18 2020-02-10 10:04:04 Neil
19 2020-02-10 10:04:12 Owen
20 2020-02-10 10:04:23 James
21 2020-02-10 10:04:27 Eric
22 2020-02-10 10:04:45 Jhon
23 2020-02-10 10:04:50 Jess
24 2020-02-10 10:04:59 Norman
25 2020-02-10 10:05:20 Hugh
26 2020-02-10 10:05:26 Fred
27 2020-02-10 10:05:40 Gregory
28 2020-02-10 10:05:56 Ryan
29 2020-02-10 10:06:12 Angel
30 2020-02-10 10:06:18 Cole
31 2020-02-10 10:06:30 James
32 2020-02-10 10:06:37 Eric
</code></pre>
<p>I need it to be like this:</p>
<pre><code> Data Name cluster
0 2020-02-10 10:00:23 Ann 0
1 2020-02-10 10:01:23 Jhon 0
2 2020-02-10 10:01:30 Chase 0
3 2020-02-10 10:01:43 Bruce 0
4 2020-02-10 10:02:02 Evan 0
5 2020-02-10 10:02:30 Fred 1
6 2020-02-10 10:02:35 Hugh 1
7 2020-02-10 10:02:50 Gregory 1
8 2020-02-10 10:02:58 Jack 0
9 2020-02-10 10:03:02 Caleb 0
10 2020-02-10 10:03:10 Eric 2
11 2020-02-10 10:03:15 James 2
12 2020-02-10 10:03:26 Ann 0
13 2020-02-10 10:03:32 Gerld 0
14 2020-02-10 10:03:38 Jess 0
15 2020-02-10 10:03:40 Juan 0
16 2020-02-10 10:03:46 Luke 0
17 2020-02-10 10:03:50 Kyle 0
18 2020-02-10 10:04:04 Neil 0
19 2020-02-10 10:04:12 Owen 0
20 2020-02-10 10:04:23 James 2
21 2020-02-10 10:04:27 Eric 2
22 2020-02-10 10:04:45 Jhon 0
23 2020-02-10 10:04:50 Jess 0
24 2020-02-10 10:04:59 Norman 0
25 2020-02-10 10:05:20 Hugh 1
26 2020-02-10 10:05:26 Fred 1
27 2020-02-10 10:05:40 Gregory 1
28 2020-02-10 10:05:56 Ryan 0
29 2020-02-10 10:06:12 Angel 0
30 2020-02-10 10:06:18 Cole 0
31 2020-02-10 10:06:30 James 2
32 2020-02-10 10:06:37 Eric 2
</code></pre>
<p>You can see that Fred, Gregory and Hugh have passed together several times, so a friendly connection is established. Also, James and Eric passed together, so it is also a friendly relationship.</p>
<p>Help me solve this using machine learning, say clustering or graph analysis. Maybe someone has thoughts.</p>
|
<p>No need for clustering algorithms. Such algorithms are useful if your data has multiple traits. In this case, there is only one: arrival time. Simply keep track of how often pairs arrive "together".</p>
<pre><code>loop over arrivals
loop over previous arrivals, recent enough to be friends
increment count for this pair
loop over pairs
if count above minimum, mark as friends
</code></pre>
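<p>The same idea in Python against the <code>df</code> from the question (a sketch; 20-second window, minimum of 2 co-arrivals):</p>
<pre><code>from collections import Counter
import pandas as pd

def find_friends(df, max_gap_sec=20, min_count=2):
    times = pd.to_datetime(df['Data'])
    names = df['Name'].tolist()
    pair_counts = Counter()
    for i in range(len(df)):
        j = i - 1
        # look back at everyone who arrived within max_gap_sec of arrival i
        while j >= 0 and (times.iloc[i] - times.iloc[j]).total_seconds() <= max_gap_sec:
            pair_counts[frozenset((names[i], names[j]))] += 1
            j -= 1
    return {pair: c for pair, c in pair_counts.items() if c >= min_count}

print(find_friends(df))
</code></pre>
<p>With the question's data this returns exactly the four friend pairs shown at the end of this answer.</p>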
<p>With the maximum time between friends arriving set to 20 seconds, and the minimum frequency for a pair to be recognized as friends set to 2, we get:</p>
<pre><code>together count:
Angel Cole 1
Angel James 1
Ann Gerld 1
Ann Jess 1
Ann Juan 1
Ann Luke 1
Bruce Evan 1
Caleb Eric 1
Caleb James 1
Chase Bruce 1
Cole Eric 1
Cole James 1
Eric Ann 1
Eric James 3
Eric Jhon 1
Fred Gregory 2
Fred Hugh 2
Gerld Jess 1
Gerld Juan 1
Gerld Kyle 1
Gerld Luke 1
Gregory Caleb 1
Gregory Eric 1
Gregory Jack 1
Gregory Ryan 1
Hugh Gregory 2
Jack Caleb 1
Jack Eric 1
Jack James 1
James Ann 1
James Gerld 1
Jess Juan 1
Jess Kyle 1
Jess Luke 1
Jess Norman 1
Jhon Bruce 1
Jhon Chase 1
Jhon Jess 1
Jhon Norman 1
Juan Kyle 1
Juan Luke 1
Kyle Neil 1
Luke Kyle 1
Luke Neil 1
Neil James 1
Neil Owen 1
Owen Eric 1
Owen James 1
Ryan Angel 1
</code></pre>
<p>and so the friends are</p>
<pre><code>friends:
Eric James 3
Fred Gregory 2
Fred Hugh 2
Hugh Gregory 2
</code></pre>
<p>You can see C++ code implementing this at <a href="https://gist.github.com/JamesBremner/cba0a5e8bbda9388c3e983c3bc5dfd9b" rel="nofollow noreferrer">https://gist.github.com/JamesBremner/cba0a5e8bbda9388c3e983c3bc5dfd9b</a></p>
|
python|pandas|graph|cluster-analysis
| 1 |
1,901,794 | 67,799,246 |
Weighted random sampler - oversample or undersample?
|
<h2>Problem</h2>
<p>I am training a deep learning model in PyTorch for binary classification, and I have a dataset containing unbalanced class proportions. My minority class makes up about <code>10%</code> of the given observations. To avoid the model learning to just predict the majority class, I want to use the <code>WeightedRandomSampler</code> from <code>torch.utils.data</code> in my <code>DataLoader</code>.</p>
<p>Let's say I have <code>1000</code> observations (<code>900</code> in class <code>0</code>, <code>100</code> in class <code>1</code>), and a batch size of <code>100</code> for my dataloader.</p>
<p>Without weighted random sampling, I would expect each training epoch to consist of 10 batches.</p>
<h2>Questions</h2>
<ul>
<li>Will only 10 batches be sampled per epoch when using this sampler - and consequently, would the model 'miss' a large portion of the majority class during each epoch, since the minority class is now overrepresented in the training batches?</li>
<li>Will using the sampler result in more than 10 batches being sampled per epoch (meaning the same minority class observations may appear many times, and also that training would slow down)?</li>
</ul>
|
<p>A small snippet of code to use <code>WeightedRandomSampler</code><br />
First, define the function:</p>
<pre><code>def make_weights_for_balanced_classes(images, nclasses):
n_images = len(images)
count_per_class = [0] * nclasses
for _, image_class in images:
count_per_class[image_class] += 1
weight_per_class = [0.] * nclasses
for i in range(nclasses):
weight_per_class[i] = float(n_images) / float(count_per_class[i])
weights = [0] * n_images
for idx, (image, image_class) in enumerate(images):
weights[idx] = weight_per_class[image_class]
return weights
</code></pre>
<p>And after this, use it in the following way:</p>
<pre><code>import torch
from torchvision import datasets

dataset_train = datasets.ImageFolder(traindir)
# For unbalanced dataset we create a weighted sampler
weights = make_weights_for_balanced_classes(dataset_train.imgs, len(dataset_train.classes))
weights = torch.DoubleTensor(weights)
sampler = torch.utils.data.sampler.WeightedRandomSampler(weights, len(weights))
# note: shuffle must be left unset; it is mutually exclusive with a custom sampler
train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=args.batch_size,
                              sampler=sampler, num_workers=args.workers, pin_memory=True)
</code></pre>
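<p>To answer the questions directly: the second argument to <code>WeightedRandomSampler</code> (here <code>len(weights)</code>) is <code>num_samples</code>, the number of draws per epoch. Keeping it equal to the dataset size means you still get the same number of batches per epoch, and since sampling is with replacement by default, minority-class observations will appear several times within an epoch while some majority-class observations are skipped; per-epoch training time is unchanged.</p>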
|
pytorch|oversampling|pytorch-dataloader
| 2 |
1,901,795 | 30,713,756 |
Accumulate counts of a column's data values as separate columns
|
<p>I am trying to get a specific format of csv so another code can read it properly. I have ordered it using Ordereddicts but it takes much longer, and my plotting code is giving me "StringIO() takes no keyword arguments" error. Although I think I could probably fix that, I prefer my value_counts method anyway because it is much faster. I get a csv file with the correct information, the step I need next is just formatting. I've looked up multiple threads on similar issues but not how to sort this particular way. </p>
<p>My code: </p>
<pre><code>import csv
import numpy as np
import pandas as pd
from collections import defaultdict, Counter
import pandas.util.testing as tm; tm.N = 3
data = pd.DataFrame.from_csv('MYDATA.csv')
data[['QualityIssue','CompanyName']]
data['QualityIssue'].value_counts()
RatedCustomerCallers = data['CompanyName'].value_counts()
TopCustomerCallers = RatedCustomerCallers[0:18]
print(TopCustomerCallers)
TopCustomerCallers.to_csv('topcustomercallerslist.csv')
byqualityissue = data.groupby(["CompanyName","QualityIssue"]).size()
print byqualityissue
byqualityissue.to_csv('byqualityissue.csv', header=True)
</code></pre>
<p>Output: </p>
<pre><code>CompanyName, QualityIssue, 0
Company 1, Equipment Error, 15
Company 2, User Error, 1
Company 2, Equipment Error, 5
Company 3, Equipment Error, 3
Company 3, User Error, 10
Company 3, Neither, 13
</code></pre>
<p>Where Company names are repeated for each type of issue.</p>
<p>However, I want it to be sorted by Top calling customers (added number of Equipment, User, Neither calls) and display in this way:</p>
<pre><code>Top Calling Customers, Equipment, User, Neither,
Company 3, 3, 10, 13,
Company 1, 15, 0, 0,
Customer 2, 5, 1, 0,
</code></pre>
<p>I tried using a pivot table </p>
<pre><code>df = pd.DataFrame(byqualityissue)
df.pivot(index='CompanyName', columns='QualityIssue', values='0')
</code></pre>
<p>But it's giving me KeyError: '0' which is strange since I put that in for the input for values. Also, I am not sure it will work since each customer's output is only the type they called in. As in, Company 1 only had equipment error calls so it doesn't list them for User Error or Neither calls. Not sure if a pivot table will account for this.</p>
|
<p>In the spirit of StackOverflow, here is how I solved my issue.</p>
<pre><code>import numpy as np
import pandas as pd
import pandas.util.testing as tm; tm.N = 3

data = pd.DataFrame.from_csv('MYDATA.csv')
byqualityissue = data.groupby(["CompanyName","QualityIssue"]).size()
df = pd.DataFrame(byqualityissue)
formatted = df.unstack(level=-1)
formatted = formatted.fillna(0)  # companies with no calls of a given type get 0
formatted.to_csv('byqualityissue.csv', header=True)
includingtotals = pd.concat([formatted, pd.DataFrame(formatted.sum(axis=1), columns=['Total'])], axis=1)
result = includingtotals.sort_values(by='Total', ascending=False)  # avoid shadowing the sorted() builtin
</code></pre>
<p>I used unstack to reorganize my data, replaced NaN values with a 0, added up all the rows and appended a new column with those values, then sorted. </p>
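<p>For the record, the pivot attempt failed because after <code>groupby().size()</code> the keys live in the index and the count column is the integer <code>0</code>, not the string <code>'0'</code>. A shorter route (a sketch against the original frame) is to let <code>pivot_table</code> do the counting itself:</p>
<pre><code>pivoted = data.pivot_table(index='CompanyName', columns='QualityIssue',
                           aggfunc='size', fill_value=0)
pivoted['Total'] = pivoted.sum(axis=1)
pivoted = pivoted.sort_values('Total', ascending=False)
pivoted.to_csv('byqualityissue.csv')
</code></pre>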
|
python|csv|pandas
| 1 |
1,901,796 | 67,134,550 |
getting 400 error when trying to update Webex user through API
|
<p>I'm trying to update a value received from the API as JSON and send it back.
I updated the value as in the following code, but when I try to send it back with PUT I get a 400 Bad Request error.</p>
<p>the API can be found here: <a href="https://developer.webex.com/docs/api/v1/people/update-a-person" rel="nofollow noreferrer">Webex Update a Person API</a></p>
<p>could someone help me figure what am I doing wrong?</p>
<p>thanks a lot</p>
<pre><code>def get_userID(user_id):
session = HTMLSession()
header = {
"Authorization": "Bearer %s" % token,
"Content-Type": "application/json",
}
    webex_user_details = session.get(
        f"https://webexapis.com/v1/people/{user_id}",
        headers=header,
        verify=True,
    )
user_details = webex_user_details.json()
user_details["emails"][0] = "newemail123@example.com"
webex_user_details = session.put(
f"https://webexapis.com/v1/people/{user_id}",
headers=header,
data=user_details,
verify=True,
)
print(webex_user_details)
</code></pre>
|
<p>I figured out that I need to dump the updated data as JSON before sending it:</p>
<pre><code> user_details = json.dumps(user_details)
</code></pre>
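<p>Alternatively, since <code>HTMLSession</code> builds on requests, you can pass the dict through the <code>json</code> keyword instead of <code>data</code> and let requests serialize it (it also sets the Content-Type header for you):</p>
<pre><code>webex_user_details = session.put(
    f"https://webexapis.com/v1/people/{user_id}",
    headers=header,
    json=user_details,
    verify=True,
)
</code></pre>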
|
json|python-3.x|api|put|webex
| 0 |
1,901,797 | 63,863,036 |
Filtering columns based on row values in Pandas
|
<p>I am trying to create dataframes from this "master" dataframe based on the unique entries in row 2.</p>
<pre><code> DATE PROP1 PROP1 PROP1 PROP1 PROP1 PROP1 PROP2 PROP2 PROP2 PROP2 PROP2 PROP2 PROP2 PROP2
1 DAYS MEAN MEAN MEAN MEAN MEAN MEAN MEAN MEAN MEAN MEAN MEAN MEAN MEAN MEAN
2 UNIT1 UNIT2 UNIT3 UNIT4 UNIT5 UNIT6 UNIT7 UNIT8 UNIT3 UNIT4 UNIT11 UNIT12 UNIT1 UNIT2
3
4 1/1/2020 677 92 342 432 878 831 293 88 69 621 586 576 972 733
5 2/1/2020 515 11 86 754 219 818 822 280 441 11 123 36 430 272
6 3/1/2020 253 295 644 401 574 184 354 12 680 729 823 822 174 602
7 4/1/2020 872 568 505 652 366 982 159 131 218 961 52 85 679 923
8 5/1/2020 93 58 864 682 346 19 293 19 206 500 793 962 630 413
9 6/1/2020 696 262 833 418 876 695 900 781 179 138 143 526 9 866
10 7/1/2020 810 58 579 244 81 858 362 440 186 425 55 920 345 596
11 8/1/2020 834 609 618 214 547 834 301 875 783 216 834 609 550 274
12 9/1/2020 687 935 976 380 885 246 339 904 627 460 659 352 361 793
13 10/1/2020 596 300 810 248 475 718 350 574 825 804 245 209 212 925
14 11/1/2020 584 984 711 879 916 107 277 412 122 683 151 811 129 4
15 12/1/2020 616 515 101 743 650 526 475 991 796 227 880 692 734 799
16 1/1/2021 106 441 305 964 452 249 282 486 374 620 652 793 115 697
17 2/1/2021 969 504 936 678 67 42 985 791 709 689 520 503 102 731
18 3/1/2021 823 169 412 177 783 601 613 251 533 463 13 127 516 15
19 4/1/2021 348 588 140 966 143 576 419 611 128 830 68 209 952 935
20 5/1/2021 96 711 651 121 708 360 159 229 552 951 79 665 709 165
21 6/1/2021 805 657 729 629 249 547 581 583 236 828 636 248 412 535
22 7/1/2021 286 320 908 765 336 286 148 168 821 567 63 908 248 320
23 8/1/2021 707 975 565 699 47 712 700 439 497 106 288 105 872 158
24 9/1/2021 346 523 142 181 904 266 28 740 125 64 287 707 553 437
25 10/1/2021 245 42 773 591 492 512 846 487 983 180 372 306 785 691
26 11/1/2021 785 577 448 489 425 205 672 358 868 637 104 422 873 919
</code></pre>
<p>so the output will look something like this</p>
<p><strong>df_unit1</strong></p>
<pre><code> DATE PROP1 PROP2
1 DAYS MEAN MEAN
2 UNIT1 UNIT1
3
4 1/1/2020 677 972
5 2/1/2020 515 430
6 3/1/2020 253 174
7 4/1/2020 872 679
8 5/1/2020 93 630
9 6/1/2020 696 9
10 7/1/2020 810 345
11 8/1/2020 834 550
12 9/1/2020 687 361
13 10/1/2020 596 212
14 11/1/2020 584 129
15 12/1/2020 616 734
16 1/1/2021 106 115
17 2/1/2021 969 102
18 3/1/2021 823 516
19 4/1/2021 348 952
20 5/1/2021 96 709
21 6/1/2021 805 412
22 7/1/2021 286 248
23 8/1/2021 707 872
24 9/1/2021 346 553
25 10/1/2021 245 785
26 11/1/2021 785 873
</code></pre>
<p><strong>df_unit2</strong></p>
<pre><code> DATE PROP1 PROP2
1 DAYS MEAN MEAN
2 UNIT2 UNIT2
3
4 1/1/2020 92 733
5 2/1/2020 11 272
6 3/1/2020 295 602
7 4/1/2020 568 923
8 5/1/2020 58 413
9 6/1/2020 262 866
10 7/1/2020 58 596
11 8/1/2020 609 274
12 9/1/2020 935 793
13 10/1/2020 300 925
14 11/1/2020 984 4
15 12/1/2020 515 799
16 1/1/2021 441 697
17 2/1/2021 504 731
18 3/1/2021 169 15
19 4/1/2021 588 935
20 5/1/2021 711 165
21 6/1/2021 657 535
22 7/1/2021 320 320
23 8/1/2021 975 158
24 9/1/2021 523 437
25 10/1/2021 42 691
26 11/1/2021 577 919
</code></pre>
<p>I have extracted the unique units from the row</p>
<pre><code>unitName = pd.Series(pd.Series(df[2,:]).unique(), name = "Unit Names")
unitName = unitName.tolist()
</code></pre>
<p>Next I was planning to loop through this list of unique units and create dataframes with each units</p>
<pre><code>for unit in unitName:
df_unit = df.iloc[[df.iloc[2:,:].str.match(unit)],:]
print(df_unit)
</code></pre>
<p>I am getting an error that 'DataFrame' object has no attribute 'str'. So my plan was to match all the cells in row 2 that match a given unit and then extract the entire column for each matched cell.</p>
|
<p>Here is what worked for me.</p>
<pre><code>#Assign unique column names to the dataframe
df.columns = range(df.shape[1])
#Get all the unique units in the dataframe
unitName = pd.Series(pd.Series(df.loc[2,:]).unique(), name = "Unit Names")
#Convert them to a list to loop through
unitName = unitName.tolist()
for var in unitName:
    # look for an exact match for the unit in the row that holds
    # the unit names (label 2, as used above) and take every matching column
    df_item = df[df.columns[df.loc[2].str.fullmatch(var)]]
    print(df_item)
print (df_item)
</code></pre>
|
python|pandas|match|rows
| 0 |
1,901,798 | 63,799,400 |
How to split dataframe at headers that are in a row
|
<p>I've got a page I'm scraping and most of the tables are in the format Heading --info. I can iterate through most of the tables and create separate dataframes for all the various information using pandas.read_html.</p>
<p>However, there are some where they've combined information into one table with subheadings that I want to be separate dataframes with the text of that row as the heading (appending a list).</p>
<p>Is there an easy way to split this dataframe? It will always be a heading followed by its associated rows, then a new heading followed by its associated rows.</p>
<p>eg.</p>
<pre><code> Col1 Col2
0 thing
1 1 2
2 2 3
3 thing2
4 1 2
5 2 3
6 3 4
</code></pre>
<p>Should be</p>
<pre><code>thing
1 1 2
2 2 3

thing2
4 1 2
5 2 3
6 3 4
</code></pre>
<p>It'd be nice if people would just create web pages that made sense with the data but that's not the case here.</p>
<p>I've tried iterrows but cannot seem to come up with a good way to create what I want.</p>
<p>Help would be very much appreciated!</p>
<pre><code><div class="ranking">
<h6><a href="javascript:;">Sprint</a></h6>
<table>
<tbody>
</tbody>
<tbody>
<tr>
<td class="title" colspan="8">Canneto - km 137</td>
</tr>
<tr>
<td class="rank"><span title="Rank">1</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">21</span></td>
<td class="name">
<a class="10010085859" href="javascript:;">
<abbr title="Young rider">*</abbr>
BAGIOLI Nicola
</a>
</td>
<td class="team"><img alt="ANDRONI GIOCATTOLI - SIDERMEC" src="/Content/images/event/2020/tirreno-adriatico/jerseys/ANS.png" title="ANDRONI GIOCATTOLI - SIDERMEC"/></td>
<td class="noc"><img alt="ITA" src="/Content/images/flags/ITA.png" title="ITA"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">5</td>
</tr>
<tr>
<td class="rank"><span title="Rank">2</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">54</span></td>
<td class="name">
<a class="10008688453" href="javascript:;">
ORSINI Umberto
</a>
</td>
<td class="team"><img alt="BARDIANI CSF FAIZANE'" src="/Content/images/event/2020/tirreno-adriatico/jerseys/BCF.png" title="BARDIANI CSF FAIZANE'"/></td>
<td class="noc"><img alt="ITA" src="/Content/images/flags/ITA.png" title="ITA"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">3</td>
</tr>
<tr>
<td class="rank"><span title="Rank">3</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">247</span></td>
<td class="name">
<a class="10005658114" href="javascript:;">
ZARDINI Edoardo
</a>
</td>
<td class="team"><img alt="VINI ZABU' KTM" src="/Content/images/event/2020/tirreno-adriatico/jerseys/THR.png" title="VINI ZABU' KTM"/></td>
<td class="noc"><img alt="ITA" src="/Content/images/flags/ITA.png" title="ITA"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">2</td>
</tr>
<tr>
<td class="rank"><span title="Rank">4</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">63</span></td>
<td class="name">
<a class="10003349312" href="javascript:;">
BODNAR Maciej
</a>
</td>
<td class="team"><img alt="BORA - HANSGROHE" src="/Content/images/event/2020/tirreno-adriatico/jerseys/BOH.png" title="BORA - HANSGROHE"/></td>
<td class="noc"><img alt="POL" src="/Content/images/flags/POL.png" title="POL"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">1</td>
</tr>
</tbody>
<tbody>
<tr>
<td class="title" colspan="8">Follonica - km 190</td>
</tr>
<tr>
<td class="rank"><span title="Rank">1</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">62</span></td>
<td class="name">
<a class="10007738055" href="javascript:;">
ACKERMANN Pascal
</a>
</td>
<td class="team"><img alt="BORA - HANSGROHE" src="/Content/images/event/2020/tirreno-adriatico/jerseys/BOH.png" title="BORA - HANSGROHE"/></td>
<td class="noc"><img alt="GER" src="/Content/images/flags/GER.png" title="GER"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">12</td>
</tr>
<tr>
<td class="rank"><span title="Rank">2</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">231</span></td>
<td class="name">
<a class="10008656828" href="javascript:;">
GAVIRIA RENDON Fernando
</a>
</td>
<td class="team"><img alt="UAE TEAM EMIRATES" src="/Content/images/event/2020/tirreno-adriatico/jerseys/UAD.png" title="UAE TEAM EMIRATES"/></td>
<td class="noc"><img alt="COL" src="/Content/images/flags/COL.png" title="COL"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">10</td>
</tr>
<tr>
<td class="rank"><span title="Rank">3</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">137</span></td>
<td class="name">
<a class="10007506366" href="javascript:;">
ZABEL Rick
</a>
</td>
<td class="team"><img alt="ISRAEL START - UP NATION" src="/Content/images/event/2020/tirreno-adriatico/jerseys/ISN.png" title="ISRAEL START - UP NATION"/></td>
<td class="noc"><img alt="GER" src="/Content/images/flags/GER.png" title="GER"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">8</td>
</tr>
<tr>
<td class="rank"><span title="Rank">4</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">91</span></td>
<td class="name">
<a class="10008661777" href="javascript:;">
BALLERINI Davide
</a>
</td>
<td class="team"><img alt="DECEUNINCK - QUICK - STEP " src="/Content/images/event/2020/tirreno-adriatico/jerseys/DQT.png" title="DECEUNINCK - QUICK - STEP "/></td>
<td class="noc"><img alt="ITA" src="/Content/images/flags/ITA.png" title="ITA"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">7</td>
</tr>
<tr>
<td class="rank"><span title="Rank">5</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">12</span></td>
<td class="name">
<a class="10007096239" href="javascript:;">
MERLIER Tim
</a>
</td>
<td class="team"><img alt="ALPECIN - FENIX" src="/Content/images/event/2020/tirreno-adriatico/jerseys/AFC.png" title="ALPECIN - FENIX"/></td>
<td class="noc"><img alt="BEL" src="/Content/images/flags/BEL.png" title="BEL"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">6</td>
</tr>
<tr>
<td class="more" colspan="8"><a href="javascript:;">More...</a></td>
</tr>
<tr style="display: none;">
<td class="rank"><span title="Rank">6</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">133</span></td>
<td class="name">
<a class="10028417041" href="javascript:;">
CIMOLAI Davide
</a>
</td>
<td class="team"><img alt="ISRAEL START - UP NATION" src="/Content/images/event/2020/tirreno-adriatico/jerseys/ISN.png" title="ISRAEL START - UP NATION"/></td>
<td class="noc"><img alt="ITA" src="/Content/images/flags/ITA.png" title="ITA"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">5</td>
</tr>
<tr style="display: none;">
<td class="rank"><span title="Rank">7</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">213</span></td>
<td class="name">
<a class="10007216275" href="javascript:;">
MANZIN Lorrenzo
</a>
</td>
<td class="team"><img alt="TOTAL DIRECT ENERGIE" src="/Content/images/event/2020/tirreno-adriatico/jerseys/TDE.png" title="TOTAL DIRECT ENERGIE"/></td>
<td class="noc"><img alt="FRA" src="/Content/images/flags/FRA.png" title="FRA"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">4</td>
</tr>
<tr style="display: none;">
<td class="rank"><span title="Rank">8</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">23</span></td>
<td class="name">
<a class="10007744624" href="javascript:;">
PACIONI Luca
</a>
</td>
<td class="team"><img alt="ANDRONI GIOCATTOLI - SIDERMEC" src="/Content/images/event/2020/tirreno-adriatico/jerseys/ANS.png" title="ANDRONI GIOCATTOLI - SIDERMEC"/></td>
<td class="noc"><img alt="ITA" src="/Content/images/flags/ITA.png" title="ITA"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">3</td>
</tr>
<tr style="display: none;">
<td class="rank"><span title="Rank">9</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">147</span></td>
<td class="name">
<a class="10010946028" href="javascript:;">
<abbr title="Young rider">*</abbr>
VERMEERSCH Florian
</a>
</td>
<td class="team"><img alt="LOTTO SOUDAL" src="/Content/images/event/2020/tirreno-adriatico/jerseys/LTS.png" title="LOTTO SOUDAL"/></td>
<td class="noc"><img alt="BEL" src="/Content/images/flags/BEL.png" title="BEL"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">2</td>
</tr>
<tr style="display: none;">
<td class="rank"><span title="Rank">10</span></td>
<td class="any-progression"></td>
<td class="bib"><span title="Bib">195</span></td>
<td class="name">
<a class="10006631548" href="javascript:;">
TEUNISSEN Mike
</a>
</td>
<td class="team"><img alt="JUMBO - VISMA" src="/Content/images/event/2020/tirreno-adriatico/jerseys/TJV.png" title="JUMBO - VISMA"/></td>
<td class="noc"><img alt="NED" src="/Content/images/flags/NED.png" title="NED"/></td>
<td class="bonif">
</td>
<td class="points" title="Points">1</td>
</tr>
</tbody>
</table>
</div>
</code></pre>
|
<p>You can use <code>np.split()</code></p>
<pre><code>import numpy as np
res = [x.reset_index(drop=True) for x in np.split(df, np.where(df.applymap(lambda x: x == ''))[0]) if not x.empty]
for x in res:
x = x.rename(columns=x.iloc[0]).drop(x.index[0])
print(x)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> thing
1 1 2
2 2 3
thing2
1 1 2
2 2 3
3 3 4
</code></pre>
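<p>An equivalent approach without <code>np.split</code> (same assumption as above: heading rows are the ones with an empty second column) is to group on a cumulative sum of the header mask:</p>
<pre><code>groups = df['Col2'].eq('').cumsum()
for _, g in df.groupby(groups):
    print(g.iloc[0, 0])                       # the heading, e.g. 'thing'
    print(g.iloc[1:].reset_index(drop=True))  # the rows belonging to it
</code></pre>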
|
python|pandas|dataframe|split
| 1 |
1,901,799 | 42,736,541 |
how to print the clipboard (which file it contains)
|
<p>I want to print the clipboard contents with this program.
Instead of printing the given output, I want to print what the clipboard contains (which file it holds).</p>
<pre><code>def GetTextFromClipboard(self):
"""
"""
clipboard = wx.Clipboard()
if clipboard.Open():
if clipboard.IsSupported(wx.DataFormat(wx.DF_FILENAME)):
data = wx.FileDataObject()
clipboard.GetData(data)
s = data.GetText()
self.tc.AppendText("Clip content:\n%s\n\n" % s )
clipboard.Close()
else:
self.tc.AppendText("")
</code></pre>
|
<p>If I understand the question correctly, you want the <em>path</em> of the file on the clipboard and not plain <em>text</em>. In that case you need <code>wx.DF_FILENAME</code> (instead of <code>wx.DF_TEXT</code>) together with <code>wx.FileDataObject</code>, as in your snippet, and you read the result with its <code>GetFilenames()</code> method, which returns a list of file paths; <code>FileDataObject</code> has no <code>GetText()</code>.</p>
|
windows|python-2.7|wxwidgets|clipboard|pywin32
| 0 |