Unnamed: 0 | id | title | question | answer | tags | score
---|---|---|---|---|---|---|
1,904,100 | 41,900,315 |
contains_eager and limits in SQLAlchemy
|
<p>I have 2 classes:</p>
<pre><code>class A(Base):
    id = Column(Integer, primary_key=True)
    name = Column(String)
    children = relationship('B')

class B(Base):
    id = Column(Integer, primary_key=True)
    id_a = Column(Integer, ForeignKey('a.id'))
    name = Column(String)
</code></pre>
<p>Now I need all A objects that contain a B with a given name, and each returned A should hold only the B children that passed the filter.</p>
<p>To achieve this I build the query:</p>
<pre><code>query = db.session.query(A).join(B).options(db.contains_eager(A.children)).filter(B.name=='SOME_TEXT')
</code></pre>
<p>Now I need only 50 items from the query, so I do:</p>
<pre><code>query.limit(50).all()
</code></pre>
<p>The result contains fewer than 50 items, even though without the limit there are more than 50. I read The Zen of Eager Loading, but there must be some trick to achieve this. One of my ideas is to make 2 queries: one with an inner join to take the IDs, then use those IDs in the first query.</p>
<p>But maybe there is a better solution for this.</p>
|
<p>First, take a step back and look at the SQL. Your current query is</p>
<pre><code>SELECT * FROM a JOIN b ON b.id_a = a.id WHERE b.name = '...' LIMIT 50;
</code></pre>
<p>Notice the limit is on <code>a JOIN b</code> and not <code>a</code>, but if you put the limit on <code>a</code> alone you can't filter by the field in <code>b</code>. There are two solutions to this problem. The first is to use a correlated <code>EXISTS</code> subquery to filter on <code>b.name</code>, like this:</p>
<pre><code>SELECT * FROM a
WHERE EXISTS (SELECT 1 FROM b WHERE b.id_a = a.id AND b.name = '...')
LIMIT 50;
</code></pre>
<p>This can be inefficient depending on the DB backend. The second solution is to do a DISTINCT on <code>a</code> after the join, like this:</p>
<pre><code>SELECT DISTINCT a.* FROM a JOIN b ON b.id_a = a.id
WHERE b.name = '...'
LIMIT 50;
</code></pre>
<p>Notice how in either case you do not get any column from <code>b</code>. How do we get them? Do another join!</p>
<pre><code>SELECT * FROM (
    SELECT DISTINCT a.* FROM a JOIN b ON b.id_a = a.id
    WHERE b.name = '...'
    LIMIT 50
) a JOIN b ON b.id_a = a.id
WHERE b.name = '...';
</code></pre>
<p>Now, to write all of this in SQLAlchemy:</p>
<pre><code>from sqlalchemy.orm import aliased, contains_eager

subquery = (
    session.query(A)
    .join(B)
    .with_entities(A)  # only select A's columns
    .filter(B.name == '...')
    .distinct()
    .limit(50)
    .subquery()  # convert to subquery
)
aliased_A = aliased(A, subquery)
query = (
    session.query(aliased_A)
    .join(B)
    .options(contains_eager(aliased_A.children))
    .filter(B.name == "...")
)
</code></pre>
|
python|sqlalchemy
| 3 |
1,904,101 | 48,066,384 |
Recursion Error with Surface_polygon
|
<p>I'm having a recursion-related error that I think is very basic, but I'm not seeing the problem, although I think it's in the stopping condition.
Basically, I have a list of lists, where each inner list is a floor with around 30 points. I'm using recursion to go over each floor and make a surface polygon for it, and then I make an extrusion with each surface polygon in another function.</p>
<p>Below are the code and the error message. It builds the 75 floors I want, but once the list of floors is empty it errors.</p>
<pre><code>def pisos_generico(list_lists):
    if list_lists == []:
        pass
    else:
        return surface_polygon (list_lists[0]) + pisos_generico(list_lists[1:])

def pisos (list_lists):
    return extrusion(pisos_generico (list_lists))
</code></pre>
<p>This is the error code:</p>
<pre><code>Traceback (most recent call last):
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 547, in <module>
    pisos (xy(0,0), 1.41, 0.02, 75)
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 529, in pisos
    return extrusion(pisos_generico (piso_pisos_rodados (lista, n_andares, p, a, a_torcao)), vz(0.3))
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 503, in pisos_generico
    return surface_polygon (list_lists[0]) + pisos_generico(list_lists[1:])
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 503, in pisos_generico
    return surface_polygon (list_lists[0]) + pisos_generico(list_lists[1:])
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 503, in pisos_generico
    return surface_polygon (list_lists[0]) + pisos_generico(list_lists[1:])
  [Previous line repeated 72 more times]
TypeError: unsupported operand type(s) for +: 'surface_polygon' and 'NoneType'
</code></pre>
<hr>
<p>Updated version:</p>
<pre><code>def pisos (p, a, a_torcao, n_andares):
    return pisos_generico (lista_listas)

def pisos_generico(list_lists):
    if list_lists == []:
        []
    else:
        return [extrusion(surface_polygon (list_lists[0]), vz(0.3))] + pisos_generico(list_lists[1:])
</code></pre>
<p>And the new Error message, pretty much like the old one:</p>
<pre><code>Traceback (most recent call last):
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 548, in <module>
    pisos (xy(0,0), 1.41, 0.02, 75)
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 530, in pisos
    return pisos_generico (lista_listas)
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 503, in pisos_generico
    return [extrusion(surface_polygon (list_lists[0]), vz(0.3))] + pisos_generico(list_lists[1:])
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 503, in pisos_generico
    return [extrusion(surface_polygon (list_lists[0]), vz(0.3))] + pisos_generico(list_lists[1:])
  File "D:\Universidade\3o Ano\PCA\Projeto\First Try Base Planta 2.py", line 503, in pisos_generico
    return [extrusion(surface_polygon (list_lists[0]), vz(0.3))] + pisos_generico(list_lists[1:])
  [Previous line repeated 72 more times]
TypeError: can only concatenate list (not "NoneType") to list
</code></pre>
<p>Thank you so much for the help</p>
|
<p>Functions implicitly return <code>None</code> if you return nothing. This happens in</p>
<pre><code>def pisos_generico(list_lists):
    if list_lists == []:
        pass                # nothing is returned here, so the function
                            # implicitly returns None for an empty list
    else:
        return surface_polygon (list_lists[0]) + pisos_generico(list_lists[1:])
        #                                        ^^^^^^^^^^^^^^
        # once list_lists[1:] is empty, this recursive call returns None
</code></pre>
<p>You probably need something like:</p>
<pre><code>def pisos_generico(list_lists):
    if list_lists == []:        # you can delete this ``... == []: pass`` part
        pass                    # and make the next branch a normal if instead
    elif len(list_lists) == 1:  # of elif; just make sure you do not call the
        return surface_polygon (list_lists[0])  # function with an empty list
    else:                       # to start with
        return surface_polygon (list_lists[0]) + pisos_generico(list_lists[1:])
</code></pre>
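<p>A minimal sketch of a variant that always returns a list, so the empty-list case is safe too (assuming <code>surface_polygon</code>, <code>extrusion</code> and <code>vz</code> come from your CAD library):</p>
<pre><code>def pisos_generico(list_lists):
    if not list_lists:
        return []  # base case: an empty list instead of None
    return [extrusion(surface_polygon(list_lists[0]), vz(0.3))] + pisos_generico(list_lists[1:])
</code></pre>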
|
python|recursion
| 0 |
1,904,102 | 64,474,376 |
How can I use a swift package in python?
|
<p>I am trying to interface Swift functions to Python. Python can call any <code>@_cdecl</code> functions in a <code>.dylib</code>.</p>
<p>For a project directory with a single Swift source:</p>
<pre><code>project/
test.swift
test.py
</code></pre>
<p>I can run <code>swiftc test.swift -emit-library</code> to generate a .dylib file.</p>
<p>More advanced, using a Swift Package, it looks like this:</p>
<pre><code>project/
TestPackage/
...
test.py
</code></pre>
<p>In my Swift Package, I can pass the <code>-emit-library</code> parameter to the Swift compiler within <code>swift build</code> like so: <code>swift build -Xswiftc -emit-library</code>. This exports my package to a .dylib.</p>
<p>My problem is adding dependencies to my package. I added the SwifterSwift package as a dependency just to test, and ran <code>swift build -Xswiftc -emit-library</code>. I got this error:</p>
<pre><code>swift build -Xswiftc -emit-library
Undefined symbols for architecture x86_64:
"_$s10Foundation4DateV12SwifterSwiftE11weekOfMonthSivg", referenced from:
_$s11TestPackage6swiftyyyF in TestPackage.swift.o
ld: symbol(s) not found for architecture x86_64
<unknown>:0: error: link command failed with exit code 1 (use -v to see invocation)
</code></pre>
<p>However, it looks like SwifterSwift exported a .dylib successfully, but my main package, TestPackage, did not. <code>swift build</code> did work on its own, but it does not reach my goal of generating a .dylib.</p>
<p><strong>Question:</strong></p>
<p>How can I get the whole package to compile as .dylib with dependencies? Am I missing a linker command?</p>
|
<p>After hours of experimenting with <code>swift build</code> and <code>swiftc</code> variations, I found the much easier solution:</p>
<p>In the package manifest (<code>Package.swift</code>), set the library type to dynamic:</p>
<p><code>.library(name: "TestPackage", type: .dynamic, targets: ["TestPackage"]),</code></p>
<p>After <code>swift build</code> (no fancy parameters), the .dylib is found at:</p>
<p><code>TestPackage/.build/x86_64-apple-macosx/debug/libTestPackage.dylib</code></p>
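<p>From there, a minimal sketch of calling into the library from Python with <code>ctypes</code> (the <code>add</code> function and its signature here are hypothetical; expose whatever <code>@_cdecl</code> functions you need on the Swift side):</p>
<pre><code>import ctypes

lib = ctypes.CDLL("TestPackage/.build/x86_64-apple-macosx/debug/libTestPackage.dylib")

# assuming the Swift side exposes: @_cdecl("add") public func add(_ a: Int32, _ b: Int32) -> Int32
lib.add.argtypes = [ctypes.c_int32, ctypes.c_int32]
lib.add.restype = ctypes.c_int32
print(lib.add(2, 3))  # -> 5
</code></pre>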
<p>Now I can use some Swift Packages in Python!</p>
|
python|swift|xcode|xcodebuild|swift-package-manager
| 2 |
1,904,103 | 64,585,016 |
Why do I get an additional '/' character when using BeautifulSoup's find_all function?
|
<p>I tried to find image tags from a HTML page like this:</p>
<p><code><img src="../img/gifts/img1.jpg"></code></p>
<p><code><img src="../img/gifts/img1.jpg"></code></p>
<p>etc....</p>
<p>but when I use this code from Web Scraping 2 - author: Ryan Mitchell</p>
<pre><code>from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
html = urlopen('http://www.pythonscraping.com/pages/page3.html')
bs = BeautifulSoup(html,'html.parser')
images = bs.find_all('img',{'src':re.compile('\.\.\/img\/gifts/img.*\.jpg')})
</code></pre>
<p>the list of tags I received look like this:</p>
<pre><code>[<img src="../img/gifts/img1.jpg"/>,
<img src="../img/gifts/img2.jpg"/>,
<img src="../img/gifts/img3.jpg"/>,
<img src="../img/gifts/img4.jpg"/>,
<img src="../img/gifts/img6.jpg"/>]
</code></pre>
<p>I see that there is an additional '/' character at the end of each tag. Can someone explain this for me?
Thanks so much</p>
|
<p>In HTML, tags that don't have an end tag (void elements such as <code>img</code>) can be self-closed with <code>/></code>. This is optional in most HTML versions, but mandatory in XHTML, and it is considered good practice. BeautifulSoup automatically adds it when it serializes the parsed DOM back to text.</p>
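<p>A quick sketch showing that only the serialization changes, not the parsed data:</p>
<pre><code>from bs4 import BeautifulSoup

soup = BeautifulSoup('<img src="../img/gifts/img1.jpg">', 'html.parser')
print(soup.img)         # <img src="../img/gifts/img1.jpg"/>  <- "/" added on output
print(soup.img['src'])  # ../img/gifts/img1.jpg               <- attribute unchanged
</code></pre>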
|
python|html|beautifulsoup
| 0 |
1,904,104 | 64,576,979 |
How to properly slice this dataframe?
|
<p>I have a dataframe with 18000 rows and I want to slice it into 18 dataframes. I am doing this:</p>
<pre><code>p1 = df[0:1000]
p2 = df[1001:2001]
p3 = df[2002:3002]
p4 = df[3003:4003]
p5 = df[4004:5004]
p6 = df[5005:6005]
p7 = df[6006:7006]
p8 = df[7007:8007]
p9 = df[8008:9008]
p10 = df[9009:10009]
p11 = df[10013:11013]
p12 = df[11014:12014]
p13 = df[12015:13015]
p14 = df[13016:14016]
p15 = df[14017:15017]
p16 = df[15018:16018]
p17 = df[16019:17019]
p18 = df[17020:18020]
</code></pre>
<p>Is there any other way of doing this more efficiently?</p>
<p>I am using pandas and geopy because I want to look up addresses, but since geopy has a request limit per day, I am going to put each dataframe in a different notebook, so that I can run them on different computers.</p>
|
<p>Use <a href="https://numpy.org/doc/stable/reference/generated/numpy.array_split.html" rel="nofollow noreferrer"><code>numpy.array_split()</code></a>:</p>
<pre><code>In [4]: import numpy as np
In [5]: np.array_split(df, 18)
</code></pre>
<p>This will return a list with <code>18</code> elements, each of which is itself a DataFrame.</p>
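<p>A small usage sketch (the file names are just an example):</p>
<pre><code>parts = np.array_split(df, 18)
p1 = parts[0]  # first chunk

# e.g. write one file per notebook/computer
for i, part in enumerate(parts, 1):
    part.to_csv(f"part_{i}.csv", index=False)
</code></pre>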
|
python|pandas|dataframe|slice|geopy
| 2 |
1,904,105 | 70,575,149 |
having trouble with function and asyncio
|
<p>here is my Code so far:</p>
<pre><code>import discord
from discord import Webhook, AsyncWebhookAdapter
from discord.ext import commands
from discord import Activity, ActivityType
import aiohttp
from bs4 import BeautifulSoup
from requests_html import AsyncHTMLSession

intents = discord.Intents.default()
intents.members = True
client = commands.Bot(command_prefix="$", intents=intents, case_insensitive=True)

async def amazon():
    URL = "https://www.amazon.com/s?k=gaming"
    with AsyncHTMLSession() as session:
        response = await session.get(URL)
        response.html.arender(timeout=20)
        soup = BeautifulSoup(response.html.html, "lxml")
        results = soup.select("a.a-size-base.a-link-normal.s-link-style.a-text-normal")
        max_price = 10
        for result in results:
            price = result.text.split('$')[1].replace(",", "")
            if float(price) < max_price:
                print(f"Price: ${price}\nLink: https://www.amazon.com{result['href'].split('?')[0]}")

@client.command()
async def amaz(ctx):
    await amazon()
    await ctx.send("hello")

client.run("iputmytokenhere")
</code></pre>
<p>here is the error I get when doing $amaz:</p>
<pre><code>RuntimeWarning: coroutine 'HTML.arender' was never awaited
response.html.arender(timeout=20)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
C:\Users\CK\AppData\Roaming\Python\Python37\site-packages\requests\sessions.py:428: RuntimeWarning: coroutine 'AsyncHTMLSession.close' was never awaited
self.close()
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
<p>I am using this as a fun project, any help is greatly appreciated. I tried many things but nothing seems to be working. I want the bot to send the scraped data to a discord webhook.</p>
|
<p>You can use the following code for the <code>amazon()</code> function.</p>
<pre><code>async def amazon():
    URL = "https://www.amazon.com/s?k=gaming"
    # add the string of the webhook url
    WEBHOOK_URL = "https://your_webhook_url"
    # change the with statement to assignment following the documentation
    session = AsyncHTMLSession()
    response = await session.get(URL)
    # add await to prevent the "was not awaited" error
    await response.html.arender(timeout=20)
    # create the webhook object (note: AsyncWebhookAdapter expects an
    # aiohttp.ClientSession, so one is created here rather than reusing
    # the requests_html session)
    webhook = Webhook.from_url(WEBHOOK_URL, adapter=AsyncWebhookAdapter(aiohttp.ClientSession()))
    soup = BeautifulSoup(response.html.html, "lxml")
    results = soup.select("a.a-size-base.a-link-normal.s-link-style.a-text-normal")
    max_price = 10
    for result in results:
        price = result.text.split('$')[1].replace(",", "")
        if float(price) < max_price:
            # change print() to webhook.send() to send the data from a webhook
            await webhook.send(f"Price: ${price}\nLink: https://www.amazon.com{result['href'].split('?')[0]}")
</code></pre>
|
python|discord.py|python-asyncio
| -1 |
1,904,106 | 70,591,747 |
Processing text files via Azure functions
|
<p>I'm new to Azure Functions and I want to deploy a Python script I wrote as a webservice. The script takes a text file and outputs the language the file is written in as JSON. Should I be using an HTTP request, and how do I modify my script so that it accepts a text file via a PUT request?</p>
|
<p>One workaround is to connect to your storage account and read the .txt file using a blob trigger.</p>
<p><strong>SCENARIO - 1</strong>
Reading directly from blob storage</p>
<pre><code>from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient, BlobServiceClient

# Connecting to your account
account_url = "https://storage1.blob.core.windows.net/"
creds = DefaultAzureCredential()
service_client = BlobServiceClient(
    account_url=account_url,
    credential=creds
)

# Getting a client for the blob (this step was missing above;
# container and blob names are placeholders)
blob_client = service_client.get_blob_client(container="<container>", blob="<file>.txt")

# Reading the blob
blob_download = blob_client.download_blob()
blob_content = blob_download.readall().decode("utf-8")
print(f"Your content is: '{blob_content}'")
</code></pre>
<p><strong>SCENARIO - 2</strong>
Downloading the .txt file and then processing</p>
<pre><code>def download_file(self, source, dest):
    # Download a single file to a path on the local filesystem
    # dest is a directory if ending with '/' or '.', otherwise it's a file
    if dest.endswith('.'):
        dest += '/'
    blob_dest = dest + os.path.basename(source) if dest.endswith('/') else dest
    print(f'Downloading {source} to {blob_dest}')
    os.makedirs(os.path.dirname(blob_dest), exist_ok=True)
    bc = self.client.get_blob_client(blob=source)
    with open(blob_dest, 'wb') as file:
        data = bc.download_blob()
        file.write(data.readall())
</code></pre>
<p>and here is the code to convert the text file to JSON:</p>
<pre><code>import json

# the file to be converted
filename = '<YOUR .TXT FILE PATH>'

# resultant dictionary
dict1 = {}

# fields in the sample file
fields = ['name', 'designation', 'age', 'salary']

with open(filename) as fh:
    # count variable for employee id creation
    l = 1
    for line in fh:
        # reading line by line from the text file
        description = list(line.strip().split(None, 4))

        # for output see below
        print(description)

        # for automatic creation of id for each employee
        sno = 'emp' + str(l)

        # loop variable
        i = 0
        # intermediate dictionary
        dict2 = {}
        while i < len(fields):
            # creating dictionary for each employee
            dict2[fields[i]] = description[i]
            i = i + 1

        # appending the record of each employee to
        # the main dictionary
        dict1[sno] = dict2
        l = l + 1

# creating json file
out_file = open("test2.json", "w")
json.dump(dict1, out_file, indent=4)
out_file.close()
</code></pre>
<p>For instance here is my text file</p>
<p><img src="https://i.imgur.com/V45lUwm.png" alt="enter image description here" /></p>
<p>Here is the JSON file that I received</p>
<p><img src="https://i.imgur.com/19pjuRp.png" alt="enter image description here" /></p>
<p><strong>REFERENCE:</strong></p>
<ol>
<li><a href="https://www.geeksforgeeks.org/convert-text-file-to-json-in-python/" rel="nofollow noreferrer">Convert Text file to JSON in Python</a></li>
<li><a href="https://gist.github.com/amalgjose/7de7b93a326e5a6f53f8b43ba5187932" rel="nofollow noreferrer">Python program to download a complete directory or file</a></li>
<li><a href="https://trstringer.com/read-write-azure-storage-blob-python/" rel="nofollow noreferrer">Reading and Writing an Azure Storage Blob from Python</a></li>
</ol>
|
python|azure-functions
| 0 |
1,904,107 | 69,992,497 |
How to detect a pressed key within Python Process?
|
<p>With a simple example I try to demonstrate a typical multiprocessing setup with two processes:</p>
<ol>
<li>To receive data (here simulated by random array genereation)</li>
<li>To send data</li>
</ol>
<p>I want to terminate the first process by a keyboard keypress, which will send <code>None</code> to a queue, which then stops the program.
I use the <a href="https://github.com/boppreh/keyboard" rel="nofollow noreferrer">keyboard</a> package for detecting if a key is pressed.</p>
<pre><code>import multiprocessing
import keyboard
import numpy as np

def getData(queue):
    KEY_PRESSED = False
    while KEY_PRESSED is False:
        if keyboard.is_pressed("a"):
            queue.put(None)
            print("STOP in getData")
            KEY_PRESSED = True
        else:
            data = np.random.random([8, 250])
            queue.put(data)

def processData(queue):
    FLAG_STOP = False
    while FLAG_STOP is False:
        data = queue.get()  # # ch, samples
        if data is None:
            print("STOP in processData")
            FLAG_STOP = True
        else:
            print("Processing Data")
            print(str(data[0,0]))

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = [
        multiprocessing.Process(target=getData, args=(queue,)),
        multiprocessing.Process(target=processData, args=(queue,))
    ]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
</code></pre>
<p>If I debug the code, the pressed key is actually detected, but simultaneously random data from the while loop is put into the queue, which makes it very difficult to debug the code.</p>
<p>Additionally I tried the <a href="https://pynput.readthedocs.io/en/latest/keyboard.html" rel="nofollow noreferrer">pynput</a> package, which creates a thread to detect a pressed key. Using this method, however, the same problem occurred: the program did not savely terminate the execution by sending out <code>None</code> to the other process.</p>
<p>I would be very happy if someone could point to the error in the described method, or propose another way of savely detecting a keypress within a process.</p>
|
<p>I am not sure what problem you are describing: <em>savely</em> is not an English word. You say that the pressed key is actually detected. If that is the case and if you have <code>print("STOP...")</code> statements in both functions, then if you just run the code from a command prompt and <code>getData</code> detects that an <code>a</code> is being pressed, then I don't see how both print statements won't <em>eventually</em> be executed and the two processes terminate.</p>
<p>If the problem is that the program does not terminate for a very long while, then I think what you are missing is that unless the call to <code>keyboard.is_pressed("a")</code> is a particularly slow function to execute, by the time you get around to pressing <code>a</code> on the keyboard, function <code>getData</code> will have already written thousands of records to the queue before it writes <code>None</code> out. That means that <code>processData</code> has to first read those thousands of records <strong>and print them</strong> before it finally gets to the <code>None</code> record. Since it has to also print the numbers, there is no way <code>processData</code> can keep up with <code>getData</code>. So long after <code>getData</code> has written its <code>None</code> record, <code>processData</code> still has thousands of records to read.</p>
<p>This can be demonstrated in a variation of your code where function <code>getData</code> does not wait for keyboard input but simply writes random numbers to the output queue for 5 seconds before writing its <code>None</code> record and terminating (this simulates your program where you wait 5 seconds before pressing <code>a</code>). Function <code>processData</code> prints the number of records it has read before it gets to the <code>None</code> record and the elapsed time it took to read and print those records:</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing
import numpy as np
import time

def getData(queue):
    KEY_PRESSED = False
    expiration = time.time() + 5
    # Run for 5 seconds:
    while KEY_PRESSED is False:
        if time.time() > expiration:
            queue.put(None)
            print("STOP in getData")
            KEY_PRESSED = True
        else:
            data = np.random.random([8, 250])
            queue.put(data)

def processData(queue):
    FLAG_STOP = False
    t = time.time()
    cnt = 0
    while FLAG_STOP is False:
        data = queue.get()  # # ch, samples
        if data is None:
            print("STOP in processData")
            print('Number of items read from queue:', cnt, 'elapsed_time:', time.time() - t)
            FLAG_STOP = True
        else:
            cnt += 1
            print("Processing Data")
            print(str(data[0,0]))

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = [
        multiprocessing.Process(target=getData, args=(queue,)),
        multiprocessing.Process(target=processData, args=(queue,))
    ]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code>...
Processing Data
0.21449036510257957
Processing Data
0.27058883046461824
Processing Data
0.5433716680659376
STOP in processData
Number of items read from queue: 128774 elapsed_time: 35.389172077178955
</code></pre>
<p>Even though <code>getData</code> wrote numbers for only 5 seconds, it took 35 seconds for <code>processData</code> to read and print them.</p>
<p>The problem can be resolved by putting a limit on the number of messages that can be on the Queue instance:</p>
<pre class="lang-py prettyprint-override"><code> queue = multiprocessing.Queue(1)
</code></pre>
<p>This will block <code>getData</code> from putting the next value on the queue until <code>processData</code> has read the value.</p>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code>...
Processing Data
0.02822635996071321
Processing Data
0.05390434023333657
Processing Data
STOP in getData
0.9030600225686444
STOP in processData
Number of items read from queue: 16342 elapsed_time: 5.000030040740967
</code></pre>
<p><strong>So if you use a queue with a <em>maxsize</em> of 1, your program should terminate immediately upon pressing the <code>a</code> key.</strong></p>
|
python|multiprocessing|python-multiprocessing
| 1 |
1,904,108 | 69,679,914 |
How to get access to the Network response area with Chrome DevTools Protocol?
|
<p>I'm trying to get access to an area with a Network response
<a href="https://i.stack.imgur.com/99Ffh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/99Ffh.png" alt="enter image description here" /></a></p>
<p>I'm using the following code, but it does not work.</p>
<pre><code>import websocket
import json
from pprint import pprint
ws = websocket.WebSocket()
ws.connect("ws://localhost:9222/devtools/page/5EC90A588BEC2DA0229988D28BA67495")
ws.send(json.dumps({"method": "Network.getResponseBody", "id": })) # don't know where I can find it
response = ws.recv()
pprint(response)
</code></pre>
<p>And I'm not sure if I am doing the right things.
So, does anybody know how to do this?</p>
<p>P.S. I know I can make a direct request to the API source and get the JSON object, but I need to do it with the Chrome DevTools Protocol.</p>
|
<p>Ok, I finally found it!
You just need to use the following method: <code>Network.getResponseBody</code>.</p>
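<p>A minimal sketch of how the pieces fit together (the <code>"id"</code> field is just a message counter you choose; the <code>requestId</code> comes from <code>Network.responseReceived</code> events after enabling the Network domain):</p>
<pre><code>import json
import websocket

ws = websocket.WebSocket()
ws.connect("ws://localhost:9222/devtools/page/5EC90A588BEC2DA0229988D28BA67495")

ws.send(json.dumps({"id": 1, "method": "Network.enable"}))
while True:
    msg = json.loads(ws.recv())
    # every network response fires this event, carrying the requestId
    if msg.get("method") == "Network.responseReceived":
        ws.send(json.dumps({
            "id": 2,
            "method": "Network.getResponseBody",
            "params": {"requestId": msg["params"]["requestId"]},
        }))
    elif msg.get("id") == 2:
        print(msg["result"]["body"])
        break
</code></pre>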
|
python|google-chrome-devtools|network-protocols
| 0 |
1,904,109 | 72,872,320 |
Tesseract OCR is inaccurate for images with letter spacing
|
<p>I'm trying to use Tesseract OCR to extract a string of characters (not a valid word) from an image. The issue is that the characters in the image are spaced out, like in the picture below.
<a href="https://i.stack.imgur.com/BdgJJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BdgJJ.png" alt="enter image description here" /></a>
With default properties, this image is recognized as <code>5 O M E T E—E X fT</code>.</p>
<p>I tried to tinker with the page segmentation properties, but the closest I got is <code>"SOME TEXT.</code> with <code>--psm 8</code>. I'm wondering if there is a setting that will enable Tesseract to better deal with the spacing in between the letters, or if I need to train a custom model.</p>
|
<p>The first way is resizing the image.</p>
<p>If you resize the image by factors (0.15, 0.15):</p>
<p><a href="https://i.stack.imgur.com/0ji1p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0ji1p.png" alt="enter image description here" /></a></p>
<p>With default properties you will get:</p>
<pre><code>S O M E T E X T
</code></pre>
<p>Code:</p>
<pre><code>import cv2
import pytesseract
bgr = cv2.imread("BdgJJ.png")
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
resized = cv2.resize(gray, (0, 0), fx=.15, fy=.15)
text = pytesseract.image_to_string(resized)
print(text)
</code></pre>
<p>The second way is using an <a href="https://docs.opencv.org/4.x/d7/d4d/tutorial_py_thresholding.html" rel="nofollow noreferrer">adaptive threshold</a>.</p>
<p>If you apply adaptive threshold:</p>
<p><a href="https://i.stack.imgur.com/vPiEC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vPiEC.png" alt="enter image description here" /></a></p>
<p>With psm mode 6, the result will be:</p>
<pre><code>S O M E T E X T
</code></pre>
<p>Code:</p>
<pre><code>import cv2
import pytesseract
bgr = cv2.imread("BdgJJ.png")
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 21, 2)
text = pytesseract.image_to_string(thresh, config="--psm 6")
print(' '.join(ch for ch in text if ch.isalnum()).upper()[:-1])
</code></pre>
|
python|computer-vision|ocr|tesseract|python-tesseract
| 1 |
1,904,110 | 73,046,684 |
Python sort tensor according to the diff between target and element
|
<p>Suppose the original tensor is as below:
<br></p>
<pre><code>tensor([[0.9950, 0.6175, 0.1253, 1.3536],
[0.1208, 0.4237, 1.1313, 0.9022],
[1.1995, 0.0699, 0.4396, 0.8043]])
</code></pre>
<br>
I want to sort each row of the tensor according to the difference between 1 and the element; elements closer to 1 should come first, so the sorted tensor will be
<br>
<pre><code>sorted_tensor([[ 0.9950, 1.3536, 0.6175, 0.1253],
[ 0.9022, 1.1313, 0.4237, 0.1208],
[ 1.1995, 0.8043, 0.4396, 0.0699]])
</code></pre>
<p>Is there any function provided by torch? Thanks in advance.</p>
|
<p>Probably something like this will work:</p>
<pre><code>diffs_from_one = torch.abs(tensor - 1)
indices = torch.argsort(diffs_from_one, dim=1)
# gather reorders each row by its own index row; plain tensor[indices]
# would select whole rows instead of reordering within rows
sorted_tensor = torch.gather(tensor, 1, indices)
</code></pre>
<p>Note I'm not super familiar with PyTorch specifically. But the general idea, using argsort to find the indices that should sort the original tensor for you, seems like the way to go.</p>
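<p>A quick check against the tensor from the question (note the last row: 0.8043 is actually slightly closer to 1 than 1.1995, so it sorts first):</p>
<pre><code>import torch

t = torch.tensor([[0.9950, 0.6175, 0.1253, 1.3536],
                  [0.1208, 0.4237, 1.1313, 0.9022],
                  [1.1995, 0.0699, 0.4396, 0.8043]])
idx = torch.argsort(torch.abs(t - 1), dim=1)
print(torch.gather(t, 1, idx))
# tensor([[0.9950, 1.3536, 0.6175, 0.1253],
#         [0.9022, 1.1313, 0.4237, 0.1208],
#         [0.8043, 1.1995, 0.4396, 0.0699]])
</code></pre>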
|
python|sorting|tensor|torch
| 0 |
1,904,111 | 73,062,901 |
PySpark- getting default column name as "value" in the dataframe
|
<p>So I have a dataframe, df2 ,which looks like:</p>
<p><a href="https://i.stack.imgur.com/tTBRp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tTBRp.png" alt="enter image description here" /></a></p>
<p>I had to convert the values to Python float type because of errors:</p>
<pre><code>df2 = spark.createDataFrame([float(x) for x in data],FloatType())
</code></pre>
<p>Now, maybe due to this, I'm getting the default column name "value" whereas I want the column name to be "Result". I tried renaming the column using the withColumnRenamed() method but it's not working; it shows the same output. Any idea how I can change the default column name?</p>
|
<p>I think you do <code>withColumnRenamed()</code> but don't assign it to <code>df2</code>:</p>
<pre class="lang-py prettyprint-override"><code>df2 = df2.withColumnRenamed("value", "Result")
</code></pre>
<p>Or during dataframe creation you could pass the name of the column you want:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql.types import *
schema = StructType([StructField("Result", FloatType(), True)])
df2 = spark.createDataFrame([float(x) for x in data], schema)
</code></pre>
|
python|pyspark|apache-spark-sql
| 1 |
1,904,112 | 55,665,934 |
Problem with unittest -- receiving str object has no attribute 'get'
|
<p>I am using sample code with <code>unittest</code> but I receive an error when I execute it -- 'str' object has no attribute 'get'.</p>
<p>I searched Google but did not get an answer.</p>
<pre><code>from selenium import webdriver
import unittest

class google1search(unittest.TestCase):

    driver = 'driver'

    @classmethod
    def setupClass(cls):
        cls.driver = webdriver.Chrome(chrome_options=options)
        cls.driver.implicitly_wait(10)
        cls.driver.maximize_window()

    def test_search_automationstepbystep(self):
        self.driver.get("https://google.com")
        self.driver.find_element_by_name("q").send_keys("Automation Step By step")
        self.driver.find_element_by_name("btnk").click()

    def test_search_naresh(self):
        self.driver.get("https://google.com")
        self.driver.find_element_by_name("q").send_keys("Naresh")
        self.driver.find_element_by_name("btnk").click()

    @classmethod
    def teardownClass(cls):
        cls.driver.close()
        cls.driver.quit()
        print("Test completed")

if __name__== "__main__":
    unittest.main()
</code></pre>
<p>It is supposed to execute 2 tests and give a pass result.</p>
|
<p>In the above code, there is no initialization of <code>self.driver</code> for</p>
<pre><code>self.driver.get("https://google.com")
</code></pre>
<p>since the initialized driver is</p>
<pre><code>cls.driver = webdriver.Chrome(chrome_options=options)
</code></pre>
<p>Please replace <strong>cls</strong> with <strong>self</strong>.</p>
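<p>For reference, unittest only runs class-level fixtures spelled exactly <code>setUpClass</code> and <code>tearDownClass</code>; with the lowercase <code>setupClass</code> above, the fixture never runs, so <code>driver</code> stays the class-attribute string <code>'driver'</code>, which is what raises <code>'str' object has no attribute 'get'</code>. A minimal sketch of the standard pattern:</p>
<pre><code>import unittest
from selenium import webdriver

class GoogleSearchTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):  # exact capitalization matters
        cls.driver = webdriver.Chrome()
        cls.driver.implicitly_wait(10)

    def test_search(self):
        self.driver.get("https://google.com")  # self.driver resolves to the class attribute

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

if __name__ == "__main__":
    unittest.main()
</code></pre>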
|
python|selenium|python-unittest
| 2 |
1,904,113 | 55,791,563 |
Multiple camera feeds not working with PyQt5 threading:
|
<p>I have data collection software that requires camera feeds from two different camera sources: one is a Brio webcam and the other is an IP webcam connected via USB tethering.
When I edited the code to stream the two videos, it was showing just one feed and not the other.
The code is given below:</p>
<pre><code>import sys
import cv2
#from gsp import GstreamerPlayer
import datetime
from pyfirmata import util, Arduino
from PyQt5 import QtCore, QtGui
import openpyxl
from openpyxl import load_workbook
from PyQt5.QtCore import pyqtSlot, QThread, pyqtSignal
from PyQt5.QtGui import QImage, QPixmap
from PyQt5.QtWidgets import QLayout, QDialog, QApplication, QMainWindow, QFileDialog, QPushButton, QWidget, QLabel
from PyQt5.uic import loadUi
import xlrd
from xlutils.copy import copy
import serial
import xlsxwriter
from xlwt import Workbook

sys.setrecursionlimit(15000)

# For the camera feed
class Thread(QThread):
    changePixmap = pyqtSignal(QImage)

    def run(self):
        cap = cv2.VideoCapture(0)
        while True:
            ret, frame = cap.read()
            if ret:
                rgbImage = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                convertToQtFormat = QImage(rgbImage.data, rgbImage.shape[1], rgbImage.shape[0], QImage.Format_RGB888)
                p = convertToQtFormat.scaled(256, 181)
                self.changePixmap.emit(p)

class Thread1(QThread):
    changePixmap = pyqtSignal(QImage)

    def run(self):
        cap = cv2.VideoCapture('http://192.168.42.129:8080/video')
        while True:
            ret, frame = cap.read()
            if ret:
                rgbImage = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                convertToQtFormat = QImage(
                    rgbImage.data, rgbImage.shape[1], rgbImage.shape[0], QImage.Format_RGB888)
                p1 = convertToQtFormat.scaled(111, 181)
                self.changePixmap.emit(p1)
</code></pre>
</code></pre>
<p>The calling functions in the main are as follows:</p>
<blockquote>
<pre><code>@pyqtSlot(QImage)
def setImage(self, image):
    self.webcam.setPixmap(QPixmap.fromImage(image))

@pyqtSlot(QImage)
def setImage1(self, image):
    self.webcam_2.setPixmap(QPixmap.fromImage(image))

def initUI(self):
    th = Thread(self)
    th1 = Thread1(self)
    th1.changePixmap1.connect(self.setImage1)
    th.changePixmap.connect(self.setImage)
    th.start()
</code></pre>
<p>I am new to Python programming; can anyone tell me what I am doing wrong here? I tried the other approach of doing the streaming in a function and setting the streams up, but that was not a workable approach, as my application kept crashing, due to the while loop (I guess).
If I use one source at a time it works, but I can't seem to get them working at the same time.</p>
</blockquote>
|
<p>I got it working; it turned out I wasn't calling the</p>
<blockquote>
<p>thread.start()</p>
</blockquote>
<p>method on the second thread before.</p>
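<p>For reference, a sketch of <code>initUI</code> starting both threads (assuming both thread classes expose the signal as <code>changePixmap</code>, as declared in the question's classes):</p>
<pre><code>def initUI(self):
    self.th = Thread(self)
    self.th1 = Thread1(self)
    self.th.changePixmap.connect(self.setImage)
    self.th1.changePixmap.connect(self.setImage1)  # note: changePixmap, not changePixmap1
    self.th.start()
    self.th1.start()  # without this call the second feed never runs
</code></pre>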
|
python-3.x|pyqt5|opencv3.0|qthread
| 0 |
1,904,114 | 55,848,867 |
Do I have to import excel file every time when I run the python code?
|
<p>I'm an R user trying to learn Python.
When using R, once I run a single line of code to import an Excel file and save it as a dataframe, I can keep using the dataframe saved in my workspace without re-importing it every time.</p>
<p>While using Python, I noticed that unless I use the interpreter, I can only run the full script (the whole "foo.py" file), not code by code.</p>
<p>I would like to load an Excel file and work with the data inside. My code begins with importing the Excel file as a dataframe. Hence, every time I add new code and want to see the result, I run the whole .py script, and it loads the data every time.</p>
<p>Maybe I am using Python in the wrong way.
With Jupyter notebook I didn't have this issue because I was able to run the code cell by cell, just like R. But I am trying to use PyCharm now.</p>
<pre><code>import pandas as pd
df = pd.read_excel('foo.xlsx', sheet_name = 'sales_data')
print("Column headings:")
print(df.columns)
</code></pre>
|
<p><code>Jupyter</code> runs a script line by line, and you keep the variables, including the dataframe that has been loaded into memory, so you can keep using that <code>df</code> until you exit <code>Jupyter</code>.<br>
An IDE like <code>PyCharm</code>, by contrast (depending on the edition, especially Community edition), runs the whole script in one go, so it needs to load the Excel file into memory on the next run, because it does not persist any information from the last run.</p>
<p><code>Jupyter</code> is what we call a <a href="https://en.wikipedia.org/wiki/Read%E2%80%93eval%E2%80%93print_loop" rel="nofollow noreferrer">REPL</a>, which means all user state is persistent until the session is killed, whereas <code>PyCharm</code> evaluates the whole script in one go and gives you the output at the end.</p>
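<p>If re-running the script is unavoidable, one common workaround is to cache the parsed sheet to a pickle file so later runs skip the slow Excel parse; a minimal sketch:</p>
<pre><code>import os
import pandas as pd

CACHE = "sales_data.pkl"  # hypothetical cache file name

if os.path.exists(CACHE):
    df = pd.read_pickle(CACHE)  # fast reload on later runs
else:
    df = pd.read_excel('foo.xlsx', sheet_name='sales_data')
    df.to_pickle(CACHE)         # parse once, cache for next time
</code></pre>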
|
python|python-3.x
| 1 |
1,904,115 | 50,077,264 |
Regex (Python) - Match words with two or more distinct vowels
|
<p>I'm attempting to match words in a string that contain two or more distinct vowels. The question can be restricted to lowercase.</p>
<pre><code>string = 'pool pound polio papa pick pair'
</code></pre>
<p>Expected result:</p>
<pre><code>pound, polio, pair
</code></pre>
<p><code>pool</code> and <code>papa</code> would fail because they contain only one distinct vowel. However, <code>polio</code> is fine, because even though it contains two <code>o</code>s, it contains two distinct vowels (<code>i</code> and <code>o</code>). (<code>mississippi</code> would fail, but <code>albuquerque</code> would pass.)</p>
<p>Thought process: Using a lookaround, perhaps five times (ignoring uppercase), wrapped in parentheses, with a <code>{2}</code> afterward. Something like:</p>
<pre><code>re.findall(r'\w*((?=a{1})|(?=e{1})|(?=i{1})|(?=o{1})|(?=u{1})){2}\w*', string)
</code></pre>
<p>However, this matches on all six words.</p>
<p>I killed the <code>{1}</code>s, which makes it prettier (the <code>{1}</code>s seem to be unnecessary), but it still returns all six:</p>
<pre><code>re.findall(r'\w*((?=a)|(?=e)|(?=i)|(?=o)|(?=u))\w*', string)
</code></pre>
<p>Thanks in advance for any assistance. I checked other queries, including <a href="https://stackoverflow.com/questions/29689858/how-to-find-words-with-two-vowels">"How to find words with two vowels"</a>, but none seemed close enough. Also, I'm looking for pure RegEx.</p>
|
<p>You don't need 5 separate lookaheads, that's complete overkill. Just capture the first vowel in a <a href="https://docs.python.org/3/howto/regex.html#grouping" rel="nofollow noreferrer">capture group</a>, and then use a <a href="https://docs.python.org/3/howto/regex.html#lookahead-assertions" rel="nofollow noreferrer">negative lookahead</a> to assert that it's different from the second vowel:</p>
<pre><code>[a-z]*([aeiou])[a-z]*(?!\1)[aeiou][a-z]*
</code></pre>
<p><a href="https://regex101.com/r/K2Yppn/1" rel="nofollow noreferrer">See the online demo.</a></p>
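<p>A quick check with Python's <code>re</code> module (using <code>finditer</code>, since <code>findall</code> would return only the captured vowel when the pattern contains a group):</p>
<pre><code>import re

string = 'pool pound polio papa pick pair'
pattern = r'[a-z]*([aeiou])[a-z]*(?!\1)[aeiou][a-z]*'
print([m.group(0) for m in re.finditer(pattern, string)])
# ['pound', 'polio', 'pair']
</code></pre>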
|
python|regex
| 6 |
1,904,116 | 64,991,519 |
Function's attributes when in a class
|
<p>I can use a function's attribute to set a status flag, such as</p>
<pre><code>def Say_Hello():
    if Say_Hello.yet == True:
        print('I said hello already ...')
    elif Say_Hello.yet == False:
        Say_Hello.yet = True
        print('Hello!')

Say_Hello.yet = False

if __name__ == '__main__':
    Say_Hello()
    Say_Hello()
</code></pre>
</code></pre>
<p>and the output is</p>
<pre><code>Hello!
I said hello already ...
</code></pre>
<p>However, when trying to put the function in a class, like</p>
<pre><code>class Speaker:
    def __init__(self):
        pass

    def Say_Hello(self):
        if self.Say_Hello.yet == True:
            print('I said hello already ...')
        elif self.Say_Hello.yet == False:
            self.Say_Hello.yet = True
            print('Hello!')

    Say_Hello.yet = False

if __name__ == '__main__':
    speaker = Speaker()
    speaker.Say_Hello()
    speaker.Say_Hello()
</code></pre>
</code></pre>
<p>There is this error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "...func_attribute_test_class_notworking.py", line 16, in <module>
speaker.Say_Hello()
File "...func_attribute_test_class_notworking.py", line 9, in Say_Hello
self.Say_Hello.yet = True
AttributeError: 'method' object has no attribute 'yet'
</code></pre>
<p>What is the proper way to use function's attribute in a class?</p>
|
<p><code>Speaker.Say_Hello</code> and <code>speaker.Say_Hello</code> are two different objects. The former is the function defined in the body of the <code>class</code> statement:</p>
<pre><code>>>> Speaker.Say_Hello
<function Speaker.Say_Hello at 0x1051e5550>
</code></pre>
<p>while the latter is an instance of <code>method</code>:</p>
<pre><code>>>> speaker.Say_Hello
<bound method Speaker.Say_Hello of <__main__.Speaker object at 0x10516dd60>>
</code></pre>
<p>Further, every time you access <code>speaker.Say_Hello</code>, you get a <em>different</em> instance of <code>method</code>.</p>
<pre><code>>>> speaker.Say_Hello is speaker.Say_Hello
False
</code></pre>
<p>You should just use <code>self</code> instead. Function attributes are more of an accidental feature that isn't specifically prohibited rather than something you should use, anyway.</p>
<pre><code>class Speaker:
    def __init__(self):
        self._said_hello = False

    def Say_Hello(self):
        if self._said_hello:
            print('I said hello already ...')
        else:
            self._said_hello = True
            print('Hello!')
</code></pre>
<p>If you want to track if <em>any</em> instance of <code>Speaker</code> has called its <code>Say_Hello</code> method, rather than tracking this for each instance separately, use a class attribute.</p>
<pre><code>class Speaker:
    _said_hello = False

    def Say_Hello(self):
        if self._said_hello:
            print('Someone said hello already ...')
        else:
            type(self)._said_hello = True
            print('Hello!')
</code></pre>
|
python|class|attributes
| 4 |
1,904,117 | 65,418,766 |
Removing a value in a specific row of a column without removing the entire row in a pandas dataframe
|
<p>If I have a csv that looks like this, the first A,B,C being the column headers:</p>
<pre><code>A, B, C
A1,A2,A3
blah,B2,B3
C1,C2,C3
blahxxxxxtr,D4,D5
</code></pre>
<p>How do I remove the entries containing 'blah', or anything with 'blah' in them, without removing the whole rows?</p>
<p>Here is what is working so far:</p>
<pre><code>import pandas as pd

file = r"\\fileserver\data\test.csv"
df = pd.read_csv(file)

for index, row in df.iterrows():
    if 'blah' in str(row[0]):
        print(row['A'])
        # this is where I don't know how to remove 'blah' if this is True
        # I want the new value of that to be a blank field, so '',B2,B3
        # Same would go for the 4th row, '',D4,D5
        # Using a drop command removes the whole row.

df.to_csv(file, index=False)
</code></pre>
<p>This successfully prints the 2nd and 4th rows of 'blah' values.<br>
How do I remove and replace 'blah' with a blank string '', so that there is nothing, not even NaN, in that column for that specific row?</p>
|
<p>You can simply use <code>replace</code>:</p>
<pre><code>df.replace(to_replace='blah', value='the value you want', inplace=True)
</code></pre>
<p>or if you want to replace a certain column value use</p>
<pre><code>df[colname].replace(to_replace='blah', value='the value you want', inplace=True)
</code></pre>
<p>If inplace is set to True, the df will be updated in place; otherwise a df with the changed values is returned.</p>
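<p>Since some of your entries only contain 'blah' as a substring (e.g. <code>blahxxxxxtr</code>), a regex replace covers those too; a sketch:</p>
<pre><code># blank out any cell whose text contains 'blah'
df.replace(to_replace=r'^.*blah.*$', value='', regex=True, inplace=True)
</code></pre>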
|
python|pandas|dataframe
| 1 |
1,904,118 | 65,267,861 |
Is there a way to optimize my list comprehension for better performance? It is slower than a for loop
|
<p>I am trying to optimize my code for looping through an ASC raster file. The input to the function is the data array from the ASC file with the shape 1,000 x 1,000 (1 million data points), the ASC file information, and a column-skipping value. The skip value is not important in this case.</p>
<p>My for-loop version performs decently and skips an array cell if the data equals nodata_value. Here is the function:</p>
<pre><code>def asc_process_single(self, asc_array, asc_info, skip=1):
    # ncols = asc_info['ncols']
    nrows = asc_info['nrows']
    xllcornor = asc_info['xllcornor']
    yllcornor = asc_info['yllcornor']
    cellsize = asc_info['cellsize']
    nodata_value = asc_info['nodata_value']
    raster_size_y = cellsize*nrows
    # raster_size_x = cellsize*ncols

    # Looping over array rows and cols with skipping
    xyz = []
    for row in range(asc_array.shape[0])[::skip]:
        for col in range(asc_array.shape[1])[::skip]:
            val_z = asc_array[row, col]  # Z value of datapoint
            # The no data value is not processed
            if val_z == nodata_value:
                pass
            else:
                # Xcoordinate for current Z value
                val_x = xllcornor + (col * cellsize)
                # Ycoordinate for current Z value
                val_y = yllcornor + raster_size_y - (row * cellsize)
                # x, y, z to LIST
                xyz.append([val_x, val_y, val_z])
    return xyz
</code></pre>
<p>Timing this on an ASC file where nodata_value(s) are present gives:</p>
<pre><code>593 ms ± 34.4 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
</code></pre>
<p>I thought I could do this better with a list comprehension:</p>
<pre><code>def asc_process_single_listcomprehension(self, asc_array, asc_info, skip=1):
    # ncols = asc_info['ncols']
    nrows = asc_info['nrows']
    xllcornor = asc_info['xllcornor']
    yllcornor = asc_info['yllcornor']
    cellsize = asc_info['cellsize']
    nodata_value = asc_info['nodata_value']
    raster_size_y = cellsize*nrows
    # raster_size_x = cellsize*ncols

    # Looping over array rows and cols with skipping
    rows = range(asc_array.shape[0])[::skip]
    cols = range(asc_array.shape[1])[::skip]
    xyz = [[xllcornor + (col * cellsize),
            yllcornor + raster_size_y - (row * cellsize),
            asc_array[row, col]]
           for row in rows for col in cols
           if asc_array[row, col] != nodata_value]
    return xyz
</code></pre>
<p>However, this performs slower than my for loop, and I am wondering why.</p>
<pre><code>757 ms ± 58.4 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
</code></pre>
<p>Is it because the list comprehension looks up asc_array[row, col] twice? This operation alone costs</p>
<pre><code>193 ns ± 11.4 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
</code></pre>
<p>versus just using the already looked-up z value in my for loop:</p>
<pre><code>51.2 ns ± 1.18 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
</code></pre>
<p>Doing this 1 million times adds up for the list comprehension.
Any ideas how to optimize my list comprehension further so it performs better than my for loop? Any other ideas to improve performance?</p>
<p>EDIT:
Solution:
I tried the 2 proposals given.</p>
<ol>
<li>Reference the Z value in my list comprehension instead of doing
the array lookup twice, which took longer.</li>
<li>Rewrite the function to handle the problem with numpy arrays</li>
</ol>
<p>The list comprehension i rewrote to this:</p>
<pre><code>xyz = [[xllcornor + (col * cellsize),
        yllcornor + raster_size_y - (row * cellsize),
        val_z]
       for row in rows for col in cols
       for val_z in [asc_array[row, col]]
       if val_z != nodata_value]
</code></pre>
<p>and the numpy function became this:</p>
<pre><code>def asc_process_numpy_single(self, asc_array, asc_info, skip):
    # ncols = asc_info['ncols']
    nrows = asc_info['nrows']
    xllcornor = asc_info['xllcornor']
    yllcornor = asc_info['yllcornor']
    cellsize = asc_info['cellsize']
    nodata_value = asc_info['nodata_value']
    raster_size_y = cellsize*nrows
    # raster_size_x = cellsize*ncols

    rows = np.arange(0, asc_array.shape[0], skip)[:, np.newaxis]
    cols = np.arange(0, asc_array.shape[1], skip)
    x = np.zeros((len(rows), len(cols))) + xllcornor + (cols * cellsize)
    y = np.zeros((len(rows), len(cols))) + yllcornor + raster_size_y - (rows * cellsize)
    z = asc_array[::skip, ::skip]
    xyz = np.asarray([x, y, z]).T.transpose((1, 0, 2)).reshape(
        (int(len(rows)*len(cols)), 3))
    mask = (xyz[:, 2] != nodata_value)
    xyz = xyz[mask]
    return xyz
</code></pre>
<p>I added the mask in the last 2 lines of the numpy function because I don't want the nodata_values.
The performance is as follows, in the order: for loop, list comprehension, list comprehension suggestion, and numpy function suggestion:</p>
<pre><code>609 ms ± 44.8 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
706 ms ± 22 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
604 ms ± 21.5 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
70.4 ms ± 1.26 ms per loop (mean ± std. dev. of 10 runs, 1 loop each)
</code></pre>
<p>The optimized list comprehension is on par with the for loop, but the numpy function speeds things up by a factor of about 9.</p>
<p>Thank you so much for your comments and suggestions. I learned a lot today.</p>
|
<p>The only thing I can imagine that's slowing you down is that in the original code, you put <code>asc_array[row, col]</code> into a temporary variable, while in the list comprehension, you evaluate it twice.</p>
<p>There are two things you might want to try:</p>
<ol>
<li><p>Assign the value to <code>val_z</code> in the "if" statement using a walrus operator, or</p>
</li>
<li><p>Add <code>for val_z in [asc_array[row, col]]</code> after the other two <code>for</code>s.</p>
</li>
</ol>
<p>Good luck.</p>
|
python|performance|for-loop|list-comprehension
| 0 |
1,904,119 | 71,925,157 |
Split pdf from A4 into A6 quarters and don't save empty quarters
|
<p>Do not judge strictly, I'm a self-taught beginner)))</p>
<p>Please help me figure out how to share I learned both with the help of PyPDF2 and with the help of PyMuPDF (fitz). But when splitting, it often happens that there is text in only one quarter, but it writes all 4 quarters to the new file, both with text and empty, one with text, the rest are empty, and I need something so that the empty ones are not saved, I wanted to somehow do a check, but it didn't work out, lack of knowledge. I tried to read the newly recorded file and delete empty pages, but there is text on each page, even on empty ones, I open the file in acrobat reader, but the pages are empty, I don’t understand how.</p>
<p>Here is my code just in case how and what I do: <a href="https://paste.aiogram.dev/opiquhehus.py" rel="nofollow noreferrer">https://paste.aiogram.dev/opiquhehus.py</a></p>
<p>This is my first time posting here and I don't know how to attach files. pdf files for example in the telegram channel: <a href="https://t.me/+Tq7WpP1ImcjQXSZF" rel="nofollow noreferrer">https://t.me/+Tq7WpP1ImcjQXSZF</a>.</p>
<pre><code>import copy
import logging
import random
from pathlib import Path
import PyPDF2
import fitz
from PyPDF2.filters import decodeStreamData, ASCII85Decode
from PyPDF2.generic import EncodedStreamObject, DecodedStreamObject
def from_a4_to_a6_not_sync(input_file, output_file):
input_file = str(input_file.absolute())
pdf_reader = PyPDF2.PdfFileReader(input_file)
# print(f'{pdf_reader.getNumPages()=}')
# print(f'{pdf_reader.documentInfo=}')
first_page = pdf_reader.getPage(0)
left_up_side = copy.deepcopy(first_page)
right_up_side = copy.deepcopy(first_page)
left_down_side = copy.deepcopy(first_page)
right_down_side = copy.deepcopy(first_page)
# print(f'{left_up_side.extractText()=}')
# print(f'{right_up_side.extractText()=}')
# print(f'\nДО ОБРЕЗКИ:\n{type(left_up_side)=}\n{left_up_side=}\n')
# print(f'\nДО ОБРЕЗКИ:\n{type(right_up_side)=}\n{right_up_side=}\n')
# second_page = pdf_reader.getPage(0)
# print(f'{type(second_page)=}\n{second_page.extractText()=}')
# third_page = pdf_reader.getPage(0)
# fourth_page = pdf_reader.getPage(0)
first_coord = first_page.mediaBox.upperRight[0]
second_coord = first_page.mediaBox.upperRight[1]
# print(f'{first_coord=}')
# print(f'{second_coord=}')
# cords_upperLeft = first_page.mediaBox.upperLeft
# cords_lowerLeft = first_page.mediaBox.lowerLeft
# cords_upperRight = first_page.mediaBox.upperRight
# cords_lowerRight = first_page.mediaBox.lowerRight
# print(f'{cords_upperLeft=}')
# print(f'{cords_lowerLeft=}')
# print(f'{cords_upperRight=}')
# print(f'{cords_lowerRight=}')
# first_page.mediaBox.lowerRight = (first_coord / 2, second_coord / 2) # ВЕРХНЯЯ ЛЕВАЯ ЧЕТВЕРТИНКА
# second_page.mediaBox.lowerLeft = (first_coord / 2, second_coord / 2) # ВЕРХНЯЯ ПРАВАЯ ЧЕТВЕРТИНКА
# third_page.mediaBox.upperRight = (first_coord / 2, second_coord / 2) # НИЖНЯЯ ЛЕВАЯ ЧЕТВЕРТИНКА
# fourth_page.mediaBox.upperLeft = (first_coord / 2, second_coord / 2) # НИЖНЯЯ ПРАВАЯ ЧЕТВЕРТИНКА
left_up_side.mediaBox.lowerRight = (first_coord / 2, second_coord / 2) # ВЕРХНЯЯ ЛЕВАЯ ЧЕТВЕРТИНКА
right_up_side.mediaBox.lowerLeft = (first_coord / 2, second_coord / 2) # ВЕРХНЯЯ ПРАВАЯ ЧЕТВЕРТИНКА
left_down_side.mediaBox.upperRight = (first_coord / 2, second_coord / 2) # НИЖНЯЯ ЛЕВАЯ ЧЕТВЕРТИНКА
right_down_side.mediaBox.upperLeft = (first_coord / 2, second_coord / 2) # НИЖНЯЯ ПРАВАЯ ЧЕТВЕРТИНКА
# print(f'{first_page=}\n\n')
# one_page = left_up_side.getContents()
# second_page = right_up_side.getContents()
# decode_one = DecodedStreamObject()
# print(f'{decode_one.getData()}')
# print(f'{decodeStreamData(second_page)}')
# print(f'ПОСЛЕ ОБРЕЗКИ:\n{type(left_up_side)=}\n{left_up_side=}\n')
# print(f'{left_up_side.extractText().encode("utf8")=} {type(left_up_side.extractText())=}')
# print(f'{right_up_side.extractText().encode("utf8")=} {type(right_up_side.extractText())=}')
# print(f'{left_up_side.getContents()=} {type(left_up_side.getContents())=}')
# print(f'{right_up_side.getContents()=} {type(right_up_side.getContents())=}')
# print(f'\nПОСЛЕ ОБРЕЗКИ:\n{type(left_up_side)=}\n{left_up_side=}\n')
# print(f'\nПОСЛЕ ОБРЕЗКИ:\n{type(right_up_side)=}\n{right_up_side=}\n')
pdf_writer = PyPDF2.PdfFileWriter()
# pdf_writer.addPage(first_page)
pdf_writer.addPage(left_up_side)
pdf_writer.addPage(right_up_side)
with open(output_file, 'wb') as file:
pdf_writer.write(file)
file.close()
def fitz_four_piaces(input_file, output_file):
input_file = str(input_file.absolute())
src = fitz.open(input_file)
doc = fitz.open() # empty output PDF
page = 0
for spage in src: # for each page in input
r = spage.rect # input page rectangle
d = fitz.Rect(spage.cropbox_position, # CropBox displacement if not
spage.cropbox_position) # starting at (0, 0)
# --------------------------------------------------------------------------
# example: cut input page into 2 x 2 parts
# --------------------------------------------------------------------------
r1 = r / 2 # top left rect
r2 = r1 + (r1.width, 0, r1.width, 0) # top right rect
r3 = r1 + (0, r1.height, 0, r1.height) # bottom left rect
r4 = fitz.Rect(r1.br, r.br) # bottom right rect
rect_list = [r1, r2, r3, r4] # put them in a list
for rx in rect_list: # run thru rect list
count = 0 # почему-то не считает
rx += d # add the CropBox displacement
# print(f'{rx=}')
page = doc.new_page(-1, # new output page with rx dimensions
width=rx.width,
height=rx.height)
page.show_pdf_page(
page.rect, # fill all new page with the image
src, # input document
spage.number, # input page number
clip=rx, # which part to use of input page
)
# print(f'{spage.number=}')
# text_in_page = page.get_text("text")#.encode("utf8")
# print(f'{text_in_page=}')
# print(f'{count=} {doc.get_page_text(doc.page_count - 1)=}')
# print(f'in cicle {doc.page_count - 1=}')
count += 1
# that's it, save output file
# print(f'{doc.metadata=}')
# print(f'{doc.page_count=}')
doc.save(output_file, #
garbage=3, # eliminate duplicate objects
deflate=True, # compress stuff where possible
)
# input_file2 = str(output_file.absolute())
# src2 = fitz.open(input_file2)
# print(f'{src2.page_count=}')
# for page in src2:
# print(f'{page.get_text("words")=}')
def fitz_four_piaces_read(input_file):
input_file = str(input_file.absolute())
src = fitz.open(input_file)
print(f'{src.page_count=}')
for page in src:
print(f'{page.get_text("text")=}')
destination = Path().joinpath("MAKETS")
destination.mkdir(parents=True, exist_ok=True)
destination_input = destination.joinpath(
f'up_lef.pdf') # up_lef_up_rig_low_lef_low_rig
destination_output = destination.joinpath(
f'output_a6_{random.randint(1, 100)}_{random.randint(1, 200)}.pdf') # f'output_a6_{random.randint(1, 100)}_{random.randint(1, 200)}.pdf'
# from_a4_to_a6_not_sync(destination_input, destination_output)
fitz_four_piaces(destination_input, destination_output)
fitz_four_piaces_read(destination_output)
</code></pre>
|
<p>Solution found! It is necessary after dividing the page into 4 parts, convert the resulting pages into pictures and then compare the size. I will share the code, maybe it will be useful to someone)</p>
<pre><code>import os
import fitz
def get_size(filename):
st = os.stat(filename)
return st.st_size
async def from_a4_to_a6(input_file, output_file):
input_file = str(input_file.absolute())
src = fitz.open(input_file)
doc = fitz.open() # empty output PDF
for spage in src: # for each page in input
r = spage.rect # input page rectangle
d = fitz.Rect(spage.cropbox_position, # CropBox displacement if not
spage.cropbox_position) # starting at (0, 0)
# --------------------------------------------------------------------------
# example: cut input page into 2 x 2 parts
# --------------------------------------------------------------------------
r1 = r / 2 # top left rect
r2 = r1 + (r1.width, 0, r1.width, 0) # top right rect
r3 = r1 + (0, r1.height, 0, r1.height) # bottom left rect
r4 = fitz.Rect(r1.br, r.br) # bottom right rect
rect_list = [r1, r2, r3, r4] # put them in a list
for rx in rect_list: # run thru rect list
rx += d # add the CropBox displacement
page = doc.new_page(-1, # new output page with rx dimensions
width=rx.width,
height=rx.height)
page.show_pdf_page(
page.rect, # fill all new page with the imageb
src, # input document
spage.number, # input page number
clip=rx, # which part to use of input page
)
# Here we will convert the pdf to an image and check the size
pix = page.get_pixmap() # render page to an image
name_png = f"page-{page.number}.png" # _{random.randint(1,100)}
pix.save(name_png) # store image as a PNG
imgsize = get_size(name_png)
os.remove(name_png)
if imgsize < 1300: # A6 blank page size approximately 1209 Yours may be different, check first
doc.delete_page(pno=-1)
break
doc.save(output_file,
garbage=4, # eliminate duplicate objects
clean=True,
deflate=True, # compress stuff where possible
)
</code></pre>
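<p>Since <code>from_a4_to_a6</code> above is declared <code>async</code>, it needs an event loop to run. A minimal sketch of how it could be invoked from a plain script (the file names are placeholders):</p>
<pre><code>import asyncio
from pathlib import Path

input_file = Path("input_a4.pdf")    # placeholder input path
output_file = Path("output_a6.pdf")  # placeholder output path

asyncio.run(from_a4_to_a6(input_file, output_file))
</code></pre>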
|
python|pdf
| 1 |
1,904,120 | 10,585,864 |
NER naive algorithm
|
<p>I never really dealt with NLP but had an idea about NER which should NOT have worked and somehow DOES exceptionally well in one case. I do not understand why it works, why doesn't it work or weather it can be extended. </p>
<p>The idea was to extract names of the main characters in a story through:</p>
<ol>
<li>Building a dictionary for each word</li>
<li>Filling for each word a list with the words that appear right next to it in the text</li>
<li>Finding for each word a word with the max correlation of lists (meaning that the words are used similarly in the text)</li>
<li>Given that one name of a character in the story, the words that are used like it, should be as well (Bogus, that is what should not work but since I never dealt with NLP until this morning I started the day naive) </li>
</ol>
<p>I ran the overly simple code (attached below) on <a href="http://www.umich.edu/~umfandsf/other/ebooks/alice30.txt" rel="nofollow">Alice in Wonderland</a>, which for "Alice" returns:</p>
<blockquote>
<p>21 ['Mouse', 'Latitude', 'William', 'Rabbit', 'Dodo', 'Gryphon', 'Crab', 'Queen', 'Duchess', 'Footman', 'Panther', 'Caterpillar', 'Hearts', 'King', 'Bill', 'Pigeon', 'Cat', 'Hatter', 'Hare', 'Turtle', 'Dormouse']</p>
</blockquote>
<p>Though it filters for upper case words (and receives "Alice" as the word to cluster around), originally there are ~500 upper case words, and it's still pretty spot on as far as <a href="http://en.wikipedia.org/wiki/Alice%27s_Adventures_in_Wonderland#Characters" rel="nofollow">main characters</a> goes.</p>
<p>It does not work that well with other characters and in other stories, though gives interesting results. </p>
<p>Any idea if this idea is usable, extendable or why does it work at all in this story for "Alice" ?</p>
<p>Thanks!</p>
<pre><code>#English Name recognition
import re
import sys
import random
from string import upper
def mimic_dict(filename):
dict = {}
f = open(filename)
text = f.read()
f.close()
prev = ""
words = text.split()
for word in words:
m = re.search("\w+",word)
if m == None:
continue
word = m.group()
if not prev in dict:
dict[prev] = [word]
else :
dict[prev] = dict[prev] + [word]
prev = word
return dict
def main():
if len(sys.argv) != 2:
print 'usage: ./main.py file-to-read'
sys.exit(1)
dict = mimic_dict(sys.argv[1])
upper = []
for e in dict.keys():
if len(e) > 1 and e[0].isupper():
upper.append(e)
print len(upper),upper
exclude = ["ME","Yes","English","Which","When","WOULD","ONE","THAT","That","Here","and","And","it","It","me"]
exclude = [ x for x in exclude if dict.has_key(x)]
for s in exclude :
del dict[s]
scores = {}
for key1 in dict.keys():
max = 0
for key2 in dict.keys():
if key1 == key2 : continue
a = dict[key1]
k = dict[key2]
diff = []
for ia in a:
if ia in k and ia not in diff:
diff.append( ia)
if len(diff) > max:
max = len(diff)
scores[key1]=(key2,max)
dictscores = {}
names = []
for e in scores.keys():
if scores[e][0]=="Alice" and e[0].isupper():
names.append(e)
print len(names), names
if __name__ == '__main__':
main()
</code></pre>
|
<p>From the looks of your program and previous experience with NER, I'd say this "works" because you're not doing a proper evaluation. You've found "Hare" where you should have found "March Hare".</p>
<p>The difficulty in NER (at least for English) is not finding the names; it's detecting their full extent (the "March Hare" example); detecting them even at the start of a sentence, where all words are capitalized; classifying them as person/organisation/location/etc.</p>
<p>Also, <em>Alice in Wonderland</em>, being a children's novel, is a rather easy text to process. Newswire phrases like "Microsoft CEO Steve Ballmer" pose a much harder problem; here, you'd want to detect</p>
<pre><code>[ORG Microsoft] CEO [PER Steve Ballmer]
</code></pre>
|
python|nlp
| 7 |
1,904,121 | 10,844,099 |
Finding the biggest number associated with the value in a dictionary
|
<p>I am working on a dictionary that maps names to votes received. I need associate the name with the most votes, assigning it to the variable win. </p>
<p>So far:</p>
<pre><code>vote = {}
for key in vote:
vote(max(key)) = win
</code></pre>
<p>How do I associate win to the name, because I believe my error now is that I am associating it to the highest number.</p>
<p>Thank you for your help.</p>
|
<p>The usual way would be</p>
<pre><code>win = max(vote, key=vote.get)
</code></pre>
<p>You could also use a Counter</p>
<pre><code>from collections import Counter
win, _ = Counter(vote).most_common(1)[0]
</code></pre>
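<p>For instance, with a small hypothetical tally, both approaches pick the name with the most votes:</p>
<pre><code>vote = {'alice': 10, 'bob': 7, 'carol': 3}  # hypothetical tallies

win = max(vote, key=vote.get)
print(win)  # alice
</code></pre>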
|
python
| 3 |
1,904,122 | 10,799,573 |
Matplotlib imshow - Displaying different colours
|
<p>I have n matrices (np.array) of floats and I want to plot them together using imshow but with each one having a different colour range for its values. e.g. n = white->blue, n+1 = white -> red etc.
Is there a way of doing this?</p>
<p>The matrices are of the same size, and colouring over each other is not too much of an issue as the majority of the matrices' values are 0 (hope that will be white).</p>
<p>I was thinking of something like:</p>
<p>1st matrix</p>
<pre><code>000
010
000
</code></pre>
<p>2nd matrix</p>
<pre><code>000
000
001
</code></pre>
<p>So I thought maybe I could convert the second matrix into:</p>
<pre><code>222
222
223
</code></pre>
<p>and then make 0->1 white to blue and 2->3 white to red.</p>
<p>I unfortunately have no idea how to do this with the matplotlib colormaps.</p>
|
<p><code>imshow</code> will not plot values that are set to <code>None</code>. If the data are sparse enough you can lay them on top of each other.</p>
<pre><code>import numpy as np
import pylab as plt
# Your example data
A1 = np.zeros((3,3))
A2 = np.zeros((3,3))
A1[1,1] = 1
A2[2,2] = 1
# Apply a mask to filter out unused values
A1[A1==0] = None
A2[A2==0] = None
# Use different colormaps for each layer
pwargs = {'interpolation':'nearest'}
plt.imshow(A1,cmap=plt.cm.jet,**pwargs)
plt.imshow(A2,cmap=plt.cm.hsv,**pwargs)
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/vYy1L.png" alt="enter image description here"></p>
|
python|numpy|matplotlib
| 6 |
1,904,123 | 5,213,421 |
Are there any sandboxable scripting engines which can be integrated with PHP/Python/other?
|
<p>I'm performing a thought-experiment which, judging by other questions, isn't so novel after all, but I think I'll proceed anyway. I want to sandbox a user-supplied server-side script by, among other things, confining it to a virtual filesystem and setting the root directory, and further mapping certain virtual directories to specific physical ones, inconsistent with the actual directory structure. For example (using PHP string parsing), my preconception is "~$user/..." but the less-semantic "/$user/..." would work fine too; either might map to "users/$user/$script_name/data/...". Yes, under certain circumstances multiple users can be affected by the script.</p>
<p>Since this is a thought-experiment and I therefore don't consider the implementation language an issue, I'm expecting to do it on my localhost and would rather use PHP than install something else. (I also have Python 2 available, and could get mod_wsgi to use it instead. I'd install Python 3 if I had to.) Ideally, I wish a PEAR module would do this - but from what I can see none does.</p>
<p>I tried and failed to find a server module, such as SSJS, that could accomplish this. The closest things to answers that I found were << <a href="https://stackoverflow.com/questions/5095988/looking-for-a-locked-down-script-interpreter">Looking for a locked down script interpreter</a> >> and << <a href="https://stackoverflow.com/questions/1970195/allowing-users-to-script-inline-what-inline-scripting-engines-are-there-for-eith">Allowing users to script inline, what inline scripting engines are there for either .net or java?</a> >>. I'll move to Java or, less likely, Mono if I absolutely have to, but I'm not enthusiastic to the idea. (I'm extremely rusty on Java, and have hardly used it server-side at all. Mono is totally alien to me.)</p>
<p>Since they're the most promising options so far, I also wonder how extensive the sandboxing facilities are in Java and Mono. For example, can they do virtual filesystems? Entering APIs from Java user-code into the engine? Are any standard APIs offered to the script, and if so can they be removed?</p>
<p><strong>Clarification</strong>
I don't really care which way this goes, but I was actually expecting Java/Mono to be the implementation platform rather than the sandboxed one, based on the questions & answers I linked. I'm a little surprised to see that flipped in the answers, but either way works.</p>
|
<p>I have never tried to truly sandbox Mono but this might give you a starting point:</p>
<p><a href="http://www.mono-project.com/MonoSandbox" rel="nofollow">http://www.mono-project.com/MonoSandbox</a></p>
<p>File system access in the sandbox is touched on in that link.</p>
<p>Popular choices for <a href="http://www.mono-project.com/Scripting_With_Mono" rel="nofollow">Mono scripting</a> seem to be <a href="http://boo.codehaus.org/" rel="nofollow">Boo</a> and <a href="http://ironpython.net/" rel="nofollow">Python</a>. Both ship with the latest version of Mono (<a href="http://www.mono-project.com/Release_Notes_Mono_2.10" rel="nofollow">2.10</a>). <a href="http://msdn.microsoft.com/en-us/vbasic/default" rel="nofollow">Visual Basic, </a><a href="http://ironruby.net/" rel="nofollow">Ruby</a> and <a href="http://msdn.microsoft.com/en-us/fsharp/default" rel="nofollow">F#</a> (OCaml-ish) do as well.</p>
<p>The <a href="http://www.mono-project.com/CSharp_Compiler" rel="nofollow">Mono C# compiler</a> can be easily embedded as a service for scripting. Here is a <a href="http://tirania.org/blog/archive/2011/Feb-24.html" rel="nofollow">nice article about it</a>.</p>
<p>If you are partial to PHP, you should check out <a href="http://phalanger.codeplex.com/" rel="nofollow">Phalanger</a>.</p>
<p>There are many other choices. There are new .NET based scripting languages all the time. I came across <a href="http://blogs.lessthandot.com/index.php/DesktopDev/MSTech/machete-a-scripting-runtime-for" rel="nofollow">this one</a> earlier today.</p>
|
java|php|python|mono|security
| 1 |
1,904,124 | 62,873,341 |
PYTHON file open using list
|
<pre><code>file_1=open("file.txt")
lis=[]
for i in file_1:
lis=i
print(lis)
print(lis)
</code></pre>
<p>I want to keep my text file into the list, but after for loop the list becomes empty</p>
|
<p>The comment posted explains the issue with the code. However, python has a built-in function for this purpose:</p>
<pre><code>file_1=open("file.txt")
lis = file_1.readlines()
</code></pre>
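<p>If you also want the file closed for you and the trailing newlines stripped, a <code>with</code> block plus <code>splitlines()</code> is the idiomatic variant (a sketch, assuming the same <code>file.txt</code>):</p>
<pre><code>with open("file.txt") as file_1:
    lis = file_1.read().splitlines()  # one list entry per line, without '\n'
print(lis)
</code></pre>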
|
python|list|file
| 0 |
1,904,125 | 61,998,686 |
Check maximum and minimum ylim and set_ylim for all subplots
|
<p>I have 7 subplots for each day of week. I want to have all of them the same size based on maximum range founded in plots and minimum range founded in plot.</p>
<p>I used something like this:</p>
<pre><code>fig, ax = plt.subplots(4, 2)
min_ylim = math.inf
max_ylim = 0
#...
#iterate over ax plots, draw a plot and check ylim
#...
plt.setp(ax, ylim=(min_ylim, max_ylim)) #apply to all subplots
</code></pre>
<p>Is there any simple method setting automatically for all plots the same y ax limitation?</p>
|
<p>You can pass <code>sharey=True</code> to <code>subplots()</code> so that all subplots share the same y limits</p>
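<p>A minimal sketch for the 4 x 2 grid in the question:</p>
<pre><code>import matplotlib.pyplot as plt

# sharey=True gives every subplot the same y-axis limits automatically
fig, ax = plt.subplots(4, 2, sharey=True)
# ... draw into each ax[i, j] as before ...
plt.show()
</code></pre>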
|
python|matplotlib
| 1 |
1,904,126 | 61,945,733 |
How to use OLSResults.f_test with experimental groups in python
|
<p>I'm trying to do an F-test of equality of coefficient for the three experimental groups I have in my data. </p>
<p>I've run a regression to evaluate the results of a random control trial that included four groups, G1, G2, G3 and control. </p>
<p>Now I need to determine that the experimental groups (G1, G2, G3) are equal. </p>
<p>I know I can do this using Statsmodel's OLSResults.f_test. But I am unclear on how to configure it. The website gives examples, but I'm not sure how to translate it: <a href="https://www.statsmodels.org/stable/generated/statsmodels.regression.linear_model.OLSResults.f_test.html" rel="nofollow noreferrer">https://www.statsmodels.org/stable/generated/statsmodels.regression.linear_model.OLSResults.f_test.html</a></p>
<p>The example given there is:</p>
<pre><code>from statsmodels.datasets import longley
from statsmodels.formula.api import ols
dta = longley.load_pandas().data
formula = 'TOTEMP ~ GNPDEFL + GNP + UNEMP + ARMED + POP + YEAR'
results = ols(formula, dta).fit()
hypotheses = '(GNPDEFL = GNP), (UNEMP = 2), (YEAR/1829 = 1)'
f_test = results.f_test(hypotheses)
print(f_test)
</code></pre>
<p>How do I essentially write the below hypotheses so that I can check whether my 3 experimental groups are different?</p>
<pre><code>hypotheses = '(G1=G2), (G1=G3), (G2=G3)'
</code></pre>
|
<p>We can use the iris example:</p>
<pre><code>from statsmodels.formula.api import ols
import pandas as pd
df = pd.read_csv("http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data",
header=None,names=["s_wid","s_len","p_wid","p_len","species"])
df.species.unique()
array(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'], dtype=object)
</code></pre>
<p>There's 3 categories in species and we can fit a model like you did:</p>
<pre><code>formula = 's_len ~ species'
results = ols(formula, df).fit()
</code></pre>
<p>If we look at the results:</p>
<pre><code>results.summary()
Dep. Variable: s_len R-squared: 0.392
Model: OLS Adj. R-squared: 0.384
Method: Least Squares F-statistic: 47.36
Date: Sat, 23 May 2020 Prob (F-statistic): 1.33e-16
Time: 01:07:39 Log-Likelihood: -49.688
No. Observations: 150 AIC: 105.4
Df Residuals: 147 BIC: 114.4
Df Model: 2
Covariance Type: nonrobust
coef std err t P>|t| [0.025 0.975]
Intercept 3.4180 0.048 70.998 0.000 3.323 3.513
species[T.Iris-versicolor] -0.6480 0.068 -9.518 0.000 -0.783 -0.513
species[T.Iris-virginica] -0.4440 0.068 -6.521 0.000 -0.579 -0.309
</code></pre>
<p>If your model consist of only the groups, like the above, then the F-statistic (47.36) and p.value (1.33e-16) is what you need. This F-test test this model against an intercept only model. </p>
<p>A more detailed explanation: the model is fitted with <code>Iris-setosa</code> as reference, and the other two species effects on sepal length, <code>s_len</code> , are calculated as coefficients with respect to <code>Iris-setosa</code>. If we look at the mean, this becomes clear:</p>
<pre><code>df.groupby('species')['s_len'].mean()
Iris-setosa 3.418
Iris-versicolor 2.770
Iris-virginica 2.974
</code></pre>
<p>In this case, the hypothesis is Iris-versicolor = 0 and Iris-virginica=0, so that the groups are all equal:</p>
<pre><code>hypotheses = '(species[T.Iris-versicolor] = 0), (species[T.Iris-virginica] = 0)'
results.f_test(hypotheses)
<class 'statsmodels.stats.contrast.ContrastResults'>
<F test: F=array([[47.3644614]]), p=1.327916518456957e-16, df_denom=147, df_num=2>
</code></pre>
<p>Now, you can see this is exactly the same as the F-statistic provided in the summary.</p>
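<p>Translated back to the question: assuming your fitted result object is called <code>results</code>, your model was fitted with a formula like <code>outcome ~ group</code>, the control group is the reference level, and the coefficients are therefore named <code>group[T.G1]</code> etc. (check <code>results.summary()</code> for the exact names), the joint test of equality would be a sketch like:</p>
<pre><code># two restrictions are enough to test G1 = G2 = G3
hypotheses = '(group[T.G1] = group[T.G2]), (group[T.G1] = group[T.G3])'
f_test = results.f_test(hypotheses)
print(f_test)
</code></pre>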
|
python|statsmodels|hypothesis-test
| 1 |
1,904,127 | 60,412,967 |
UndefinedTable: relation does not exist
|
<p>I'm running with the <code>python 3.5.2</code> and trying to run basic commands with <code>postgres</code>.</p>
<p>I'm connecting to the postgres docker server:</p>
<pre><code>conn = psycopg2.connect(user='postgres', password='docker', host='127.0.0.1', port='5432')
cursor = conn.cursor()
</code></pre>
<p>For the <strong>first</strong> <strong>time</strong> I'm creating a new table:</p>
<pre><code>sql = '''CREATE TABLE ''' + self.table_name + '''(
FIRST_NAME CHAR(20) NOT NULL,
LAST_NAME CHAR(20),
AGE INT,
SEX CHAR(6),
INCOME FLOAT
)'''
cursor.execute(sql)
</code></pre>
<p>And inserting rows with:</p>
<pre><code>def add_emloyee(name, last, age, sex, income):
sql = '''INSERT INTO ''' + self.table_name + '''(FIRST_NAME,LAST_NAME, AGE,SEX,INCOME)
VALUES (%s, %s, %s, %s, %s)'''
cursor.execute(sql, (name, last, age, sex, income))
</code></pre>
<p>Everything works fine.
But when I run the application again (after closing it), comment out the create-table commands (because the table was created in the previous run), and try to insert new data into the table, I get this error:</p>
<pre><code>psycopg2.errors.UndefinedTable: relation "employee" does not exist
LINE 1: INSERT INTO EMPLOYEE(FIRST_NAME,LAST_NAME, AGE,SEX,INCOME)
</code></pre>
<p>Why does this happen?</p>
|
<p>In my case, I was using <code>flask_sqlalchemy</code> with <code>postgres</code>.</p>
<p>I forgot to give <code>create_all</code> in the flask shell which gave me the same error.</p>
<pre><code>db.create_all()
</code></pre>
<p>Hope this helps someone.</p>
|
python|postgresql
| 0 |
1,904,128 | 71,166,582 |
Leave -One-out kfold for a linear regression in Python
|
<p>I am trying to run a leave-one-one kfold validation on a linear regression model I have but keep getting errors with my script leaving with nan values at the end. x7 is my true values and y7 is my modeled values. Why do I keep getting an error at the end?</p>
<pre><code> from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LinearRegression
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
x7 =np.array([16.36,24.67,52.31,87.31,3.98,63.45,40.47,35.67,52.12,9.39,57.61,35.77,113.1])
a=np.reshape(x7, (-1,1))
y7 = np.array([19.678974,4.824257,75.617537,62.587548,40.287506,76.576852,38.777129,29.062245
,50.088907,34.415783,46.466144,44.848378,68.988740])
b=np.reshape(y7, (-1,1))
a_train, a_test, b_train, b_test = train_test_split(x7, y7, test_size=12,
random_state=None)
train_test_split(b, shuffle=True)
kfolds = KFold(n_splits=13, random_state=None)
model = LinearRegression()
score = cross_val_score(model, a, b, cv=kfolds)
print(score)
</code></pre>
|
<p>If you run it, you will see the error:</p>
<pre><code>UndefinedMetricWarning: R^2 score is not well-defined with less than two samples.
</code></pre>
<p>When you don't provide the metric, it defaults to the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression.score" rel="nofollow noreferrer">default scorer</a> for LinearRegression, which is R^2. R^2 cannot be calculated for just 1 sample.</p>
<p>In your case, check out the <a href="https://scikit-learn.org/stable/modules/model_evaluation.html" rel="nofollow noreferrer">options</a> and decide which one is suitable. one option might be to use RMSE (here it is the negative of RMSE) :</p>
<pre><code>score = cross_val_score(model, a, b, cv=kfolds,scoring ="neg_mean_squared_error")
score
array([ -191.24253413, -1196.96087661, -849.60502864, -17.24243385,
-371.71996402, -623.67802306, -21.95720802, -163.79409063,
-2.16490531, -62.32600883, -29.3290439 , -19.44669535,
-315.64087633])
</code></pre>
|
python|scikit-learn|sklearn-pandas|leave-one-out
| 0 |
1,904,129 | 70,134,152 |
ERROR: No matching distribution found for kivy_deps.sdl2_dev~=0.3.1
|
<p><strong>Hello, I am trying to install kivy with python 3.10</strong> (the latest as of this question). It is giving me an error:<br />
<code> ERROR: No matching distribution found for kivy_deps.sdl2_dev~=0.3.1</code></p>
<p>Then it also tries to install Kivy-1.11.1.tar.gz and I get more errors out of that.</p>
<p><a href="https://i.stack.imgur.com/lazGI.png" rel="nofollow noreferrer">Screenshot{errors(kivy)}</a></p>
|
<p>It's because at this time Kivy only supports Python 3.6-3.9.</p>
<p>Here is what I see on Kivy page</p>
<p><a href="https://i.stack.imgur.com/zbLwc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zbLwc.png" alt="enter image description here" /></a></p>
<p>So, you can install a virtual environment of Python 3.9 and try again. It worked for me.</p>
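<p>For example, on Windows with the <code>py</code> launcher and Python 3.9 installed, creating and using such an environment could look like this (a sketch; the launcher and paths depend on your setup):</p>
<pre><code>py -3.9 -m venv kivy_venv
kivy_venv\Scripts\activate
pip install kivy
</code></pre>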
|
python|installation|kivy|python-3.10
| 0 |
1,904,130 | 11,346,415 |
Python 3, NetworkX - Graph node key list is not displaying repeated edges
|
<p>I'm using the NetworkX module on Python 3.2.3. In a multigraph G with multiple edges between two nodes - say, 'a' and 'b' with three edges between them - typing G['a'].keys() into the IDLE prompt returns a dict_keys list with 'b' occurring only once in it. Any way to make it so that 'b' occurs as many times as there are edges between the two nodes?</p>
|
<p>Something like</p>
<pre><code>[(k, len(v)) for k, v in G['a'].items()]
</code></pre>
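<p>A minimal sketch with three parallel edges between <code>'a'</code> and <code>'b'</code>:</p>
<pre><code>import networkx as nx

G = nx.MultiGraph()
G.add_edges_from([('a', 'b'), ('a', 'b'), ('a', 'b'), ('a', 'c')])

# one (neighbour, edge count) pair per neighbour
print([(k, len(v)) for k, v in G['a'].items()])  # [('b', 3), ('c', 1)]

# or repeat each neighbour once per parallel edge
print([k for k, v in G['a'].items() for _ in range(len(v))])  # ['b', 'b', 'b', 'c']
</code></pre>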
|
python|graph|python-3.x|networkx
| 1 |
1,904,131 | 63,410,223 |
Data not being added to database despite having conn.commit()
|
<p>first time posting here so I apologize for poor formatting. I recently starting to use SQLite3 and have looked at tutorials/instructions but I can't seem to get it working, specifically, adding data doesn't commit it to the database.</p>
<pre><code>import random, sqlite3
conn = sqlite3.connect('card.s3db')
cur = conn.cursor()
cur.execute("""CREATE TABLE card(
id INT,
number TEXT,
pin TEXT,
balance INT DEFAULT 0
);""")
conn.commit()
card_number = ''.join([str(i) for i in number_list_copy])
pin = ''.join(["{}".format(random.randint(0,9)) for i in range(0, 4)])
cur.execute("INSERT INTO card (number, pin) VALUES (?, ?)", (card_number, pin))
conn.commit()
</code></pre>
<p>I omitted some code but essentially card_number is just a string. When I execute</p>
<pre><code>cur.fetchall()
</code></pre>
<p>or</p>
<pre><code>cur.fetchone()
</code></pre>
<p>I get an empty list or <em>none</em> respectively. I've also tried different ways to insert the data and gone through some threads on here.</p>
<p>Thanks!</p>
|
<p>An <code>insert</code> modifies the data but has no result.</p>
<p>To get data from the database you have to use a <a href="https://sqlite.org/lang_select.html" rel="nofollow noreferrer"><code>select</code></a> query.</p>
<p>For example :</p>
<pre><code>cur.execute("SELECT number, pin FROM card")
the_data = cur.fetchall()
</code></pre>
<p>As a side note, you probably want to make the <code>id</code> column a primary key :</p>
<pre><code>id INTEGER PRIMARY KEY
</code></pre>
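<p>A sketch of the table definition with that change applied — SQLite then assigns <code>id</code> automatically when you omit it in the <code>INSERT</code>:</p>
<pre><code>cur.execute("""CREATE TABLE IF NOT EXISTS card(
    id INTEGER PRIMARY KEY,
    number TEXT,
    pin TEXT,
    balance INT DEFAULT 0
);""")
conn.commit()
</code></pre>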
|
python-3.x|sqlite
| 0 |
1,904,132 | 63,483,963 |
How can I do a vlookup in a new column with Python?
|
<p>I am trying to create a new column in df1 with the "soll" values from df2 by the key "mittlere Wind". I've tried doing a vlookup and it works when I do it in a new df. Can someone help me by doing it in a new column in an existing df?</p>
<p>This works just fine:</p>
<pre><code>results=df1.merge(df2,on='mittlere Wind', how='left')
</code></pre>
<p>When I try this, I get an error:</p>
<pre><code>df1['soll']=df1['mittlere Leistung'].merge(df2['soll'],on='mittlere Wind', how='left')
Error: AttributeError: 'Series' object has no attribute 'merge'
</code></pre>
|
<p>You must make the right-hand side of '=' a Series in order to avoid bringing in unnecessary columns.
Refer to the code below: the <code>['C']</code> appended to the merge selects that single column as a Series.</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame( { 'A' : [i for i in range(1,11,2)],
'B' : [3,5,2,4,3]})
df2 = pd.DataFrame( { 'A' : [i for i in range(1,11)],
'B' : [i for i in range(1,11)],
'C' : [4,5,7,9,20,4,5,3,4,10]})
df1['C'] = df1.merge(df2, on='A', how='left')['C']
</code></pre>
|
python|pandas|merge|vlookup
| 0 |
1,904,133 | 55,771,412 |
How to convert .npy(numpy) file to .png file where .npy array shape (30,256,256)?
|
<p>I want to convert .npy file to .png file </p>
<pre><code>from scipy.misc import toimage, imsave
img_array = np.load('MRNet-v1.0/train/sagittal/0003.npy')
print(img_array.shape)
name = "img"+str(i)+".png"
imsave(name,img_array)
</code></pre>
<p>shape : (30,256,256)</p>
<p>But getting error like </p>
<p>ValueError: 'arr' does not have a suitable array shape for any mode.</p>
|
<p>First of all, these <code>scipy</code> image tools are <strong>deprecated</strong> and will be removed in the future (beginning at scipy version 1.2.0). Instead, install <code>imageio</code> and then run:</p>
<pre><code>import imageio
for i in range(30):
    imageio.imwrite("./slice_{0}.png".format(i), img_array[i, ...])
</code></pre>
|
python|numpy|matplotlib|scipy|numpy-ndarray
| 1 |
1,904,134 | 56,775,798 |
Change version of Python via Anaconda Prompt
|
<p>I am trying to download python for a colleague via Anaconda. We are trying to download the right version as to maintain compatibility with Power BI. We're looking to implement Python visuals from within Power BI. By default, 3.7 is downloaded. I have read that 3.7 is not supported within Power BI. I need to downgrade to 3.6.5 (my version that I have confirmed works).</p>
<p>We have tried opening the anaconda prompt and running
<code>conda install python=3.6.5</code></p>
<p>When running the above, we're told all of the upgrades and downgrades that will take place; we accept them and they all render to 100%, but the command ends with "[Errno 13] Permission denied: 'C:\Users\NAME\AppData\Local\Continuum\anaconda3\python.exe'"</p>
<p>After the "change" we run:</p>
<pre><code>import sys
print(sys.version)
</code></pre>
<p>and it still returns 3.7.3</p>
<p>Should we be attempting to change the version of python from within anaconda prompt? Command prompt? Do we need to uninstall anaconda altogether and specify a particular version of python to install? Will uninstalling anaconda uninstall python?</p>
<p>THANK YOU!</p>
|
<p>You can run it this way, so each version of python has its own anaconda environment (this will install all of anaconda's default packages in this environment as well):</p>
<pre><code>conda create --name python3.6.5 python=3.6.5 anaconda
</code></pre>
<p>And when you want to use this version of python and all the installed packages do:</p>
<pre><code>conda activate python3.6.5
</code></pre>
<p>To just create the environment without anaconda's default packages:</p>
<pre><code>conda create --name python3.6.5 python=3.6.5
</code></pre>
|
python|anaconda|powerbi|command-prompt|conda
| 1 |
1,904,135 | 70,009,064 |
unable to import TA-Lib python
|
<p>I've a problem regarding the installation of <a href="https://mrjbq7.github.io/ta-lib/" rel="nofollow noreferrer">TA-Lib</a>, I've followed both the documentation and this tutorial (<a href="https://blog.quantinsti.com/install-ta-lib-python/" rel="nofollow noreferrer">https://blog.quantinsti.com/install-ta-lib-python/</a>) but both don't seem to work. The <a href="https://i.stack.imgur.com/4XMSC.png" rel="nofollow noreferrer">module</a> is present but when I try to check in <a href="https://i.stack.imgur.com/eWfkv.png" rel="nofollow noreferrer">vscode</a> if the import works it just doesn't. But it does work on <a href="https://i.stack.imgur.com/ToE7Q.png" rel="nofollow noreferrer">anaconda prompt</a> so I don't know what the problem might be here.</p>
|
<p>You can create a virtual environment, install TA-Lib inside the virtual environment, then open the folder in VS Code and run it. I think that should work.</p>
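<p>A rough sketch of those steps from a terminal (assuming the underlying TA-Lib C library or a matching prebuilt wheel is available for your Python version):</p>
<pre><code>python -m venv talib-env
talib-env\Scripts\activate
pip install TA-Lib
</code></pre>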
|
python|visual-studio-code|anaconda|ta-lib
| 0 |
1,904,136 | 17,779,103 |
Is there a python web browser that supports user login?
|
<p>I want to login to a website via python, is there a browser or some wrapper that could do the job.</p>
|
<p>Websites determine if someone is logged in or not using what are called cookies. In order to masquerade as a logged in user, all you need to do is to supply the Cookie in your request headers. Any python module to handle web requests can do this. Below is sample code using <a href="http://docs.python-requests.org/en/latest/" rel="nofollow">requests</a></p>
<pre><code>import requests
url = 'www.targetsite.com/page/i/need/'
headers = { 'Connection' : 'keep-alive',
'Header' : 'header-value',
'Cookie' : 'CookieFieldName=CookieValue'
}
r = requests.get(url, headers=headers)
print r.content
</code></pre>
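<p>If you don't have a cookie yet, you can usually obtain one by posting the login form through a <code>requests.Session</code>, which stores and re-sends cookies for you. A sketch — the URL and form field names are hypothetical and depend on the site:</p>
<pre><code>import requests

session = requests.Session()
# hypothetical login endpoint and form field names
session.post('http://www.targetsite.com/login',
             data={'username': 'me', 'password': 'secret'})

# the session re-sends any cookies set during login
r = session.get('http://www.targetsite.com/page/i/need/')
print(r.content)
</code></pre>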
|
python|browser
| 2 |
1,904,137 | 18,136,651 |
using subprocess.call method
|
<p>Python version: '2.6.5 (r265:79063, Oct 1 2012, 22:07:21) \n[GCC 4.4.3]'</p>
<p>I have this:</p>
<pre><code>>>> ss = subprocess.call("ls -z", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</code></pre>
<p>How do I get the error message now ?</p>
<p>This does not work:</p>
<pre><code>>>> for i in subprocess.PIPE:
... print i
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not iterable
</code></pre>
|
<p>That combo of options doesn't work very well. Communicate() will read stdout, stderr and wait for the program to terminate. </p>
<pre><code>proc = subprocess.Popen("ls -z", shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
assert proc.returncode == 0, 'i am sad'
print out
print err
</code></pre>
<p>If you want to read stdout line by line as the program runs, you can create your own thread to babysit stderr or pipe stderr to stdout.</p>
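<p>On newer Python versions (3.7+ for <code>capture_output</code> and <code>text</code>), <code>subprocess.run</code> does the same in one call — a sketch:</p>
<pre><code>import subprocess

proc = subprocess.run("ls -z", shell=True, capture_output=True, text=True)
print(proc.returncode)  # non-zero here, since -z is not a valid ls option
print(proc.stdout)
print(proc.stderr)
</code></pre>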
|
python
| 2 |
1,904,138 | 18,177,700 |
How do recursive calls work (sierpinksi triangle)?
|
<p>I came across a program that draws the Sierpinski Triangle with recursion.
How I interpret this code is sierpinski1 is called until n == 0, and then only 3 small triangles (one triangle per call) would be drawn because n == 0 is the only case when something is drawn (panel.canvas.create_polygon). However, this is not how the code works because when run the number of triangles dependent upon n are drawn, not just the 3 small triangles I think would show.</p>
<p>Can someone explain to me how many things can be drawn when the function sierpinski1 only has 1 condition for when something can be drawn? That is the one part of the program that I can't understand. I looked up everything I could on recursion, but no information pertained to explaining why this format of recursion works.</p>
<pre><code>def sierpinski(n):
x1 = 250
y1 = 120
x2 = 400
y2 = 380
x3 = 100
y3 = 380
panel = DrawingPanel(500,500)
sierpinski1(n,x1,y1,x2,y2,x3,y3,panel)
def sierpinski1(n,x1,y1,x2,y2,x3,y3,panel):
if n == 0:
panel.canvas.create_polygon(x1,y1,x2,y2,x3,y3, fill = 'yellow', outline = 'black')
else:
sierpinski1(n-1,x1,y1,(x1+x2)/2,(y1+y2)/2,(x1+x3)/2,(y1+y3)/2, panel)
sierpinski1(n-1,(x1+x3)/2,(y1+y3)/2,(x2+x3)/2,(y2+y3)/2,x3,y3,panel)
sierpinski1(n-1,(x1+x2)/2,(y1+y2)/2,x2,y2,(x2+x3)/2,(y2+y3)/2,panel)
</code></pre>
|
<p>This is the principle of how recursion works: there is a <strong>base case</strong> and there is a <strong>recursive case</strong>. Since recursion makes use of a LIFO structure (such as a call stack), we have to know when to stop adding calls to the stack.</p>
<p>The base case:</p>
<ul>
<li>Occurs when <code>n == 0</code></li>
<li>Performs the actual drawing action</li>
<li>Means that there are no more triangles to be generated, so it's okay to start drawing them.</li>
</ul>
<p>The recursive case:</p>
<ul>
<li>Occurs when <code>n > 0</code> (and strictly speaking, when <code>n < 0</code>)</li>
<li>Makes three distinct calls to itself, each with varying values for x1, x2, y1, and y2.</li>
<li>Means that there are still more triangles to be generated.</li>
</ul>
<p>Think of it like this. The number of triangles to be drawn is given by this formula T:</p>
<p><img src="https://i.stack.imgur.com/Y1EQU.gif" alt="T of x = 3 to the n if n is greater than 0, 0 otherwise."></p>
<p>This holds for simple triangles: If n = 1, then there's only three triangles drawn. If n = 2, then 9 are drawn, and so forth.</p>
<p><strong>Why will it work?</strong> The <strong>call stack</strong> plays a big role in this.</p>
<p>For brevity, here's a trace of n = 1:</p>
<pre><code>sierpinski1(n,x1,y1,x2,y2,x3,y3,panel)
condition n = 0 FAILS
sierpinski1(n-1,x1,y1,(x1+x2)/2,(y1+y2)/2,(x1+x3)/2,(y1+y3)/2, panel)
condition n = 0 PASSES
panel.canvas.create_polygon(x1,y1,x2,y2,x3,y3, fill = 'yellow', outline = 'black')
sierpinski1(n-1,(x1+x3)/2,(y1+y3)/2,(x2+x3)/2,(y2+y3)/2,x3,y3,panel)
condition n = 0 PASSES
panel.canvas.create_polygon(x1,y1,x2,y2,x3,y3, fill = 'yellow', outline = 'black')
sierpinski1(n-1,(x1+x2)/2,(y1+y2)/2,x2,y2,(x2+x3)/2,(y2+y3)/2,panel)
condition n = 0 PASSES
panel.canvas.create_polygon(x1,y1,x2,y2,x3,y3, fill = 'yellow', outline = 'black')
</code></pre>
<p>So, for n = 1, there are exactly three lines drawn. For higher values of <code>n</code>, things get trickier to see at a pseudocode high level, but the same principle applies.</p>
|
python|recursion
| 3 |
1,904,139 | 68,935,428 |
Create hierarchical Json from nested dictory in python
|
<p>I have nested dictionary like below (it is cosmos gremlin output), which I want to convert in to a hierarchical json in python. I want to use this in D3.js for creating a hierarchical tree. Can you pls let me know how to solve this? Appreciate your response.</p>
<p>example nested dictionary:</p>
<pre><code>{"Root_Level":{"key":"Root_Level","value":{
"Operation":{
"key":"Operation",
"value":{}
},
"Technology":{
"key":"Technology",
"value":{
"Top Management":{
"key":"Top Management",
"value":{
"Associate Product Lead":{
"key":"Associate Product Lead",
"value":{
"Associate Architect":{
"key":"Associate Architect",
"value":{
"Principal Consultant":{
"key":"Principal Consultant",
"value":{}
}}}}}}}}}}}}
</code></pre>
<p>expected output:</p>
<pre><code>{ "name": "Root_Level", "children":[ { "name": "Operation", "children": [] }, { "name": "Technology", "children":[ { "name": "Top Management", "children": [ { "name": "Associate Product Lead" ,"children": [ { "name": "Associate Architect" ,"children": [ {
"name": "Principal Consultant", "children": [] }]}]}]}]}]}
</code></pre>
|
<p>You can use the function below to get what you want.</p>
<p><code>result = tree(data)[0]</code> — the <code>[0]</code> is needed because <code>tree</code> returns a list whose first element is the tree you want.</p>
<pre><code>def tree(data):
result = []
if isinstance(data, dict):
for v in data.values():
temp = {'name': v['key']}
temp['children'] = tree(v['value'])
result.append(temp)
return result
data = {"Root_Level":{"key":"Root_Level","value":{ "Operation":{ "key":"Operation", "value":{} }, "Technology":{ "key":"Technology", "value":{ "Top Management":{ "key":"Top Management", "value":{ "Associate Product Lead":{ "key":"Associate Product Lead", "value":{ "Associate Architect":{ "key":"Associate Architect", "value":{ "Principal Consultant":{ "key":"Principal Consultant", "value":{} }}}}}}}}}}}}
result = tree(data)[0]
print(result)
</code></pre>
<p>Output</p>
<pre><code>{'name': 'Root_Level',
'children': [{'name': 'Operation', 'children': []},
{'name': 'Technology',
'children': [{'name': 'Top Management',
'children': [{'name': 'Associate Product Lead',
'children': [{'name': 'Associate '
'Architect',
'children': [{'name': 'Principal '
'Consultant',
'children': []}]}]}]}]}]}
</code></pre>
|
python|json|d3.js|hierarchical-data
| 2 |
1,904,140 | 68,294,585 |
Python pandas scatterplot of year against month-day
|
<p>So I have a dataframe with a date column.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-06-17</td>
</tr>
<tr>
<td>2020-06-20</td>
</tr>
</tbody>
</table>
</div>
<p>What I want to do is to do a scatterplot with the x-axis being the year, and the y-axis being month-day. So what I have already is this:</p>
<h2><a href="https://i.stack.imgur.com/78k7W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/78k7W.png" alt="enter image description here" /></a></h2>
<p>What I would like is for the y-axis ticks to be the actual month-day values and not the day number for the month-day-year. Not sure if this is possible, but any help is much appreciated.</p>
|
<p>Some Sample Data:</p>
<pre><code>import pandas as pd
from matplotlib import pyplot as plt, dates as mdates
# Some Sample Data
df = pd.DataFrame({
'date': pd.date_range(
start='2000-01-01', end='2020-12-31', freq='D'
)
}).sample(n=100, random_state=5).sort_values('date').reset_index(drop=True)
</code></pre>
<p>Then one option would be to normalize the dates to the same year. Any year works as long as it's a leap year to handle the possibility of a February 29th (leap day).</p>
<p>This becomes the new y-axis.</p>
<pre><code># Create New Column with all dates normalized to same year
# Any year works as long as it's a leap year in case of a Feb-29
df['month-day'] = pd.to_datetime('2000-' + df['date'].dt.strftime('%m-%d'))
# Plot DataFrame
ax = df.plot(kind='scatter', x='date', y='month-day')
# Set Date Format On Axes
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y')) # Year Only
ax.yaxis.set_major_formatter(mdates.DateFormatter('%m-%d')) # No Year
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/jN5c1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jN5c1.png" alt="plot" /></a></p>
|
python|pandas
| 1 |
1,904,141 | 59,235,282 |
How can I assign slices of one structured Numpy array to another?
|
<p>I have two numpy structured arrays <code>arr1</code>, <code>arr2</code>.
<br>
<code>arr1</code> has fields <code>['f1','f2','f3']</code>.
<br>
<code>arr2</code> has fields <code>['f1','f2','f3','f4']</code>.
<br>
I.e.:</p>
<pre><code>arr1 = [[f1_1_1, f2_1_1, f3_1_1 ], arr2 = [[f1_2_1, f2_2_1, f3_2_1, f4_2_1 ],
[f1_1_2, f2_1_2, f3_1_2 ], [f1_2_2, f2_2_2, f3_2_2, f4_2_2 ],
... , ... ,
[f1_1_N1, f2_1_N1, f3_1_N1]] [f1_2_N2, f2_2_N2, f3_2_N2, f4_2_N2]]
</code></pre>
<p>I want to assign various slices of <code>arr1</code> to the corresponding slice of <code>arr2</code> (slices in the indexes and in the fields).
See below for the various cases.</p>
<p>From answers I found (to <em>related</em>, but not exactly the same, questions) it seemed to me that the only way to do it is assigning one slice at a time, for a single field, i.e., something like</p>
<pre><code>arr2['f1'][0:1] = arr1['f1'][0:1]
</code></pre>
<p>(and I can confirm this works), looping over all source fields in the slice.</p>
<p><strong>Is there a way to assign all intended source fields in the slice at a time?</strong></p>
<p><hr>
I mean to assign, say, the elements <code>x</code> in the image</p>
<p><strong>Case 1</strong> (only some fields in <code>arr1</code>)</p>
<pre><code>arr1 = [[ x , x , f3_1_1 ], arr2 = [[ x , x , f3_2_1, f4_2_1 ],
[ x , x , f3_1_2 ], [ x , x , f3_2_2, f4_2_2 ],
... , ... ,
[f1_1_N1, f2_1_N1, f3_1_N1]] [f1_2_N2, f2_2_N2, f3_2_N2, f4_2_N2]]
</code></pre>
<p><strong>Case 2</strong> (all fields in <code>arr1</code>)</p>
<pre><code>arr1 = [[ x , x , x ], arr2 = [[ x , x , x , f4_2_1 ],
[ x , x , x ], [ x , x , x , f4_2_2 ],
... , ... ,
[f1_1_N1, f2_1_N1, f3_1_N1]] [f1_2_N2, f2_2_N2, f3_2_N2, f4_2_N2]]
</code></pre>
<p><strong>Case 3</strong>
<br>
<code>arr1</code> has fields <code>['f1','f2','f3','f5']</code>.
<br>
<code>arr2</code> has fields <code>['f1','f2','f3','f4']</code>.
<br>
Assign a slice of <code>['f1','f2','f3']</code></p>
<hr>
<p>Sources:</p>
<p><a href="https://stackoverflow.com/questions/3058602/python-numpy-structured-array-recarray-assigning-values-into-slices">Python Numpy Structured Array (recarray) assigning values into slices</a></p>
<p><a href="https://stackoverflow.com/questions/50028309/convert-a-slice-of-a-structured-array-to-regular-numpy-array-in-numpy-1-14">Convert a slice of a structured array to regular NumPy array in NumPy 1.14</a></p>
|
<p>You can do it, for example, like this:</p>
<pre><code>import numpy as np
x = np.array([('Rex', 9, 81.0), ('Fido', 3, 27.0)], dtype=[('name', 'U10'), ('age', 'i4'), ('weight', 'f4')])
y = np.array([('Carl', 10, 75.0), ('Joe', 7, 76.0)], dtype=[('name2', 'U10'), ('age2', 'i4'), ('weight', 'f4')])
print(x[['name', 'age']])
print(y[['name2', 'age2']])
# multiple field indexing
y[['name2', 'age2']] = x[['name', 'age']]
print(y[['name2', 'age2']])
# you can also use slicing if you want specific parts or the size does not match
y[:1][['name2', 'age2']] = x[1:][['name', 'age']]
print(y[:][['name2', 'age2']])
</code></pre>
<p>The field names can be different; I am not sure about the dtypes and whether there is (down)casting.</p>
<p><a href="https://docs.scipy.org/doc/numpy/user/basics.rec.html#assignment-from-other-structured-arrays" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/user/basics.rec.html#assignment-from-other-structured-arrays</a></p>
<p><a href="https://docs.scipy.org/doc/numpy/user/basics.rec.html#accessing-multiple-fields" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/user/basics.rec.html#accessing-multiple-fields</a></p>
|
arrays|numpy|slice|structured-array
| 1 |
1,904,142 | 73,167,690 |
How use variable defined in one module in another module?
|
<p>I want to pass a variable from <strong>fileA.py</strong> to <strong>fileB.py</strong> but I don't know how to use (import) it in <strong>fileB.py</strong>.</p>
<p>I have this situation (frames folder and <code>file.A</code> are at same level)</p>
<pre><code>frames
|
folder
|
fileB.py
fileA.py
</code></pre>
<p>In the <code>fileA.py</code>i have an image and i want to pass it in <code>fileB.py</code></p>
<pre><code>immagine= cv2.imread('image.jpg')
os.chdir("frames/folder")
subprocess.call(["python", "fileB.py", 'immagine'])
</code></pre>
<p>I think this works well but I don't know how to import <code>immagine</code> in <code>fileB.py</code>.
Maybe I should use:</p>
<pre><code>from ..fileA.py from immagine
</code></pre>
<p>but it does not work and I get this error: <strong>ImportError: attempted relative import with no known parent package</strong></p>
<p>I hope you can help me... I'm really new to Python (as an IDE I use PyCharm, which I also use to install modules - of course <code>fileB.py</code> and <code>fileA.py</code> aren't modules but normal python files)</p>
|
<p>I can think of two solutions, one using your <code>subprocess</code> call and another one with <code>import</code>:</p>
<p><strong>Easiest solution</strong></p>
<p>You can treat them as console command parameters and read them with <code>sys.argv</code>.</p>
<p>For example in <code>fileB.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from sys import argv
var = argv[1]
print(var)
</code></pre>
<p>Should output: <code>immagine</code></p>
<p>Note that command-line arguments are plain strings, so this passes the literal text <code>immagine</code>, not the image itself; to share the actual image, pass its file path and read it again with <code>cv2.imread</code> in <code>fileB.py</code>.</p>
<p><strong>Arguably best solution</strong></p>
<p>Another way of doing it is to make both <code>fileA.py</code> and <code>fileB.py</code> part of the same module, even if <code>fileB.py</code> is a different submodule. For example:</p>
<pre><code>mymodule
| | |
| | fileA.py
| |
| folder
| |
| fileB.py
|
__init__.py (this marks mymodule as a moduke)
</code></pre>
<p>And then in <code>fileB.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from mymodule.fileA import immagine
</code></pre>
<blockquote>
<p><strong>Take into account that you must run fileA with <code>python -m mymodule.fileA</code> for this to work</strong>.</p>
</blockquote>
|
python
| 1 |
1,904,143 | 63,003,341 |
Receiving ValueError: invalid recstyle object
|
<p>I am following the instructions of a pygame project on youtube, <a href="https://www.youtube.com/watch?v=XGf2GcyHPhc" rel="nofollow noreferrer">this is the video</a>, I am on the project called "Snake", and in the description of the video, you can find what time it starts and the actual code. I am about 10 minutes into the video. I will show you the code that I have written so far:</p>
<pre><code># Snake project on python
import math
import random
import pygame
import tkinter as tk
from tkinter import messagebox
class cube(object):
rows = 0
w = 0
def __init__(self, start,dirnx=1, dirny=0, colour=(255,0,0)):
pass
def move(self, dirnx, dirny):
pass
def draw(self, surface, eyes=False):
pass
class snake(object):
def __init__(self, color, pos):
pass
def move(self):
pass
def reset(self, pas):
pass
def addCube(self):
pass
def draw(self, surface):
pass
def drawGrid(w, rows, surface):
sizeBtwn = w // rows
x = 0
y = 0
for l in range(rows):
x = x + sizeBtwn
y = y + sizeBtwn
pygame.draw.line(surface, (255,255,255), (x,0), (x,w))
pygame.draw.line(surface, (255,255,255), (0,y), (w,y))
def redrawWindow(surface):
global rows, width
surface.fill(0,0,0)
drawGrid(width, rows, surface)
pygame.display.update()
def randomSnack(rows, items):
pass
def message_box(subject, content):
pass
def main():
global width, rows
width = 500
rows = 20
win = pygame.display.set_mode((width, width))
s = snake((255, 0, 0), (10, 10))
flag = True
clock = pygame.time.Clock()
while flag:
pygame.time.delay(50)
clock.tick(10)
redrawWindow(win)
main()
</code></pre>
<p>When I run the code the tutorial says I should be getting a grid (shown in 55:32 of the video). Instead, I am receiving this message:</p>
<pre><code>Windows PowerShell
Try the new cross-platform PowerShell https://aka.ms/pscore6
PS C:\Users\holca\Desktop\Snake> & C:/Users/holca/AppData/Local/Programs/Python/Python38-32/python.exe c:/Users/holca/Desktop/Snake/snake.py
pygame 1.9.6
Hello from the pygame community. https://www.pygame.org/contribute.html
Traceback (most recent call last):
File "c:/Users/holca/Desktop/Snake/snake.py", line 77, in <module>
main()
File "c:/Users/holca/Desktop/Snake/snake.py", line 74, in main
redrawWindow(win)
File "c:/Users/holca/Desktop/Snake/snake.py", line 51, in redrawWindow
surface.fill(0,0,0)
ValueError: invalid rectstyle object
PS C:\Users\holca\Desktop\Snake>
</code></pre>
<p>I am using VSCode, and I have 2 warning messages talking about some unused variables, but I doubt they have to do anything with the problem. Am I missing something?</p>
|
<p>You have to specify the color by a tuple:</p>
<p><s><code>surface.fill(0,0,0)</code></s></p>
<pre class="lang-py prettyprint-override"><code>surface.fill( (0, 0, 0) )
</code></pre>
<p>Explanation:</p>
<p>The first argument of <a href="https://www.pygame.org/docs/ref/surface.html#pygame.Surface.fill" rel="nofollow noreferrer"><code>fill()</code></a> is the color (tuple). The other arguments are optional. The 2nd argument is an optional rectangle specifying area to be filled. The 3rd arguments are optional flags.</p>
<p>Hence the instruction <code>surface.fill(0,0,0)</code> can be read as:</p>
<pre class="lang-py prettyprint-override"><code>surface.fill(0, rect=0, special_flags=0)
</code></pre>
<p>The 2nd argument (<code>rect=0</code>) causes the error</p>
<blockquote>
<p>ValueError: invalid rectstyle object</p>
</blockquote>
|
python|tkinter|pygame|valueerror
| 1 |
1,904,144 | 59,029,992 |
Using gitpython, how can I checkout a certain Git commit ID?
|
<p>Checking out a branch works well, but I also need to check out a certain commit ID in a given Git repository.</p>
<p>I mean the equivalent of</p>
<pre><code>git clone --no-checkout my-repo-url my-target-path
cd my-target-path
git checkout my-commit-id
</code></pre>
<p>How can I do this with gitpython?</p>
|
<pre class="lang-py prettyprint-override"><code>repo = git.Repo.clone_from(repo_url, repo_path, no_checkout=True)
repo.git.checkout(commit_id)
</code></pre>
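<p>Put together, a minimal sketch (URL, path and commit id are placeholders) — note that checking out a commit id leaves the working tree in a detached-HEAD state:</p>
<pre><code>import git

repo_url = 'https://github.com/user/my-repo.git'  # placeholder
repo_path = 'my-target-path'                      # placeholder
commit_id = 'abc1234'                             # placeholder

repo = git.Repo.clone_from(repo_url, repo_path, no_checkout=True)
repo.git.checkout(commit_id)
</code></pre>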
<p>You can find more details in the <a href="https://gitpython.readthedocs.io/en/stable/tutorial.html" rel="noreferrer">documentation</a>.</p>
|
python-3.x|gitpython
| 13 |
1,904,145 | 73,269,841 |
Leetcode jumpgame recursive approach
|
<p>Given the following problem:</p>
<blockquote>
<p>You are given an integer array nums. You are initially positioned at the array's first index, and each element in the array represents your maximum jump length at that position.</p>
<p>Return true if you can reach the last index, or false otherwise.</p>
</blockquote>
<p><strong>Example 1:</strong></p>
<blockquote>
<p>Input: <code>nums = [2,3,1,1,4]</code></p>
<p>Output: <code>True</code></p>
<p>Explanation: Jump 1 step from index 0 to 1, then 3 steps to the last index.</p>
</blockquote>
<p><strong>Example 2:</strong></p>
<blockquote>
<p>Input: <code>nums = [3,2,1,0,4]</code></p>
<p>Output: <code>False</code></p>
<p>Explanation: You will always arrive at index 3 no matter what. Its maximum jump length is 0, which makes it impossible to reach the last index.</p>
</blockquote>
<p>I am trying to come up with a recursive solution. This is what I have so far. I am not looking for the optimal solution. I am just trying to solve using recursion for now. If n[i] is 0 I want the loop to go back to the previous loop and continue recursing, but I can't figure out how to do it.</p>
<pre><code>def jumpGame(self, n: []) -> bool:
if len(n) < 2:
return True
for i in range(len(n)):
for j in range(1, n[i]+1):
next = i + j
return self.jumpGame(n[next:])
return False
</code></pre>
|
<p>When considering recursive solutions, the first thing you should consider is the 'base case', followed by the 'recursive case'. The base case is just 'what is the smallest form of this problem for which I can determine an answer', and the recursive is 'can I get from some form n of this problem to some form n - 1'.</p>
<p>That's a bit pedantic, but lets apply it to your situation. What is the base case? That case is if you have a list of length 1. If you have a list of length 0, there is no last index and you can return false. That would simply be:</p>
<pre><code>if len(ls) == 0:
return False
if len(ls) == 1:
return True
</code></pre>
<p>Since we don't care what is in the last index, only at arriving at the last index, we know these if statements handle our base case.</p>
<p>Now for the recursive step. Assuming you have a list of length n, we must consider how to reduce the size of the problem. This is by making a 'jump', and we know that we can make a jump equal to a length up to the value of the current index. Then we just need to test each of these lengths. If any of them return <code>True</code>, we succeed.</p>
<pre><code>any(jump_game(n[jump:]) for jump in range(1, n[0] + 1))
</code></pre>
<p>There are two mechanisms we are using here to make this easy. <code>any</code> takes in a sequence and quits as soon as one value is <code>True</code>, returning <code>True</code>. If none of them are true, it will return <code>False</code>. The second is a list slice, <code>n[jump:]</code> which takes a slice of a list from the index <code>jump</code> to the end. This might result in an empty list.</p>
<p>Putting this together we get:</p>
<pre><code>def jump_game(n: list) -> bool:
# Base cases
if len(n) == 0:
return False
if len(n) == 1:
return True
# Recursive case
return any(jump_game(n[jump:]) for jump in range(1, n[0] + 1))
</code></pre>
<p>The results:</p>
<pre><code>>>> jump_game([2,3,1,1,4])
True
>>> jump_game([3,2,1,0,1])
False
>>> jump_game([])
False
>>> jump_game([1])
True
</code></pre>
<p>I'm trying to lay out the rigorous approach here, because I think it helps to clarify where recursion goes wrong. In your recursive case you do need to iterate through your options - but that is only one loop, not the two you have. In your solution, in <em>each</em> recursion, you're iterating (<code>for i in range(len(n))</code>) through the entire list. So, you're really hiding an iterative solution inside a recursive one. Further, your base case is wrong, because a list of length 0 is considered a valid solution - but in fact, only a list of length 1 should return a True result.</p>
<p>What you should focus on for recursion is, again, solving the smallest possible form(s) of the problem. Here, it is if the list is one or zero length long. Then, you need to step each other possible size of the problem (length of the list) to a base case. We know we can do that by examining the first element, and choosing to jump anywhere up to that value. This gives us our options. We try each in turn until a solution is found - or until the space is exhausted and we can confidently say there is no solution.</p>
|
python|algorithm|recursion|nested-loops
| 0 |
1,904,146 | 15,735,440 |
Python - Updating contents of option menu
|
<p>So the problem I am currently having is that I want to update the second option menu, based on what the user selected in the first. I think I have to use a lambda function here to make it so that the frame updates or something, but I am unsure of how exactly to do this.
Here is my code so far:</p>
<pre><code>from tkinter import *
import time
class CustomerEntryForm(Frame):
def __init__(self):
Frame.__init__(self)
self.master.title("Customer Entry form:")
self.pack()
execute = True
thirtyMonthList = [4,6,9,11]
thirtyOneMonthList = [1,2,6,7,8,10,12]
monthList = []
dayList = []
for i in range(1,13):
monthList.append(i)
initialMonth = IntVar(self)
initialMonth.set(monthList[0])
initialDay = IntVar(self)
def resetDayOptionMenu():
for i in range(1,len(dayList)+1):
dayList.remove(i)
def setDayList():
resetDayOptionMenu()
if initialMonth.get() == 2:
for i in range(1, 29):
dayList.append(i)
initialDay.set(dayList[0])
elif initialMonth.get() in thirtyMonthList:
for i in range(1, 31):
dayList.append(i)
initialDay.set(dayList[0])
elif initialMonth.get() in thirtyOneMonthList:
for i in range(1, 32):
dayList.append(i)
initialDay.set(dayList[0])
self.om2 = OptionMenu(self, initialMonth, *monthList, command = setDayList())
self.om2.grid(row=0)
self.om = OptionMenu(self, initialDay, *dayList)
self.om.grid(row=1)
root = CustomerEntryForm()
root.mainloop()
</code></pre>
<p>I appreciate any help.
Thanks. </p>
|
<p>It would be easier to remove and then just add the second <code>OptionMenu</code> field after the month changes.</p>
<p>Like this:</p>
<pre><code> ...
thirtyMonthList = [4,6,9,11]
initialMonth = IntVar(self)
initialMonth.set(1)
initialDay = IntVar(self)
initialDay.set(1)
def removeDayOptionMenu():
self.om.destroy()
def setDayList(event):
removeDayOptionMenu()
if initialMonth.get() == 2:
addDayOptionMenu(range(1,29))
elif initialMonth.get() in thirtyMonthList:
addDayOptionMenu(range(1,31))
else:
addDayOptionMenu(range(1,32))
def addDayOptionMenu(dayList):
self.om = OptionMenu(self, initialDay, *dayList)
self.om.grid(row=1)
self.om2 = OptionMenu(self, initialMonth, *range(1,12), command = setDayList)
self.om2.grid(row=0)
self.om = OptionMenu(self, initialDay, *range(1,32))
self.om.grid(row=1)
</code></pre>
|
python|user-interface|widget|tkinter|optionmenu
| 2 |
1,904,147 | 59,665,654 |
split a string on a character when it's in the middle of the string
|
<p>I want to split a string on a character when it's in the middle of the string</p>
<p>Example :</p>
<pre><code>string_1 = '-1232-1412'
string_2 = '1234-1243'
</code></pre>
<p>I want to have this :</p>
<pre><code>output_string_1 = ['-1232','1412']
output_string_2 = ['1234','1243']
</code></pre>
<p>I tried this :</p>
<pre><code>import re
In[1]: re.split(r'\w-',string_1)
Out[1]: ['-123', '1412']
</code></pre>
<p>but it deleted the last character before '-'</p>
|
<p>One way could be to split the string asserting not the start of the string <code>^</code> and after matching <code>-</code> asserting not the end of the string <code>$</code> using a negative lookahead <code>(?!</code></p>
<pre><code>(?!^)-(?!$)
</code></pre>
<p><a href="https://regex101.com/r/Ppp5Qg/1/" rel="nofollow noreferrer">Regex demo</a> | <a href="https://ideone.com/9bS9gU" rel="nofollow noreferrer">Python demo</a></p>
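<p>Applied to the strings from the question, a quick sketch:</p>
<pre><code>import re

string_1 = '-1232-1412'
string_2 = '1234-1243'

print(re.split(r'(?!^)-(?!$)', string_1))  # ['-1232', '1412']
print(re.split(r'(?!^)-(?!$)', string_2))  # ['1234', '1243']
</code></pre>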
|
python|regex|python-3.x|split
| 2 |
1,904,148 | 49,204,509 |
Python3 While Loop jumping to else statement
|
<p>I'm in my first couple of weeks of programming. I am trying to make a function that asks for a user's input, checks whether or not that input is alphabetic, and if it is alphabetic, breaks the while loop.</p>
<p>If I run the script and enter the correct types in the input it breaks and works fine. If I intentionally put a number in the input field it doesn't break out of the loop when I reenter the correct type.</p>
<p>Any ideas on what I'm doing wrong?</p>
<pre><code>def wallType():
wall_type = input("What type of wall was the route on? ")
while wall_type:
#if there is text in the input,instead of an alphanumeric, then tell them they must put in a number.
if str.isalpha(wall_type):
return wall_type
else:
print("You entered a number, you must enter a word. Try again. ")
wallType()
</code></pre>
|
<p>Put your code in a <code>while True</code> loop. When input received is alphabetic, use <code>break</code> to break out of <code>while</code> loop. Else loop again to ask for input.</p>
<pre><code>def wallType():
    while True:
        wall_type = input("What type of wall was the route on? ")
        # string is alphabetic, break out of loop
        if str.isalpha(wall_type):
            break
        else:
            print("Please reenter an alphabetic word. Try again. ")
    # return the accepted value, as the original function did
    return wall_type
</code></pre>
|
python|input
| 0 |
1,904,149 | 70,952,865 |
What is the cleanest to completely exit a locust process if test prerequisites fail?
|
<p>I have an 'on_test_start' hook that tests for the existence of some environmental variables before the test starts:</p>
<pre><code>@events.test_start.add_listener
def on_test_start(**_kwargs):
env_vars_present = check_vars.check_existence_of_env_vars(["VAR1", "VAR2"])
if not env_vars_present:
sys.exit()
</code></pre>
<p>This works, but I don't like using <code>sys.exit()</code> which is a rather dirty way to quit a process. Is there a better alternative? I've tried raising an exception, but this doesn't stop the runners. I've also tried <code>environment.runner.quit()</code>. This quits the runners, but locust process continues until run-time limit is reached. What's the cleanest way to exit the process if some prerequisite checks fail? Thanks!</p>
|
<p>You have the right idea with <code>environment.runner.quit()</code> you just need to have that happen on the master, not the workers. You need the workers to communicate the failure to the master and then the master can can quit, which will stop the master and all the workers.</p>
<p>You could use other Event Hooks: for example, <a href="http://docs.locust.io/en/stable/api.html#locust.event.Events.request" rel="nofollow noreferrer">request</a> with a custom name you could check for failures, or <a href="http://docs.locust.io/en/stable/api.html#locust.event.Events.report_to_master" rel="nofollow noreferrer">report_to_master</a> to add extra data to the report payload sent to the master that you could check for and quit on. With either of those you'd use worker_report on the master to trigger your check and quit. Doing it this way could help get these steps into any test output data, so you could clearly see what the problem was without doing much custom reporting work.</p>
<p>Or along the same lines but maybe slightly cleaner and more direct would be to use the <a href="http://docs.locust.io/en/stable/running-distributed.html#communicating-across-nodes" rel="nofollow noreferrer">communication between nodes</a> to have the workers send a message directly to the master and have the master quit when that message was received. Code would look something like this:</p>
<pre><code>from locust import events
from locust.runners import MasterRunner

ENV = None

@events.init.add_listener
def quit_env_failure_message(environment, **_kwargs):
    global ENV
    ENV = environment
    if isinstance(environment.runner, MasterRunner):
        environment.runner.register_message('env_failure', quit_on_env_failure)

def quit_on_env_failure(msg, **kwargs):
    ENV.runner.quit()

@events.test_start.add_listener
def on_test_start(environment, **_kwargs):
    env_vars_present = check_vars.check_existence_of_env_vars(["VAR1", "VAR2"])
    if not env_vars_present:
        environment.runner.send_message('env_failure')
</code></pre>
<p>Doing it this way, you may need to do extra work to get whatever kind of test result reporting you'd want. That would most likely go in <code>quit_on_env_failure()</code>.</p>
|
python|locust
| 0 |
1,904,150 | 70,822,328 |
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: ValueError: I/O operation on closed file
|
<p><strong>Hi!</strong> I have a separate help command for my discord bot.</p>
<pre><code>class Help(commands.Cog):
def __init__(self,client):
self.client = client
self.eSantaLogo = discord.File("resources\embedImages\eSanta.png", filename="eSanta.png")
@commands.group(aliases=["help","commands"], invoke_without_command=True)
async def help_(self,ctx):
prefix = ''.join(get_prefix(self.client, ctx))
embed=discord.Embed(title="eSanta Command Groups",
url="https://esanta.me",
color=discord.Color.dark_purple(),
timestamp=datetime.datetime.now(),
description=f'Current server prefix: {prefix}')
embed.set_thumbnail(url='attachment://eSanta.png')
embed.set_footer(text=f"Requested by {ctx.author.name}")
embed.add_field(name='**Categories**',value=f"{prefix}help moderator\n{prefix}help fun", inline=True)
await ctx.send(file=self.eSantaLogo,embed=embed)
def setup(client):
client.add_cog(Help(client))
</code></pre>
<p>When I run it the first time, it works completely normally.</p>
<p><a href="https://i.stack.imgur.com/azb3H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/azb3H.png" alt="enter image description here" /></a></p>
<p>But when I try to run it again this exception is raised.</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\notri\AppData\Local\Programs\Python\Python39\lib\site-packages\discord\client.py", line 343, in _run_event
await coro(*args, **kwargs)
File "E:\bot\bot.py", line 104, in on_command_error
raise error
File "C:\Users\notri\AppData\Local\Programs\Python\Python39\lib\site-packages\discord\ext\commands\bot.py", line 939, in invoke
await ctx.command.invoke(ctx)
File "C:\Users\notri\AppData\Local\Programs\Python\Python39\lib\site-packages\discord\ext\commands\core.py", line 1353, in invoke
await super().invoke(ctx)
File "C:\Users\notri\AppData\Local\Programs\Python\Python39\lib\site-packages\discord\ext\commands\core.py", line 863, in invoke
await injected(*ctx.args, **ctx.kwargs)
File "C:\Users\notri\AppData\Local\Programs\Python\Python39\lib\site-packages\discord\ext\commands\core.py", line 94, in wrapped
raise CommandInvokeError(exc) from exc
discord.ext.commands.errors.CommandInvokeError: Command raised an exception: ValueError: I/O operation on closed file
</code></pre>
|
<p>From the documentation:</p>
<blockquote>
<p>File objects are single use and are not meant to be reused in multiple <code>abc.Messageable.send()s</code>.</p>
</blockquote>
<p>You should reload the file once it's sent, a pretty solution:</p>
<pre class="lang-py prettyprint-override"><code>class Help(commands.Cog):
def __init__(self,client):
self.client = client
@property
def eSantaLogo(self):
return discord.File("resources\embedImages\eSanta.png", filename="eSanta.png")
@commands.group(aliases=["help","commands"], invoke_without_command=True)
async def help_(self,ctx):
prefix = ''.join(get_prefix(self.client, ctx))
embed=discord.Embed(title="eSanta Command Groups",
url="https://esanta.me",
color=discord.Color.dark_purple(),
timestamp=datetime.datetime.now(),
description=f'Current server prefix: {prefix}')
embed.set_thumbnail(url='attachment://eSanta.png')
embed.set_footer(text=f"Requested by {ctx.author.name}")
embed.add_field(name='**Categories**',value=f"{prefix}help moderator\n{prefix}help fun", inline=True)
await ctx.send(file=self.eSantaLogo,embed=embed)

def setup(client):
    client.add_cog(Help(client))
</code></pre>
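<p>Because the <code>@property</code> re-opens the image on every access, each invocation of the command gets a fresh, unconsumed <code>discord.File</code>, so the command can now be run any number of times.</p>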
|
python-3.x|discord.py|python-3.9
| 2 |
1,904,151 | 68,019,221 |
How do I sum up values in a column into groups that match a given condition by date in pandas?
|
<p>I have a dataframe like this with age groups</p>
<pre><code> Date AgeGroup Quantity
1 2020-12-08 18 - 29 1
2 2020-12-08 30 - 49 4
3 2020-12-08 50 - 54 0
4 2020-12-08 55 - 59 1
5 2020-12-08 60 - 64 1
6 2020-12-08 65 - 69 0
7 2020-12-08 70 - 74 3
8 2020-12-08 75 - 79 2
9 2020-12-08 80+ 1
....
10 2020-12-09 18 - 29 0
11 2020-12-09 30 - 49 2
12 2020-12-09 50 - 54 1
13 2020-12-09 55 - 59 2
14 2020-12-09 60 - 64 3
15 2020-12-09 65 - 69 0
16 2020-12-09 70 - 74 1
17 2020-12-09 75 - 79 1
18 2020-12-09 80+ 1
</code></pre>
<p>I want to sum up the numbers for each date by three broader age groups like so:</p>
<pre><code>1 2020-12-08 18 - 59 6
2 2020-12-08 60+ 7
3 2020-12-08 75+ 3
...
4 2020-12-09 18 - 59 5
5 2020-12-09 60+ 5
6 2020-12-09 75+ 2
</code></pre>
<p>(the first age group includes the ages up to 59, the second one has the ages over 60 and the third one has the ages over 75+).</p>
<p>I have tried the following:</p>
<pre><code>df1 = df.loc[(df['AgeGroup'] == '18 - 29') | (df['AgeGroup'] == '30 - 49') | (df['AgeGroup'] == '50 - 54') | (df['AgeGroup'] == '55 - 59') , 'Quantity'].sum()
</code></pre>
<p>however, this lacks the breakdown by date as it just gives me the sum for these age groups for all dates.</p>
<p>I have also tried this.</p>
<pre><code>df.groupby(['Date', 'AgeGroup'])['Quantity'].sum()
Date AgeGroup
2020-12-08 18 - 29 1
30 - 49 4
50 - 54 0
55 - 59 2
60 - 64 1
65 - 69 0
70 - 74 3
75 - 79 2
2020-12-09 18 - 29 0
30 - 49 2
50 - 54 1
55 - 59 2
60 - 64 3
65 - 69 0
70 - 74 1
75 - 79 1
</code></pre>
<p>I still can't figure out how to combine these age groups within the dates. Thank you for any ideas.</p>
|
<p>You can extract the first numeric value with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>Series.str.extract</code></a>, compare it to <code>60</code>, and map to 2 groups with <code>np.where</code>:</p>
<pre><code>m = df['AgeGroup'].str.extract(r'(\d+)', expand=False).astype(int) < 60
df['AgeGroup'] = np.where(m, '18 - 59', '60+')
df1 = df.groupby(['Date', 'AgeGroup'])['Quantity'].sum()
print (df1)
Date AgeGroup
2020-12-08 18 - 59 7
60+ 6
2020-12-09 18 - 59 5
60+ 5
Name: Quantity, dtype: int64
</code></pre>
|
python-3.x|pandas|dataframe|sum|pandas-groupby
| 3 |
1,904,152 | 67,963,988 |
How do I remove custom information in a PNG image file that I've previously added?
|
<p>I store metadata in Pillow using <code>PngImageFile</code> and <code>PngInfo</code> from the <code>PngImagePlugin</code> module by following the code from Jonathan Feenstra in this answer: <a href="https://stackoverflow.com/questions/58399070/how-do-i-save-custom-information-to-a-png-image-file-in-python">How do I save custom information to a PNG Image file in Python?</a></p>
<p>The code is :</p>
<pre><code>from PIL.PngImagePlugin import PngImageFile, PngInfo
targetImage = PngImageFile("pathToImage.png")
metadata = PngInfo()
metadata.add_text("MyNewString", "A string")
metadata.add_text("MyNewInt", str(1234))
targetImage.save("NewPath.png", pnginfo=metadata)
targetImage = PngImageFile("NewPath.png")
print(targetImage.text)
>>> {'MyNewString': 'A string', 'MyNewInt': '1234'}
</code></pre>
<p>Now, I want to remove the additional metadata that was previously added to the image which is the text string. How do I remove the metadata on the previously added PNG image?</p>
|
<p>It seems the <code>text</code> attribute isn't preserved when doing a copy of <code>targetImage</code>. So, if you need the image without the additional metadata at runtime, just make a copy.</p>
<p>On the other hand, you can save <code>targetImage</code> again, but without using the <code>pnginfo</code> attribute. After opening, the <code>text</code> attribute is present, but empty. Maybe, in the <code>save</code> call, <code>pnginfo=None</code> is implicitly set!?</p>
<p>Here's some code for demonstration:</p>
<pre class="lang-py prettyprint-override"><code>from PIL.PngImagePlugin import PngImageFile, PngInfo
def print_text(image):
try:
print(image.text)
except AttributeError:
print('No text attribute available.')
targetImage = PngImageFile('path/to/your/image.png')
metadata = PngInfo()
metadata.add_text('MyNewString', 'A string')
metadata.add_text('MyNewInt', str(1234))
# Saving using proper pnginfo attribute
targetImage.save('NewPath.png', pnginfo=metadata)
# On opening, text attribute is available, and contains proper data
newPath = PngImageFile('NewPath.png')
print_text(newPath)
# {'MyNewString': 'A string', 'MyNewInt': '1234'}
# Saving without proper pnginfo attribute (implicit pnginfo=None?)
newPath.save('NewPath2.png')
# On opening, text attribute is available, but empty
newPath2 = PngImageFile('NewPath2.png')
print_text(newPath2)
# {}
# On copying, text attribute is not available at all
copyTarget = targetImage.copy()
print_text(copyTarget)
# No text attribute available.
</code></pre>
<pre class="lang-none prettyprint-override"><code>----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.19041-SP0
Python: 3.9.1
PyCharm: 2021.1.1
Pillow: 8.2.0
----------------------------------------
</code></pre>
|
python|text|python-imaging-library|png|metadata
| 1 |
1,904,153 | 64,111,997 |
How to save wfdb plot as an image in local directory?
|
<p>I want to be able to save wfdb ECG plots as an image file locally to be used for image processing processes. I am using Jupyter notebook on Anaconda, in python programming language.</p>
<p>I have tried plt.savefig() to save the image, and it does generate a file. However when I open the said file, there is no image.</p>
<pre><code>import wfdb
import matplotlib.pyplot as plt
record =wfdb.rdrecord(r'D:\User\qt-database-1.0.0\sel33', sampfrom=0, sampto=999, channels = [1])
#annotation = wfdb.rdann(r'D:\User\qt-database-1.0.0\sel33', 'atr', sampto=1000)
wfdb.plot_wfdb(record=record,
#annotation=annotation,
plot_sym=True,
time_units='seconds',
title='',
figsize=(20,3),
ecg_grids='')
plt.savefig("foo.png")
</code></pre>
<p>The file is also a continuous waveform, and I intend to create a set of images from the waveform. Is there any way to do this?</p>
<p>P.S. I did manually save the image using snipping tool, however I have found it to be inefficient and time consuming. A code-oriented solution would be a major help.</p>
|
<p>Within the <code>wfdb.plot_wfdb()</code> function there is an optional parameter, <code>return_fig</code>, which returns the matplotlib figure you are trying to save. If you pass <code>return_fig=True</code> so the call returns the figure, you should be able to save it.</p>
<pre><code>import wfdb
import matplotlib.pyplot as plt
record = wfdb.rdrecord(r'D:\User\qt-database-1.0.0\sel33', sampfrom=0, sampto=999, channels = [1])
#annotation = wfdb.rdann(r'D:\User\qt-database-1.0.0\sel33', 'atr', sampto=1000)
myFigure = wfdb.plot_wfdb(record=record,
                          #annotation=annotation,
                          plot_sym=True,
                          time_units='seconds',
                          title='',
                          figsize=(20,3),
                          ecg_grids='',
                          return_fig=True)
myFigure.savefig("foo.png")
</code></pre>
|
python|matplotlib|image-processing|waveform
| 0 |
1,904,154 | 72,335,217 |
Fastest way to convert CSV files from UTF-16 tabs, to UTF-8-SIG commas
|
<p>I have two 30 GB CSV files; each contains tens of millions of records.</p>
<p><a href="https://i.stack.imgur.com/R2l4K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R2l4K.png" alt="enter image description here" /></a></p>
<p>They are using tabs as delimiter, and are saved as UTF-16 (thanks, Tableau :-( )</p>
<p>I wish to convert these files to utf-8-sig with commas instead of tabs.</p>
<p>I tried the following code (some variables are declared earlier):</p>
<pre><code>csv_df = pd.read_csv(p, encoding=encoding, dtype=str, low_memory=False, error_bad_lines=True, sep='\t')
csv_df.to_csv(os.path.join(output_dir, os.path.basename(p)), encoding='utf-8-sig', index=False)
</code></pre>
<p>And I have also tried:
<a href="https://stackoverflow.com/questions/71508111/convert-utf-16-to-utf-8-using-python">Convert utf-16 to utf-8 using python</a></p>
<p>Both work slowly, and practically never finish.</p>
<p>Is there a better way to make the conversion? Maybe Python is not the best tool for this?</p>
<p>Ideally, I would love the data to be stored in a database, but I'm afraid this is not a plausible option at the moment.</p>
<p>Thanks!</p>
|
<p>I converted a 35GB file in two minutes. Note that you can optimize performance by changing the constants in the top lines.</p>
<pre><code>BUFFER_SIZE_READ = 1000000 # depends on available memory in bytes
MAX_LINE_WRITE = 1000 # number of lines to write at once
source_file_name = 'source_fle_name'
dest_file_name = 'destination_file_name'
source_encoding = 'file_source_encoding' # 'utf_16'
destination_encoding = 'file_destination_encoding' # 'utf_8'
BOM = True # True for utf_8_sig
lines_count = 0
def read_huge_file(file_name, encoding='utf_8', buffer_size=1000):
def read_buffer(file_obj, size=1000):
while True:
data = file_obj.read(size)
if data:
yield data
else:
break
source_file = open(file_name, encoding=encoding)
buffer_in = ''
for buffer in read_buffer(source_file, size=buffer_size):
buffer_in += buffer
lines = buffer_in.splitlines()
buffer_in = lines.pop()
        if len(lines):
            yield lines
    if buffer_in:
        yield [buffer_in]  # flush the final line, which may lack a trailing newline
def process_data(data):
def write_lines(lines_to_write):
with open(dest_file_name, 'a', encoding=destination_encoding) as dest_file:
if BOM and dest_file.tell() == 0:
dest_file.write(u'\ufeff')
dest_file.write(lines_to_write)
return ''
global lines_count
lines = ''
for line in data:
lines_count += 1
lines += (line + '\n')
if not lines_count % MAX_LINE_WRITE:
lines = write_lines(lines)
    if len(lines):
        write_lines(lines)  # write_lines opens the destination file itself
for buffer_data in read_huge_file(source_file_name, encoding=source_encoding, buffer_size=BUFFER_SIZE_READ):
process_data(buffer_data)
</code></pre>
|
python|pandas|utf-8|tableau-desktop
| 1 |
1,904,155 | 72,236,018 |
How to use Python to parse custom file into JSON format
|
<p>I am trying to parse a machine/software generated file type into a JSON file type for easy analysis with other software and other Python scripts. The file is structured similarly to a JSON file, but not automatically convertible as far as I can tell.</p>
<p>The file looks similar to this (.bpf filetype):</p>
<pre><code>PACKET fileName.bpf
STYLE 502
last_modified 1651620170 # Tue May 03 19:22:50 2022
STRUCTURE BuildInfo
PARAM Version
Value = 1128
ENDPARAM
PARAM build_height
Units = 1 # Inches
Value = 0.905512
ENDPARAM
PARAM build_time_s
Value = "3:22:53"
ENDPARAM
... # Parameters continue
ENDSTRUCTURE #BuildInfo only called once
STRUCTURE PartInfo
PARAM BandOffset
Category_mask = 65537
GUIName = "Stripe Offset"
Order = 38
Type = 3
Units = 1 # Inches
ZUnits = 1
profile_z= 0.000000 profile_value = 0.243307
ENDPARAM
PARAM Color
B = 0.380000
G = 0.380000
R = 0.380000
UseDefault = 0
ENDPARAM
... # Parameters continue
ENDSTRUCTURE #PartInfo ranges from 1 to however many parts are needed, max ~100
ENDPACKET
checksum 0xa61d
</code></pre>
<p>I want the end product to look like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"name": "fileName.bpf",
"style": "502",
"last_modified": "",
"BuildInfo": {
"Version": "1128",
"build_height": {
"Units": "1",
"Value": "0.905512"
},
"build_time_s": "3:22:53",
... # parameters continue
},
"PartInfo-001": [
"id": "1" #incremented for each part
"BandOffset": {
"Category_mask": "65537",
"GUIName": "Stripe Offset",
"Order": "38",
"Type": "3",
"Units": "1",
"ZUnits": "1",
"profile_z": "0.000000",
"profile_value": "0.243307",
}
"Color": {
"B": "0.380000",
"G": "0.380000",
"R": "0.380000",
}
... # parameters continue
... # PartInfo repeats
]
}
</code></pre>
<p>The file is over 55,000 lines with too many parameters to manually create a dictionary out of them. I started writing a script to parse out just a subsection of the file for one PartInfo into a python dictionary, then save to a JSON file, but the script runs through none of the document.</p>
<pre class="lang-py prettyprint-override"><code># Python program to convert text
# file to JSON
import json
def main():
# the file to be converted to
# json format
filename = r'samplePartParameters.txt'
# dictionary where the lines from
# text will be stored
partParameters = {}
paramStart = []
paramEnd = []
# creating dictionary
count = 0
with open(filename, 'r') as file:
for currentLine in file.readlines():
if currentLine[0:4:1] == 'PARAM':
paramStart.append(count)
elif currentLine[0:2:1] == 'END':
paramEnd.append(count)
content = file.readlines()
numParam = len(paramEnd)
for paramNum in range(0, numParam-1, 1):
paramName = content[paramNum][6:]
partParameters[paramName] = {}
for propertyNum in range(paramStart[paramNum]+1, paramEnd[paramNum]-1, 1):
splitPos = content[paramNum].find("=")
propertyName = content[paramNum][:,splitPos-1]
propertyVal = content[paramNum][splitPos+1,:]
partParameters[paramName][propertyName] = propertyVal
# creating json file
# the JSON file is named as test1
out_file = open("test1.json", "w")
json.dump(partParameters, out_file, indent = 4, sort_keys = False)
out_file.close()
if __name__ == "__main__":
print("Running.")
main()
print("Done.")
</code></pre>
<p>Please let me know if you see an error in my code, or if you know of an easier way to go about this.</p>
<p>Thanks!</p>
|
<p>The following python module should help. Please see the example:</p>
<pre><code>!pip install ttp
from ttp import ttp
import json
data_to_parse = """
PACKET fileName.bpf
STYLE 502
last_modified 1651620170 # Tue May 03 19:22:50 2022
STRUCTURE BuildInfo
PARAM Version
Value = 1128
ENDPARAM
PARAM build_height
Units = 1 # Inches
Value = 0.905512
ENDPARAM
PARAM build_time_s
Value = "3:22:53"
ENDPARAM
... # Parameters continue
ENDSTRUCTURE #BuildInfo only called once
STRUCTURE PartInfo-001
PARAM BandOffset
Category_mask = 65537
GUIName = "Stripe_Offset"
Order = 38
Type = 3
Units = 1 # Inches
ZUnits = 1
profile_z= 0.000000 profile_value = 0.243307
ENDPARAM
PARAM Color
B = 0.380000
G = 0.380000
R = 0.380000
UseDefault = 0
ENDPARAM
... # Parameters continue
ENDSTRUCTURE #PartInfo ranges from 1 to however many parts are needed, max ~100
ENDPACKET
checksum 0xa61d
STRUCTURE PartInfo-002
PARAM BandOffset
Category_mask = 65538
GUIName = "Stripe_Offset"
Order = 39
Type = 4
Units = 2 # Inches
ZUnits = 1
profile_z= 0.100000 profile_value = 0.253307
ENDPARAM
PARAM Color
B = 0.390000
G = 0.390000
R = 0.390000
UseDefault = 0
ENDPARAM
... # Parameters continue
ENDSTRUCTURE #PartInfo ranges from 1 to however many parts are needed, max ~100
ENDPACKET
checksum 0xa61d
"""
ttp_template = """
<group name="MyData">
PACKET {{name}}
STYLE {{style}}
<group name="{{INFO}}">
STRUCTURE {{INFO}}
<group name="{{INFO_TYPE}}">
PARAM {{INFO_TYPE}}
Value = {{Value}}
Units = {{Units}} {{ignore}} {{ignore}}
</group>
</group>
<group name="{{INFO}}">
STRUCTURE {{INFO}}-{{id}}
<group name="{{INFO_TYPE}}">
PARAM {{INFO_TYPE}}
Value = {{Value}}
Units = {{Units}} {{ignore}} {{ignore}}
Category_mask = {{Category_mask}}
GUIName = {{GUIName}}
Order = {{Order}}
Type = {{Type}}
ZUnits = {{ZUnits}}
profile_z= {{profile_z}} profile_value = {{profile_value}}
B = {{B}}
G = {{G}}
R = {{R}}
</group>
</group>
</group>
"""
parser = ttp(data=data_to_parse, template=ttp_template)
parser.parse()
# print result in JSON format
results = parser.result(format='json')[0]
# converting str to json.
result = json.loads(results)
print(results)
</code></pre>
<p>See the output as:</p>
<pre><code>[
{
"MyData": {
"BuildInfo": {
"Version": {
"Value": "1128"
},
"build_height": {
"Units": "1",
"Value": "0.905512"
},
"build_time_s": {
"Value": "\"3:22:53\""
}
},
"PartInfo": [
{
"BandOffset": {
"Category_mask": "65537",
"GUIName": "\"Stripe_Offset\"",
"Order": "38",
"Type": "3",
"Units": "1",
"ZUnits": "1",
"profile_value": "0.243307",
"profile_z": "0.000000"
},
"Color": {
"B": "0.380000",
"G": "0.380000",
"R": "0.380000"
},
"id": "001"
},
{
"BandOffset": {
"Category_mask": "65538",
"GUIName": "\"Stripe_Offset\"",
"Order": "39",
"Type": "4",
"Units": "2",
"ZUnits": "1",
"profile_value": "0.253307",
"profile_z": "0.100000"
},
"Color": {
"B": "0.390000",
"G": "0.390000",
"R": "0.390000"
},
"id": "002"
}
],
"name": "fileName.bpf",
"style": "502"
}
}
]
</code></pre>
|
python|json|parsing
| 0 |
1,904,156 | 65,752,903 |
Combine flask form field with html input
|
<p>I have a FieldList form that allows users to enter in an origin and destination for routes they have travelled.</p>
<p>I am trying to add Google's autocomplete API to make it easier for users to enter in addresses into the fields.</p>
<p>forms.py</p>
<pre><code>from flask_wtf import Form
from wtforms import (
StringField,
FormField,
FieldList
)
from wtforms.validators import (
Length,
Optional
)
class RouteForm(Form):
origin = StringField([Optional(), Length(1, 256)])
destination = StringField([Optional(), Length(1, 256)])
class JourneysForm(Form):
ids = []
journeys = FieldList(FormField(RouteForm))
</code></pre>
<p>edit.html</p>
<pre><code>
{% import 'macros/form.html' as f with context %}
<tbody>
<tr>
{% for journey, route in zip(form.journeys, routes) %}
<td>
{% call f.location_search(journey.origin,
css_class='sm-margin-bottom hide-label') %}
{% endcall %}
</td>
</tr>
{% endfor %}
</tbody>
<div class="col-md-6">
<button type="submit" class="btn btn-primary btn-block">
'Save'
</button>
</div>
</code></pre>
<p>macros/forms.html</p>
<pre><code>
<head>
<script src="https://maps.googleapis.com/maps/api/js?libraries=places&key=<KEY>&callback=initMap" async defer></script>
<script>
function initMap() {
var input = document.getElementById('searchInput');
var autocomplete = new google.maps.places.Autocomplete(input);
}
</script>
</head>
{# Render a form for searching a location. #}
{%- macro location_search(form, css_class='') -%}
<input type="text" class="form-control"
id="searchInput" placeholder="Enter location">
{{ form(class=css_class, **kwargs) }}
{{ caller () }}
{%- endmacro -%}
</code></pre>
<p>routes.py</p>
<pre><code>@route('/edit', methods=['GET', 'POST'])
def routes_edit():
routes = get_routes()
journeys_form = JourneysForm()
if journeys_form.validate_on_submit():
for i, entry in enumerate(journeys_form.journeys.entries):
origin = entry.data['origin']
</code></pre>
<p>However, this renders two fields. One which contains the Google autocomplete input, but does not submit the value (top). And another field which does not have the Google autocomplete input but submits the value to the db via routes.py (bottom).</p>
<p><a href="https://i.stack.imgur.com/sbgP8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sbgP8.jpg" alt="form" /></a></p>
<p>Is it possible to combine this into a single field that both contains the Google autocomplete and submits the input value to the db?</p>
|
<p>WTForms actually renders the input element itself:</p>
<pre><code><input class="form-control" id="journeys-0-origin" name="journeys-0-origin" type="text" value="" placeholder="Enter a location" autocomplete="off">
</code></pre>
<p>Therefore I was unnecessarily duplicating the input element. To get Google autocomplete to work, I simply had to pass the input's id into my js function:</p>
<pre><code> function initMap() {
var input = document.getElementById('journeys-0-origin');
var autocomplete = new google.maps.places.Autocomplete(input);
}
</code></pre>
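<p>Note that with a <code>FieldList</code> each rendered row gets its own id (<code>journeys-0-origin</code>, <code>journeys-1-origin</code>, and so on), so to wire up every origin field the same <code>Autocomplete</code> call has to be made once per input id.</p>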
|
python|html|flask|flask-wtforms
| 0 |
1,904,157 | 50,904,250 |
Pandas export text column as a single, unescaped, text file
|
<p>I would like to export the entire concatenation of a single dataframe column, to a file, for use as one big text blob, for a downstream unsupervised machine learning task. (give or take a separator character between the strings).</p>
<p>It looks like the pandas csv writer is not built for this special case, it insists on escaping characters, and it actually should.</p>
<pre><code>df.to_csv('output.txt', columns = ['tokens'], header=False, index=False, quoting=csv.QUOTE_NONE)
</code></pre>
<blockquote>
<p>_csv.Error: need to escape, but no escapechar set</p>
</blockquote>
<p>This is very understandable, as the csv packages scope their methods for symmetry, and not escaping means a one way street. </p>
<p>How would you <em>efficiently</em> spit out the concatenation of a single dataframe column's values, given that the dataframe is at least one million rows?</p>
|
<p>You're going to have issues with quoting as long as you're using a CSV writer to write raw text. Why not iterate and write to a text file directly?</p>
<pre><code>with open('output.txt', 'w') as f:
for text in df['tokens'].tolist():
f.write(text + '\n')
</code></pre>
<p>Or, in a single line,</p>
<pre><code>with open('output.txt', 'w') as f:
f.write(df['tokens'].str.cat(sep='\n'))
</code></pre>
|
python|pandas
| 15 |
1,904,158 | 50,951,027 |
select index value from groupby on a pandas dataframe in python
|
<p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame({'place' : ['A', 'B', 'C', 'D', 'E', 'F'],
'population': [10 , 20, 30, 15, 25, 35],
'region': ['I', 'II', 'III', 'I', 'II', 'III']})
</code></pre>
<p>And it looks like this:</p>
<pre><code> place population region
0 A 10 I
1 B 20 II
2 C 30 III
3 D 15 I
4 E 25 II
5 F 35 III
</code></pre>
<p>I would like to select the place with the smallest population from the region with the highest population.</p>
<pre><code>df.groupby('region').population.sum()
</code></pre>
<p>Returns:</p>
<pre><code>region
I 25
II 45
III 65
Name: population, dtype: int64
</code></pre>
<p>But I have no clue how to proceed from here (using .groupby / .loc / .iloc)</p>
<p>Any suggestion?</p>
|
<p>First add a column for region population:</p>
<pre><code>df['region_pop'] = df.groupby('region')['population'].transform(sum)
</code></pre>
<p>Then sort your dataframe and extract the first row:</p>
<pre><code>res = df.sort_values(['region_pop', 'population'], ascending=[False, True])\
.head(1)
</code></pre>
<p>Result:</p>
<pre><code> place population region region_pop
2 C 30 III 65
</code></pre>
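<p>Equivalently, a sketch without the helper column: pick the heaviest region first, then the lightest place inside it:</p>
<pre><code>top_region = df.groupby('region')['population'].sum().idxmax()
res = df[df['region'] == top_region].nsmallest(1, 'population')
print(res)
#   place  population region
# 2     C          30    III
</code></pre>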
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 5 |
1,904,159 | 50,783,087 |
\0 values in Python dataframes
|
<p>I have a dataframe column called <code>amount</code> in python containing <code>null</code> and <code>"\0"</code> values. I want to convert these null and <code>\0</code> values into <code>\u0000</code>.
The string function replace worked for the null values, but for the \0 values it's not working. Here is an example of my data.</p>
<pre>
amount
null
"\0"
"\0"
"\0"
"\0"
"\0"
</pre>
<p>I checked the datatype using df['amount'].dtypes and it is string. Why can't I replace the value?
I tried using str.replace("\0", "\u0000") but it's not working.</p>
|
<p>This works for me:</p>
<pre><code>df['amount'] = df['amount'].str.replace(r"\\0", r"\\u0000", regex=True)
</code></pre>
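<p>A quick check on a throwaway frame (assuming the cells really contain the two literal characters backslash and zero):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'amount': [r'\0', r'\0', 'null']})
df['amount'] = df['amount'].str.replace(r'\\0', r'\\u0000', regex=True)
print(df)
#    amount
# 0  \u0000
# 1  \u0000
# 2    null
</code></pre>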
|
python|string|dataframe|null|utf
| 0 |
1,904,160 | 50,436,596 |
Django UserAdmin's add_fieldsets?
|
<p>I'm following a tutorial on making a custom User for authentication purposes. The tutorial used a certain property <code>add_fieldsets</code> within UserAdmin. What does this mean? I can't seem to find any documentation on this.</p>
<p>Here is the snippet:</p>
<pre><code>class UserAdmin(UserAdmin):
"""Define admin model for custom User model with no email field."""
fieldsets = (
(None, {'fields': ('email', 'password')}),
('Personal info', {'fields': ('first_name', 'last_name')}),
('Permissions', {'fields': ('is_active', 'is_staff', 'is_superuser', 'groups', 'user_permissions')}),
('Important dates', {'fields': ('last_login', 'date_joined')}),
('Contact info', {'fields': ('contact_no',)}),)
add_fieldsets = (
(None, {
'classes': ('wide',),
'fields': ('email', 'password1', 'password2'),}),)
list_display = ('email', 'first_name', 'last_name', 'is_staff')
search_fields = ('email', 'first_name', 'last_name')
ordering = ('email',)
</code></pre>
<p>Here is the tutorial I was following: <a href="https://www.fomfus.com/articles/how-to-use-email-as-username-for-django-authentication-removing-the-username" rel="noreferrer">How to use email as username for Django authentication (removing the username)</a></p>
|
<p>The <code>add_fieldsets</code> class variable is used to define the fields that will be displayed on the create user page.</p>
<p>Unfortunately it's not well documented, and the closest thing I found to documentation was a comment in the code example on <a href="https://docs.djangoproject.com/en/2.1/topics/auth/customizing/" rel="noreferrer">https://docs.djangoproject.com/en/2.1/topics/auth/customizing/</a> (search page for <code>add_fieldsets</code>).</p>
<p>The <code>classes</code> key sets any custom CSS classes we want to apply to the form section.</p>
<p>The <code>fields</code> key sets the fields you wish to display in your form.</p>
<p>In your example, the create page will allow you to set an <code>email</code>, <code>password1</code> and <code>password2</code>.</p>
|
python|django|web|django-models
| 20 |
1,904,161 | 50,389,840 |
Python numpy indexing confusion
|
<p>I'm new in <code>python</code>, I was looking into a code which is similar to as follows,</p>
<pre><code>import numpy as np
a = np.ones([1,1,5,5], dtype='int64')
b = np.ones([11], dtype='float64')
x = b[a]
print (x.shape)
# (1, 1, 5, 5)
</code></pre>
<p>I looked into the python <code>numpy</code> documentation I didn't find anything related to such case. I'm not sure what's going on here and I don't know where to look.</p>
<p><strong>Edit</strong>
The actual code</p>
<pre><code>def gausslabel(length=180, stride=2):
gaussian_pdf = signal.gaussian(length+1, 3)
label = np.reshape(np.arange(stride/2, length, stride), [1,1,-1,1])
y = np.reshape(np.arange(stride/2, length, stride), [1,1,1,-1])
delta = np.array(np.abs(label - y), dtype=int)
delta = np.minimum(delta, length-delta)+length/2
return gaussian_pdf[delta]
</code></pre>
|
<p>I guess that this code is trying to demonstrate that if you index an array with an array, the result is an array with the same shape as the indexing array (in this case <code>a</code>) and not the indexed array (i.e. <code>b</code>)</p>
<p>But it's confusing because <code>b</code> is full of <code>1</code>s. Rather try this with a <code>b</code> full of different numbers:</p>
<pre><code>>>> a = np.ones([1,1,5,5], dtype='int64')
>>> b = np.arange(11) + 3
>>> b
array([ 3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13])
>>> b[a]
array([[[[4, 4, 4, 4, 4],
[4, 4, 4, 4, 4],
[4, 4, 4, 4, 4],
[4, 4, 4, 4, 4],
[4, 4, 4, 4, 4]]]])
</code></pre>
<p>because <code>a</code> is an array of <code>1</code>s, the only element of <code>b</code> that is indexed is <code>b[1]</code> which equals <code>4</code>. The shape of the result though is the shape of <code>a</code>, the array used as the index.</p>
|
python|numpy
| 1 |
1,904,162 | 56,705,075 |
Is there a shorthand for naming and printing variables values, not names, within a loop?
|
<p>I want to initialize and assign variables in a single loop, like I used to be able to in Stata, using a static element + an iterable element. I've edited the examples for clarity. The reason I want this is because it allows me to perform an operation on variables that have different contents in one loop. I can't just print a fixed string followed by something else. </p>
<p>Kind of like (pseudocode):</p>
<pre><code>for each `x' in 1/10:
print var`x'
</code></pre>
<p>And this would print, not "var1, var2, var3, etc." but the actual values of contained within var1, var2, and var3, so long as they had been previously defined. </p>
<p>Or you could even do things like this, again more pseudocode:</p>
<pre><code>foreach `x' in 1/10:
thing`x' = `x'
</code></pre>
<p>This would both initialize the variable and put a value into it, in one step, in the same loop. Is there a way to do this sort of thing in python, or is it, as I've read elsewhere, "unpythonlike" and more or less forbidden?</p>
<p>I don't have a pressing problem at the moment. In the worst case, I can just link together long chains of print statements. But it bothers me that I can't just print the results I want by calling a bunch of serial variables in a loop. I've tried variations of </p>
<pre><code>Books1 = Dogs
Books2 = Cats
Books3 = Lemurs
for x in range(10):
for y in [books]:
print y, x
</code></pre>
<p>But that just outputs something like</p>
<pre><code>Books1
Books2
Books3
</code></pre>
<p>...</p>
<p>When I want:</p>
<pre><code>
Dogs
Cats
Lemurs
</code></pre>
<p>Would appreciate if someone could point me the right direction!</p>
|
<p>You don't need to iterate over <code>books</code>:</p>
<pre><code>for idx in range(10):
print("Author's name", idx)
>> Author's name 0
>> Author's name 1
>> Author's name 2
>> Author's name 3
>> Author's name 4
>> Author's name 5
>> Author's name 6
>> Author's name 7
>> Author's name 8
>> Author's name 9
</code></pre>
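<p>If the goal is really to look up serially named values like the <code>Books1</code>/<code>Books2</code>/<code>Books3</code> in the question, the idiomatic Python pattern is to store them in a list or dict and iterate over that, instead of synthesizing variable names. A minimal sketch:</p>
<pre><code>books = {1: "Dogs", 2: "Cats", 3: "Lemurs"}  # or simply a list

for x in range(1, 4):
    print(books[x])
# Dogs
# Cats
# Lemurs
</code></pre>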
|
python
| 0 |
1,904,163 | 45,244,854 |
Identifying special characters in password verification in python
|
<p>I am working on an assignment for password validation where the program keeps asking the user for a valid password until one is given. I am having trouble checking the input string for special characters. Currently the program accepts the password even if there are no special characters. Also, I would like to implement a feature that terminates the loop after 3 unsuccessful attempts, but I am not sure which loop to implement the count in.
Here is my code:</p>
<pre><code>import re
specialCharacters = ['$', '#', '@', '!', '*']
def passwordValidation():
while True:
password = input("Please enter a password: ")
if len(password) < 6:
print("Your password must be at least 6 characters.")
elif re.search('[0-9]',password) is None:
print("Your password must have at least 1 number")
elif re.search('[A-Z]',password) is None:
print("Your password must have at least 1 uppercase letter.")
elif re.search('specialCharacters',password) is None:
print("Your password must have at least 1 special character ($, #, @, !, *)")
else:
print("Congratulations! Your password is valid.")
break
passwordValidation()
</code></pre>
|
<p>There is no need to use regular expression for something so simple. How about:</p>
<pre><code>elif not any(c in specialCharacters for c in password):
</code></pre>
<p>or</p>
<pre><code>specialCharacters = set('$#@!*')
...
elif not specialCharacters.intersection(password):
</code></pre>
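<p>For the second part of the question (terminating after three unsuccessful attempts), one minimal sketch is to swap the <code>while True</code> for a bounded <code>for</code> loop, reusing the question's checks:</p>
<pre><code>import re

specialCharacters = set('$#@!*')

def passwordValidation(max_attempts=3):
    for attempt in range(max_attempts):
        password = input("Please enter a password: ")
        if len(password) < 6:
            print("Your password must be at least 6 characters.")
        elif re.search('[0-9]', password) is None:
            print("Your password must have at least 1 number.")
        elif re.search('[A-Z]', password) is None:
            print("Your password must have at least 1 uppercase letter.")
        elif not specialCharacters.intersection(password):
            print("Your password must have at least 1 special character ($, #, @, !, *)")
        else:
            print("Congratulations! Your password is valid.")
            return password
    print("Too many failed attempts.")
    return None
</code></pre>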
|
python|validation|input|passwords
| 1 |
1,904,164 | 44,879,479 |
Unresolved reference when calling a global variable?
|
<p>I intended to call two global variables ('head' and 'tail') in the function 'In_queue', and turned out to call 'head' successfully but 'tail' not. The error is:</p>
<pre><code>UnboundLocalError: local variable 'tail' referenced before assignment.
</code></pre>
<p>While in another function 'Out_queue', the two variables both called successfully.</p>
<p>The code:</p>
<pre><code>tail = NODE(VALUE())
head = NODE(VALUE())
def In_queue():
try:
node = Create_node(*(Get_value()))
except:
print("OVERFLOW: No room availible!\n")
exit(0)
if not head.nextprt or not tail.nextprt:
tail.nextprt = head.nextprt = node
else:
tail.nextprt = node
tail = node
return None
def Out_queue():
if head.nextprt == tail.nextprt:
if not head.nextprt:
print("UNDERFLOW: the queue is empty!\n")
exit(0)
else:
node = head.nextprt
head.nextprt = tail.nextprt = None
return node.value
else:
node = head.nextprt
head.nextprt = node.nextprt
return node.value
</code></pre>
|
<p>Ok, so why did head work but tail didn't? As others have mentioned in the comments, assigning a value to tail caused it to be treated as a local variable. In the case of head, you didn't assign anything to it, so the interpreter looked it up in the local and then the global scope. To make sure both <code>tail</code> and <code>head</code> work as globals you should use <code>global tail, head</code>. Like this:</p>
<pre><code>def In_queue():
global tail, head
try:
node = Create_node(*(Get_value()))
except:
print("OVERFLOW: No room availible!\n")
exit(0)
if not head.nextprt or not tail.nextprt:
tail.nextprt = head.nextprt = node
else:
tail.nextprt = node
tail = node
return None
</code></pre>
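<p>A tiny illustration of the rule: the mere presence of an assignment anywhere in a function body makes that name local for the whole function, even on lines above the assignment:</p>
<pre><code>x = 1

def read_only():
    print(x)   # fine: no assignment in this function, so x is found in the global scope

def assigns():
    print(x)   # UnboundLocalError: x is local here...
    x = 2      # ...because of this assignment below it

read_only()
assigns()
</code></pre>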
|
python|python-3.x
| 15 |
1,904,165 | 45,195,738 |
Serving Django and Channels with Daphne
|
<p>I have a Django project on Google Compute Engine. Here is the structure of my Django project.</p>
<pre><code>example_channels
├── db.sqlite3
├── example
│ ├── admin.py
│ ├── apps.py
│ ├── consumers.py
│ ├── __init__.py
│ ├── migrations
│ │
│ ├── models.py
│ ├── templates
│ │ └── example
│ │ ├── _base.html
│ │ └── user_list.html
│ ├── tests.py
│ ├── urls.py
│ └── views.py
├── example_channels
│ ├── asgi.py
│ ├── __init__.py
│ ├── routing.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
└── manage.py
</code></pre>
<p>Following the tutorials, I made an <code>asgi.py</code>:</p>
<pre><code>import os
from channels.asgi import get_channel_layer
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "my_project.settings")
channel_layer = get_channel_layer()
</code></pre>
<p>I use <code>asgi_redis</code> as a back-end. The settings file looks like this:</p>
<pre><code>CHANNEL_LAYERS = {
'default': {
'BACKEND': 'asgi_redis.RedisChannelLayer',
'CONFIG': {
'hosts': [('localhost', 6379)],
},
'ROUTING': 'example_channels.routing.channel_routing',
}
}
</code></pre>
<p>I then try to start the server. I run <code>python manage.py runworker &</code> and get:</p>
<pre><code>~/websockets_prototype/example_channels$ 2017-07-19 16:04:19,204 - INFO - runworker - Usi
ng single-threaded worker.
2017-07-19 16:04:19,204 - INFO - runworker - Running worker against channel layer default (asgi_redis.core.RedisCha
nnelLayer)
2017-07-19 16:04:19,205 - INFO - worker - Listening on channels http.request, websocket.connect, websocket.disconne
ct, websocket.receive
</code></pre>
<p>And then run Daphne:</p>
<pre><code>~/websockets_prototype/example_channels$ 2017-07-19 16:05:28,619 INFO Starting server
at tcp:port=80:interface=0.0.0.0, channel layer example_channels.asgi:channel_layer.
2017-07-19 16:05:28,620 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
2017-07-19 16:05:28,620 INFO Using busy-loop synchronous mode on channel layer
2017-07-19 16:05:28,620 INFO Listening on endpoint tcp:port=80:interface=0.0.0.0
</code></pre>
<p>I then start sending requests to the server, but I get <code>This site can’t be reached</code> error.</p>
|
<p>Try running Daphne on port 8000, and review your <code>ALLOWED_HOSTS</code> in settings.py, e.g. <code>ALLOWED_HOSTS = ["*"]</code>.</p>
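<p>As a sketch, with the channel layer module taken from the question's own Daphne log:</p>
<pre><code># settings.py
ALLOWED_HOSTS = ["*"]
</code></pre>
<pre><code>daphne -b 0.0.0.0 -p 8000 example_channels.asgi:channel_layer
</code></pre>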
|
python|django|django-channels|daphne
| 0 |
1,904,166 | 45,129,222 |
How can I plot a deque whenever it gets a value, using pyqtgraph and pypubsub?
|
<p>I'm trying to make a program in which an instance sends values to another instance and it plots them continuously. I programmed this using pypubsub to send values from one instance to the other. The other instance gets values and store them into a deque, and it plots the deque whenever it is updates. </p>
<p>I think the instances communicate with one another well and I can see the deque is updated every second as I planned, however, the problem is that the graph doesn't show the deque's values whenever it is updated, but rather it shows the values once the whole updating has finished. I'd like to know how I can plot the deque whenever it is updated.</p>
<pre><code>from pyqtgraph.Qt import QtGui, QtCore
import pyqtgraph as pg
from collections import deque
from pubsub import pub
import time
class Plotter:
def __init__(self):
self.deq = deque()
self.pw = pg.GraphicsView()
self.pw.show()
self.mainLayout = pg.GraphicsLayout()
self.pw.setCentralItem(self.mainLayout)
self.p1 = pg.PlotItem()
self.p1.setClipToView=True
self.curve_1 = self.p1.plot(pen=None, symbol='o', symbolPen=None, symbolSize=10, symbolBrush=(102, 000, 000, 255))
self.mainLayout.addItem(self.p1, row = 0, col=0, rowspan=2)
def plot(self, msg):
print('Plotter received: ', msg)
self.deq.append(msg)
print(self.deq)
self.curve_1.setData(self.deq)
class Sender:
def __init__(self):
self.list01 = [1,2,3,4,5] # A list of values that will be sent through pub.sendMessage
def send(self):
for i in range(len(self.list01)):
pub.sendMessage('update', msg = self.list01[i] )
time.sleep(1)
plotterObj = Plotter()
senderObj = Sender()
pub.subscribe(plotterObj.plot, 'update')
senderObj.send()
</code></pre>
|
<p>Looking at the sendMessage and subscribe calls, all looks good. But I notice you don't have a QApplication instance and an event loop. Create the app, and call exec() at the end so it enters the event loop. Rendering will occur then.</p>
<pre><code>app = QtGui.QApplication([])
plotterObj = Plotter()
senderObj = Sender()
pub.subscribe(plotterObj.plot, 'update')
senderObj.send()
app.exec()
</code></pre>
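<p>Note that <code>senderObj.send()</code> still runs to completion (sleeping in between) before <code>exec()</code> starts the event loop, so the curve will still only render at the end. A hedged sketch of one fix is to drive the sends from a <code>QTimer</code> inside the event loop instead of <code>time.sleep</code>:</p>
<pre><code>from pyqtgraph.Qt import QtCore

values = iter(senderObj.list01)
timer = QtCore.QTimer()

def tick():
    try:
        pub.sendMessage('update', msg=next(values))  # one point per tick
    except StopIteration:
        timer.stop()

timer.timeout.connect(tick)
timer.start(1000)  # fire once a second, inside the Qt event loop
app.exec()
</code></pre>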
|
python|publish-subscribe|pyqtgraph|pypubsub
| 0 |
1,904,167 | 64,623,564 |
How to remove all layers *except* some specific ones in pyqgis?
|
<p>I need to load several vector layers for my QGIS project, so that I test every function of my script in every one of them. However, in the end I want to work only with one or two layers of interest and discard the others, so I would like to do that automatically.</p>
<p>I did that successfully with some layers, but there is one layer that is causing me problems and I haven't figured out why.</p>
<p>Here's some code:</p>
<p>Loading layers (these shouldn't be the problem <em>almost</em> for sure):</p>
<pre><code>a2 = iface.addVectorLayer(path + ardida2, "", "ogr")
if not a2:
print("Layer failed to load!")
a3 = iface.addVectorLayer(path + ardida3, "", "ogr")
if not a3:
print("Layer failed to load!")
</code></pre>
<p>Now this is the function I've created to delete all of the loaded layers except the ones that I want to work with. The <code>prints</code> where only because I was trying to understand the problem.</p>
<pre><code>def RemoveAllLayersExcept(*layers):
layer_ids = []
for l in layers:
layer_ids.append(l.id())
print('layer_ids', layer_ids)
for lyr in QgsProject.instance().mapLayers():
print(lyr)
if lyr not in layer_ids:
print('not')
QgsProject.instance().removeMapLayer(lyr)
else:
pass
</code></pre>
<p>Then I created a new layer - the one that is causing me problems. I need to edit this layer in an iterative process later. I followed a step-by-step example in OpenSourceOptions tutorial titled <a href="https://opensourceoptions.com/blog/pyqgis-create-a-shapefile/" rel="nofollow noreferrer">PyQGIS: Create a Shapefile</a>:</p>
<pre><code># OPENSOURCEOPTIONS TUTORIAL - PYQGIS: Create a Shapefile
# create fields
layerFields = QgsFields()
layerFields.append(QgsField('ID', QVariant.Int))
layerFields.append(QgsField('Value', QVariant.Double))
layerFields.append(QgsField('Name', QVariant.String))
# Now define the file path for the new shapefile
# Note: the CRS used here is NAD 1983 UTM Zone 11 N
fn = 'D:/Sara/Trabalho/QGIS/pnogas/fireball_points.shp'
writer = QgsVectorFileWriter(fn, 'UTF-8', layerFields, QgsWkbTypes.Point,QgsCoordinateReferenceSystem('EPSG:26912'), 'ESRI Shapefile')
# For each feature we need to set the geometry (in this case a point)
# set the attribute values, then add it to the vector layer.
feat = QgsFeature() # create an empty QgsFeature()
feat.setGeometry(QgsGeometry.fromPointXY(QgsPointXY(cx14, cy14))) # create a point and use it to set the feature geometry
feat.setAttributes([1, 1.1, 'one']) # set the attribute values
writer.addFeature(feat) # add the feature to the layer
layer_fireball = iface.addVectorLayer(fn, '', 'ogr')
if not layer_fireball:
print("Layer failed to load!")
del(writer)
</code></pre>
<p>Then I remove the layers don't interest me:</p>
<pre><code>RemoveAllLayersExcept(layer, layer_fireball)
</code></pre>
<p>And this is it. When I run the program for the first time, nothing happens. Here's what I get:</p>
<pre><code>layer_ids ['COS2018_ardida2018_3_clip_cbf56f4b_e668_4c2e_9259_7d22d5943097', 'fireball_points_f92b32e0_f8bf_42c1_95e6_b317ddf6ee84']
COS2018_ardida2018_2_clip_45b241c4_fb9b_4654_9916_5ff08514c559
not
COS2018_ardida2018_3_clip_cbf56f4b_e668_4c2e_9259_7d22d5943097
fireball_points_f92b32e0_f8bf_42c1_95e6_b317ddf6ee84
</code></pre>
<p>which is in accordance to what I was expecting. But in a second and n-th times, I get:</p>
<pre><code>Layer failed to load!
Traceback (most recent call last):
File "C:\PROGRA~1\QGIS3~1.10\apps\Python37\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "<string>", line 676, in <module>
File "<string>", line 104, in RemoveAllLayersExcept
AttributeError: 'NoneType' object has no attribute 'id'
</code></pre>
<p>Can you detect where the problem is? Why this error? And why does it happen only from the 2nd run on?</p>
<p>Thanks!</p>
|
<p>Change the <code>RemoveAllLayersExcept</code> method as below:</p>
<pre class="lang-py prettyprint-override"><code>def RemoveAllLayersExcept(*keep_layers):
layers = QgsProject.instance().mapLayers().values()
will_be_deleted = [l for l in layers if l not in keep_layers]
for layer in will_be_deleted:
QgsProject.instance().removeMapLayer(layer)
</code></pre>
|
python|python-3.x|geometry|layer|pyqgis
| 4 |
1,904,168 | 61,361,954 |
Invalid version specification on Shiny app
|
<p>When I try to deploy my (reticulate-powered) Shiny app to <a href="http://shinyapps.io/" rel="nofollow noreferrer">shinyapps.io</a>, I get the following error:</p>
<pre><code>Error in value[[3L]](cond) : invalid version specification ‘20.1b1’
Calls: local ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
Execution halted
</code></pre>
<p>While it is not explicit, I suppose the error is referring to the pip version, which I never explicitly specify.</p>
<p>This is the part of the code that precedes the ui and server functions:</p>
<pre class="lang-r prettyprint-override"><code>library(reticulate)
library(shiny)
virtualenv_create(envname = "elicit", python="python3")
virtualenv_install("elicit", packages = c('numpy', 'Gpy'))
use_virtualenv("elicit", required = TRUE)
</code></pre>
<p>When I comment this out together with any Python-related code from the UI and the server, everything works fine.</p>
<p>How can I set the valid version the site is requesting? I see that <code>reticulate::virtualenv</code> has a <code>pip_options</code> argument, but I can't find useful documentation about how to use it.</p>
<p>I am also not very proficient at setting virtual and conda environments, so I could very well be missing some basic step.</p>
<h1>Update</h1>
<p>I noticed that if I switch the order of the <code>use_virtualenv</code> and the <code>virtualenv_install</code> calls I get a different error:</p>
<pre><code>ERROR: The requested version of Python
('~/.virtualenvs/elicit/bin/python') cannot be used, as another version
of Python ('/usr/bin/python3') has already been initialized. Please
restart the R session if you need to attach reticulate to a different
version of Python.
Error in value[[3L]](cond) :
failed to initialize requested version of Python
Calls: local ... tryCatch -> tryCatchList -> tryCatchOne -> <Anonymous>
Execution halted
</code></pre>
<p>I've tried everything I could think of but I can't get that fixed either.</p>
|
<p>I actually found a solution for this issue. Since the bugged version of pip gets installed as soon as you create the virtualenv, I forcibly uninstalled it and then installed the version that worked when I built my app. Here is the code that I used:</p>
<pre><code>virtualenv_create(envname = "python_environment", python = "python3")
virtualenv_remove(envname = "python_environment", packages = "pip")
virtualenv_install(envname = "python_environment", packages = c("pip==19.0.3","numpy","nltk","torch"), ignore_installed = TRUE)
use_virtualenv("python_environment", required = TRUE)
</code></pre>
|
python|r|shiny|pip|shinyapps
| 2 |
1,904,169 | 61,375,362 |
Creating a new column based on the key of a dictionary?
|
<p>I am trying to create a new column in a dataframe within a for loop of dictionary items that uses a string literal and the key, but it throws a "ValueError: cannot set a frame with no defined index and a scalar" error message. </p>
<h1>Dictionary definition for exp categories</h1>
<pre><code> d = {'Travel & Entertainment': [1,2,3,4,5,6,7,8,9,10,11], 'Office supplies & Expenses': [13,14,15,16,17],
'Professional Fees':[19,20,21,22,23], 'Fees & Assessments':[25,26,27], 'IT Expenses':[29],
'Bad Debt Expense':[31],'Miscellaneous expenses': [33,34,35,36,37],'Marketing Expenses':[40,41,42],
'Payroll & Related Expenses': [45,46,47,48,49,50,51,52,53,54,55,56], 'Total Utilities':[59,60],
'Total Equipment Maint, & Rental Expense': [63,64,65,66,67,68],'Total Mill Expense':[70,71,72,73,74,75,76,77],
'Total Taxes':[80,81],'Total Insurance Expense':[83,84,85],'Incentive Compensation':[88],
'Strategic Initiative':[89]}
</code></pre>
<h1>Creating a new dataframe based on a master dataframe</h1>
<pre><code>mcon = VA.loc[:,['Expense', 'Mgrl', 'Exp Category', 'Parent Category']]
mcon.loc[:,'Variance Type'] = ['Unfavorable' if x < 0 else 'favorable' for x in mcon['Mgrl']]
mcon.loc[:,'Business Unit'] = 'Managerial Consolidation'
mcon = mcon[['Business Unit', 'Exp Category','Parent Category', 'Expense', 'Mgrl', 'Variance Type']]
mcon.rename(columns={'Mgrl':'Variance'}, inplace=True)
</code></pre>
<h1>Creating a new dataframe that will be written to excel eventually</h1>
<pre><code>a1 = pd.DataFrame()
for key, value in d.items():
umconm = mcon.iloc[value].query('Variance < 0').nsmallest(5, 'Variance')
fmconm = mcon.iloc[value].query('Variance > 0').nlargest(5, 'Variance')
if umconm.empty == False or fmconm.empty == False:
a1 = pd.concat([a1,umconm,fmconm], ignore_index = True)
else:
continue
a1.to_csv('example.csv', index = False)
</code></pre>
<h1>Output looks like this</h1>
<p><a href="https://i.stack.imgur.com/ZnBTf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZnBTf.png" alt="enter image description here"></a></p>
<h1>I am trying to add a new column that says Higher/Lower budget than {key} where key stands for the expense type using the below code</h1>
<pre><code>for key, value in d.items():
umconm = mcon.iloc[value].query('Variance < 0').nsmallest(5, 'Variance')
umconm.loc[:,'Explanation'] = f'Lower than budgeted {key}'
fmconm = mcon.iloc[value].query('Variance > 0').nlargest(5, 'Variance')
fmconm.loc[:,'Explanation'] = f'Higher than budgeted {key}'
if umconm.empty == False or fmconm.empty == False:
a1 = pd.concat([a1,umconm,fmconm], ignore_index = True)
else:
continue
</code></pre>
<p>but using the above string literal gives me the error message "ValueError: cannot set a frame with no defined index and a scalar"</p>
<p>I would really appreciate any help to either correct this or find a different solution for adding this field to my dataframe. Thanks in advance! </p>
|
<p>This error occurs because this line</p>
<pre><code>umconm = mcon.iloc[value].query('Variance < 0').nsmallest(5, 'Variance')
</code></pre>
<p>will sometimes produce an empty dataframe with no defined index. Instead, use plain column assignment (not <code>.loc</code>) when you want to set the column:</p>
<pre><code>umconm['Explanation'] = f'Lower than budgeted {key}'
</code></pre>
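<p>A minimal reproduction of the difference, as a sketch (on the pandas versions that raise this error, <code>.loc</code> with a scalar fails on an index-less frame while plain assignment does not):</p>
<pre><code>import pandas as pd

empty = pd.DataFrame()                  # no index defined yet
empty['Explanation'] = 'text'           # fine: creates an empty column
empty.loc[:, 'Explanation2'] = 'text'   # ValueError: cannot set a frame with no defined index and a scalar
</code></pre>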
|
python|pandas|string-literals|data-wrangling
| 2 |
1,904,170 | 56,339,421 |
Find elemtnt*s* return array with same element
|
<p>I've been stuck on this for 2 days.</p>
<p>Trying to get all Text(s) from span(s), which appear in many div(s).</p>
<p>All the div(s) looks pretty much the same :</p>
<pre><code><div class="_3_7SH _3DFk6 message-in">
<div class="Tkt2p">
<div class="copyable-text" data-pre-plain-text="[10:26 AM, 5/28/2019] יוסף צדוק: ">
<div class="_3zb-j ZhF0n">
<span dir="rtl" class="XELVh selectable-text invisible-space copyable-text">TEXT TO COPY IS ME</span></div></div>
<div class="_2f-RV"><div class="_1DZAH">
<span class="_1ORuP">
</span><span class="_3EFt_">10:26 AM</span></div></div></div><span></span></div>
</code></pre>
<p>This is how tried to find ALL "message-in" elements :</p>
<p><code>in_mesg_arr = driver.find_elements_by_xpath("//div[contains(@class, 'message-in')]")</code></p>
<p>I got back the length of the array: <code>11</code></p>
<p>Then, tried to get all text from span(s):</p>
<pre><code>for index in in_mesg_arr:
last_msg = last_msg + str(index.find_element_by_xpath(
"//span[contains(@class,'selectable-text invisible-space copyable-text')]").text)
</code></pre>
<p>HOWEVER, I get back the same text (the same element over and over!).</p>
<p>print(last_msg) = bla bla bla bla bla bla bla bla bla bla bla</p>
<p>Will be glad to get some directions.</p>
<p>FULL HTML:</p>
|
<pre><code>for index in in_mesg_arr:
    last_msg = last_msg + str(index.find_element_by_xpath(
        "//span[contains(@class,'selectable-text invisible-space copyable-text')]").text)
</code></pre>
<p>This code will always return the 1st element, because the XPath searches for the <code>span</code> element anywhere in the <code>DOM</code>.</p>
<p>The <code>XPath</code> expression in the loop has to start with a <code>dot</code> to be context-specific. Use either of the following snippets.</p>
<pre><code> in_mesg_arr = driver.find_elements_by_xpath("//div[contains(@class, 'message-in')]")
for item in in_mesg_arr:
spanele=item.find_element_by_xpath(".//span[contains(@class,'selectable-text invisible-space copyable-text')]")
print(spanele.text)
</code></pre>
<p>OR</p>
<pre><code>in_mesg_arr = driver.find_elements_by_xpath("//div[contains(@class, 'message-in')]")
for item in range(len(in_mesg_arr)):
spanele=in_mesg_arr[item].find_element_by_xpath(".//span[contains(@class,'selectable-text invisible-space copyable-text')]")
print(spanele.text)
</code></pre>
<p>Let me know how it goes.</p>
|
python|selenium|selenium-webdriver
| 2 |
1,904,171 | 56,342,930 |
A pythonic way to insert a space before capital letters but not insert space between an Abbreviation
|
<p>I've got a file whose format I'm altering via a python script. I have several camel cased strings in this file where I just want to insert a single space before the capital letter - so "WordWordWord" becomes "Word Word Word" - but I also have some abbreviations as well, like in the text "General Manager or VP".</p>
<p>I found an answer from David Underhill in this post:</p>
<p><a href="https://stackoverflow.com/questions/199059/a-pythonic-way-to-insert-a-space-before-capital-letters/46760056#46760056">A pythonic way to insert a space before capital letters</a></p>
<p>While this answer helps me to not insert spaces between abbreviations inside text like "DaveIsAFKRightNow!Cool", it does insert a space between V and P in "VP".</p>
<p>I only have 25 experience points and am unable to comment on the existing post, so I have no choice but to create another post for this similar sort of problem.</p>
<p>I am not that good at RegEx and cannot figure out how to handle this situation.</p>
<p>I have tried this:</p>
<pre><code>re_outer = re.compile(r'([^A-Z ])([A-Z])')
re_inner = re.compile(r'(?<!^)([A-Z])([^A-Z])')
re_outer.sub(r'\1 \2', re_inner.sub(r' \1\2', 'DaveIsAFKRightNow!Cool'))
</code></pre>
<p>It gives me 'Dave Is AFK Right Now! Cool'</p>
<p>My text sample is this: </p>
<pre><code>General Manager or VP Torrance, CARequired education
</code></pre>
<p>I want the output as: <code>General Manager or VP Torrance, CA Required education</code></p>
<p>The Output i am getting is: <code>General Manager or V P Torrance, CA Required education</code></p>
|
<p>You may swap the replacements: first insert a space before any uppercase letter that is preceded by a character other than an uppercase letter or a space, and then insert a space after each abbreviation, that is, a run of 1+ uppercase letters followed by an uppercase and a lowercase letter:</p>
<pre><code>import re
re_outer = re.compile(r'([^A-Z ])([A-Z])')
re_inner = re.compile(r'\b[A-Z]+(?=[A-Z][a-z])')
print(re_inner.sub(r'\g<0> ', re_outer.sub(r'\1 \2', 'DaveIsAFKRightNow!Cool')))
# => Dave Is AFK Right Now! Cool
print(re_inner.sub(r'\g<0> ', re_outer.sub(r'\1 \2', 'General Manager or VP Torrance, CARequired education')))
# => General Manager or VP Torrance, CA Required education
</code></pre>
<p>See the <a href="https://ideone.com/jfgNAB" rel="nofollow noreferrer">Python demo</a></p>
<p>The <code>\b[A-Z]+(?=[A-Z][a-z])</code> regex matches</p>
<ul>
<li><code>\b</code> - word boundary</li>
<li><code>[A-Z]+</code> - 1+ uppercase letters that are</li>
<li><code>(?=[A-Z][a-z])</code> - followed by an uppercase letter and a lowercase letter. </li>
</ul>
<p>Note that <code>\g<0></code> inserts the whole match in the replacement pattern.</p>
|
regex|python-3.x
| 2 |
1,904,172 | 18,520,823 |
Python find most recent created file
|
<p>I'm trying to find the name of the most recently created file in a directory:</p>
<pre><code> try:
#print "creating path"
os.makedirs(self.path)
self.dictName = "0"
except OSError as exception:
if exception.errno != errno.EEXIST:
raise
listOfFiles = os.listdir(self.path)
listOfFiles = filter(lambda x: not os.path.isdir(x), listOfFiles)
print "creating newestDict"
newestDict = max(listOfFiles, key=lambda x: os.stat(x).st_mtime) ------<
print "setting self.name"
</code></pre>
<p>The arrow points to the line where the code is causing the program to crash. Printing listOfFiles returns a non-empty list. This is running inside a third-party Python program that has its own interpreter. Any ideas what I'm doing wrong? Thanks!</p>
|
<p>You can clarify the process of listing files by using the <code>os.path.isfile()</code> function and a list comprehension, but that is up to you. If you look into the related link in the comments, you'll see the <code>os.path.getctime()</code> <a href="http://docs.python.org/release/2.5.2/lib/module-os.path.html#l2h-2178" rel="nofollow">function</a>, which returns the creation time of a file on Windows systems and the time of the last modification on Unix-like systems (like the <code>stat()</code> system call's <code>st_mtime</code> attribute): </p>
<pre><code>from os import listdir
from os.path import isfile, join, getctime

listOfFiles = [f for f in listdir(self.path) if isfile(join(self.path, f))]
# join the directory back on: listdir() returns bare file names
newestDict = max(listOfFiles, key=lambda x: getctime(join(self.path, x)))
</code></pre>
<p>Note that in your original code <code>os.stat(x)</code> is called on the bare file names returned by <code>os.listdir()</code>, which only works if the script runs from inside <code>self.path</code>; joining the directory back on, as above, avoids that. If it still crashes, it would help if you provided the error the interpreter is throwing or the full piece of code.</p>
|
python|file|directory
| 0 |
1,904,173 | 18,646,240 |
Longer than intended loop due to multiple variables
|
<p>I'm making a program that charts a random movement across a grid, where the starting point is the middle of the grid and the user gives the number of columns and rows. The current problem is that I have two changing variables that both need to signal the end of the program. X is for horizontal movement and Y is for vertical: if either of them goes outside of the grid, I need the program to end. At the moment, when one variable goes off, the program continues running until the other does. (Ex. If it moves off the grid vertically, the program still keeps picking random directions and moving horizontally. The same is true for horizontal.) So I'm not sure how to write the program so that it ends when one of them goes off, instead of waiting for both. Here's what I've got so far:</p>
<pre><code>import random
def rightmove(width, px):
px += 1
if px > width:
return px
else:
return px
def leftmove(width, px):
px -= 1
if px == -1:
return px
else:
return px
def downmove(height, py):
py += 1
if py == height:
return py
else:
return py
def upmove(height, py):
py += 1
if py == -1:
return py
else:
return py
def main():
height = raw_input("Please enter the desired number of rows: ")
height = int(height)
width = raw_input("Please enter the desired number of columns: ")
width = int(width)
px = round(width/2)
px = int(px)
py = round(height/2)
py = int(py)
print "Manhattan (" + str(width) + ", " + str(height) + ")"
print "(x, y) " + str(px) + " " + str(py)
topy = height + 1
topx = width + 1
while 0 <= px <= width:
while 0 <= py <= height:
s = random.randint(0, 1)
if s == 0:
x = random.randint(0, 1)
if x == 0:
px = leftmove(width, px)
if px <= 0:
print "Direction E (x, y) " + str(px)
else:
print "Direction E"
else:
px = rightmove(height, px)
if px <= width:
print "Direction W (x, y) " + str(px)
else:
print "Direction W"
else:
y = random.randint(0, 1)
if y == 0:
py = downmove(height, py)
if py <= height:
print "Direction S (x, y) " + str(py)
else:
print "Direction S"
else:
py = upmove(height, py)
if py <= 0:
print "Direction N (x, y) " + str(py)
else:
print "Direction N"
main()
</code></pre>
<p>Here's a sample of the intended output:</p>
<pre><code>>>> manhattan(5,7)
(x, y) 2 3
direction N (x, y) 1 3
direction N (x, y) 0 3
direction S (x, y) 1 3
direction W (x, y) 1 2
direction S (x, y) 2 2
direction E (x, y) 2 3
direction S (x, y) 3 3
direction W (x, y) 3 2
direction N (x, y) 2 2
direction W (x, y) 2 1
direction E (x, y) 2 2
direction W (x, y) 2 1
direction N (x, y) 1 1
direction N (x, y) 0 1
direction S (x, y) 1 1
direction W (x, y) 1 0
direction E (x, y) 1 1
direction W (x, y) 1 0
direction W
</code></pre>
|
<p>Although this is only a guess without seeing your actual indented code, I believe your problem is this:</p>
<pre><code>while 0 <= px <= width:
while 0 <= py <= height:
# a whole mess of logic
</code></pre>
<p>If <code>py</code> goes out of bounds, you'll escape the inner loop, but then just keep repeating that inner loop until <code>px</code> also goes out of bounds.</p>
<p>If <code>px</code> goes out of bounds, you'll still be stuck inside the inner loop until <code>py</code> also goes out of bounds.</p>
<p>What you want is to escape as soon as <em>either</em> goes out of bounds. In other words, you want a single loop, that keeps going as long as both are in bounds. Which you can translate directly from English to Python:</p>
<pre><code>while 0 <= px <= width and 0 <= py <= height:
# same whole mess of logic
</code></pre>
|
python|function|loops|python-2.7
| 1 |
1,904,174 | 69,506,520 |
How do I merge two sets of data with Pandas in Python without losing rows?
|
<p>I'm using Pandas in Python to compare two data frames. I want to match up the data from one set to another.</p>
<p><strong>Dataframe 1</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sam</td>
</tr>
<tr>
<td>Mike</td>
</tr>
<tr>
<td>John</td>
</tr>
<tr>
<td>Matthew</td>
</tr>
<tr>
<td>Mark</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Dataframe 2</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mike</td>
<td>76</td>
</tr>
<tr>
<td>John</td>
<td>92</td>
</tr>
<tr>
<td>Mark</td>
<td>32</td>
</tr>
</tbody>
</table>
</div>
<p><strong>This is the output I would like to get:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sam</td>
<td>0</td>
</tr>
<tr>
<td>Mike</td>
<td>76</td>
</tr>
<tr>
<td>John</td>
<td>92</td>
</tr>
<tr>
<td>Matthew</td>
<td>0</td>
</tr>
<tr>
<td>Mark</td>
<td>32</td>
</tr>
</tbody>
</table>
</div>
<p><strong>At the moment I am doing this</strong></p>
<pre><code>df1 = pd.read_csv('data_frame1.csv', usecols=['Name', 'Number'])
df2 = pd.read_csv('data_frame2.csv')
df3 = pd.merge(df1, df2, on = 'Name')
df3.set_index('Name', inplace = True)
df3.to_csv('output.csv')
</code></pre>
<p>However, this is deleting the names which do not have a number. I want to keep them and assign 0 to them.</p>
|
<p>You can use <code>pd.merge(..., how='outer')</code>: this keeps all rows, inserting <code>NaN</code> where there is no match, and then <code>.fillna(0)</code> replaces those <code>NaN</code> values with <code>0</code>:</p>
<pre><code>>>> pd.merge(df1, df2, on = 'Name', how = 'outer').fillna(0)
Name Number
0 Sam 0
1 Mike 76
2 John 92
3 Matthew 0
4 Mark 32
</code></pre>
<p>With <code>how='outer'</code> you keep unmatched rows from both DataFrames; if you only want to keep the rows of one DataFrame (here the left one), use <code>how='left'</code> instead, as in this example:</p>
<pre><code>>>> df1 = pd.DataFrame({'Name': ['Mike','John','Mark','Matthew']})
>>> df2 = pd.DataFrame({'Name': ['Mike','John','Mark', 'Sara'], 'Number' : [76,92,32,50]})
>>> pd.merge(df1, df2, on='Name', how='outer').fillna(0)
Name Number
0 Mike 76.0
1 John 92.0
2 Mark 32.0
3 Matthew 0.0
4 Sara 50.0
>>> df1.merge(df2,on='Name', how='left').fillna(0)
Name Number
0 Mike 76.0
1 John 92.0
2 Mark 32.0
3 Matthew 0.0
</code></pre>
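<p>Note that the merge introduces <code>NaN</code>, which forces the <code>Number</code> column to float (hence <code>76.0</code> above). If you want integers back after <code>fillna(0)</code>, you can cast the column, for example:</p>
<pre><code>merged = pd.merge(df1, df2, on='Name', how='outer').fillna(0)
merged['Number'] = merged['Number'].astype(int)  # restore integer dtype
</code></pre>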
|
python|pandas|dataframe
| 2 |
1,904,175 | 69,520,746 |
Count of Year in Python
|
<p>How can I find the number of years in Python between a certain date and the date a person opened an account (CRDACCT_DTE_OPEN)? The certain date (MIS_DATE) is 2021-03-01 with format=<code>'%Y%m%d'</code>.</p>
<p>The dataset given below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{ "MIS_DATE": ["2018-03-02", "2020-03-26", "2019-08-17", "2019-08-17", "2019-08-19"],
"CRDACCT_DTE_OPEN": ["2021-03-31", "2021-03-31", "2021-03-31", "2021-03-31", "2021-03-31"]})
</code></pre>
<p>Format date:</p>
<pre><code>df['CRDACCT_DTE_OPEN'] = pd.to_datetime(df['CRDACCT_DTE_OPEN'], format='%Y%m%d')
df['MIS_DATE'] = pd.to_datetime(df['MIS_DATE'], format='%d%m%Y')
</code></pre>
<p>I've tried the following operation. Let's say <strong>MOB</strong> is the result of <strong>MIS_DATE - CRDACCT_DTE_OPEN</strong>, but the result is not what I expected. <em>I want the output in years: for example, if someone opened an account on 2018-03-31, the MOB is 3,</em> meaning that 3 years have passed since that person opened the account.</p>
<pre><code>MOB = df['MIS_DATE'] - df['CRDACCT_DTE_OPEN']
MOB
</code></pre>
<p>Output:</p>
<pre><code>1 370 days
2 592 days
3 592 days
4 590 days
...
Name: MOB, Length: 5, dtype: timedelta64[ns]
</code></pre>
|
<p>this is what you need.</p>
<h3><code>df['col_of_datetime'].dt.year</code></h3>
<p>here is the example</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{"MIS_DATE": ["2018-03-02", "2020-03-26", "2019-08-17", "2019-08-17", "2019-08-19"],
"CRDACCT_DTE_OPEN": ["2021-03-31", "2021-03-31", "2021-03-31", "2021-03-31", "2021-03-31"]})
df['CRDACCT_DTE_OPEN'] = pd.to_datetime(df['CRDACCT_DTE_OPEN'], format='%Y-%m-%d')
df['MIS_DATE'] = pd.to_datetime(df['MIS_DATE'], format='%Y-%m-%d')
target_year = 2021
result = target_year - df['MIS_DATE'].dt.year
print(result)
</code></pre>
<p>Output:</p>
<pre><code>0 3
1 1
2 2
3 2
4 2
Name: MIS_DATE, dtype: int64
</code></pre>
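<p>If you would rather compute the difference from the two actual date columns instead of a fixed <code>target_year</code>, a small sketch using the same DataFrame as above:</p>
<pre><code># year difference between the account-open date and MIS_DATE
df['MOB'] = df['CRDACCT_DTE_OPEN'].dt.year - df['MIS_DATE'].dt.year
print(df['MOB'])
</code></pre>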
|
python|pandas|datetime-format|subtraction|data-wrangling
| 1 |
1,904,176 | 55,567,824 |
TypeError: 'module' object is not callable - pygsp modules not callable
|
<p>I am trying to implement graph signal processing in PyGSP by following the documentation. The PyGSP version I am using is 0.5.1; it imports successfully but I am not able to use any of its modules.</p>
<pre><code>import pygsp
G = pygsp.graphs.logo()
f = pygsp.filters.Heat(G)
Sl = f.analysis(G.L.todense(), method='cheby')
</code></pre>
<p>Traceback (most recent call last):</p>
<pre><code> File "C:/Users/SAI_SHREYASHI_PENUGO/Documents/.../gsp_trial1.py", line 3,
in <module>
G = pygsp.graphs.logo()
TypeError: 'module' object is not callable
</code></pre>
<p>I expected it to run without errors, considering I have pygsp installed in site-packages where all other packages are stored (which are accessed without any error).</p>
|
<p>As rightly pointed out in the comments, <code>pygsp.graphs.logo</code> is a <a href="https://pygsp.readthedocs.io/en/stable/_modules/pygsp/graphs/logo.html?highlight=Logo" rel="nofollow noreferrer">pygsp module</a>, not a callable.
There are a couple of other errors too.</p>
<p>The correct way to use the module is as follows.</p>
<pre><code>import pygsp
G = pygsp.graphs.Logo()
f = pygsp.filters.Heat(G)
Sl = f.analyze(G.L.todense(), method='chebyshev')
</code></pre>
<p>You can now view one of the filtered signal on the graph by</p>
<pre><code>pygsp.plotting.plot_signal(G, Sl[0])
pygsp.plotting.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/Ls1tD.png" alt="Output Image"></p>
|
python-3.6
| 0 |
1,904,177 | 57,421,669 |
Question about pip using Python from Windows Store
|
<p>I have python installed through windows store and I can install programs using pip, but when I try to run said programs, they fail to execute in powershell.</p>
<p>How can I make sure that the necessary "scripts" folder is in my path? I never faced these problems when installing from executable.</p>
<p>For example, "pip install ntfy" runs successfully in Powershell.</p>
<p>The command "ntfy send test" fails telling me the term is not part of a cmdlet, function, etc. etc.</p>
<p>The 'ntfy' program is located here /mnt/c/Users/vlouvet/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0/LocalCache/local-packages/Python37/Scripts/ntfy.exe</p>
<p>What is the <em>recommended</em> way of editing my path so that programs installed via pip are available across windows store updates of the Python language?</p>
|
<h2>Up front</h2>
<p>I highly recommend that you not use the Python installed from the Windows Store, because you'll keep running into errors like this one, and even nastier ones.</p>
<h1>The easy solution</h1>
<p>Create a virtual environment on a more accessible folder, for example in <code>C:\Users\<user>\python</code>. To do so, do the following:</p>
<ul>
<li>Using <em>PowerShell</em>, go to your user folder, using <code>cd</code>(Note that usually PowerShell already starts inside your user folder. This is an important setting to have, and if not, you should change your powershell starting point to this folder for the future.);</li>
<li>Now that you're in your user folder, type in the <em>PowerShell</em> <code>mkdir python; cd python</code>;</li>
<li>Now, to create a virtual environment, type <code>python -m venv venv</code>;</li>
<li>(You can verify that your virtual enviroment has been created by listing the folders, with the command <code>ls</code>);</li>
<li>You have created a virtual environment. Now, you need to activate it. To activate, run the following: <code>./venv/Scripts/activate</code>;</li>
</ul>
<p>Now, you have fully created and activated a <em>virtual environment</em> for your current PowerShell session. You can now install any packages / programs using <code>pip</code>.</p>
<p>After that, the only thing that you need to do is to add <code>C:\Users\<user>\python\venv\Scripts</code> to your Path, and you're good to go.</p>
<h2>Caveats</h2>
<p>By adding this folder to your Path, you may be using an outdated python version in the future, since the <code>Scripts</code> folder inside your virtual environment also adds a python executable that will be enabled in the path.</p>
<h1>The recommended solution</h1>
<p>As I stated before, I do not recommend to have the Microsoft Store version of python installed on your machine. That said, you're probably using it for the conveniences of having the latest Python version installed as soon as they're released. To alleviate this need while also getting rid of your MS Store Python. I recommend you using Chocolatey to install python (and pretty much any other programs for development).</p>
<p><strong>What is Chocolatey?</strong></p>
<p>Chocolatey is a package manager for Windows, pretty much like <code>apt-get</code> for Ubuntu Linux or HomeBrew for MacOS. By using a package manager, you get rid of the hassle of always having to execute the (mostly annoying) install wizards on windows.</p>
<p>To install Chocolatey:</p>
<ul>
<li>Go to <a href="https://chocolatey.org/install" rel="noreferrer">chocolatey.org/install</a> and follow the install instructions;</li>
<li>(Recommended: Take a look on their documentation later to see what Chocolatey is capable of);</li>
<li>With Chocolatey installed, take a test drive and see if it is working properly by running <code>choco -v</code> in PowerShell;</li>
<li>By having Chocolatey installed, you can now run <code>choco install python -y</code>. Let's break down this command:
<ul>
<li><code>choco install</code> -> The package installer of chocolatey</li>
<li><code>python</code> -> the name of the package you want to install</li>
<li><code>-y</code> -> This tells the installer to skip install verification by saying "Yes to All" scripts that will be executed in order to install a package.</li>
</ul></li>
<li>With python installed from chocolatey, you can also see that Python is already added to your path - This means that any python package or executable installed globally will be now available on your machine!</li>
</ul>
<p>Hope I could help you!</p>
|
python|windows|pip
| 12 |
1,904,178 | 57,513,545 |
RabbitMQ - Python/Pika How to know if queue is empty?
|
<p>My idea is also described here if I express myself incorrectly (<a href="https://stackoverflow.com/questions/57491859/send-images-with-their-names-in-one-message-rabbitmq-python-3-x">Send images with their names in one message - RabbitMQ (Python 3.X)</a>)</p>
<p>I currently have a problem with RabbitMQ ---></p>
<p>I made a working queue on which several consumers work at the same time, it is a containerized image processing that gives a str output with the requested information.</p>
<p>The results must be sent to another queue when the processing is finished,
but how do I know that the queue containing the images is empty and there is no more work to do? Roughly speaking, I am looking for something like "if the queue is empty, then send the results...". </p>
<p>Thank you for your time, have a good day.</p>
|
<p>You can do a passive declare of the queue to get the count of messages, but that may not be reliable as the count returned does not include messages in the "unacked" state. You could query the queue's counts via the HTTP API.</p>
<p>Or, whatever application publishes the images could send a "no more images" message to indicate no more work to do. The consumer that receives that message could then query the HTTP API to confirm that no messages are in the Ready or Unacked state, then send the results to the next queue.</p>
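<p>For illustration, here is a minimal pika sketch of the passive-declare approach (the queue name <code>images</code> and the connection parameters are assumptions):</p>
<pre><code>import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
# passive=True raises an error if the queue doesn't exist, and returns its counts
frame = channel.queue_declare(queue='images', passive=True)
print(frame.method.message_count)  # counts "Ready" messages only, not unacked ones
</code></pre>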
<hr>
<p><sub><b>NOTE:</b> the RabbitMQ team monitors the <code>rabbitmq-users</code> <a href="https://groups.google.com/forum/#!forum/rabbitmq-users" rel="nofollow noreferrer">mailing list</a> and only sometimes answers questions on StackOverflow.</sub></p>
|
python|rabbitmq|pika
| 1 |
1,904,179 | 53,873,889 |
Reading specific column from a csv file in python
|
<p>I am writing some Python code to practice for an exam in January. I need to read the second column into my code and print it out. If possible I also need to add data to specific columns. </p>
<p>The code I have tried is:</p>
<pre><code>def view_this_weeks_sales(): #name of function
    with open('employees-names1.csv ') as data: #name of csv file
        reader = csv.reader(data)
        first_column = next(zip(*reader))
        print first_column
</code></pre>
<p>My file is below; it has four columns. Thanks in advance :)</p>
<p>Name of employee,Number of cars sold last week, Number of cars sold this week,Number of cars sold total (<em>all on one line</em>)</p>
<p>1,7,7,14</p>
<p>2,7,8,15</p>
<p>3,9,2,11</p>
<p>4,8,6,14</p>
<p>5,2,9,11</p>
<p>6,4,15,19</p>
<p>Total, ,47,84</p>
|
<p>One way of doing this would be</p>
<pre><code>import csv

# replace the name with your actual csv file name
file_name = "data.csv"
second_column = []  # empty list to store second column values
with open(file_name) as f:  # the file is closed automatically
    csv_file = csv.reader(f)
    next(csv_file)  # skip the header row if you don't want it included
    for line in csv_file:
        second_column.append(line[1])  # index 1 for second column
        print(line[1])
</code></pre>
<p>Second_column variable will hold the necessary values. Hope this helps.</p>
|
python|csv|multiple-columns
| 1 |
1,904,180 | 53,848,949 |
import text data having spaces using pandas.read_csv
|
<p>I would like to import a text file using pandas.read_csv:</p>
<pre><code>1541783101 8901951488 file.log 12345 123456
1541783401 21872967680 other file.log 23456 123
1541783701 3 third file.log 23456 123
</code></pre>
<p>The difficulty here is that the columns are separated by one or more spaces, but there is one column that contains a file name having spaces. So I can't use <code>sep=r"\s+"</code> to identify the columns as that would fail at the first file name having a space. The file format does not have a fixed column width.</p>
<p>However each file name ends with ".log". I could write separate regular expressions matching each column. Is it possible to use these to identify the columns to import? Or is it possible to write a separator regular expression that selects all characters NOT matching any of the column matching regular expressions?</p>
|
<p><strong>Answer for updated question -</strong></p>
<p>Here's the code which will not fail whatever the data width may be. You can modify it as per your needs.</p>
<pre><code>df = pd.read_table('file.txt', header=None)
# Replacing uneven spaces with single space
df = df[0].apply(lambda x: ' '.join(x.split()))
# An empty dataframe to hold the output
out = pd.DataFrame(np.NaN, index=df.index, columns=['col1', 'col2', 'col3', 'col4', 'col5'])
n_cols = 5 # number of columns
for i in range(n_cols-2):
    if i == 0 or i == 1:
        # columns 0 and 1: peel one value off the front of each row
        out.iloc[:, i] = df.str.partition(' ').iloc[:,0]
        df = df.str.partition(' ').iloc[:,2]
    else:
        # last column: peel one value off the back of each row
        out.iloc[:, 4] = df.str.rpartition(' ').iloc[:,2]
        df = df.str.rpartition(' ').iloc[:,0]
out.iloc[:,3] = df.str.rpartition(' ').iloc[:,2]
out.iloc[:,2] = df.str.rpartition(' ').iloc[:,0]
print(out)
+---+------------+-------------+----------------+-------+--------+
| | col1 | col2 | col3 | col4 | col5 |
+---+------------+-------------+----------------+-------+--------+
| 0 | 1541783101 | 8901951488 | file.log | 12345 | 123456 |
| 1 | 1541783401 | 21872967680 | other file.log | 23456 | 123 |
| 2 | 1541783701 | 3 | third file.log | 23456 | 123 |
+---+------------+-------------+----------------+-------+--------+
</code></pre>
<p><em>Note - The code is hardcoded for 5 columns. It can be generalized too.</em> </p>
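<p>Since every file name ends with <code>.log</code> (as the question notes), a shorter alternative is a single regex with <code>str.extract</code>. This is just a sketch; the column names are illustrative:</p>
<pre><code>import pandas as pd

raw = pd.read_table('file.txt', header=None)[0]
# one capture group per column; (.+\.log) grabs the file name, spaces included
pattern = r'^(\S+)\s+(\S+)\s+(.+\.log)\s+(\S+)\s+(\S+)$'
out = raw.str.extract(pattern)
out.columns = ['col1', 'col2', 'col3', 'col4', 'col5']
</code></pre>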
<p><strong>Previous answer -</strong></p>
<p>Use <code>pd.read_fwf()</code> to read files with fixed-width.</p>
<p>In your case:</p>
<pre><code>pd.read_fwf('file.txt', header=None)
+---+----------+-----+-------------------+-------+--------+
| | 0 | 1 | 2 | 3 | 4 |
+---+----------+-----+-------------------+-------+--------+
| 0 | 20181201 | 3 | file.log | 12345 | 123456 |
| 1 | 20181201 | 12 | otherfile.log | 23456 | 123 |
| 2 | 20181201 | 200 | odd file name.log | 23456 | 123 |
+---+----------+-----+-------------------+-------+--------+
</code></pre>
|
python|pandas|regex-negation
| 3 |
1,904,181 | 58,255,668 |
Python Basic Client Server Socket Programs
|
<p>I tried the basic programs for client and server from realpython (<a href="https://realpython.com/python-sockets/#echo-client-and-server" rel="nofollow noreferrer">https://realpython.com/python-sockets/#echo-client-and-server</a>)</p>
<p>While these work fine when running on the same computer, there is the following problem when running on different machines: </p>
<p><em>ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it</em></p>
<p><strong>Client code:</strong></p>
<pre><code>import socket

HOST = '10.0.0.55'  # The server's hostname or IP address
PORT = 65432 # The port used by the server
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.connect((HOST, PORT))
s.sendall(b'Hello, world')
data = s.recv(1024)
print('Received', repr(data))
</code></pre>
<p><strong>Server Code:</strong></p>
<pre><code>import socket
HOST = '127.0.0.1' # Standard loopback interface address (localhost)
PORT = 65432 # Port to listen on (non-privileged ports are > 1023)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind((HOST, PORT))
s.listen()
conn, addr = s.accept()
with conn:
print('Connected by', addr)
while True:
data = conn.recv(1024)
if not data:
break
conn.sendall(data)
</code></pre>
<ul>
<li>I can make pings from one computer to the other.</li>
<li>Firewall is turned down</li>
<li>Wireshark shows that the SYN message arrives on the second computer which is answered by a RST message (<a href="https://i.stack.imgur.com/1uL3a.png" rel="nofollow noreferrer">Wireshark PC server</a>)</li>
</ul>
|
<p>If you want the server to be open to other computers, you can't listen on <code>127.0.0.1</code> which is basically an inner local loop only located on the computer running the program (that's why it's called loopback in comments). You should have the server listen on its own real address (for example: <code>10.0.0.55</code> explicitly).</p>
<p>This, however, can be annoying if your host can change addresses; an easy workaround is to just use the local IP address like this (on the server):</p>
<pre><code>HOST = socket.gethostbyname(socket.gethostname())
</code></pre>
<p>Or if you specifically want to use the address from one network interface:</p>
<pre><code>HOST = '10.0.0.55'
</code></pre>
<p>Or, if you want to listen on all network interfaces:</p>
<pre><code>HOST = '0.0.0.0'
</code></pre>
|
python|sockets|tcp
| 0 |
1,904,182 | 22,748,668 |
Why does this code give me an error?
|
<p>I have this code:</p>
<pre><code>import re
with open("text2.txt", "r") as f:
content = f.readlines()
numbers = re.findall(r'\b\d{3}\b', content)
with open("text3.txt", "w") as f:
f.write(str(numbers))
</code></pre>
<p>When run, it's supposed to find all of the three digit numbers, and print them to a new text file.</p>
<p>When I run it, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Zach\Desktop\test3.py", line 4, in <module>
numbers = re.findall(r'\b\d{3}\b', content)
File "C:\Panda3D-1.7.2\python\lib\re.py", line 177, in findall
return _compile(pattern, flags).findall(string)
TypeError: expected string or buffer
</code></pre>
<p>What am I doing wrong?</p>
|
<p><code>re.findall</code> expects a string as its second argument, but the <a href="https://docs.python.org/2/library/stdtypes.html#file.readlines" rel="nofollow"><code>readlines</code></a> method of a file object returns a list. Perhaps you meant to use the <a href="https://docs.python.org/2/library/stdtypes.html#file.read" rel="nofollow"><code>read</code></a> method instead (which returns a string):</p>
<pre><code>with open("text2.txt", "r") as f:
content = f.read()
</code></pre>
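<p>Putting it together, the corrected script (writing one number per line rather than the list's <code>repr</code>) would look like:</p>
<pre><code>import re

with open("text2.txt", "r") as f:
    content = f.read()  # read() returns a single string, which findall expects

numbers = re.findall(r'\b\d{3}\b', content)

with open("text3.txt", "w") as f:
    f.write("\n".join(numbers))
</code></pre>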
|
python|python-2.7|python-3.x
| 4 |
1,904,183 | 45,497,649 |
JSON(incorrect) to CSV
|
<p>I have an (incorrect) JSON file that I want to convert to a CSV table.
Below I show two rows (2 of the 2500) of the JSON file:</p>
<pre><code>{
"usage":{
"text_characters":7653,
"features":2,
"text_units":1
},
"emotion":{
"document":{
"emotion":{
"anger":0.085554,
"joy":0.526103,
"sadness":0.533085,
"fear":0.148549,
"disgust":0.078346
}
}
},
"language":"en",
"sentiment":{
"document":{
"score":-0.323271,
"label":"negative"
}
},
"retrieved_url":"http://blogs.plos.org/speakingofmedicine/2017/01/20/the-why-vaccines-dont-cause-autism-papers/"
}{
"usage":{
"text_characters":5528,
"features":2,
"text_units":1
},
"emotion":{
"document":{
"emotion":{
"anger":0.160801,
"joy":0.443317,
"sadness":0.596578,
"fear":0.555745,
"disgust":0.127581
}
}
},
"language":"en",
"sentiment":{
"document":{
"score":-0.558026,
"label":"negative"
}
},
"retrieved_url":"http://www.cnn.com/2011/HEALTH/01/05/autism.vaccines/index.html"
}
</code></pre>
<p>However I want to convert this to a CSV table like this with python:</p>
<pre><code>usage__text_characters usage__features usage__text_units emotion__document__emotion__anger emotion__document__emotion__joy emotion__document__emotion__sadness emotion__document__emotion__fear emotion__document__emotion__disgust language sentiment__document__score sentiment__document__label retrieved_url
7653 2 1 0.085554 0.526103 0.533085 0.148549 0.078346 en -0.323271 negative http://blogs.plos.org/speakingofmedicine/2017/01/20/the-why-vaccines-dont-cause-autism-papers/
5528 2 1 0.160801 0.443317 0.596578 0.555745 0.127581 en -0.558026 negative http://www.cnn.com/2011/HEALTH/01/05/autism.vaccines/index.html
</code></pre>
<p>I have tried several thinks already without success(I merged the things that I already have tried):</p>
<pre><code>import json
import pandas as pd
with open('data.json') as data_file:
dd = json.load(data_file)
print dd
df = pd.read_json('data.json').unstack().dropna()
data = pd.read_json('data.json', lines=True)
with open('data.json', 'rb') as f:
data = f.readlines()
data = map(lambda x: x.rstrip(), data)
data_json_str = "[" + ','.join(data) + "]"
data_df = pd.read_json(data_json_str)
</code></pre>
<p>The answer from @JeffMercado solved the question.</p>
|
<p>Well this is a bit late, but my coworker was determined to make a solution using a recursive function so I'll share it.</p>
<pre><code>import json, pandas

with open('emotions.json') as emotions:
    emotions = json.load(emotions)

def flattener(my_dict, return_dict=None, mykey=''):
    # use None instead of a mutable default ({}), which would leak
    # state between separate calls
    if return_dict is None:
        return_dict = {}
    for key, item in my_dict.items():
        if isinstance(my_dict[key], dict):
            return_dict = flattener(item, return_dict, mykey + '__' + str(key))
        else:
            return_dict[mykey + '__' + str(key)] = item
    return return_dict

dictionary = {}
for key in flattener(emotions[0]).keys():
    # get flattened keys and make a new dictionary of lists with them
    dictionary[key[2:]] = []
for emotion in emotions:
    # get more flattened dictionaries and store them in our
    # dictionary of lists
    for key, value in flattener(emotion).items():
        dictionary[key[2:]].append(value)
# pandas makes writing to csv easy
df = pandas.DataFrame(dictionary)
df.to_csv('csv_name.csv')
</code></pre>
|
python|json|csv
| 0 |
1,904,184 | 45,335,351 |
Reading in a text file, and separating the string by bracket in Python
|
<p>I have a text file that I am feeding with data as a string, using the following lines of Python:</p>
<pre><code> file = open("C:\\Users\\Me\\Desktop\\data.txt", "a")
file.writelines(str(mathfunction(readField())))
file.flush()
file.close()
</code></pre>
<p>in the format as follows:</p>
<p><a href="https://i.stack.imgur.com/AR6d7.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AR6d7.jpg" alt="enter image description here"></a></p>
<p>Every input into the text file consists of an array of three items.</p>
<p>My goal is to extract the third item from each input, convert it to a float, and then store these values in a new array. So, ideally, in the above case, the array would contain:</p>
<p><code>[1.0087890625, 0.4404296875, 0.4404296875]</code></p>
<p>I tried the following:</p>
<pre><code>data = pd.read_csv("C:\\Users\\User\\Desktop\\data.txt", sep="]", header = None)
data.head()
</code></pre>
<p><a href="https://i.stack.imgur.com/mDt9J.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mDt9J.jpg" alt="enter image description here"></a></p>
<p>and it returned the data in what looks like a string format. </p>
<p>What are the next steps that I should take, in order to isolate the third item in each subarray, and store it all in one array?</p>
<p><strong>EDIT: Here is some data from data.txt</strong></p>
<pre><code>[0.0263671875, 0.03515625, 1.0087890625][0.01171875, 0.0146484375, 0.4404296875][0.01171875, 0.0146484375, 0.4404296875]
</code></pre>
|
<p>You can then split by <code>','</code> commas after reading in the data:</p>
<pre><code>data = pd.read_csv("C:\\Users\\User\\Desktop\\data.txt", sep="]", header = None)
data = data.iloc[0]
data = data.apply(lambda x: x.split(',')[2]).astype(float).tolist()
</code></pre>
<p>If you want to keep the data in a Pandas Series, just remove the <code>tolist()</code> portion above. Here is an example:</p>
<pre><code>data = pd.DataFrame([['[1,2,3','[3,4,5','[4,5,6']])
print(data)
0 1 2
0 [1,2,3 [3,4,5 [4,5,6
data = data.iloc[0]
data = data.apply(lambda x: x.split(',')[2]).astype(float).tolist()
print(data)
[3.0, 5.0, 6.0]
</code></pre>
|
python|arrays
| 2 |
1,904,185 | 28,575,443 |
Python adding outputs from a loop to an variable
|
<p>The output I want to have is like:</p>
<pre><code>the result is 8
the result is 16
the result is 24
the result is 32
the result is 40
</code></pre>
<p>So I do:</p>
<pre><code>final = ''
for each_one in [1, 2, 3, 4, 5]:
result = 'the result is ' + str(each_one * 8)
final.join(result)
final.join('\n')
print final
</code></pre>
<p>But it doesn't work out. How can I adjust it?</p>
|
<pre><code>final = ''
for each_one in [1, 2, 3, 4, 5]:
final += ('the result is ' + str( each_one * 8 )+'\n\n')
print final
</code></pre>
<p>I believe you are mixing up the join function with concatenation. </p>
<p>Not that it would be impossible to do it with join, but<br>
your code resembles string concatenation.</p>
<p>Learn more about join <a href="http://www.tutorialspoint.com/python/string_join.htm" rel="nofollow">here</a></p>
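<p>For completeness, here is a sketch of how the same output could be built with <code>join</code>:</p>
<pre><code>lines = ['the result is ' + str(n * 8) for n in [1, 2, 3, 4, 5]]
final = '\n\n'.join(lines)
print final
</code></pre>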
|
python
| 3 |
1,904,186 | 6,884,764 |
Error: Invalid Syntax on "def"
|
<p>So I'm getting <strong>Invalid Syntax</strong> where it says def before the add_entry function. I have no idea why. I tried commenting out, and then there was the same error on a different function. I'm using Python 2.7.</p>
<pre><code>date,number = 0,1
month,day,year = 1,2,0
from datetime import datetime
def home():
option = ''
option = raw_input('Press ENTER to view log or input anything to add entries: ')
print '\n'
if option == '':
view_log()
else:
add_entry()
def view_log():
log_a = open('storage.txt', 'r')
log_b = log_a.read()
for line in log_b:
print line[date[month]],line[date[day]],line[date[[year]],line[number]
def add_entry():
old_entry = open('storage.txt', 'r')
save = ''
for line in old_entry:
save = save + line
new_entry = open('storage.txt','w')
new = input_entry()
save = save + str(new) + '\n'
new_entry.write(save)
def input_entry():
n_date = get_date()
print 'Todays date is: %s/%s/%s' %(n_date[month],n_date[day],n_date[year])
n_number = raw_input('What was todays number? ')
return (n_date,n_number)
def get_date():
time_a = datetime.now()
time_b = str(time_a)
time_c = time_b.split(' ')
time_d = time_c[0].split('-')
time_e = tuple(time_d)
return time_e
</code></pre>
|
<p>Your print statement in <code>view_log</code> has an extra <code>[</code>.
It should be:</p>
<pre><code> print line[date[month]],line[date[day]],line[date[year]],line[number]
</code></pre>
|
python
| 12 |
1,904,187 | 6,361,600 |
counting (large number of) strings within (very large) text
|
<p>I've seen a couple variations of the "efficiently search for strings within file(s)" question on Stackoverflow but not quite like my situation. </p>
<ul>
<li><p>I've got one text file which contains a relatively large number (>300K) of strings. The vast majority of these strings are multiple words (for ex., "Plessy v. Ferguson", "John Smith", etc.).</p></li>
<li><p>From there, I need to search through a very large set of text files (a set of legal docs totaling >10GB) and tally the instances of those strings. </p></li>
</ul>
<p>Because of the number of search strings, the strings having multiple words, and the size of the search target, a lot of the "standard" solutions seem fall to the wayside. </p>
<p>Some things simplify the problem a little - </p>
<ul>
<li><p>I don't need sophisticated tokenizing / stemming / etc. (e.g. the only instances I care about are "Plessy v. Ferguson", don't need to worry about "Plessy", "Plessy et. al." etc.)</p></li>
<li><p>there will be some duplicates (for ex., multiple people named "John Smith"), however, this isn't a very statistically significant issue for this dataset so... if multiple John Smith's get conflated into a single tally, that's ok for now.</p></li>
<li><p>I only need to count these specific instances; I don't need to return search results</p></li>
<li><p>10 instances in 1 file count the same as 1 instance in each of 10 files</p></li>
</ul>
<p>Any suggestions for quick / dirty ways to solve this problem? </p>
<p>I've investigated NLTK, Lucene & others but they appear to be overkill for the problem I'm trying to solve. Should I suck it up and import everything into a DB? bruteforce grep it 300k times? ;)</p>
<p>My preferred dev tool is Python.</p>
<hr>
<p>The docs to be searched are primarily legal docs like this - <a href="http://www.lawnix.com/cases/plessy-ferguson.html" rel="nofollow">http://www.lawnix.com/cases/plessy-ferguson.html</a></p>
<p>The intended results are tallies of how often each case is referenced across those docs -
"Plessy v. Ferguson: 15"</p>
|
<p>An easy way to solve this is to build a trie with your queries (simply a prefix tree, list of nodes with a single character inside), and when you search through your 10gb file you go through your tree recursively as the text matches. </p>
<p>This way you prune <em>a lot</em> of options really early on in your search for each character position in the big file, while still searching your whole solution space.</p>
<p>Time performance will be very good (as good as a lot of other, more complicated solutions) and you'll only need enough space to store the tree (a lot less than the whole array of strings) and a small buffer into the large file. Definitely a lot better than grepping a db 300k times...</p>
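<p>A minimal sketch of that idea in Python (simplified, with no word-boundary handling):</p>
<pre><code># Build a trie of the search phrases, then walk it from every start position.
from collections import defaultdict

def build_trie(phrases):
    root = {}
    for p in phrases:
        node = root
        for ch in p:
            node = node.setdefault(ch, {})
        node[None] = p          # None marks the end of a complete phrase
    return root

def count_matches(text, root):
    counts = defaultdict(int)
    for i in range(len(text)):
        node = root
        j = i
        while j < len(text) and text[j] in node:
            node = node[text[j]]
            j += 1
            if None in node:    # a full phrase ends here
                counts[node[None]] += 1
    return counts

trie = build_trie(['Plessy v. Ferguson', 'John Smith'])
print(count_matches('In Plessy v. Ferguson the court ...', trie))
</code></pre>
<p>In practice you would feed each 10GB file through this in chunks, keeping a small overlap buffer at chunk boundaries so phrases spanning two chunks are not missed.</p>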
|
python|lucene|full-text-search|nltk
| 2 |
1,904,188 | 25,823,142 |
WXPython: getting the initial value of a combo box
|
<p>In a dialog box I have a combo box and a text field. I would like to make it so that if one particular value in the combo box is selected, the text field would be disabled (or hidden), and if another value is selected, the text field would be enabled.<br>
I have:</p>
<pre><code>self.myCombo = wx.ComboBox(parent=self, choices=['value1', 'value2'], style = wx.CB_READONLY)
self.myCombo.Bind(wx.EVT_COMBOBOX, self.onChange)
# ...
def onChange(self, ev):
self.myTextField.Enable(False) if self.myCombo.GetValue() != "value1" else self.myTextField.Enable(True)
</code></pre>
<p>And this does work like a charm, the text field gets enabled and disabled.<br>
However, I would like to have the text field enabled or disabled depending on the initial value of the combo box, meaning the value gotten from a config file and selected when the dialog box is open.<br>
I've tried the same:</p>
<pre><code>self.myTextField = wx.TextCtrl(parent=self)
self.myTextField.Enable(False) if self.myCombo.GetValue() != "value1" else self.myTextField.Enable(True)
</code></pre>
<p>but this doesn't work. I've tried <code>GetSelection</code> also, but when logging this, both <code>GetValue</code> and <code>GetSelection</code> return -1.</p>
|
<p>The combobox probably isn't fully initialized yet when you try to query it. If you want to disable it when it loads, you don't have to check its value. Just disable it. But for what you want to do, I would recommend using wxPython's <a href="http://wiki.wxpython.org/CallAfter" rel="nofollow">wx.CallAfter()</a> method. </p>
<p>Something like the following should suffice:</p>
<pre><code>def __init__(self):
# initialize various variables
wx.CallAfter(self.check_combobox, args)
def check_combobox(self, args):
self.myTextField.Enable(False) if self.myCombo.GetValue() != "value1" else self.myTextField.Enable(True)
</code></pre>
|
python-2.7|wxpython
| 2 |
1,904,189 | 44,769,152 |
Difficulty implementing server side session storage using redis and flask
|
<p>I have a setup where a node.js app is making ajax requests to a flask based python server. Since ajax requests lack cookie data, I can't use the simple flask session object to persist data across requests. To remedy this, I'd like to implement a redis based server side implementation of a session storage system, but the solutions I've found so far do not work.</p>
<p>One solution I've tried is <a href="http://flask.pocoo.org/snippets/75/" rel="nofollow noreferrer">this</a> snippet.<br>
But this doesn't work. Is there more setup I need to do to configure redis beyond what is mentioned in the quickstart guide? Here is my attempt:</p>
<pre><code>...
from flask import session
# Snippet code is copy pasted here verbatum
import session_interface
...
app = Flask(__name__)
app.session_interface = session_interface.RedisSessionInterface()
...
# Can't access this as session['key'] across requests
session['key'] = value
...
if __name__ == '__main__':
app.secret_key = '123456789012345678901234'
app.run(debug=True)
</code></pre>
<p>Another solution I've tried is importing the Flask-Session <a href="https://pythonhosted.org/Flask-Session/" rel="nofollow noreferrer">extension</a>.<br>
However, I can't get this to work either. The section I'm confused about is the following: </p>
<p>"We are not supplying something like SESSION_REDIS_HOST and SESSION_REDIS_PORT, if you want to use the RedisSessionInterface, you should configure SESSION_REDIS to your own redis.Redis instance. This gives you more flexibility, like maybe you want to use the same redis.Redis instance for cache purpose too, then you do not need to keep two redis.Redis instance in the same process." </p>
<p>What is meant by this section and how would I have figured this out? Here is my attempt to make this extension work:</p>
<pre><code>...
from flask import session
from flask_session import Session
import redis
...
app = Flask(__name__)
SESSION_TYPE = 'redis'
app.config.from_object(__name__)
Session(app)
...
# Can't access this as session['key'] across requests
session['key'] = value
...
if __name__ == '__main__':
app.secret_key = '123456789012345678901234'
app.run(debug=True)
</code></pre>
<p>Has anyone successfully implemented manual session storage on a server running flask? Are there other options for getting this functionality?</p>
<p>Thanks for your input.</p>
|
<p>I think that's because you missed the URL configuration for your storage Redis. To check, you can use the Redis CLI to see whether anything is actually being inserted into Redis.</p>
<p>I use this code and it worked:</p>
<pre><code>from flask import Flask, session
from flask_session import Session
import redis
……
app = Flask(__name__)
app.config['SESSION_TYPE'] = 'redis'
app.config['SESSION_REDIS'] = redis.from_url('redis://127.0.0.1:6379')
sess = Session()
sess.init_app(app)

def getSession():
    return session.get('key', 'not set')

def setSession():
    session['key'] = 123
    return 'ok'
……
</code></pre>
|
python|ajax|session|flask|redis
| 13 |
1,904,190 | 44,765,914 |
Cannot open PhantomJS webpages in desktop mode (always in mobile mode)
|
<p>I have been trying to fix this issue through stack overflow posts, but cannot find any relevant topics to my issue.</p>
<p>I am creating an automated python script that would automatically login to my facebook account and would utilize some features that facebook offers.</p>
<p>When I use selenium, I usually have the program run on the Chrome browser and I use the code as following</p>
<pre><code>driver = webdriver.Chrome()
</code></pre>
<p>And I program the rest of the stuff that I want to do from there since it's easy to visually see whats going on with the program. However, when I switch to the PhantomJS browser, the program runs Facebook in a mobile version of the website (Like an android/ios version of Facebook). Here is an example of what it looks like</p>
<p><a href="https://i.stack.imgur.com/0OuSZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0OuSZ.png" alt="Mobile Version"></a> </p>
<p>I was wondering if anyone would be able to help me understand how to switch this to desktop mode, since the mobile version of Facebook is coded differently than the desktop version, and I don't want to redo the code for this difference. I need to have this running on PhantomJS, because it will be running on a low-powered Raspberry Pi device that can barely open Google Chrome. </p>
<p>I have also tried the following to see if it worked, and it didn't help.</p>
<pre><code>headers = { 'Accept':'*/*',
'Accept-Encoding':'gzip, deflate, sdch',
'Accept-Language':'en-US,en;q=0.8',
'Cache-Control':'max-age=0',
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36
(KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36'
}
driver = webdriver.PhantomJS(desired_capabilities = headers)
driver.set_window_size(1366, 768)
</code></pre>
<p>Any help would be greatly appreciated!!</p>
|
<p>I had the same problem with PhantomJS, Selenium and Python, and the following code resolved it.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver import DesiredCapabilities
desired_capabilities = DesiredCapabilities.PHANTOMJS.copy()
desired_capabilities['phantomjs.page.customHeaders.User-Agent'] = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) ' \
'AppleWebKit/537.36 (KHTML, like Gecko) ' \
'Chrome/39.0.2171.95 Safari/537.36'
driver = webdriver.PhantomJS('./phantom/bin/phantomjs.exe', desired_capabilities=desired_capabilities)
driver.get('http://facebook.com')
</code></pre>
|
python|selenium-webdriver|phantomjs
| 1 |
1,904,191 | 61,613,013 |
I have a DataFrame with a date column ranging from 2005-01-01 to 2014-12-31. How do I sort the column?
|
<p>input:</p>
<pre><code>data["Date"] = ["2005-01-01", "2005-01-02" , ""2005-01-03" ,..., "2014-12-30","2014-12-31"]
</code></pre>
<p>how can I sort the column such that it gives the 1st date of every year, then the 2nd date of every year, and so on:</p>
<p>i.e. </p>
<p>output:</p>
<pre><code>data["Date"] = ["2005-01-01","2006-01-01","2007-01-01", ... "2013-12-31","2014-12-31"]
</code></pre>
<p><strong>NOTE: assuming the date column has no leap days</strong></p>
|
<pre><code>>>> import datetime
>>> dates = [datetime.datetime.strptime(ts, "%Y-%m-%d") for ts in data["Date"]]
>>> dates.sort()
>>> sorteddates = [datetime.datetime.strftime(ts, "%Y-%m-%d") for ts in dates]
>>> sorteddates
['2010-01-12', '2010-01-14', '2010-02-07', '2010-02-11', '2010-11-16', '2010-11-
22', '2010-11-23', '2010-11-26', '2010-12-02', '2010-12-13', '2011-02-04', '2011
-06-02', '2011-08-05', '2011-11-30']
</code></pre>
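<p>If the goal is specifically the month-day-first ordering the question asks for (the 1st date of every year, then the 2nd, and so on), a sketch using a sort key on the parsed <code>dates</code> list above:</p>
<pre><code>dates.sort(key=lambda d: (d.month, d.day, d.year))
sorteddates = [datetime.datetime.strftime(ts, "%Y-%m-%d") for ts in dates]
</code></pre>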
|
python|data-science
| 0 |
1,904,192 | 61,986,957 |
Is there a possibility to fix CSV comma problem with Python?
|
<p>I received a CSV file with a lot of data in a comma-delimited fashion. The data was in the below format:</p>
<pre><code>1,Dame,1900212-apple,mangoes,organges,pine,90,12
2,Mathew,1920121-cargo,90,11
</code></pre>
<p>How do I handle this issue in Python when extracting data from the CSV?
Please provide some links or directions.</p>
<p>As of now the proposals are to fix the problem in the Excel file before conversion to CSV. In my case, I don't have access to the original Excel file. </p>
<p>Expected Result</p>
<pre><code>1,Dame,"1900212-apple,mangoes,organges",90,12
2,Mathew,"1920121-cargo",90,11
</code></pre>
|
<p><strong>Assumptions:</strong></p>
<ul>
<li>Rows with 5 values per line are clean.</li>
<li>First two and last two values are not included in the list of items to be merged.</li>
</ul>
<p>Taking the assumptions above into account, the code reads the csv file line by line and checks if the number of values match 5(which is assumed to be a clean record in the csv) and if it doesnt it merges all values not being the first two values or last two values.</p>
<pre><code>from csv import reader
with open('path_to_csv.csv', 'r') as read_obj:
csv_reader = reader(read_obj)
for row in csv_reader:
if len(row) > 5:
temp = ",".join(row[2:len(row)-2])
new_row = [*row[0:2],temp,*row[-2:]]
row = new_row
print(row)
</code></pre>
<p>The output is as below,</p>
<pre><code>['1', 'Dame', '1900212-apple,mangoes,organges,pine', '90', '12']
['2', 'Mathew', '1920121-cargo', '90', '11']
</code></pre>
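<p>To write the repaired rows back out with proper quoting (matching the expected result in the question), you can use <code>csv.writer</code>, which quotes fields containing commas automatically. A sketch:</p>
<pre><code>import csv

with open('fixed.csv', 'w', newline='') as out_file:
    writer = csv.writer(out_file)
    writer.writerow(['1', 'Dame', '1900212-apple,mangoes,organges', '90', '12'])
    # -> 1,Dame,"1900212-apple,mangoes,organges",90,12
</code></pre>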
|
python|csv
| 1 |
1,904,193 | 23,489,842 |
python conversion int into arbitrary number of bytes
|
<p>I am facing a little corner case of the famous struct.pack. </p>
<p>The situation is the following: I have a dll with a thin layer wrapper to python. One of the python method in the wraper accept a byte array as argument. This byte array the representation of a register on a specific hardware bus. Each bus has different register width, typically 8, 16 and 24 bits wide (alignement is the same in all cases).</p>
<p>When calling this method I need to convert my value (whatever that is) to a byte array of 8/16 or 24bits. Such conversion is relatively easy with 8 or 16bits using the struct.pack:</p>
<pre><code>byteList = struct.pack( '>B', regValue ) # For 8 bits case
byteList = struct.pack( '>H', regValue ) # for 16 bits case
</code></pre>
<p>I am now looking to make it flexible enough for all three cases: 8, 16 and 24 bits. I could use a mix of the two previous lines to handle the three cases, but I find that quite ugly. </p>
<p>I was hoping this would work:</p>
<pre><code>packformat = ">{0}B".format(regSize)
byteList = struct.pack( packformat, regValue )
</code></pre>
<p>But that does not work, as struct.pack expects the number of arguments to match the format. </p>
<p>Any idea how I can convert (neatly) my register value into an arbitrary number of bytes?</p>
|
<p>You are always packing <em>unsigned</em> integers, and only big endian to boot. Take a look at what happens when you pack them:</p>
<pre><code>>>> import struct
>>> struct.pack('>B', 255)
'\xff'
>>> struct.pack('>H', 255)
'\x00\xff'
>>> struct.pack('>I', 255)
'\x00\x00\x00\xff'
</code></pre>
<p>Essentially the value is padded with null bytes at the start. Use this to your advantage:</p>
<pre><code>>>> struct.pack('>I', 255)[-3:]
'\x00\x00\xff'
>>> struct.pack('>I', 255)[-2:]
'\x00\xff'
>>> struct.pack('>I', 255)[-1:]
'\xff'
</code></pre>
<p>You won't get an exception now, if your value is too large, but it would simplify your code enormously. You can always add a separate validation step:</p>
<pre><code>def packRegister(value, size):
if value < 0 or value.bit_length() > size:
raise ValueError("Value won't fit in register of size {} bits".format(size))
return struct.pack('>I', value)[-(size // 8):]
</code></pre>
<p>Demo:</p>
<pre><code>>>> packRegister(255, 8)
'\xff'
>>> packRegister(1023, 16)
'\x03\xff'
>>> packRegister(324353, 24)
'\x04\xf3\x01'
>>> packRegister(324353, 8)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in packRegister
ValueError: Value won't fit in register of size 8 bits
</code></pre>
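<p>As an aside, if you ever move to Python 3, <code>int.to_bytes()</code> does this directly and raises <code>OverflowError</code> by itself when the value doesn't fit:</p>
<pre><code>>>> (255).to_bytes(3, 'big')
b'\x00\x00\xff'
</code></pre>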
|
python|python-2.7|struct
| 2 |
1,904,194 | 20,556,245 |
Positionning a canvas and a button in Tkinter
|
<p>In Tkinter, how can I pack a canvas to the upper left corner and a button to the lower right corner? I tried with <code>can.pack(side=...)</code> and <code>button.pack(side=....)</code> but no luck. I want to get something like this <a href="http://i.stack.imgur.com/bSjZQ.png" rel="nofollow">Picture</a>.</p>
|
<p>You were close. You need to incorporate one more option: <a href="http://infohost.nmt.edu/tcc/help/pubs/tkinter/web/anchors.html" rel="nofollow"><code>anchor</code></a>.</p>
<p>Below is a simple script to demonstrate:</p>
<pre><code>import Tkinter as tk
root = tk.Tk()
canvas = tk.Canvas(bg="red", height=100, width=100)
canvas.pack(anchor=tk.NW)
button = tk.Button(text="button")
button.pack(side=tk.RIGHT, anchor=tk.SE)
root.mainloop()
</code></pre>
<p>When you resize the window, notice how the canvas stays in the upper lefthand corner and the button stays in the lower righthand corner.</p>
|
python|tkinter
| 2 |
1,904,195 | 15,434,793 |
AttributeError: type object 'datetime.date' has no attribute 'now'
|
<p>Using these lines of code:</p>
<pre><code>from datetime import date
date_start = date.now()
</code></pre>
<p>I'm getting this error:</p>
<pre><code>AttributeError: type object 'datetime.date' has no attribute 'now'
</code></pre>
<p>How can I solve this?</p>
|
<p>You need to use </p>
<pre><code> import datetime
now = datetime.datetime.now()
</code></pre>
<p>Or if you are using django 1.4+ and have timezone enabled you should use</p>
<pre><code> django.utils.timezone.now()
</code></pre>
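<p>And if you only need the date part (no time component), the <code>date</code> class you imported does have a <code>today()</code> method:</p>
<pre><code>from datetime import date
date_start = date.today()
</code></pre>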
|
python
| 29 |
1,904,196 | 46,613,734 |
How do I replace \xc3 etc. with umlauts?
|
<p>I have an output of <code>spannkr \xc3\xa4ftig, da\xc3\x9f unser</code> in Python. How do I replace this with umlauts?</p>
|
<p>The German characters are already there, but encoded as utf-8. If you want to see the umlauts etc in the interpreter then you can decode to <code>str</code>:</p>
<pre><code>>>> bs = b'spannkr \xc3\xa4ftig, da\xc3\x9f unser'
>>> s = bs.decode('utf-8')
>>> print(s)
spannkr äftig, daß unser
</code></pre>
<p>It's possible that you are dealing with a <code>str</code> that somehow contains utf-8 encoded data. In this case you need to perform an extra step:</p>
<pre><code>>>> s = 'spannkr \xc3\xa4ftig, da\xc3\x9f unser'
>>> bs = s.encode('raw-unicode-escape') # encode to bytes without double-encoding
>>> print(bs)
b'spannkr \xc3\xa4ftig, da\xc3\x9f unser'
>>> decoded = bs.decode('utf-8')
>>> print(decoded)
spannkr äftig, daß unser
</code></pre>
<p>There isn't an easy way to distinguish between incorrectly embedded spaces and the spaces between words. You would need to use some kind of spellchecker or natural language application.</p>
|
python|python-3.x|character-encoding
| 7 |
1,904,197 | 46,387,997 |
Python compare 2 lists by index
|
<p>How can one iterate through two lists and compare values by index? I've tried both a for loop and the use of zip.</p>
<pre><code> for a,b in zip(list1,list2):
     if a[0] in b[4]:
print ('found')
</code></pre>
<p>EDIT</p>
<p>This is what Im after</p>
<pre><code> results = cHandlers.fetchall() #from an sql query
response = (r.json()) # from a json request
for u in range(0,3):
for row in results:
        if (response['data'][u]['item']) == row[3]:
print (found)
</code></pre>
|
<p><code>zip</code> generates a list of tuples <code>(a,b)</code> with <code>a</code> being elements from <code>list1</code> and <code>b</code> from <code>list2</code>. To check all the elements you can do the following</p>
<pre><code>list1 = [1,2,3,5,4]
list2 = [5,3,4,3,4]
for a in zip(list1,list2):
if a[0] == a[1]:
print ('found')
</code></pre>
<p>To check specific indices you can use this:</p>
<pre><code>zipped = zip(list1,list2)
if zipped[0][0] == zipped[4][1]:
print ('found')
</code></pre>
<p>Again, in <code>zipped</code> tuple element 0 corresponds to <code>list1</code> and element 1 to <code>list2</code>.</p>
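<p>Note that on Python 3 <code>zip</code> returns an iterator rather than a list, so for the index-based version you need to materialize it first:</p>
<pre><code>zipped = list(zip(list1, list2))
if zipped[0][0] == zipped[4][1]:
    print ('found')
</code></pre>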
|
python-2.7|python-3.x|list|loops|for-loop
| 1 |
1,904,198 | 60,987,370 |
Pandas - Loop through over 1 million cells
|
<p>I have an exported file with over 200 000 codes that I am trying to filter out the codes only. The file itself becomes over 1 million rows due to each code having multiple rows of irrelevant information. </p>
<p>I wrote a script to read the file, find the codes based on the prefix, and then write to another .csv file: </p>
<pre><code>import pandas as pd
df = pd.read_csv('export_file.csv')
output = []
for index, row in df.iterrows():
if ('PREFIX-01' in str(row['code'])):
code = str(row['code'])
output.append(code)
with open('output.csv','w') as file:
for line in output:
file.write(line)
file.write('\n')
</code></pre>
<p>The script works well for smaller numbers of codes (around 50k) but it takes A LONG time to loop through all these rows. Python and Pandas is relatively new to me, so I wonder if there's a way to make the script more efficient? </p>
<p>I heard <code>grep</code> would be of use here, but the goal is to write this into a web service eventually, so I'd rather not do it through the command line. </p>
|
<p>With thanks to @Datanovice I got the program working a lot better. It took execution time down from ~10 minutes to 5 seconds.</p>
<pre><code>import pandas as pd
import time
df = pd.read_csv('exported_file.csv')
df2 = df[df['code'].str.contains('PREFIX-01', na=False)]
output = df2['code'] # Feels redundant for this step (only extract the code column)
# Tips are welcome how to bake it into the line above
output.to_csv('output.csv', sep=',', encoding='utf-8', index=False)
</code></pre>
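<p>As for the comment in the code above, the intermediate <code>df2</code> can be avoided by selecting the column in the same step, for example:</p>
<pre><code>output = df.loc[df['code'].str.contains('PREFIX-01', na=False), 'code']
output.to_csv('output.csv', sep=',', encoding='utf-8', index=False)
</code></pre>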
|
python|pandas|numpy
| 1 |
1,904,199 | 49,713,941 |
install cyrillic fonts in matplotlib
|
<p>I am unable to display Cyrillic or Arabic characters in matplotlib figures.<a href="https://i.stack.imgur.com/bAq5h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bAq5h.png" alt="enter image description here" /></a></p>
<p>As you can see, square blocks appear under the x axis. I am working in an IPython 3 notebook.</p>
|
<p>You have to tell Matplotlib to use a font which supports your intended language, and make sure that your system contains such a font. First you need to find the language code of your intended language; find it <a href="https://www.loc.gov/standards/iso639-2/php/code_list.php" rel="nofollow noreferrer">here</a>. Then, if you are on a Linux system, you can use the following command to find a font which supports your language:</p>
<pre><code>fc-list :lang=<LANGUAGE_CODE>
</code></pre>
<p>Take Chinese for example: the language code is <code>zh</code>, so to find a font which supports Chinese, you can use</p>
<pre><code>fc-list :lang=zh
</code></pre>
<p><a href="https://i.stack.imgur.com/HS1ay.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HS1ay.png" alt="enter image description here"></a></p>
<p>Maybe there are several fonts which support your language. The font name is listed after the font directory. Just pick one.</p>
<p>After you have picked up a font. Add the following code to the front your source code,</p>
<pre><code>import matplotlib as mpl
# <FONT_NAME> is the font picked by you, don't forget to
# enclose it with quotation marks
mpl.rcParams['font.family']='<FONT_NAME>'
</code></pre>
<p>Then you should be able to see the characters appear.</p>
|
pandas|matplotlib|fonts
| 0 |