Unnamed: 0 | id | title | question | answer | tags | score
---|---|---|---|---|---|---
1,908,900 | 58,423,574 |
Python-created tar.gz file contains a "_" folder, how to remove it?
|
<p>The .tar.gz files I'm creating via Python contain a "_" root-level folder which I need to remove.</p>
<p>Here's the .tar.gz function I'm using:</p>
<pre><code>def make_tarfile(output_filename, source_dir):
with tarfile.open(output_filename, "w:gz") as tar:
tar.add(source_dir, arcname='')
</code></pre>
<p>I create the .tar.gz with:
make_tarfile('ARCHIVE.tar.gz', 'C:\FolderA')</p>
<p><a href="https://i.stack.imgur.com/8egvr.png" rel="nofollow noreferrer">As you can see, there is a "_" folder added to the .tar.gz. Any suggestions on how I remove it?</a> Interestingly enough, when I extract the .tar.gz, the " _ " folder doesn't appear. In that sense, it's fine. But this would be a .tar.gz consumed by many users, so I'd like to have it not contain quirks like this.</p>
|
<p>In case anyone runs into this exact scenario--use 7Zip CLI and do a wildcard copy of all the contents of the desired folder (the parent folder will be omitted). Like this:</p>
<pre><code>subprocess.call(['C:\Program Files\\7-Zip\\7z.exe', 'a', '-ttar', 'C:\ARCHIVE.tar', 'C:\FolderA\*'])
subprocess.call(['C:\Program Files\\7-Zip\\7z.exe', 'a', '-tgzip', 'C:\ARCHIVE.tar.gz', 'C:\ARCHIVE.tar'])
</code></pre>
<p>No "_" folder will be in the archive either, it'll be nice and clean :)</p>
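<p>If an external 7-Zip dependency is undesirable, a pure-Python variant of the asker's function is also possible: add each child of the source directory individually, so that no empty-named root member (the entry viewers render as "_") is ever written. This is a sketch, not the original function:</p>

```python
import os
import tarfile

def make_tarfile(output_filename, source_dir):
    with tarfile.open(output_filename, "w:gz") as tar:
        # Add each child of source_dir at the archive root instead of
        # adding source_dir itself with arcname='' (which creates the
        # empty-named member that viewers render as "_").
        for entry in os.listdir(source_dir):
            tar.add(os.path.join(source_dir, entry), arcname=entry)
```

<p>Archives produced this way list their files directly at the root, with no placeholder member.</p>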
|
python
| 0 |
1,908,901 | 22,759,603 |
How do you tally the number of times an item appears in a randomly generated list?
|
<p><strong><em>How do you tally the number of times an item appears in a random list?</em></strong></p>
<p>At the moment I am making an un-boxing simulator for Team Fortress 2. I have made it so that you can get a random Strange (<em>99%</em> chance) or an Unusual (<em>1%</em> chance), and I wish to know how to tally how many Unusuals you have un-boxed.</p>
<pre><code>def no():
print "thankyou for playing crate unboxing simulator!"
time.sleep(1)
print "copyright Tristan Cook"
time.sleep(1)
print "You unboxed.."
time.sleep(1)
</code></pre>
<p>I need something just there saying the number of Unusuals they have unboxed. I'm looking for something I can just copy and paste, because I'm quite new to Python (this is my first program and it's 359 lines long xD)</p>
|
<p>Try this:</p>
<pre><code>l =[...]
unusuals = l.count(unusual1)+l.count(unusual2)+...
</code></pre>
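<p>If the item names aren't known in advance, <code>collections.Counter</code> from the standard library tallies every distinct item in one pass. A small sketch (the list contents here are made up, standing in for the simulator's results):</p>

```python
from collections import Counter

# Hypothetical unboxing results; in the asker's program this would be
# the list of items the simulator produced.
unboxed = ["strange", "unusual", "strange", "strange", "unusual"]

tally = Counter(unboxed)
print(tally["unusual"])  # -> 2
```

<p>Unlike chaining <code>l.count(...)</code> calls, this walks the list only once however many item types there are.</p>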
|
python|list|random
| 0 |
1,908,902 | 22,824,104 |
Python Pandas: Increase Maximum Number of Rows
|
<p>I am processing a large text file (500k lines), formatted as below:</p>
<pre><code>S1_A16
0.141,0.009340221649748676
0.141,4.192618196894668E-5
0.11,0.014122135626540204
S1_A17
0.188,2.3292323316081486E-6
0.469,0.007928706856794138
0.172,3.726771730573038E-5
</code></pre>
<p>I'm using the code below to return the correlation coefficients of each series, e.g. S1_A16:</p>
<pre><code>import numpy as np
import pandas as pd
import csv
pd.options.display.max_rows = None
fileName = 'wordUnigramPauseTEST.data'
df = pd.read_csv(fileName, names=['pause', 'probability'])
mask = df['pause'].str.match('^S\d+_A\d+')
df['S/A'] = (df['pause']
             .where(mask, np.nan)
             .fillna(method='ffill'))
df = df.loc[~mask]
result = df.groupby(['S/A']).apply(lambda grp: grp['pause'].corr(grp['probability']))
print(result)
</code></pre>
<p>However, on some large files, this returns the error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/adamg/PycharmProjects/Subj_AnswerCorrCoef/GetCorrCoef.py", line 15, in <module>
print(result)
File "/Users/adamg/anaconda/lib/python2.7/site-packages/pandas/core/base.py", line 35, in __str__
return self.__bytes__()
File "/Users/adamg/anaconda/lib/python2.7/site-packages/pandas/core/base.py", line 47, in __bytes__
return self.__unicode__().encode(encoding, 'replace')
File "/Users/adamg/anaconda/lib/python2.7/site-packages/pandas/core/series.py", line 857, in __unicode__
result = self._tidy_repr(min(30, max_rows - 4))
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
</code></pre>
<p>I understand that this is related to the <code>print</code> statement, but how do I fix it? </p>
<p><strong>EDIT</strong>:
This is related to the maximum number of rows. Does anyone know how to accommodate a greater number of rows?</p>
|
<p>The error message:</p>
<pre><code>TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
</code></pre>
<p>is saying <code>None</code> minus an <code>int</code> is a TypeError. If you look at the next-to-last line in the traceback you see that the only subtraction going on there is</p>
<pre><code>max_rows - 4
</code></pre>
<p>So <code>max_rows</code> must be <code>None</code>. If you dive into <code>/Users/adamg/anaconda/lib/python2.7/site-packages/pandas/core/series.py</code>, near line 857 and ask yourself how <code>max_rows</code> could end up being equal to <code>None</code>, you'll see that somehow </p>
<pre><code>get_option("display.max_rows")
</code></pre>
<p>must be returning <code>None</code>. </p>
<p>This part of the code is calling <code>_tidy_repr</code> which is used to summarize the Series. <code>None</code> is the correct value to set when you want pandas to display <em>all</em> lines of the <code>Series</code>.
So this part of the code should not have been reached when <code>max_rows</code> is None. </p>
<p>I've made a <a href="https://github.com/pydata/pandas/pull/6863" rel="nofollow">pull request</a> to correct this.</p>
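<p>Until that fix lands, one workaround consistent with the traceback (an assumption on my part, not part of the original answer) is to use a large finite limit instead of <code>None</code>, so the repr code never computes <code>None - 4</code>:</p>

```python
import pandas as pd

# Use a big finite limit instead of None; the buggy code path only
# breaks when display.max_rows is None.
pd.set_option("display.max_rows", 100000)
print(pd.get_option("display.max_rows"))
```

<p>Any value larger than the Series length shows every row while sidestepping the <code>None</code> arithmetic.</p>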
|
python|numpy|pandas
| 5 |
1,908,903 | 45,329,754 |
Frames not stacking on top of each other in tkinter
|
<p>I have a problem stacking 'Pages' on top of each other in tkinter.</p>
<p>I have a main <code>Frame</code> that contains two sub-frames which both contain different information. The first sub-frame contains a <code>Listbox</code> and a couple of buttons and is packed to the left in the main Frame. The 2nd frame is supposed to contain different 'Pages' (two for now) and have them fill up the entire frame.
My issue is that both 'Pages' are displayed side by side instead of on top of each other.</p>
<pre><code>import tkinter as tk
class Settings(tk.Tk):
def __init__(self, master=None):
tk.Tk.__init__(self, master)
self.focus_force()
self.grab_set()
# set focus to settings window
# Main window title
self.title("Settings")
# set up grid containers
container_main = tk.Frame(self, width=500, height=700)
container_main.pack(side='top', fill='both', expand=True)
container_main.grid_rowconfigure(0, weight=1)
container_main.grid_columnconfigure(0, weight=1)
container_listbox = tk.Frame(container_main, bg='blue', width=200, height=700)
container_listbox.pack(side='left', fill='both', expand=True)
container_listbox.grid_rowconfigure(0, weight=1)
container_listbox.grid_columnconfigure(0, weight=1)
container_settings = tk.Frame(container_main, bg='red', width=300, height=700)
container_settings.pack(side='right', fill='both', expand=True)
container_settings.grid_rowconfigure(0, weight=1)
container_settings.grid_columnconfigure(0, weight=1)
# build settings pages
self.frames = {}
self.frames["Options"] = Options(parent=container_listbox, controller=self)
self.frames["General"] = General(parent=container_settings, controller=self)
self.frames["Future"] = Future(parent=container_settings, controller=self)
</code></pre>
<p>If I uncomment these two lines, I get an error saying I cannot use geometry manager grid inside a frame that already has slaves managed by pack.</p>
<pre><code> # self.frames["General"].grid(row=0, column=0, sticky='nsew')
# self.frames["Future"].grid(row=0, column=0, sticky='nsew')
</code></pre>
<pre><code> def show_frame(self, page_name):
frame = self.frames[page_name]
frame.tkraise()
class Options(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
label = tk.Label(parent, text='List Box')
label.grid(row=0, column=0, sticky='nsew', padx=1, pady=1)
button1 = tk.Button(parent, text='General', command=lambda: controller.show_frame('General'))
button2 = tk.Button(parent, text='Future', command=lambda: controller.show_frame('Future'))
button1.grid(row=1, column=0, sticky='ew')
button2.grid(row=2, column=0, sticky='ew')
class General(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
label = tk.Label(parent, text='General')
label.pack(side='left', fill='both', expand=True, )
print("Hi I'm General")
class Future(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
label = tk.Label(parent, text='Future')
label.pack(side='left', fill='both', expand=True)
print("Hi I'm Future")
app = Settings()
app.mainloop()
</code></pre>
<p>Both 'Pages' are initialized and displayed at the same time, which makes sense. I just don't know how to make one rise over the other, since <code>frame.tkraise()</code> is supposed to be doing this but is not. I would also like to be able to do <code>grid_forget()</code> on the page or pages that are not on top, to avoid accidentally entering values into a hidden entrybox in the future. </p>
<p>EDIT: If I comment out the 'Future' page, then the 'General' page will take up the whole frame space, so with <code>grid_forget()</code> I would get the same result. I just don't know where I would put <code>grid_forget()</code>, and then where would I re-configure or do a <code>grid()</code> call?</p>
|
<blockquote>
<p>My issue is that both 'Pages' are displayed side by side instead of on top of each other.</p>
</blockquote>
<p>If you use <code>pack()</code> to place a frame on your root window and then use <code>grid()</code> inside of that frame, it will work. But if you <code>pack()</code> widgets inside a frame and then try to use <code>grid()</code> inside that same frame, it will fail.</p>
<p>The same goes for the root window and frames. If you <code>pack()</code> a frame in the root window, then you cannot use <code>grid()</code> to place anything into that same root window.</p>
<p>The <code>grid()</code> vs <code>pack()</code> issue arose because the classes <code>General</code> and <code>Future</code> were placing their label widgets into the parent frame, where <code>pack()</code> was being used. This prevented the use of <code>grid()</code> in that same parent frame to place the General and Future frames.</p>
<p>To fix this we change:</p>
<pre><code>label = tk.Label(parent, text='General')
and
label = tk.Label(parent, text='Future')
</code></pre>
<p>to:</p>
<pre><code>label = tk.Label(self, text='General')
and
label = tk.Label(self, text='Future')
</code></pre>
<p>The above was the only fix needed for this to work properly.</p>
<pre><code>import tkinter as tk
class Settings(tk.Tk):
def __init__(self, master=None):
tk.Tk.__init__(self, master)
self.focus_force()
self.grab_set()
# set focus to settings window
# Main window title
self.title("Settings")
container_main = tk.Frame(self, width=500, height=700)
container_main.pack(side='top', fill='both', expand=True)
container_main.grid_rowconfigure(0, weight=1)
container_main.grid_columnconfigure(0, weight=1)
container_listbox = tk.Frame(container_main, bg='blue', width=200, height=700)
container_listbox.pack(side='left', fill='both', expand=True)
container_listbox.grid_rowconfigure(0, weight=1)
container_listbox.grid_columnconfigure(0, weight=1)
container_settings = tk.Frame(container_main, bg='red', width=300, height=700)
container_settings.pack(side='right', fill='both', expand=True)
container_settings.grid_rowconfigure(0, weight=1)
container_settings.grid_columnconfigure(0, weight=1)
self.frames = {}
self.frames["Options"] = Options(parent=container_listbox, controller=self)
self.frames["General"] = General(parent=container_settings, controller=self)
self.frames["Future"] = Future(parent=container_settings, controller=self)
self.frames["General"].grid(row=0, column=0, sticky='nsew')
self.frames["Future"].grid(row=0, column=0, sticky='nsew')
def show_frame(self, page_name):
frame = self.frames[page_name]
frame.tkraise()
class Options(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
label = tk.Label(parent, text='List Box')
label.grid(row=0, column=0, sticky='nsew', padx=1, pady=1)
button1 = tk.Button(parent, text='General', command=lambda: controller.show_frame('General'))
button2 = tk.Button(parent, text='Future', command=lambda: controller.show_frame('Future'))
button1.grid(row=1, column=0, sticky='ew')
button2.grid(row=2, column=0, sticky='ew')
class General(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
label = tk.Label(self, text='General')
label.pack(side='left', fill='both', expand=True)
print("Hi I'm General")
class Future(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self, parent)
self.controller = controller
label = tk.Label(self, text='Future')
label.pack(side='left', fill='both', expand=True)
print("Hi I'm Future")
app = Settings()
app.mainloop()
</code></pre>
|
python|tkinter
| 3 |
1,908,904 | 45,506,392 |
Python sockets. OSError: [Errno 9] Bad file descriptor
|
<p>It's my client:</p>
<pre><code>#CLIENT
import socket
conne = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conne.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
i=0
while True:
conne.connect ( ('127.0.0.1', 3001) )
if i==0:
conne.send(b"test")
i+=1
data = conne.recv(1024)
#print(data)
if data.decode("utf-8")=="0":
name = input("Write your name:\n")
conne.send(bytes(name, "utf-8"))
else:
text = input("Write text:\n")
conne.send(bytes(text, "utf-8"))
conne.close()
</code></pre>
<p>It's my server:</p>
<pre><code>#SERVER
import socket
counter=0
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(('', 3001))
sock.listen(10)
while True:
conn, addr = sock.accept()
data = conn.recv(1024)
if len(data.decode("utf-8"))>0:
if counter==0:
conn.send(b"0")
counter+=1
else:
conn.send(b"1")
counter+=1
else:
break
print("Zero")
conn.send("Slava")
conn.close()
))
</code></pre>
<p>After starting Client.py i get this error:</p>
<blockquote>
<p>Traceback (most recent call last): File "client.py", line 10, in &lt;module&gt;:
conne.connect(('127.0.0.1', 3001)) OSError: [Errno 9] Bad file descriptor</p>
</blockquote>
<p>The problem appears just after the first input.
This program is a chat: the server waits for messages and the client sends them.</p>
|
<p>There are a number of problems with the code, however, to address the one related to the traceback, a socket can not be reused once the connection is closed, i.e. you can not call <code>socket.connect()</code> on a closed socket. Instead you need to create a new socket each time, so move the socket creation code into the loop:</p>
<pre><code>import socket
i=0
while True:
conne = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
conne.connect(('127.0.0.1', 3001))
...
</code></pre>
<p>Setting socket option <code>SO_BROADCAST</code> on a stream socket has no effect so, unless you actually intended to use datagrams (a UDP connection), you should remove the call to <code>setsockopt()</code>.</p>
<p>At least one other problem is that the server closes the connection before the client sends the user's name to it. Probably there are other problems that you will find while debugging your code.</p>
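<p>To illustrate the fix, here is a minimal self-contained sketch of the one-socket-per-connection pattern. The throwaway server thread and the fixed message list are stand-ins for the real server and the <code>input()</code> calls:</p>

```python
import socket
import threading

def tiny_server(srv, n_conns, received):
    # Accept n_conns one-shot connections, recording each payload.
    for _ in range(n_conns):
        conn, _addr = srv.accept()
        received.append(conn.recv(1024).decode("utf-8"))
        conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
port = srv.getsockname()[1]
srv.listen(5)

received = []
t = threading.Thread(target=tiny_server, args=(srv, 3, received))
t.start()

for msg in ["test", "Alice", "hello"]:
    # A fresh socket per connection; calling connect() again on a
    # closed socket is what raises OSError: [Errno 9] Bad file descriptor.
    conne = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    conne.connect(("127.0.0.1", port))
    conne.send(msg.encode("utf-8"))
    conne.close()

t.join()
srv.close()
print(received)
```

<p>Each iteration gets its own socket object, so closing one connection never poisons the next.</p>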
|
python|sockets
| 6 |
1,908,905 | 28,680,665 |
Access C# Enums and Classes from other namespaces in IronPython
|
<p>I am stuck on what I would think should be a rather basic feature of IronPython integration with C# (this is, of course, a very simplified example). Below is a simple multi-project solution. The first project defines an enum and a class from one namespace</p>
<pre class="lang-cs prettyprint-override"><code>namespace EnumTest
{
public class EnumTest
{
public enum FooEnum
{
FooOne = 101,
FooTwo = 102,
};
public EnumTest(FooEnum f)
{
_f = f;
}
}
}
</code></pre>
<p>Then, I have another project which encompasses all of IronPython: the runtime DLLs, the Python modules, and the C# class that runs the python script from a file.</p>
<pre class="lang-cs prettyprint-override"><code>using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Reflection;
using IronPython.Hosting;
using Microsoft.Scripting;
using Microsoft.Scripting.Hosting;
namespace IronPythonRunner
{
public class IronPythonRunner
{
public IronPythonRunner()
{
ScriptEngine ironPythonEngine = Python.CreateEngine();
ScriptScope pythonScope = ironPythonEngine.CreateScope();
dynamic scope = pythonScope;
const string script = "c:/temp/try.py";
String scriptDir = Path.GetDirectoryName(script);
String ironPyDir = Path.GetDirectoryName(Assembly.GetEntryAssembly().Location) + "\\IronPythonDistributable\\Lib";
ICollection<String> paths = ironPythonEngine.GetSearchPaths();
paths.Add(scriptDir);
paths.Add(ironPyDir);
ironPythonEngine.SetSearchPaths(paths);
ScriptSource source = ironPythonEngine.CreateScriptSourceFromFile(script);
try
{
source.Execute(pythonScope);
}
catch (Exception e)
{
Debug.WriteLine(e.ToString());
}
finally
{
ironPythonEngine.Runtime.Shutdown();
}
}
}
}
</code></pre>
<p>Finally, I have a c# project that is a test GUI for running a python script</p>
<pre class="lang-cs prettyprint-override"><code>using System;
using System.Windows.Forms;
namespace IronPythonNamespaceTest
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
IronPythonRunner.IronPythonRunner r = new IronPythonRunner.IronPythonRunner();
}
}
}
</code></pre>
<p>When I try to run the following python script</p>
<pre class="lang-py prettyprint-override"><code>print "hello world"
print str(FooEnum.FooOne)
t = EnumTest(FooEnum.FooTwo)
</code></pre>
<p>I get the "hello world" output, but then I get a C# IronPython.Runtime.UnboundNameException: name 'FooEnum' is not defined. Which brings me to my question, <strong>how should I be accessing the enum and the class from within my python script?</strong></p>
|
<p>You need to import your assembly:</p>
<pre><code>import clr
clr.AddReference("assembly_name")
from EnumTest import EnumTest
from EnumTest.EnumTest import FooEnum
</code></pre>
|
c#|.net|enums|namespaces|ironpython
| 1 |
1,908,906 | 14,751,121 |
Python: RegEx repetitive sub group finding
|
<p>I have a string <code>Tue 6:30 AM - 12:00 PM, 3:00 PM- 7:00 PM</code> from this I want to get</p>
<pre><code>["Tue", ["6:30 AM - 12:00 PM", "3:00 PM- 7:00 PM"]]
</code></pre>
<p>I tried,</p>
<pre><code>(
((?:mon|tue|wed|thu|fri|sat|sun|mo|tu|we|th|fr|sa|su|m|w|f|thurs)) #weekday
\s
( ( (?:\d{1,2}(?:[:]\d{1,2})?)\s*(?:[ap][.]?m.?) \s*[-|to]+\s* (?:\d{1,2}(?:[:]\d{1,2})?)\s*(?:[ap][.]?m.?) # hour:min period
) ,?\s?
)+
)
</code></pre>
<p>But this always gives a single duration repeated, <code>["Tue", ["3:00 PM- 7:00 PM", "3:00 PM- 7:00 PM"]]</code>.
I could split the durations by comma in my program, but I don't wish to do so, because there should be a way to do it with the <code>RegEx</code> itself; I am missing something in my expression.</p>
|
<p>When you repeat a capturing group, each new repetition will overwrite the previous one. This is normal behaviour in regular expressions in general. Only .NET allows access to each instance ("capture") of a repeated capturing group.</p>
<p>If you know in advance what the maximum number of possible repetitions will be, then you can simply repeat the group "manually" as often as needed.</p>
<p>If you don't know that, use two regexes: Let the first one match from the first to the last time range, and let the second one (applied to the first match using <code>finditer()</code>) match one single range repeatedly.</p>
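<p>For the example string, the two-regex approach can be sketched like this (the day and time patterns below are deliberately simplified versions of the asker's, not a drop-in replacement):</p>

```python
import re

text = "Tue 6:30 AM - 12:00 PM, 3:00 PM- 7:00 PM"

# First regex: split the day from the run of time ranges.
day_re = re.compile(r"(?i)\b(mon|tue|wed|thu|fri|sat|sun)\b\s+(.*)")
# Second regex: match one single time range; applied repeatedly.
range_re = re.compile(
    r"\d{1,2}(?::\d{2})?\s*[AP]M\s*(?:-|to)\s*\d{1,2}(?::\d{2})?\s*[AP]M",
    re.IGNORECASE,
)

m = day_re.match(text)
day, rest = m.group(1), m.group(2)
ranges = [r.group(0) for r in range_re.finditer(rest)]
print([day, ranges])  # -> ['Tue', ['6:30 AM - 12:00 PM', '3:00 PM- 7:00 PM']]
```

<p>Because <code>finditer()</code> yields each match separately, no capture gets overwritten the way a repeated group does.</p>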
|
python|regex|string|regex-lookarounds|regex-group
| 1 |
1,908,907 | 41,413,055 |
How can I launch each worker in a multiprocessing.Pool in a new shell?
|
<p>I'm trying to couple the execution of a spawned process in a worker pool to a new system terminal. In the following example (adapted from @sylvain-leroux's answer to <a href="https://stackoverflow.com/questions/17241663/filling-a-queue-and-managing-multiprocessing-in-python">this</a> question) a pool of workers is constructed to do some work with queued objects.</p>
<pre><code>import os
import time
import multiprocessing
# A main function, to be run by our workers.
def worker_main(queue):
print('The worker at', os.getpid(), 'is initialized.')
while True:
# Block until something is in the queue.
item = queue.get(True)
print(item)
time.sleep(0.5)
if __name__ == '__main__':
# Instantiate a Queue for communication.
the_queue = multiprocessing.Queue()
# Build a Pool of workers, each running worker_main.
the_pool = multiprocessing.Pool(3, worker_main, (the_queue,))
# Iterate, sending data via the Queue.
for i in range(5):
the_queue.put("That's a nice string you got there.")
the_queue.put("It'd be a shame if something were to... garble it.")
worker_pool.close()
worker_pool.join()
time.sleep(10)
</code></pre>
<p>If you run this from a system terminal you'll see a bunch of garbled text, because each of the workers is writing out to, and executing in, the same console. For a project I'm working on, it would be helpful to spawn a new shell/console to host each worker process, such that all printed output is displayed in that shell, and the execution of the worker process is hosted in that shell. I've seen several examples doing this with <code>Popen</code> using the <code>shell</code> keyword, but I need to stick to a pool-based implementation, due to compatibility constraints. Has anyone out there done this? Guidance is appreciated.</p>
|
<p>Try using the <code>Queue</code> the other way around.</p>
<p>Let the workers <code>put</code> messages into the <code>Queue</code>, and in the parent process <code>get</code> them from the <code>Queue</code> and print them. That should get rid of intermingled output.</p>
<p>If you want to pass messages both from parent to workers and back, use two queues. One for passing messages to workers, and one to pass messages back to the parent.</p>
|
python|multiprocessing|worker
| 1 |
1,908,908 | 41,674,910 |
AttributeError: 'module' object has no attribute 'textinput'
|
<p>I use Sublime Text and have a problem with this code:</p>
<pre><code>#coding: utf-8
import turtle
turtle.circle(20)
answer = turtle.textinput("Title", "Text")
</code></pre>
<p>When I run it, I get:</p>
<pre><code>AttributeError: 'module' object has no attribute 'textinput'
</code></pre>
<p>How can I fix it?</p>
|
<p><code>dir(turtle)</code> will list all the methods and attributes available in the <code>turtle</code> module.
In Python 3.4, <code>answer = turtle.textinput("Title", "Text")</code> works. You can check that you have the latest Python and the latest module installed.</p>
|
python|attributeerror|textinput
| 0 |
1,908,909 | 41,377,805 |
How to monitor the memory consumed by a program with decorators
|
<p>I built an algorithm (<code>sieve of Eratosthenes</code>) for finding primes, but it consumes a lot of memory. Currently, my code uses a decorator to monitor the time elapsed. Could you come up with a similar decorator to evaluate the memory consumed by my program?</p>
<pre><code> import math
import time
def time_usage(func):
def wrapper(*args, **kwargs):
beg_ts = time.time()
result = func(*args, **kwargs)
end_ts = time.time()
print("[INFO] elapsed time: %f" % (end_ts - beg_ts))
return result
return wrapper
@time_usage
def find_prime(n):
is_composite = [False for _ in range(n + 1)]
# Cross out multiples of 2
for i in range(4, n, 2):
is_composite[i] = True
# Cross out multiples of primes found so far
next_prime = 3
stop_at = math.sqrt(n)
while next_prime < stop_at:
# Cross out multiples of this prime
for i in range(next_prime * 2, n, next_prime):
is_composite[i] = True
# Move the next prime, skipping the even number
next_prime += 2
while next_prime <= n and is_composite[next_prime]:
next_prime += 2
# Copy the primes into a list
primes = []
for i in range(2, n):
if not is_composite[i]:
primes.append(i)
return primes
if __name__ == '__main__':
print(find_prime(100000))
</code></pre>
<p>One suggestion is to use a third-party library to profile the memory usage. I used <code>memory_profiler</code> as it offers a nice decorator implementation; however, I cannot use both the <code>time_usage</code> decorator and the memory profiler together.</p>
<p>Here I can see that <code>@profile</code> is actually profiling the memory of <code>time_usage</code>.</p>
<pre><code>import math
import time
from memory_profiler import profile
def time_usage(func):
def wrapper(*args, **kwargs):
beg_ts = time.time()
result = func(*args, **kwargs)
end_ts = time.time()
print("[INFO] elapsed time: %f" % (end_ts - beg_ts))
return result
return wrapper
@profile
@time_usage
def find_prime(n):
is_composite = [False for _ in range(n + 1)]
# Cross out multiples of 2
for i in range(4, n, 2):
is_composite[i] = True
# Cross out multiples of primes found so far
next_prime = 3
stop_at = math.sqrt(n)
while next_prime < stop_at:
# Cross out multiples of this prime
for i in range(next_prime * 2, n, next_prime):
is_composite[i] = True
# Move the next prime, skipping the even number
next_prime += 2
while next_prime <= n and is_composite[next_prime]:
next_prime += 2
# Copy the primes into a list
primes = []
for i in range(2, n):
if not is_composite[i]:
primes.append(i)
return primes
if __name__ == '__main__':
print(find_prime(100000))
</code></pre>
<p>Produce this :</p>
<blockquote>
<pre><code>Line #    Mem usage    Increment   Line Contents
================================================
     7     27.4 MiB      0.0 MiB    def wrapper(*args, **kwargs):
     8     27.4 MiB      0.0 MiB        beg_ts = time.time()
     9     28.3 MiB      0.9 MiB        result = func(*args, **kwargs)
    10     28.3 MiB      0.0 MiB        end_ts = time.time()
    11     28.3 MiB      0.0 MiB        print("[INFO] elapsed time: %f" % (end_ts - beg_ts))
    12     28.3 MiB      0.0 MiB        return result
</code></pre>
<p>[2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61,
67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137,
..., 99989, 99991]</p>
</blockquote>
|
<p>There are quite a few memory profilers for Python. <a href="https://stackoverflow.com/questions/110259/which-python-memory-profiler-is-recommended">This answer</a> lists some of them.</p>
<p>You could create a decorator that checks memory usage before the function call, after the function call and displays the difference. As long as you're running in a single thread, you should get the result you want.</p>
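<p>As one concrete illustration of that suggestion, here is a possible decorator built on the standard library's <code>tracemalloc</code> (a sketch; it reports the peak allocation during the call rather than per-line usage like <code>memory_profiler</code>, and the sample function is a stand-in for the sieve's big list allocation):</p>

```python
import functools
import tracemalloc

def mem_usage(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        tracemalloc.start()
        result = func(*args, **kwargs)
        # current/peak bytes allocated while tracing was active
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print("[INFO] peak memory: %.1f KiB" % (peak / 1024.0))
        return result
    return wrapper

@mem_usage
def build_flags(n):
    # Stand-in for the sieve's is_composite list.
    return [False] * (n + 1)

flags = build_flags(100000)
```

<p>Because this decorator has the same wrapper shape as <code>time_usage</code>, the two stack cleanly on one function.</p>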
|
python|python-2.7|python-3.x|memory|memory-management
| -1 |
1,908,910 | 25,440,506 |
Problems with setdefault and Integers
|
<p>I'm testing the following function:</p>
<pre><code>def getDataMapOfFirstLine(line):
datamap = {}
for item in line:
hierarchy = item.split('^')
partialmap = datamap
i=0
for node in hierarchy:
partialmap = partialmap.setdefault(node, i)
i += 1
return datamap
</code></pre>
<p>It should create a dictionary out of the first line of a csv-file, that looks like this:</p>
<pre><code>nummer;such;ans;bverb^konum;bverb^namebspr;bverb^bank^iident;
1213;HANS;Hans Dominik;111000222;Hans' account; DE2145432523534232;
1444555;DIRK;Dirk Daniel;13300002;Dirk's account; DE2134634565462352;
</code></pre>
<p>As you can see, these circumflex signs in each semicolon-separated string work something like a join in SQL. If I execute the function, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "./importtool.py", line 173, in <module>
main()
File "./importtool.py", line 38, in main
analyseImportFile(importfile, parser, options)
File "./importtool.py", line 119, in analyseImportFile
datamap = getDataMapOfFirstLine(line)
File "./importtool.py", line 149, in getDataMapOfFirstLine
partialmap = partialmap.setdefault(node, i)
AttributeError: 'int' object has no attribute 'setdefault'
</code></pre>
<p>If I replace the <code>i</code> in the <code>setdefault</code> call with <code>{}</code>, I get no error:</p>
<pre><code>{'bverb': {'namebspr': {}, 'konum': {}, 'bank': {'iident': {}}}, 'such': {}, 'ans': {}}
</code></pre>
<p>This is nearly what I want, but instead of the {} I would like to get a column number.</p>
<p>I just don't get what is wrong. I tried this in interactive mode:</p>
<pre><code>>>> mydict = {'foo': "Hallo", 'bar': 5}
>>> mydict.setdefault("sth", 12)
12
>>> print mydict
{'sth': 12, 'foo': 'Hallo', 'bar': 5}
</code></pre>
<p>As you see, this works...</p>
<p>I appreciate every help. Thanks in advance!</p>
|
<p>Your problem is this line:</p>
<pre><code>partialmap = partialmap.setdefault(node, i)
</code></pre>
<p><code>dict.setdefault</code> <em>returns</em> the thing that was set (or what was already there). In this case, it's an integer so you're setting <code>partialmap</code> to an <code>int</code>. You can probably just not grab the return value (which is what you've done in the interactive terminal BTW):</p>
<pre><code>partialmap.setdefault(node, i)
</code></pre>
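<p>One possible rewrite of the original function, assuming the intent is empty dicts for intermediate nodes and the column number at each leaf (the snake_case name is just to distinguish it from the original):</p>

```python
def get_data_map_of_first_line(line):
    datamap = {}
    for column, item in enumerate(line):
        hierarchy = item.split('^')
        partialmap = datamap
        # Intermediate nodes default to {}, so descending always
        # yields a dict and never an int.
        for node in hierarchy[:-1]:
            partialmap = partialmap.setdefault(node, {})
        partialmap[hierarchy[-1]] = column  # leaf stores the column number
    return datamap

header = "nummer;such;ans;bverb^konum;bverb^namebspr;bverb^bank^iident".split(';')
print(get_data_map_of_first_line(header))
```

<p>Keeping <code>setdefault</code> for the intermediate levels only means its return value is always safe to descend into.</p>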
|
python|tree
| 2 |
1,908,911 | 44,712,022 |
Django 1.11: post form data to database
|
<p>I am making a super-minimalistic blogging application as a first project. I'm having trouble getting my form's CharField to show up on the page, and I'm having trouble finding a concise explanation as to how I'd put that form data into my database.</p>
<p><strong>Edit for clarity</strong>: I'm trying to make it so that anyone can post something, not just someone with admin privileges like in the official tutorial.</p>
<h1>forms.py:</h1>
<pre><code> 1 from django import forms
2
3 class ContentForm(forms.Form):
4 form_post = forms.CharField(widget = forms.TextInput)
</code></pre>
<h1>views.py:</h1>
<pre><code> 1 from django.shortcuts import render
2 from django.http import HttpResponse
3 from .forms import ContentForm
4 from .models import Post
5
6 def post(request ):
7 #testvar = "TEST VARIABLE PLZ IGNORE"
8 post_list = Post.objects.order_by('id')
9
10 return render(request, 'posts/main.html',
11 {'post': post_list},
12 )
13
14 def content_get(request):
15
16 if request.method == 'POST':
17
18 form = ContentForm(request.POST)
19
20 return render(request, 'main.html', {'form':form})
</code></pre>
<h1>main.html:</h1>
<pre><code> 1 <head>
2 <h1>Nanoblogga</h1>
3
4 </head>
5
6 <body>
7 {% for i in post %}
8 <li>{{ i }}</li>
9 {% endfor %}
10
11
12 <form action = '/' method = 'post'>
13 {% csrf_token %}
14 {{ form }}
15 <input type = 'submit' value = 'Submit' />
16 </form>
17 </body>
</code></pre>
<h1>models.py</h1>
<pre><code>from django.db import models
class Post(models.Model):
content = models.CharField(max_length = 200)
def __str__(self):
return self.content
</code></pre>
<p>I appreciate your input. </p>
|
<p>There are lots of points you need to amend based on your code:</p>
<p><strong>forms.py</strong>: you need <code>ModelForm</code> to work with model <code>Post</code>.</p>
<pre><code>from django.forms import ModelForm
from .models import Post
class ContentForm(ModelForm):
class Meta:
model = Post
fields = "__all__"
</code></pre>
<p><strong>views.py</strong>: you have to pass the <code>form</code> to template so that it can be showed in your <code>main</code> html, and call <code>form.save()</code> to save data into db.</p>
<pre><code>from django.shortcuts import render, redirect
from django.http import HttpResponse
from .forms import ContentForm
from .models import Post
def post(request ):
post_list = Post.objects.order_by('id')
form = ContentForm()
return render(request, 'posts/main.html',
{'post': post_list,
'form':form},
)
def content_get(request):
if request.method == 'POST':
form=ContentForm(request.POST)
if form.is_valid():
form.save()
return redirect('/')
</code></pre>
<p>suppose you have app <strong>urls.py</strong> like this:</p>
<pre><code>from django.conf.urls import url
from . import views
app_name = 'post'
urlpatterns = [
url(r'^$', views.post, name='main'),
url(r'^content$', views.content_get, name='content_get'),
]
</code></pre>
<p>last in <strong>main.html</strong>, you need to define which <code>action</code> to post:</p>
<pre><code><html>
<head>
<h1>Nanoblogga</h1>
</head>
<body>
{% for i in post %}
<li>{{ i }}</li>
{% endfor %}
<form action = '/content' method ='post'>
{% csrf_token %}
{{ form }}
<input type = 'submit' value = 'Submit' />
</form>
</body>
</html>
</code></pre>
|
python|django|python-3.6|django-1.11
| 4 |
1,908,912 | 61,610,874 |
How can I assign bucket-owner-full-control when creating an S3 object with boto3?
|
<p>I'm using the Amazon boto3 library in Python to upload a file into another users bucket. The bucket policy applied to the other users bucket is configured like this</p>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DelegateS3BucketList",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::uuu"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::bbb"
        },
        {
            "Sid": "DelegateS3ObjectUpload",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::uuu"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::bbb",
                "arn:aws:s3:::bbb/*"
            ]
        }
    ]
}
</code></pre>
<p>where <code>uuu</code> is my user id and <code>bbb</code> is the bucket name belonging to the other user. <strong>My user and the other user are IAM accounts belonging to different organisations</strong>. (I know this policy can be written more simply, but the intention is to add a check on the upload to block objects without appropriate permissions being created).</p>
<p>I can then use the following code to list all objects in the bucket and also to upload new objects to the bucket. This works, however the owner of the bucket has no access to the object due to Amazons default of making objects private to the creator of the object</p>
<pre class="lang-py prettyprint-override"><code>import base64
import hashlib
from boto3.session import Session
access_key = "value generated by Amazon"
secret_key = "value generated by Amazon"
bucketname = "bbb"
content_bytes = b"hello world!"
content_md5 = base64.b64encode(hashlib.md5(content_bytes).digest()).decode("utf-8")
filename = "foo.txt"
sess = Session(aws_access_key_id=access_key, aws_secret_access_key=secret_key)
bucket = sess.resource("s3").Bucket(bucketname)
for o in bucket.objects.all():
    print(o)

s3 = sess.client("s3")

s3.put_object(
    Bucket=bucketname,
    Key=filename,
    Body=content_bytes,
    ContentMD5=content_md5,
    # ACL="bucket-owner-full-control" # Uncomment this line to generate error
)
</code></pre>
<p>As soon as I uncomment the ACL option, the code generates an Access Denied error message. If I redirect this to point to a bucket inside my own organisation, the ACL option succeeds and the owner of the bucket is given full permission to the object.</p>
<p>I'm now at a loss to figure this out, especially as Amazons own advice appears to be to do it the way I have shown.</p>
<p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/" rel="noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-access/</a></p>
<p><a href="https://aws.amazon.com/premiumsupport/knowledge-center/s3-require-object-ownership/" rel="noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/s3-require-object-ownership/</a></p>
|
<p>Having the permission in the bucket policy alone is not enough.</p>
<p>Check if your user (or role) is missing <code>s3:PutObjectAcl</code> permission in <strong>IAM</strong>. </p>
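<p>For illustration, the identity-based policy attached to the uploading user in the uploader's own account also has to grant the ACL action. A minimal sketch, reusing the bucket name from the question:</p>

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::bbb/*"
        }
    ]
}
```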
|
python-3.x|amazon-web-services|amazon-s3|boto3|amazon-iam
| 4 |
1,908,913 | 23,704,136 |
How do I avoid users of my code having to type redundant import lines?
|
<p>So I have a project called "Pants" that lives on GitHub.com. Originally the project was a single <code>.py</code> file called <code>pants.py</code>. </p>
<pre><code>Pants/
    pants.py
    README.md
</code></pre>
<p>and users could import and use it as follows:</p>
<pre><code>import pants
w = pants.World()
</code></pre>
<hr>
<p>Both of these feel fine to me. Then I read up on how to structure a project with unit tests included, and initially reorganized my project thusly:</p>
<pre><code>Pants/
    pants.py
    README.md
    test/
        __init__.py
        test_world.py
        test_ant.py
</code></pre>
<p>The problem with this is that although users can still import in the same logical way, there is no <code>pants.test</code> module/package! No problem, I thought, I'll simply add another <code>__init__.py</code> file:</p>
<pre><code>Pants/
    __init__.py
    pants.py
    README.md
    test/
        __init__.py
        test_world.py
        test_ant.py
</code></pre>
<p>But now the imports feel incredibly repetitive:</p>
<pre><code>import Pants.pants
w = Pants.pants.World()
</code></pre>
<hr>
<p>It just feels like there is a better way! Right now, my project is structured like this:</p>
<pre><code>Pants/
    README.md
    pants/
        __init__.py
        ants.py
        world.py
        solver.py
    test/
        __init__.py
        test_world.py
        test_ant.py
</code></pre>
<p>However, the import lines users are faced with are equally repetitive:</p>
<pre><code>import pants.world
import pants.solver
w = pants.world.World()
s = pants.solver.Solver()
</code></pre>
<hr>
<p>Now I know you can alias these things to shorter equivalents, such as <code>from pants.world import World</code>, but the import line itself still feels repetitive. Any suggestions on how to avoid such repetition while retaining proper project structure? Does any of this have to change if I were to, say, put it up for installation via <code>pip</code>?</p>
|
<p>To fix it, I kept my package structure the same, and added the following lines to <code>pants/__init__.py</code>:</p>
<pre><code>from .ant import Ant
from .world import World
from .solver import Solver
</code></pre>
<p>Then I was able to change the import lines at the top of my demo file to:</p>
<pre><code>from pants import World
from pants import Solver
</code></pre>
|
python|import|coding-style
| 1 |
1,908,914 | 23,899,808 |
Remote Command Execution Python
|
<p>For educational purposes, I set up a server that allows remote command execution on Windows - or rather, I tried to. For some reason, the command line refuses to recognize some of the commands I send, but others work fine. For instance, sending the command <code>echo "Hello World!!!"</code> causes, as it should, a cmd window to pop up reading "Hello World!!!". Fine. But when I send the command <code>shutdown /s /t 30</code> it gives me the improper syntax / help screen for the shutdown command. When I send the command <code>msg * "Hello World"</code> it tells me that 'msg' is not a recognized internal or external command, operable program, or batch file. Here is my server code: </p>
<pre><code>import socket
import sys
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_address = ('', 4242)
sock.bind(server_address)
sock.listen(1)
connection, client_address = sock.accept()
print("Connection established with %s " % str(client_address))
while True:
    command = input("Enter a command: ")
    connection.send(bytes(command, 'UTF-8'))
    confirm = connection.recv(128)
    if confirm == "yes":
        print("[+] Command executed successfully.")
    else:
        print("[-] Command failed to execute!!!")
</code></pre>
<p>And here is my client code:</p>
<pre><code>import socket
import sys
import os
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_address = ('', 42042)
sock.bind(server_address)
sock.connect(('192.168.1.5', 4242))
while True:
    command = str(sock.recv(1024))
    try:
        os.system(command[2:]) # an odd thing, the commands somehow came out prefaced with "b'". Ideas?
        sock.send(bytes("yes", 'UTF-8'))
    except:
        sock.send(bytes("no", 'UTF-8'))
</code></pre>
<p>So yeah, that's that. The fact that only SOME commands are getting screwed up is really confusing me. Anybody have any ideas? Also, what's up with that "b'"?</p>
|
<p><code>str(sock.recv(1024))</code> is not the way to convert a bytes object into a string; you should be using the <code>sock.recv(1024).decode('UTF-8')</code> method instead.</p>
<p>You can look at the documentation for bytes.decode <a href="https://docs.python.org/3.4/library/stdtypes.html#bytes.decode" rel="nofollow noreferrer">https://docs.python.org/3.4/library/stdtypes.html#bytes.decode</a></p>
<p>Or this related question <a href="https://stackoverflow.com/questions/7585435/best-way-to-convert-string-to-bytes-in-python-3">Best way to convert string to bytes in Python 3?</a></p>
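<p>A quick illustration of where the stray <code>b'</code> came from: calling <code>str()</code> on a <code>bytes</code> object produces its repr (including the <code>b'...'</code> wrapper), while <code>.decode()</code> recovers the original text:</p>

```python
# What the client receives from sock.recv() is a bytes object
data = "shutdown /s /t 30".encode("UTF-8")

print(str(data))             # b'shutdown /s /t 30'  <- note the b'' wrapper
print(data.decode("UTF-8"))  # shutdown /s /t 30
```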
|
python|networking
| 0 |
1,908,915 | 23,609,352 |
python difference between "==" and "is" when compare object properties
|
<p>I am not sure why there is difference when checking or comparing properties of object. Object construtor:</p>
<pre><code>class FooBarObject():
    def __init__(self, val_1, val_2):
        self.val_1 = val_1
        self.val_2 = val_2
</code></pre>
<p>Object is created:</p>
<pre><code>obj = FooBarObject(val_1 = "gnd", val_2 = 10).
</code></pre>
<p>I have noticed that I get different results when:</p>
<pre><code>obj.val_1 is "gnd"
obj.val_1 == "gnd"
>>> False
>>> True
</code></pre>
<p>What am I doing wrong here?</p>
|
<pre><code>obh.val_1 is "gnd"
</code></pre>
<p>compares the two objects in memory if they are the same object. Python sometimes interns strings in order to reuse them if they are identical. Using "is" to compare strings will not always have predictable results. In another sense, you sort of called </p>
<pre><code>id(obh.val_1) == id("gnd") #id demonstrates uniqueness
</code></pre>
<p>Use "==" for string equality to achieve your intention.</p>
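<p>A short demonstration: a string built at runtime is usually a distinct object from an equal literal, so <code>==</code> and <code>is</code> can disagree (the exact <code>is</code> result is implementation-dependent):</p>

```python
a = "".join(["gn", "d"])   # built at runtime; typically not interned
b = "gnd"

print(a == b)              # True: same characters
print(a is b)              # usually False in CPython: different objects
print(id(a) == id(b))      # the same check "is" performs
```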
|
python|python-2.7|boolean|boolean-logic|boolean-expression
| 3 |
1,908,916 | 23,842,540 |
How is term frequency calculated in scikit-learn CountVectorizer
|
<p>I do not understand how CountVectorizer calculates the term frequency. I need to know this so that I can make a sensible choice for the <code>max_df</code> parameter when filtering out terms from a corpus. Here is example code:</p>
<pre><code> import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df = 1, max_df = 0.9)
X = vectorizer.fit_transform(['afr bdf dssd','afr bdf c','afr'])
word_freq_df = pd.DataFrame({'term': vectorizer.get_feature_names(), 'occurrences':np.asarray(X.sum(axis=0)).ravel().tolist()})
word_freq_df['frequency'] = word_freq_df['occurrences']/np.sum(word_freq_df['occurrences'])
print word_freq_df.sort('occurrences',ascending = False).head()
occurrences term frequency
0 3 afr 0.500000
1 2 bdf 0.333333
2 1 dssd 0.166667
</code></pre>
<p>It seems that 'afr' appears in half of the terms in my corpus, as I expect by looking at the corpus. However, when I set <code>max_df = 0.8</code> in <code>CountVectorizer</code>, the term 'afr' is filtered out of my corpus. Playing around, I find that with the corpus in my example, CountVectorizer assigns a frequency of ~0.833 to 'afr'. Could someone provide a formula for how the term frequency which enters <code>max_df</code> is calculated?</p>
<p>Thanks</p>
|
<p>The issue is apparently not with how the frequency is calculated, but with how the <code>max_df</code> threshold is applied. The code for <code>CountVectorizer</code> does this:</p>
<pre><code>max_doc_count = (max_df
                 if isinstance(max_df, numbers.Integral)
                 else int(round(max_df * n_doc))
                 )
</code></pre>
<p>That is, the maximum document count is obtained by <em>rounding</em> the document proportion times the number of documents. This means that, in a 3-document corpus, any <code>max_df</code> threshold which equates to more than 2.5 documents actually counts the same as a threshold of 3 documents. You are seeing a "frequency" of 2.5/3=0.8333 --- that is, a term that occurs in ~83.3% of 3 documents occurs in 2.5 of them, which is rounded up to 3, meaning it occurs in all of them.</p>
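<p>Working through the rounding for this 3-document corpus makes the cutoff concrete:</p>

```python
# mirrors scikit-learn's max_doc_count computation for a float max_df
n_doc = 3
for max_df in (0.8, 0.9):
    max_doc_count = int(round(max_df * n_doc))
    print(max_df, "->", max_doc_count)

# 0.8 -> 2: a term occurring in all 3 documents exceeds 2 and is filtered out
# 0.9 -> 3: the same term does not exceed 3 and is kept
```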
<p>In short, "afr" is correctly considered to have a document frequency of 3, but the maximum document frequency is incorrectly considered to be 3 (0.9*3=2.7, rounded up to 3).</p>
<p>I would consider this a bug in scikit. A maximum document frequency should round <em>down</em>, not up. If the threshold is 0.9, a term which occurs in all documents exceeds the threshold and should be excluded.</p>
|
python|scikit-learn|tf-idf
| 6 |
1,908,917 | 72,136,046 |
Sort values with multi-index/ groupby object 'by group' without breaking the index level
|
<p>Is it possible to sort values by each group's sum of counts without breaking the index level? Both attempts I commented out would sort, but they break the index level.</p>
<pre><code>#DataFrame
ff = pd.DataFrame([('P1', 17, 'male'),
('P2', 10, 'female'),
('P3', 10, 'male'),
('P4', 19, 'female'),
('P5', 10, 'male'),
('P6', 12, 'male'),
('P7', 12, 'male'),
('P8', 15, 'female'),
('P9', 15, 'female'),
('P10', 10, 'male')],
columns=['Name', 'Age', 'Sex'])
# Attempts
(
    ff
    .groupby(['Age', 'Sex'])
    .agg(**{
        'Count': pd.NamedAgg(column="Name", aggfunc='count'),
        'Who': pd.NamedAgg(column="Name", aggfunc=lambda x: ', '.join([i for i in x]))})
    # .sort_values('Count') <- this breaks the index level
    # .sort_values(['Count', 'Age']) <- this too breaks the index level
)
</code></pre>
<p><strong>Original Data:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;"></th>
<th style="text-align: left;">Count</th>
<th style="text-align: left;">Who</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Age</td>
<td style="text-align: left;">Sex</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: left;">Female</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">p2</td>
</tr>
<tr>
<td style="text-align: left;"></td>
<td style="text-align: left;">male</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">p3,p5,p10</td>
</tr>
<tr>
<td style="text-align: left;">12</td>
<td style="text-align: left;">male</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">p6,p7</td>
</tr>
<tr>
<td style="text-align: left;">15</td>
<td style="text-align: left;">female</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">p8,p9</td>
</tr>
<tr>
<td style="text-align: left;">17</td>
<td style="text-align: left;">male</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">p1</td>
</tr>
<tr>
<td style="text-align: left;">19</td>
<td style="text-align: left;">female</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">p4</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Desired Output: (sort values by the sum of 'Age' group, but keep the grouped index)</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: left;"></th>
<th style="text-align: left;">Count</th>
<th style="text-align: left;">Who</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Age</td>
<td style="text-align: left;">Sex</td>
<td style="text-align: left;"></td>
<td style="text-align: left;"></td>
</tr>
<tr>
<td style="text-align: left;">17</td>
<td style="text-align: left;">male</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">p1</td>
</tr>
<tr>
<td style="text-align: left;">19</td>
<td style="text-align: left;">female</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">p4</td>
</tr>
<tr>
<td style="text-align: left;">12</td>
<td style="text-align: left;">male</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">p6,p7</td>
</tr>
<tr>
<td style="text-align: left;">15</td>
<td style="text-align: left;">female</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">p8,p9</td>
</tr>
<tr>
<td style="text-align: left;">10</td>
<td style="text-align: left;">Female</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">p2</td>
</tr>
<tr>
<td style="text-align: left;"></td>
<td style="text-align: left;">male</td>
<td style="text-align: left;">3</td>
<td style="text-align: left;">p3,p5,p10</td>
</tr>
</tbody>
</table>
</div><hr />
<p>Edit: This is how I finally solved the problem; any further advice is appreciated.</p>
<pre><code># DataFrame -- I update a bit for testcases.
ff = pd.DataFrame([('P1', 19, 'male'),
('P2', 10, 'female'),
('P3', 10, 'male'),
('P4', 19, 'female'),
('P5', 10, 'male'),
('P6', 12, 'male'),
('P7', 12, 'male'),
('P7', 12, 'male'),
('P7', 12, 'male'),
('P7', 12, 'male'),
('P8', 15, 'female'),
('P9', 15, 'female'),
('P10', 10, 'male')],
columns=['Name', 'Age', 'Sex'])
# It works !
(
    ff.groupby(['Age', 'Sex']).agg(**{
        'Count': pd.NamedAgg(column="Name", aggfunc='count'),
        'Who': pd.NamedAgg(column="Name", aggfunc=lambda x: ', '.join([i for i in x]))})
    # Sort by 'Count' and keep the group adding 'tmp'
    .assign(
        tmp=lambda x: x.reset_index().groupby('Age')['Count'].transform('sum').to_numpy())
    .sort_values(['tmp','Age'])
    # drop tmp
    .drop('tmp', axis=1)
)
</code></pre>
|
<p>You can reshape with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>DataFrame.unstack</code></a>, sort the index by the sum over both <code>Sex</code> values (where both exist), then reshape back with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>:</p>
<pre><code>df1 = df.unstack()
df1 = df1.sort_index(key=df1.sum(axis=1, numeric_only=True).get).stack().astype(df.dtypes)
print (df1)
Count Who
Age Sex
17 male 1 P1
19 female 1 P4
12 male 2 P6, P7
15 female 2 P8, P9
10 female 1 P2
male 3 P3, P5, P10
</code></pre>
<p>Another idea is sorting by the sum of both values with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a>:</p>
<pre><code>df['tmp'] = df.groupby('Age')['Count'].transform('sum')
df1 = df.sort_values(['tmp','Age']).drop('tmp', axis=1)
print (df1)
Count Who
Age Sex
17 male 1 P1
19 female 1 P4
12 male 2 P6, P7
15 female 2 P8, P9
10 female 1 P2
male 3 P3, P5, P10
</code></pre>
<p>EDIT: One line solution is:</p>
<pre><code>df = (
ff
.groupby(['Age', 'Sex'])
.agg(**{
'Count': pd.NamedAgg(column="Name", aggfunc='count'),
'Who': pd.NamedAgg(column="Name", aggfunc=', '.join)})
.assign(tmp = lambda x: x.groupby('Age')['Count'].transform('sum'))
.sort_values(['tmp','Age'])
.drop('tmp', axis=1))
print (df)
Count Who
Age Sex
17 male 1 P1
19 female 1 P4
12 male 2 P6, P7
15 female 2 P8, P9
10 female 1 P2
male 3 P3, P5, P10
</code></pre>
|
pandas
| 2 |
1,908,918 | 72,023,732 |
AttributeError: 'NoneType' object has no attribute 'rect'
|
<p>I'm making my first game in Python and pygame, and I found myself in a really strange situation. When I start the game and choose level 1 or 2, everything works as expected. However, if I choose level 3, my side-scrolling camera does not work.</p>
<p>Traceback :</p>
<pre><code>Traceback (most recent call last):
  File "D:\AMisko-game\main\window.py", line 99, in <module>
    game.run()
  File "D:\AMisko-game\main\window.py", line 56, in run
    self.level.run()
  File "D:\AMisko-game\main\level.py", line 192, in run
    self.scrollX()
  File "D:\AMisko-game\main\level.py", line 104, in scrollX
    playerX = player.rect.centerx
AttributeError: 'NoneType' object has no attribute 'rect'
</code></pre>
<p>The sidescrolling camera :</p>
<pre><code>def scrollX(self):
    player = self.player.sprite
    playerX = player.rect.centerx
    direcX = player.direc.x
    if playerX < screenWidth / 4 and direcX < 0:
        self.worldShift = 4
        player.speed = 0
    elif playerX > screenWidth - (screenWidth / 4) and direcX > 0:
        self.worldShift = -4
        player.speed = 0
    else:
        self.worldShift = 0
        player.speed = 5
</code></pre>
<p>Related things from Level <strong>init</strong> :</p>
<pre><code>def __init__(self, currentLevel, surface, createOverworld, syringeCollect, healthChange):
    self.displaySurface = surface
    self.worldShift = 0
    self.currX = None
    # player
    playerLayout = importCsv(gameData['player'])
    self.player = pygame.sprite.GroupSingle()
    self.goal = pygame.sprite.GroupSingle()
    self.playerSetup(playerLayout, healthChange)
</code></pre>
<p>I'm really not sure why it is happening. So I searched the internet and didn't find anything useful. I figured there was probably a bug when calling the Level class and gathering all the information. And these I have in a separate file that holds information about each individual level. But as I was checking that, I did not find anything wrong. No misspelling or wrong index. Nothing. I also checked the Player class but I didn't find anything.<br />
Please help.</p>
<p>Edit: When I start the level it just closes my game.
And I'm using 12 files with around 200 lines in each (except support and settings).</p>
<p>Player setup def:</p>
<pre><code>def playerSetup(self, layout, healthChange):
    for rowIndex, row in enumerate(layout):
        for collIndex, coll in enumerate(row):
            x = collIndex * tileSize
            y = rowIndex * tileSize
            if coll == '0':
                sprite = Player((x, y), healthChange)
                self.player.add(sprite)
            if coll == '1':
                setSurface = pygame.image.load(cesty['playerAss'].joinpath('setupend.png')).convert_alpha()
                sprite = staticTile(tileSize, x, y, setSurface)
                self.goal.add(sprite)
</code></pre>
|
<p>I don't know why you're using <code>GroupSingle</code>, but that's the problem. By default, a <code>GroupSingle</code> is created empty, and its <code>sprite</code> attribute stays <code>None</code> until something is added. In your <code>playerSetup</code>, a <code>Player</code> is only added when a <code>'0'</code> cell is found in the layout, so if level 3's player layout CSV contains no <code>'0'</code>, <code>self.player.sprite</code> is still <code>None</code> when <code>scrollX</code> runs. You need to add a sprite to it. Or, skip the group and just create a sprite.</p>
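<p>To see the mechanics of the failure and the fix without needing pygame installed, here is a toy stand-in (not pygame itself) for the <code>sprite</code> attribute of an empty <code>GroupSingle</code>:</p>

```python
class GroupSingleLike:
    """Toy stand-in for pygame.sprite.GroupSingle's sprite attribute."""
    def __init__(self):
        self.sprite = None        # an empty group reports its sprite as None

    def add(self, sprite):
        self.sprite = sprite

class SpriteLike:
    def __init__(self):
        self.rect = object()      # placeholder for a real Rect

group = GroupSingleLike()
print(group.sprite)               # None -> group.sprite.rect would raise AttributeError

group.add(SpriteLike())           # the fix: actually add a sprite
print(group.sprite.rect is None)  # False: attribute access now works
```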
|
python|pygame|pygame-surface
| 1 |
1,908,919 | 36,060,648 |
renaming text files with user-acquired input in python 3
|
<p>I've been trying to make a function that takes user input, and renames a text file with that string. I've tried <code>open("%x.txt" % name, "w")</code>, and <code>os.rename</code>. Is there a more effective way I don't know about? </p>
<pre><code>import os, sys, time
def textfile():
    f = open("old.txt", "w")
    x = input("name for your file: ")
    os.rename("old.txt", "%x.txt)
    f.write("This is a sentence")
    f.close()
textfile()
</code></pre>
|
<p>You forgot to actually format the string. It is also safer to write and close the file before renaming it; renaming a file that is still open fails on Windows.</p>
<pre><code>import os

def textfile():
    f = open("old.txt", "w")
    f.write("This is a sentence")
    f.close()
    x = input("name for your file: ")
    os.rename("old.txt", "{}.txt".format(x))

textfile()
</code></pre>
|
python|string|python-3.x|variables|user-input
| 2 |
1,908,920 | 36,066,109 |
Python Twisted web server audio file
|
<p>I am trying to create a simple web server with twisted in python. I am having trouble serving an m4a audio file though. </p>
<p>In the current program, when I load <a href="http://localhost:8880/mp3.html" rel="nofollow noreferrer">http://localhost:8880/mp3.html</a>, it works fine. It shows the audio player and the mp3 plays. In addition, the program prints both "/mp3.html" and "/test.mp3".</p>
<p>However, when I load <a href="http://localhost:8880/m4a.html" rel="nofollow noreferrer">http://localhost:8880/m4a.html</a>, it doesn't work. It shows the audio player, but the m4a doesn't play. In addition, the program prints only "/m4a.html" and not "/test.m4a".</p>
<p>My current code is below.</p>
<pre><code>import urlparse
import os
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.resource import Resource
from twisted.web.static import File
import time
import subprocess
import mimetypes
class playM4A(Resource):
    isLeaf = True
    def render_GET(self, request):
        this=urlparse.urlparse(request.path)#scheme,netloc,path,query
        root,ext=os.path.splitext(this.path)
        filename=os.path.basename(request.path)
        fileFolder=request.path.replace(filename,"")
        self.serverRoot=os.getcwd()
        print request.path
        if ext==".m4a":
            thisFile=File(self.serverRoot+request.path)
            return File.render_GET(thisFile,request)
        elif ext==".mp3":
            thisFile=File(self.serverRoot+request.path)
            return File.render_GET(thisFile,request)
        elif filename=="m4a.html":
            return """
<html>
<audio controls>
<source src="http://localhost:8880/test.m4a" type="audio/mp4a-latm">
Your browser does not support the audio element.
</audio>
not m4a </html>"""
        elif filename=="mp3.html":
            return """
<html>
<audio controls>
<source src="http://localhost:8880/test.mp3" type="audio/mp3">
Your browser does not support the audio element.
</audio>
not m4a </html>"""
resource = playM4A()
factory = Site(resource)
reactor.listenTCP(8880, factory)
reactor.run()
</code></pre>
|
<p>The code works if you change <code>audio/mp4a-latm</code> to <code>audio/mp4</code> in the <code>type</code> attribute of the <code><source></code> tag.</p>
|
python|twisted|m4a|twisted.web
| 0 |
1,908,921 | 15,215,790 |
"extended" IFFT
|
<p>If I have a waveform <code>x</code> such as</p>
<pre><code>x = [math.sin(W*t + Ph) for t in range(16)]
</code></pre>
<p>with arbitrary <code>W</code> and <code>Ph</code>, and I calculate its (Real) FFT <code>f</code> with</p>
<pre><code>f = numpy.fft.rfft(x)
</code></pre>
<p>I can get the original <code>x</code> with</p>
<pre><code>numpy.fft.irfft(f)
</code></pre>
<p>Now, what if I need to extend the range of the recovered waveform a number of samples to the left and to the right? I.e. a waveform <code>y</code> such that <code>len(y) == 48</code>, <code>y[16:32] == x</code> and <code>y[0:16], y[32:48]</code> are the periodic extensions of the original waveform.</p>
<p>In other words, if the FFT assumes its input is an infinite function <code>f(t)</code> sampled over <code>t = 0, 1, ... N-1</code>, <strong>how can I recover the values of <code>f(t)</code> for <code>t<0</code> and <code>t>=N</code>?</strong></p>
<p><strong>Note:</strong> I used a perfect sine wave as an example, but in practice <code>x</code> could be anything: arbitrary signals such as <code>x = range(16)</code> or <code>x = np.random.rand(16)</code>, or a segment of any length taken from a random <code>.wav</code> file.</p>
|
<blockquote>
<p>Now, what if I need to extend the range of the recovered waveform a number of samples to the left and to the right? I.e. a waveform y such that len(y) == 48, y[16:32] == x and y[0:16], y[32:48] are the periodic extensions of the original waveform.</p>
</blockquote>
<p>The <em>periodic</em> extensions are also just copies of x; that is exactly what a <em>periodic</em> extension is.</p>
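<p>In numpy terms, a sketch (using a sine fragment like the one in the question; <code>0.7</code> and <code>0.3</code> stand in for the arbitrary <code>W</code> and <code>Ph</code>):</p>

```python
import numpy as np

x = np.sin(0.7 * np.arange(16) + 0.3)   # the original fragment
f = np.fft.rfft(x)
x_rec = np.fft.irfft(f, n=len(x))       # round-trips back to x

y = np.tile(x_rec, 3)                   # 48 samples: the periodic extension
# y[16:32] matches x, and y[0:16], y[32:48] are exact periodic copies
```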
<blockquote>
<p>In other words, if the FFT assumes its input is an infinite function f(t) sampled over t = 0, 1, ... N-1, how can I recover the values of f(t) for t<0 and t>=N?</p>
</blockquote>
<p>The "N-point FFT assumes" that your signal is periodic with a periodicity of N. That's because all the harmonic base functions your block is decomposed into are periodic in such a way that the previous N and succeeding N samples are just a copy of the main N samples.</p>
<p>If you allow any value for <code>W</code>, your input sinusoid won't be periodic with periodicity of N. But that does not stop the FFT function from decomposing it into a sum of many periodic sinusoids. And the sum of periodic sinusoids with periodicity of N will also have a periodicity of N.</p>
<p>Clearly, you have to rethink the problem.</p>
<p>Maybe you could make use of linear prediction. Compute a couple of linear prediction coefficients based on your fragment's windowed auto-correlation and the Levinson-Durbin recursion and extrapolate using those prediction coefficients. However, for a stable prediction filter, the prediction will converge to zero and the speed of convergence depends on what kind of signal you have. The perfect linear prediction coefficients for white noise, for example, are all zero. In that case you would "extrapolate" zeros to the left and the right. But there's not much you can do about it. If you have white noise, there is just no information in your fragment about surrounding samples because all the samples are independent (that's what white noise is about).</p>
<p>This kind of linear prediction is actually able to predict sinusoid samples perfectly. So, if your input is sin(W*t+p) for arbitrary W and p you will only need linear prediction with order two. For more complex signals I suggest an order of 10 or 16.</p>
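<p>The order-two claim can be checked directly: any sampled sinusoid satisfies the recurrence s[n] = 2*cos(W)*s[n-1] - s[n-2], so just two prediction coefficients extrapolate it exactly in both directions. A quick numerical check, with arbitrary W and Ph:</p>

```python
import math

W, Ph = 0.7, 0.3
s = [math.sin(W * t + Ph) for t in range(16)]

# predict each sample from the two before it
for n in range(2, 16):
    pred = 2 * math.cos(W) * s[n - 1] - s[n - 2]
    assert abs(pred - s[n]) < 1e-12
print("order-2 prediction is exact")
```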
|
python|numpy|signal-processing|fft|ifft
| 3 |
1,908,922 | 15,160,123 |
Adding a background image to a plot
|
<p>Say I am plotting a set of points with an image as a background. I've used the <a href="https://i.stack.imgur.com/N32KD.jpg" rel="noreferrer">Lena</a> image in the example:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imread
np.random.seed(0)
x = np.random.uniform(0.0,10.0,15)
y = np.random.uniform(0.0,10.0,15)
img = imread("lena.jpg")
plt.scatter(x,y,zorder=1)
plt.imshow(img,zorder=0)
plt.show()
</code></pre>
<p>This gives me<img src="https://i.stack.imgur.com/jaNap.png" alt="enter image description here"> .</p>
<p>My question is: How can I specify the corner coordinates of the image in the plot? Let's say I'd like the bottom-left corner to be at <code>x, y = 0.5, 1.0</code> and the top-right corner to be at <code>x, y = 8.0, 7.0</code>.</p>
|
<p>Use the <code>extent</code> keyword of <code>imshow</code>. The order of the arguments is <code>[left, right, bottom, top]</code>.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
x = np.random.uniform(0.0,10.0,15)
y = np.random.uniform(0.0,10.0,15)
datafile = 'lena.jpg'
img = plt.imread(datafile)
plt.scatter(x,y,zorder=1)
plt.imshow(img, zorder=0, extent=[0.5, 8.0, 1.0, 7.0])
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/egaj7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/egaj7.png" alt="enter image description here" /></a></p>
<ul>
<li>For cases where it's desired to have an image in a small area of the scatter plot, change the order of the plots (<code>.imshow</code> then <code>.scatter</code>) and change the <code>extent</code> values.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>plt.imshow(img, zorder=0, extent=[3.0, 5.0, 3.0, 4.50])
plt.scatter(x, y, zorder=1)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/A2dgA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A2dgA.png" alt="enter image description here" /></a></p>
|
python|numpy|matplotlib
| 44 |
1,908,923 | 64,230,848 |
Question on function multiplication of str and int and type errors in python
|
<p>Hi, I am pretty new to Python and programming in general, and I cannot seem to figure this problem out; I just keep getting "TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'" for a function I am writing.
Here is my function, or at least the basics of it.</p>
<pre><code>def y(x):
    return print("lorem") * x
y(100)
</code></pre>
<p>Any help I would be very grateful for because this is the first time i have asked a question on here.</p>
|
<p>The error happens because <code>print("lorem")</code> returns <code>None</code>, which you then multiply by <code>x</code>. It's hard to tell exactly what you're looking for, but try the following:</p>
<pre><code>def y(x):
    print("lorem" * x)
    return
y(100)
</code></pre>
<p>If you're instead looking to print each "lorem" on separate lines, try the following:</p>
<pre><code>def y(x):
    for i in range(x):
        print("lorem")
    return
y(100)
</code></pre>
|
python|typeerror
| 1 |
1,908,924 | 69,682,986 |
Why are the digits in my numbers printing separately rather than together?
|
<p>This is an example of my code. It is not the whole code, it is just the part where I am having trouble. Does anyone understand why it prints like this rather than the full numbers, like 104.0 and 96.0? They are strings, but it will not let me convert them to floats because of the period in some of the values.</p>
<pre><code>with open('file.csv','w') as file:
    with open('file2.csv', 'r') as file2:
        reader = csv.DictReader(file2)
        file.write(','.join(row))
        file.write('\n')
        for num,row in enumerate(reader):
            outrow = []
            for x in row['numbers']:
                print(x)
</code></pre>
<p>When I execute this, it prints out the values I am looking for but separately like this:</p>
<pre><code>1
0
4
.
0
9
6
.
0
N
a
N
1
3
6
.
0
N
a
N
6
2
.
0
</code></pre>
<p>The 'NaN' are values I am changing, but the rest of the numbers I have to use. I cannot insert them into a list because they will end up separated right?</p>
|
<p>Seems like you want something like:</p>
<pre class="lang-py prettyprint-override"><code>with open('file.csv','w') as file:
with open('file2.csv', 'r') as file2:
reader = csv.DictReader(file2)
file.write(','.join(row))
file.write('\n')
for num,row in enumerate(reader):
number = row['numbers']
print(number)
</code></pre>
<p><code>for x in row['numbers']</code> means, "Iterate over every individual character in the <strong>numbers</strong> cell/value".</p>
<p>Also, what are you doing here?</p>
<pre class="lang-py prettyprint-override"><code> file.write(','.join(row))
file.write('\n')
</code></pre>
<p>You don't have a <code>row</code> variable/object at that point (at least not visible in your example). Are you trying to write the header? Presumably it's working, so you defined <code>row</code> before, maybe like, <code>row = ['col1', 'numbers']</code></p>
<p>If so, maybe take this general approach:</p>
<pre class="lang-py prettyprint-override"><code>import csv
# Do your reading and processing in one step
rows = []
with open('input.csv', newline='') as f:
reader = csv.DictReader(f)
for row in reader:
# do some work on row, like...
number = row['numbers']
if row['numbers'] == 'NaN':
row['numbers'] = '-1' # whatever you do with NaN
rows.append(row)
# Do your writing in another step
my_field_names = rows[0].keys()
with open('output.csv', 'w', newline='') as f:
# Use the provided writer, in addition to reader
writer = csv.DictWriter(f, fieldnames=my_field_names)
writer.writeheader()
writer.writerows(rows)
</code></pre>
<p>At the very least, use the provide <strong>writer</strong> and <strong>DictWriter</strong> classes, they will make your life much easier.</p>
<p>I mocked up this sample CSV:</p>
<p><strong>input.csv</strong></p>
<pre><code>id,numbers
id_1,100.4
id2,NaN
id3,23
</code></pre>
<p>and the program above produced this:</p>
<p><strong>output.csv</strong></p>
<pre><code>id,numbers
id_1,100.4
id2,-1
id3,23
</code></pre>
|
python|csv|for-loop
| 0 |
1,908,925 | 55,621,906 |
Send a dynamic value as a parameter to a Firestore 'where' query
|
<ol>
<li><p>I need to compare data from a list to the <code>firestore</code> data, where the value for the query is dynamic. I am using python to perform the query:</p>
<pre><code>mail='sampoornitb@gmail.com'
doc_ref = store.collection(u'students').where(u'email', u'==', 'mail')
</code></pre>
<p>This code does not work; it raises a <code>NameError</code>.</p></li>
<li><p>If I use:</p>
<pre><code>doc_ref = store.collection(u'students').where(u'email', u'==', 'sampoornitb@gmail.com')
</code></pre>
<p>It is working fine.</p></li>
</ol>
<p>Can you suggest a query that uses a dynamic value as a parameter?</p>
|
<p>Don't put quotes around the name of a variable that you want to pass to the Firestore API. Quoted strings are taken literally in Python. (It has nothing to do with the Firestore SDK.)</p>
<pre><code>mail='sampoornitb@gmail.com'
doc_ref = store.collection(u'students').where(u'email', u'==', mail)
</code></pre>
|
python-3.x|google-cloud-firestore
| 0 |
1,908,926 | 65,053,135 |
Why does Visual Studio Code use global python config despite pointing to my virtual environment?
|
<p>I have a workspace folder, with a virtual environment under /venv/ I have an older version of opencv-python installed here (3.4.6.27) than what I have globally (4.4.0.46). However, despite pointing visual studio code to my venv, inspecting the version shows the higher one.</p>
<p>I.e., When I activate the venv in a terminal, then</p>
<pre><code>pip list
</code></pre>
<p>I get this</p>
<pre><code>Package Version
------------- --------
numpy 1.19.4
opencv-python 3.4.6.27
pip 20.1.1
setuptools 47.1.0
</code></pre>
<p>But in a visual studio notebook:</p>
<pre><code>import pkg_resources
pkg_resources.get_distribution("opencv-python").version
</code></pre>
<p>provides</p>
<pre><code>'4.4.0.46'
</code></pre>
<p>When I click to select to the interpreter, it is pointing to the <em>correct</em> path.</p>
<pre><code>Current: ~/Desktop/workspace/venv/bin/python
</code></pre>
<p>So what am I missing?</p>
|
<p>It is possible for a Jupyter notebook in VS Code to use a different interpreter than the VS Code interpreter setting. Check the Python kernel by clicking the kernel name up on the top right of the window (Look for "Python 3: Idle" or similar). Click that and make sure it is pointing at the right Python executable.</p>
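<p>A quick way to confirm which interpreter a notebook cell is actually running on (independent of what the UI shows) is to print it from the kernel itself and compare against the venv path you see in the terminal:</p>

```python
import sys

# The interpreter the kernel is really using.
print(sys.executable)

# For a venv this points inside the venv directory.
print(sys.prefix)
```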
|
python|visual-studio-code|virtualenv|vscode-settings
| 1 |
1,908,927 | 64,902,931 |
How can I send a TCP request with Python, and how does the routing work?
|
<p>I managed to finish a program that, in a network, sends data from PC to PC. That's easy, as repeaters and everything in one network can route the request to other PCs in the network.</p>
<p>But if I leave my Network and maybe even add a VPN with a few reroutes, how does my data reach the destination?</p>
<p>More importantly, how do I program it? I know that via</p>
<p><code>send()</code></p>
<p>I can add bytes, flags and an IP address+port, and if I get data from the IP address of a VPN, I cannot just go "yes, let's just return it whence it came". How do I know what port on the VPN and router to send it to, how does the VPN know (or how do I tell it) to forward it to the router and to what port there, and how do I tell the router to send it on the port I specified to a specific PC in the network?</p>
|
<p>TLDR: you just need to specify the destination (IP:port). A correctly configured VPN connection should instruct your system to set its IP range and gateway. The setting is either preconfigured or distributed by the VPN server.</p>
<p>It's not really a python question, but networking. Assume we're using TCP/IP, only one NIC attached to your system. Your system is connected to a router or switch, not a peer-to-peer or other network.</p>
<p>Your system's network stack, upon receiving the connection request by python, will only send packets to the default gateway, regardless of its final destination (such information is part of the packet, see <a href="https://en.wikipedia.org/wiki/IPv4" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/IPv4</a>).</p>
<p>From this point on, your system has almost no control over the route a packet travels, i.e. which devices to use and which to avoid. Instead, it's defined by the route tables of switches and routers in between. More specifically, the route table defines IP ranges and next hop, e.g.</p>
<pre><code>10.0.0.0/8 -> 10.0.0.1 # the next router's IP
1.2.3.4/32 -> a.b.c.d
0.0.0.0/0 -> none
</code></pre>
<p>Each intermediate device does the same, until the final gateway which your destination is connected to. That one will send the packet to your dest directly, because it knows your dest is within reach.</p>
<p>Common VPN (PPTP, L2TP, IPSec) only adds a virtual NIC, and sets your system's preference, so that corresponding traffic will go through this virtual gateway (instead of the default). But the above still applies.</p>
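<p>On the application side this means your code stays the same no matter the route: you only name the destination, and the OS plus every router in between handle the path. A minimal TCP client sketch (host, port and payload are placeholders):</p>

```python
import socket

def fetch(host, port, payload, timeout=5):
    # The application only specifies the endpoint (host, port);
    # routing across networks/VPNs is handled by the network stack.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(payload)
        return s.recv(4096)
```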
|
python|tcp|routes
| 1 |
1,908,928 | 53,173,591 |
Update pandas dataframe based on slice
|
<p>I have a dataframe that I wish to split into "train" and "test" datasets using the <code>sklearn.model_selection.train_test_split</code> function. This function returns two slices of the original DataFrame. I however need this to be in a single DataFrame, with a column entry that identifies the entry type. I could write a function that does this instead, but using the sklearn function is convenient and reliable.</p>
<p>My current approach is as follows:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn import model_selection
dates = pd.date_range('20130101',periods=10)
df = pd.DataFrame(np.random.randn(10,4),index=dates,columns=list('ABCD')).reset_index()
split = [0.8, 0.2]
split_seed = 123
train_df, test_df = model_selection.train_test_split(df, train_size = split[0], test_size = split[1], random_state=split_seed)
train_df["Dataset"] = "train"
test_df["Dataset"] = "test"
final_df = train_df.append(test_df)
</code></pre>
<p>This works perfectly, but results in a warning since I am modifying copied slices instead of the original <code>df</code> object:</p>
<pre><code>A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
</code></pre>
<p>It doesn't really matter since the original DataFrame is no longer used after this. But I'm curious how I could do this differently. I presume that instead of editing <code>train_df</code> and <code>test_df</code> and then appending them again, I could just edit <code>df</code> directly, but as I am not very familiar with how <code>.loc</code> and <code>.iloc</code> work, I'm struggling to see how this would be done.</p>
<p>Pseudo-code that illustrates what I am looking for would be as follows:</p>
<pre><code>df["Dataset"] = "train" WHERE index in train_df.index.values
df["Dataset"] = "test" WHERE index in test_df.index.values
</code></pre>
|
<p>If you don't want to <code>copy</code> your <code>DataFrame</code> in the <code>model_selection.train_test_split()</code> call you can use <code>loc</code>:</p>
<pre><code>df.loc[train_df.index, 'Dataset'] = 'train'
df.loc[test_df.index, 'Dataset'] = 'test'
</code></pre>
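<p>If you'd rather skip the append step entirely, here is a sketch of the same labelling done directly on the original frame. It uses <code>DataFrame.sample</code> instead of sklearn so the example is self-contained; the data and column name are made up:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": range(10)})

# Pick 80% of the rows as the training set, label the rest "test".
train_idx = df.sample(frac=0.8, random_state=123).index
df["Dataset"] = np.where(df.index.isin(train_idx), "train", "test")
print(df["Dataset"].value_counts())
```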
|
python|pandas|dataframe
| 3 |
1,908,929 | 65,249,994 |
Randomly shuffle elements of one list and save the order of shuffling
|
<p>I want to shuffle a list but store the order of the original list.
Something like this:</p>
<pre><code>originallist=[1,2,3,4,5]
newlist=[]
orderlist=[]
for i in range(0,len(originallist)):
randomindex=random.randrange(0,len(originallist))
if randomindex not in orderlist:
newlist.insert(randomindex,originallist[i])
orderlist.append(randomindex)
else:
i-=1
</code></pre>
<p>The problem is that if <code>originallist</code> contains more than 2 elements, <code>orderlist</code> ends up with one less element in it, and the missing element is random.
How do I fix this?</p>
|
<p>Create a list of indices and shuffle the indices then lookup the original list in the order of the shuffled indices.</p>
<pre><code>originallist=[1,2,3,4,5]
orderlist = list(range(len(originallist)))
random.shuffle(orderlist)
newlist = [originallist[i] for i in orderlist]
print (orderlist)
print (newlist)
</code></pre>
<p>Output:</p>
<pre><code>[2, 0, 4, 1, 3]
[3, 1, 5, 2, 4]
</code></pre>
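<p>If the point of saving <code>orderlist</code> is to be able to undo the shuffle later, a sketch of the inverse mapping (same construction as in the answer):</p>

```python
import random

originallist = [1, 2, 3, 4, 5]
orderlist = list(range(len(originallist)))
random.shuffle(orderlist)
newlist = [originallist[i] for i in orderlist]

# Invert: newlist[pos] came from originallist[orderlist[pos]],
# so writing it back to index orderlist[pos] restores the original order.
restored = [None] * len(newlist)
for pos, i in enumerate(orderlist):
    restored[i] = newlist[pos]
print(restored)
```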
|
python
| 2 |
1,908,930 | 10,278,023 |
Node count of a graph not matching
|
<p>I have a MongoDB database with the following attributes about forum posts:</p>
<pre><code>thread
author (posted in the thread)
children (a list of authors who replied to the post)
child_count (number of children in the list)
</code></pre>
<p>Im trying to build a graph with the following nodes:</p>
<pre><code>thread
author
child authors
</code></pre>
<p>There are more than 30,000 distinct authors in my database, but in the generated graph the author count is around 3,000. Or, out of a total of 33,000 nodes, the following code generates around 5,000. What is going on here?</p>
<pre><code>for doc in coll.find():
thread = doc['thread'].encode('utf-8')
author_parent = doc['author'].encode('utf-8')
children = doc['children']
children_count = len(children)
#print G.nodes()
#print post_parent, author, doc['thread']
try:
if thread in G:
continue
else:
G.add_node(thread, color='red')
thread_count+=1
if author_parent in G:
G.add_edge(author_parent, thread)
else:
G.add_node(author_parent, color='green')
G.add_edge(author_parent, thread, weight=0)
author_count+=1
if doc['child_count']!=0:
for doc in children:
if doc['author'].encode("utf-8") in G:
print doc['author'].encode("utf-8"), 'in G'
G.add_edge(doc['author'].encode("utf-8"), author_parent)
else:
G.add_node(doc['author'].encode("utf-8"),color='green')
G.add_edge(doc['author'].encode("utf-8"), author_parent, weight=0)
author_count+=1
except:
print "failed"
nx.write_dot(G,PATH)
print thread_count, author_count, children_count
</code></pre>
|
<p>I got the answer. The continue statement was skipping to the next iteration so I was losing many nodes that way.</p>
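<p>Concretely: when the thread node already exists, only the node creation should be skipped, not the rest of the loop body. A small self-contained sketch of the restructured check (toy data, assuming networkx is installed):</p>

```python
import networkx as nx

G = nx.Graph()
docs = [
    {"thread": "t1", "author": "alice"},
    {"thread": "t1", "author": "bob"},   # same thread again: must not skip!
]
for doc in docs:
    thread, author = doc["thread"], doc["author"]
    if thread not in G:                   # only skip the node creation...
        G.add_node(thread, color="red")
    if author not in G:
        G.add_node(author, color="green")
    G.add_edge(author, thread, weight=0)  # ...the edge is always added
print(G.number_of_nodes(), G.number_of_edges())
```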
|
python|mongodb|social-networking|data-visualization|networkx
| 1 |
1,908,931 | 4,869,391 |
Distributing list items to variables in python
|
<p>I'm fairly new to Python and I'm looking for a way to distribute items in a list into individual variables. The point of this is to display individual items as text objects in Blender. Here's what I have so far, but I know there's gotta be a more efficient way to go about doing this.</p>
<pre><code>file = open('lyrics.conf')
data = file.read()
file.close()
b = data.split('/')
v = len(b)
if v >= 1:
v1 = b[0]
if v >= 2:
v2 = b[1]
if v >= 3:
v3 = b[2]
if v >= 4:
v4 = b[3]
if v >= 5:
v5 = b[4]
if v >= 6:
v6 = b[5]
if v >= 7:
v7 = b[6]
if v >= 8:
v8 = b[7]
if v >= 9:
v9 = b[8]
if v >= 10:
v10 = b[9]
</code></pre>
|
<p>If you really want individual variables, at some point you at least have to do an unpacking like <code>v1,v2,v3,v4,v5,v6,v7,v8,v9,v10 = some_list</code>.</p>
<p>But why would you want to do this? If something is a collection/list of things, it is best represented as such.</p>
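<p>If you do need a fixed number of names even when the file has fewer fields, one way is to pad the list before unpacking. A sketch with 5 names for brevity (the sample string is made up):</p>

```python
from itertools import chain, islice, repeat

b = "one/two/three".split("/")

# Pad with None so the unpack always gets exactly 5 values.
v1, v2, v3, v4, v5 = islice(chain(b, repeat(None)), 5)
print(v1, v2, v3, v4, v5)  # one two three None None
```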
|
python
| 6 |
1,908,932 | 62,612,201 |
Print dynamically in just one line
|
<p>I've been trying to improve my Fibonacci script. I made a few changes to how it looks visually (it has a minimalist "menu") and some other stuff to avoid it breaking, like not allowing text as input for the amount of numbers it should generate. One of the things I wanted to change was the output, to show everything in just one line, but I've been having a hard time doing so.</p>
<p>My code:</p>
<pre><code>count = int(input("How many numbers do you want to generate?"))
a = 0
b = 0
c = 1
i = 0
while i < count:
print(str(c))
a = b
b = c
c = a + b
i = i+1
</code></pre>
<p>What I also tried: instead of <code>print(str(c))</code>, I tried the following, without any luck:</p>
<pre><code> print("\033[K", str(c), "\r", )
sys.stdout.flush()
</code></pre>
<p>Desired output:</p>
<pre><code>1, 1, 2, 3 ,5
</code></pre>
<p>Output:</p>
<pre><code>1
1
2
3
5
</code></pre>
|
<p>Use the <code>end</code> parameter of the <code>print</code> function, specifically in your example:</p>
<pre><code>while i < count:
print(c, end=", ")
...
</code></pre>
<p>To prevent the trailing comma after the last print:</p>
<pre><code>while i < count:
print(c, end=("" if i == count - 1 else ", "))
...
</code></pre>
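<p>Another common pattern is to collect the numbers first and join them at the end, which avoids the trailing-separator problem entirely (here with a fixed <code>count</code> instead of user input):</p>

```python
count = 5
nums = []
a, b = 1, 1
for _ in range(count):
    nums.append(a)
    a, b = b, a + b  # advance the Fibonacci pair
print(", ".join(map(str, nums)))  # 1, 1, 2, 3, 5
```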
|
python|python-3.x|formatting
| 3 |
1,908,933 | 62,589,374 |
Comparing two dataframes based on values within column
|
<p>I want to compare dataframes based on the 'Horse' column. I want to find rows where the 'Odds' in dataframe 1 are bigger than the 'AvgOdds' in dataframe 2 for a particular horse. For example, this would be rows 0 and 1 in dataframe 1 for 'Indian Sounds'. I want the output to include the 'Race', 'Horse', 'Bookmaker', and 'Difference between Odds and Avg Odds'.</p>
<p>Dataframe 1:</p>
<pre><code>Race Horse Bookmaker Odds
0 Bath R2 Indian Sounds BetEasy 2.65
1 Bath R2 Indian Sounds Neds 2.45
2 Bath R2 Indian Sounds Sportsbet 2.20
3 Bath R2 Hello BetEasy 4.2
4 Bath R2 Hello Neds 4.1
5 Bath R2 Hello Sportsbet 4
</code></pre>
<p>Dataframe 2:</p>
<pre><code>Horse AvgOdds
0 Indian Sounds 2.43
1 Hello 4.1
</code></pre>
<p>Code to construct dataframes:</p>
<pre><code>cols1 = ['Race', 'Horse', 'Bookmaker', 'Odds']
df1 = pd.DataFrame(data=data1, columns=cols1)
cols2 = ['Race', 'Horse', 'Bookmaker', 'AvgOdds']
df2 = pd.DataFrame(data=data1, columns=cols2)
df3 = df2.groupby(by='Horse', sort=False).mean()
df3 = df3.reset_index()
df4 = round(df3,2)
df1[df1['Odds'] > df4['AvgOdds']]
</code></pre>
<p>When I use this code I get an error saying "can only compare identically-labeled Series objects". I think this is because it is trying to compare row 0 from dataframe 1 with row 0 from dataframe 2 and so on, which does not work as there are more rows in dataframe 1. I need it to compare rows 0-2 in dataframe 1 with row 0 in dataframe 2, then rows 3-5 in dataframe 1 with row 1 in dataframe 2.</p>
|
<p>I have assumed your df columns look like below:</p>
<pre><code>df1=pd.DataFrame({
'Race':['Bath R2','Bath R2','Bath R2','Bath R2','Bath R2','Bath R2'],
'Horse':['Indian Sounds','Indian Sounds','Indian Sounds','Hello','Hello','Hello'],
'Bookmaker':['BetEasy','Neds','Sportsbet','BetEasy','Neds','Sportsbet'],
'Odds':[2.65,2.45,2.20,4.2,4.1,4]
})
df2=pd.DataFrame({
'Horse':['Indian Sounds','Hello'],
'AvgOdds':[2.43,4.1]
})
</code></pre>
<p>and if you want to know the rows where the 'Odds' in data frame 1 are bigger than the 'AvgOdds' in data frame 2 you can do an inner join and filter like below:</p>
<pre><code>#merge df1 and df2 based on Horse column
result_df=pd.merge(df1,df2,on='Horse',how='inner')
#filter out the rows wher Odds are greater than AvgOdds
result_df[result_df['Odds']>result_df['AvgOdds']]
</code></pre>
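<p>Since the question also asks for the difference between the odds in the output, a self-contained sketch extending the same merge with that extra column (data copied from the answer; the column name <code>Difference</code> is my choice):</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    'Race': ['Bath R2'] * 6,
    'Horse': ['Indian Sounds'] * 3 + ['Hello'] * 3,
    'Bookmaker': ['BetEasy', 'Neds', 'Sportsbet'] * 2,
    'Odds': [2.65, 2.45, 2.20, 4.2, 4.1, 4],
})
df2 = pd.DataFrame({'Horse': ['Indian Sounds', 'Hello'], 'AvgOdds': [2.43, 4.1]})

result_df = pd.merge(df1, df2, on='Horse', how='inner')
result_df['Difference'] = result_df['Odds'] - result_df['AvgOdds']
out = result_df.loc[result_df['Odds'] > result_df['AvgOdds'],
                    ['Race', 'Horse', 'Bookmaker', 'Difference']]
print(out)
```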
|
python|pandas|dataframe
| 0 |
1,908,934 | 60,693,024 |
A way to count the number of elements in-between another element and return an array
|
<p>So, I am trying to scrape data off a page to analyze it with R. For a complete analysis I need to be able to account for the day of each infection. The page presents its content like so:</p>
<pre><code><h4> 5 of March </h4>
<ul>
<li></li>
<li></li>
x10
</ul>
<h4> 4 of March </h4>
<ul>
<li></li>
<li></li>
x15
</ul>
<h4> 3 of March </h4>
<ul>
<li></li>
</ul>
</code></pre>
<p>and so on until the 21st of January. </p>
<p>What I want to do is count the number of <code><li></code> elements within a given <code><ul></code> that corresponds to its <code><h4></code>, and have Python give me back a list that repeats the <code><h4></code> string once for each <code><li></code>.
So for example, in the <code><h4></code> 5 of March <code><h4></code> case I would want a list that repeats "5 of March" 12 times, because there are 12 <code><li></code> elements that correspond to that <code><h4></code>. </p>
<p>so far this is my code, but it doesn't even return something to me:</p>
<pre><code>import re
from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
driver = webdriver.Chrome(#purposefully left blank)
amount = []
cases = []
deaths = []
Country = []
Province = []
driver.get("https://bnonews.com/index.php/2020/01/timeline-coronavirus-epidemic/")
content = driver.page_source
soup = BeautifulSoup(content)
h4_tag = str(soup.findAll('h4'))
li_tag = str(soup.findAll('li'))
FLAGS = re.VERBOSE | re.DOTALL | re.IGNORECASE
</code></pre>
<p>At this point I was just trying to see if I could count the number of <code><ul></code> elements, but I can't even do that...
Any ideas? I've checked Stack Overflow and GitHub for answers but to no avail...</p>
<p><strong>UPDATE</strong>
the code <code>findChildren</code> doesn't work because the <code><ul></code> and <code><li></code> elements are not children of the <code><h4></code> element. Removed "recursive"</p>
<pre><code>ul_tag = soup.find('div', attrs = {'class':'mvp-post-soc-in'})
children = ul_tag.find('li')
print(children)
</code></pre>
<p>This code returns a list with all <code><li></code> elements and their content.</p>
<p><strong>Trying to check if soup(content) is string</strong>
This is what I get when I print soup:</p>
<pre><code><html lang="en-US" style="transform: none;"><head><meta content="60109657413" property="fb:pages"/>
<meta charset="utf-8"/>
<meta content="width=device-width, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0, user-scalable=no" id="viewport" name="viewport"/>
<link href="https://bnonews.com/wp-content/uploads/2018/03/favicon2.ico" rel="shortcut icon"/><link href="https://bnonews.com/xmlrpc.php" rel="pingback"/>
<meta content="article" property="og:type"/>
<meta content="https://bnonews.com/wp-content/uploads/2019/04/2019EbolaWorker.jpg" property="og:image"/>
<meta content="https://bnonews.com/wp-content/uploads/2019/04/2019EbolaWorker.jpg" name="twitter:image"/>
<meta content="https://bnonews.com/index.php/2020/01/timeline-coronavirus-epidemic/" property="og:url"/>
<meta content="TIMELINE: Coronavirus epidemic" property="og:title"/>
<meta content="The following is a timeline of new cases in China and around the world. It is updated once a day. For the current day, click here. Timeline (GMT) 10 March 23:57: First 2 cases in Bolivia. (Source) 23:40: 21 new cases and 1 new death in Spain. (Source) 23:39: 1 new case in Nebraska, United […]" property="og:description"/>
<meta content="summary" name="twitter:card"/>
<meta content="https://bnonews.com/index.php/2020/01/timeline-coronavirus-epidemic/" name="twitter:url"/>
<meta content="TIMELINE: Coronavirus epidemic" name="twitter:title"/>
<meta content="The following is a timeline of new cases in China and around the world. It is updated once a day. For the current day, click here. Timeline (GMT) 10 March 23:57: First 2 cases in Bolivia. (Source) 23:40: 21 new cases and 1 new death in Spain. (Source) 23:39: 1 new case in Nebraska, United […]" name="twitter:description"/>
<title>TIMELINE: Coronavirus epidemic - BNO News</title>
<link href="https://bnonews.com/index.php/2020/01/timeline-coronavirus-epidemic/" rel="canonical"/>
<meta content="en_US" property="og:locale"/>
<meta content="article" property="og:type"/>
<meta content="TIMELINE: Coronavirus epidemic - BNO News" property="og:title"/>
<meta content="The following is a timeline of new cases in China and around the world. It is updated once a day. For the current day, click here. Timeline (GMT) 10 March 23:57: First 2 cases in Bolivia. (Source) 23:40: 21 new cases and 1 new death in Spain. (Source) 23:39: 1 new case in Nebraska, United …" property="og:description"/>
<meta content="https://bnonews.com/index.php/2020/01/timeline-coronavirus-epidemic/" property="og:url"/>
<meta content="BNO News" property="og:site_name"/>
</code></pre>
|
<p>Since <code>ul</code> is not contained within the parent container you want to print, you will probably have to walk the html elements and print as you go, checking for the different elements along the way.</p>
<pre><code>from bs4 import BeautifulSoup
html = '<h4> 5 of March </h4><ul><li></li><li></li><li></li><li></li><li></li><li></li><li></li><li></li><li></li><li></li><li></li><li></li></ul><h4> 4 of March </h4><ul><li></li><li></li></ul><h4> 3 of March </h4><ul><li></li></ul>'
soup = BeautifulSoup(html, 'html.parser')
current_h4 = ''
for e in soup:
if e.name == 'h4':
current_h4 = e.text
if e.name == 'ul':
for li in e.find_all('li'):
print(current_h4)
</code></pre>
<p>Result:</p>
<pre><code> 5 of March
5 of March
5 of March
5 of March
5 of March
5 of March
5 of March
5 of March
5 of March
5 of March
5 of March
5 of March
4 of March
4 of March
3 of March
</code></pre>
<p><strong>Additional Update</strong>:
Here is a working example using your exact data source. It is easier to target the <code>div</code> that contains your data directly (id="mvp-content-main") instead of one of the parent divs. I believe the issue you were having was in navigating down the child elements to the one you actually wanted. The result below prints 2,596 lines.</p>
<pre><code>from selenium import webdriver
from bs4 import BeautifulSoup
driver = webdriver.Chrome(executable_path=r'your/path/here/chromedriver.exe')
driver.get("https://bnonews.com/index.php/2020/01/timeline-coronavirus-epidemic/")
content = driver.page_source
soup = BeautifulSoup(content)
ul_tag = soup.find('div', attrs = {'id':'mvp-content-main'})
current_h4 = ''
for e in ul_tag:
if e.name == 'h4':
current_h4 = e.text
if e.name == 'ul':
for li in e.find_all('li'):
print(current_h4)
</code></pre>
|
python|web-scraping
| 0 |
1,908,935 | 71,331,479 |
Generating hashed passwords for Guacamole
|
<p><a href="https://guacamole.apache.org/" rel="nofollow noreferrer">Guacamole</a> provides a default username and password (<code>guacadmin</code> and <code>guacadmin</code>) initialized in a postgres database like this:</p>
<pre><code>INSERT INTO guacamole_user (entity_id, password_hash, password_salt, password_date)
SELECT
entity_id,
decode('CA458A7D494E3BE824F5E1E175A1556C0F8EEF2C2D7DF3633BEC4A29C4411960', 'hex'), -- 'guacadmin'
decode('FE24ADC5E11E2B25288D1704ABE67A79E342ECC26064CE69C5B3177795A82264', 'hex'),
CURRENT_TIMESTAMP
FROM guacamole_entity WHERE name = 'guacadmin' AND guacamole_entity.type = 'USER';
</code></pre>
<p>I'm trying to understand how that password hash was generated. From the documentation:</p>
<blockquote>
<p>Every user has a corresponding entry in the guacamole_user and guacamole_entity tables. Each user has a corresponding unique username, specified via guacamole_entity, and salted password. The salted password is split into two columns: one containing the salt, and the other containing the password hashed with SHA-256.</p>
<p>[...]</p>
<p><code>password_hash</code></p>
<p>The result of hashing the user’s password concatenated with the contents of password_salt using SHA-256. The salt is appended to the password prior to hashing.</p>
<p><code>password_salt</code></p>
<p>A 32-byte random value. When a new user is created from the web interface, this value is randomly generated using a cryptographically-secure random number generator.</p>
</blockquote>
<p>And I think the corresponding Java code is <a href="https://github.com/apache/guacamole-client/blob/master/extensions/guacamole-auth-jdbc/modules/guacamole-auth-jdbc-base/src/main/java/org/apache/guacamole/auth/jdbc/security/SHA256PasswordEncryptionService1G.java" rel="nofollow noreferrer">here</a>:</p>
<pre><code> StringBuilder builder = new StringBuilder();
builder.append(password);
if (salt != null)
builder.append(BaseEncoding.base16().encode(salt));
// Hash UTF-8 bytes of possibly-salted password
MessageDigest md = MessageDigest.getInstance("SHA-256");
md.update(builder.toString().getBytes("UTF-8"));
return md.digest();
</code></pre>
<p>I'm trying to reproduce this in Python. It looks like they're taking
the password, appending the hex-encoded salt, and then calculating the
sha256 checksum of the resulting byte string. That should be this:</p>
<pre><code>>>> from hashlib import sha256
>>> password_salt = bytes.fromhex('FE24ADC5E11E2B25288D1704ABE67A79E342ECC26064CE69C5B3177795A82264')
>>> password_hash = sha256('guacadmin'.encode() + password_salt.hex().encode())
>>> password_hash.hexdigest()
'523912c05f1557e2da15350fae7217c04ee326edacfaa116248c1ee4e680bd57'
</code></pre>
<p>...but I'm not getting the same result. Am I misreading (or
misunderstanding) the Java code?</p>
|
<p>...and of course I figured it out right after posting the question. The difference is that <code>BaseEncoding.base16().encode(...)</code> produces a hex encoding using upper-case characters, while Python's <code>hex()</code> method uses lower case. That means the equivalent code is in fact:</p>
<pre><code>>>> from hashlib import sha256
>>> password_salt = bytes.fromhex('FE24ADC5E11E2B25288D1704ABE67A79E342ECC26064CE69C5B3177795A82264')
>>> password_hash = sha256("guacadmin".encode() + password_salt.hex().upper().encode())
>>> password_hash.hexdigest()
'ca458a7d494e3be824f5e1e175a1556c0f8eef2c2d7df3633bec4a29c4411960'
</code></pre>
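<p>Wrapped up as a small helper for generating new <code>guacamole_user</code> entries. The helper itself is my own, not part of Guacamole; only the hashing scheme (SHA-256 over password + upper-case hex salt) is Guacamole's:</p>

```python
import os
from hashlib import sha256

def guac_password_entry(password, salt=None):
    """Return (password_hash, password_salt) bytes for a guacamole_user row."""
    salt = os.urandom(32) if salt is None else salt
    # Guacamole hashes the password concatenated with the UPPER-CASE hex salt.
    digest = sha256(password.encode() + salt.hex().upper().encode()).digest()
    return digest, salt

# Reproduces the stock guacadmin hash when fed the stock salt:
salt = bytes.fromhex('FE24ADC5E11E2B25288D1704ABE67A79E342ECC26064CE69C5B3177795A82264')
print(guac_password_entry('guacadmin', salt)[0].hex())
```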
<p>In case anyone stumbles across the same issue, I was able to extract the Java code into a simple test case:</p>
<pre><code>import com.google.common.io.BaseEncoding;
import java.io.UnsupportedEncodingException;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
class Main {
public static void main(String args[]) {
String password = "guacadmin";
byte[] salt = HexFormat.of().parseHex("FE24ADC5E11E2B25288D1704ABE67A79E342ECC26064CE69C5B3177795A82264");
try {
StringBuilder builder = new StringBuilder();
builder.append(password);
if (salt != null)
builder.append(BaseEncoding.base16().encode(salt));
System.out.println("builder is: " + builder.toString());
// Hash UTF-8 bytes of possibly-salted password
MessageDigest md = MessageDigest.getInstance("SHA-256");
md.update(builder.toString().getBytes("UTF-8"));
System.out.println(BaseEncoding.base16().encode(md.digest()));
}
catch (UnsupportedEncodingException e) {
System.out.println("no such encoding");
}
catch (NoSuchAlgorithmException e) {
System.out.println("no such algorithm");
}
}
}
</code></pre>
<p>This gave me something to run interactively and check the output. This requires the <a href="https://github.com/google/guava/releases" rel="nofollow noreferrer">guava</a> library, and can be compiled like this:</p>
<pre><code>$ javac -classpath .:guava-31.1-jre.jar -d . Main.java
</code></pre>
<p>And run like this:</p>
<pre><code>$ java -classpath .:guava-31.1-jre.jar Main
</code></pre>
|
python|sha256|guacamole
| 0 |
1,908,936 | 70,728,384 |
How to restore files that are deleted by os.remove function in python?
|
<p>I was writing a Python script to delete files older than 30 days. While running it, I unfortunately gave the wrong path and some of my precious data was deleted.</p>
<p>Can you please tell me how I can restore that data?</p>
<p>This is the function that I used</p>
<pre><code>def remove_file(path):
# removing the file
if not os.remove(path):
# success message
print(f"{path} is removed successfully")
else:
# failure message
print(f"Unable to delete the {path}")
</code></pre>
|
<p>There are Python modules that allow safe deletion, like <code>send2trash</code>. You could then recover the files from the recycle bin or trash.</p>
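<p>If installing a package isn't an option, a stdlib-only sketch of the same idea is to move files into a local "trash" directory instead of deleting them outright (the directory name and timestamp prefix are arbitrary choices):</p>

```python
import os
import shutil
import time

def safe_remove(path, trash_dir=".trash"):
    """Move a file into a local trash directory instead of deleting it."""
    os.makedirs(trash_dir, exist_ok=True)
    # Prefix with a timestamp so repeated names don't collide.
    dest = os.path.join(trash_dir, f"{int(time.time())}_{os.path.basename(path)}")
    shutil.move(path, dest)
    return dest
```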
|
python|restore|python-os
| 2 |
1,908,937 | 63,556,187 |
Normal Fit from experimental data
|
<p><a href="https://i.stack.imgur.com/uAuYN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uAuYN.png" alt="enter image description here" /></a></p>
<p>Hi all
I want to obtain a normal fit from a set of data obtained from experimental results. Since i am starting with python, I dont know where to start. Here's my experimental data. Its a particle size distribution. I want to obtain mean and std. x is the size and y the frequency.</p>
<p>Thank you in advance for any help!</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
x=([0.251839516,0.490440575,0.744647994,0.990643452,1.244142316,1.488611658,1.741274792,1.986416351,2.232538986,2.495993944,2.736393641,2.985059803,3.241792581,3.497435276,3.744829674,3.991788039,4.23860106])
y=([0.271164269,0.492366389,1.256781226,2.468772142,4.479769871,8.376708554,11.85803482,14.57231794,15.56056321,14.05547313,11.11227252,7.625604845,3.947070401,2.186355791,0.937144587,0.455061317,0.228687358])
plt.scatter(x,y,color='red',label='Experiment')
</code></pre>
|
<p>If you want to use SciPy you have <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html" rel="nofollow noreferrer">scipy.stats.norm</a>:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.stats import norm
mu, std = norm.fit(data)
</code></pre>
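<p>Note that <code>norm.fit</code> expects raw samples. Since the data here is already binned (sizes <code>x</code> with frequencies <code>y</code>), one alternative is to compute the frequency-weighted mean and standard deviation directly. A sketch with a shortened version of the question's arrays:</p>

```python
import numpy as np

x = np.array([0.25, 0.49, 0.74, 0.99, 1.24])   # bin centres (shortened here)
y = np.array([0.27, 0.49, 1.26, 2.47, 4.48])   # frequencies

mu = np.average(x, weights=y)                   # frequency-weighted mean
std = np.sqrt(np.average((x - mu) ** 2, weights=y))
print(mu, std)
```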
|
python|normal-distribution
| 3 |
1,908,938 | 63,420,061 |
How to customize the title and the axis labels on the histogram of this plot?
|
<p>I want to add the title 'Magnitude' on the right side of the y-axis of the histogram. I also want to add the scatter plot title inside the frame. How do I do that? Can I add these features with a one-liner?</p>
<p>my code and output figure is given below</p>
<pre><code># definitions for the axes
left, width = 0.1, 0.65 #width : width of the main plot(Xaxis length)
bottom, height = 0.1, 0.4 #height : height of the main plot(Yaxis length)
spacing = 0.010 # gap between the plots
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom + height + spacing, width, 0.2]
rect_histy = [left + width + spacing, bottom, 0.2, height]
# start with a square Figure
fig = plt.figure(figsize=(8, 10))
ax = fig.add_axes(rect_scatter)
ax_histx = fig.add_axes(rect_histx, sharex=ax)
ax_histy = fig.add_axes(rect_histy, sharey=ax)
# use the previously defined function
scatter_hist(df.YearDeci,df.Magnitude,ax,ax_histx,ax_histy,binx,biny)
ax.set_xticks(np.arange(t1,t2,5))
extraticks=[2018]
ax.set_xticks(list(ax.get_xticks()) + extraticks)
plt.show()
#######################################################
def scatter_hist(x, y,ax,ax_histx,ax_histy,binx,biny):
# no labels
ax_histx.tick_params(axis="x", labelbottom=False)
ax_histx.set(ylabel='Number of events',title='Time',facecolor='lightgray')
ax_histy.tick_params(axis="y", labelleft=False)
ax_histy.set(xlabel='Number of events',facecolor='lightgray')
ax_histy.yaxis.set_label_position("right")
# the scatter plot:
ax.scatter(x, y, facecolor='yellow', alpha=0.75,edgecolor='black',linewidth=0.5,s=30)
plt.setp(ax.get_xticklabels(), rotation = 90,fontsize=10)
ax.set(xlabel='Time',ylabel='Number of events',facecolor='lightgray')
# now determine nice limits by hand:
ax_histx.hist(x, bins=binx, density=False, facecolor='r', alpha=0.75,edgecolor='black',linewidth=0.5)
ax_histy.hist(y, bins=biny, density=False, facecolor='r', alpha=0.75,edgecolor='black',linewidth=0.5,orientation='horizontal')
</code></pre>
<p><a href="https://i.stack.imgur.com/Cffrv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cffrv.png" alt="enter image description here" /></a></p>
|
<p>Rather than a title, just add text to that axis. You can rotate and position it however you like. <a href="https://matplotlib.org/examples/pylab_examples/alignment_test.html" rel="nofollow noreferrer">Here</a> is an example.</p>
<pre><code>ax_histy.text(left + width + spacing + 0.2 + 0.1, bottom + 0.5*height, 'test',
horizontalalignment='center',
verticalalignment='center',
rotation=270,
fontsize=12,
transform=ax_histy.transAxes)
</code></pre>
<p>seems to work OK for your case. Play around with the positioning and size to get it the way you want.</p>
<p><a href="https://i.stack.imgur.com/CFS1b.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CFS1b.png" alt="enter image description here" /></a></p>
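<p>If all you need is a rotated axis title rather than free text, matplotlib's own label machinery can also do it in roughly one line — a sketch on a fresh axes standing in for <code>ax_histy</code> (<code>labelpad</code> pushes the label clear of the tick labels):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

fig, ax_histy = plt.subplots()
ax_histy.yaxis.set_label_position("right")
ax_histy.set_ylabel("Magnitude", rotation=270, labelpad=15)
```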
|
python-3.x|matplotlib|histogram
| 1 |
1,908,939 | 63,664,333 |
How do I call my "counting()" function from within the "question()" function in python?
|
<p>How do I call my “counting()” function from within the question() function, so that I only need to specify it once, no matter how many questions I have in my quiz game?
I have tried, but nothing is working.</p>
<p>Please help me, thank you.</p>
<p>P.S. My questions are in Swedish, but they don't matter.</p>
<pre><code> from time import sleep
def main():
option()
def start_menu():
"""This will display the main menu"""
print("*"*40)
print("MAIN MENU")
print("*"*40)
print("1. Start the quiz")
print("2. Statistics")
print("3. Exit game")
def option():
"""This should get user input """
while True:
start_menu()
option= input("Write your choice here [1-3]")
if option=="1":
qustion_game()
elif option=="2":
statistics()
elif option=="3":
game_over()
return
else:
            print("The selection you specified was not valid, [1-3]")
def qustion_game():
"""Frågesporten"""
print("♡"*40)
print("Welcome to this quiz ")
print("♡"*40)
print("")
sleep(1)
question("Annika hade 4 barn, den första hette januari, den andra hette februari, tredje hette april, vad hette den fjärde.?", "vad")
print("")
counting()
sleep(1)
print("♡"*40)
question("Vem får lön utan att jobba en enda dag i hela sitt liv?", "nattvakt" )
print("")
counting()
sleep(1)
print("♡"*40)
question("Lägg till 1000 till 40. Lägg till 1000. Lägg till 30. Lägg till 1000 igen. Lägg nu till 20. Lägg till 1000 igen.Lägg nu till 10.Vad är svaret?", "4100")
print("")
counting()
def question(quiz,quiz_answer):
"""Outputen av frågor"""
user_guess=input(quiz)
while user_guess != quiz_answer:
print("Sorry, try again...")
fail_answer()
user_guess=input(quiz)
print("")
print("*"*40)
print("Correct")
def statistics():
"""Provides the user with statistics on how many questions they have answered and how many errors they have made """
print("Statistics")
print("*"*40)
print("You have totally answered " + str(answered) +" questions")
print("Off " +str(answered)+ " answer, have you answered incorrectly on " + str(fail))
def fail_answer():
"""prints how many questions the user has answered"""
global answered
answered = answered + 1
def counting():
"""prints how many errors the user has made"""
global fail
fail = fail + 1
def game_over():
"""Exit the game"""
print ("Game over, see you next time")
fail = 0
answered = 0
main()
</code></pre>
|
<p>You need to unindent the whole of your code.</p>
<p>Additionally, it seems that your functions <code>fail_answer</code> and <code>counting</code> are doing the wrong things - they need to be swapped around.</p>
<pre><code>def fail_answer():
"""prints how many errors the user has made"""
global fail
fail = fail + 1
def counting():
"""prints how many questions the user has answered"""
global answered
answered = answered + 1
</code></pre>
<p>Apart from that, your code is working in its existing form, if you also just insert a call to <code>counting</code> from inside <code>question</code> after printing <code>Correct</code>:</p>
<pre><code>def question(quiz,quiz_answer):
... existing code goes here ...
print("Correct")
counting()
</code></pre>
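<p>As a side note, the two global counters can also be bundled into one dict, so there is nothing to mix up. A minimal sketch of the idea, outside the quiz code, mirroring the fixed <code>fail_answer</code>/<code>counting</code> pair:</p>

```python
stats = {"answered": 0, "fail": 0}

def fail_answer():
    # mirrors the fixed fail_answer(): one more wrong guess
    stats["fail"] += 1

def counting():
    # mirrors the fixed counting(): one more question answered
    stats["answered"] += 1

fail_answer()
fail_answer()
counting()
```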
|
python|python-3.x
| 1 |
1,908,940 | 56,716,846 |
There are values in rows but dataframe returns Nan
|
<p>When I read a DataFrame in pandas, it returns NaN even though there are values there. Also, how do I delete rows where the value really is N/A or blank? </p>
<pre><code># File to read
Exoticoutput=pd.read_excel("Exotic Deltas - SIGN OFF SHEET"+yesterday+".xlsx",sheetname="Exotic deltas (output to Curo)")
Exoticoutput.to_csv("Output\Exotic Deltas"+' '+ yesterday +' '+".csv", columns=["Hiport Code","ISIN","External Code 1","JSE code","Delta"], index=False)# Output file to create.
</code></pre>
<p>There are values in the Delta column with decimals in the actual Excel file; however, when I create an output file by reading into a dataframe and exporting, it gives NaN.</p>
<p>First file (input):</p>
<pre><code>Option ISIN Delta
ZADA000 0.60972
ZADA 0.292603
</code></pre>
<p>Output file:</p>
<pre><code>Option Isin Delta
No data No Data
NaN
</code></pre>
<p>The input file has the data, but the output file either reads NaN or blank.</p>
<p>Is it because the number I am looking for is floating point?</p>
<p>Sorry for the lack of information, I am new to SE and Python programming. </p>
|
<p>You can use the</p>
<blockquote>
<p>na_values</p>
</blockquote>
<p>argument to control which cell values are treated as NA; in addition, use</p>
<blockquote>
<p>keep_default_na</p>
</blockquote>
<p>to decide whether pandas' built-in NA strings still apply.</p>
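<p>Concretely, the two options could be combined like this when reading the file — a sketch with made-up data, since the real file and column names differ (shown with <code>read_csv</code>; <code>read_excel</code> takes the same arguments):</p>

```python
import io
import pandas as pd

raw = "Option,Delta\nZADA000,0.60972\nZADA,No Data\n"
df = pd.read_csv(io.StringIO(raw),
                 na_values=["No Data"],   # treat this marker as NA
                 keep_default_na=True)    # keep pandas' built-in NA strings too
df = df.dropna(subset=["Delta"])          # drop rows where Delta really is NA
```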
|
pandas
| 0 |
1,908,941 | 56,799,629 |
Configure pip to install everything as with --user
|
<p>Is there a way to make <code>pip install</code> always run as if I gave the <code>--user</code> command? I have to type it every time, especially when copying commands from instructions, and it is tedious to do. I don't really see a good reason to ever install things as root when I can just install with <code>--user</code>.</p>
<p>Configuration is preferred to simple bash alias.</p>
|
<p>Add <code>user = true</code> to your <a href="https://pip.pypa.io/en/stable/user_guide/#config-file" rel="noreferrer">pip configuration file</a>. Specifically, it would go in the <code>[install]</code> section:</p>
<pre><code>[install]
user = true
</code></pre>
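<p>Rather than editing the file by hand, recent pip versions can write the setting for you with the <code>config</code> subcommand (this modifies your user-level pip configuration):</p>

```shell
python -m pip config set install.user true
```

<p>After that, a plain <code>pip install pkg</code> behaves as if <code>--user</code> had been passed.</p>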
|
python|pip
| 7 |
1,908,942 | 69,696,936 |
Count the number of occurrences of characters in a string?
|
<p><strong>Given a string</strong> "<strong>a1a3b5a2c4b1</strong>".
The characters are followed by numbers representing the number of occurrences of the character in the string. The correct <strong>solution would return "a6b6c4"</strong> (a1 + a3 + a2 = a6, b5 + b1 = b6, c4)</p>
<p>My original idea was to convert the string into a list, then to a dict of key value pairs.</p>
<pre><code>data="a1a3b5a2c4b1"
lst = list(data)
{lst[i]: lst[i +1] for i in range(0, len(lst), 2)}
</code></pre>
<p>Returns:</p>
<pre><code>{'a': '2', 'b': '1', 'c': '4'}
</code></pre>
<p>The issue is that it does not increase the number of occurrences (values) but takes the last seen value.</p>
<p>How can I create a dictionary that increases the values rather than replaces them?</p>
|
<p>You keep overriding the key with the latest count in that comprehension. Instead, you have to update the values by addition:</p>
<pre><code>data = "a1a3b5a2c4b1"
counts = {}
i = iter(data)
for char, count in zip(i, i):
counts[char] = counts.get(char, 0) + int(count)
# {'a': 6, 'b': 6, 'c': 4}
</code></pre>
<p>The other natural util to handle counts is, well, a <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="nofollow noreferrer"><code>collections.Counter</code></a>:</p>
<pre><code>from collections import Counter
counts = Counter()
i = iter(data)
for char, count in zip(i, i):
counts.update(**{char: int(count)})
</code></pre>
<p>This also uses the "<a href="https://docs.python.org/3/library/functions.html#zip" rel="nofollow noreferrer"><code>zip</code></a> the same iterator" trick to produce chunks of 2. As for turning these dictionaries into the desired string output:</p>
<pre><code>"".join(f"{k}{v}" for k, v in counts.items())
</code></pre>
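<p>Note that the zip-pairing trick assumes every count is a single digit. If counts could have several digits (e.g. "a12"), a regex-based variant is more robust — a sketch:</p>

```python
import re
from collections import Counter

data = "a1a3b5a2c4b1"
counts = Counter()
# each match is (letter, digits); multi-digit counts work too
for char, num in re.findall(r"([a-zA-Z])(\d+)", data):
    counts[char] += int(num)

result = "".join(f"{k}{v}" for k, v in counts.items())
```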
|
python|algorithm
| 2 |
1,908,943 | 17,893,542 |
Why does os.path.isfile return False?
|
<pre><code>>>> import os
>>> os.listdir("/home/user/Desktop/1")
['1.txt', '2', '3.txt']
>>> os.path.isfile("/home/user/Desktop/1/1.txt")
True
>>> for i in os.listdir("/home/user/Desktop/1"):
... print(os.path.isfile(i))
...
False
False
False
>>>
</code></pre>
<p>Two of them are files, why is the output <code>False</code> when it should be <code>True</code>?</p>
|
<p>When you print <code>os.path.isfile(i)</code>, you're checking if "1.txt" or "2" or "3.txt" is a file, whereas when you run <code>os.path.isfile("/home/user/Desktop/1/1.txt")</code> you have a full path to the file.</p>
<p>Try replacing that line with</p>
<pre><code>print(os.path.isfile("/home/user/desktop/1/" + i))
</code></pre>
<p><strong>Edit:</strong></p>
<p>As mentioned in the comment below by icktoofay, a better solution might be to replace the line with</p>
<pre><code>print(os.path.isfile(os.path.join("/home/user/desktop/1", i)))
</code></pre>
<p>or to earlier store "/home/user/desktop/1" to some variable x, allowing the line to be replaced with</p>
<pre><code>print(os.path.isfile(os.path.join(x,i)))
</code></pre>
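<p>Wrapped up as a small helper, the join-then-check pattern looks like this (a sketch; the <code>classify</code> name is made up and not tied to the paths above):</p>

```python
import os

def classify(base):
    # map each entry name in base to whether it is a regular file,
    # joining with the base path so isfile sees an absolute location
    return {name: os.path.isfile(os.path.join(base, name))
            for name in os.listdir(base)}
```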
|
python
| 54 |
1,908,944 | 60,772,788 |
adding noise to an area in the image
|
<p>As part of image preprocessing I want to corrupt an image by adding random pixel values to a part of the image, specified with a mask. I'm working with Python. Are there any common ways to do this, or is there maybe a paper published with this information? All help is very much appreciated.</p>
|
<p>Random pixel values are just random integers between 0 and 255 (for color pictures). So you can just pick a random pixel on an image, and replace it by 3 random RGB values. Let's say you have a picture (all black so we can visualize):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
pic = np.full((10, 10, 3), 0)
</code></pre>
<p><a href="https://i.stack.imgur.com/jcIDW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jcIDW.png" alt="enter image description here"></a></p>
<p>Then you can replace a coordinate inside the dimensions of the image (10 by 10 here) by 3 random RGB values between 0 and 255. </p>
<pre><code>pic[np.random.randint(0, 10, 5),
np.random.randint(0, 10, 5 )] = \
np.random.randint(0, 256, (5, 3))
</code></pre>
<p>The logic is as follows: take 5 random points in X, Y and replace them with 3 random values between 0 and 255. </p>
<p><a href="https://i.stack.imgur.com/bNb7i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bNb7i.png" alt="enter image description here"></a></p>
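<p>Since the question mentions a mask, the same idea extends to corrupting only a masked region — a sketch using a boolean mask (the region bounds here are made up):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
pic = np.zeros((10, 10, 3), dtype=np.uint8)

mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:8] = True  # the area to corrupt

# replace every masked pixel with random RGB values (0-255)
pic[mask] = rng.integers(0, 256, size=(int(mask.sum()), 3), dtype=np.uint8)
```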
|
python|image-processing|noise|generative-adversarial-network|image-preprocessing
| 2 |
1,908,945 | 66,079,083 |
ROS Camera Calibrator Errors Out
|
<p>I have ROS installed on Ubuntu 20.04 running ros_core, and a Raspberry Pi running a camera node. When I try to run <code>rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/cv_camera/image camera:=/cv_camera</code>, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/opt/ros/noetic/lib/camera_calibration/cameracalibrator.py", line 37, in <module>
import message_filters
File "/opt/ros/noetic/lib/python3/dist-packages/message_filters/__init__.py", line 35, in <module>
import rospy
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/__init__.py", line 49, in <module>
from .client import spin, myargv, init_node, \
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/client.py", line 60, in <module>
import rospy.impl.init
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/init.py", line 54, in <module>
from .tcpros import init_tcpros
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/tcpros.py", line 45, in <module>
import rospy.impl.tcpros_service
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/tcpros_service.py", line 54, in <module>
from rospy.impl.tcpros_base import TCPROSTransport, TCPROSTransportProtocol, \
File "/opt/ros/noetic/lib/python3/dist-packages/rospy/impl/tcpros_base.py", line 160
(e_errno, msg, *_) = e.args
^
SyntaxError: invalid syntax
</code></pre>
<p>I installed <code>python2</code> and <code>python2-dev</code> via apt and then installed pip via get-pip.py. Then I ran <code>pip install pyyaml opencv-python</code> to install the dependencies required by the calibrator. I tried to run it with python3 before I installed python2, but I think it requires python2. What should I do? I have the camera publishing all the right topics. Also, I only see a black-and-white screen when I run <code>rosrun rqt_image_view rqt_image_view</code>. Please help!</p>
|
<p>You have to find the calibrator file, which is under /opt/ros/noetic/lib/camera_calibration/cameracalibrator.py, and then run it manually.</p>
|
python|python-3.x|camera|ros|camera-calibration
| 0 |
1,908,946 | 69,178,917 |
Inheritance in Python models
|
<p>I am having a little misunderstanding with inheritance in Python.
I have one parent class:</p>
<pre><code>class BaseClass(models.Model):
email = models.EmailField(blank=True)
phone = models.CharField(max_length=32, blank=True)
name = models.CharField(
max_length=64, blank=True, verbose_name=_(u'name')
)
surname = models.CharField(
max_length=64, blank=True, verbose_name=_(u'surname')
)
class Meta:
abstract = True
def __str__(self):
if self.name:
return self.name
elif self.email:
return self.email
else:
return self.phone
</code></pre>
<p>And I would like to use all of this data in a child class named SecondClass, but I don't know what I must insert in the body of this class:</p>
<pre><code>class SecondClass(BaseClass):
</code></pre>
|
<p>This is a pretty good resource for this: <a href="https://www.w3schools.com/python/python_inheritance.asp" rel="nofollow noreferrer">https://www.w3schools.com/python/python_inheritance.asp</a></p>
<p>If you don't need to add anything else to the child class, then you can just do</p>
<pre><code>class SecondClass(BaseClass):
pass
</code></pre>
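<p>The same mechanics can be seen with plain Python classes, independent of Django (models just add database behavior on top): the child inherits every attribute and method, and <code>pass</code> is enough when nothing new is added. A minimal sketch, unrelated to the models above:</p>

```python
class Base:
    def __init__(self, name, email=""):
        self.name = name
        self.email = email

    def __str__(self):
        # fall back to the email when no name is set
        return self.name or self.email


class Child(Base):
    pass  # inherits __init__ and __str__ unchanged


c = Child("", email="a@b.se")
```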
|
python|django|oop|inheritance|django-models
| 0 |
1,908,947 | 68,039,457 |
Rasa error when set use_entities to [ ] in domaim.yml
|
<p>I want certain intents to not have any entities and I get an error when trying to do this according to the documentation <a href="https://rasa.com/docs/rasa/domain#ignoring-entities-for-certain-intents" rel="nofollow noreferrer">https://rasa.com/docs/rasa/domain#ignoring-entities-for-certain-intents</a></p>
<p>The domain.yml file</p>
<pre><code>intents:
- greet:
use_entities: []
- goodbye
</code></pre>
<p>I get the following error</p>
<pre><code>/home/venv/lib/python3.8/site-packages/rasa/shared/utils/io.py:97: UserWarning: Loading domain from 'domain.yml' failed. Using empty domain. Error: 'In the `domain.yml` file, the intent 'greet' cannot have value of `<class 'NoneType'>`. If you have placed a ':' character after the intent's name without adding any additional parameters to this intent then you would need to remove the ':' character. Please see https://rasa.com/docs/rasa/domain for more information on how to correctly add `intents` in the `domain` and https://rasa.com/docs/rasa/domain#intents for examples on when to use the ':' character after an intent's name.'
</code></pre>
|
<p>You're just missing an indent:</p>
<pre><code>intents:
- greet:
use_entities: []
- goodbye
</code></pre>
|
python|rasa-nlu|rasa|rasa-core
| 2 |
1,908,948 | 59,101,014 |
Selenium Is not typing right thing with for loop python
|
<p>I want to fill the data into an input form.</p>
<p>I have 250 records in my sheet.</p>
<p>I want to fill in a country with its currency and code and click submit, then the second country, currency and code, then the third.</p>
<p>But the problem is when I try to run this code:</p>
<p>It's not changing the country and currency. For example, if I had two countries, Afghanistan and Albania,</p>
<p>my script should write Afghanistan the first time and hit submit, and the next time it should write Albania,</p>
<p>but it's only typing Afghanistan again and again!</p>
<p>Here is a video so you can understand better!</p>
<p><a href="https://i.stack.imgur.com/nP3nr.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nP3nr.gif" alt="enter image description here"></a></p>
<p>Here is my code:</p>
<pre><code>from selenium import webdriver
import pandas as pd
count = pd.read_csv('c.csv')
list_of_c = count['COUNTRY'].to_list()
cur = pd.read_csv('cur.csv')
list_of_cur = cur['CUR'].to_list()
co = pd.read_csv('code.csv')
list_of_code = co['CODE'].to_list()
def code():
driver = webdriver.Chrome()
driver.get('http://lachisolutions.com/bitcoinerrs/countries.php')
for code in list_of_code:
for countrys in list_of_c:
for curs in list_of_cur:
country = driver.find_element_by_css_selector('.text-center+ .form-group .form-control').send_keys(str(countrys))
currency = driver.find_element_by_css_selector('.form-group:nth-child(3) .form-control').send_keys(str(curs))
code = driver.find_element_by_css_selector('.form-group~ .form-group+ .form-group .form-control').send_keys(str(code))
button = driver.find_element_by_css_selector('.btn-block').click()
code()
</code></pre>
<p><strong>c.csv</strong></p>
<pre><code>COUNTRY
Afghanistan
Albania
Algeria
American Samoa
Andorra
Angola
Anguilla
Antigua and Barbuda
Argentina
</code></pre>
<p>It's only writing the currency the right way.</p>
|
<p>Try this:</p>
<pre><code>from selenium import webdriver
import pandas as pd
count = pd.read_csv('c.csv')
list_of_c = count['COUNTRY'].to_list()
cur = pd.read_csv('cur.csv')
list_of_cur = cur['CUR'].to_list()
co = pd.read_csv('code.csv')
list_of_code = co['CODE'].to_list()
def code():
driver = webdriver.Chrome()
driver.get('http://lachisolutions.com/bitcoinerrs/countries.php')
for code in list_of_code:
i = 0
while True:
try:
country = driver.find_element_by_css_selector('.text-center+ .form-group .form-control').send_keys(str(list_of_c[i]))
currency = driver.find_element_by_css_selector('.form-group:nth-child(3) .form-control').send_keys(str(list_of_cur[i]))
code = driver.find_element_by_css_selector('.form-group~ .form-group+ .form-group .form-control').send_keys(str(code))
button = driver.find_element_by_css_selector('.btn-block').click()
i+=1
except Exception as e:
break
code()
</code></pre>
<p>You should also provide the expected output format along with the values in the other csv files.</p>
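<p>Assuming the three CSV files are row-aligned (same length, same order), the manual index bookkeeping can also be replaced by iterating them in lockstep with <code>zip</code> — a sketch with made-up sample rows (the currency and code values are hypothetical):</p>

```python
countries = ["Afghanistan", "Albania"]
currencies = ["AFN", "ALL"]   # hypothetical sample values
codes = ["93", "355"]         # hypothetical sample values

rows = list(zip(countries, currencies, codes))
for country, currency, code in rows:
    pass  # fill the form with this row's values, then click submit
```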
|
python|selenium
| 1 |
1,908,949 | 72,916,496 |
Recursively unzipping a folder with ZipFile/Python
|
<p>I am trying to write a script which can unzip something like this:</p>
<ul>
<li>Great grandfather.zip
<ul>
<li>Grandfather.zip
<ul>
<li>Father.zip
<ul>
<li>Child.txt</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>What I have so far:</p>
<pre><code>from os import listdir
import os
from zipfile import ZipFile, is_zipfile
#Current Directory
mypath = '.'
def extractor(path):
for file in listdir(path):
if(is_zipfile(file)):
print(file)
with ZipFile(file,'r') as zipObj:
path = os.path.splitext(file)[0]
zipObj.extractall(path)
extractor(path)
extractor(mypath)
</code></pre>
<p>I can unzip great grandfather, but when I call the extractor again with grandfather as the path, it doesn't go inside the if statement, even though I can list the contents of grandfather.</p>
|
<p>Replace <code>extractor(path)</code> by these two lines:</p>
<ul>
<li><code>os.chdir(path)</code></li>
<li><code>extractor('.')</code></li>
</ul>
<p>So your code becomes:</p>
<pre><code>from os import listdir
import os
from zipfile import ZipFile, is_zipfile
#Current Directory
mypath = '.'
def extractor(path):
for file in listdir(path):
if(is_zipfile(file)):
print(file)
with ZipFile(file,'r') as zipObj:
path = os.path.splitext(file)[0]
zipObj.extractall(path)
os.chdir(path)
extractor('.')
extractor(mypath)
</code></pre>
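<p>An alternative sketch avoids <code>os.chdir</code> altogether (changing the working directory is a side effect that persists after the recursion returns) by building full paths with <code>os.walk</code>; the <code>extract_nested</code> name is made up:</p>

```python
import os
from zipfile import ZipFile, is_zipfile

def extract_nested(path):
    # extract every zip found under path next to itself,
    # then recurse into the freshly extracted directory
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            if is_zipfile(full):
                target = os.path.splitext(full)[0]
                with ZipFile(full) as zf:
                    zf.extractall(target)
                extract_nested(target)
```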
|
python|scripting
| 0 |
1,908,950 | 62,303,478 |
Create new dataframe column based on a specific character of a string in another column, pandas
|
<p>I have a dataframe with a column that contains strings. I want to know if it is possible to create a new column based on the dataframe.
This is an example of the column:</p>
<pre><code> col1
016e3d588c
071b4x718g
011e3d598c
041e0i608g
</code></pre>
<p>I want to create a new column based on the last character of the string. This is what I tried:</p>
<pre><code>for i in DF['col1']:
if i[-1] == 'g':
DF['col2'] = 1
else:
DF['col2'] = 0
</code></pre>
<p>I want the new column like this:</p>
<pre><code>col2
0
1
0
1
</code></pre>
<p>but my code have the following output:</p>
<pre><code>col2
0
0
0
0
</code></pre>
<p>Is it possible to do this? </p>
<p>Thanks in advance</p>
|
<p>Your loop assigns a scalar to the whole column on every iteration (<code>DF['col2'] = 1</code> sets <em>every</em> row), so only the final iteration's result survives. Use the vectorized <code>str.endswith()</code> instead.</p>
<p><strong>Ex:</strong></p>
<pre><code>df = pd.DataFrame({"Col1": ['016e3d588c', '071b4x718g', '011e3d598c', '041e0i608g']})
df["Col2"] = df["Col1"].str.endswith("g").astype(int)
print(df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> Col1 Col2
0 016e3d588c 0
1 071b4x718g 1
2 011e3d598c 0
3 041e0i608g 1
</code></pre>
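<p>For the more general "last character" phrasing, pandas string indexing works too, as an alternative to <code>endswith</code> — a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": ["016e3d588c", "071b4x718g",
                            "011e3d598c", "041e0i608g"]})
# .str[-1] picks the last character of each string in the column
df["col2"] = (df["col1"].str[-1] == "g").astype(int)
```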
|
python|pandas|dataframe
| 1 |
1,908,951 | 62,383,929 |
ValueError: x and y must have same first dimension, but have shapes (50,) and (1, 50)/ Multiprocessing
|
<p>I am new to multiprocessing and plotting so I wished to make a code that could multi-process a range of heights to get the optimum launch angle for each height through an equation, then plot the results.</p>
<p>My main issue is that when attempting to plot the results I run into an error. My secondary issue is that, after investigating my code for a while, I'm beginning to think the multiprocess is redundant. I still want to use multiprocessing, so any help in that regard is also appreciated. </p>
<p>The equation seems to work fine when I don't wish to plot the results. But when I attempt to plot them I get "ValueError: x and y must have same first dimension, but have shapes (50,) and (1, 50)" which I don't exactly understand. I thought they were both just ranges to 50?</p>
<p>I've been messing around with it for a while attempting to get it to work. I believe it's in the
"list_h = np.array(range(1))" line, since it just doesn't seem right. But that range seems to just repeat the results if I increase it, leading me to think I've structured this multiprocess poorly.</p>
<p>My code is as follows </p>
<pre><code>import numpy as np
from multiprocessing import Pool
import matplotlib.pyplot as plt
def f(x):
speed = 5
ran0 = (speed*speed)/9.8
array_hght = np.array(range(50))
return ((np.arccos(array_hght/(array_hght+ran0)))/2)*(180/np.pi)
if __name__ == '__main__':
with Pool(5) as p:
list_h = np.array(range(1))
list_angles = p.map(f, list_h)
print(list_angles)
array_h = np.array(range(50))
plt.plot(array_h, list_angles)
plt.show()
</code></pre>
<p>Any help is greatly appreciated :).</p>
|
<p>You're almost there!! The <code>pool.map()</code> method returns the result as a list by applying <code>f</code> over the given input, which is <code>list_h</code> in this case. Since <code>list_h</code> has only one item, the result will be a list of just one value.</p>
<p>So, all you need to do is to get the first element from <code>list_angles</code> like so:</p>
<pre><code>if __name__ == '__main__':
with Pool(5) as p:
list_h = np.array(range(1))
list_angles = p.map(f, list_h)[0] #<--- HERE
array_h = np.array(range(50))
print(array_h.shape)
plt.plot(array_h, list_angles)
plt.show()
</code></pre>
<p>And running the previous code will result in the following graph:
<a href="https://i.stack.imgur.com/60vLG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/60vLG.png" alt="enter image description here"></a></p>
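<p>To make the pool genuinely do per-height work (rather than every worker recomputing the whole array), the mapped function could take a single height and the heights list could be what gets mapped — a sketch using the same formula as the question (the <code>launch_angle</code> name is made up):</p>

```python
import numpy as np
from multiprocessing import Pool

def launch_angle(height, speed=5.0, g=9.8):
    # optimum launch angle for one height, same formula as in the question
    ran0 = speed * speed / g
    return np.degrees(np.arccos(height / (height + ran0))) / 2.0

heights = list(range(50))

if __name__ == "__main__":  # guard so worker processes don't rerun this
    with Pool(5) as p:
        angles = p.map(launch_angle, heights)
```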
|
python|matplotlib|multiprocessing|valueerror
| 1 |
1,908,952 | 62,109,787 |
Reading file in the same folder - Improvement?
|
<p>I am writing a Python script. I was having some problems opening the file. The error was always that the system just could not find the file.</p>
<p>Because of that I tried getting the active path, replacing backslashes, and so on...</p>
<p>Are there any improvements for working with a file in the same folder? </p>
<h2>The Code</h2>
<pre><code>import os
# The name of the txt file that is in the same folder.
myFile = 'noticia.txt'
# Getting the active script
diretorio = os.path.dirname(os.path.abspath(__file__))
# Replace BackSlash and concatenate myFile
correctPath = diretorio.replace("\\", "/") + "/" + myFile
# Open file
fileToRead = open(correctPath, "r")
# Store text in a variable
myText = fileToRead.read()
# Print
print(myText)
</code></pre>
<h2>Note:</h2>
<p>The script is in the same folder of the txt file.</p>
|
<blockquote>
<p>Are there any improvements for working with a file in the same folder? </p>
</blockquote>
<p>First off, please see PEP 8 for standard conventions on variable names.</p>
<pre><code>correctPath = diretorio.replace("\\", "/") + "/" + myFile
</code></pre>
<p>While forward slashes are preferred when you specify a new path in your code, there is no need to replace the backslashes in a path that Windows gives you. Python and/or Windows will translate behind the scenes as necessary.</p>
<p>However, it would be better to use <code>os.path.join</code> to combine the path components (something like <code>correct_path = os.path.join(diretorio, my_file)</code>).</p>
<pre><code>fileToRead = open(correctPath, "r")
# Store text in a variable
myText = fileToRead.read()
</code></pre>
<p>It is better to use a <code>with</code> block to manage the file, which ensures that it is closed properly, like so:</p>
<pre><code>with open(correct_path, 'r') as my_file:
my_text = my_file.read()
</code></pre>
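<p>On Python 3.4+, <code>pathlib</code> condenses the whole join-and-read dance. A sketch — the <code>read_sibling</code> helper name is made up:</p>

```python
from pathlib import Path

def read_sibling(script_path, filename):
    # read a file that lives in the same folder as the given script
    return Path(script_path).with_name(filename).read_text()

# in the real script: my_text = read_sibling(__file__, "noticia.txt")
```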
|
python|python-os
| 0 |
1,908,953 | 35,756,549 |
partial_fit Sklearn's MLPClassifier
|
<p>I've been trying to use Sklearn's neural network MLPClassifier. I have a dataset that is of size 1000 instances (with binary outputs) and I want to apply a basic Neural Net with 1 hidden layer to it. </p>
<p>The issue is that my data instances are not available all at the same time. At any point in time, I only have access to 1 data instance. I thought that partial_fit method of MLPClassifier can be used for this so I simulated the problem with an imaginary dataset of 1000 inputs and looped over the inputs one at a time and partial_fit to each instance but when I run the code, the neural net learns nothing and the predicted output is all zeros.</p>
<p>I am clueless as to what might be causing the problem. Any thoughts are hugely appreciated.</p>
<pre><code>from __future__ import division
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
#Creating an imaginary dataset
input, output = make_classification(1000, 30, n_informative=10, n_classes=2)
input= input / input.max(axis=0)
N = input.shape[0]
train_input = input[0:N/2,:]
train_target = output[0:N/2]
test_input= input[N/2:N,:]
test_target = output[N/2:N]
#Creating and training the Neural Net
clf = MLPClassifier(activation='tanh', algorithm='sgd', learning_rate='constant',
alpha=1e-4, hidden_layer_sizes=(15,), random_state=1, batch_size=1,verbose= True,
max_iter=1, warm_start=True)
classes=[0,1]
for j in xrange(0,100):
for i in xrange(0,train_input.shape[0]):
input_inst = [train_input[i,:]]
input_inst = np.asarray(input_inst)
target_inst= [train_target[i]]
target_inst = np.asarray(target_inst)
clf=clf.partial_fit(input_inst,target_inst,classes)
#Testing the Neural Net
y_pred = clf.predict(test_input)
print y_pred
</code></pre>
|
<h2>Explanation of the problem</h2>
<p>The problem is with <code>self.label_binarizer_.fit(y)</code> in line 895 in <code>multilayer_perceptron.py</code>. </p>
<p>Whenever you call <code>clf.partial_fit(input_inst,target_inst,classes)</code>, you call <code>self.label_binarizer_.fit(y)</code> where <code>y</code> has only one sample corresponding to one class, in this case. Therefore, if the last sample is of class 0, then your <code>clf</code> will classify everything as class 0.</p>
<h2>Solution</h2>
<p>As a temporary fix, you can edit <code>multilayer_perceptron.py</code> at line 895.
It is found in a directory similar to this <code>python2.7/site-packages/sklearn/neural_network/</code></p>
<p>At line 895, change,</p>
<pre><code>self.label_binarizer_.fit(y)
</code></pre>
<p>to</p>
<pre><code>if not incremental:
self.label_binarizer_.fit(y)
else:
self.label_binarizer_.fit(self.classes_)
</code></pre>
<p>That way, if you are using <code>partial_fit</code>, then <code>self.label_binarizer_</code> fits on the classes rather than on the individual sample.</p>
<p>Further, the code you posted can be changed to the following to make it work,</p>
<pre><code>from __future__ import division
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
#Creating an imaginary dataset
input, output = make_classification(1000, 30, n_informative=10, n_classes=2)
input= input / input.max(axis=0)
N = input.shape[0]
train_input = input[0:N/2,:]
train_target = output[0:N/2]
test_input= input[N/2:N,:]
test_target = output[N/2:N]
#Creating and training the Neural Net
# 1. Disable verbose (verbose is annoying with partial_fit)
clf = MLPClassifier(activation='tanh', algorithm='sgd', learning_rate='constant',
alpha=1e-4, hidden_layer_sizes=(15,), random_state=1, batch_size=1,verbose= False,
max_iter=1, warm_start=True)
# 2. Set what the classes are
clf.classes_ = [0,1]
for j in xrange(0,100):
for i in xrange(0,train_input.shape[0]):
input_inst = train_input[[i]]
target_inst= train_target[[i]]
clf=clf.partial_fit(input_inst,target_inst)
# 3. Monitor progress
print "Score on training set: %0.8f" % clf.score(train_input, train_target)
#Testing the Neural Net
y_pred = clf.predict(test_input)
print y_pred
# 4. Compute score on testing set
print clf.score(test_input, test_target)
</code></pre>
<p>There are 4 main changes in the code. This should give you a good prediction on both the training and the testing set!</p>
<p>Cheers.</p>
|
python|scikit-learn|neural-network|classification
| 8 |
1,908,954 | 58,916,504 |
Running dash on JupyterLab returns an error
|
<p>I tried to follow an example of building a data dashboard from <a href="https://towardsdatascience.com/build-your-own-data-dashboard-93e4848a0dcf" rel="nofollow noreferrer">here</a>.<br>
This is the code run on JupyterLab. </p>
<pre><code>import dash
from dash.dependencies import Input, Output
import dash_core_components as dcc
import dash_html_components as html
</code></pre>
<p>But activating the Dash server using the following code returns an error. </p>
<pre><code>app = dash.Dash(__name__)
server = app.server
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
<p>Error message: </p>
<pre><code>An exception has occurred, use %tb to see the full traceback.
SystemExit: 1
C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py:3334: UserWarning:
To exit: use 'exit', 'quit', or Ctrl-D.
</code></pre>
<p>I changed <code>debug=True</code> to <code>debug=False</code>, but the server (<code>http://127.0.0.1:8050/</code>) is not opening. Instead, it keeps processing without opening a new page.<br>
Thanks for your help.</p>
|
<p>You need to install <a href="https://pypi.org/project/jupyter-plotly-dash/" rel="nofollow noreferrer">jupyter-plotly-dash</a>, but it is in an "alpha" development phase. It is recommended to install and run it in a <code>virtualenv</code>; it is known to give issues when using the system Python.</p>
<p>You will probably also need to run:</p>
<pre><code>sudo pip3 install jupyter_server_proxy
jupyter serverextension enable jupyter_server_proxy
</code></pre>
<p>See also <a href="https://github.com/GibbsConsulting/jupyter-plotly-dash/issues/6" rel="nofollow noreferrer">this discussion</a> on github.</p>
|
python|dashboard|plotly-dash
| 0 |
1,908,955 | 58,780,263 |
Django: list index out of range
|
<p>I have the following <code>MultilingualQuerySet</code>: <code>super_guest = self.request.event.surveys.get_super_guests()</code></p>
<p>Out of this, I filter a variable that I return as a context variable. (There are several different context variables.)</p>
<pre><code>context["reason_for_attending"] = list(filter(
lambda question: question.focus == QuestionFocus.REASON_FOR_ATTENDING,
super_guest
))[0]
</code></pre>
<p>Now it all works great as long there is an entry in the database. However, it can happen, that there are no "responses" yet. Then I get a <code>list index out of range</code> error. The reason is the <code>[0]</code>. Do you have a solution in mind?</p>
|
<p>The reason this happens is because no item in <code>super_guest</code> matches the given condition (and <code>super_guest</code> might simply be empty as well).</p>
<p>You can use <a href="https://docs.python.org/3.8/library/functions.html#next" rel="nofollow noreferrer"><strong><code>next(..)</code></strong> [python-doc]</a> here, and pass a default value, for example:</p>
<pre><code>context['reason_for_attending'] = <b>next(</b>filter(
lambda question: question.focus == QuestionFocus.REASON_FOR_ATTENDING,
super_guest
)<b>, None)</b></code></pre>
<p>If there are no elements, then <code>context['reason_for_attending']</code> will be <code>None</code>. You can then do some proper rendering in the template.</p>
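<p>As a standalone sketch of the <code>next(filter(...), default)</code> pattern (plain integers stand in for the question objects here, which is an assumption for illustration):</p>

```python
questions = [1, 3, 4, 5]

# First element matching the predicate, or None if nothing matches
first_even = next(filter(lambda q: q % 2 == 0, questions), None)
no_match = next(filter(lambda q: q > 10, questions), None)

print(first_even)  # 4
print(no_match)    # None
```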
|
python|django
| 1 |
1,908,956 | 15,749,100 |
Exporting a 3D numpy to a VTK file for viewing in Paraview/Mayavi
|
<p>For those that want to export a simple 3D numpy array (along with axes) to a .vtk (or .vtr) file for post-processing and display in Paraview or Mayavi there's a little module called <a href="https://bitbucket.org/pauloh/pyevtk" rel="noreferrer">PyEVTK</a> that does exactly that. The module supports structured and unstructured data etc..
Unfortunately, even though the code works fine on Unix-based systems, I couldn't make it work (it keeps crashing) on any Windows installation, which simply makes things complicated. I've contacted the developer but his suggestions did not work.</p>
<p>Therefore my question is:
How can one use the <code>from vtk.util import numpy_support</code> function to export a 3D array (the function itself doesn't support 3D arrays) to a .vtk file? Is there a simple way to do it without creating vtkDatasets etc etc?</p>
<p>Thanks a lot!</p>
|
<p>It's been forever and I had entirely forgotten asking this question but I ended up figuring it out. I've written a post about it in my blog (PyScience) providing a tutorial on how to convert between NumPy and VTK. Do take a look if interested: </p>
<p><a href="http://pyscience.wordpress.com/2014/09/06/numpy-to-vtk-converting-your-numpy-arrays-to-vtk-arrays-and-files/" rel="noreferrer">pyscience.wordpress.com/2014/09/06/numpy-to-vtk-converting-your-numpy-arrays-to-vtk-arrays-and-files/</a></p>
|
numpy|vtk|paraview
| 11 |
1,908,957 | 15,879,315 |
What is the difference between ndarray and array in NumPy?
|
<p>What is the difference between <a href="https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html" rel="nofollow noreferrer"><code>ndarray</code></a> and <a href="https://numpy.org/doc/stable/reference/generated/numpy.array.html" rel="nofollow noreferrer"><code>array</code></a> in NumPy? Where is their implementation in the NumPy source code?</p>
|
<p><code>numpy.array</code> is just a convenience function to create an <code>ndarray</code>; it is not a class itself. </p>
<p>You can also create an array using <code>numpy.ndarray</code>, but it is not the recommended way. From the docstring of <code>numpy.ndarray</code>: </p>
<blockquote>
<p>Arrays should be constructed using <code>array</code>, <code>zeros</code> or <code>empty</code> ... The parameters given here refer to a
low-level method (<code>ndarray(...)</code>) for instantiating an array.</p>
</blockquote>
<p>Most of the meat of the implementation is in C code, <a href="https://github.com/numpy/numpy/tree/master/numpy/core/src/multiarray" rel="noreferrer">here in multiarray</a>, but you can start looking at the ndarray interfaces here:</p>
<p><a href="https://github.com/numpy/numpy/blob/master/numpy/core/numeric.py" rel="noreferrer">https://github.com/numpy/numpy/blob/master/numpy/core/numeric.py</a></p>
|
python|arrays|numpy|multidimensional-array|numpy-ndarray
| 303 |
1,908,958 | 59,801,940 |
(python utf-8) using 'à','ç','é','è','ê','ë','î','ô','ù'
|
<p>I am having trouble with accents in Python.</p>
<p>I wrote <code># -*- coding: utf-8 -*-</code> so it can recognize the accents.
But still sometimes it doesn't work. I get '?' and when I use it afterwards I get an error: "SyntaxError: Non-ASCII character '\xc3'"</p>
<p>Why ? What should I change? Thanks</p>
<p>(doesn't work for all those characters 'à','ç','é','è','ê','ë','î','ô','ù',"‘","’")</p>
<p>this is my code : </p>
<pre><code># -*- coding: utf-8 -*-
testList = ['à','ç','é','è','ê','ë','î','ô','ù',"‘","’"]
testCharacter = raw_input('test a character : ') # example : é
print(testCharacter) # getting é
print(testCharacter[0]) # getting ?
print(testCharacter + testCharacter[0]) # getting é?
testCharacterPosition = testList.index(testCharacter)
print(testCharacterPosition) #getting 2
</code></pre>
<p>this is the result on my console :</p>
<pre><code>test a character : é
é
?
é?
2
</code></pre>
|
<p>It seems you are still using python2 (you should consider switching to python3 since python2 is discontinued).</p>
<p>If you paste some UTF-8 string, it is encoded and therefore consists of multiple bytes, e.g.:</p>
<pre><code>>>> s = 'à'
>>> s
'\xc3\xa0'
>>> s[0]
'\xc3'
</code></pre>
<p>Of course this will print a question mark, since one byte alone doesn't make the full character:</p>
<pre><code>>>> print(s + s[0])
à�
</code></pre>
<p>you can convert this to a unicode string, which then consists of one character:</p>
<pre><code>>>> s.decode('utf-8')
u'\xe0'
>>> print(s.decode('utf-8'))
à
</code></pre>
<p>You can get around decode when directly using unicode strings in py2:</p>
<pre><code>>>> s = u'à'
>>> s
u'\xe0'
</code></pre>
<p>Better would be to use python3, which simplifies the whole thing to:</p>
<pre><code>>>> s = 'à'
>>> s
'à'
>>>
</code></pre>
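<p>The byte/character split is easy to see in Python 3, where encoding is an explicit step (a small sketch):</p>

```python
s = 'à'
encoded = s.encode('utf-8')  # explicit encoding step in Python 3

print(len(s))        # 1 -> one character
print(len(encoded))  # 2 -> two bytes (0xc3 0xa0) in UTF-8

decoded = encoded.decode('utf-8')
print(decoded)       # à
```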
|
python|utf-8|diacritics
| 2 |
1,908,959 | 25,402,293 |
Executing function after python test suite finished execution
|
<p>I'm using python unittest frame work for do some testing. </p>
<pre><code>class AbstractTest(unittest.TestCase):

    def setUp(self):

    def tearDown(self):
        # Close!
        self.transport.close()

    def testVoid(self):
        self.client.testVoid()

    def testString(self):
        global test_basetypes_fails
        try:
            self.assertEqual(self.client.testString('Python' * 20), 'Python' * 20)
        except AssertionError, e:
            test_basetypes_fails = True
            print test_basetypes_fails
            raise AssertionError( e.args )
        try:
            self.assertEqual(self.client.testString(''), '')
        except AssertionError, e:
            test_basetypes_fails = True
            raise AssertionError( e.args )

    def testByte(self):
        global test_basetypes_fails
        try:
            self.assertEqual(self.client.testByte(63), 63)
        except AssertionError, e:
            test_basetypes_fails = True
            raise AssertionError( e.args )
        try:
            self.assertEqual(self.client.testByte(-127), -127)
        except AssertionError, e:
            test_basetypes_fails = True
            raise AssertionError( e.args )

    @classmethod
    def tearDownClass(cls):
        #sys.exit(1)
</code></pre>
<p>When I execute my test I am getting following result.</p>
<pre><code>..................
----------------------------------------------------------------------
Ran 18 tests in 2.715s
OK
</code></pre>
<p>I need to execute a piece of program after this finishes execution. How can I do that? When I add code to class level tear down it executes it after following part of output is made.</p>
<pre><code>..................
</code></pre>
|
<p>You need to write your own testrunner, so that you can return with an exit-code depending on the result of the suite.</p>
<p>All you need to do is explained in the unittest-module documentation. Use a TestLoader to load your suite, and a TextTestRunner to run it. Then depending on the result of the suite, call sys.exit with your appropriate exit-code.</p>
|
python|unit-testing|python-unittest
| 2 |
1,908,960 | 24,944,723 |
Is there a best-practice for looping "all the way through" an array in C++?
|
<p>In Python, one can compare each element of an array with the "next" element (including the last element with the first element) with the following code:</p>
<pre><code>a = [0, 1, 2, 3]
for i in range(-1, 3):
if a[i] + 1 >= a[i+1]:
print a[i], 'works'
</code></pre>
<p>The output:</p>
<pre><code>3 works
0 works
1 works
2 works
</code></pre>
<p>If I want to compare the first element of an array with the second, the second with the third, etc., <strong>and finally the last with the first</strong>, I can do so with just a loop in Python.</p>
<p>Can I do this in C++? Can I loop though elements in this manner whilst staying entirely in one loop? To further illustrate what I mean, here is some C++ code that has the same functionality as the above Python code.</p>
<pre><code>int a[] = {0, 1, 2, 3};
std::cout << a[3] << std::endl;
for(int i = 0; i < 3; i++)
    std::cout << a[i] << std::endl;
</code></pre>
<p>It has the same output. My point is, for certain algorithms, I'd have to manually duplicate the contents of my <code>for</code> loops for certain steps. For example, if I want to see if the first element equals the second, the second the third, etc., and finally if the last equals the first, I'd have to manually add this step after or before my <code>for</code> loop. ---<code>if (a[0] == a[3]) foo();</code></p>
<p>Is this what I am supposed to do in C++? I am quite new to it and I don't want to get rooted in bad practices.</p>
|
<pre><code>for (int i=3; i<(3 + size_of_array); ++i)
std::cout << a[i % size_of_array] << '\n';
</code></pre>
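<p>The same wrap-around indexing works in Python too, without relying on negative indices — each element is paired with its successor, and the last wraps back to the first (a sketch):</p>

```python
a = [0, 1, 2, 3]
n = len(a)

# Compare each element with the "next" one, wrapping the last back to the first
pairs = [(a[i % n], a[(i + 1) % n]) for i in range(n)]
print(pairs)  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```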
|
python|c++|c|arrays|for-loop
| 8 |
1,908,961 | 71,060,060 |
i dont understand how can I put an amount in these letters?
|
<pre><code>totalBought = float(input("how much you bought in total:"))
discountAmt = input("what is your status:")

if discountAmt == "S":
    S == 10
elif discountAmt == "O":
    O == 5
elif discountAmt == "M":
    M == 15
elif discountAmt == "E":
    E == 20

totalPrice = totalBought - discountAmt
print(totalPrice)
</code></pre>
<h2>These letters are discount codes: students, old people, PWDs, and VIPs</h2>
|
<p>You can try something using a dictionary:</p>
<p>With a dictionary the code is more readable and avoids the if/else chain.</p>
<pre><code>totalBought = float(input("how much you bought in total:"))
discountDict = {
    'S': 10,
    'O': 5,
    'M': 15,
    'E': 20
}
discountAmt = input("what is your status:")
totalPrice = totalBought - discountDict[discountAmt]
print(totalPrice)
</code></pre>
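<p>One caveat: <code>discountDict[discountAmt]</code> raises <code>KeyError</code> for an unknown status. Using <code>dict.get</code> with a default avoids that (a sketch; the zero-discount fallback is my assumption):</p>

```python
discounts = {'S': 10, 'O': 5, 'M': 15, 'E': 20}

def total_price(total_bought, status):
    # .get falls back to 0 for an unknown status instead of raising KeyError
    return total_bought - discounts.get(status, 0)

print(total_price(100, 'S'))  # 90
print(total_price(100, 'X'))  # 100 (unknown status, no discount)
```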
|
python|conditional-statements
| 3 |
1,908,962 | 60,194,691 |
Right data structure or library to store a sequential workflow
|
<p>Here's my problem: </p>
<p>I have a user interface that basically consists of creating events. </p>
<p>Each event consists of multiple phases in sequence.
Each phase consists of tasks; these tasks are just containers for the fields entered.
Each task consists of fields to be entered by the user. So basically a task is the smallest unit of work.</p>
<p>Here's what it finally looks like, as an example:</p>
<pre><code>EVENT:
Pre-operation-phase:
taskA
has a bunch of form fields (label: value) that are entered and saved in backend
taskB
has a bunch of form fields (label: value) that are entered and saved in backend
taskC
has a bunch of form fields (label: value) that are entered and saved in backend
..
..
Operation-phase:
taskA
has a bunch of form fields (label: value) that are entered and saved in backend
taskB
has a bunch of form fields (label: value) that are entered and saved in backend
taskC
has a bunch of form fields (label: value) that are entered and saved in backend
taskD
a START OPERATION BUTTON that sends a request to an external service
Post-operation-phase:
taskA
has a bunch of form fields (label: value) that are entered and saved in backend
taskB
has a bunch of form fields (label: value) that are entered and saved in backend
taskC
has a bunch of form fields (label: value) that are entered and saved in backend
..
..
End-phase:
taskA
has a bunch of form fields (label: value) that are entered and saved in backend
taskB
has a bunch of form fields (label: value) that are entered and saved in backend
taskC
has a bunch of form fields (label: value) that are entered and saved in backend
..
</code></pre>
<p>Would a linked list be appropriate for this type of model? PhaseObj linked list -->
PhaseObj --> PhaseObj --> PhaseObj</p>
<p>Each Phase Obj has the following data linked lists (tasks)
TaskObj --> TaskObj --> TaskObj</p>
<p>Each Task Obj contains fields and operations.
So, the Phase->Tasks->Fields consist of a sequential workflow. An admin could define and create
a bunch of such Workflows that could be attached to an event.</p>
<p>How can I store this in a NoSQL backend ?</p>
<p>Please recommend if this is the right data structure? or any lean third party or built in Python library to create this kind of sequential workflow.</p>
|
<p>The most suitable data structure for your case is a graph.
Check out the <a href="https://en.wikipedia.org/wiki/Directed_acyclic_graph" rel="nofollow noreferrer">Directed Acyclic Graph</a>. I hope this will help you.</p>
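<p>For the NoSQL side, since the workflow is strictly sequential, one event can also be stored as a single nested document in a document-oriented store (a minimal sketch; all names and fields are illustrative assumptions):</p>

```python
# One event as a nested document: phases contain tasks, tasks contain fields.
# Lists preserve insertion order, so the sequence is implicit in the document.
event = {
    "name": "event-001",
    "phases": [
        {"name": "pre-operation", "tasks": [
            {"name": "taskA", "fields": {"label1": "", "label2": ""}},
        ]},
        {"name": "operation", "tasks": [
            {"name": "taskD", "action": "start_operation"},
        ]},
    ],
}

phase_order = [phase["name"] for phase in event["phases"]]
print(phase_order)
```

<p>A document like this maps directly onto stores such as MongoDB, where one event is one document and the phase/task ordering is carried by the arrays.</p>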
|
python|oop|data-structures
| 1 |
1,908,963 | 60,311,884 |
How to plot single pixel values from 3d NumPy array?
|
<p>I have a stack of 7 images of <code>288 x 288</code> pixels that I have converted to a 3d NumPy array</p>
<pre class="lang-none prettyprint-override"><code>newarray.shape = (288, 288, 7)
</code></pre>
<p>I want to plot a particular pixel value from each of the 7 images and plot it as a graph with y axis showing pixel values and x axis showing the image number.</p>
|
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import numpy as np
# NumPy array storing images
images = np.random.randint(0, 255, (288, 288, 7), np.uint8)
# Get pixel values across all images of pixel of interest
x, y = (8, 3)
pixels = images[y, x, :]
# Plot
plt.plot(np.arange(images.shape[2]), pixels)
plt.ylim(0, 255)
plt.title('Pixel values for x=' + str(x) + ', y=' + str(y))
plt.tight_layout()
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/ThoDR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ThoDR.png" alt="Output"></a></p>
<p>Hope that helps!</p>
<pre class="lang-none prettyprint-override"><code>----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.1
Matplotlib: 3.2.0rc3
NumPy: 1.18.1
----------------------------------------
</code></pre>
|
python|numpy
| 2 |
1,908,964 | 60,310,888 |
Create a 10 minute running average of data with uneven time steps
|
<p>I'm attempting to create a 10 minute running average of wind speeds in order to plot on a graph. My data has numerous uneven time steps within it. I am currently using the CSV module to read in and gather my data, I am not very familiar with pandas and have had issues in the past with it. </p>
<pre><code>import matplotlib.pyplot as plt
import csv
from datetime import datetime
x=[]
y=[]
with open('KART_201901010000_201912310000.txt') as csvfile:
    plots = csv.reader(csvfile, delimiter=',')
    for row in plots:
        if 'M' == row[1]:
            continue
        else:
            x.append(datetime.strptime(row[0], '%Y-%m-%d %H:%M'))
            y.append(int(row[1]))
plt.plot(x,y, label='Wind Speed')
plt.xlabel('Date and Time')
plt.ylabel('Wind Speed (Kts)')
plt.title('Wind Speed\nVersus Time')
plt.legend()
plt.show()
</code></pre>
<p>Here is a snippet of my data set showing one of the many uneven time steps.</p>
<pre><code>2019-11-01 11:40,30
2019-11-01 11:45,35
2019-11-01 11:50,32
2019-11-01 11:55,34
2019-11-01 11:56,33
2019-11-01 12:00,33
2019-11-01 12:05,36
2019-11-01 12:10,31
</code></pre>
<p>The obvious general idea is to use a for loop to continue the calculations that I would need to average the data. The issue I am running into is how do I account for the uneven steps? Is there a way to use datetime to achieve this that I have no idea about?</p>
|
<p>Something along the lines of:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv('KART_201901010000_201912310000.txt', header=None, index_col=0, names=['speed'])
df.index = pd.to_datetime(df.index, format='%Y-%m-%d %H:%M')
df.rolling('10min', min_periods=1).mean()
</code></pre>
<p>I haven't tested it, details might differ. I know you are not familiar with pandas but implementing this feature on your own will take precious time I'm sure you'd gladly invest elsewhere.</p>
<p>This definitely works for the data you provided:</p>
<pre><code>>>> series = pd.Series([30, 35, 32, 34, 33, 33, 36, 31],
index=[pd.Timestamp('2019-11-01 11:40'),
pd.Timestamp('2019-11-01 11:45'),
pd.Timestamp('2019-11-01 11:50'),
pd.Timestamp('2019-11-01 11:55'),
pd.Timestamp('2019-11-01 11:56'),
pd.Timestamp('2019-11-01 12:00'),
pd.Timestamp('2019-11-01 12:05'),
pd.Timestamp('2019-11-01 12:10')])
>>> series.rolling('10min', min_periods=1).mean()
Out:
2019-11-01 11:40:00 30.000000
2019-11-01 11:45:00 32.500000
2019-11-01 11:50:00 33.500000
2019-11-01 11:55:00 33.000000
2019-11-01 11:56:00 33.000000
2019-11-01 12:00:00 33.333333
2019-11-01 12:05:00 34.000000
2019-11-01 12:10:00 33.500000
dtype: float64
</code></pre>
|
python|python-3.x
| 0 |
1,908,965 | 67,799,986 |
Question about dateutil.relativedelta - Why is the output always zero?
|
<p>Why is the output of this relativedelta attribute always zero?
The data file contains two date-time strings per line; the purpose is to get the time difference between the two.</p>
<pre><code># python3.6 time_diff.py
0
0
0
0
# cat data
06/21/2019 21:45:24 06/21/2020 21:45:26
06/21/2019 22:42:25 06/22/2020 01:28:41
06/21/2019 22:41:32 06/21/2020 22:42:32
06/20/2019 23:42:25 06/22/2020 02:42:29
# cat time_diff.py
import dateutil.relativedelta, datetime
f = open("data", "r")
for line in f:
    t1 = datetime.datetime.strptime(line.split()[0] + " " + line.split()[1], "%m/%d/%Y %H:%M:%S")
    t2 = datetime.datetime.strptime(line.split()[0] + " " + line.split()[1], "%m/%d/%Y %H:%M:%S")
    rd = dateutil.relativedelta.relativedelta(t1, t2)
    print(rd.seconds)
</code></pre>
|
<p>instead of</p>
<pre><code>t1 = datetime.datetime.strptime(line.split()[0] + " " + line.split()[1], "%m/%d/%Y %H:%M:%S")
t2 = datetime.datetime.strptime(line.split()[0] + " " + line.split()[1], "%m/%d/%Y %H:%M:%S")
</code></pre>
<p>go with</p>
<pre><code>t1 = datetime.datetime.strptime(line.split()[0] + " " + line.split()[1], "%m/%d/%Y %H:%M:%S")
t2 = datetime.datetime.strptime(line.split()[2] + " " + line.split()[3], "%m/%d/%Y %H:%M:%S")
</code></pre>
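<p>With the corrected column indices, the difference comes out as expected. Sticking to the standard library (a sketch, using the question's first data line):</p>

```python
import datetime

line = "06/21/2019 21:45:24 06/21/2020 21:45:26"
parts = line.split()

t1 = datetime.datetime.strptime(parts[0] + " " + parts[1], "%m/%d/%Y %H:%M:%S")
t2 = datetime.datetime.strptime(parts[2] + " " + parts[3], "%m/%d/%Y %H:%M:%S")

diff = t2 - t1
print(diff.days, diff.seconds)  # 366 days (2020 is a leap year) and 2 seconds
```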
|
python|relativedelta|python-relativedelta
| 1 |
1,908,966 | 67,689,445 |
URL not getting stored in a file
|
<p>The URL is not getting stored in the file 'youtube_alarm_videos.txt'. The code seems fine to me, but it is not working. Can someone please help me with the error?
<a href="https://pastebin.com/sjMMS2pZ" rel="nofollow noreferrer">Full code here</a></p>
<pre><code>""" Alarm Clock
----------------------------------------
"""
import datetime
import os
import time
import random
import webbrowser
import winsound
# If video URL file does not exist, create one
url = print("Enter the link to the video below")
while True:
    url_input = input("Enter link:")
    break
else:
    print("Could not find the link entered. Please check and try again")

if not os.path.isfile("youtube_alarm_videos.txt"):
    with open("youtube_alarm_videos.txt", "w") as alarm_file:
        alarm_file.write(url_input)
</code></pre>
|
<p>I think you need something like this:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import os
import time
import random
import webbrowser
import winsound
# If video URL file does not exist, create one
def create_file(filename="youtube_alarm_videos.txt"):
    # Create a new, empty file
    with open(filename, 'w') as e:
        pass

url = print("Enter the link to the video below")
while True:
    url_input = input("Enter link:")
    break
else:
    print("Could not find the link entered. Please check and try again")

if not os.path.isfile("youtube_alarm_videos.txt"):
    create_file()

with open("youtube_alarm_videos.txt", 'w') as e:
    e.write(url_input)
</code></pre>
|
python
| 0 |
1,908,967 | 30,364,949 |
isSet() in python threading
|
<p>I would like to understand the function of isSet() in python threading</p>
<p><img src="https://i.stack.imgur.com/xe3nZ.png" alt="enter image description here"></p>
<p>it's being called on function func(1)
<img src="https://i.stack.imgur.com/TbV18.png" alt="enter image description here"></p>
<p>What does this function trigger? I've been searching and did not found any clear answer.</p>
<p>Thanks! </p>
|
<p>Python's threading module has synchronization primitives for coordinating threads, much like Java's threading. You will find a class <code>threading.Event</code> that is a simple synchronization object. The event represents an internal flag, similar to a Java synchronization monitor lock, and threads can wait for the flag to be set or unset.</p>
<p>Let's say a server thread waits for the flag to be set:</p>
<pre><code>>>> import threading
>>> t = threading.Event()
>>> t.wait()
</code></pre>
<p>while the client manipulates the event as follows:</p>
<pre><code>>>> e = threading.Event()
>>> e.isSet()
False
>>> e.set()
>>> e.isSet()
True
>>> e.clear()
>>> e.isSet()
False
</code></pre>
<p>If the flag is set, the wait method doesn’t do anything. If the flag is cleared, wait will block until it becomes set again. Any number of threads may wait for the same event.</p>
<p><img src="https://i.stack.imgur.com/Cooxb.png" alt="enter image description here"></p>
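<p>A minimal, self-contained sketch of the same flag behavior with two threads (using <code>is_set()</code>, the modern spelling of the legacy <code>isSet()</code> alias):</p>

```python
import threading
import time

results = []
event = threading.Event()

def worker():
    event.wait()                     # blocks until the flag is set
    results.append(event.is_set())   # flag is True once wait() returns

t = threading.Thread(target=worker)
t.start()
time.sleep(0.1)   # worker is now parked inside wait()
event.set()       # releases the waiting thread
t.join()
print(results)    # [True]
```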
|
python|multithreading
| 2 |
1,908,968 | 64,089,275 |
Python type hints: What does Callable followed by a TypeVar mean?
|
<p>I am trying to understand the type hint <code>Getter[T]</code> in the following piece of code:</p>
<h3>Simplified example</h3>
<pre class="lang-py prettyprint-override"><code>T = TypeVar('T')
Getter = Callable[[T, str], str]
class AbstractClass(abc.ABC):
@abc.abstractmethod
def extract(
self,
get_from_carrier: Getter[T], # <---- See here
...
) -> Context:
</code></pre>
<p>Help much appreciated since I have been breaking my head over this.</p>
<h3>Original source code</h3>
<p>The original source code is from the <a href="https://github.com/open-telemetry/opentelemetry-python/blob/master/opentelemetry-api/src/opentelemetry/trace/propagation/textmap.py" rel="nofollow noreferrer">OpenTelemetry project file "textmap.py"</a>:</p>
<pre class="lang-py prettyprint-override"><code>import abc
import typing
from opentelemetry.context.context import Context
TextMapPropagatorT = typing.TypeVar("TextMapPropagatorT")
Setter = typing.Callable[[TextMapPropagatorT, str, str], None]
Getter = typing.Callable[[TextMapPropagatorT, str], typing.List[str]]
class TextMapPropagator(abc.ABC):
"""This class provides an interface that enables extracting and injecting
context into headers of HTTP requests.
...
"""
@abc.abstractmethod
def extract(
self,
get_from_carrier: Getter[TextMapPropagatorT],
carrier: TextMapPropagatorT,
context: typing.Optional[Context] = None,
) -> Context:
</code></pre>
|
<p>A Callable followed by a type variable means that the callable is a generic function that takes one or more arguments of generic type <code>T</code>.</p>
<p>The type variable <code>T</code> is a parameter for any generic type.</p>
<p>The line:</p>
<pre class="lang-py prettyprint-override"><code>Getter = Callable[[T, str], str]
</code></pre>
<p>defines <code>Getter</code> as a type alias for a callable function whose arguments are of generic type <code>T</code> and string, and whose return type is string.</p>
<p>Therefore, the line:</p>
<pre class="lang-py prettyprint-override"><code>get_from_carrier: Getter[T]
</code></pre>
<p>defines an argument (<code>get_from_carrier</code>) that is a generic function. And the first argument of the generic function is of generic type <code>T</code>.</p>
<h2>Concrete Example</h2>
<p>This can be better understood by looking at a concrete example. See <code>propagators.extract</code> below from <a href="https://github.com/open-telemetry/opentelemetry-python/blob/master/instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/__init__.py" rel="nofollow noreferrer">"instrumentation/opentelemetry-instrumentation-asgi/src/opentelemetry/instrumentation/asgi/<strong>init</strong>.py "</a>:</p>
<p>In the call <code>propagators.extract</code>, the function <code>get_header_from_scope</code> is a callable function whose first argument is of type <code>dict</code>, and this <code>dict</code> is serving as a <code>TextMapPropagatorT</code>.</p>
<pre class="lang-py prettyprint-override"><code>def get_header_from_scope(scope: dict, header_name: str) -> typing.List[str]:
"""Retrieve a HTTP header value from the ASGI scope.
Returns:
A list with a single string with the header value if it exists, else an empty list.
"""
headers = scope.get("headers")
return [
value.decode("utf8")
for (key, value) in headers
if key.decode("utf8") == header_name
]
...
class OpenTelemetryMiddleware:
"""The ASGI application middleware.
...
"""
...
async def __call__(self, scope, receive, send):
"""The ASGI application ... """
if scope["type"] not in ("http", "websocket"):
return await self.app(scope, receive, send)
token = context.attach(
propagators.extract(get_header_from_scope, scope)
)
</code></pre>
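<p>To see the pieces in isolation, here is a stripped-down, runnable sketch of the same pattern (the <code>dict</code> carrier, function names, and header value are illustrative assumptions, not OpenTelemetry's actual API):</p>

```python
from typing import Callable, TypeVar

CarrierT = TypeVar("CarrierT")
Getter = Callable[[CarrierT, str], str]

def extract(get_from_carrier: Getter[CarrierT], carrier: CarrierT, key: str) -> str:
    # The generic callable is invoked with whatever concrete carrier type is passed in
    return get_from_carrier(carrier, key)

def dict_getter(carrier: dict, name: str) -> str:
    return carrier.get(name, "")

# Here CarrierT is bound to dict at the call site
value = extract(dict_getter, {"traceparent": "00-abc"}, "traceparent")
print(value)  # 00-abc
```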
|
python|generics|type-hinting|python-typing
| 1 |
1,908,969 | 66,625,013 |
python:list index out of range
|
<p>my code is:</p>
<pre><code>def my_sort(list):
for _ in list:
if list[0] > list[1]:
list[0], list[1] = list[1], list[0]
return my_sort(list[1:2])
</code></pre>
<p>but I keep getting this error:
<code>IndexError: list index out of range</code></p>
<p>at this line: <code>if list[0] > list[1]:</code></p>
<p>this is the test code i am using:</p>
<pre><code> def test_my_sort():
lst_test = random.choices(range(-99, 100), k=6)
lst_copy = lst_test.copy()
lst_output = my_sort(lst_test)
assert lst_copy == lst_test, "Fout: my_sort(lst) verandert de inhoud van lijst lst"
assert lst_output == sorted(lst_test), \
f"Fout: my_sort({lst_test}) geeft {lst_output} in plaats van {sorted(lst_test)}"
</code></pre>
|
<p>Your problem stems from this line:</p>
<pre><code>return my_sort(list[1:2])
</code></pre>
<p>In python, we start indices from <code>0</code> and not from <code>1</code>. So this translates to "Indices 1 to 2". The index you "stop at" is not included, so your syntax becomes "Give me index 1". The correct solution would be:</p>
<pre><code>return my_sort(list[0:2])
</code></pre>
<p>or better yet:</p>
<pre><code>return my_sort(list[:2])
</code></pre>
<p>Since empty here means from start.</p>
<p>Edit:
I haven't solved your recursion issue, I only solved your list index issue. However, please do take a look at <a href="https://stackoverflow.com/a/51146466/1961688">this solution for a recursive bubble sort</a>.</p>
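<p>For the recursion side, one common shape for a recursive bubble sort looks like this — a sketch along the lines of the linked solution, not a drop-in fix for the question's exact function:</p>

```python
def my_sort(lst):
    # One full pass bubbles the largest value to the end,
    # then we recurse on everything before it.
    lst = list(lst)  # work on a copy so the caller's list is unchanged
    for i in range(len(lst) - 1):
        if lst[i] > lst[i + 1]:
            lst[i], lst[i + 1] = lst[i + 1], lst[i]
    if len(lst) <= 1:
        return lst
    return my_sort(lst[:-1]) + [lst[-1]]

print(my_sort([34, -2, 7, 0]))  # [-2, 0, 7, 34]
```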
|
python|indexing
| 0 |
1,908,970 | 72,303,759 |
Google Colab: torch cuda is true but No CUDA GPUs are available
|
<p>I use Google Colab to train the model, but like the picture shows that when I input 'torch.cuda.is_available()' and the ouput is 'true'. And then I run the code but it has the error that RuntimeError: No CUDA GPUs are available.</p>
<p><img src="https://i.stack.imgur.com/74sAB.png" alt="The screen of google colab" /></p>
|
<p>Try to install the cudatoolkit version you want to use:</p>
<pre><code>conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
</code></pre>
|
pytorch|google-colaboratory
| 1 |
1,908,971 | 72,207,081 |
How to access the save results of yolov5 in different folder?
|
<p>I am using the below code to load the trained custom Yolov5 model and perform detections.</p>
<pre><code>import cv2
import torch
from PIL import Image
model = torch.hub.load('ultralytics/yolov5', 'custom',
path='yolov5/runs/train/exp4/weights/best.pt', force_reload=True)
img = cv2.imread('example.jpeg')[:, :, ::-1] # OpenCV image (BGR to RGB)
results = model(img, size=416)
</code></pre>
<p>#To display and save results I am using:</p>
<pre><code>results.print()
results.save()
results.show()
</code></pre>
<p>My question is how can I save the results in different directory so that I can use them in my web-based application. For your reference I am using Streamlit. For instance, at the moment, results (image) are being saved in runs\detect\exp*. I want to change it. Can anyone please guide me.</p>
|
<p>You can make changes in the function definition of <code>results.save()</code>, the function can be found in the file <code>yolov5/models/common.py</code>. By default the definition is:</p>
<pre class="lang-py prettyprint-override"><code>def save(self, labels=True, save_dir='runs/detect/exp'):
save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/detect/exp', mkdir=True) # increment save_dir
self.display(save=True, labels=labels, save_dir=save_dir) # save results
</code></pre>
<p>You can make changes in the <code>save_dir</code> argument to the desired save location and the files should be saved in the new directory.</p>
|
web-applications|pytorch|streamlit|yolov5|detectron
| 1 |
1,908,972 | 65,728,037 |
matplotlib DEBUG Turn off when python DEBUG is on to debug rest of program
|
<p>When I turn on DEBUG in the python logger, matplotlib prints 10,000 lines of debug code that I do not want to see. I tried:</p>
<pre><code>plt.set_loglevel("info")
</code></pre>
<p>as in their documentation, but still doesn't turn it off. I put the statement right after importing matplotlib, and tried it right after creating the plot with <code>fig=plt.figure(...)</code>.</p>
<p>Neither works. Help?
ubuntu20.04, python 3.8.5, matplotlib 3.3.3</p>
|
<p>You can create a separate logger for your messages so you log only what you require.</p>
<p>E.g. from <a href="https://docs.python.org/3/howto/logging.html#configuring-logging" rel="nofollow noreferrer">logging doc</a> and <a href="https://matplotlib.org/3.1.1/gallery/lines_bars_and_markers/stackplot_demo.html" rel="nofollow noreferrer">Matplotlib example</a>.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import logging
# create logger
logger = logging.getLogger('no_spam')
logger.setLevel(logging.DEBUG)
# create console handler and set level to debug
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
# create formatter
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
# add formatter to ch
ch.setFormatter(formatter)
# add ch to logger
logger.addHandler(ch)
logger.info("Starting")
x = [1, 2, 3, 4, 5]
y1 = [1, 1, 2, 3, 5]
y2 = [0, 4, 2, 6, 8]
y3 = [1, 3, 5, 7, 9]
y = np.vstack([y1, y2, y3])
labels = ["Fibonacci ", "Evens", "Odds"]
fig, ax = plt.subplots()
ax.stackplot(x, y1, y2, y3, labels=labels)
ax.legend(loc='upper left')
logger.debug("About to show plot")
plt.show()
logger.info("Finished")
</code></pre>
<p>This should generate console output like:</p>
<pre><code>2021-01-15 17:33:17,431 - no_spam - INFO - Starting
2021-01-15 17:33:17,635 - no_spam - DEBUG - About to show plot
2021-01-15 17:33:21,491 - no_spam - INFO - Finished
</code></pre>
<p>Or, if you set the logging level to <code>INFO</code>:</p>
<pre><code>2021-01-15 17:34:18,987 - no_spam - INFO - Starting
2021-01-15 17:34:22,212 - no_spam - INFO - Finished
</code></pre>
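<p>Alternatively — without creating a separate logger — you can raise the threshold of matplotlib's own logger hierarchy while leaving the root logger at DEBUG (a sketch):</p>

```python
import logging

logging.basicConfig(level=logging.DEBUG)
# Raise the threshold for matplotlib's logger hierarchy only;
# every other logger keeps emitting DEBUG records
logging.getLogger('matplotlib').setLevel(logging.WARNING)

mpl_level = logging.getLogger('matplotlib').getEffectiveLevel()
print(mpl_level == logging.WARNING)  # True
```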
|
python|matplotlib
| 0 |
1,908,973 | 65,773,707 |
How to change the index of an element in a list/array to another position/index without deleting/changing the original element and its value
|
<p>For example lets say I have a list as below,</p>
<pre><code>list = ['list4','this1','my3','is2'] or [1,6,'one','six']
</code></pre>
<p>So now I want to change the index of each element to match the number or make sense as I see fit (needn't be number) like so, (basically change the index of the element to wherever I want)</p>
<pre><code>list = ['this1','is2','my3','list4'] or ['one',1,'six',6]
</code></pre>
<p>How do I do this, whether there are numbers or not?</p>
<p>Please help, Thanks in advance.</p>
|
<p><strong>If you don't want to use regex and learn its mini-language, use this simpler method:</strong></p>
<pre><code>list1 = ['list4', 'this1', 'he5re', 'my3', 'is2']

def mySort(string):
    # Sort key: the first digit found in the string, as a number
    if any(char.isdigit() for char in string):
        return [float(char) for char in string if char.isdigit()][0]
    return 0  # strings without any digit sort first (avoids comparing None with float)

list1.sort(key=mySort)
print(list1)  # ['this1', 'is2', 'my3', 'list4', 'he5re']
</code></pre>
<p>Inspired by this answer: <a href="https://stackoverflow.com/a/4289557/11101156">https://stackoverflow.com/a/4289557/11101156</a></p>
|
python|python-3.x|list
| 3 |
1,908,974 | 51,045,895 |
How to solve JSONDecodeError while using WHILE loop
|
<pre><code>while url:
post = session.post(login, data=payload)
r = session.get(url)
parsed = json.loads(r.text)
# Retrieve json product data
if parsed['links']['next'] is not 'null':
url = 'https://testshop.example.com/admin/products' + str(parsed['links']['next'])
time.sleep(2)
for product in parsed['products']:
parsed_result = product['id']
else:
print('stop now!')
break
</code></pre>
<p>So I am using the code above to retrieve and print all the JSON data in my terminal. Everything is going fine until I get the following error at the end:</p>
<pre><code> raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Expecting value
</code></pre>
<p>Does anybody know what the cause is of this and how I can fix it? </p>
<p>This is my JSON format if that matters:</p>
<pre><code>products: [
{
article_code: "123",
barcode: "456",
brand_id: 2600822,
created_at: "2018-05-31T15:15:34+02:00",
data01: "",
data02: "",
data03: "",
delivery_date_id: null,
has_custom_fields: false,
has_discounts: false,
has_matrix: false,
hits: 0,
hs_code: null,
id: 72660113,
image_id: null,
is_visible: false,
price_excl: 33.0165,
price_incl: 39.95,
price_old_excl: 0,
price_old_incl: 0,
product_set_id: null,
product_type_id: null,
search_context: "123 456 789",
shop_id: 252449,
sku: "789",
supplier_id: 555236,
updated_at: "2018-05-31T15:15:34+02:00",
variants_count: 1,
visibility: "hidden",
weight: 0,
nl: {
content: "",
fulltitle: "Grid Lifter",
slug: "grid-lifter",
title: "Grid Lifter"
}
],
links: {
first: ".json",
last: ".json?page=70",
prev: null,
next: ".json?page=2",
count: 3497,
limit: 50,
pages: 70
}
</code></pre>
<p>I am using this to paginate through all the pages. </p>
<p>Traceback:</p>
<pre><code>File "", line 1, in
  runfile('loginlightspeedshop.py', wdir='C:/Users/Solaiman/.spyder-py3/SCRIPTS/Lightspeed scripts')
File "sitecustomize.py", line 705, in runfile
  execfile(filename, namespace)
File "sitecustomize.py", line 102, in execfile
  exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/Solaiman/.spyder-py3/SCRIPTS/Lightspeed scripts/loginshop.py", line 33, in
  parsed = json.loads(r.text)
File "C:\Users\Solaiman\Anaconda3\lib\json\__init__.py", line 354, in loads
  return _default_decoder.decode(s)
File "decoder.py", line 339, in decode
  obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "decoder.py", line 357, in raw_decode
  raise JSONDecodeError("Expecting value", s, err.value) from None

JSONDecodeError: Expecting value
</code></pre>
|
<p>You are probably getting an empty or non-JSON response here: </p>
<pre><code>r = session.get(url)
</code></pre>
<p>Try to print r.text before parsing it to detect problem cause. Or use try/except clause:</p>
<pre><code>try:
parsed = r.json()
except ValueError:
print(r.text)
break
</code></pre>
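<p>One likely cause, given the loop in the question: <code>parsed['links']['next'] is not 'null'</code> compares against the <em>string</em> <code>'null'</code>, but <code>json.loads</code> turns JSON <code>null</code> into Python <code>None</code>, so the condition stays true on the last page and the next request targets an invalid URL that returns a non-JSON page. A sketch of the fix (the base URL is taken from the question):</p>

```python
import json

def next_url(parsed, base='https://testshop.example.com/admin/products'):
    # json.loads turns JSON null into Python None, so test against None
    next_link = parsed['links']['next']
    return base + next_link if next_link is not None else None

print(next_url(json.loads('{"links": {"next": ".json?page=2"}}')))
print(next_url(json.loads('{"links": {"next": null}}')))  # None -> stop the while-loop
```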
|
python|json|api|request
| 1 |
1,908,975 | 51,083,555 |
Extracting duplicate code from Class methods
|
<p>I'm learning to work with classes on Python, I ran into this issue, I have several methods inside the class. There are some chunk of code that is very similar or exactly the same inside every method. </p>
<p>What would be the best practice to remove the duplicates from the code and therefore shorten it? </p>
<p>It looks something like this: </p>
<pre><code>class BasicClass(Object):
def FirstMethod(self, some_variable):
# Chunk of code that repeats across multiple methods
...
#Unique code to this method
...
def SecondMethod(self, some_variable):
# Chunk of code that repeats across multiple methods
...
#Unique code to this method
...
def ThirdMethod(self, some_variable):
# Chunk of code that repeats across multiple methods with slight variation
...
#Unique code to this method
...
</code></pre>
<p>Should I just write a helper function file and import that? Or is there a better way? </p>
|
<p>It really depends on what the code looks like. Using a helper function sounds reasonable. The slight variation in your third method could probably be implemented by passing an optional parameter to your helper function which then performs the variation.</p>
<p>If you want more detailed advice, you'd need to show the actual code...</p>
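<p>As a minimal sketch (the method names and the <code>scale</code> parameter are made up for illustration), the repeated chunk moves into one private helper method, and the "slight variation" becomes a parameter:</p>

```python
class BasicClass:
    def _prepare(self, value, scale=1):
        # The chunk that used to repeat in every method lives here once;
        # `scale` covers the slight variation from the third method.
        return value * scale + 1

    def first_method(self, value):
        base = self._prepare(value)
        return base + 10          # unique code

    def second_method(self, value):
        base = self._prepare(value)
        return base * 2           # unique code

    def third_method(self, value):
        base = self._prepare(value, scale=3)  # the varying version
        return base - 1           # unique code

obj = BasicClass()
print(obj.first_method(2), obj.second_method(2), obj.third_method(2))  # 13 6 6
```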
|
python|python-3.x|class
| 3 |
1,908,976 | 51,075,370 |
Average of last 6 elements from given element in list
|
<p>I have <code>list=[307, 258, 164, 193, 174, 285, 230, 160, 257, 306, 173, 169, 192, 209, 110]</code></p>
<p>I want to calculate the average of the last 6 elements ending at a given element [n] of the list; then <code>n</code> should advance by 1 [n+1] and the same operation should be performed again.</p>
<p>I know how to access the last few elements of a list, e.g. <code>L3[-15:]</code> or
<code>new_list = my_list[(len(my_list) - 10):]</code>. How can I use a deque for this?</p>
<p>Please help, thanks</p>
|
<p>I'm not sure I understand what you're asking, but maybe you're looking for a rolling average. In this case, I'm using a list comprehension to create the rolling average:</p>
<pre><code>mylist = [307, 258, 164, 193, 174, 285, 230, 160, 257, 306, 173, 169, 192, 209, 110]
rolling_avg = [sum(mylist[j-6:j]) / 6 for j in range(6, len(mylist))]
# [230.16666666666666,
# 217.33333333333334,
# 201.0,
# 216.5,
# 235.33333333333334,
# 235.16666666666666,
# 215.83333333333334,
# 209.5,
# 217.66666666666666]
</code></pre>
<p>where the first number in <code>rolling_avg</code> is the average of the six first numbers in <code>mylist</code>, the second number is the average of the numbers at indices 1 to 6, ...</p>
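<p>Since the question also asks about <code>deque</code>: a <code>collections.deque</code> with <code>maxlen=6</code> discards the oldest value automatically, so you can build the same kind of rolling average in one pass:</p>

```python
from collections import deque

mylist = [307, 258, 164, 193, 174, 285, 230, 160, 257, 306, 173, 169, 192, 209, 110]
window = deque(maxlen=6)          # appending a 7th value drops the oldest one
rolling_avg = []
for x in mylist:
    window.append(x)
    if len(window) == 6:          # start averaging once the window is full
        rolling_avg.append(sum(window) / 6)

print(rolling_avg[0])  # 230.16666666666666, the average of the first six values
```

<p>Note that this version also includes the window ending at the last element, so it yields one more average than the slice version above.</p>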
|
python|list|pandas|deque
| 2 |
1,908,977 | 50,610,084 |
SPARQL - Unknown namespace prefix error
|
<p>I have a python file with imported rdflib and some SPARQL query implemented</p>
<pre><code>from rdflib import Graph
import html5lib
if __name__ == '__main__':
g = Graph()
g.parse('http://localhost:8085/weather-2.html', format='rdfa')
res1 = g.parse('http://localhost:8085/weather-2.html', format='rdfa')
print(res1.serialize(format='pretty-xml').decode("utf-8"))
print()
res2 = g.query("""SELECT ?obj
WHERE { <http://localhost:8085/weather-2.html> weather:region ?obj . }
""")
for row in res2:
print(row)
</code></pre>
<p>res1 has no trouble to print out but for res2 I get an error saying:</p>
<pre><code>Exception: Unknown namespace prefix : weather
</code></pre>
<p>Apparently this is due to an error on line 15 according to pycharm, the editor I am using to implement this.</p>
<p>What am I missing that is causing this error?
Is there more to just calling <code>weather:region</code> in my SPARQL query?
If so how to do fix this problem?</p>
|
<p>As the error message suggests, the namespace <code>weather:</code> is not defined - so in the SPARQL you either need a PREFIX to define weather, like:</p>
<p><code>PREFIX weather: <weatheruri>
</code></p>
<p>Or you should put the explicit weather URI instead of the <code>weather:</code></p>
<p>The weather namespace URI (or is it called an IRI?) will be in the XML namespaces in the rdf document - it will end with / or # so if the URI is <code>http://weather.com/</code> the prefix definition is <code>PREFIX weather: <http://weather.com/></code></p>
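<p>For example (the namespace IRI below is hypothetical — take the real one from the RDFa document's namespace declarations), the fixed query string would look like:</p>

```python
weather_ns = 'http://example.org/weather#'  # hypothetical; copy the real IRI from the page

query = """
PREFIX weather: <%s>
SELECT ?obj
WHERE { <http://localhost:8085/weather-2.html> weather:region ?obj . }
""" % weather_ns

print(query)  # pass this string to g.query(query)
```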
|
python|pycharm|sparql|rdflib|rdfa
| 4 |
1,908,978 | 50,288,006 |
spaCy: Error in Downloading English Models
|
<p>I used Anaconda to install spaCy as per the instructions given on the <a href="https://spacy.io/usage/" rel="nofollow noreferrer">download page</a>.
When I run the following command to download the English models </p>
<pre><code>python -m spacy download en
</code></pre>
<p>I get the following error. </p>
<pre><code>/anaconda3/bin/python: No module named spacy.__main__; 'spacy' is a package and cannot be directly executed
</code></pre>
|
<p>The problem is solved by forcing a download.
You can try this code</p>
<pre><code>python3 -m spacy.en.download --force all
</code></pre>
|
python|pip|nlp|anaconda|spacy
| 0 |
1,908,979 | 50,268,454 |
Pair each item with the last non-null value in a row
|
<p>I'm trying to make a function where I feed it a list of URLs which go through a 301 hop and it flattens it for me. I want to save the resulting list as a CSV so I can hand it to the developers who can implement it and get rid of 301 hops.</p>
<p>For example, my crawler will produce this list of 301 hops:</p>
<pre><code> URL1 | URL2 | URL3 | URL4
example.com/url1 | example.com/url2 | |
example.com/url3 | example.com/url4 | example.com/url5 |
example.com/url6 | example.com/url7 | example.com/url8 | example.com/10
example.com/url9 | example.com/url7 | example.com/url8 |
example.com/url23 | example.com/url10 | |
example.com/url24 | example.com/url45 | example.com/url46 |
example.com/url25 | example.com/url45 | example.com/url46 |
example.com/url26 | example.com/url45 | example.com/url46 |
example.com/url27 | example.com/url45 | example.com/url46 |
example.com/url28 | example.com/url45 | example.com/url46 |
example.com/url29 | example.com/url45 | example.com/url46 |
example.com/url30 | example.com/url45 | example.com/url46 |
</code></pre>
<p>The output I'm trying to get is</p>
<pre><code>URL1 | URL2
example.com/url1 | example.com/url2
example.com/url3 | example.com/url5
example.com/url4 | example.com/url5
example.com/url6 | example.com/10
example.com/url7 | example.com/10
example.com/url8 | example.com/10
example.com/url23 | example.com/url10
...
</code></pre>
<p>I've converted the Pandas dataframe to a list of lists using the below code:</p>
<pre><code>import pandas as pd
import numpy as np
csv1 = pd.read_csv('Example_301_sheet.csv', header=None)
outlist = []
def link_flat(csv):
for row in csv.iterrows():
index, data = row
outlist.append(data.tolist())
return outlist
</code></pre>
<p>This returns each row as a list, and they are all nested together in a list, like below:</p>
<pre><code>[['example.com/url1', 'example.com/url2', nan, nan],
['example.com/url3', 'example.com/url4', 'example.com/url5', nan],
['example.com/url6',
'example.com/url7',
'example.com/url8',
'example.com/10'],
['example.com/url9', 'example.com/url7', 'example.com/url8', nan],
['example.com/url23', 'example.com/url10', nan, nan],
['example.com/url24', 'example.com/url45', 'example.com/url46', nan],
['example.com/url25', 'example.com/url45', 'example.com/url46', nan],
['example.com/url26', 'example.com/url45', 'example.com/url46', nan],
['example.com/url27', 'example.com/url45', 'example.com/url46', nan],
['example.com/url28', 'example.com/url45', 'example.com/url46', nan],
['example.com/url29', 'example.com/url45', 'example.com/url46', nan],
['example.com/url30', 'example.com/url45', 'example.com/url46', nan]]
</code></pre>
<p>How do I match each URL in each nested list with the last URL in the same list to produce the above list?</p>
|
<p>You'll need to determine the last valid item per row using <code>groupby</code> + <code>last</code>, and then reshape your dataFrame and build a two-column mapping using <code>melt</code>.</p>
<pre><code>df.columns = range(len(df.columns))
df = (
df.assign(URL2=df.stack().groupby(level=0).last())
.melt('URL2', value_name='URL1')
.drop('variable', 1)
.dropna()
.drop_duplicates()
.query('URL1 != URL2')
.sort_index(axis=1)
.reset_index(drop=True)
)
</code></pre>
<p></p>
<pre><code>df
URL1 URL2
0 example.com/url1 example.com/url2
1 example.com/url3 example.com/url5
2 example.com/url6 example.com/10
3 example.com/url9 example.com/url8
4 example.com/url23 example.com/url10
5 example.com/url24 example.com/url46
6 example.com/url25 example.com/url46
7 example.com/url26 example.com/url46
8 example.com/url27 example.com/url46
9 example.com/url28 example.com/url46
10 example.com/url29 example.com/url46
11 example.com/url30 example.com/url46
12 example.com/url4 example.com/url5
13 example.com/url7 example.com/10
14 example.com/url7 example.com/url8
15 example.com/url45 example.com/url46
16 example.com/url8 example.com/10
</code></pre>
|
python|pandas|dataframe|melt
| 2 |
1,908,980 | 34,894,496 |
How to login to mongodb(through pymongo) remotely and get output of db.serverStatus()
|
<p>How do I connect to a <code>mongodb</code> host remotely by specifying username, password and hostname, and how do I get the <code>db.serverStatus()</code> output through <code>pymongo</code>?</p>
<p>(I have commented out <code>bind_ip</code> in the <code>mongod.conf</code> file so that it allows remote connections.)</p>
<pre><code>import pymongo
from pymongo import MongoClient
connection=MongoClient(???)
</code></pre>
|
<p>Following is a sample code:</p>
<pre><code>import pymongo
MONGO_HOST = ''
MONGO_PORT = <PORT>
MONGO_DB=''
MONGO_USER=''
MONGO_PASS=''
def get_mongo_db():
con=pymongo.Connection(MONGO_HOST,MONGO_PORT)
db=con[MONGO_DB]
try:
db.authenticate(MONGO_USER,MONGO_PASS)
except:
return None
return db
</code></pre>
<p>Note: if your mongod doesn't have auth enabled (<code>--auth</code>), you don't need to authenticate, but enabling auth is recommended for security.</p>
<p>then, you can use <code>db</code> for more ops, as you said, <code>db.serverStatus()</code> (I haven't tried, maybe a little different)</p>
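<p>Note that <code>pymongo.Connection</code> was removed in PyMongo 3; with current PyMongo you would use <code>MongoClient</code> and run <code>serverStatus</code> as a database command. A sketch (the credentials and host below are placeholders):</p>

```python
MONGO_USER = 'user'            # placeholders -- fill in your own
MONGO_PASS = 'secret'
MONGO_HOST = 'db.example.com'
MONGO_PORT = 27017

uri = 'mongodb://%s:%s@%s:%d/' % (MONGO_USER, MONGO_PASS, MONGO_HOST, MONGO_PORT)
print(uri)

# Requires pymongo and a reachable server:
# from pymongo import MongoClient
# client = MongoClient(uri)
# status = client.admin.command('serverStatus')  # dict with uptime, connections, mem, ...
```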
|
python|mongodb|pymongo
| 1 |
1,908,981 | 35,268,367 |
If statement 'not working' despite conditions being true
|
<p>currently going through a bin file full of hex data to process, however currently a match I'm using to try and find a group of 3 hex bytes in the file aren't working correctly. The values are identical however it is not printing my string I've currently got set to confirm that its a match, at present I'm trying to just match the first 3 bytes so we know it works etc. the code is as follows: </p>
<pre><code>match1 = "\\x00\\n\\x00"
print ("match1 ="+match1)
if byteData == match1:
print ("Byte string 030791 found!")
elif byteData == match1:
print ("Byte string 050791 found!")
exit()
</code></pre>
<p>The value of byteData is currently '\x00\n\x00' however the script ignores this and just moves to the exit statement. The file is being opened as follows :</p>
<pre><code>file = open('file.bin', 'rb')
while True:
byte = file.read(3)
</code></pre>
<p>When printing the value of byte it reports as "\x00\n\x00" does anyone have any ideas as to why the match isn't working properly?</p>
|
<p><code>match1</code> does not contain 3 bytes. It contains 10:</p>
<pre><code>>>> match1 = "\\x00\\n\\x00"
>>> len(match1)
10
</code></pre>
<p>You escaped the escape sequences, so <code>\\x00</code> is <em>four bytes</em>, the <code>\</code> backslash, then the letter <code>x</code> followed by two <code>0</code> digits.</p>
<p>Remove the backslash escapes:</p>
<pre><code>match1 = "\x00\n\x00"
</code></pre>
<p>Don't try to print this directly; terminals generally won't make nulls visible, so you just get an extra newline. Use the <a href="https://docs.python.org/3/library/functions.html#repr" rel="nofollow"><code>repr()</code> function</a> to produce debug output that looks just like a Python string so you can reproduce that value in your code or the interactive interpreter:</p>
<pre><code>print ("match1 =", repr(match1))
</code></pre>
<p>This is also how the interactive interpreter shows you expression results (unless they produced <code>None</code>):</p>
<pre><code>>>> match1 = "\x00\n\x00"
>>> len(match1)
3
>>> match1
'\x00\n\x00'
>>> print("match1 =", repr(match1))
match1 = '\x00\n\x00'
</code></pre>
<p>Next, if you are using Python 3, you'll still won't have a match because you opened the file in binary mode and are thus getting <a href="https://docs.python.org/3/library/stdtypes.html#bytes" rel="nofollow"><em><code>bytes</code></em> objects</a>, but your <code>match1</code> variable is a <a href="https://docs.python.org/3/library/stdtypes.html#text-sequence-type-str" rel="nofollow"><code>str</code> text sequence</a>. If you want the two types to match you'll either have to convert (encode the text or decode the bytes), or make <code>match1</code> a <code>bytes</code> object to start with:</p>
<pre><code>match1 = b'\x00\n\x00'
</code></pre>
<p>The <code>b</code> prefix makes that a <code>bytes</code> literal.</p>
|
python
| 2 |
1,908,982 | 35,013,589 |
Unable to run a python restful program
|
<p>I am using "Python 3.5" from the website "<a href="https://www.python.org/" rel="nofollow noreferrer">https://www.python.org/</a>" and when I try to run the program provided by <a href="https://stackoverflow.com/a/32721995/3151189">HVS</a> it does not work.</p>
<pre><code>C:\Users\sdixit23>python C:\Users\sdixit23\AppData\Local\Programs\Python\Python35-32\Shyam\NotWorking\restfulclient2.py
  File "C:\Users\sdixit23\AppData\Local\Programs\Python\Python35-32\Shyam\NotWorking\restfulclient2.py", line 26
    print key + " : " + jData[key]
                      ^
SyntaxError: Missing parentheses in call to 'print'

C:\Users\sdixit23>
</code></pre>
<p>Could you please share another version of a restful API that can work.</p>
<p><a href="https://stackoverflow.com/a/35013296/3151189">This question was also asked here in anothe thread</a></p>
|
<p>You could try adding the parentheses. In Python 3 they are necessary:</p>
<pre><code>print(key + " : " + jData[key])
</code></pre>
|
python|rest
| 0 |
1,908,983 | 35,271,922 |
Distribute web-scraping write-to-file to parallel processes in Python?
|
<p>I'm scraping some JSON data from a website, and need to do this ~50,000 times (all data is for distinct zip codes over a 3-year period). I timed out the program for about 1,000 calls, and the average time per call was 0.25 seconds, leaving me with about 3.5 hours of runtime for the whole range (all 50,000 calls).</p>
<p>How can I distribute this process across all of my cores? The core of my code is pretty much this:</p>
<pre><code>with open("U:/dailyweather.txt", "r+") as f:
f.write("var1\tvar2\tvar3\tvar4\tvar5\tvar6\tvar7\tvar8\tvar9\n")
writeData(zips, zip_weather_links, daypart)
</code></pre>
<p>Where <code>writeData()</code> looks like this:</p>
<pre><code>def writeData(zipcodes, links, dayparttime):
for z in zipcodes:
for pair in links:
## do some logic ##
f.write("%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\t%s\n" % (var1, var2, var3, var4, var5,
var6, var7, var8, var9))
</code></pre>
<p><code>zips</code> looks like this:</p>
<pre><code>zips = ['55111', '56789', '68111', ...]
</code></pre>
<p>and <code>zip_weather_links</code> is just a dictionary of (URL, date) for each zip code:</p>
<pre><code>zip_weather_links['55111'] = [('https://website.com/55111/data', datetime.datetime(2013, 1, 1, 0, 0, 0), ...]
</code></pre>
<p>How can I distribute this using <code>Pool</code> or <code>multiprocessing</code>? Or would distribution even save time?</p>
|
<p>You want to "Distribute web-scraping write-to-file to parallel processes in Python".
To start, let's look at where most of the time goes in web scraping.</p>
<p>The latency of HTTP requests is much higher than that of hard disks (<a href="https://gist.github.com/jboner/2841832" rel="nofollow">latency comparison</a>). Small writes to a hard disk are significantly slower than bigger writes; SSDs have a much higher random write speed, so this effect affects them less.</p>
<ol>
<li>Distribute the HTTP-Requests</li>
<li>Collect all the results</li>
<li>Write all the results at once to disk</li>
</ol>
<p>some example code with <a href="https://ipython.org/ipython-doc/3/parallel/parallel_intro.html" rel="nofollow">IPython parallel</a>:</p>
<pre class="lang-python prettyprint-override"><code>from ipyparallel import Client
import requests
rc = Client()
lview = rc.load_balanced_view()
worklist = ['http://xkcd.com/614/info.0.json',
'http://xkcd.com/613/info.0.json']
@lview.parallel()
def get_webdata(w):
import requests
r = requests.get(w)
if not r.status_code == 200:
return (w, r.status_code,)
return (w, r.json(),)
#get_webdata will be called once with every element of the worklist
proc = get_webdata.map(worklist)
results = proc.get()
# results is a list with all the return values
print(results[1])
# TODO: write the results to disk
</code></pre>
<p>You have to start the IPython parallel workers first:</p>
<pre><code>(py35)River:~ rene$ ipcluster start -n 20
</code></pre>
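<p>A standard-library-only alternative: since scraping is I/O-bound (workers mostly wait on the network), a thread pool is usually enough, and collecting all results before a single write keeps the file handling simple. The <code>fetch</code> function here is a stand-in for the real <code>requests.get</code> call:</p>

```python
from multiprocessing.pool import ThreadPool

def fetch(url):
    # stand-in for: requests.get(url).json()
    return (url, {'ok': True})

urls = ['https://website.com/%s/data' % z for z in ['55111', '56789', '68111']]

with ThreadPool(4) as pool:
    results = pool.map(fetch, urls)   # preserves input order

print(len(results))   # now write everything to disk in one pass
```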
|
python|parallel-processing|web-scraping|multiprocessing|pool
| 1 |
1,908,984 | 56,779,766 |
How to iterate through series of dates to filter based on multiple conditions?
|
<p>I'm trying to filter the following dates to return a logical of whether a given "window" of data is tracked for at least 30 minutes in duration and no more than 3 minutes between consecutive time points WITHIN that window. Tried putting it into a for loop with a while condition but can't seem to get it to work. Fairly new to python and any help is appreciated. The condition column is what I'd like as output. Since none of the timestamp sequences are reported for at least 30 minutes and differences between consecutive time stamps are less than 3 minutes, all are false, while the last bit of timestamps are tracked for greater than 30 minutes and the difference between consecutive timestamps is less than 3 minutes.</p>
<pre><code> date condition
0 2019-04-11 11:10:00 False
1 2019-04-11 11:10:00 False
2 2019-04-11 11:11:00 False
3 2019-04-11 11:11:00 False
4 2019-04-11 11:11:00 False
5 2019-04-11 11:11:00 False
6 2019-04-11 11:11:00 False
7 2019-04-16 19:05:00 False
8 2019-04-16 19:05:00 False
9 2019-04-16 19:05:00 False
10 2019-04-16 19:05:00 False
11 2019-04-16 19:24:00 False
12 2019-04-16 19:25:00 False
13 2019-04-16 19:25:00 False
14 2019-04-16 19:25:00 False
15 2019-04-16 19:25:00 False
16 2019-04-16 19:25:00 False
17 2019-04-16 19:25:00 False
18 2019-04-16 19:25:00 False
19 2019-04-16 19:25:00 False
20 2019-04-16 19:25:00 False
21 2019-04-16 19:26:00 False
22 2019-04-16 19:26:00 False
23 2019-04-16 19:26:00 False
24 2019-04-16 19:26:00 False
25 2019-04-16 19:26:00 False
26 2019-04-16 19:26:00 False
27 2019-04-16 19:26:00 False
28 2019-04-16 19:26:00 False
29 2019-04-16 19:26:00 False
38533 2019-04-28 09:42:00 True
38534 2019-04-28 09:42:00 True
38535 2019-04-28 09:43:00 True
38536 2019-04-28 09:44:00 True
38537 2019-04-28 09:45:00 True
38538 2019-04-28 09:46:00 True
38539 2019-04-28 09:47:00 True
38540 2019-04-28 09:47:00 True
38541 2019-04-28 09:48:00 True
38542 2019-04-28 09:49:00 True
38543 2019-04-28 09:50:00 True
38544 2019-04-28 09:51:00 True
38545 2019-04-28 09:52:00 True
38546 2019-04-28 09:53:00 True
38547 2019-04-28 09:54:00 True
38548 2019-04-28 09:55:00 True
38549 2019-04-28 09:56:00 True
38550 2019-04-28 09:57:00 True
38551 2019-04-28 09:57:00 True
38552 2019-04-28 09:58:00 True
38553 2019-04-28 09:59:00 True
38554 2019-04-28 10:00:00 True
38555 2019-04-28 10:01:00 True
38556 2019-04-28 10:02:00 True
38557 2019-04-28 10:02:00 True
38558 2019-04-28 10:03:00 True
38559 2019-04-28 10:04:00 True
38560 2019-04-28 10:05:00 True
38561 2019-04-28 10:06:00 True
38562 2019-04-28 10:07:00 True
38563 2019-04-28 10:07:00 True
38564 2019-04-28 10:08:00 True
38565 2019-04-28 10:09:00 True
38566 2019-04-28 10:10:00 True
38567 2019-04-28 10:11:00 True
38568 2019-04-28 10:12:00 True
38569 2019-04-28 10:13:00 True
38570 2019-04-28 10:14:00 True
38571 2019-04-28 10:14:00 True
38572 2019-04-28 10:15:00 True
38573 2019-04-28 10:15:00 True
</code></pre>
|
<p>Here is a generalized Pandas approach where you can specify the <code>step</code> and <code>window</code>. You can use <code>diff()</code> to determine rows where the difference between consecutive timestamps exceeds your specified <code>step</code> (in this case, 3 mins), and then use <code>cumcount()</code> to identify the separate groups, and finally use <code>transform()</code> to create your <code>condition</code> column to check that each respective group contains at least your <code>window</code> (in this case, 30 timestamps):</p>
<pre><code>step = 3
window = 30
df['date'] = pd.to_datetime(df['date'], format='%Y-%m-%d %H:%M:%S')
df['condition'] = (df['date'].diff().astype('timedelta64[m]')<=step)
index = df[df['condition']].index
df['condition'] = df.groupby('condition').cumcount()
df[df.index.isin(index)] = np.nan
df = df.ffill()
df['condition'] = df.groupby('condition').transform('count')>=window
</code></pre>
<p>Output:</p>
<pre><code> date condition
0 2019-04-11 11:10:00 False
1 2019-04-11 11:10:00 False
2 2019-04-11 11:10:00 False
3 2019-04-11 11:10:00 False
4 2019-04-11 11:10:00 False
5 2019-04-11 11:10:00 False
6 2019-04-11 11:10:00 False
7 2019-04-16 19:05:00 False
8 2019-04-16 19:05:00 False
9 2019-04-16 19:05:00 False
10 2019-04-16 19:05:00 False
11 2019-04-16 19:24:00 False
12 2019-04-16 19:24:00 False
13 2019-04-16 19:24:00 False
14 2019-04-16 19:24:00 False
15 2019-04-16 19:24:00 False
16 2019-04-16 19:24:00 False
17 2019-04-16 19:24:00 False
18 2019-04-16 19:24:00 False
19 2019-04-16 19:24:00 False
20 2019-04-16 19:24:00 False
21 2019-04-16 19:24:00 False
22 2019-04-16 19:24:00 False
23 2019-04-16 19:24:00 False
24 2019-04-16 19:24:00 False
25 2019-04-16 19:24:00 False
26 2019-04-16 19:24:00 False
27 2019-04-16 19:24:00 False
28 2019-04-16 19:24:00 False
29 2019-04-16 19:24:00 False
30 2019-04-28 09:42:00 True
31 2019-04-28 09:42:00 True
32 2019-04-28 09:42:00 True
33 2019-04-28 09:42:00 True
34 2019-04-28 09:42:00 True
35 2019-04-28 09:42:00 True
36 2019-04-28 09:42:00 True
37 2019-04-28 09:42:00 True
38 2019-04-28 09:42:00 True
39 2019-04-28 09:42:00 True
40 2019-04-28 09:42:00 True
41 2019-04-28 09:42:00 True
42 2019-04-28 09:42:00 True
43 2019-04-28 09:42:00 True
44 2019-04-28 09:42:00 True
45 2019-04-28 09:42:00 True
46 2019-04-28 09:42:00 True
47 2019-04-28 09:42:00 True
48 2019-04-28 09:42:00 True
49 2019-04-28 09:42:00 True
50 2019-04-28 09:42:00 True
51 2019-04-28 09:42:00 True
52 2019-04-28 09:42:00 True
53 2019-04-28 09:42:00 True
54 2019-04-28 09:42:00 True
55 2019-04-28 09:42:00 True
56 2019-04-28 09:42:00 True
57 2019-04-28 09:42:00 True
58 2019-04-28 09:42:00 True
59 2019-04-28 09:42:00 True
60 2019-04-28 09:42:00 True
61 2019-04-28 09:42:00 True
62 2019-04-28 09:42:00 True
63 2019-04-28 09:42:00 True
64 2019-04-28 09:42:00 True
65 2019-04-28 09:42:00 True
66 2019-04-28 09:42:00 True
67 2019-04-28 09:42:00 True
68 2019-04-28 09:42:00 True
69 2019-04-28 09:42:00 True
70 2019-04-28 09:42:00 True
</code></pre>
|
python|pandas|datetime
| 1 |
1,908,985 | 61,474,839 |
Using an existing S3 bucket in AWS SAM template
|
<p>I am using AWS SAM template for deployment of python AWS lambdas. The trigger for these functions are from existing S3 buckets. But in SAM template I am unable to use existing buckets (only new bucket creation is supported), hence I'am creating the trigger manually.</p>
<p>Is there any way we could incorporate this in SAM template ?</p>
|
<p>You can't use an existing bucket with SAM. It's a limitation mentioned <a href="https://github.com/awslabs/serverless-application-model/blob/develop/versions/2016-10-31.md#s3" rel="nofollow noreferrer">here</a>.
You can try the workaround from this <a href="https://github.com/awslabs/serverless-application-model/issues/124#issuecomment-511779961" rel="nofollow noreferrer">comment</a>.</p>
<p>Thx!</p>
|
python|amazon-web-services|aws-lambda|aws-sam
| 3 |
1,908,986 | 56,120,219 |
Reshape your data either using array.reshape(-1, 1) if your data has a single feature
|
<p>How can I use metrics.silhouette_score on a dataset of 1300 images for which I have ResNet50 feature vectors (each of length 2048) and a discrete class label between 1 and 9? </p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn import cluster, datasets, preprocessing, metrics
from sklearn.cluster import KMeans
df = pd.read_csv("master.csv")
labels = list(df['Q3 Theme1'])
labels_reshaped = np.ndarray(labels).reshape(-1,1)
X = open('entire_dataset__resnet50_feature_vectors.txt')
X_Data = X.read()
print('Silhouette Score:', metrics.silhouette_score(X_Data, labels_reshaped,
metric='cosine'))
</code></pre>
<p>I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/dataset/silouhette_score.py", line 8, in <module>
labels_reshaped = np.ndarray(labels).reshape(-1,1)
ValueError: sequence too large; cannot be greater than 32
Process finished with exit code 1
</code></pre>
<p>For this other code:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn import cluster, datasets, preprocessing, metrics
from sklearn.cluster import KMeans
df = pd.read_csv("master.csv")
labels = list(df['Q3 Theme1'])
labels_reshaped = np.ndarray(labels).reshape(1,-1)
X = open('entire_dataset__resnet50_feature_vectors.txt')
X_Data = X.read()
print('Silhouette Score:', metrics.silhouette_score(X_Data, labels_reshaped,
metric='cosine'))
</code></pre>
<p>I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/dataset/silouhette_score.py", line 8, in <module>
labels_reshaped = np.ndarray(labels).reshape(1,-1)
ValueError: sequence too large; cannot be greater than 32
Process finished with exit code 1
</code></pre>
<p>If I run this other code:</p>
<pre><code>import pandas as pd
from sklearn import metrics
df = pd.read_csv("master.csv")
labels = list(df['Q3 Theme1'])
X = open('entire_dataset__resnet50_feature_vectors.txt')
X_Data = X.read()
print('Silhouette Score:', metrics.silhouette_score(X_Data, labels,
metric='cosine'))
</code></pre>
<p>I get this as an output: <a href="https://pastebin.com/raw/hk2axdWL" rel="nofollow noreferrer">https://pastebin.com/raw/hk2axdWL</a></p>
<p>How can I fix this code so that I can print the single silhouette score?</p>
<pre><code>Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
Process finished with exit code 1
</code></pre>
<p>I have pasted one line of my feature vector file (a .txt file) here: <a href="https://pastebin.com/raw/hk2axdWL" rel="nofollow noreferrer">https://pastebin.com/raw/hk2axdWL</a> (consists of 2048 numbers separated by space)</p>
|
<p>I was eventually able to figure this out. I needed to build the feature vectors in the exact format sklearn requires:</p>
<pre><code>import pandas as pd
from sklearn import metrics

df = pd.read_csv("master.csv")
labels = list(df['Q3 Theme1'])

fv = []
with open('entire_dataset__resnet50_feature_vectors.txt') as X:
    for line in X:
        # split() handles trailing whitespace/newlines; convert each value to float
        fv.append([float(x) for x in line.split()])

print('Silhouette Score:', metrics.silhouette_score(fv, labels,
                                                    metric='cosine'))
</code></pre>
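<p>A shorter sketch of the same parsing, assuming the file is whitespace-separated numbers: <code>numpy.loadtxt</code> does the splitting and float conversion in one call (a <code>StringIO</code> stands in for the real file here):</p>

```python
import io
import numpy as np

# stand-in for open('entire_dataset__resnet50_feature_vectors.txt')
sample = io.StringIO('0.1 0.2 0.3\n0.4 0.5 0.6\n')
fv = np.loadtxt(sample)
print(fv.shape)  # the real file would give (n_images, 2048)
```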
|
python|machine-learning|scikit-learn|computer-vision|reshape
| -1 |
1,908,987 | 56,023,957 |
Creating dictionary from list of lists where keys are the lists
|
<p>I have a list of list:</p>
<pre><code>list = [[0, 0, 0, 0, 1, 10, 10], [0, 0, 0, 0, 1, 11, 10], [1, 1, 1, 1, 0, 30, 25]]
</code></pre>
<p>I want to create a dictionary from this list where the keys are the lists and values are 0:</p>
<pre><code>dict = {[0, 0, 0, 0, 1, 10, 10]: 0, [0, 0, 0, 0, 1, 11, 10]: 0, [1, 1, 1, 1, 0, 30, 25]: 0}
</code></pre>
<p>I tried this: <code>dict = {key: 0 for key in list}</code></p>
<p>but I get an error: </p>
<blockquote>
<p>TypeError: unhashable type: 'list'</p>
</blockquote>
<p>How can I fix this problem?</p>
|
<p>You cannot create a dictionary with lists as keys, as lists are <em>mutable</em> objects. One thing you could do is create the dictionary from <code>tuples</code> rather than lists, since tuples can be hashed; see the <a href="https://docs.python.org/2/tutorial/datastructures.html#dictionaries" rel="nofollow noreferrer">docs</a> for more on this, which state:</p>
<blockquote>
<p>Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be any immutable type</p>
</blockquote>
<hr>
<p>So one way you could construct the dictionary is by mapping the lists to <code>tuples</code> and using those as <code>keys</code>:</p>
<pre><code>l = [[0, 0, 0, 0, 1, 10, 10], [0, 0, 0, 0, 1, 11, 10], [1, 1, 1, 1, 0, 30, 25]]
dict.fromkeys(map(tuple, l), 0)
{(0, 0, 0, 0, 1, 10, 10): 0,
(0, 0, 0, 0, 1, 11, 10): 0,
(1, 1, 1, 1, 0, 30, 25): 0}
</code></pre>
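<p>Equivalently, your original comprehension works as soon as each list is cast to a tuple — a minimal sketch of the same idea:</p>

```python
l = [[0, 0, 0, 0, 1, 10, 10], [0, 0, 0, 0, 1, 11, 10], [1, 1, 1, 1, 0, 30, 25]]
d = {tuple(key): 0 for key in l}  # tuples are hashable, lists are not
print(d[(0, 0, 0, 0, 1, 10, 10)])  # 0
```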
<hr>
<p>Also check <a href="https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences" rel="nofollow noreferrer">Tuples and Sequences</a> for a better understanding</p>
|
python
| 3 |
1,908,988 | 69,502,143 |
Discord.py | not enough values to unpack
|
<pre><code> with open("warns.json", "r") as f:
data = json.load(f)
user_data = data[str(a)]
await ctx.send(f"Total warnings: {len(user_data)}")
for mod , reason, time, warn_id, warns in user_data:
warn_id_ = warn_id
mod_ = mod
reason_ = reason
time_ = time
warns_ = warns
        await ctx.send(f"ID: {warn_id_}, mod: {mod_}, reason: {reason_}, time: {time_}, warns: {warns_}")
</code></pre>
<p>This is My Code.. I try to get following Values out of my warns.json!</p>
<p>warn_id mod reason time warns</p>
<p>Here is my Json:</p>
<pre><code>{
"836495228629417984": {
"mod": [
763339711988236328
],
"reason": [
"test"
],
"time": [
"Friday, October 08 2021 @ 23:15:31 PM"
],
"warn_id": [
83299
],
"warns": 1
}
</code></pre>
<p>But it returns :</p>
<pre><code>discord.ext.commands.errors.CommandInvokeError: Command raised an exception: ValueError: not enough values to unpack (expected 5, got 3)
</code></pre>
<p>I don't know how to fix that :/</p>
|
<p>First of all, your example JSON is not valid: it is missing a closing } at the end.
Your unpacking code could then look like this:</p>
<pre><code>with open("warns.json", "r") as f:
data = json.load(f)
for k in data:
unpacked = data[k]
warn_id_ = unpacked['warn_id']
mod_ = unpacked['mod']
reason_ = unpacked['reason']
time_ = unpacked['time']
warns_ = unpacked['warns']
</code></pre>
<p>Note - this minimal change is just to make your program work; there may be other, better solutions.</p>
|
python|json|discord
| 0 |
1,908,989 | 69,563,252 |
Look up data from other table with apply
|
<p>Let's say I have a table that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>table = pd.DataFrame(
columns=["Name", "Size", "Color"],
data=[['A', 1, 'Red'], ['B', 2, 'Green'], ['C', 3, 'Blue']]
)
</code></pre>
<img src="https://i.stack.imgur.com/LYGSj.png" width="300" height="200">
<p>And a lookup table that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>lookup = pd.DataFrame(
columns=["Color", "Source", "Lead Days"],
data=[["Red", "Europe", 2],
["Green", "Europe", 3],
["Blue", "US", 1],
["Yellow", "Europe", 2]]
)
</code></pre>
<img src="https://i.stack.imgur.com/tJ0rv.png" width="300" height="200">
<p>How might I add columns "Source" and "Lead Days" to <code>table</code> by looking up "Color" from <code>lookup</code>?</p>
<p>A story sometimes helps.</p>
<p><code>table</code> has all the items I need to order.</p>
<p><code>lookup</code> has where I order them from and how long it takes.</p>
<p>I want to transform <code>table</code> so that it can show the "Source" and "Lead Days" for each item I need to order.</p>
<p>Here's what the final table should look like:</p>
<img src="https://i.stack.imgur.com/Uzvl8.png" width="450" height="200">
<p>Note: while I'm sure there's a way to do this with merge, or top level table operations. In the spirit of <a href="https://medium.com/dunder-data/minimally-sufficient-pandas-a8e67f2a2428" rel="nofollow noreferrer">Minimally Sufficient Pandas</a>, and to avoid the huge kluge that is pandas' over-provisioning of operations, I'd prefer to do it with <code>apply</code>. Apply is nice because it's easy to consistently reach for <code>apply</code> in all situations.</p>
<p>Here's my current approach, but it results in the error <code>ValueError: Columns must be same length as key</code></p>
<p>To me, this makes little sense, since I'm returning a list of length 2 and putting it into two columns. But I'm sure pandas has its reasons for being anti-intuitive here.</p>
<pre class="lang-py prettyprint-override"><code>lookup_columns = ["Source", "Lead Days"]
table[lookup_columns] = table.apply(
lambda row:
lookup.query('`Color` == "{color}"'.format(color=row["Color"])).loc[:, lookup_columns].values[0]
, axis = 1)
</code></pre>
|
<p>With <code>apply</code>, you can do:</p>
<pre><code>>>> pd.concat([table, table['Color'].apply(lambda x: lookup.loc[lookup['Color'] == x, ['Source', 'Lead Days']].squeeze())], axis=1)
Name Size Color Source Lead Days
0 A 1 Red Europe 2
1 B 2 Green Europe 3
2 C 3 Blue US 1
</code></pre>
<hr />
<p><strong>Old answer</strong></p>
<p>Use <code>pd.merge</code>:</p>
<pre><code>>>> pd.merge(table, lookup, how='left', on='Color')
Name Size Color Source Lead Days
0 A 1 Red Europe 2
1 B 2 Green Europe 3
2 C 3 Blue US 1
</code></pre>
|
python|pandas|dataframe
| 2 |
1,908,990 | 57,484,672 |
Is there a way to append NaT to a pandas datetime with timezone without changing dtype to object?
|
<p>I have a column in my DataFrame that is dtype: datetime64[ns, UTC]. When I append a row with either None or NaT in that column, the dtype of the column changes to 'object'. This does not happen to columns that are dtype: datetime64[ns].</p>
<p>Here is a demonstration:</p>
<pre><code># Test pandas with datetime columns
import pandas as pd
from datetime import datetime, timezone
df = pd.DataFrame([{'D': datetime.utcnow()}])
df_wtz = pd.DataFrame([{'D': datetime.now().astimezone(timezone.utc)}])
df_None = pd.DataFrame([{'D': None}])
# Note that the tz below is ignored even though specified
df_Nat = pd.DataFrame([{'D': pd.Timestamp(None,tz=timezone.utc)}])
print('df:\n', df['D'])
print('df_wtz:\n', df_wtz['D'])
print('df_None:\n', df_None['D'])
print('df_Nat:\n', df_Nat['D'])
print('df append df_None:\n', df.append(df_None, ignore_index=True, sort=False)['D'])
print('df append df_Nat:\n', df.append(df_Nat, ignore_index=True, sort=False)['D'])
print('df_wtz append df_None:\n', df_wtz.append(df_None, ignore_index=True, sort=False)['D'])
print('df_wtz append df_Nat:\n', df_wtz.append(df_Nat, ignore_index=True, sort=False)['D'])
</code></pre>
<p>Here is the output:</p>
<pre><code>df:
0 2019-08-13 19:58:18.811492
Name: D, dtype: datetime64[ns]
df_wtz:
0 2019-08-13 19:58:18.811968+00:00
Name: D, **dtype: datetime64[ns, UTC]**
df_None:
0 None
Name: D, dtype: object
df_Nat:
0 NaT
Name: D, dtype: datetime64[ns]
df append df_None:
0 2019-08-13 19:58:18.811492
1 NaT
Name: D, dtype: datetime64[ns]
df append df_Nat:
0 2019-08-13 19:58:18.811492
1 NaT
Name: D, dtype: datetime64[ns]
df_wtz append df_None:
0 2019-08-13 19:58:18.811968+00:00
1 None
Name: D, dtype: object
df_wtz append df_Nat:
0 2019-08-13 19:58:18.811968+00:00
1 NaT
Name: D, dtype: object
</code></pre>
<p>I had expected the column type to be retained in the case of appending None or NaT to the datetime64[ns, UTC] column but it is not. Is this the intended behavior or would this be considered a bug?</p>
|
<p>You can place a NaT in a column with dtype <code>datetime64[ns, UTC]</code> this way:</p>
<pre><code> In [380]: df_Nat = pd.DataFrame({'D': pd.to_datetime([None], utc=True)}); df_Nat
Out[380]:
D
0 NaT
In [381]: df_Nat.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1 entries, 0 to 0
Data columns (total 1 columns):
D 0 non-null datetime64[ns, UTC]
dtypes: datetime64[ns, UTC](1)
memory usage: 88.0 bytes
</code></pre>
<p>Appending <code>df_Nat</code> to <code>df_wtz</code> now preserves the dtype:</p>
<pre><code>import pandas as pd
import datetime as DT
utc = DT.timezone.utc
now = DT.datetime.now()
df_wtz = pd.DataFrame([{'D': now.astimezone(utc)}])
df_Nat = pd.DataFrame({'D': pd.to_datetime([None], utc=True)})
# df_Nat = pd.DataFrame({'D':pd.Series(pd.NaT, dtype='datetime64[ns, UTC]')}) # also works
print('df_wtz append df_Nat:\n', df_wtz.append(df_Nat, ignore_index=True, sort=False)['D'])
</code></pre>
<p>yields</p>
<pre><code>df_wtz append df_Nat:
0 2019-08-13 20:28:15.928023+00:00
1 NaT
Name: D, dtype: datetime64[ns, UTC]
</code></pre>
<hr>
<p>The NaT itself is not timezone aware:</p>
<pre><code>In [383]: pd.Timestamp(None) is pd.Timestamp(None, tz=utc)
Out[383]: True
</code></pre>
<p>So <code>pd.DataFrame([{'D': pd.Timestamp(None,tz=utc)}])</code> does not produce a column with timezone-aware dtype. </p>
<p>Since it is impossible to make the DataFrame infer a timezone-aware dtype from the NaT itself,
we need to build a container (such as a Series or DatetimeIndex) which already has the right timezone-aware dtype. That is what <code>pd.to_datetime([None], utc=True)</code> does:</p>
<pre><code>In [385]: pd.to_datetime([None], utc=True)
Out[385]: DatetimeIndex(['NaT'], dtype='datetime64[ns, UTC]', freq=None)
</code></pre>
|
python-3.x|pandas|datetime
| 1 |
1,908,991 | 42,565,148 |
need clarification on node.next pointer in python linkedlist
|
<p>I am having trouble understanding the pass-by-value property in Python. In the following code, <code>runner</code> is made a copy of <code>current</code>, and <code>runner.next</code> points to the next node of the given linked list. When we set <code>runner.next</code> to <code>runner.next.next</code>, how does this affect <code>cur.next.next</code>? Do <code>runner.next</code> and <code>cur.next</code> both point to the same address? In my mind, <code>runner</code> is just a copy of <code>current</code>, so it shouldn't have access to the original <code>cur.next</code>. Please lecture me.</p>
<pre><code> def remove_dups_followup(ll):
if ll.head is None:
return
current = ll.head
while current:
runner = current
while runner.next:
if runner.next.value == current.value:
runner.next = runner.next.next
else:
runner = runner.next
current = current.next
return ll.head
</code></pre>
|
<pre><code>runner = current
</code></pre>
<p>This does not create a <em>copy</em> of <code>current</code>, but rather assigns another name to the object also known as <code>current</code>. So <code>runner</code> and <code>current</code> are simply two different names for the same object instance. Any changes to the state of <code>runner</code> therefore also affect <code>current</code>, at least until <code>runner</code> is reassigned to the next node in the list.</p>
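<p>A minimal sketch of that aliasing behaviour, outside of any linked list (the names and values here are illustrative):</p>

```python
class Node:
    """A bare-bones linked-list node used only for this demonstration."""
    def __init__(self, value):
        self.value = value
        self.next = None

current = Node(1)
runner = current          # no copy: both names point at the same Node
runner.value = 99         # mutating through one name...
print(current.value)      # 99 -- ...is visible through the other
runner = Node(2)          # reassignment only rebinds the name 'runner'
print(current.value)      # 99 -- 'current' is unaffected
```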
|
python|linked-list|deep-copy|shallow-copy
| 0 |
1,908,992 | 59,216,960 |
Can't import from module despite presence of __init__.py
|
<p>I have the following folder structure</p>
<pre><code>project_folder/
pyutils/
__init__.py
script1.py
script2.py
lambdas/
__init__.py
lambda_script1.py
lambda_script2.py
lambda_tests/
__init__.py
lambda_test1.py
</code></pre>
<p>Within <code>lambda_test1.py</code> I have the following attempts</p>
<pre><code>from lambdas.lambda_script1 import * # errors saying no module named lambdas
from .lambdas.lambda_script1 import * # errors saying ModuleNotFoundError: No module named '__main__.lambdas'; '__main__' is not a package
from ..lambdas.lmabda_script1 import * # errors saying tried to import above top level path
</code></pre>
<p>I'm trying to run my tests from the project folder with a command like</p>
<pre><code>python pyutils/lambda_tests/lambda_test1.py
</code></pre>
<p>But none of the options seem to work</p>
<p>If I run <code>IPython</code> from within the <code>pyutils</code> folder and run <code>from lambdas.lambda_script1 import *</code> it works. Is this a python path problem?</p>
<p>I also tried adding an <code>__init__.py</code> at the project folder level and it still didn't work.</p>
|
<pre><code>project_folder/
pyutils/
__init__.py
script1.py
script2.py
lambdas/
__init__.py
lambda_script1.py
lambda_script2.py
lambda_tests/
__init__.py
lambda_test1.py
</code></pre>
<p>Firstly, export your <code>PYTHONPATH</code> so it points at the <code>pyutils</code> folder:</p>
<p><code>export PYTHONPATH=/path/to/project_folder/pyutils</code></p>
<p>Then you would use imports like:</p>
<p><code>import script1</code></p>
<p><code>import script2</code></p>
<p><code>import lambdas.lambda_script1</code></p>
<p><code>import lambdas.lambda_script2</code></p>
<p><code>import lambda_tests.lambda_test1</code></p>
|
python
| 0 |
1,908,993 | 53,896,509 |
Django Mock QuerySet
|
<p>Players class:</p>
<pre><code>class Players:
def __init__(self):
self.players = PlayerModel.objects.all()
def count(self):
return len(self.players)
</code></pre>
<p>Test:</p>
<pre><code> def setUp(self):
self.players = Players()
@patch('riskgame.entities.Players.count', return_value=9, create=True)
def test_count(self):
number = self.players.count()
self.assertEqual(number, 9)
</code></pre>
<p>This test throws:</p>
<pre><code>Failed: Database access not allowed, use the "django_db" mark, or the "db" or "transactional_db" fixtures to enable it.
</code></pre>
<p>But why needs this test the database? It seems like that the @patch on Players.count() is'nt working. Is there a better solution to make this more testable?</p>
|
<p>Fixed it by overriding the <code>players</code> attribute in <code>setUp</code>:</p>
<pre><code>def setUp(self):
    self.players = []
</code></pre>
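<p>For reference, the reason the patch did not help is that the database is hit inside <code>__init__</code>, not inside <code>count()</code>, so mocking <code>count</code> never prevents the query. A self-contained sketch of mocking the model instead (the injected-model constructor here is purely illustrative, not the original signature):</p>

```python
from unittest.mock import MagicMock

class Players:
    def __init__(self, model):
        # the real class queries PlayerModel.objects.all() directly;
        # injecting the model here just makes the sketch testable
        self.players = model.objects.all()

    def count(self):
        return len(self.players)

mock_model = MagicMock()
mock_model.objects.all.return_value = [None] * 9  # fake queryset of 9 players
players = Players(mock_model)
print(players.count())  # 9
```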
|
python|django|unit-testing
| 0 |
1,908,994 | 65,481,003 |
TypeError: '<=' not supported between instances of 'int' and 'str' when doing Seaborn Histplot
|
<p>I get the above mentioned Type Error when creating subplots of different histograms.</p>
<p>To give some context, I have a large dataset that I had to clean in several separate chunks to avoid memory issues. I saved each chunks individually and then proceeded to concatenate them together on another notebook.</p>
<p>When I ran my code to create the subplots with the chunked dataframes it was working fine, but when I ran the subplot code again with the concatenate data I get a Type Error. I don't understand why since I'm not changing anything really.</p>
<p>The error occurs here:</p>
<p><a href="https://i.stack.imgur.com/5mkWZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5mkWZ.png" alt="enter image description here" /></a></p>
<p><strong>My full code</strong></p>
<pre><code>#Overall
CRev_All_age1 = df_optimized.groupby(['YearOnboarded', 'age_buckets']).sum().reset_index()
#Europe
CRev_EU = df_optimized.loc[df_optimized['Continents'] == 'Europe']
Plot_CRev_EU_age1 = CRev_EU.groupby(['YearOnboarded', 'age_buckets']).sum().reset_index()
#Asia
CRev_Asia = df_optimized.loc[df_optimized['Continents'] == 'Asia']
Plot_CRev_Asia_age1 = CRev_Asia.groupby(['YearOnboarded', 'age_buckets']).sum().reset_index()
#Other
CRev_Other = df_optimized.loc[(df_optimized['Continents'] != 'Europe') & (df_optimized['Continents'] != 'Asia')]
Plot_CRev_Other_age1 = CRev_Other.groupby(['YearOnboarded', 'age_buckets']).sum().reset_index()
fig, axes = plt.subplots(2,2, constrained_layout=True, figsize=(14,12))
ax1, ax2, ax3, ax4 =axes.flatten()
#plot1
ax1 = sns.histplot( data=CRev_All_age1, x="YearOnboarded", hue="age_buckets",weights="Revenue2", multiple="stack", discrete=True, shrink=.9, ax=ax1)
ax1.set_title('Overall - Client Revenue (Million)', fontsize=16, fontweight='bold')
ax1.tick_params('x', labelrotation=15)
ax1.set_ylabel('Revenue', fontsize=12)
ax1.set_xlabel('Year Onboarded', fontsize=12)
#plot2
ax2 = sns.histplot( data=Plot_CRev_EU_age1, x="YearOnboarded", hue="age_buckets",weights="Revenue2", multiple="stack", discrete=True, shrink=.9, ax=ax2)
ax2.set_title('Europe - Client Revenue (Million)', fontsize=14, fontweight='bold')
plt.setp(ax2.xaxis.get_majorticklabels(), rotation=15)
ax2.set_ylabel('Revenue', fontsize=12)
ax2.set_xlabel('Year Onboarded', fontsize=12)
#plot3
ax3 = sns.histplot( data=Plot_CRev_Asia_age1, x="YearOnboarded", hue="age_buckets",weights="Revenue2", multiple="stack", discrete=True, shrink=.9, ax=ax3)
ax3.set_title('Asia - Client Revenue (Million)', fontsize=14, fontweight='bold')
for tick in ax3.get_xticklabels():
tick.set_rotation(15)
ax3.set_ylabel('Revenue', fontsize=12)
ax3.set_xlabel('Year Onboarded', fontsize=12)
#plot4
ax4 = sns.histplot( data=Plot_CRev_Other_age1, x="YearOnboarded", hue="age_buckets",weights="Revenue2", multiple="stack", discrete=True, shrink=.9, ax=ax4)
ax4.set_title('Other Continents - Client Revenue (Million)', fontsize=14, fontweight='bold')
ax4.tick_params(labelrotation=15)
ax4.set_ylabel('Revenue', fontsize=12)
ax4.set_xlabel('Year Onboarded', fontsize=12)
plt.show()
</code></pre>
<p><strong>Toy data</strong></p>
<pre><code>dataset = {'YearOnboarded': [2018,2019,2020,2016,2019,2020,2017,2019,2020,2018,2019,2020,2016,2016,2016,2017,2016,2018,2016],
'Revenue2': [100,50,25,30,40,50,60,100,20,40,100,20,5,5,8,4,10,20,8],
'age_buckets': ['18-30','30-39','40-49','50-59','18-30','30-39','40-49','50-59','18-30','30-39','40-49','50-59',
'18-30','30-39','40-49','50-59','18-30','30-39','40-49'],
'Continents': ['Europe','Asia','Africa','Africa','Other','Asia','Africa','Other','America','America','Europe','Europe',
'Other','Europe','Asia','Africa','Asia','Europe','Other']}
df_optimized = pd.DataFrame(data=dataset)
</code></pre>
<p>I would appreciate if someone could help me understand why this occurs and how to solve the issue.</p>
<p>Thank you!</p>
<p><strong>Edit:</strong> Found where the problem was coming from and how to solve it. When importing each chunk dataset into the new kernel, one column had mixed data types. Converting the column with mixed data types using <code>.astype('category')</code> didn't solve my problem, therefore I had to change the datatype while importing the data with <code>read_csv</code> <code>dtype</code> and it worked.</p>
|
<p>This issue likely stems from a character in Revenue2 which pandas does not recognize as an integer when loading the data from whatever filetype you're using to save your data chunks. pandas reads the entire column as an object even if there's only one element in the column that can't be interpreted as an integer. In the example, I've used <code>-</code> to represent this string character with no integer equivalent.<br />
If you run this code:</p>
<pre><code>import pandas as pd
import seaborn as sns
df = pd.DataFrame({'YearOnboarded': [2018,2019,2020,2016,2019,2020,2017,2019,2020,2018,2019,2020,2016,2016,2016,2017,2016,2018,2016],
'Revenue2': ["-",50,25,30,40,50,60,100,20,40,100,20,5,5,8,4,10,20,8],
'age_buckets': ['18-30','30-39','40-49','50-59','18-30','30-39','40-49','50-59','18-30','30-39','40-49','50-59',
'18-30','30-39','40-49','50-59','18-30','30-39','40-49'],
'Continents': ['Europe','Asia','Africa','Africa','Other','Asia','Africa','Other','America','America','Europe','Europe',
'Other','Europe','Asia','Africa','Asia','Europe','Other']})
df['Revenue2'] = df['Revenue2'].astype(int)
</code></pre>
<p>You'll get this error:</p>
<pre><code>ValueError: invalid literal for int() with base 10: '-'
</code></pre>
<p>which is helpful because it indicates the first offending character, then you can replace that character with a filler, and try again:</p>
<pre><code>df['Revenue2'] = df.Revenue2.astype(str).str.replace('-','0').astype(int)
df['Revenue2'] = df['Revenue2'].astype(int)
</code></pre>
<p>eventually, I think you should be able to remove all invalid characters, and have a column that's all integers.</p>
|
python|pandas|seaborn
| 2 |
1,908,995 | 45,581,358 |
Converting CIDR IP address ranges using ipaddress and output to a data frame
|
<p>I have a dataframe with IPv4 and IPv6 CIDR IP address ranges (these can be split up if necessary) in a data frame. I am hoping to take those ranges and create a data frame with each address in the range, so I can join that with another data frame to do some filtering. </p>
<p>Using the ipaddress package, the function to expand a list is:</p>
<pre><code>a = ip.ip_network('103.21.244.0/22')
for x in a.hosts():
print(x)
</code></pre>
<p>This yields a list for just this IP range. Does anyone know how to put in a series of CIDR ranges so I don't have to perform the above n times? If I put a reference to the data frame in place of the IP address above, I get a ValueError stating that it doesn't appear to be an IPv4 or IPv6 network.</p>
<p>The secondary question, as a Python newbie, what do I need to do to get these expanded ranges into a list or data frame? I tried this:</p>
<pre><code>a = ip.ip_network('103.21.244.0/22')
ip_list = [] #x for x in a.hosts()
for x in a.hosts():
ip_list.append(x)
ip_list
</code></pre>
<p>And ended up with:</p>
<pre><code>[IPv4Address('103.21.244.1'),
IPv4Address('103.21.244.2'),
IPv4Address('103.21.244.3'),
IPv4Address('103.21.244.4'),
IPv4Address('103.21.244.5'),
...]
</code></pre>
<p>I'm sure there is a better way than taking that output and regexing the IP addresses. </p>
|
<p>About the first question, I'm afraid you can't do it if the module doesn't support it, and I don't think it does <a href="https://docs.python.org/3/library/ipaddress.html#ipaddress.ip_network" rel="nofollow noreferrer">given the docs</a>. Python offers two ways to apply a method to a list besides the traditional for loop:</p>
<p>The <a href="https://docs.python.org/3.5/library/functions.html#map" rel="nofollow noreferrer"><code>map()</code></a> way, applies an operation to all the items of a list and returns a <a href="https://stackoverflow.com/questions/1756096/understanding-generators-in-python">generator</a> of the results: </p>
<pre><code>def get_single_ip_from_cidr(cidr):
# ...
cidr_list = ["10.0.0.0/8","192.168.0.0/16"]
results_generator = map(get_single_ip_from_cidr, cidr_list)
print(list(results_generator)) # Casting results_generator to list as you cant print generators directly
</code></pre>
<p>The pythonic way with <a href="https://docs.python.org/3.7/tutorial/datastructures.html#list-comprehensions" rel="nofollow noreferrer">List comprehensions</a>:</p>
<pre><code>def get_single_ip_from_cidr(cidr):
# ...
results = [get_single_ip_from_cidr(cidr_addr) for cidr_addr in cidr_list]
</code></pre>
<p>About the second question, the list you get is a list of IPv4Address objects, you are just seeing a stringified representation of it. By using <code>help(ipaddress.IPv4Address)</code>, you can see that is has two attributes named <code>compressed</code> and <code>exploded</code> that both yield what you want (I'm assuming the difference between the two is only relevant in IPv6 where you can use <code>::</code> as a shorthand for a group of zeroes):</p>
<pre><code>a = ip.ip_network('103.21.244.0/22')
ip_list = [addr.compressed for addr in a.hosts()]
</code></pre>
<p>Jeff's answer is doing exactly the same thing but is more verbose.</p>
<p>So, you can refactor your entire code to get all hosts from a list of networks like so:</p>
<pre><code>import ipaddress as ip
def get_ip_from_cidr(cidr):
return [addr.compressed for addr in ip.ip_network(cidr)]
cidr_list = ["192.168.0.0/30","10.0.0.0/26"]
print([get_ip_from_cidr(cidr) for cidr in cidr_list])
</code></pre>
|
python|python-3.x|jupyter-notebook
| 0 |
1,908,996 | 45,655,987 |
Efficient solution for merging 2 sorted lists in Python
|
<p>I am teaching my self Python by starting with the crash course published by Google. One of the practice problems is to write a function that takes 2 <strong>sorted</strong> lists, merges them together, and returns a sorted list. The most obvious solution is:</p>
<pre><code>def linear_merge(list1, list2):
list = list1 + list2
list.sort()
return list
</code></pre>
<p>Obviously above is not very efficient, or so I thought, because on the backend the sort function will have to run over the entire output list again. The problem asks for an efficient way of implementing this function, presumably that it can work well on huge lists. My code was similar to Google's answer, but I tweaked it a bit to make it a bit faster:</p>
<pre><code>def linear_merge_goog(list1, list2):
result = []
while len(list1) and len(list2):
if list1[-1] > list2[-1]:
result.append(list1.pop())
else:
result.append(list2.pop())
result.extend(list1)
result.extend(list2)
return result[::-1]
</code></pre>
<p>The original Google code was popping from the front of the array, but even they note that it's much more efficient to pop from the back of the array and then reverse it.</p>
<p>I tried to run both functions with large 20 million entry arrays, and the simple stupid combine and sort function comes up on top by a margin of 3X+ every time. Sub 1 second vs. over 3 seconds for what <em>should</em> be the more efficient method.</p>
<p>Any ideas? Am I missing something. Does it have to do with built in sort function being compiled while my code is interpreted (doesn't sound likely). Any other ideas?</p>
|
<p>It's because of the Python implementation of <code>.sort()</code>. Python uses something called <a href="https://en.wikipedia.org/wiki/Timsort" rel="noreferrer">Timsort</a>.</p>
<p>Timsort is a type of mergesort. Its distinguishing characteristic is that it identifies "runs" of presorted data that it uses for the merge. In real world data, sorted runs in unsorted data are very common and you can sort two sorted arrays in <em>O(n)</em> time if they are presorted. This can cut down tremendously on sort times which typically run in <em>O(nlog(n))</em> time. </p>
<p>So what's happening is that when you call <code>list.sort()</code> in Python, it identifies the two runs of sorted data, <code>list1</code> and <code>list2</code>, and merges them in <em>O(n)</em> time. Additionally, this implementation is compiled C code, which will be faster than an interpreted Python implementation doing the same thing. </p>
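<p>As an aside, the standard library also exposes an <em>O(n)</em> two-way merge directly via <code>heapq.merge</code>, which consumes both presorted inputs lazily — a sketch, not a benchmark:</p>

```python
import heapq

list1 = [1, 3, 5, 7]
list2 = [2, 4, 6, 8]
# heapq.merge yields the combined sequence in sorted order in O(n)
merged = list(heapq.merge(list1, list2))
print(merged)  # [1, 2, 3, 4, 5, 6, 7, 8]
```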
|
python|arrays|list|sorting|performance-testing
| 6 |
1,908,997 | 28,506,837 |
finding similarities in rows for a pandas dataframe
|
<p>I'm struggling to think of a way to efficiently accomplish this data wrangling problem in pandas. Here is my pandas dataframe:</p>
<pre><code> brian steve joe tom
0 1 0 1 0
1 1 0 0 0
2 0 1 1 0
3 1 0 1 1
</code></pre>
<p>I essentially want to find who has a value of 1 in the same row and then count the number of rows where they both have 1's. So, for instance, brian and joe are in the same row twice (rows 0 and 3) so their score together would be 2. The first way I thought about approaching this was by creating dictionaries. I thought I'd do something like {brian: 0, 1, 3} and then compare/count the similarities. Couldn't get this to work as I had problems with multilevel indices. </p>
<p>I then thought possibility reshaping/melting the dataframe in order to solve the problem. </p>
<p>I was thinking a df that looks like this (showing snippet of row 1 essentially melted):</p>
<pre><code>0 brian steve 1 0
1 brian joe 1 1
2 brian tom 1 0
3 steve brian 0 1
4 steve joe 0 1
5 steve tom 0 0
...
</code></pre>
<p>Am I thinking about this the right way? I tried using a lot of different variations of pd.melt and couldn't get what I wanted. Is there something simple I'm missing? It's causing a lot of frustration trying to reshape the dataframe to what I want to solve the problem, so any help would be appreciated</p>
|
<p>A matrix multiplication should do, no? Or it's more complicated than that?</p>
<pre><code>In [37]: df
Out[37]:
brian steve joe tom
0 1 0 1 0
1 1 0 0 0
2 0 1 1 0
3 1 0 1 1
In [38]: df.T.dot(df)
Out[38]:
brian steve joe tom
brian 3 0 2 1
steve 0 1 1 0
joe 2 1 3 1
tom 1 0 1 1
</code></pre>
<p><strong>EDIT:</strong></p>
<p>Thanks @exp1orer</p>
<pre><code>In [40]: df2 = df.T.dot(df)
In [41]: df3 = df2.stack().reset_index()
In [42]: df3[df3.level_0 != df3.level_1]
Out[42]:
level_0 level_1 0
1 brian steve 0
2 brian joe 2
3 brian tom 1
4 steve brian 0
6 steve joe 1
7 steve tom 0
8 joe brian 2
9 joe steve 1
11 joe tom 1
12 tom brian 1
13 tom steve 0
14 tom joe 1
</code></pre>
|
python|pandas
| 4 |
1,908,998 | 14,804,735 |
Tkinter: How can I dynamically create a widget that can then be destroyed or removed?
|
<p>I am looking for a method to create widgets (most likely a Label) ad nauseam, with the caveat that they can be removed or unpacked later on.</p>
<p>I can generate the widgets just fine, but they are not assigned a name. I do not understand how I would, if it is possible, remove a certain anonymous widget. </p>
<p>My first instinct was to dynamically create variable names with a stable convention, but that may unnecessarily open a can of worms. The idea is expressed below. I'd like to be able to remove a certain Button widget while not knowing at run-time how many I will handle. Thank you.</p>
<pre><code>from Tkinter import *
import time
import ttk
def onDoubleClick(event):
item = t.selection()
#print "you clicked on", t.item(item,"text")
if (t.item(item,"text")=="Start IO"):
Button2 = Button(frame2,text="Button2",command=but).pack()
def but():
pack_forget()
root=Tk()
root.geometry("800x300")
frame1 = Frame(root)
frame2 = Frame(root)
t=ttk.Treeview(frame1)
t.heading("#0",text="Test steps")
t.insert("",0,"IO",text="IO")
t.insert("IO","end",text="Start")
t.bind("<Double-1>", onDoubleClick)
t.pack()
frame1.pack(side=LEFT)
frame2.pack(side=LEFT)
</code></pre>
<p><strong>EDIT:</strong> My feature request was admittedly short-sighted. My ultimate goal is to have a Label widget and a Button side-by-side, both comprising what is to be a 'step' in a test launcher. Clicking the button will remove both itself and its respective Label from the GUI. I'm able to create both widgets and delete either one of them on the Button's callback, but to pack_forget both I believe I need to <strong>def</strong> a function. I believe my problem lies in passing a correct reference to <strong>def removeStep</strong> A use case is diagrammed below: ....[If this could be solved my RTFM please feel free to let me know, I just couldn't find it]</p>
<p>TEST: Make a PB&J</p>
<p>Step 0: Get Bread [Remove step]</p>
<p>Step 1: Smear PB [Remove step]</p>
<p>Step 2: Smear Jelly [Remove step]</p>
|
<p>You'll want to store the dynamically-created widgets in a list. Have something like</p>
<pre><code>dynamic_buttons = []
def onDoubleClick(event):
...
button = Button(...)
dynamic_buttons.append(button)
button.pack()
</code></pre>
<p>You can then access the buttons for removal with, say,</p>
<pre><code>dynamic_buttons[0].destroy()
</code></pre>
<p>Edit: With more information about your use case, I would probably do</p>
<pre><code>class RemovableTask(Frame):
def __init__(self, master, name, **options):
Frame.__init__(self, master, **options)
lbl = Label(self, text=name)
btn = Button(self, text='Remove step', command=self.destroy)
lbl.grid(row=0, column=0)
btn.grid(row=0, column=1)
</code></pre>
<p>Then just create instances of RemovableTask with names like "Step 0: Get Bread", and grid or pack them in a column. Everything else would be handled automatically.</p>
|
python|user-interface|tkinter
| 9 |
1,908,999 | 68,732,435 |
python: Task got bad yield:
|
<p>I am aiming to make parallel requests to a list of endpoints, hence using asyncio's <code>ensure_future</code>. Can someone please take a look and give me an idea of how to fix the errors? (Python 3.6.7)</p>
<pre><code>import asyncio
import treq
async def main_aysnc():
loop = asyncio.get_event_loop()
await start_coros_parllel()
async def start_coros_parllel():
config = {}
config['services'] = [
b'https://www.google.com',
b'https://www.yahoo.com',
b'https://www.facebook.com'
]
results = await asyncio.gather(*[asyncio.ensure_future(treq.get(service)) for service in config['services']])
if __name__ == "__main__":
asyncio.get_event_loop().run_until_complete(main_aysnc())
</code></pre>
<p>LOGS</p>
<pre><code>Traceback (most recent call last):
File "2cr.py", line 35, in <module>
asyncio.get_event_loop().run_until_complete(main_aysnc())
File "/Users/vchauhan/.pyenv/versions/3.6.7/lib/python3.6/asyncio/base_events.py", line 473, in run_until_complete
return future.result()
File "2cr.py", line 7, in main_aysnc
await start_coros_parllel()
File "2cr.py", line 20, in start_coros_parllel
results = await asyncio.gather(*[asyncio.ensure_future(treq.get(service)) for service in config['services']])
File "/Users/vchauhan/.pyenv/versions/3.6.7/lib/python3.6/asyncio/tasks.py", line 537, in _wrap_awaitable
return (yield from awaitable.__await__())
RuntimeError: Task got bad yield: <Deferred at 0x104cc5048>
</code></pre>
|
<p>The problem is that you should not use <code>asyncio</code> with <code>treq</code>.</p>
<p>According to documentation:</p>
<blockquote>
<p>treq depends on a recent Twisted and functions on Python 2.7 and
Python 3.3+ (including PyPy).</p>
</blockquote>
<p>If you want to use <code>asyncio</code>, you have to use another HTTP client framework, e.g. <code>aiohttp</code>.</p>
<p>If you need any example on how to use aiohttp client, feel free to ask.</p>
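<p>To illustrate the shape of the <code>asyncio</code>-native version, here is a sketch of the same fan-out pattern with a stand-in coroutine where the aiohttp request would go (the HTTP call itself is library-specific, and <code>asyncio.run</code> needs Python 3.7+):</p>

```python
import asyncio

async def fetch(service):
    # stand-in for an async HTTP GET, e.g. aiohttp's session.get(service)
    await asyncio.sleep(0)
    return (service, 200)

async def main():
    services = ['https://www.google.com', 'https://www.yahoo.com']
    # gather schedules the coroutines concurrently and collects results in order
    return await asyncio.gather(*(fetch(s) for s in services))

results = asyncio.run(main())
print(results)
```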
|
python-3.x|python-asyncio|twisted
| 0 |