Dataset columns (name - dtype, value or length range):
Unnamed: 0 - int64, 0 to 1.91M
id - int64, 337 to 73.8M
title - string, lengths 10 to 150
question - string, lengths 21 to 64.2k
answer - string, lengths 19 to 59.4k
tags - string, lengths 5 to 112
score - int64, -10 to 17.3k
1,909,400
67,316,256
Why do I get different predictions using Keras sequential neural network in a loop?
<p>I came across a weird difference between keras model.fit() and sklearn model.fit() functions. When model.fit() is called inside a loop I get inconsistent predictions using a Keras sequential model. This is not the case when using an sklearn model. See sample code to reproduce the phenomenon.</p> <pre><code>from numpy.random import seed seed(1337) import tensorflow as tf tf.random.set_seed(1337) from sklearn.linear_model import LogisticRegression from keras.models import Sequential from keras.layers import Dense, Dropout from keras.layers import InputLayer from sklearn.datasets import make_blobs from sklearn.preprocessing import MinMaxScaler import numpy as np def get_sequential_dnn(NUM_COLS, NUM_ROWS): # code for model if __name__ == &quot;__main__&quot;: input_size = 10 X, y = make_blobs(n_samples=100, centers=2, n_features=input_size, random_state=1 ) scalar = MinMaxScaler() scalar.fit(X) X = scalar.transform(X) model = get_sequential_dnn(X.shape[1], X.shape[0]) # print(model.summary()) # model = LogisticRegression() for i in range(2): model.fit(X, y, epochs=100, verbose=0, shuffle=False) # model.fit(X, y) Xnew, _ = make_blobs(n_samples=3, centers=2, n_features=10, random_state=1) Xnew = scalar.transform(Xnew) # make a prediction # ynew = model.predict_proba(Xnew)[:, 1] ynew = model.predict_proba(Xnew) ynew = np.array(ynew) # show the inputs and predicted outputs print('--------------') for i in range(len(Xnew)): print(&quot;X=%s \n Predicted=%s&quot; % (Xnew[i], ynew[i])) </code></pre> <p>The output of this is</p> <pre><code>-------------- X=[0.32799209 0.32682211 0.62699485 0.89987274 0.59894281 0.94662653 0.77125788 0.73345369 0.2153754 0.35317172] Predicted=[0.9931685] X=[0.60876924 0.33208319 0.24770841 0.11435312 0.66211608 0.17361879 0.12891829 0.25729677 0.69975833 0.73165292] Predicted=[0.35249507] X=[0.65154993 0.26153846 0.2416324 0.11793901 0.7047334 0.17706289 0.07761879 0.45189967 0.8481064 0.85092378] Predicted=[0.35249507] -------------- X=[0.32799209 0.32682211 0.62699485 0.89987274 0.59894281 0.94662653 0.77125788 0.73345369 0.2153754 0.35317172] Predicted=[1.] 
X=[0.60876924 0.33208319 0.24770841 0.11435312 0.66211608 0.17361879 0.12891829 0.25729677 0.69975833 0.73165292] Predicted=[0.17942095] X=[0.65154993 0.26153846 0.2416324 0.11793901 0.7047334 0.17706289 0.07761879 0.45189967 0.8481064 0.85092378] Predicted=[0.17942095] </code></pre> <p>While if I use a Logistic Regression (un-comment the commented lines) the predictions are consistent:</p> <pre><code>-------------- X=[0.32799209 0.32682211 0.62699485 0.89987274 0.59894281 0.94662653 0.77125788 0.73345369 0.2153754 0.35317172] Predicted=0.929209043999009 X=[0.60876924 0.33208319 0.24770841 0.11435312 0.66211608 0.17361879 0.12891829 0.25729677 0.69975833 0.73165292] Predicted=0.04643513037543502 X=[0.65154993 0.26153846 0.2416324 0.11793901 0.7047334 0.17706289 0.07761879 0.45189967 0.8481064 0.85092378] Predicted=0.038716408758471876 -------------- X=[0.32799209 0.32682211 0.62699485 0.89987274 0.59894281 0.94662653 0.77125788 0.73345369 0.2153754 0.35317172] Predicted=0.929209043999009 X=[0.60876924 0.33208319 0.24770841 0.11435312 0.66211608 0.17361879 0.12891829 0.25729677 0.69975833 0.73165292] Predicted=0.04643513037543502 X=[0.65154993 0.26153846 0.2416324 0.11793901 0.7047334 0.17706289 0.07761879 0.45189967 0.8481064 0.85092378] Predicted=0.038716408758471876 </code></pre> <p>I get that the obvious solution to this is fit the model before the loop, probably there is a strong randomness how Keras models fit the data to the labels, but there are a couple of cases where you need to have a loop to get prediction scores. For example if you want to perform a 10-fold cross validation to get the AUC, sensitivity, specificity values on a training data. In these situations this randomness is unacceptable.</p> <p>What is causing this inconsistency and what is the solution to it?</p>
<p>There are a couple of issues with the way you are trying to make reproducible results with Keras.</p> <ol> <li>You are calling <code>fit</code> (when <code>i==1</code>) on an already fitted model (from <code>i==0</code>). So the optimizer sees different sets of initial weights in the two cases and you end up with two different models. <strong>Solution</strong>: Get a fresh model every time. This is not the case with sklearn, which starts from freshly initialized weights every time <code>fit</code> is called.</li> <li><code>model.fit</code> internally consumes the current state of the random number generator. You seeded it outside the loop, so the state will be different when <code>fit</code> is called the second time. <strong>Solution</strong>: Seed inside the loop.</li> </ol> <h3>Sample code with issue</h3> <pre><code># Issue 2 here
tf.random.set_seed(1337)

def get_model():
    model = Sequential()
    model.add(Dense(4, input_dim=8, activation='relu'))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam')
    return model

X = np.random.randn(10,8)
y = np.random.randn(10,1)

# Issue 1 here
model = get_model()
results = []
for i in range(10):
    model.fit(X, y, epochs=5, verbose=0, shuffle=False)
    results.append(np.sum(model.predict(X)))

assert np.all(np.isclose(results, results[0]))
</code></pre> <p>As you can see, the assert fails.</p> <h3>Corrected code</h3> <pre><code>results = []
for i in range(10):
    tf.random.set_seed(1337)
    model = get_model()
    model.fit(X, y, epochs=5, verbose=0, shuffle=False)
    results.append(np.sum(model.predict(X)))

assert np.all(np.isclose(results, results[0]))
</code></pre>
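<p>The same two fixes carry over to the cross-validation use case raised in the question: re-seed and rebuild the model on every fold. A minimal sketch, assuming <code>get_sequential_dnn</code> is the question's own model builder and returns a freshly compiled Keras model:</p> <pre><code>from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score
import tensorflow as tf

def cross_validated_auc(X, y, n_splits=10):
    aucs = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True, random_state=1337).split(X):
        tf.random.set_seed(1337)                             # issue 2: seed inside the loop
        model = get_sequential_dnn(X.shape[1], X.shape[0])   # issue 1: fresh model every fold
        model.fit(X[train_idx], y[train_idx], epochs=100, verbose=0, shuffle=False)
        proba = model.predict(X[test_idx]).ravel()           # predicted probabilities for the fold
        aucs.append(roc_auc_score(y[test_idx], proba))
    return aucs
</code></pre>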
python|tensorflow|machine-learning|keras|scikit-learn
1
1,909,401
60,716,486
Python, delete a list item from class method
<p>I have my class. I want to create a method inside to delete a list item by code attribute. </p> <pre><code>class MyClass(Base): def __init__(self, code, name): self.__code = code self.__name = name @property def code(self): return self.__code @property def name(self): return self.__name @code.setter def code(self, new_code): self.__code=new_code def __repr__(self): x = f"Code: {self.__code} and Name:{self.__name}" return(x) def __deleteitem__(self, code): print("The code: {self.__code} was deleted") list=[] list.append(MyClass(1234,"Aijio")) list.append(MyClass(123,"Anodnd")) list.append(MyClass(1236,"Jfifi")) list.append(MyClass(1238,"Roberto")) print(list) lista.deleteitem(123) </code></pre> <p>How I can create a method who deletes the code that I send?</p> <p>Regards</p>
<p>You can try this below:</p> <pre><code>class MyClass(Base):
    def __init__(self, code, name):
        self.__code = code
        self.__name = name

    @property
    def code(self):
        return self.__code

    @property
    def name(self):
        return self.__name

    @code.setter
    def code(self, new_code):
        self.__code = new_code

    def __repr__(self):
        return f"Code: {self.__code} and Name:{self.__name}"

    def __deleteitem__(self, code):
        # Logic for deletion: iterate over a copy so removing while looping is safe
        for obj in list[:]:
            if obj.code == code:
                list.remove(obj)
                print(f"The code: {code} was deleted")

list = []   # note: this name shadows the built-in list type
list.append(MyClass(1234, "Aijio"))
list.append(MyClass(123, "Anodnd"))
list.append(MyClass(1236, "Jfifi"))
list.append(MyClass(1238, "Roberto"))

myclass = MyClass(None, None)
myclass.__deleteitem__(123)
</code></pre>
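<p>For what it's worth, a plainer alternative (a sketch, assuming the question's <code>MyClass</code> as defined above) is to skip the dunder method entirely and filter the list with a small helper function:</p> <pre><code>def delete_by_code(objects, code):
    '''Return a new list without the objects whose .code matches.'''
    removed = [obj for obj in objects if obj.code == code]
    kept = [obj for obj in objects if obj.code != code]
    for obj in removed:
        print(f'The code: {obj.code} was deleted')
    return kept

items = [MyClass(1234, 'Aijio'), MyClass(123, 'Anodnd'),
         MyClass(1236, 'Jfifi'), MyClass(1238, 'Roberto')]
items = delete_by_code(items, 123)
print(items)   # the object with code 123 is gone
</code></pre>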
python|list|magic-methods
1
1,909,402
64,189,889
How to pull data from a text file into sentences defined as rows of data in between blank rows?
<p>Data is located in a text file, and I want to group the data inside it into sentences. Definition of a sentence is all the rows one after another with at least 1 character in each row. There are blank rows in between the rows with data and so I want the blank rows to mark the beginning and end of a sentence. Is there a way to do this with list comprehension?</p> <p>Example from text file. Data would look like this:</p> <pre class="lang-none prettyprint-override"><code>This is the first sentence. This is a really long sentence and it just keeps going across many rows there will not necessarily be punctuation or consistency in word length the only difference in ending sentence is the next row will be blank here would be the third sentence as you see the blanks between rows of data help define what a sentence is this would be sentence 4 i want to pull data from text file as such (in sentences) where sentences are defined with blank records in between this would be sentence 5 since blank row above it and continues but ends because blank row(s) below it </code></pre>
<p>You can get the whole file as a single string with <code>file_as_string = file_object.read()</code>. As you want to split this string on an empty line, that's equivalent to splitting on two consecutive newline characters, so we can do <code>sentences = file_as_string.split(&quot;\n\n&quot;)</code>. Finally, you might want to remove the line breaks that are still present in the middle of the sentences. You can do that with a list comprehension, replacing each newline with a space so that words on adjacent lines don't run together: <code>sentences = [s.replace('\n', ' ') for s in sentences]</code></p> <p>In total that gives:</p> <pre><code>file_as_string = file_object.read()
sentences = file_as_string.split(&quot;\n\n&quot;)
sentences = [s.replace('\n', ' ') for s in sentences]
</code></pre>
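<p>If the file might use Windows line endings or contain more than one blank line between sentences (neither is stated in the question, so this is an assumption), a slightly more defensive sketch is:</p> <pre><code>import re

with open('data.txt') as file_object:            # 'data.txt' is a placeholder file name
    file_as_string = file_object.read()

# split on one or more blank lines, tolerating \r\n line endings
sentences = re.split(r'(?:\r?\n){2,}', file_as_string.strip())
# collapse the remaining line breaks inside each sentence to single spaces
sentences = [' '.join(s.split()) for s in sentences]
</code></pre>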
python|list|list-comprehension|data-mining
2
1,909,403
70,445,124
Error trying to generate some faces and store in another folder
<p>I am trying to use transfer learning to train an image recognition model, I want to generate faces in one of my folders into another folder. This is my solution</p> <pre><code># Loading the HAARCascade Face Detector face_detector = cv2.CascadeClassifier('Haarcascades/haarcascade_frontalface_default.xml') # Directory of image of persons to perform extraction mypath = &quot;./where_i_want_to_extract_face_from/&quot; image_file_names = [f for f in listdir(mypath) if isfile(join(mypath, f))] print(&quot;Image name successfully collected&quot;) for image_name in image_file_names: person_image = cv2.imread(mypath+image_name) face_info = face_detector.detectMultiScale(person_image, 1.3, 5) for (x,y,w,h) in face_info: face = person_image[y:y+h, x:x+w] explorer = cv2.resize(face, (128, 128), interpolation = cv2.INTER_CUBIC) path = &quot;./folder_to_save_the_extracted_faces/&quot; + &quot;face_&quot; + image_name cv2.imwrite(path, explorer ) cv2.imshow(&quot;face&quot;, explorer ) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p>However, running the code above keeps generating:</p> <pre><code>NameError: name 'explorer' is not defined </code></pre> <p>What am I doing wrong?</p>
<p>Still don't understand how, but restarting my IDE fixed the error for me.</p>
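<p>For reference, a likely cause of this particular <code>NameError</code> (not confirmed by the asker) is an image in which <code>detectMultiScale</code> finds no face: the inner loop body then never runs, so <code>explorer</code> is never assigned before the later <code>imshow</code>/<code>imwrite</code> calls use it. A defensive sketch using the question's own variable names:</p> <pre><code>for image_name in image_file_names:
    person_image = cv2.imread(mypath + image_name)
    if person_image is None:                      # unreadable or non-image file
        continue
    face_info = face_detector.detectMultiScale(person_image, 1.3, 5)
    if len(face_info) == 0:                       # no face found: skip instead of reusing a stale variable
        continue
    for (x, y, w, h) in face_info:
        face = person_image[y:y + h, x:x + w]
        explorer = cv2.resize(face, (128, 128), interpolation=cv2.INTER_CUBIC)
        cv2.imwrite('./folder_to_save_the_extracted_faces/face_' + image_name, explorer)
</code></pre>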
python|opencv
0
1,909,404
56,442,405
Confused with python import (absolute and relative)
<p>I created a project and helper modules for it. But some of the modules use each other: worker1 uses helper1, and helper2 also uses helper1. So I am completely confused about how I need to import all those modules so that each can work standalone (for example, I want to debug helper2 outside the main script) and they still stay functional. Summarizing: how do I correctly import modules so that main_script works, and the other modules also work when used outside of main_script? Sorry for my English.</p> <blockquote> <pre><code>main program dir/
    main_script.py
    -classes/
    |
    |--helper1.py
    |--helper2.py
    -worker_classes/
    |
    |--worker1.py
</code></pre> </blockquote> <p>At the moment I am using this construction at the beginning of each script, but I feel that this approach isn't appropriate for Python:</p> <pre><code>import os
import sys
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), 'shell_modules')))
</code></pre>
<p>The way I deal with imports inside a project is to install the project in editable mode. This way, all files will be able to locate each other, always starting from your project root directory.</p> <p>In order to do this, follow these steps:</p> <p>1) write a setup.py file and add it to your project root folder - it doesn't need much info at all:</p> <pre><code># setup.py from setuptools import setup, find_packages setup(name='MyPackageName', version='1.0.0', packages=find_packages()) </code></pre> <p>2) install your package in editable mode (ideally from a virtual environment). From a terminal in your project folder, write</p> <pre><code>$ pip install -e . </code></pre> <p>Note the dot - this means "install the package from the current directory in editable mode".</p> <p>3) your files are now able to locate each other, always starting from the project root. To import <code>helper1.py</code>, for example, you write:</p> <pre><code>from classes import helper1 </code></pre> <p>or alternatively:</p> <pre><code>from classes.helper1 import foo, bar </code></pre> <p>This will be true to import <code>helper1.py</code> for any file, no matter where it is located in the project structure.</p> <p>Like I said, you should use a virtual environment for this, so that pip does not install your package to your main Python installation (which could be messy if your project has many dependencies).</p> <p>Currently my favorite tool for this is <a href="https://pipenv.kennethreitz.org/en/latest/" rel="nofollow noreferrer">pipenv</a>. When using it, replace the terminal command with </p> <pre><code>$ pipenv install -e . </code></pre> <p>So that your project gets added to the Pipfile.</p>
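<p>One detail worth spelling out (an assumption about the layout shown in the question): for <code>find_packages()</code> to pick up <code>classes</code> and <code>worker_classes</code>, each of those folders needs an <code>__init__.py</code> file. With that in place, the cross-module imports look the same no matter which script is run directly, for example:</p> <pre><code># worker_classes/worker1.py -- runnable standalone once 'pip install -e .' has been done
from classes import helper1
from classes.helper2 import some_helper_function   # 'some_helper_function' is a made-up name

# classes/helper2.py
from classes import helper1
</code></pre>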
python|import
1
1,909,405
17,727,096
Python Turtle draw centered square
<p>I need to draw a square given a center point using the turtle module.</p> <pre><code>def drawCentSq(t,center,side): xPt=center[0] yPt=center[1] xPt-=int(side/side) yPt+=int(side/side) t.up() t.goto(xPt,yPt) t.down() for i in range(4): t.forward(side) t.right(90) </code></pre> <p>def main():</p> <pre><code>import turtle mad=turtle.Turtle() wn=mad.getscreen() print(drawCentSq(mad,(0,0),50)) main() </code></pre> <p>I'm having a hard time making my turtle go to the right starting point. </p>
<blockquote> <p>I need to draw a square given a center point using the turtle module.</p> </blockquote> <p>As @seth notes, you can do this by fixing the center calculation in your code:</p> <pre><code>from turtle import Turtle, Screen def drawCentSq(turtle, center, side): """ A square is a series of perpendicular sides """ xPt, yPt = center xPt -= side / 2 yPt += side / 2 turtle.up() turtle.goto(xPt, yPt) turtle.down() for _ in range(4): turtle.forward(side) turtle.right(90) yertle = Turtle() drawCentSq(yertle, (0, 0), 50) screen = Screen() screen.exitonclick() </code></pre> <p>But let's step back and consider how else we can draw a square at a given point of a given size. Here's a completely different solution:</p> <pre><code>def drawCentSq(turtle, center, side): """ A square is a circle drawn at a rough approximation """ xPt, yPt = center xPt -= side / 2 yPt -= side / 2 turtle.up() turtle.goto(xPt, yPt) turtle.right(45) turtle.down() turtle.circle(2**0.5 * side / 2, steps=4) turtle.left(45) # return cursor to original orientation </code></pre> <p>And here's yet another:</p> <pre><code>STAMP_UNIT = 20 def drawCentSq(turtle, center, side): """ A square can be stamped directly from a square cursor """ mock = turtle.clone() # clone turtle to avoid cleaning up changes mock.hideturtle() mock.shape("square") mock.fillcolor("white") mock.shapesize(side / STAMP_UNIT) mock.up() mock.goto(center) return mock.stamp() </code></pre> <p>Note that this solution returns a stamp ID that you can pass to <code>yertle</code>'s <code>clearstamp()</code> method to remove the square from the screen if/when you wish.</p>
python|python-3.x|turtle-graphics
1
1,909,406
61,109,667
Python Function to Compute a Beta Matrix
<p>I'm looking for an efficient function to automatically produce betas for every possible multiple regression model given a dependent variable and set of predictors as a DataFrame in python. </p> <p>For example, given this set of data:</p> <p><a href="https://i.stack.imgur.com/YuPuv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YuPuv.jpg" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/YuPuv.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/YuPuv.jpg</a><br> The dependent variable is 'Cases per Capita' and the columns following are the predictor variables. </p> <p>In a simpler example:</p> <pre><code> Student Grade Hours Slept Hours Studied ... --------- -------- ------------- --------------- ----- A 90 9 1 ... B 85 7 2 ... C 100 4 5 ... ... ... ... ... ... </code></pre> <p>where the beta matrix output would look as such: </p> <pre><code> Regression Hours Slept Hours Studied ------------ ------------- --------------- 1 # N/A 2 N/A # 3 # # </code></pre> <p>The table size would be <code>[2^n - 1]</code> where <strong><code>n</code></strong> is the number of variables, so in the case with 5 predictors and 1 dependent, there would be 31 regressions, each with a different possible combination of <em><code>beta</code></em> calculations.</p> <p>The process is described in greater detail <a href="https://thewinnower.com/papers/322-automatic-testing-of-all-possible-multiple-regression-models-given-a-set-of-predictors" rel="nofollow noreferrer">here</a> and an actual solution that is written in R is posted <a href="https://osf.io/stm4y/" rel="nofollow noreferrer">here</a>.</p>
<p>I am not aware of any package that already does this. But you can create all those combinations (2^n-1), where n is the number of columns in X (independent variables), and fit a linear regression model for each combination and then get coefficients/betas for each model. </p> <p>Here is how I would do it, hope this helps</p> <pre><code>from sklearn import datasets, linear_model import numpy as np from itertools import combinations #test dataset X, y = datasets.load_boston(return_X_y=True) X = X[:,:3] # Orginal X has 13 columns, only taking n=3 instead of 13 columns #create all 2^n-1 (here 7 because n=3) combinations of columns, where n is the number of features/indepdent variables all_combs = [] for i in range(X.shape[1]): all_combs.extend(combinations(range(X.shape[1]),i+1)) # print 2^n-1 combinations print('2^n-1 combinations are:') print(all_combs) ## Create a betas/coefficients as zero matrix with rows (2^n-1) and columns equal to X betas = np.zeros([len(all_combs), X.shape[1]])+np.NaN ## Fit a model for each combination of columns and add the coefficients into betas matrix lr = linear_model.LinearRegression() for regression_no, comb in enumerate(all_combs): lr.fit(X[:,comb], y) betas[regression_no, comb] = lr.coef_ ## Print Coefficients of each model print('Regression No'.center(15)+" ".join(['column {}'.format(i).center(10) for i in range(X.shape[1])])) print('_'*50) for index, beta in enumerate(betas): print('{}'.format(index + 1).center(15), " ".join(['{:.4f}'.format(beta[i]).center(10) for i in range(X.shape[1])])) </code></pre> <p>results in </p> <pre><code>2^n-1 combinations are: [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)] Regression No column 0 column 1 column 2 __________________________________________________ 1 -0.4152 nan nan 2 nan 0.1421 nan 3 nan nan -0.6485 4 -0.3521 0.1161 nan 5 -0.2455 nan -0.5234 6 nan 0.0564 -0.5462 7 -0.2486 0.0585 -0.4156 </code></pre>
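<p>As a small optional extension of the snippet above (assuming the <code>betas</code>, <code>all_combs</code> and <code>X</code> variables from that code), the beta matrix can be wrapped in a labelled pandas DataFrame so it reads like the table in the question:</p> <pre><code>import pandas as pd

betas_df = pd.DataFrame(
    betas,
    columns=['column {}'.format(i) for i in range(X.shape[1])],
    index=['regression {}'.format(i + 1) for i in range(len(all_combs))],
)
print(betas_df)   # NaN marks predictors that were left out of that regression
</code></pre>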
python|matrix|regression|beta
1
1,909,407
65,945,598
Programming a Discord bot in Python- How do I make the bot run faster?
<p>I noticed that whenever a command is triggered, the bot usually takes a couple of seconds to respond. Is there any way I can increase the overall speed of the bot? I'm new to programming, so any insight would be greatly appreciated. Here is my code if it helps:</p> <pre><code>import discord import os import random import praw from keep_alive import keep_alive from discord.ext import commands from discord.ext.commands import Bot import time client = commands.Bot(command_prefix='.') sec_triggers = ['just a sec', 'Just a sec', 'just a second', 'Just a second', 'one sec', 'one second', 'One sec', 'One second'] monke_triggers = ['monke', 'Monke', 'Monkey', 'monkey'] hello_triggers = ['hello there', 'Hello there', 'hello There', 'Hello There'] f_triggers = ['f in the chat', 'F in the chat', 'f in the Chat', 'F in the Chat'] colors = [0xff0000, 0xff3300, 0xff6600, 0xff9900, 0xffcc00, 0xffff00, 0xccff00, 0x99ff00, 0x66ff00, 0x33ff00, 0x00ff00, 0x00ff33, 0x00ff66, 0x00ff99, 0x00ffcc, 0x00ffff, 0x00ccff, 0x0099ff, 0x0066ff, 0x0033ff, 0x0000ff, 0x3300ff, 0x6600ff, 0x9900ff, 0xcc00ff, 0xff00ff, 0xff00cc, 0xff0099, 0xff0066, 0xff0033] client.remove_command('help') @client.event async def on_ready(): print('We have logged in as {0.user}'.format(client)) await client.change_presence(activity=discord.Activity(type=discord.ActivityType.watching, name=&quot;you, Wazowski. Always Watching. Always.&quot;)) @client.event async def on_message(message): if message.author == client.user: return msg = message.content if any(word in msg for word in monke_triggers): await message.channel.send(file=discord.File('Reject Humanity, Return to Monke.jpg')) if any(word in msg for word in sec_triggers): time. sleep(1) await message.channel.send(&quot;It's been one second&quot;) if any(word in msg for word in hello_triggers): await message.channel.send(file=discord.File('General_Kenobi.gif')) if any(word in msg for word in f_triggers): mention = message.author.name await message.channel.send(f&quot;{mention} had paid their respects.&quot;) if message.content.lower() == 'f' or message.content.lower() == 'F': mention = message.author.name await message.channel.send(f&quot;{mention} had paid their respects.&quot;) await client.process_commands(message) @client.command() async def catjam(ctx, *, text): message = f&quot;{text}&quot; new_message = &quot;&quot; for char in message: new_message += f&quot;&lt;a:catjam:800476635655962644&gt;{char}&quot; new_message += &quot;&lt;a:catjam:800476635655962644&gt;&quot; await ctx.send(new_message) await ctx.message.delete() @client.command() async def echo(ctx, *, text): await ctx.send(f&quot;{text}&quot;) await ctx.message.delete() @client.command() @commands.has_permissions(kick_members=True) async def kick(ctx, member: discord.Member, *, text): reason = f&quot;{text}&quot; mention = ctx.message.author.name pfp = member.avatar_url em = discord.Embed(title = f&quot;{member} has been kicked.&quot;, color = random.choice(colors)) em.add_field(name=&quot;Reason:&quot;, value=reason) em.add_field(name=&quot;Responsible User:&quot;, value=mention) em.set_thumbnail(url=(pfp)) await member.kick(reason=reason) await ctx.send(embed=em) @client.command() @commands.has_permissions(ban_members=True) async def ban(ctx, member: discord.Member, *, text): reason = f&quot;{text}&quot; mention = ctx.message.author.name pfp = member.avatar_url em = discord.Embed(title = f&quot;{member} has been banned.&quot;, description= f&quot;__Reason:__ {reason} __Responsible moderator:__ {mention}&quot;, color = 
random.choice(colors)) em.set_thumbnail(url=(pfp)) await member.ban(reason=reason) await ctx.send(embed=em) @client.command() async def comic(ctx): subreddit = reddit.subreddit(&quot;comic&quot;) all_subs = [] hot = subreddit.hot(limit = 100) for submission in hot: all_subs.append(submission) random_sub = random.choice(all_subs) name = random_sub.title url = random_sub.url em = discord.Embed(title = name, color = random.choice(colors)) em.set_image(url = url) await ctx.send(embed = em) @client.command() async def joke(ctx): subreddit = reddit.subreddit(&quot;cleanjokes&quot;) all_subs = [] hot = subreddit.hot(limit = 100) for submission in hot: all_subs.append(submission) random_sub = random.choice(all_subs) name = random_sub.title url = random_sub.url text = random_sub.selftext em = discord.Embed(title = name, color = random.choice(colors), description = text) await ctx.send(embed = em) @client.command() async def meme(ctx): subreddit = reddit.subreddit(&quot;cleanmemes&quot;) all_subs = [] hot = subreddit.hot(limit = 100) for submission in hot: all_subs.append(submission) random_sub = random.choice(all_subs) name = random_sub.title url = random_sub.url em = discord.Embed(title = name, color = random.choice(colors)) em.set_image(url = url) await ctx.send(embed = em) @client.command() async def cat(ctx): subreddit = reddit.subreddit(&quot;catpictures&quot;) all_subs = [] hot = subreddit.hot(limit = 100) for submission in hot: all_subs.append(submission) random_sub = random.choice(all_subs) name = random_sub.title url = random_sub.url em = discord.Embed(title = name, color = random.choice(colors)) em.set_image(url = url) await ctx.send(embed = em) @client.command() async def dog(ctx): subreddit = reddit.subreddit(&quot;dogpictures&quot;) all_subs = [] hot = subreddit.hot(limit = 100) for submission in hot: all_subs.append(submission) random_sub = random.choice(all_subs) name = random_sub.title url = random_sub.url em = discord.Embed(title = name, color = random.choice(colors)) em.set_image(url = url) await ctx.send(embed = em) @client.command() async def help(ctx): em = discord.Embed(color = random.choice(colors)) em.add_field(name='General Commands', value='__help__- Displays this message', inline=True) await ctx.send(embed = em) @client.command() async def server(ctx): server = ctx.message.guild roles = str(len(server.roles)) emojis = str(len(server.emojis)) channels = str(len(server.channels)) embeded = discord.Embed(title=server.name, description='Server Info', color=random.choice(colors)) embeded.set_thumbnail(url=server.icon_url) embeded.add_field(name=&quot;Created on:&quot;, value=server.created_at.strftime('%d %B %Y at %H:%M UTC+3'), inline=False) embeded.add_field(name=&quot;Server ID:&quot;, value=server.id, inline=False) embeded.add_field(name=&quot;Users on server:&quot;, value=server.member_count, inline=True) embeded.add_field(name=&quot;Server owner:&quot;, value=server.owner, inline=True) embeded.add_field(name=&quot;Server Region:&quot;, value=server.region, inline=True) embeded.add_field(name=&quot;Verification Level:&quot;, value=server.verification_level, inline=True) embeded.add_field(name=&quot;Role Count:&quot;, value=roles, inline=True) embeded.add_field(name=&quot;Emoji Count:&quot;, value=emojis, inline=True) embeded.add_field(name=&quot;Channel Count:&quot;, value=channels, inline=True) await ctx.send(embed=embeded) keep_alive() client.run(os.getenv('TOKEN')) </code></pre> <p>(I am adding this part here because it won't let me revise the question otherwise. 
It tells me &quot;Looks like your post is mostly code, please add some more details&quot;. So that is what I am doing, you don't need to read this part.)</p>
<p>This heavily depends on multiple factors.</p> <ol> <li>Your code - I cannot help you without seeing your code, but in any case a few seconds is a lot, and better code won't really change that on a small scale.</li> <li>Your hardware - this plays a small part in your bot's performance, but it should be negligible.</li> <li>(I think) your internet connection, or maybe Discord's API is having trouble right now.</li> </ol> <p>Of course, there are more factors to it, but it is most likely your internet connection.</p>
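<p>Independently of the factors above, there is one concrete thing in the question's code that will make the bot feel slow: <code>time.sleep(1)</code> inside the <code>async</code> <code>on_message</code> handler blocks the whole event loop, so every other command and message waits too. A sketch of the non-blocking version, reusing the question's trigger list:</p> <pre><code>import asyncio

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if any(word in message.content for word in sec_triggers):
        await asyncio.sleep(1)                     # yields control instead of freezing the whole bot
        await message.channel.send(&quot;It's been one second&quot;)
    await client.process_commands(message)
</code></pre>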
python|discord.py
2
1,909,408
66,290,129
How do I merge 2 on_message functions in discord.py
<p>I need to merge these 3 on_message functions into 1 function in discord.py rewrite version 1.6.0, what can I do?</p> <pre><code>import discord client = discord.Client() @client.event async def on_ready(): print('in on_ready') @client.event async def on_message(hi): print(&quot;hello&quot;) @client.event async def on_message(Hey there): print(&quot;General Kenobi&quot;) @client.event async def on_message(Hello): print(&quot;Hi&quot;) client.run(&quot;TOKEN&quot;) </code></pre>
<p>The <code>on_message</code> function is defined as:</p> <pre class="lang-py prettyprint-override"><code>async def on_message(message)
</code></pre> <p>So you need to check the message contents in the <code>on_message</code> handler and do whatever you need depending on its contents. For that you can use the <code>Message</code> object's <code>content</code> attribute, as documented <a href="https://discordpy.readthedocs.io/en/latest/api.html?highlight=message#discord.Message.content" rel="nofollow noreferrer">here</a>:</p> <pre class="lang-py prettyprint-override"><code>@client.event
async def on_message(message):
    if message.content == &quot;hi&quot;:
        print(&quot;hello&quot;)
    elif message.content == &quot;Hey there&quot;:
        print(&quot;General Kenobi&quot;)
    elif message.content == &quot;Hello&quot;:
        print(&quot;Hi&quot;)
</code></pre> <p>Do note though that if you want the bot to actually send a message, you need to replace <code>print(...)</code> with <code>await message.channel.send(...)</code>, as that is how you send a message; <code>print</code> only outputs to the terminal.</p>
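<p>One caveat worth adding: if the bot also registers commands (i.e. <code>client</code> is a <code>commands.Bot</code> rather than the plain <code>discord.Client</code> shown in the question, which is an assumption here), an overridden <code>on_message</code> swallows them unless the message is handed back to the command processor at the end:</p> <pre class="lang-py prettyprint-override"><code>@client.event
async def on_message(message):
    if message.content == 'hi':
        print('hello')
    elif message.content == 'Hey there':
        print('General Kenobi')
    elif message.content == 'Hello':
        print('Hi')
    await client.process_commands(message)   # only exists on commands.Bot, not on discord.Client
</code></pre>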
python|discord|discord.py
1
1,909,409
69,269,601
Django dynamic URL with data from DB
<p><strong><strong>Models.py</strong></strong></p> <pre><code>from django.db import models # Create your models here. class reviewData(models.Model): building_name = models.CharField(max_length=50) review_content = models.TextField() star_num = models.FloatField() class buildingData(models.Model): building_name = models.CharField(max_length=50) building_loc = models.CharField(max_length=50) building_call = models.CharField(max_length=20) </code></pre> <p><strong><strong>views.py</strong></strong></p> <pre><code># Create your views here. from django.shortcuts import render from rest_framework.response import Response from .models import reviewData from .models import buildingData from rest_framework.views import APIView from .serializers import ReviewSerializer class BuildingInfoAPI(APIView): def get(request): queryset = buildingData.objects.all() serializer = ReviewSerializer(queryset, many=True) return Response(serializer.data) class ReviewListAPI(APIView): def get(request): queryset = reviewData.objects.all() serializer = ReviewSerializer(queryset, many=True) return Response(serializer.data) </code></pre> <p><strong><strong>urls.py</strong></strong></p> <pre><code>from django.contrib import admin from django.urls import path from crawling_data.views import ReviewListAPI from crawling_data.views import BuildingInfoAPI urlpatterns = [ path('admin/', admin.site.urls), path('api/buildingdata/', BuildingInfoAPI.as_view()), #path('api/buildingdata/(I want to put building name here)', ReviewListAPI.as_view()) ] </code></pre> <p>I am making review api.</p> <p>I want to use building name as url path to bring reviews for specific buildings</p> <p>For example, there are a, b, c reviews</p> <p>a, b reviews are for aaabuilding</p> <p>c reviews are for xxxbuilding</p> <p><em>api/buildingdata/aaabuilding (only shows aaabuilding review)</em></p> <pre><code>{ building_name = aaabuilding review_content = a star_num = 5 building_name = aaabuilding review_content = b star_num = 3 } </code></pre> <p><em>api/buildingdata/xxxbuilding (only shows xxxbuilding review)</em></p> <pre><code>{ building_name = xxxbuilding review_content = c star_num = 4 } </code></pre> <p>I've searched some dynamic URL posts, but they were not that I want.</p> <p>Also, I've posted a question before but there was no answer I was looking for.</p> <p>Is there any way to bring building name into URL from db?</p>
<p>Please refer to the documentation on <a href="https://docs.djangoproject.com/en/3.2/topics/http/urls/#path-converters" rel="nofollow noreferrer">path converters</a> and the usage of Django's <a href="https://docs.djangoproject.com/en/3.2/ref/utils/#django.utils.text.slugify" rel="nofollow noreferrer">slugify function</a>.</p> <p>In your situation you will want a slug - but there are limitations to using a slug:</p> <ul> <li>slugs must translate to some unique string, in your case the building name. Therefore you should make sure that building name and slug are unique in your model.</li> </ul> <p>You should add a slug field to the model - and also change the review model so it foreign keys to the building model:</p> <pre><code>from django.utils.text import slugify

class buildingData(models.Model):
    building_name = models.CharField(max_length=50, unique=True)
    slug = models.SlugField(unique=True)
    building_loc = models.CharField(max_length=50)
    building_call = models.CharField(max_length=20)

    def save(self, *args, **kwargs):
        self.slug = slugify(self.building_name)
        return super().save(*args, **kwargs)

class reviewData(models.Model):
    building = models.ForeignKey(buildingData, related_name='reviews', on_delete=models.CASCADE, null=False, blank=False)
    review_content = models.TextField()
    star_num = models.FloatField()
</code></pre> <p>Then in your <code>urls.py</code>:</p> <pre><code>path('api/buildingdata/&lt;slug:slug&gt;/', ReviewListAPI.as_view())
</code></pre> <p>Then in your <code>views.py</code>, where the slug captured in the URL is passed through to the view:</p> <pre><code>class ReviewListAPI(APIView):
    def get(self, request, slug):
        building = get_object_or_404(buildingData, slug=slug)
        queryset = building.reviews.all()
        serializer = ReviewSerializer(queryset, many=True)
        return Response(serializer.data)
</code></pre> <p>Also review PEP 8 - your class names should really be BuildingData and ReviewData - and you probably don't need <code>Data</code> in the names either.</p>
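<p>The code above refers to <code>ReviewSerializer</code>, which the question never shows. A minimal sketch of what it could look like, using only fields that appear in the models above:</p> <pre><code>from rest_framework import serializers

class ReviewSerializer(serializers.ModelSerializer):
    # expose the related building's name alongside each review
    building_name = serializers.CharField(source='building.building_name', read_only=True)

    class Meta:
        model = reviewData
        fields = ['building_name', 'review_content', 'star_num']
</code></pre>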
python|django|url|django-rest-framework|url-rewriting
0
1,909,410
69,155,496
How do I fix CSS not loading?
<p>When I run my web app, the HTML loads but the style.css doesn't. What should I do?</p> <p>I tried moving style.css to another folder, but that didn't work. Please give me some answers.</p> <p>I also use Flask for my web app.</p>
<pre><code> &lt;!-- Favicon--&gt; &lt;link rel=&quot;icon&quot; type=&quot;image/x-icon&quot; href=&quot;assets/favicon.ico&quot; /&gt; &lt;!-- Bootstrap icons--&gt; &lt;link href=&quot;https://cdn.jsdelivr.net/npm/bootstrap-icons@1.4.1/font/bootstrap-icons.css&quot; rel=&quot;stylesheet&quot; /&gt; &lt;!-- Core theme CSS (includes Bootstrap)--&gt; &lt;link href=&quot;css/style.css&quot; rel=&quot;stylesheet&quot; type=&quot;text/css&quot;/&gt; &lt;link rel=&quot;stylesheet&quot; media=&quot;all&quot; href=&quot;{{ url_for('static', filename='css_image/asdf.css')}}&quot;&gt; &lt;/head&gt; &lt;body&gt; &lt;!-- Responsive navbar--&gt; &lt;nav class=&quot;navbar navbar-expand-lg navbar-dark bg-dark&quot;&gt; &lt;div class=&quot;container px-lg-5&quot;&gt; &lt;a class=&quot;navbar-brand&quot; href=&quot;#&quot;&gt;Griffincode&lt;/a&gt; &lt;button class=&quot;navbar-toggler&quot; type=&quot;button&quot; data-bs-toggle=&quot;collapse&quot; data-bs-target=&quot;#navbarSupportedContent&quot; aria-controls=&quot;navbarSupportedContent&quot; aria-expanded=&quot;false&quot; aria-label=&quot;Toggle navigation&quot;&gt;&lt;span class=&quot;navbar-toggler-icon&quot;&gt;&lt;/span&gt;&lt;/button&gt; &lt;div class=&quot;collapse navbar-collapse&quot; id=&quot;navbarSupportedContent&quot;&gt; &lt;ul class=&quot;navbar-nav ms-auto mb-2 mb-lg-0&quot;&gt; &lt;li class=&quot;nav-item&quot;&gt;&lt;a class=&quot;nav-link active&quot; aria-current=&quot;page&quot; href=&quot;/&quot;&gt;Home&lt;/a&gt;&lt;/li&gt; &lt;li class=&quot;nav-item&quot;&gt;&lt;a class=&quot;nav-link&quot; href=&quot;#!&quot;&gt;About&lt;/a&gt;&lt;/li&gt; &lt;li class=&quot;nav-item&quot;&gt;&lt;a class=&quot;nav-link&quot; href=&quot;#!&quot;&gt;Contact&lt;/a&gt;&lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; &lt;/div&gt; &lt;/nav&gt; &lt;!-- Header--&gt; &lt;header class=&quot;py-5&quot;&gt; &lt;div class=&quot;container px-lg-5&quot;&gt; &lt;div class=&quot;p-4 p-lg-5 bg-light rounded-3 text-center&quot;&gt; &lt;div class=&quot;m-4 m-lg-5&quot;&gt; &lt;h1 class=&quot;display-5 fw-bold&quot;&gt;A warm welcome!&lt;/h1&gt; &lt;p class=&quot;fs-4&quot;&gt;Bootstrap utility classes are used to create this jumbotron since the old component has been removed from the framework. Why create custom CSS when you can use utilities?&lt;/p&gt; &lt;a class=&quot;btn btn-primary btn-lg&quot; href=&quot;#!&quot;&gt;Call to action&lt;/a&gt; &lt;/div&gt; </code></pre>
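<p>The snippet above mixes two ways of linking the stylesheet. With Flask, a plain relative href like <code>css/style.css</code> usually 404s; the stylesheet has to live in the app's <code>static</code> folder and be linked through <code>url_for</code>, as the last link tag already does. A minimal sketch, assuming the file is moved to <code>static/css/style.css</code> next to the Flask app:</p> <pre><code>&lt;!-- assumes the layout: app.py, static/css/style.css, templates/index.html --&gt;
&lt;link rel=&quot;stylesheet&quot; type=&quot;text/css&quot;
      href=&quot;{{ url_for('static', filename='css/style.css') }}&quot;&gt;
</code></pre>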
python|html|css
0
1,909,411
59,141,956
What is the argument `serving_input_fn` of method `export_savedmodel`?
<p>I am trying to train a Char RNN and <strong>export/save the model</strong> after training, so that I can use it at inference. Here's the model:</p> <pre><code>def char_rnn_model(features, target): """Character level recurrent neural network model to predict classes.""" target = tf.one_hot(target, 15, 1, 0) #byte_list = tf.one_hot(features, 256, 1, 0) byte_list = tf.cast(tf.one_hot(features, 256, 1, 0), dtype=tf.float32) byte_list = tf.unstack(byte_list, axis=1) cell = tf.contrib.rnn.GRUCell(HIDDEN_SIZE) _, encoding = tf.contrib.rnn.static_rnn(cell, byte_list, dtype=tf.float32) logits = tf.contrib.layers.fully_connected(encoding, 15, activation_fn=None) #loss = tf.contrib.losses.softmax_cross_entropy(logits, target) loss = tf.contrib.losses.softmax_cross_entropy(logits=logits, onehot_labels=target) train_op = tf.contrib.layers.optimize_loss( loss, tf.contrib.framework.get_global_step(), optimizer='Adam', learning_rate=0.001) return ({ 'class': tf.argmax(logits, 1), 'prob': tf.nn.softmax(logits) }, loss, train_op) </code></pre> <p>and the training part: </p> <pre><code># train model_dir = "model" classifier = learn.Estimator(model_fn=char_rnn_model,model_dir=model_dir) count=0 n_epoch = 20 while count&lt;n_epoch: print("\nEPOCH " + str(count)) classifier.fit(x_train, y_train, steps=1000,batch_size=10) y_predicted = [ p['class'] for p in classifier.predict( x_test, as_iterable=True,batch_size=10) ] score = metrics.accuracy_score(y_test, y_predicted) print('Accuracy: {0:f}'.format(score)) count+=1 </code></pre> <p>(<code>x_train</code> is a uint8 array of shape (16639, 100))</p> <p><a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/contrib/learn/Estimator?authuser=0&amp;hl=ro" rel="nofollow noreferrer">Tensorflow</a> documentation tells about the method <code>export_savedmodel</code> that seems to do what I want. But I don't understand the second argument <code>serving_input_fn</code>. What should it be ? <code>classifier.export_savedmodel(output_dir, ???)</code></p> <p>I am using Tensorflow 1.8.0 and python 2.7.14.</p> <p>This is in relation with <a href="https://stackoverflow.com/questions/59140429/inference-with-tensorflow-checkpoints">this thread</a>.</p> <p><strong>============ EDIT ============</strong></p> <p>I tried both solutions suggested <a href="https://stackoverflow.com/questions/51330841/how-to-save-and-restore-a-tf-estimator-estimator-model-with-export-savedmodel">in this thread</a>:</p> <ol> <li><p>Redefine an estimator with argument <code>model_dir</code> having the same value than the one used during training with hope that the model will be automatically restored with the optimized weights : </p> <p><code>with tf.Session() as sess: new_saver = tf.train.import_meta_graph(meta_file) new_saver.restore(sess, tf.train.latest_checkpoint(model_dir)) classifier = learn.Estimator(model_fn=char_rnn_model,model_dir=model_dir) new_input = ['Some sequence of characters'] char_processor = learn.preprocessing.ByteProcessor(100) new_input_processed = np.array(list(char_processor.transform(new_input))) print('new_input_processed: ', new_input_processed) p = classifier.predict(new_input_processed, as_iterable=True) predicted_class = p['class'] print('predicted_class: ', predicted_class)</code></p></li> </ol> <p>but I get the following error </p> <pre><code>INFO:tensorflow:Restoring parameters from section_header_rnn/section_header_text_only_char_rnn_max_depth=3/model.ckpt-512000 INFO:tensorflow:Using default config. 
INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': None, '_train_distribute': None, '_is_chief': True, '_cluster_spec': &lt;tensorflow.python.training.server_lib.ClusterSpec object at 0x7f6794bc3050&gt;, '_model_dir': 'section_header_rnn/section_header_text_only_char_rnn_max_depth=3', '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_session_config': None, '_tf_random_seed': None, '_save_summary_steps': 100, '_environment': 'local', '_num_worker_replicas': 0, '_task_id': 0, '_log_step_count_steps': 100, '_tf_config': gpu_options { per_process_gpu_memory_fraction: 1.0 } , '_evaluation_master': '', '_master': ''} ('new_input_processed: ', array([[ 83, 111, 109, 101, 32, 115, 101, 113, 117, 101, 110, 99, 101, 32, 111, 102, 32, 99, 104, 97, 114, 97, 99, 116, 101, 114, 115, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=uint8)) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-12-e069212a21a0&gt; in &lt;module&gt;() 11 new_input_processed = np.array(list(char_processor.transform(new_input))) 12 print('new_input_processed: ', new_input_processed) ---&gt; 13 p = classifier.predict(new_input_processed, as_iterable=True) 14 predicted_class = p['class'] 15 print('predicted_class: ', predicted_class) /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/deprecation.pyc in new_func(*args, **kwargs) 430 'in a future version' if date is None else ('after %s' % date), 431 instructions) --&gt; 432 return func(*args, **kwargs) 433 return tf_decorator.make_decorator(func, new_func, 'deprecated', 434 _add_deprecated_arg_notice_to_docstring( /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.pyc in predict(self, x, input_fn, batch_size, outputs, as_iterable, iterate_batches) 668 outputs=outputs, 669 as_iterable=as_iterable, --&gt; 670 iterate_batches=iterate_batches) 671 672 def get_variable_value(self, name): /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.pyc in _infer_model(self, input_fn, feed_fn, outputs, as_iterable, iterate_batches) 966 training_util.create_global_step(g) 967 features = self._get_features_from_input_fn(input_fn) --&gt; 968 infer_ops = self._get_predict_ops(features) 969 predictions = self._filter_predictions(infer_ops.predictions, outputs) 970 mon_sess = monitored_session.MonitoredSession( /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.pyc in _get_predict_ops(self, features) 1312 labels = tensor_signature.create_placeholders_from_signatures( 1313 self._labels_info) -&gt; 1314 return self._call_model_fn(features, labels, model_fn_lib.ModeKeys.INFER) 1315 1316 def export_savedmodel(self, /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.pyc in _call_model_fn(self, features, labels, mode, metrics, config) 1225 if 'model_dir' in model_fn_args: 1226 kwargs['model_dir'] = self.model_dir -&gt; 1227 model_fn_results = self._model_fn(features, labels, **kwargs) 1228 1229 if isinstance(model_fn_results, model_fn_lib.ModelFnOps): &lt;ipython-input-10-55c752822baa&gt; in 
char_rnn_model(features, target) 1 def char_rnn_model(features, target): 2 """Character level recurrent neural network model to predict classes.""" ----&gt; 3 target = tf.one_hot(target, 15, 1, 0) 4 #byte_list = tf.one_hot(features, 256, 1, 0) 5 byte_list = tf.cast(tf.one_hot(features, 256, 1, 0), dtype=tf.float32) /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.pyc in one_hot(indices, depth, on_value, off_value, axis, dtype, name) 2500 2501 return gen_array_ops.one_hot(indices, depth, on_value, off_value, axis, -&gt; 2502 name) 2503 2504 /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.pyc in one_hot(indices, depth, on_value, off_value, axis, name) 4364 _, _, _op = _op_def_lib._apply_op_helper( 4365 "OneHot", indices=indices, depth=depth, on_value=on_value, -&gt; 4366 off_value=off_value, axis=axis, name=name) 4367 _result = _op.outputs[:] 4368 _inputs_flat = _op.inputs /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.pyc in _apply_op_helper(self, op_type_name, name, **keywords) 526 raise ValueError( 527 "Tried to convert '%s' to a tensor and failed. Error: %s" % --&gt; 528 (input_name, err)) 529 prefix = ("Input '%s' of '%s' Op has type %s that does not match" % 530 (input_name, op_type_name, observed)) ValueError: Tried to convert 'indices' to a tensor and failed. Error: None values not supported. </code></pre> <ol start="2"> <li><p>The second solution is to use the function <code>serving_input_fn</code> defined in the answer to <a href="https://stackoverflow.com/questions/51330841/how-to-save-and-restore-a-tf-estimator-estimator-model-with-export-savedmodel">the thread</a>:</p> <p><code>def serving_input_fn():</code></p> <pre><code>inputs = {'features': tf.placeholder(tf.uint8)} return tf.estimator.export.ServingInputReceiver(inputs, inputs) </code></pre> <p><code>def train_test(x_train, y_train, x_test, y_test, max_depth):</code></p> <pre><code># Build model model_dir = "model_dir" classifier = learn.Estimator(model_fn=char_rnn_model,model_dir=model_dir) count=0 while count&lt;1 #n_epoch: print("\nEPOCH " + str(count)) classifier.fit(x_train, y_train, steps=1000,batch_size=10) y_predicted = [ p['class'] for p in classifier.predict( x_test, as_iterable=True,batch_size=10) ] score = metrics.accuracy_score(y_test, y_predicted) print('Accuracy: {0:f}'.format(score)) count+=1 print("\n More details:") predicted = [ p['class'] for p in classifier.predict( x_test, as_iterable=True,batch_size=10) ] print(metrics.classification_report(y_test, predicted)) export_dir = model_dir + "_EXPORT" classifier.export_savedmodel(export_dir, serving_input_fn) return f1_score </code></pre></li> </ol> <p>But again I obtain an error:</p> <pre><code>++++++++++ RUN 0 ++++++++++ INFO:tensorflow:Using default config. 
INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_task_type': None, '_train_distribute': None, '_is_chief': True, '_cluster_spec': &lt;tensorflow.python.training.server_lib.ClusterSpec object at 0x7f7af77a40d0&gt;, '_model_dir': 'section_header_text_only_char_rnn_max_depth=3', '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_session_config': None, '_tf_random_seed': None, '_save_summary_steps': 100, '_environment': 'local', '_num_worker_replicas': 0, '_task_id': 0, '_log_step_count_steps': 100, '_tf_config': gpu_options { per_process_gpu_memory_fraction: 1.0 } , '_evaluation_master': '', '_master': ''} EPOCH 0 INFO:tensorflow:Create CheckpointSaverHook. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from section_header_text_only_char_rnn_max_depth=3/model.ckpt-516000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. INFO:tensorflow:Saving checkpoints for 516001 into section_header_text_only_char_rnn_max_depth=3/model.ckpt. INFO:tensorflow:loss = 4.84674e-05, step = 516001 INFO:tensorflow:global_step/sec: 16.343 INFO:tensorflow:loss = 1.6498e-05, step = 516101 (6.118 sec) INFO:tensorflow:global_step/sec: 50.2856 INFO:tensorflow:loss = 9.97774e-06, step = 516201 (1.989 sec) INFO:tensorflow:global_step/sec: 51.1929 INFO:tensorflow:loss = 7.67226e-05, step = 516301 (1.953 sec) INFO:tensorflow:global_step/sec: 52.1178 INFO:tensorflow:loss = 4.43674e-05, step = 516401 (1.919 sec) INFO:tensorflow:global_step/sec: 51.4657 INFO:tensorflow:loss = 1.10982e-05, step = 516501 (1.943 sec) INFO:tensorflow:global_step/sec: 51.517 INFO:tensorflow:loss = 2.57473e-05, step = 516601 (1.941 sec) INFO:tensorflow:global_step/sec: 50.8899 INFO:tensorflow:loss = 3.01228e-05, step = 516701 (1.965 sec) INFO:tensorflow:global_step/sec: 50.3907 INFO:tensorflow:loss = 3.85045e-06, step = 516801 (1.985 sec) INFO:tensorflow:global_step/sec: 51.3651 INFO:tensorflow:loss = 1.09075e-05, step = 516901 (1.947 sec) INFO:tensorflow:Saving checkpoints for 517000 into section_header_text_only_char_rnn_max_depth=3/model.ckpt. INFO:tensorflow:Loss for final step: 1.77617e-05. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from section_header_text_only_char_rnn_max_depth=3/model.ckpt-517000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. Accuracy: 0.980756 More details: INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from section_header_text_only_char_rnn_max_depth=3/model.ckpt-517000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op. 
precision recall f1-score support 0 0.99 0.99 0.99 6139 1 0.94 0.95 0.94 1292 avg / total 0.98 0.98 0.98 7431 Confusion Matrix [[6066 73] [ 70 1222]] Done --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-90-d7817a1be627&gt; in &lt;module&gt;() 65 print("RUN " + str(i_run)) 66 print("++++++++++") ---&gt; 67 f1 = train_test(x_train, y_train, x_test, y_test, max_depth) 68 f1s.append(f1) 69 print("\n\n") &lt;ipython-input-90-d7817a1be627&gt; in train_test(x_train, y_train, x_test, y_test, max_depth) 53 # export inference graph 54 export_dir = model_dir + "_EXPORT" ---&gt; 55 classifier.export_savedmodel(export_dir, serving_input_fn) 56 57 return f1_score /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.pyc in export_savedmodel(self, export_dir_base, serving_input_fn, default_output_alternative_key, assets_extra, as_text, checkpoint_path, graph_rewrite_specs, strip_default_attrs) 1386 input_ops = serving_input_fn() 1387 input_alternatives, features = ( -&gt; 1388 saved_model_export_utils.get_input_alternatives(input_ops)) 1389 1390 # TODO(b/34388557) This is a stopgap, pending recording model provenance. /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/deprecation.pyc in new_func(*args, **kwargs) 248 'in a future version' if date is None else ('after %s' % date), 249 instructions) --&gt; 250 return func(*args, **kwargs) 251 return tf_decorator.make_decorator( 252 func, new_func, 'deprecated', /home/user/anaconda2/lib/python2.7/site-packages/tensorflow/contrib/learn/python/learn/utils/saved_model_export_utils.pyc in get_input_alternatives(input_ops) 171 input_alternatives[DEFAULT_INPUT_ALTERNATIVE_KEY] = default_inputs 172 else: --&gt; 173 features, unused_labels = input_ops 174 175 if not features: ValueError: too many values to unpack </code></pre>
<p>Try to see if there is a <code>model.save</code> method, as the TensorFlow documentation says. Otherwise you can just save the weights and then reload them into a freshly built model. You can write:</p> <pre><code># Save the weights
model.save_weights('./checkpoints/my_checkpoint')

# Create a new model instance
model = create_model()

# Restore the weights
model.load_weights('./checkpoints/my_checkpoint')
</code></pre> <hr> <h2>EDIT</h2> <p>In order to create a model very quickly you can use the <code>Sequential</code> class, which lets you save the model or the weights (see here: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Sequential#save" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/Sequential#save</a>).</p> <p>A Sequential object is basically a container that you populate using <code>add(...)</code>. Operations are applied sequentially through the layers you specified. See the Keras documentation (a wrapper for TensorFlow that uses basically the same objects), or TensorFlow directly.</p> <p>Here I show you a simple model for text processing, composed of:</p> <ul> <li>Embedding layer (just a lookup table)</li> <li>Bidirectional LSTM layer</li> <li>Dropout layer</li> <li>Dense layer (dimension = 100)</li> <li>Dense layer (dimension = number of classes)</li> </ul> <p>Note that you don't have to write a loop to run different epochs, you just pass the parameter to the fit method.</p> <pre><code># Create empty sequential
model = Sequential()

# Add the different layers
model.add(Embedding(VOCAB_SIZE, EMBEDDING_DIM, input_length=MAX_SEQUENCE_LENGTH, weights=[embedding_matrix]))
model.add(Bidirectional(LSTM(512, return_sequences=False)))
model.add(Dropout(0.2))
model.add(Dense(100, activation='sigmoid'))
model.add(Dense(num_classes, activation='softmax'))

# Compile configures the model for training
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

# Fit train data
history = model.fit(train_we, np.array(train_lab), validation_split=0.2, epochs=num_epochs, batch_size=batch_size)

# Custom function to plot the loss graph
utils.plot_history(history)

# Make predictions: it returns a vector of probabilities over the given classes
list_prediction_proba = model.predict(test_we)

# Find the most likely class
prediction = [np.where(probabilities == probabilities.max())[0].min() for probabilities in list_prediction_proba]
</code></pre> <p>If you need more information try this: <a href="https://machinelearningmastery.com/save-load-keras-deep-learning-models/" rel="nofollow noreferrer">https://machinelearningmastery.com/save-load-keras-deep-learning-models/</a></p>
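<p>Complementing the weights-only snippet above: if the model is rebuilt as a Keras model (for example with <code>Sequential</code> as suggested), the whole thing, architecture plus weights plus optimizer state, can be saved and restored in one step. The file name here is just a placeholder:</p> <pre><code>model.save('char_rnn.h5')                      # architecture + weights + optimizer state in one file

from tensorflow.keras.models import load_model
restored = load_model('char_rnn.h5')           # no need to rebuild the architecture first
predictions = restored.predict(x_test)
</code></pre>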
python|tensorflow|machine-learning|inference
1
1,909,412
58,915,345
Changing the default seaborn heatmap plots
<p>I have exported a large Matrix from Matlab to a <code>data.dat</code> file, which is tab delimited. I am importing this data into a iPython script to use <code>seaborn</code> to create a heatmap of the matrix using the following MWE:</p> <pre><code>import numpy as np import seaborn as sns import matplotlib.pylab as plt uniform_data = np.loadtxt("data.dat", delimiter="\t") ax = sns.heatmap(uniform_data, linewidth=0.0) plt.show() </code></pre> <p>This code runs fine and outputs a correct heatmap, generating the following output:</p> <p><a href="https://i.stack.imgur.com/oZTNH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oZTNH.png" alt="latex"></a></p> <p>How can I change the style of this output? Specifically, I would like to change the colour scheme and also have the fonts in LaTeX form. This is since I would like to export this output as a .pdf file and import into a LaTeX document.</p>
<ol> <li>You can control the color scheme with the <code>cmap</code> key of <code>sns.heatmap()</code>. See <a href="https://seaborn.pydata.org/tutorial/color_palettes.html" rel="nofollow noreferrer">here</a> for what color maps are available.</li> <li>Generally, to make all fonts in a plot look like LaTeX fonts you can do</li> </ol> <p><code>sns.set(rc={'text.usetex': True})</code></p> <p>What it does is add a <code>$</code> around each text object in the plot to allow the underlying tex environment to "tex-ify" it. This works fine for the colorbar but, as you can see <a href="https://github.com/mwaskom/seaborn/issues/1363" rel="nofollow noreferrer">here</a>, there seems to be a (to me still unresolved) bug making it not work for the axes tick labels. As a workaround you can manually add <code>$</code> around all tick labels so that the tex interpreter recognizes them as tex again, à la</p> <pre><code># Collecting all xtick locations and labels and wrapping the labels in two '$'
plt.xticks(plt.xticks()[0], ['$' + label._text + '$' for label in plt.xticks()[1]])
</code></pre> <p>So, demonstrating with your example:</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

# Use tex for all labels globally in the plot
sns.set(rc={'text.usetex': True})

uniform_data = np.random.rand(30, 30)

# Adjust colormap with the cmap key (here 'cubehelix')
ax = sns.heatmap(uniform_data, linewidth=0.0, cmap='cubehelix')

# workaround: wrap '$' around tick labels for x and y axis
# commenting the following two lines leaves only the colorbar in latex font
plt.xticks(plt.xticks()[0], ['$' + label._text + '$' for label in plt.xticks()[1]])
plt.yticks(plt.yticks()[0], ['$' + label._text + '$' for label in plt.yticks()[1]])
plt.show()
</code></pre> <p>leads to</p> <p><a href="https://i.stack.imgur.com/at2eK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/at2eK.png" alt="enter image description here"></a></p>
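<p>Since the stated goal is to drop the figure into a LaTeX document, the usual last step is to export it as a vector PDF rather than using the interactive window:</p> <pre><code>plt.savefig('heatmap.pdf', bbox_inches='tight')   # call before (or instead of) plt.show()
</code></pre>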
python|latex|seaborn
0
1,909,413
58,622,983
AttributeError After processing the view
<p>After Submitting the like button data is successfuly updating with the database but after this step it won't redirecting to the successful url. instead of that it is throwing attribute error. If I use <code>HttpResponseRedirect('/album/')</code> instaed of successful url this error is not comming. <a href="http://dpaste.com/22227PB" rel="nofollow noreferrer">Please refer this link for the traceback</a></p> <h2>models.py</h2> <p>Codes in models.py</p> <pre><code>class VoteManager(models.Manager): def get_vote_or_unsaved_blank_vote(self,song,user): try: return Vote.objects.get(song=song,user=user) except ObjectDoesNotExist: return Vote.objects.create(song=song,user=user) class Vote(models.Model): UP = 1 DOWN = -1 VALUE_CHOICE = ((UP, "️"),(DOWN, "️"),) like = models.SmallIntegerField(null=True, blank=True, choices=VALUE_CHOICE) user = models.ForeignKey(User,on_delete=models.CASCADE) song = models.ForeignKey(Song, on_delete=models.CASCADE) voted_on = models.DateTimeField(auto_now=True) objects = VoteManager() class Meta: unique_together = ('user', 'song') </code></pre> <h2>views.py</h2> <p>codes in views.py</p> <pre><code>class SongDetailView(DetailView): model = Song template_name = 'song/song_detail.html' def get_context_data(self,**kwargs): ctx = super().get_context_data(**kwargs) if self.request.user.is_authenticated: vote = Vote.objects.get_vote_or_unsaved_blank_vote(song=self.object, user = self.request.user) vote_url = reverse('music:song_vote_create', kwargs={'song_id':vote.song.id}) vote_form = SongVoteForm(instance=vote) ctx['vote_form'] = vote_form ctx['vote_url'] = vote_url return ctx class SongVoteCreateView(View): form_class = SongVoteForm context = {} def get_success_url(self,**kwargs): song_id = self.kwargs.get('song_id') return reverse('music:song_detail', kwargs={'pk':song_id}) def post(self,request,pk=None,song_id=None): user = self.request.user song_obj = Song.objects.get(pk=song_id) vote_obj,created = Vote.objects.get_or_create(song = song_obj,user = user) vote_form = SongVoteForm(request.POST, instance=vote_obj) if vote_form.is_valid(): new_vote = vote_form.save(commit=False) new_vote.user = self.request.user new_vote.save() return new_vote </code></pre> <h2>song_detail.html</h2> <p>codes in html page.</p> <pre><code> &lt;form action="{{vote_url}}" method="post" enctype="multipart/form-data"&gt; {% csrf_token %} {{ vote_form.as_p }} &lt;button class="btn btn-primary" type="submit" &gt;Vote&lt;/button&gt; &lt;/form&gt; </code></pre> <h2>Error code</h2> <p><a href="http://dpaste.com/22227PB" rel="nofollow noreferrer">Please refer this link for the traceback</a></p> <p><code>'Vote' object has no attribute 'get'</code></p>
<p>The <code>post</code> method needs to return an HttpResponse, not a Vote object.</p> <p>But you shouldn't be defining <code>post</code> in the first place. All that code should go in <code>form_valid</code>.</p> <pre><code>def form_valid(self, form): user = self.request.user song_obj = Song.objects.get(pk=self.kwargs['song_id']) vote_obj, _ = Vote.objects.get_or_create(song = song_obj, user = user) form.instance = vote_obj return super().form_valid(form) </code></pre> <p>Note, you don't need to check <code>is_valid</code>, and you also don't need to set the user as you've already done that in <code>vote_obj</code>.</p>
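<p>For reference, if you do keep the plain <code>View</code> with a <code>post()</code> method, the smallest change that removes the AttributeError is to end it with a redirect instead of returning the <code>Vote</code> instance. A sketch, reusing the names from the question:</p> <pre><code>from django.shortcuts import redirect

class SongVoteCreateView(View):
    form_class = SongVoteForm

    def get_success_url(self, **kwargs):
        return reverse('music:song_detail', kwargs={'pk': self.kwargs.get('song_id')})

    def post(self, request, pk=None, song_id=None):
        song_obj = Song.objects.get(pk=song_id)
        vote_obj, _ = Vote.objects.get_or_create(song=song_obj, user=request.user)
        vote_form = SongVoteForm(request.POST, instance=vote_obj)
        if vote_form.is_valid():
            vote_form.save()
        return redirect(self.get_success_url())
</code></pre>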
django|python-3.x|django-models|django-forms|django-views
2
1,909,414
73,370,772
Does space complexity change if moving items from one data structure to another without adding new objects?
<p>I have the following function, that checks for the first duplicate value in an array. Example: an array input of <code>[123344]</code> would return <code>3</code>.</p> <p>I believe the space complexity is O(1) since the total space being used remains constant. Is this correct?</p> <pre><code>#o(n) time. #o(1) space or o(n) space? def firstDuplicateValue(array): found = set() while len(array) &gt; 0: x = array.pop(0) #we remove elements from the array as we add them to the set if x in found: return x else: found.add(x) return -1 </code></pre>
<h2>Space complexity</h2> <p>I have to answer &quot;no, the space complexity is not O(1)&quot; for two reasons. The first is theoretical and pedantic, the second is more practical.</p> <p>We typically consider space complexity to be the extra space which your program needs access to in addition to the input it is given.</p> <p>Some programs read the input but are not allowed to modify it. For these programs, your question isn't raised and the space complexity is exactly the space taken by all the extra variables and data structures created in the program.</p> <p>Some programs are explicitly allowed to modify the input. Still, for those programs, the space complexity only counts the extra space, not the space already in the input. For these programs it is quite possible to have a space complexity of O(log(n)). A space complexity of O(1) is unheard of in theory, because to iterate over an input consisting of an array of n elements, you need a counter variable that can count up to n, which requires log(n) bits. So, when people say O(1) in practice, they probably mean O(log(n)) in theory. This is my first reason to answer no: if we're being pedantic, the space complexity cannot be better than O(log(n)) in theory.</p> <p>Your program modifies the input <code>array</code> by deleting elements from it, and stores these elements in an extra data structure <code>found</code>. In theory, this new data structure could fit exactly in the space liberated from the input array, and if that were the case, you would be right that the complexity of your algorithm is better than O(n). In practice, it looks shady because:</p> <ul> <li>there is no disclaimer in your function that warns the user that it might destroy the input;</li> <li>there is no guarantee by the python interpreter that <code>array.pop()</code> will really free space on the computer.</li> </ul> <p>In practice, Python interpreters typically only resize the underlying array at certain thresholds and do not free space when using <code>array.pop()</code> until you've popped about half the values in the array. This is my second reason to say no: you need to pop at least n/2 values from the input array before any space will be freed. Before that happens, you will have used n/2 space in your extra data structure <code>found</code>. Hence the space complexity of your function will not be better than O(n) with a standard python interpreter.</p> <h2>Time complexity</h2> <p>Your first comment in the code says <code>#o(n) time.</code>. But that is not correct for two reasons. First, don't confuse <strong>o()</strong> and <strong>O()</strong>, which are very different. Second, this function is O(n^2), not O(n). This is because of the repeated use of <code>array.pop(0)</code>. As a good rule of thumb: never use <code>list.pop(0)</code> in python. Use <code>list.pop()</code> if you can, or find some other way.</p> <p><code>list.pop(0)</code> is terribly inefficient, as it removes the first element, then moves every other element one space up to fill the gap. Thus, every call to <code>list.pop(0)</code> has O(n) time complexity, and your function makes up to n calls to <code>list.pop(0)</code>, so the function has O(n^2) time complexity.</p>
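<p>If the goal is simply O(n) time without destroying the input, a small sketch that iterates instead of popping looks like this (same logic as the original, minus the <code>pop(0)</code> calls):</p> <pre><code># O(n) time, O(n) extra space, and the input list is left untouched
def firstDuplicateValue(array):
    found = set()
    for x in array:
        if x in found:   # first value seen twice
            return x
        found.add(x)
    return -1
</code></pre>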
python|algorithm|time-complexity|space-complexity
0
1,909,415
59,587,907
The Requested url not found. 404 error in pythonanywhere
<p>I have Flask app, and I wanted to run in pythonanywhere.com. I have followed the links <a href="https://stackoverflow.com/questions/53867776/deploying-python-flask-app-on-web-using-pythonanywhere">Link1</a>, <a href="https://stackoverflow.com/questions/15584241/pythonanywhere-404-error">Link2</a>, couldn't resolve the issue. My flask app name is <code>tourismmining</code> and the contents of the directory is listed under</p> <h2>Application Structure</h2> <p>My application is structured in the below way.</p> <pre><code>tourismmining |-&gt; instance |-&gt; mining |-&gt; resources |-&gt; entry.py |-&gt; installation.txt |-&gt; License |-&gt; Procfile |-&gt; README.rst |-&gt; requirements.txt </code></pre> <p>The core application is in the directory <code>mining</code> and is structured in the following way</p> <pre><code>mining |-&gt; config |-&gt; db |-&gt; exceptions |-&gt; files |-&gt; forms |-&gt; logs |-&gt; models |-&gt; sink |-&gt; static |-&gt; templates |-&gt; __init__.py |-&gt; main.py |-&gt; utils.py </code></pre> <p>So, just by flask local server, I used to run the below command</p> <pre><code> cd tourismmining python -m mining.main </code></pre> <p>and below is the output</p> <p><a href="https://i.stack.imgur.com/M7LZI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M7LZI.png" alt="enter image description here" /></a></p> <p>In touristmining, I have written an another file named entry.py, and it contains</p> <pre class="lang-py prettyprint-override"><code> from mining import app if __name__ == '__main__': app.run(debug=True) </code></pre> <p>And I run the file <code>entry.py</code> using the below command</p> <pre><code> cd tourismmining python entry.py </code></pre> <p><a href="https://i.stack.imgur.com/K2nh6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K2nh6.png" alt="enter image description here" /></a></p> <p>Now, after uploading the same code in to pythonanywhere, the directory structure is given below</p> <p><a href="https://i.stack.imgur.com/uWYLz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uWYLz.png" alt="enter image description here" /></a></p> <p>Setting up the virtual environment and tried two ways of configuring the wsgi file which are given below</p> <pre class="lang-py prettyprint-override"><code>import sys path = '/home/s1782662edin/tourismmining' if path not in sys.path: sys.path.insert(0, path) from mining import app as application </code></pre> <pre class="lang-py prettyprint-override"><code>import sys path = '/home/s1782662edin/tourismmining' if path not in sys.path: sys.path.insert(0, path) from entry import app as application </code></pre> <p>And reload the website, nothing worked out. Its says</p> <pre><code>Not Found The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again. </code></pre>
<p>Make sure that you have routes defined and that the code for defining the routes is run when app is imported from mining.</p>
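<p>For example, a minimal sketch of what that could look like (paths and names taken from the question, the structure itself is an assumption): the WSGI file does <code>from mining import app</code>, so <code>mining/__init__.py</code> has to create <code>app</code> and make sure every module containing <code>@app.route</code> decorators is imported.</p> <pre><code># /home/s1782662edin/tourismmining/mining/__init__.py  (sketch)
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'tourism mining is running'

# If the routes actually live in mining/main.py, import that module here so the
# @app.route decorators run when the WSGI file imports `app`:
from mining import main  # noqa: F401  (module name assumed from the question)
</code></pre>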
python|flask|pythonanywhere
1
1,909,416
59,677,811
trouble in a piece of code python getattr()
<p>I was reading a book <em>Programming Python</em> (4th edition) and I am stuck at a problem with a piece of code including <code>getattr</code>. It is a loop for finding an icon file but I can't figure where it is searching (in what order). I have basic knowledge of using <code>getattr</code> like it searches the <code>__dict__</code> for names, but couldn't figure this one out.</p> <pre><code>mymod = __import__(__name__) # import self for dir path = __name__.split('.') # poss a package path for mod in path[1:]: # follow path to end mymod = getattr(mymod, mod) # only have leftmost mydir = os.path.dirname(mymod.__file__) myicon = os.path.join(mydir, self.iconmine) </code></pre> <p>The comments are from the book. The last comment says "only have leftmost" so why run the loop if we want the leftmost - can't we do <code>path[-1]</code> instead?</p>
<p>Suppose you are working on a python project in the folder <code>/home/py_project/</code></p> <p>If you are coding a module (.py file) using the following path: <code>/home/py_project/modules/my_file.py</code> and the modules are defined in <code>/home/py_project/modules/__init__.py</code></p> <p>then,</p> <pre class="lang-py prettyprint-override"><code>mymod = __import__(__name__) # import self for dir print(mymod) </code></pre> <p>yields: <code>&lt;module 'modules' from '/home/py_project/modules/__init__.py'&gt;</code></p> <pre class="lang-py prettyprint-override"><code>path = __name__.split('.') print(path) </code></pre> <p>yields: <code>['modules', 'my_file']</code></p> <p>The <code>for mod in path[1:]</code> is slicing the previous list and getting all items from 1 to inf, in this case, only 'my_file' is considered.</p> <pre class="lang-py prettyprint-override"><code>for mod in path[1:]: # follow path to end mymod = getattr(mymod, mod) # only have leftmost # same as mymod = modules.my_file mydir = os.path.dirname(mymod.__file__) print(mydir) </code></pre> <p>yields: <code>/home/py_project/modules</code></p> <pre class="lang-py prettyprint-override"><code>myicon = os.path.join(mydir, self.iconmine) print(myicon) </code></pre> <p>yields: <code>/home/py_project/modules/path_to_self.iconmine # I don't know what self.iconmine is, you didn't mention it.</code></p> <p>Always print (or debug) the steps to the understand the code.</p> <h1>getattr()</h1> <p>The <code>getattr()</code> method returns the value of the named attribute of an object. If not found, it returns the default value provided to the function. For example:</p> <p>The syntax of getattr() method is:</p> <p><code>getattr(object, name[, default])</code></p> <p>The above syntax is equivalent to:</p> <p><code>object.name</code></p> <p>Example:</p> <pre class="lang-py prettyprint-override"><code>class Person: name = "Louis" person = Person() print('Name:', person.name) print('Name:', getattr(person, "name")) # using getattr() is the same as person.name </code></pre>
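<p>Regarding the question's last point (&quot;can't we do <code>path[-1]</code> instead?&quot;): the loop exists because <code>__import__('a.b.c')</code> returns the top-level package, so for a deeply nested module you have to walk down the attribute chain one level at a time. A sketch with a purely hypothetical package name:</p> <pre><code>name = 'pkg.sub.mod'              # hypothetical package path for illustration
mymod = __import__(name)          # returns the top-level 'pkg' package object
for part in name.split('.')[1:]:
    mymod = getattr(mymod, part)  # walks pkg, then pkg.sub, then pkg.sub.mod
print(mymod.__file__)             # file of the leaf module
</code></pre> <p>A single <code>getattr(mymod, path[-1])</code> would only work when the dotted path has exactly two parts.</p>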
python|getattr
0
1,909,417
59,801,368
Can Keras' Sequential fit() function take as train data a Pandas Data Frame?
<p>I want to create a neural network with Keras and my training data is in a pandas data frame, called df_train, which has the following form. Every row is an event/observation consisting of 51 variables.</p> <pre><code>df_train.head() </code></pre> <p><a href="https://i.stack.imgur.com/nRX3r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nRX3r.png" alt="enter image description here"></a></p> <p>My question is, can I use this df_train data frame as an input in the Keras model.fit() command? As following</p> <pre><code>net = Sequential() net.add(Dense(70, input_dim = 51, activation = "relu")) net.add(Dense(70, activation = "relu")) net.add(Dense(1, activation = "sigmoid")) net.compile(loss = "binary_crossentropy", optimizer = "adam", metrics = ["accuracy"]) net.fit(df_train, train_labels, epochs = 300, batch_size = 100) </code></pre> <p>In the net.fit() I pass as train data a data frame, but in sequential documentation it doesnt mention a data frame as a valid input. However, in my code it works and the model runs normally. Is something happening wrong behind the backstage and simply there is no error, or it is running as intended even when you use a pandas data frame as input? </p> <p>Also, if it works, does the fit() command in this case take as input one row of the given data frame a t the time?</p> <p>Thanks a lot.</p>
<pre><code>net.fit(df_train, train_labels, epochs = 300, batch_size = 100) </code></pre> <p>Here df_train is 2D and train_labels can be 2D or 1D (it depends on the loss you use and the number of units in the output layer).</p> <p>Answer to the 2nd question: yes, you can do that.</p> <p>If you try to pass X as a single row, i.e. a 1D array, it generates an error:</p> <pre><code> ValueError: Expected 2D array, got 1D array instead:array=[1. 2. 3. 4.]. Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample. </code></pre> <p><strong>How to resolve this</strong></p> <p>Convert it into 2D:</p> <pre><code> X=train.iloc[0:1,:] print(X.shape) # output: (1, 25), the single row is now two-dimensional </code></pre>
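<p>In other words, a minimal sketch (assuming <code>df_train</code> holds the 51 feature columns and <code>train_labels</code> is a 1-D array of 0/1 labels): passing the DataFrame is effectively the same as passing its underlying NumPy array, and a single observation just has to stay 2-D.</p> <pre><code># Passing the DataFrame is effectively the same as passing its values array
net.fit(df_train.values, train_labels, epochs=300, batch_size=100)

# Predicting on one observation: keep the shape (1, 51), not (51,)
one_row = df_train.iloc[[0]]      # double brackets keep it a DataFrame of shape (1, 51)
print(net.predict(one_row.values))
</code></pre>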
python|keras|neural-network
1
1,909,418
48,924,045
PyScard - What is the interpretation of the data obtained after executing GET RESPONSE?
<p>I'm trying to figure out the file hierarchy within a contact smart card using pyScard and ISO 7816 commands. </p> <p>The first thing I do is selecting the master file (INS = 0xA4) using</p> <pre><code> connection.execute([0x0, 0xA4, 0x0, 0x0, 0x0]) </code></pre> <p>This returns ([ ], 0x61, 0x19) which means I have to run a GET_RESPONSE (INS = 0xC0) command in order to get the answer.</p> <p>To do so I run</p> <pre><code> connection.execute([0x0, 0xC0, 0x0, 0x0, 0x19]) </code></pre> <p>which returns a set of bytes (besides the 0x90 00).</p> <p>If I understand it correctly, by running the SELECT FILE I've selected the master file but I don't seem to find an interpretation for those bytes that I receive with GET RESPONSE, what does that mean? How do you interpretate them?</p> <p>Thanks!! :)</p>
<p>Thanks <a href="https://stackoverflow.com/questions/48924045/pyscard-what-is-the-interpretation-of-the-data-obtained-after-executing-get-re?noredirect=1#comment84869737_48924045">guidot</a>!!</p> <blockquote> <p>GET RESPONSE has no definition of its own for a card speaking T=0, which you seem to have. You have to look at SELECT command. Its quite clear, that a typical card will return a number of FCIs (file control parameter data objects) wrapped in an 62/64/6F tag in TLV-format, but you have to compare this by looking at the response you receive. A card can return something completely proprietary however and still claim to conform to 7816-4. Without an OS manual you are out of luck then.</p> </blockquote>
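<p>To poke at those FCI bytes, a very rough sketch of a TLV walker is below. It only handles single-byte tags and lengths, while real ISO 7816-4 BER-TLV can use multi-byte forms, so treat it as a starting point rather than a full parser.</p> <pre><code>def walk_simple_tlv(data, indent=0):
    """Simplified TLV dump: assumes one-byte tags and one-byte lengths."""
    i = 0
    while i + 1 &lt; len(data):
        tag, length = data[i], data[i + 1]
        value = data[i + 2:i + 2 + length]
        print('%s tag=%02X len=%d value=%s' % (' ' * indent, tag, length, value))
        if tag in (0x62, 0x64, 0x6F):      # constructed FCI/FCP templates: recurse
            walk_simple_tlv(value, indent + 2)
        i += 2 + length

# data, sw1, sw2 = connection.execute([0x0, 0xC0, 0x0, 0x0, 0x19])
# walk_simple_tlv(data)
</code></pre>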
python|smartcard|iso|pyscard
0
1,909,419
49,265,977
How do I create genders in a text based python game?
<p>I've just started to make a text-based adventure game and started to make the intro but ran into a problem...</p> <p>I want to be able to let the player choose their gender (which is pointless but hey why not). So I started to program it in but then encountered a problem.</p> <p>Here's what I have. So far...</p> <pre><code>#Date started: 3/13/2018 #Description: text based adventure game import random import time def main(): def displayIntro(): print('It is the end of a 100 year war between good and evil that had killed more than 80% of the total human population.') time.sleep(4) print('The man who will soon be your father was a brave adventurer who fought for the good and was made famous for his heroism.') time.sleep(4) print('One day that brave adventurer meet a beautiful woman who he later wed and had you.') time.sleep(4) gen = "" while gen != "1" and gen != "2": # input validation gen = input('Your mother had a [Boy(1) or Girl (2)]: ') return gen playname = input('They named you: ') displayIntro() main() </code></pre> <p>I'm also getting the error message: </p> <pre><code>File "&lt;ipython-input-1-940c72b6cc26&gt;", line 10 def displayIntro(): ^ IndentationError: expected an indented block </code></pre>
<p>You had a ton of syntax errors in your code. I would recommend exploring a good IDE (integrated development environment) like PyCharm because these programs will yell at you if you have indented things incorrectly, made a typo, etc. They are invaluable. </p> <p>I have edited your code to give you a start on better understanding how a program should be put together. Problems fixed include: </p> <p>(1) Your multi-line strings need to be concatenated together somehow; I used the <code>+</code> operator</p> <p>(2) Python is strict about indentation; many of your blocks were improperly indented. See <a href="https://docs.python.org/2.0/ref/indentation.html" rel="nofollow noreferrer">https://docs.python.org/2.0/ref/indentation.html</a>, and </p> <p>(3) If you actually want to use a <code>main()</code> function you might as well do it the canonical way. This is typically a function that wraps everything else together and then is run under <code>if __name__ == "__main__":</code>.</p> <pre><code>import random import time def display_intro(): print('It is the end of a 100 year war between good and evil that had \n' + 'killed more than 80% of the total human population. \n') time.sleep(4) print('The man who will soon be your father was a brave adventurer who \n' + 'fought for the good and was made famous for his heroism. \n') time.sleep(4) print('One day that brave adventurer meet a beautiful woman who he later \n' + 'wed and had you. \n') time.sleep(4) def get_gender(gen=None): while gen != "1" and gen != "2": # input validation gen = input('\nYour mother had a [Boy(1) or Girl (2)]: ') return gen def main(): display_intro() gender_num = get_gender() print("This is a test. You entered {}".format(gender_num)) if __name__ == "__main__": main() </code></pre>
python|python-3.x
0
1,909,420
49,289,791
AttributeError: 'str' object has no attribute 'text' python 2.7
<p>Ik there are many questions like this but the answers are all specific and can only fix the solution for the persons specific script. I am currently trying to print a bunch of info from supremenewyork.com from the uk website. This script can succsesfully print all the info I want from supreme us but when I added the proxy script I starte to get alot of errors. I know the prxy script works becuase I tested it on a small scipt and It was able to pull info that was on supreme uk and didnt exist on supreme us Here is my script.</p> <pre><code>import requests from bs4 import BeautifulSoup UK_Proxy1 = raw_input('UK http Proxy1: ') UK_Proxy2 = raw_input('UK http Proxy2: ') proxies = { 'http': 'http://' + UK_Proxy1 + '', 'https': 'http://' + UK_Proxy2 + '', } categorys = ['jackets','shirts','tops_sweaters','sweatshirts','pants','shorts','t- shirts','hats','hats','bags','accessories','shoes','skate'] catNumb = 0 altArray = [] nameArray = [] styleArray = [] for cat in categorys: catStr = str(categorys[catNumb]) cUrl = 'http://www.supremenewyork.com/shop/all/' + catStr proxy_script = requests.get((cUrl.text), proxies=proxies) bSoup = BeautifulSoup(proxy_script, 'lxml') print('\n*******************"'+ catStr.upper() + '"*******************\n') catNumb += 1 for item in bSoup.find_all('div', class_='inner-article'): url = item.a['href'] alt = item.find('img')['alt'] req = requests.get('http://www.supremenewyork.com' + url) item_soup = BeautifulSoup(req.text, 'lxml') name = item_soup.find('h1', itemprop='name').text style = item_soup.find('p', itemprop='model').text print alt +(' --- ')+ name +(' --- ')+ style altArray.append(alt) nameArray.append(name) styleArray.append(style) print altArray print nameArray print styleArray </code></pre> <p>I am getting this error when I execute the script </p> <p>AttributeError: 'str' object has no attribute 'text' with the error pointing towards the </p> <p>proxy_script = requests.get((cUrl.text), proxies=proxies)</p> <p>i recently added this to the script which sorta fixed it... It was able to print the category's but no info between them. Which (I NEED) it just printed ****************jackets**************, ****shirts******, etc.... here is what I changed </p> <pre><code>import requests from bs4 import BeautifulSoup # make sure proxy is http and port 8080 UK_Proxy1 = raw_input('UK http Proxy1: ') UK_Proxy2 = raw_input('UK http Proxy2: ') proxies = { 'http': 'http://' + UK_Proxy1 + '', 'https': 'http://' + UK_Proxy2 + '', } categorys = ['jackets','shirts','tops_sweaters','sweatshirts','pants','shorts','t-shirts','hats','bags','accessories','shoes','skate'] catNumb = 0 altArray = [] nameArray = [] styleArray = [] for cat in categorys: catStr = str(categorys[catNumb]) cUrl = 'http://www.supremenewyork.com/shop/all/' + catStr proxy_script = requests.get(cUrl, proxies=proxies).text bSoup = BeautifulSoup(proxy_script, 'lxml') print('\n*******************"'+ catStr.upper() + '"*******************\n') catNumb += 1 for item in bSoup.find_all('div', class_='inner-article'): url = item.a['href'] alt = item.find('img')['alt'] req = requests.get('http://www.supremenewyork.com' + url) item_soup = BeautifulSoup(req.text, 'lxml') name = item_soup.find('h1', itemprop='name').text style = item_soup.find('p', itemprop='model').text print alt +(' --- ')+ name +(' --- ')+ style altArray.append(alt) nameArray.append(name) styleArray.append(style) </code></pre> <p>print altArray print nameArray print styleArray</p> <p>I put .text at the end and it worked sorta.... 
How do I fix it so it prints the info I want?</p>
<p>I think you missed something: <code>cUrl</code> is a plain string, not a response object, so it has no <code>.text</code> attribute. I guess you want: <code>proxy_script = requests.get(cUrl, proxies=proxies).text</code></p>
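<p>So the working shape of that block, sketched out:</p> <pre><code># cUrl is already a plain string; .text lives on the response object requests returns
response = requests.get(cUrl, proxies=proxies)
bSoup = BeautifulSoup(response.text, 'lxml')
</code></pre>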
string|python-2.7|proxy|beautifulsoup|python-requests
0
1,909,421
25,048,953
strange results returned with unix odbc and pypyodbc
<p>I have odbc set up to query a Microsoft SQL server instance. My SQL statement run's fine from the <code>isql</code> command line; however, when i use the same query in <code>pypyodbc</code> the results are incorrect. What's unique about my query is that I have a nested left join.</p> <p>The dataset is basically some user login information; so i have a device table (hardware), a node table (software) and a user table (who's logged on to the node).</p> <p>What my nested left join does is to create a new result for each user that has logged onto the device/node and, of course, return no user information if no one has logged on. As I mentioned, the SQL statement works on the command line; just not in my python script - from the python script it appends a user where from the command line it doesn't. Can someone help shed some light for me please?</p> <p>My SQL Query:</p> <pre><code>SELECT deviceserialid AS serial, devicemanufacturer AS vendor, devicemodel AS model, nodecn AS nodename, nodeoperatingsystem AS os_value, nodeoperatingsystemversion AS os_version, nodelastlogontimestamp AS node_last_login, nodedeviceuseruserid AS user_id, ip.ipaddress AS ip_address, ip.ipmacaddress AS mac_address, [user].username AS user_name, [user].useremplid AS user_id FROM nodedevice LEFT JOIN nodedeviceuser ON nodedeviceuser.nodedeviceusernodedeviceid=nodedevice.nodeid LEFT JOIN [user] ON [user].userid=nodedeviceuser.nodedeviceuseruserid, device, node LEFT JOIN ip ON ip.ipnodeid=node.nodeid WHERE nodedevice.nodeid=node.nodeid AND nodedevice.deviceid=device.deviceid ORDER BY nodename, mac_address </code></pre> <p>My Python code:</p> <pre><code>def rows_to_dict_list(cursor): columns = [i[0].lower() for i in cursor.description] for row in cursor: try: d = dict(zip(columns, row)) except: yield None cursor.execute( &lt;SQL_STATEMENT&gt; ) for this in rows_to_dict_list(cursor): print "%s" % (this,) </code></pre>
<p>It seems like there's a bug in <code>pypyodbc</code>; I swapped the module for <code>pyodbc</code> and it worked fine.</p>
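<p>For anyone hitting the same thing, the swap can be as small as one line, since <code>pyodbc</code> exposes essentially the same connect/cursor interface (worth double-checking against your own usage, this is only a sketch):</p> <pre><code>import pyodbc as pypyodbc  # drop-in for the rest of the script, assuming only connect/cursor/execute are used
</code></pre>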
python|sql|sql-server|left-join|pypyodbc
0
1,909,422
25,072,573
How to use Python to login and detect error if the combination is wrong
<p>i want to connect to my site using python script and detect when the login fails. for example in wordpress when the login is incorrect the URL don't forward from wp-login.php to /wp-admin/ and gives you this message :</p> <pre><code>ERROR: The password you entered for the username admin is incorrect. Lost your password? </code></pre> <p>i want the method to detect this two kinds of errors : 1-URL don't change. 2-Error message. and this is my code so far to detect error message : </p> <pre><code>import urllib2 import re html_content = urllib2.urlopen('http://example.com/login').read() matches = re.findall('Error', html_content); if len(matches) == 0: print 'Login Error' else: print 'Login Successfull' </code></pre> <p>thanks for any helps guys ^_^ </p>
<p>Please note that when working with Python for web dev, it's most convenient to use an already established framework, such as Flask which uses Werkzeug (simple and efficient).</p> <p>I suggest you read up on Python web development here</p> <p><a href="https://docs.python.org/2/howto/webservers.html" rel="nofollow">https://docs.python.org/2/howto/webservers.html</a></p> <p>and head to <a href="http://www.codecademy.com" rel="nofollow">http://www.codecademy.com</a> and finish the Python course first if you are unfamiliar with how to use it.</p> <p>From Python official doc:</p> <p>"The Web Server Gateway Interface, or WSGI for short, is defined in PEP 333 and is currently the best way to do Python web programming. While it is great for programmers writing frameworks, a normal web developer does not need to get in direct contact with it. When choosing a framework for web development it is a good idea to choose one which supports WSGI."</p> <p>I suggest Flask because it is really, really simple. But also take a look at</p> <p><a href="https://pelican.readthedocs.org/en/3.4.0/" rel="nofollow">https://pelican.readthedocs.org/en/3.4.0/</a></p> <p>This may not be the answer you were hoping for, but without showing us your current code, and telling us what you've tried, we cannot help any more :)</p> <p>or to put it more bluntly, turn to StackOverflow only after you've invested some effort into researching in order to solve your problem, and failed after trying to implement a solution or two.</p>
python|wordpress
2
1,909,423
70,886,732
How do you read strings from a data register using pymodbus?
<p>I'm trying to use my raspberry pi as a client for modbus communication between it and a S7-1200 Siemens plc. I got it to read integers from holding register, but not strings. When I try to search for a solution online on how to read strings from the plc holding register nothing useful comes up. I tried a program online that is supposed to be able to do it but I just keep getting an error every time I run it. Can anyone help? I have the code for the program and the error message below.</p> <pre><code>from pymodbus.client.sync import ModbusTcpClient as ModbusClient from pymodbus.payload import BinaryPayloadDecoder from pymodbus.constants import Endian from pymodbus.compat import iteritems if __name__ == '__main__': client = ModbusClient(&quot;192.168.0.1&quot;, port=502, auto_open=True) client.connect() result = client.read_holding_registers(1, 1, unit=1) print(&quot;Result : &quot;,result) decoder = BinaryPayloadDecoder.fromRegisters(result.registers, byteorder=Endian.Big, wordorder=Endian.Big) decoded = { 'name': decoder.decode_string(10).decode(), } for name, value in iteritems(decoded): print (&quot;%s\t&quot; % name, value) client.close() </code></pre> <p><a href="https://i.stack.imgur.com/DYIs2.png" rel="nofollow noreferrer">Error Message</a></p> <p><a href="https://i.stack.imgur.com/HijmH.png" rel="nofollow noreferrer">Data Register</a></p> <p><a href="https://i.stack.imgur.com/eOFjz.png" rel="nofollow noreferrer">PLC MB Server Block Setup</a></p>
<p>For example, say you want to read the speed of an asynchronous motor through a drive, using a Raspberry Pi with Python and the Modbus TCP protocol.</p> <p>The motor speed is stored at register 40001.</p> <p>With <code>result = client.read_holding_registers(1, 1, unit=1)</code> you can read just the 40001 address.</p> <p>With <code>result = client.read_input_registers(0x01, 1, unit=0x01)</code> you can read just the 30001 address.</p>
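<p>Back to the original question about strings: a 10-character string spans 5 holding registers (2 bytes each), so the read count has to cover all of them; reading only 1 register gives the decoder just 2 bytes to work with. A sketch (the start address and unit id here are assumptions, adjust them to your MB server block):</p> <pre><code>result = client.read_holding_registers(0, 5, unit=1)   # 5 registers = 10 bytes
decoder = BinaryPayloadDecoder.fromRegisters(result.registers,
                                             byteorder=Endian.Big,
                                             wordorder=Endian.Big)
print(decoder.decode_string(10).decode())
</code></pre>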
python|raspberry-pi|plc|modbus-tcp|pymodbus
0
1,909,424
60,320,610
NameError: name '*' is not defined
<p>Why this line <code>add(root, sum, 0)</code> gets <code>NameError: name 'add' is not defined</code> error?</p> <pre><code>class Solution: def rob(root: TreeNode) -&gt; int: sum = [0, 0] add(root, sum, 0) if sum[0] &lt; sum[1]: return sum[1] else: return sum[0] def add(node: TreeNode, sum: List[int], index: int): if not node: return sum[index] += node.val index += 1 if index &gt;= len(sum): index = 0; add(node.left, sum, index) add(node.right, sum, index) </code></pre>
<p>They are methods of the class, so they need <code>self</code>.</p> <p>Define them like this: <code>def add(self, node: TreeNode, sum: List[int], index: int):</code></p> <p>and call them through <code>self</code>:</p> <p><code>self.add(node.left, sum, index)</code></p>
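<p>A sketch of the whole class with <code>self</code> threaded through (structure copied from the question):</p> <pre><code>class Solution:
    def rob(self, root: TreeNode) -&gt; int:
        sum = [0, 0]
        self.add(root, sum, 0)
        return sum[1] if sum[0] &lt; sum[1] else sum[0]

    def add(self, node: TreeNode, sum: List[int], index: int):
        if not node:
            return
        sum[index] += node.val
        index += 1
        if index &gt;= len(sum):
            index = 0
        self.add(node.left, sum, index)
        self.add(node.right, sum, index)
</code></pre>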
python-3.x
2
1,909,425
68,022,073
ANSI color not working for Git Bash on Windows 10 using Windows Terminal
<p>I'm using Git Bash on Windows 10 with Windows Terminal and for this Python project, the ANSI escape sequence are not working.</p> <pre class="lang-py prettyprint-override"><code>from colorama import init, Fore from sys import stdout init(convert=True) # code ... </code></pre> <p>I tried to print a test text</p> <pre class="lang-py prettyprint-override"><code># The code above stdout.write('{GREEN}Test{RESET}'.format(GREEN=Fore.GREEN, RESET=Fore.RESET) </code></pre> <p>Here's the output:</p> <pre><code>←[32mTest←[0m </code></pre> <p>I'm sure that my terminal supports ANSI sequence since I've tested it with Bash, TS (Deno) and JS (NodeJS). They all worked. I've also tested on Command Prompt and it works fine for Python. Maybe this is a problem with Git Bash executing Python itself?</p> <p>I've also tried writing the hex code directly but still no luck. Check the code below<br /> <a href="https://pastebin.com/Uf557dSK" rel="nofollow noreferrer"><code>write.py</code></a><br /> <a href="https://i.stack.imgur.com/XBwlY.png" rel="nofollow noreferrer">Example image</a></p>
<p>After wasting some time testing, apparently only <code>print</code> works</p> <pre><code>#!/usr/bin/env python3 from colorama import init, Fore from sys import stdout # Initialize init() # Using `Fore` print(f'{Fore.GREEN}Test{Fore.RESET}') # Using actual hex values print('\x1B[31mTest\x1B[0m') # Using stdout.write stdout.write(f'{Fore.GREEN}Test{Fore.RESET}\n') stdout.write('\x1B[31mTest\x1B[0m\n') </code></pre> <pre><code>Test Test ←[32mTest←[39m ←[31mTest←[0m </code></pre> <p><a href="https://i.stack.imgur.com/r8in9.png" rel="nofollow noreferrer">Result</a></p> <h1><strong>EDIT:</strong></h1> <p>Using <code>input</code> will also fail</p> <pre><code># This will fail xyz = input(f'{Fore.RED}Red text{Fore.RESET}') # Consider using this as an alternative print(f'{Fore.RED}Red text{Fore.RESET}', end='') xyz = input() </code></pre>
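<p>A plausible explanation (an assumption on my part, but consistent with the behaviour above): <code>init()</code> replaces <code>sys.stdout</code> with a wrapping stream that rewrites the ANSI codes, while a name grabbed earlier with <code>from sys import stdout</code> still points at the original, unwrapped stream, so its writes bypass the conversion. Looking the attribute up on <code>sys</code> after <code>init()</code> avoids that:</p> <pre><code>import sys
from colorama import init, Fore

init()
sys.stdout.write(f'{Fore.GREEN}Test{Fore.RESET}\n')  # attribute looked up after init(), so the wrapper is used
print(f'{Fore.GREEN}Test{Fore.RESET}')               # print always writes to the current sys.stdout
</code></pre>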
python|windows|terminal|ansi
0
1,909,426
67,667,851
Highlight the date format in pandas data frame
<p>I want to highlight all the dates in one column of the pandas data frame.</p> <pre><code>index  A
0      24
1      32-35
2      10/01/2016
3      02/20/2017
4      02/20/2017
5      02/20/2017
</code></pre>
<p>The idea is to test whether each value can be converted to a datetime by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> and then check that the result is not a missing value, because <code>errors='coerce'</code> returns <code>NaT</code> for non-datetimes:</p> <pre><code>def color_datetimes(val): color = 'red' if pd.notna(pd.to_datetime(val, errors='coerce')) else 'black' return 'color: %s' % color df.style.applymap(color_datetimes, subset=['A']).to_excel('file.xlsx', index=False) </code></pre>
python|pandas|date
1
1,909,427
67,010,939
Sharing variables from IPyWidget embedded JS to python
<p>I'm using <a href="https://github.com/AaronWatters/jp_proxy_widget" rel="nofollow noreferrer"><code>jp_proxy_widget</code></a> - an extesion library for ipywidgets, and trying to use the <code>get_value_async</code> function to pass data from JavaScript back to python.</p> <p>As far as I can understand from the tutorial notebook, this should work</p> <pre><code>setDemo = jp_proxy_widget.JSProxyWidget() display(setDemo) setDemo.js_init(&quot;&quot;&quot; var testid = 10; &quot;&quot;&quot;) class Testinfo: TESTID = None def testid_callback(testid_val): Testinfo.TESTID = testid_val setDemo.get_value_async(testid_callback, &quot;testid&quot;) </code></pre> <p>but if I execute the cell this is in, I get an error:</p> <pre><code>Uninitialized Proxy Widget new error message: ReferenceError: testid is not defined </code></pre> <p>What am I doing wrong?!</p>
<p>If anyone else finds this, the issue is the <code>var</code> declaration of the variable: declaring it with <code>var</code> inside <code>js_init</code> apparently keeps it local to that initialisation function, so <code>get_value_async</code> cannot see it afterwards. Dropping the <code>var</code>:</p> <pre class="lang-js prettyprint-override"><code> setDemo.js_init(&quot;&quot;&quot; testid = 10; &quot;&quot;&quot;) </code></pre> <p>works fine.</p>
python|jupyter-notebook|ipywidgets
0
1,909,428
66,989,613
Passing information using HTML and Flask
<p>I'm working on a uni project that involves logging films, sort of like Letterboxd. I've made a search page where users enter a keyword when looking for a movie, this then prints the results supplied by the tmdb api. Below is the code in my routes.py file for the search and results page:</p> <pre><code>@app.route('/search-movie', methods=['GET', 'POST']) def m_search(): form = MovieSearch() if form.validate_on_submit(): user_search = urllib.parse.quote(form.movieName.data) complete_url = search_url + user_search + &quot;&amp;page=1&quot; conn = urllib.request.urlopen(complete_url) json_data = json.loads(conn.read()) return render_template('search_results.html', results=json_data[&quot;results&quot;], term=form.movieName.data) return render_template('movie_search.html', form=form) </code></pre> <p>The following is the code in the html file for that page:</p> <pre><code>{% block content %} &lt;h1&gt; results for &quot;{{term}}&quot;&lt;/h1&gt; &lt;h2&gt; click movie name to log it&lt;/h2&gt; &lt;div class=&quot;movie-list-container&quot;&gt; {% for movie in results %} &lt;div class=&quot;movie-card&quot;&gt; &lt;img src=&quot;https://image.tmdb.org/t/p/w500/{{movie.poster_path}}&quot; alt=&quot;&quot;&gt; &lt;!-- &lt;h3&gt; {{movie.title}} ({{movie.release_date[:4]}})&lt;/h3&gt; --&gt; &lt;h3&gt;&lt;a class=&quot;log-link&quot; href=&quot;{{ url_for('log_movie', movieid=movie.id) }}&quot;&gt; {{movie.title}} ({{movie.release_date[:4]}}) &lt;/a&gt;&lt;/h3&gt; &lt;/div&gt; {% endfor %} &lt;/div&gt; {% endblock %} </code></pre> <p>As you can see in the line I commented out, previously it would just display the movie title and release year. However I wanted to change is so that if the user presses the movie name, they are taken to a page where they add information through a form such as their rating, the date they watched the movie, and a review. <a href="https://i.stack.imgur.com/any2i.png" rel="nofollow noreferrer">This is how it's done on Letterboxd</a> I wanted mine to be pretty much the same.</p> <p>I want to be able to show the movie name, release date and poster for the movie they pressed on the logging page, and I tried this in the h3 by passing through movieid=movie.id . From there in my routes.py file I wrote the following code</p> <pre><code>@app.route('/log-movie', methods=['GET', 'POST']) def log_movie(movieid): log_url = info_url + movieid + &quot;?api_key=&quot; + api_key conn = urllib.request.urlopen(log_url) json_data = json.loads(conn.read()) form = LogMovie() if form.validate_on_submit(): log_data = Diary(date_watched=form.dateWatched.data, movie_name=mname, release_date=myear, user_rating=form.movieRating.data, rewatch=form.movieRewatch.data, review=form.movieReview.data, logger=current_user) db.session.add(log_data) db.session.commit() return redirect(url_for('home')) return render_template('log_movie.html', form=form, results=json_data[&quot;results&quot;]) </code></pre> <p>My idea was to simply get the movieid so that I can then request the information from the api again, so that I can pass it through like I did when displaying the results in the html code I added above.</p> <p>When storing to the database, I have the variables mname, myear. These were from a previous attempt where I wished to pass in the movie year and release date from the HTML without needing to call upon the api again in routes.py. When I couldn't get the multiple variables to pass, that's when I changed it to just movieid. 
I forgot to change these back but should I manage to pass the information from the HTML, I may go back to this version.</p> <p>I keep getting an error TypeError: log_movie() missing 1 required positional argument: 'movieid' and I can't seem to find an answer on google. I was wondering if anyone knew why, or a better way of achieving what I want?</p> <p>I've never asked a question before so let me know if I need to provide more information.</p> <p>Many Thanks! :)</p>
<pre><code>@app.route('/log-movie', methods=['GET', 'POST']) def log_movie(movieid): </code></pre> <p>I'm just pointing out that this is where your error is. If you want to receive the movieid here, your route should be</p> <pre><code>@app.route('/log-movie/&lt;movieid&gt;', methods=['GET', 'POST']) </code></pre> <p>That way, if you GET /log-movie/12345, the movieid will be 12345.</p> <p>Edit: Just to be clear, I only pointed out where your error is; if you need more help with this, you can still leave a comment and I'll edit my answer to cover it :)</p>
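<p>A sketch of the adjusted view signature (everything else from the question stays the same; the existing <code>url_for('log_movie', movieid=movie.id)</code> call will then build <code>/log-movie/&lt;id&gt;</code> automatically):</p> <pre><code>@app.route('/log-movie/&lt;int:movieid&gt;', methods=['GET', 'POST'])
def log_movie(movieid):
    log_url = info_url + str(movieid) + "?api_key=" + api_key  # movieid arrives as an int here
    # ... rest of the view unchanged
</code></pre>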
python|html|flask
0
1,909,429
42,590,517
How can I use Python to time how long a mp3 takes to download from website?
<p>A mp3 is accessible via two different URLs. I'm trying to use Python to figure out which URL is fastest to download from...? </p> <p>For example, I want to time how long <a href="https://cpx.podbean.com/mf/download/a6bxxa/LAF_15min_044_mindfulness.mp3" rel="nofollow noreferrer">https://cpx.podbean.com/mf/download/a6bxxa/LAF_15min_044_mindfulness.mp3</a> takes to download and compare that to how long <a href="http://cpx.podbean.com/mf/play/a6bxxa/LAF_15min_044_mindfulness.mp3" rel="nofollow noreferrer">http://cpx.podbean.com/mf/play/a6bxxa/LAF_15min_044_mindfulness.mp3</a> takes to download.</p> <p>To download the mp3 I'm currently using: urllib.request.urlretrieve(mp3_url, mp3_filename)</p>
<p>you could essentially do something like:</p> <pre><code>from datetime import datetime starttime = datetime.now() urllib.request.urlretrieve(mp3_url, mp3_filename) # Whatever code you're using... finishtime = datetime.now() runtime = finishtime - starttime print(str(runtime)) </code></pre> <p>This will print a timestamp like <code>0:03:19.356798</code> in the format of [hours]:[minutes]:[seconds.micro seconds]</p> <p>My bad... I didn't realize you're trying to figure out which link was the fastest. I have no clue how you're storing your <code>mp3_url</code> and <code>mp3_filename</code> elements, but try something like this (adjust accordingly):</p> <pre><code>from datetime import datetime mp3_list = { 'file1.mp3': 'http://www.url1.com', 'file2.mp3': 'http://www.url2.com', 'file3.mp3': 'http://www.url3.com', } runtimes = [] for mp3_filename, mp3_url in mp3_list.items(): # the dict maps filename to url, so unpack in that order; adjust to however you store yours starttime = datetime.now() urllib.request.urlretrieve(mp3_url, mp3_filename) # Whatever code you're using... finishtime = datetime.now() runtime = finishtime - starttime runtimes.append({'runtime': runtime, 'url': mp3_url, 'filename': mp3_filename}) fastest_mp3_url = sorted(runtimes, key=lambda k: k['runtime'])[0]['url'] fastest_mp3_filename = sorted(runtimes, key=lambda k: k['runtime'])[0]['filename'] print(fastest_mp3_url) print(fastest_mp3_filename) </code></pre>
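<p>If you'd rather use a timer designed for measuring intervals, <code>time.perf_counter()</code> does the same job (a small sketch):</p> <pre><code>import time
import urllib.request

start = time.perf_counter()
urllib.request.urlretrieve(mp3_url, mp3_filename)
elapsed = time.perf_counter() - start
print('%s took %.2f seconds' % (mp3_url, elapsed))
</code></pre>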
python|http
2
1,909,430
72,272,220
How can I return a list from a list of lists when searching a string and also force the comparison to lowercase?
<p>I have a list of lists as such:</p> <pre><code>allTeams = [ [57, 'Arsenal FC', 'Arsenal', 'ARS'], [58, 'Aston Villa FC', 'Aston Villa', 'AVL'], [61, 'Chelsea FC', 'Chelsea', 'CHE'], ...] userIsLookingFor = &quot;chelsea&quot; for team in allTeams: if userIsLookingFor.lower() in any_element_of_team.lower(): print(team) &gt; [61, 'Chelsea FC', 'Chelsea', 'CHE'] </code></pre> <p>I would basically look for the user's requested word in a list of lists, and if there's a match, I print that list. In the case above, the user searches for &quot;chelsea&quot; and in one of the lists, there's a match for &quot;chelsea&quot; (either Chelsea FC or Chelsea, doesn't matter). So I would return that specific list.</p> <p>I tried using &quot;any&quot; but it seems to only return a boolean and I can't actually print any list from it.</p>
<p>You can use a list comprehension:</p> <pre><code>userIsLookingFor = &quot;chelsea&quot; # option 1, exact match item 2 [l for l in allTeams if l[2].lower() == userIsLookingFor] # option 2, match on any word [l for l in allTeams if any(x.lower() == userIsLookingFor for s in l if isinstance(s, str) for x in s.split()) ] </code></pre> <p>output:</p> <pre><code>[[61, 'Chelsea FC', 'Chelsea', 'CHE']] </code></pre>
python|list|comparison|lowercase
1
1,909,431
65,904,696
WARNING: erroneous pipeline: no element "cameracalibrate"
<p>I am trying to run the following pipeline:</p> <pre><code>gst-launch-1.0 -v v4l2src ! videoconvert ! cameracalibrate ! cameraundistort ! autovideosink </code></pre> <p>The First part of my question is:</p> <p>As I read in the documentation <code>cameracalibrate</code> and <code>cameraundistort</code> are elements that belong to opencv plugin and which we can use directly to create our own pipelines. Could someone please tell me if what I understood is right or not.</p> <p>The second part is:</p> <p>I am getting this error:</p> <blockquote> <p>WARNING: erroneous pipeline: no element &quot;cameracalibrate&quot;</p> </blockquote> <p>I had already installed the <code>gst-plugins-bad</code></p> <p>I am beginner in Gstreamer, could someone help me please and tell me what is behind this error.</p>
<p>Although the opencv-related plugins are part of &quot;gst-plugins-bad&quot;, Debian (which you indicated you are using) <a href="https://packages.debian.org/buster/gstreamer1.0-opencv" rel="nofollow noreferrer">packages them separately</a>. That way, people who don't want/need the OpenCV-based plugins don't have to install them, along with the (quite heavy) dependency tree that comes with them.</p> <p>So to solve your issue, you should be able to just use <code>sudo apt install gstreamer1.0-opencv</code></p>
python|camera-calibration|gstreamer-1.0|python-gstreamer
1
1,909,432
51,085,229
Parse Nested XML using python
<p>I'm trying to parse Nested XML using python. Sample file format looks like this</p> <pre><code>&lt;repositoryFileTreeDto&gt; &lt;children&gt; &lt;children&gt; &lt;file&gt; &lt;name&gt; File1 &lt;/name&gt; &lt;path&gt; home/user1/File1.txt &lt;/path&gt; &lt;/file&gt; &lt;/children&gt; &lt;children&gt; &lt;file&gt; &lt;name&gt; File2 &lt;/name&gt; &lt;path&gt; home/user1/File2.txt &lt;/path&gt; &lt;/file&gt; &lt;/children&gt; &lt;file&gt; &lt;name&gt; User1 &lt;/name&gt; &lt;path&gt; home/user1 &lt;/path&gt; &lt;/file&gt; &lt;/children&gt; &lt;children&gt; &lt;file&gt; &lt;name&gt; User2 &lt;/name&gt; &lt;path&gt; home/user2 &lt;/path&gt; &lt;/file&gt; &lt;/children&gt; &lt;children&gt; &lt;file&gt; &lt;name&gt; User3 &lt;/name&gt; &lt;path&gt; home/user3 &lt;/path&gt; &lt;/file&gt; &lt;/children&gt; &lt;children&gt; &lt;children&gt; &lt;file&gt; &lt;name&gt; File4 &lt;/name&gt; &lt;path&gt; home/user4/File4.txt &lt;/path&gt; &lt;/file&gt; &lt;/children&gt; &lt;file&gt; &lt;name&gt; User4 &lt;/name&gt; &lt;path&gt; home/user4 &lt;/path&gt; &lt;/file&gt; &lt;/children&gt; &lt;file&gt; &lt;name&gt; Home &lt;/name&gt; &lt;path&gt; /home &lt;/path&gt; &lt;/file&gt; &lt;/repositoryFileTreeDto&gt; </code></pre> <p>I want to print the Empty uses folders and Non-Empty User folders(i.e. users with 1 or more files). </p> <p>Here in the XML snippet. </p> <p>User 2 &amp; User 3 are Empty Folders and User 1 is a Non-Empty user. </p> <p>Condition to identify Empty and Non-Empty Users:</p> <p>If the User has any tag at the same level then Non-Empty User. If the user doesn't have tag then it is Empty User. </p> <p>Sample Code 1:</p> <pre><code>import xml.etree.ElementTree as ET import time import requests import csv tree = ET.parse('tree.xml') root = tree.getroot() for child in root.findall('children'): for subchlid in child.findall('file'): title = subchlid.find('title').text print(title) for subchlid1 in child.findall('children'): if subchlid1.tag == 'children': print(subchlid1.tag) </code></pre> <p>Code Output 1:</p> <pre><code>User1 File1 File2 User2 User3 User4 File4 </code></pre> <p>Sample Code2:</p> <pre><code>import xml.etree.ElementTree as ET import time import requests import csv tree = ET.parse('tree.xml') root = tree.getroot() list_values = [] dicts = {} for child in root.findall('children'): for sub_child in child.findall('file'): username = sub_child.find('title').text for sub_child1 in child.findall('children'): for sub_child2 in sub_child1.findall('file'): file_path = sub_child2.find('path').text file_title = sub_child2.find('title').text #print(username) #print(file_title) list_values.append(file_title) for user in username: dicts[username] = list_values print(dicts) </code></pre> <p>Code Output 2:</p> <pre><code>{'User1': ['File1', 'File2'],'User4': ['File1', 'File2', 'File4']} </code></pre> <p>Here in this output User2 and User3 is not part of the Dict because it is an empty folder and User4 is sharing the User1 files. </p> <p>Expected Output:</p> <pre><code>The number of Empty Users: 2 The number of Non-Empty Users: 2 User1 Files are: File1, File2 User4 files are: File4 </code></pre> <p>Thanks all guys.</p>
<p>If you are open to using the <code>lxml</code> package, you can use xpath to get your result.</p> <pre><code>from lxml import etree from operator import itemgetter from collections import defaultdict first = itemgetter(0) with open('tree.xml') as fp: xml = etree.fromstring(fp.read()) # create a user dictionary and a file dictionary udict = {first(e.xpath('path/text()')).strip(): first(e.xpath('name/text()')).strip() for e in xml.xpath('children/file')} fdict = {first(e.xpath('path/text()')).strip(): first(e.xpath('name/text()')).strip() for e in xml.xpath('children/children/file')} ufdict = defaultdict(list) for k,v in fdict.items(): ufdict[first(k.rsplit('/',1))].append(v) out = {v: ufdict.get(k, []) for k, v in udict.items()} print('The number of Empty Users: {}'.format(len([v for v in out.values() if not v]))) print('The number of Non-Empty Users: {}'.format(len([v for v in out.values() if v]))) for k, v in out.items(): if v: print(f'{k} files are {", ".join(v)}') </code></pre>
python|xml|xml-parsing
0
1,909,433
51,091,593
Plotly Legend Aa labels - how to remove them
<p>I am using a scatter plot with mode = markers+text. The legends appear with this disturbing Aa label. Anyway to remove them? I am using plotly for python</p> <p><a href="https://i.stack.imgur.com/SJOcS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SJOcS.png" alt="enter image description here"></a></p>
<p>It's certainly not the right way to perform the task, but if you want to render images using Orca there is no other option. </p> <ol> <li><p>Locate <code>plotly.min.js</code> (<code>plotly.io.to_image()</code> uses the internal library at site-packages\plotly\package_data)</p></li> <li><p>Find and replace <code>x.tx="Aa"</code> with <code>x.tx=""</code></p></li> </ol>
python|plotly
2
1,909,434
50,551,997
Python3 unable to import argcomplete package
<p>I'm trying to import "argcomplete" package but I'm facing following error:</p> <pre><code>$ python3 Python 3.6.3 (v3.6.3:2c5fed86e0, Oct 3 2017, 00:32:08) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import argcomplete Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'argcomplete' &gt;&gt;&gt; exit() </code></pre> <p>I confirm that argcomplete was successfully installed:</p> <pre><code>#pip3 install argcomplete Collecting argcomplete Using cached https://files.pythonhosted.org/packages/31/88/ba8d8684a8a27749250c66ff7c2b408fdbc29b50da61200338ff9b2607bf/argcomplete-1.9.4-py2.py3-none-any.whl Installing collected packages: argcomplete Successfully installed argcomplete-1.9.4 $ more test_backend.py #!/usr/bin/env python # PYTHON_ARGCOMPLETE_OK """ Run module with test data """ $ cd /usr/local/lib/python3.6/site-packages/ apple site-packages $ ls -ltrh arg* -rw-r--r-- 1 apple admin 87K May 27 03:34 argparse.py argcomplete-1.9.4.dist-info: total 104 -rw-r--r-- 1 apple admin 12B May 27 15:38 top_level.txt -rw-r--r-- 1 apple admin 1.5K May 27 15:38 metadata.json -rw-r--r-- 1 apple admin 110B May 27 15:38 WHEEL -rw-r--r-- 1 apple admin 2.0K May 27 15:38 RECORD -rw-r--r-- 1 apple admin 16K May 27 15:38 METADATA -rw-r--r-- 1 apple admin 4B May 27 15:38 INSTALLER -rw-r--r-- 1 apple admin 14K May 27 15:38 DESCRIPTION.rst argcomplete: total 160 -rw-r--r-- 1 apple admin 2.1K May 27 15:38 shellintegration.py -rw-r--r-- 1 apple admin 13K May 27 15:38 my_shlex.py -rw-r--r-- 1 apple admin 15K May 27 15:38 my_argparse.py -rw-r--r-- 1 apple admin 3.6K May 27 15:38 completers.py -rw-r--r-- 1 apple admin 524B May 27 15:38 compat.py drwxr-xr-x 3 apple admin 102B May 27 15:38 bash_completion.d -rw-r--r-- 1 apple admin 1.4K May 27 15:38 _check_module.py drwxr-xr-x 9 apple admin 306B May 27 15:38 __pycache__ -rw-r--r-- 1 apple admin 29K May 27 15:38 __init__.py </code></pre> <p>/usr/local/lib/python3.6/site-packages/ is already added to PATH</p> <p>I noticed that import is only working from directory /usr/local/lib/python3.6/site-packages but not from anywhere else</p> <pre><code>$ python3 -c 'import argcomplete' &gt;&gt; successful $ cd /Users/apple/Desktop/XXXXX/ apple (master) XXXXX $ python3 test_backend.py Traceback (most recent call last): File "test_backend.py", line 11, in &lt;module&gt; import argcomplete ModuleNotFoundError: No module named 'argcomplete' apple (master) XXXXX $ python3 -c 'import argcomplete' Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'argcomplete' apple (master) XXXXX </code></pre> <p>Please advise how to solve this issue, thanks.</p>
<p>The problem is that python 2.7 is the default for mac so installing packages via terminal will only install them on python 2.7. </p> <p>If you already have pip installed on version 3, just do this:</p> <pre><code>python3 -m pip install argcomplete </code></pre>
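<p>Afterwards, a quick sanity check (run it with the same <code>python3</code> you use for your scripts) confirms which interpreter is running and where the package was found:</p> <pre><code>import sys
import argcomplete

print(sys.executable)        # which interpreter is actually running
print(argcomplete.__file__)  # where the package was picked up from
</code></pre>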
python|python-3.x
3
1,909,435
35,136,404
xgboost binary logistic regression
<p>I am having problems running logistic regression with xgboost that can be summarized on the following example. </p> <p>Lets assume I have a very simple dataframe with two predictors and one target variable:</p> <pre><code>df= pd.DataFrame({'X1' : pd.Series([1,0,0,1]), 'X2' : pd.Series([0,1,1,0]), 'Y' : pd.Series([0,1,1,0], )}) </code></pre> <p>I can post images because Im new here, but we can clearly see that when X1 =1 and X2=0, Y is 0 and when X1=0 and X2=1, Y is 1.</p> <p>My idea is to build a model that outputs the probability that an observation belongs to each one of the classes, so if I run xgboost trying to predict two new observations (1,0) and (0,1) like so:</p> <pre><code>X = df[['X1','X2']].values y = df['Y'].values params = {'objective': 'binary:logistic', 'num_class': 2 } clf1 = xgb.train(params=params, dtrain=xgb.DMatrix(X, y), num_boost_round=100) clf1.predict(xgb.DMatrix(test.values)) </code></pre> <p>the output is:</p> <pre><code>array([[ 0.5, 0.5], [ 0.5, 0.5]], dtype=float32) </code></pre> <p>which, I imagine, means that for the first observation, there is 50% chance it belonging to each one of the classes.</p> <p>I'd like to know why wont the algorithm output a proper (1,0) or something closer to that if the relationship between the variables is clear. </p> <p>FYI, I did try with more data (Im only using 4 rows for simplicity) and the behavior is almost the same; what I do notice is that, not only the probabilities do not sum to 1, they are often very small like so: (this result is on a different dataset, nothing to do with the example above)</p> <pre><code>array([[ 0.00356463, 0.00277259], [ 0.00315137, 0.00268578], [ 0.00453343, 0.00157113], </code></pre>
<p>OK - here's what is happening. </p> <p>The clue as to why it isn't working is in the fact that in the smaller datasets it cannot train properly. I trained this exact model, and observing the dump of all the trees you will see that they cannot split. </p> <p>(tree dump below)</p> <p><em>NO SPLITS, they have been pruned!</em></p> <p><code>[1] "booster[0]" "0:leaf=-0" "booster[1]" "0:leaf=-0" "booster[2]" "0:leaf=-0" [7] "booster[3]" "0:leaf=-0" "booster[4]" "0:leaf=-0" "booster[5]" "0:leaf=-0" [13] "booster[6]" "0:leaf=-0" "booster[7]" "0:leaf=-0" "booster[8]" "0:leaf=-0" [19] "booster[9]" "0:leaf=-0"</code></p> <p><strong>There isn't enough weight in each of the leaves to overpower</strong> <code>xgboost</code>'s <strong>internal regularization</strong> (which penalizes it for growing) </p> <p>These parameters may or may not be accessible from the python version, but you can grab them from <code>R</code> if you do a github install </p> <p><a href="http://xgboost.readthedocs.org/en/latest/parameter.html" rel="noreferrer">http://xgboost.readthedocs.org/en/latest/parameter.html</a></p> <blockquote> <p>lambda [default=1] L2 regularization term on weights</p> <p>alpha [default=0] L1 regularization term on weights</p> </blockquote> <p>Basically, this is why your example trains better as you add more data, but cannot train at all with only 4 examples and default settings.</p>
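<p>For completeness, in current Python releases these knobs are ordinary entries in the <code>params</code> dict, so a sketch of loosening them for the 4-row toy example could look like this (values chosen only to let the toy data split, not as a general recommendation):</p> <pre><code>params = {
    'objective': 'binary:logistic',
    'lambda': 0.0,            # turn off the L2 term
    'alpha': 0.0,             # turn off the L1 term
    'min_child_weight': 0,    # allow splits even with tiny hessian sums
}
clf1 = xgb.train(params=params, dtrain=xgb.DMatrix(X, y), num_boost_round=10)
print(clf1.predict(xgb.DMatrix(X)))
</code></pre>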
python|machine-learning|regression|logistic-regression|xgboost
5
1,909,436
44,916,248
Can't import certain package in cmd, but can import it with sublime text3
<p>So I am trying to import certain packages. When I use this command in cmd, <code>py test.py</code>, all I get is <code>ModuleNotFoundError: No module named holidays</code></p> <p>The only thing in <code>test.py</code> is <code>import holidays</code>.</p> <p>But when I try to execute <code>test.py</code> in Sublime Text 3, it works. No error happens.</p> <p>I have tried to find the answer on Google, but can't find anything helpful. Please help me. Thank you all!</p>
<p>I think it may be linked to what @wim says in this <a href="https://stackoverflow.com/questions/29553668/script-running-in-pycharm-but-not-from-the-command-line">post</a></p> <p>You may want to check some points:</p> <ol> <li><p>Do you use the same python interpreter?</p> <p><code>import sys; print(sys.executable)</code></p></li> <li><p>Is the current working directory the same for both?</p> <p><code>import os; print(os.getcwd())</code></p></li> <li><p>Check <code>sys.path</code>; a difference between the two could explain the error (it can be caused by an environment variable).</p> <p><code>import sys; print(sys.path)</code></p></li> </ol> <p>(This answer is entirely based on what Wim said in the accepted answer of the linked post; if it answers your question please give him credit).</p>
python|python-3.x|sublimetext3
0
1,909,437
60,711,442
Count strings within an element of a list
<p>Let's say I have a list, l, which has sentences as elements. For example l = [boy boy, girl, hand foot foot ...]. I want to be able to loop through the list and find out which elements of my list has duplicates. I tried the counter function but didn't get the desired result:</p> <pre><code>from _collections import Counter print(Counter(l)) </code></pre> <p>How can I get the desired result?</p>
<p>Ref: <a href="https://docs.python.org/3.8/library/collections.html" rel="nofollow noreferrer">https://docs.python.org/3.8/library/collections.html</a></p> <p>The <code>Counter</code> class of <code>collections</code> module offers a utility to find the number of appearance of a string in a list of strings as dictionary. </p> <p>In the following function you will get a list of strings which have been duplicated.</p> <pre><code>from collections import Counter def get_duplicate_str(list_of_str): """ This function returns a list of duplicate strings appeared in given list of strings. @param list_of_str: List of strings @return : List of strings """ str_counter_dict = Counter(list_of_str) list_of_duplicate_str = [key for key in str_counter_dict.keys() if str_counter_dict[key] &gt; 1] return list_of_duplicate_str # Testing the function print(get_duplicate_str(["boy", "boy", "girl", "hand", "foot", "foot"])) # Output ['boy', 'foot'] </code></pre>
python|python-3.8
0
1,909,438
58,091,798
python: explain if list:
<p>I have been given a script that converts a multiline sequence to a single line sequence, however I don't understand how it works.</p> <p>I'm new to python and have never used blocks before.</p> <p>Here is the code:</p> <pre><code>with open("pandas.fas") as f_input, open("singleline.fas", 'w') as f_output: block = [] for line in f_input: if line.startswith('&gt;'): if block: f_output.write(''.join(block) + '\n') block = [] f_output.write(line) else: block.append(line.strip()) if block: f_output.write(''.join(block) + '\n') </code></pre> <p>Please can someone help me understand?</p> <p>I am especially confused about this segment:</p> <pre><code> if block: f_output.write(''.join(block) + '\n') block = [] </code></pre>
<p>In your code, <code>block</code> is a Python list. It is similar to an array in C. <code>block = []</code> defines an empty list. The condition <code>if block:</code> checks whether <code>block</code> is empty or not: an empty list is falsy and a non-empty list is truthy. If <code>block</code> is not empty, the code joins the contents of <code>block</code> and writes them to the output file (a small illustration follows below). </p>
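<p>A tiny sketch of that truthiness check on its own, independent of the file handling above:</p>
<pre><code>block = []
if block:
    print("non-empty")   # not reached: an empty list is falsy
else:
    print("empty")       # this is printed

block.append("ACGT")     # now the list holds one sequence line
if block:
    print("non-empty")   # printed this time
</code></pre>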
python
1
1,909,439
58,017,126
Pandas Groupby using information from other dataframe
<p>I have the following two dataframes:</p> <p>Table 1:</p> <pre><code>Key1 Key2 Value1 Other Data 1 2 5 foo 3 1 6 bar </code></pre> <p>and</p> <p>Table 2:</p> <pre><code>Key1 Key2 Property1 Property2 1 2 5 7 3 1 6 8 1 3 7 7 2 1 4 4 2 1 6 6 2 1 8 5 </code></pre> <p>In Table 1 the order of the keys doesn't matter. Table 1 has no duplicates. In Table 2 the order of the keys does matter. Table 2 has duplicates. I'm quite new to pandas, but as I understand the concept of groupby this should be the perfect tool to go. I hope that I explained my problem well enough.</p> <p>Edit: Regarding the comments I would like to split the problem.</p> <p>First Step: Merge Table 1 and Table 2. I think this has to be hierachical.</p> <pre><code>Key 1 Key 2 Value 1 Other Data Key1 Key2 Property1 Propterty2 1 2 5 foo 1 2 5 7 2 1 4 4 2 1 6 6 2 1 8 5 3 1 6 bar 3 1 6 8 1 3 7 7 </code></pre> <p>Step2: Filter values based on Value 1. If <strong>Property 1=Value 1 +- 1</strong> then hold the entry, if not delete it. In the example here this results in:</p> <pre><code>Key 1 Key 2 Value 1 Other Data Key1 Key2 Property1 Propterty2 1 2 5 foo 1 2 5 7 2 1 4 4 2 1 6 6 3 1 6 bar 3 1 6 8 1 3 7 7 </code></pre> <p>Step3: Reshape and build mean: Build the mean of all pairs remaining (here mean of the two entrys for (2,1)). Then reshape the dataframe.</p> <pre><code>Key 1 Key 2 Value 1 Other Data Property1(i,j) Propterty2(i,j) Property1(j,i) Propterty2(j,i) 1 2 5 foo 5 7 5 5 3 1 6 bar 6 8 7 7 </code></pre> <p>Step4: Handle missing data. If I would only have data for (1,3) in Table 2 but no for (3,1) then he should fill this values up with NaN in Step3. In the last step I would like to delete all rows with a NaN then.</p>
<p>Try merge twice:</p> <pre><code>new_df = df2.groupby(['Key1','Key2'], as_index=False).mean() (df1.merge(new_df, left_on=['Key1','Key2'], right_on=['Key2','Key1'], suffixes=('', '_add')) .drop(['Key1_add','Key2_add'], axis=1) .merge(new_df, on=['Key1','Key2'], suffixes=['(i,j)','(j,i)'] ) ) </code></pre> <p>output:</p> <pre><code> Key1 Key2 Value1 OtherData Property1(i,j) Property2(i,j) \ 0 1 2 5 foo 5.5 5.0 1 3 1 6 bar 7.0 7.0 Property1(j,i) Property2(j,i) 0 5.0 7.0 1 6.0 8.0 </code></pre>
python|python-3.x|pandas|merge|pandas-groupby
0
1,909,440
56,320,689
How to increment string within a certain range
<p>I have a string like '"F01" le code le "F16"'. I want to parse this string to get a new string that contains "F01", "F02", "F03" ... up to "F16". </p> <p>I tried to parse the string by quotes, hoping to loop from the first code to the last code, and then tried to increment the code via chr() and ord(), but I can't seem to figure it out. </p> <pre class="lang-py prettyprint-override"><code>import re s = '"F01" le code le "F16"' s_q = re.findall('"([^"]*)"', s) first_code = s_q[0] last_code = s_q[1] ch_for_increment = s_q[0][-1:] ch_for_the_rest = s_q[0][:-1] print(ch_for_the_rest + chr(ord(ch) + 1)) </code></pre>
<p>You are almost there. After extracting the start and end of range from <code>s_q</code>, you can use <a href="https://docs.python.org/3/library/stdtypes.html#range" rel="nofollow noreferrer">range</a> to generate your list like so.</p> <pre><code>import re s = '"F01" le code le "F16"' s_q = re.findall('"([^"]*)"', s) #['F01', 'F16'] #Extract the first and last index of range from list first_code = int(s_q[0][1:]) #1 last_code = int(s_q[1][1:]) #16 #Create the result list li = ['F{:02d}'.format(item) for item in range(first_code, last_code+1)] #Get the final string with quotes result = '"{}"'.format('" "'.join(li)) print(result) </code></pre> <p>The output will be</p> <pre><code>"F01" "F02" "F03" "F04" "F05" "F06" "F07" "F08" "F09" "F10" "F11" "F12" "F13" "F14" "F15" "F16" </code></pre>
python
2
1,909,441
18,723,680
Load specific settings when user try to acces to an app (multi app django)
<p>I have a django project with several app. Each is linked to a different DB and may have different settings. I use mod_wsgi with apache. Actually, I'm following one of the methodology found <a href="https://code.djangoproject.com/wiki/SplitSettings" rel="nofollow">here</a> : read a .ini file.</p> <p><strong>How can I load the settings.ini specific to my app when a user try to access it ?</strong></p> <p>I can load/read a specific settings.ini in my settings.py and it works fine. I want to load/read only the settings.ini of the app requested by the user. </p> <p><strong>How to know the app to give the right path to load ? (via URL ?)</strong></p> <p>Please see my simplified project tree :</p> <pre><code>β”œβ”€β”€ django.wsgi β”œβ”€β”€ manage.py β”œβ”€β”€ global β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ settings.py β”‚ β”œβ”€β”€ wsgi.py β”œβ”€β”€ my_app1 β”‚Β Β  β”œβ”€β”€ admin.py β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  β”œβ”€β”€ models.py β”‚ β”œβ”€β”€ settings.ini β”‚ └── ... β”œβ”€β”€ my_app2 β”‚Β Β  β”œβ”€β”€ admin.py β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  β”œβ”€β”€ models.py β”‚Β Β  β”œβ”€β”€ settings.ini β”‚ └── ... </code></pre> <p>My specific settings file is read at the top of my setting.py : </p> <pre><code>import configparser config = configparser.ConfigParser() config.read('/path/to/my/settings.ini') DEBUG = config['debug']['DEBUG'] TEMPLATE_DEBUG = config['debug']['TEMPLATE_DEBUG'] ... </code></pre> <p>and my settings.ini looks like that : </p> <pre><code>[debug] DEBUG = True TEMPLATE_DEBUG = DEBUG ... </code></pre> <p>Any advice ? </p>
<p>What you're trying to do sounds a little... off. I agree with Daniel Roseman. If you're looking for advise, listen to him. But if you're bent on trying to make this work, here's an idea: you MIGHT be able to do this from a view with a mixin that calls <a href="https://docs.djangoproject.com/en/dev/topics/settings/#using-settings-without-setting-django-settings-module" rel="nofollow">settings.configure()</a>:</p> <pre><code>from os from django.conf import settings import configparser class DynamicSettingsViewMixin(object): def dispatch(self, request, *args, **kwargs): config = configparser.ConfigParser() config.read(os.path.join(os.path.dirname(__file__), 'settings.ini')) settings.configure(**config['debug']) return super(DynamicSettingsViewMixin, self).dispatch(request, *args, **kwargs) class MyAwesomeAppView(DynamicSettingsViewMixin, TemplateView): pass </code></pre> <p>I haven't tried this... since it's crazy.</p>
python|django|settings
1
1,909,442
71,649,926
re.fullmatch sql python telegram
<p>I need an exact comparison with records from a database. I have a string data from db:</p> <pre><code>print(str(db.get_nicknames2(message.from_user.id))) //[('123',), ('lopr',), ('hello',), ('imfamous2',), ('guy',)] </code></pre> <p>Does not work:</p> <pre><code>message = 'famous2' info = str(db.get_nicknames2(message.from_user.id)) re.fullmatch(message,info) //None </code></pre> <p>It works but i need to compare from db:</p> <pre><code>message = 'famous2' info = 'famous2' re.fullmatch(message,info) //re.Match object; span=(0, 9), match='imfamous2 </code></pre>
<p>I don't think you need <code>re</code> for this at all. Just loop over the <code>info</code> list instead of converting it to a string.</p> <pre><code>info = 'famous' data = db.get_nicknames2(message.from_user.id) nickname = None # first look for exact match for (nick,) in data: if info == nick: nickname = nick break else: # not found, look for substring match for (nick,) in data: if info in nick: nickname = nick break </code></pre>
python|python-3.x|sqlite|telegram|py-telegram-bot-api
0
1,909,443
69,600,785
How to call a module inside a package in Python with Flask?
<p><strong>CONTEXT.</strong> Python 3.9.1, Flask 2.0.1 and Windows 10</p> <p><strong>WHAT I WANT TO DO.</strong> Call a module located inside a custom package.</p> <p><strong>TROUBLE.</strong> No matter what I do, the console insists on telling me: <code>ModuleNotFoundError: No module named 'my_package'</code></p> <p><strong>WHAT AM I DOING.</strong></p> <p>I am following the official Flask tutorial <a href="https://flask.palletsprojects.com/en/2.0.x/tutorial/index.html" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/2.0.x/tutorial/index.html</a>. Therefore, the structure of the project is basically this (with slight changes from the tutorial):</p> <pre><code>/flask-tutorial β”œβ”€β”€ flaskr/ β”‚ β”œβ”€β”€ __init__.py β”‚ β”œβ”€β”€ db.py β”‚ β”œβ”€β”€ schema.sql β”‚ β”œβ”€β”€ templates/ β”‚ β”œβ”€β”€ static/ β”‚ └── my_package/ β”‚ β”œβ”€β”€ __init__.py (this file is empty) β”‚ β”œβ”€β”€ auth.py β”‚ β”œβ”€β”€ backend.py β”‚ β”œβ”€β”€ blog.py β”‚ └── user.py β”œβ”€β”€ venv/ β”œβ”€β”€ .env β”œβ”€β”€ .flaskenv β”œβ”€β”€ setup.py └── MANIFEST.in </code></pre> <p>Inside the file /flask-tutorial/flaskr/<strong>init</strong>.py I try to call the modules that contain my blueprints:</p> <pre><code>from my_package import backend from my_package import auth from my_package import user from my_package import blog backend.bp.register_blueprint(auth.bp) backend.bp.register_blueprint(user.bp) backend.bp.register_blueprint(blog.bp) app.register_blueprint(backend.bp) </code></pre> <p>When I running the application (with the <code>flask run</code> command), the console tells me <code>ModuleNotFoundError: No module named 'my_package'</code>, indicating that the problem is in the line where it is from my_package import backend.</p> <p><strong>HOW I TRIED TO SOLVE THE PROBLEM (unsuccessfully).</strong></p> <p>First. I have created an empty file called <strong>init</strong>.py inside 'my_package'</p> <p>Second. Inside the file /flask-tutorial/flaskr/<strong>init</strong>.py, before the problematic <em>FROM</em>, I have put:</p> <pre><code>import os.path import sys parent = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) sys.path.insert(0, parent) </code></pre> <p>On the internet there are several posts with this same problem. Many of them mention that the solution is to put a <strong>init</strong>.py file inside the package (I already did this and it does not work for me). Also, unfortunately the structure of the projects in question is different from mine.</p> <p>Any ideas how to fix this?</p>
<p>Change your code to</p> <pre class="lang-py prettyprint-override"><code>from .my_package import backend from .my_package import auth from .my_package import user from .my_package import blog backend.bp.register_blueprint(auth.bp) backend.bp.register_blueprint(user.bp) backend.bp.register_blueprint(blog.bp) app.register_blueprint(backend.bp) </code></pre> <p>Just add a single dot <code>.</code> before the package name to import the package in the same parent package. In your case, <code>__init__.py</code> and <code>my_package</code> have a common <code>flaskr</code> as their parent package.</p>
python|python-3.x|flask
0
1,909,444
55,217,884
How to create a python library which can be used as shell command after installation?
<p>Some python packages such as <code>flask</code> can be used as shell scripts after installation via <code>pip install</code>. My question is: how do I create them?</p> <p>A minimal package can be written as below; where should I add the code?</p> <pre><code>. β”œβ”€β”€ library_name β”‚   β”œβ”€β”€ __init__.py β”‚   └── Foo.py └── setup.py </code></pre> <p>Thank you!</p>
<p>Use the entrypoints in your setup.py</p> <pre><code>from setuptools import setup, find_packages setup( name='Foo', version='1.0', packages=find_packages(), url='https://github.com/Bar/Foo.git', install_requires=[ ], license='', classifiers=[ 'Development Status :: 3 - Alpha', ], keywords='foo bar', author='Mr Foo', entry_points={ 'console_scripts': [ 'foo = library_name.Foo:main', ], }, ) </code></pre> <p>This way when you python setup.py install you can call foo from shell and it will execute main in Foo.py</p> <p>Note that you need to change library_name, foo, Foo and main in case they have different names in your code.</p>
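<p>For completeness, a minimal sketch of what <code>library_name/Foo.py</code> could contain so that the <code>foo = library_name.Foo:main</code> entry point has something to call (the body is only an example):</p>
<pre><code># library_name/Foo.py
import sys

def main():
    # whatever the command should do; here it just echoes its arguments
    print("foo called with:", sys.argv[1:])
    return 0

if __name__ == "__main__":
    sys.exit(main())
</code></pre>
<p>After <code>pip install .</code> (or <code>python setup.py install</code>) the console script wrapper simply imports this module and calls <code>main()</code>.</p>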
python
2
1,909,445
55,577,396
Structuring python code to run a message through subprocess.Popen
<p>I am in a process of building a simple remote shell tool to communicate with Windows 10. Server sends a "message" through its own shell to the client who runs the message. I need this received message to be run by other process other that default cmd (shell=True) - a specified app.exe. Here is the code that runs on the client: 1)</p> <pre><code>def work( storage, message ) : import subprocess process = subprocess.Popen([message], stdout=subprocess.PIPE, stderr=None, shell=True) #Launch the shell command: output = process.communicate() print output[0] </code></pre> <p>I tried including "app.exe" or "cmd" to execute the message but with that I get error: TypeError: bufsize must be an integer. </p> <p>I have also tried pinpointing the issue locally and I can run: 2)</p> <pre><code>import subprocess import sys subprocess.Popen(["C:\\Users\\User\\Desktop\\app.exe", "-switch"] + sys.argv[1:], shell=False) </code></pre> <p>and pass arguments from a command terminal and it works as it should. Now I am trying to apply the same logic to a remote execution with my program and use either solution 1 or 2.</p> <p>Update: 3) Trying to implement what I did locally to a remote solution: </p> <pre><code>def work( storage, message ) : import subprocess import sys process = subprocess.Popen(["C:\\Users\\User\\Desktop\\app.exe", "-switch"] + sys.argv[1:], shell=False) #Launch the shell command: output = process.communicate() print output[0] </code></pre> <p>I tried replacing sys.argv[1:] with message but I get: TypeError: can only concatenate list (not "str") to list</p>
<p><code>shell=True</code> doesn't mean the first argument to <code>Popen</code> is a list of arguments to the shell; it just means the first argument is <em>processed</em> by the shell, rather than being arguments to whatever system call your system would use to execute a new process.</p> <p>In this case, you appear to want to run <code>app.exe</code> with a given argument; that's simply</p> <pre><code>cmd = r"C:\Users\User\Desktop\app.exe" subprocess.Popen([cmd, "-switch", message], stdout=subprocess.PIPE) </code></pre>
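<p>Folded back into the <code>work</code> function from the question, that might look roughly like this (a sketch - the path and the <code>-switch</code> flag are taken from the question as-is):</p>
<pre><code>import subprocess

def work(storage, message):
    cmd = r"C:\Users\User\Desktop\app.exe"
    process = subprocess.Popen([cmd, "-switch", message],
                               stdout=subprocess.PIPE, shell=False)
    output, _ = process.communicate()
    print(output)
</code></pre>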
python|subprocess|popen
0
1,909,446
57,598,987
Why doesn't this code produce shapes with random colors?
<p>I'm working from the book "Program Arcade Games With Python And Pygame" and working through the 'lab' at the end of Chapter 12: Introduction to Classes.</p> <p>This code I've written for it randomises the coordinates size and movement direction for each shape created in 'my_list' by calling its constructor but not the colour, all the shapes created have the same colour, why is this?</p> <pre><code> import pygame import random # Define some colors BLACK = (0, 0, 0) WHITE = (255, 255, 255) GREEN = (0, 255, 0) RED = (255, 0, 0) class Rectangle(): x = 0 y = 0 width = 10 height = 10 change_x = 2 change_y = 2 color = [0, 0, 0] def __init__(self): self.x = random.randrange(0, 700) self.y = random.randrange(0, 500) self.change_x = random.randrange(-3., 3) self.change_y = random.randrange(-3., 3) self.width = random.randrange(20, 70) self.height = random.randrange(20, 70) for i in range(3): self.color[i] = random.randrange(0, 256) def draw(self, screen): pygame.draw.rect(screen, self.color, [self.x, self.y, self.width, self.height], 0) def move(self): if self.x &lt; 0: self.change_x *= -1 if self.x &gt; 700-self.width: self.change_x *= -1 if self.y &lt; 0: self.change_y *= -1 if self.y &gt; 500-self.height: self.change_y *= -1 self.x += self.change_x self.y += self.change_y class Ellipse(Rectangle): def draw(self, screen): pygame.draw.ellipse(screen, self.color, [self.x, self.y, self.width, self.height], 0) pygame.init() # Set the width and height of the screen [width, height] size = (700, 500) screen = pygame.display.set_mode(size) pygame.display.set_caption("My Game") # Loop until the user clicks the close button. done = False # Used to manage how fast the screen updates clock = pygame.time.Clock() my_list = [] for i in range(10): my_list.append(Rectangle()) for i in range(10): my_list.append(Ellipse()) # -------- Main Program Loop ----------- while not done: # --- Main event loop for event in pygame.event.get(): if event.type == pygame.QUIT: done = True # --- Game logic should go here # --- Screen-clearing code goes here # Here, we clear the screen to white. Don't put other drawing commands # above this, or they will be erased with this command. # If you want a background image, replace this clear with blit'ing the # background image. screen.fill(WHITE) # --- Drawing code should go here for shape in my_list: shape.draw(screen) shape.move() # --- Go ahead and update the screen with what we've drawn. pygame.display.flip() # --- Limit to 60 frames per second clock.tick(60) # Close the window and quit. pygame.quit()``` </code></pre>
<p>The instance attribute <code>self.color</code> is never created. The existing variable <code>color</code> is a class attribute. Read about the difference of <a href="https://docs.python.org/3/tutorial/classes.html#class-and-instance-variables" rel="noreferrer">Class and Instance Variables</a>. A class attribute exists only once and (of course) has the same value in each instance when it is read. An instance variable exists per instance of the class and can have a different value in each instance. </p> <p>Create the list of color channels by:</p> <pre class="lang-py prettyprint-override"><code>class Rectangle(): def __init__(self): # [...] self.color = [0, 0, 0] for i in range(3): self.color[i] = random.randrange(0, 256) </code></pre> <p>respectively</p> <pre class="lang-py prettyprint-override"><code>class Rectangle(): def __init__(self): # [...] self.color = [random.randrange(0, 256) for _ in range(3)] </code></pre>
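<p>A stripped-down illustration of the difference, outside of pygame, assuming nothing beyond the standard library:</p>
<pre><code>import random

class Shared:
    color = [0, 0, 0]                 # class attribute: one list shared by all instances
    def __init__(self):
        for i in range(3):
            self.color[i] = random.randrange(0, 256)   # mutates the shared list

a, b = Shared(), Shared()
print(a.color is b.color)             # True - both names point at the same list

class Own:
    def __init__(self):
        self.color = [random.randrange(0, 256) for _ in range(3)]   # per-instance list

c, d = Own(), Own()
print(c.color is d.color)             # False - each instance has its own list
</code></pre>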
python|python-3.x|pygame
5
1,909,447
57,331,130
Concatenated Xpath return empty item
<p>I have a problem with Scrapy, the following spider return an empty item after calling <code>scrapy crawl panini</code> , the code of the parse of the spider <code>name</code> is:</p> <pre><code>class PaniniSpider(scrapy.Spider): name = "panini" start_url = ["http://comics.panini.it/store/pub_ita_it/magazines.html"] # products-list def parse(self, response): # Get all the &lt;a&gt; tags item = ComicscraperItem() item['title'] = response.xpath('//*[@id="products-list"]/div/div[2]/h3/a/text()').extract() item['link'] = response.xpath('//*[@id="products-list"]/div/div[2]/h3/a/@href').extract() yield item </code></pre> <p>This is what the crawl return: </p> <pre><code>2019-08-03 21:10:08 [scrapy.middleware] INFO: Enabled item pipelines: [] 2019-08-03 21:10:08 [scrapy.core.engine] INFO: Spider opened 2019-08-03 21:10:08 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2019-08-03 21:10:08 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023 2019-08-03 21:10:08 [scrapy.core.engine] INFO: Closing spider (finished) 2019-08-03 21:10:08 [scrapy.statscollectors] INFO: Dumping Scrapy stats: {'elapsed_time_seconds': 0.010107, 'finish_reason': 'finished', 'finish_time': datetime.datetime(2019, 8, 3, 19, 10, 8, 112158), 'log_count/INFO': 10, 'start_time': datetime.datetime(2019, 8, 3, 19, 10, 8, 102051)} 2019-08-03 21:10:08 [scrapy.core.engine] INFO: Spider closed (finished) </code></pre> <p>If i wrote in terminal <code>response.xpath('//*[@id="products-list"]/div/div[2]/h3/a/text()').extract()</code> after has load the shell with the choosen site it return the correct result!!</p> <p>I think that the problem is in concatenated xpath but i dont' know where!</p>
<p>Try to use attributes available such as <code>class</code> or <code>id</code> while scraping, it will make your life easier. </p> <p>Try with below tested code:</p> <pre><code>for sel in response.xpath("//div[@class='list-group']//h3/a"): print(sel.xpath('./text()').extract_first().strip('')) print(sel.xpath('./@href').extract_first()) </code></pre> <p>Edit: A better version of above code:</p> <pre><code>for sel in response.xpath("//h3[@class='product-name']/a"): print(sel.xpath('./@title').extract_first()) print(sel.xpath('./@href').extract_first()) </code></pre>
python|scrapy
0
1,909,448
57,664,964
How to use tkinter GUI to select function argument and file output path in python?
<p>I currently have a python file that I have to run for different excel files on a daily basis.</p> <p>The steps are :</p> <ol> <li>Open .py file </li> <li>change directory of excel file</li> <li>run python file </li> <li>write to .xlsx</li> </ol> <p>This takes in a excel file as a pandas dataframe does some manipulation and other things and spits out an excel file.</p> <p>The problem is that I have to change the directory each time manually in the code.</p> <p>I would rather build a nice GUI for me to pick the source file that I want to manipulate, pick the output directory and then click begin to start the .py script.</p> <p>My current .py file is not written as a function but it is just a series of steps on some data so I could easy write it as a function like so:</p> <pre><code>def data_automation(my_excel_file): #do some stuff pd.to_excel(output directory) </code></pre> <p>I currently have this:</p> <pre><code>import tkinter.filedialog as filedialog import tkinter as tk master = tk.Tk() def input(): input_path = tk.filedialog.askopenfilename() input_entry.delete(1, tk.END) # Remove current text in entry input_entry.insert(0, input_path) # Insert the 'path' def output(): path = tk.filedialog.askopenfilename() input_entry.delete(1, tk.END) # Remove current text in entry input_entry.insert(0, path) # Insert the 'path' top_frame = tk.Frame(master) bottom_frame = tk.Frame(master) line = tk.Frame(master, height=1, width=400, bg="grey80", relief='groove') input_path = tk.Label(top_frame, text="Input File Path:") input_entry = tk.Entry(top_frame, text="", width=40) browse1 = tk.Button(top_frame, text="Browse", command=input) output_path = tk.Label(bottom_frame, text="Output File Path:") output_entry = tk.Entry(bottom_frame, text="", width=40) browse2 = tk.Button(bottom_frame, text="Browse", command=output) begin_button = tk.Button(bottom_frame, text='Begin!') top_frame.pack(side=tk.TOP) line.pack(pady=10) bottom_frame.pack(side=tk.BOTTOM) input_path.pack(pady=5) input_entry.pack(pady=5) browse1.pack(pady=5) output_path.pack(pady=5) output_entry.pack(pady=5) browse2.pack(pady=5) begin_button.pack(pady=20, fill=tk.X) master.mainloop() </code></pre> <p>Which generates this:</p> <p><img src="https://i.stack.imgur.com/MCdk8.png" alt="tk_output"></p> <p>So what I want to do is assign a function to the begin button which I can do with command=function easily.</p> <p>What I am struggling to do is:</p> <ol> <li><p>Given an input from the user, take that input and use the the file path as a function argument.</p></li> <li><p>Be able to select an output destintion (currently I can only select a file not a destination) and then use that destination path to write my new excel file at the end of my function.</p></li> </ol> <p>Appreciate any input!</p>
<ol> <li><p>Connect a function to your 'Begin' button that gets the content of both your entries and runs <code>data_automation()</code>. Something like</p> <pre><code>def begin(): my_excel_file = input_entry.get() output_directory = output_entry.get() data_automation(my_excel_file, output_directory) </code></pre> <p>I have added the <code>output_directory</code> argument to your <code>data_automation</code> function.</p></li> <li><p>If you use <code>filedialog.askdirectory()</code> instead of <code>filedialog.askopenfilename()</code>, you will be able to pick a directory instead of a file. By the way there is a typo in <code>output()</code>, I think you want to insert the result in <code>output_entry</code>, not <code>input_entry</code>. </p> <pre><code>def output(): path = tk.filedialog.askdirectory() output_entry.delete(1, tk.END) # Remove current text in entry output_entry.insert(0, path) # Insert the 'path' </code></pre></li> </ol>
python|python-3.x|user-interface|tkinter
1
1,909,449
57,334,245
Can't I use the if statement with math operations?
<p>I am writing some code in order to test if a year is a leap year or not.</p> <p>So I wrote:</p> <pre><code>year = input("please enter a year: ") if (year % 4) == 0: print(f"{year} is a leap year.") else: print(f"{year} is a nonleap year.") </code></pre> <p>And the error reported is:</p> <pre><code> if (year % 4) == 0: TypeError: not all arguments converted during string formatting </code></pre>
<p>As <code>year</code> is a string, <code>year % 4</code> tries to run a string formatting operation. Change to</p> <p><code>if (int(year) % 4) == 0:</code></p>
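<p>Put back into the script from the question, the whole thing would look something like this (note this keeps the question's simple divisible-by-4 rule and does not attempt the full 100/400-year leap rules):</p>
<pre><code>year = input("please enter a year: ")

if int(year) % 4 == 0:
    print(f"{year} is a leap year.")
else:
    print(f"{year} is a nonleap year.")
</code></pre>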
python|python-3.x|string.format|f-string
0
1,909,450
57,576,868
How do I get the name of the highest value in a group in Pandas?
<p>I have the following dataframe:</p> <pre><code> Country Continent Population --- ------- ------------- ------------ 0 United States North America 329,451,665 1 Canada North America 37,602,103 2 Brazil South America 210,147,125 3 Argentina South America 43,847,430 </code></pre> <p>I want to group by the continent, and get the name of the country with the highest population in that continent, so basically I want my result to look as follows:</p> <pre><code>Continent Country ---------- ------------- North America United States South America Brazil </code></pre> <p>How can I do this?</p>
<p>Use <code>idxmax</code> to get index of the max row:</p> <pre><code>df['Population'] = pd.to_numeric(df['Population'].str.replace(',', '')) idx = df.groupby('Continent')['Population'].idxmax() df.loc[idx] </code></pre> <p>Result:</p> <pre><code> Country Continent Population 0 United States North America 329451665 2 Brazil South America 210147125 </code></pre>
python|python-3.x|pandas|python-2.7|pandas-groupby
1
1,909,451
42,137,529
Pandas - find first non-null value in column
<p>If I have a series that has either NULL or some non-null value. How can I find the 1st row where the value is not NULL so I can report back the datatype to the user. If the value is non-null all values are the same datatype in that series.</p> <p>Thanks</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.first_valid_index.html" rel="noreferrer"><code>first_valid_index</code></a> with select by <code>loc</code>:</p> <pre><code>s = pd.Series([np.nan,2,np.nan]) print (s) 0 NaN 1 2.0 2 NaN dtype: float64 print (s.first_valid_index()) 1 print (s.loc[s.first_valid_index()]) 2.0 # If your Series contains ALL NaNs, you'll need to check as follows: s = pd.Series([np.nan, np.nan, np.nan]) idx = s.first_valid_index() # Will return None first_valid_value = s.loc[idx] if idx is not None else None print(first_valid_value) None </code></pre>
python|pandas
53
1,909,452
42,574,611
Does size of training data for an epoch matter in tensorflow?
<p>Assuming we have 500k items worth of training data, does it matter if we train the model one item at a time or 'n' items at a time or all at once?</p> <p>Considering <code>inputTrainingData</code> and <code>outputTrainingData</code> to be <code>[[]]</code> and <code>train_step</code> to be any generic tensorflow training step. </p> <p><strong>Option 1</strong> Train one item at a time -</p> <pre><code>for i in range(len(inputTrainingData)): train_step.run(feed_dict={x: [inputTrainingData[i]], y: [outputTrainingData[i]], keep_prob: .60}, session= sess) </code></pre> <p><strong>Option 2</strong> Train on all at once -</p> <pre><code>train_step.run(feed_dict={x: inputTrainingData, y: outputTrainingData, keep_prob: .60}, session= sess) </code></pre> <p>Is there any difference between options 1 and 2 above as far as the quality of training is concerned? </p>
<p>There is a difference between these options. Normally you would use a batch size and train on, for example, 128 examples per iteration. You could also use a batch size of one, like in the first of your examples. The advantage of this method is that you can output the training performance of the neural network as it learns.</p> <p>If you learn on all the data at once, you will be a little bit faster, but you will only know at the end whether your performance is good.</p> <p>The best way is to pick a batch size and train batch by batch, so you can output the performance after every batch and keep an eye on it (a sketch follows below).</p>
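<p>A rough sketch of the batched variant, keeping the same <code>feed_dict</code> style as the question (the batch size of 128 and <code>num_epochs</code> are only examples, and <code>x</code>, <code>y</code>, <code>keep_prob</code>, <code>train_step</code> and <code>sess</code> are assumed to be the ones already defined in the question):</p>
<pre><code>batch_size = 128
num_epochs = 10
num_examples = len(inputTrainingData)

for epoch in range(num_epochs):
    for start in range(0, num_examples, batch_size):
        end = start + batch_size
        train_step.run(feed_dict={x: inputTrainingData[start:end],
                                  y: outputTrainingData[start:end],
                                  keep_prob: .60},
                       session=sess)
</code></pre>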
python|tensorflow
1
1,909,453
59,131,580
AttributeError: 'DataFrame' object has no attribute 'data'
<p>I keep receiving an error of <code>*AttributeError: 'DataFrame' object has no attribute 'data'*</code> when trying to run the below code. Tried testing with <code>data_sets.head(</code>) and received the error *<code>AttributeError: 'dict' object has no attribute 'head'*</code></p> <pre><code>dataDir = '/content/drive/My Drive/Colab Notebooks/Final/dataQ2/' # Directory with input files trainFile = 'q2train.csv' # Training examples labelFile = 'q2label.csv' # Test label validFile = 'q2valid.csv' # Valid Files data_sets = { 'train' : train, 'label' : label, 'valid' : valid} def get_data(data_set_name, test_prop=0.2, seed=2019): """returns data for training, testing, and data characteristics""" data = data_sets[data_set_name] X, y = data.data, data.target X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_prop, random_state=seed) nF = X.shape[1] # number of features nC = len(np.unique(y)) # number of classes nTrain, nTest = len(y_train), len(y_test) return X_train, X_test, y_train, y_test, nF, nC, nTrain, nTest for name in data_set: X_train, X_test, y_train, y_test, nF, nC, nTrain, nTest = get_data(name) </code></pre> <p>Assistance is appreciated, sorry in advance if this is a dumb question.</p>
<p>I came across the same question. I solved this bug by updating my pandas from 1.0.1 to 1.0.5.</p>
python|pandas|scikit-learn
-1
1,909,454
53,965,954
Sorting dictionary values in a ascending order a python counter
<p>So my question has been downvoted twice because the question is said to be a duplicate : <a href="https://stackoverflow.com/questions/44076269/sort-counter-by-frequency-then-alphabetically-in-python">Sort Counter by frequency, then alphabetically in Python</a></p> <p>The answers here are speaking about ordering the Counter Dictionnary alphabetically. While what I want to order it in a ascending order </p> <p>I want to do a order a list of strings by frequency. I get the frequency of each string in an descending order by default.</p> <p>I did a <code>letter_counts = Counter(list_of_string))</code> . I think I am getting a kind of dictionnary ordered in an descending order. </p> <p>I want to sort them in an ascending order,** but i am not managing it so far. </p> <p>I have read <a href="https://stackoverflow.com/questions/613183/how-do-i-sort-a-dictionary-by-value">How do I sort a dictionary by value?</a>, but cannot really apply it to a descending order. </p> <pre><code>frequency = Counter(list_of_string) print(frequency) </code></pre> <p>The dictionary (is it?) I am getting is the following. As you can see it is already in a descending order</p> <pre><code> Counter({' stan ': 3, ' type ': 3, ' quora ': 3, ' pescaparian': 3, ' python remove even number from a list': 3, ' gmail': 3, ' split words from a string ': 3, ' split python ': 2, ' split ': 2, ' difference entre list et set': 2, ' add a key to a dictionnary python': 1, ' stackoverflowsearch python add lists': 1}) </code></pre>
<p>You better specify what python version you are using, as dictionaries had no order by Python 3.7 (practically by Python 3.6). Since, they are sorted by insertion order. If you are using an older version, maybe OrderedDict will help you. Anyhow, if you want to just print the keys or hold them in another data structure sorted in descending order, this should work:</p> <pre><code>frequency = Counter(list_of_string) l = frequency.keys() print(l) </code></pre> <p>For opposite order:</p> <pre><code>frequency = Counter(list_of_string) l = [k for k in frequency.keys()][::-1] print(l) </code></pre> <p>If you need it to be a dictionary:</p> <pre><code>frequency = Counter(list_of_string) d = dict(frequency.most_common()[::-1]) print(d) </code></pre>
python|sorting
1
1,909,455
53,972,610
Merge nested lists into groups in a list of tuple of lists
<p>I have a list of size 50000. say <code>a</code>. Each element is a tuple, say <code>b=a[0]</code>. Each tuple consists of 2 lists, say <code>c=b[0], d=b[1]</code>. 1st list i.e. <code>c</code> is of length 784 and the second i.e. <code>d</code> is of length 10. From this list, I need to extract the following:<br> Group first 10 elements of list <code>a</code>. From these 10 tuples, extract their first element (<code>c</code>) and put them in a matrix of size <code>784x10</code>. Also extract the second elements of the tuples and put them in another matrix of size <code>10x10</code>. Repeat this for every batch of 10 elements in list <code>a</code>.<br> Is this possible to do in a single line using list comprehension? Or do I have to write multiple for loops? Which method is efficient and best? Note: It is okay if I get as a list or numpy.ndarray matrix.</p> <p><strong>Additional Information</strong>: I'm following <a href="http://neuralnetworksanddeeplearning.com/chap2.html" rel="nofollow noreferrer">this</a> tutorial on neural networks which aims to design a neural network to recognize handwritten digits. MNIST database is used to train the network. The training data is in the above described format. I need to create a matrix of input_images and expected_output for every mini_batch.</p> <p>Here is the code I've tried. I'm getting a list of size 50000. Its not getting split into mini_batches</p> <pre><code>f = gzip.open('mnist.pkl.gz', 'rb') tr_d, va_d, te_d = pickle.load(f, encoding='latin1') f.close() training_inputs = [numpy.reshape(x, (784, 1)) for x in tr_d[0]] training_results = [vectorized_result(y) for y in tr_d[1]] training_data = zip(training_inputs, training_results) # training_data is a list of size 50000 as described above n = len(training_data) # n=50000 mini_batch_size = 10 mini_batch = [x[0] for k in range(0, n, mini_batch_size) for x in training_data[k:k+mini_batch_size]] </code></pre> <p>The <code>mnist.pkl.gz</code> is available <a href="https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/data/mnist.pkl.gz" rel="nofollow noreferrer">here</a></p>
<p>I wrote my answer before you added the source and therefore it is purely based upon the first part where you write it out in words. It is therefore not very fail-safe in terms of changes in input size. If you read further in the book Anders Nielsen actually provides an implementation of his own.</p> <p>My main answer is not a single line answer, as that would obfuscate what it does and I would advise you very much to write complex processes like this out so you have a better understanding of what actually happens. In my code I make a firstMatrix, which contains the c-elements in a matrix, and a secondMatrix, which contains the d-elements. I do this for every batch of 10, but didn't know what you want to do with the matrices afterwards so I just make them for every batch. If you want to group them or something, please say so and I will try to implement it.</p> <pre><code>for batch in np.array_split(a,10): firstMatrix = np.zeros(shape=(784,10)) secondMatrix = np.zeros(shape=(10,10)) for i in range(len(batch)): firstMatrix[:,i] = batch[i][0] secondMatrix[:,i] = batch[i][1] </code></pre> <p>If you really want a one-liner, here is one that makes an array of the firstMatrices and one for the secondMatrices:</p> <pre><code>firstMatrices = [np.array([batch[i][0] for i in range(len(batch))]).T for batch in np.array_split(a,10)] secondMatrices = [np.array([batch[i][1] for i in range(len(batch))]).T for batch in np.array_split(a,10)] </code></pre>
python|python-3.x|list|neural-network|list-comprehension
1
1,909,456
22,781,805
data editor in Python?
<p>I have loaded a <strong>csv file</strong> in <strong>Python</strong>, but I do not know how to look at my data without having to call specific lines/columns. </p> <p>I just want to look at the data as if it was an <strong>excel file</strong>, that is being able to <code>scroll along different rows</code>, <code>change manually some values</code>, etc. </p> <p><code>In R there is the edit function, in Stata there is the data editor.</code> Is there something similar in Python? I use the <strong>canopy distribution</strong>.</p> <p>Thanks!</p>
<p>Do you use a pandas dataframe? It provides some functionality to load / write csvs easily and to display their content, like <code>dataframe.head(10)</code> - which displays the first ten rows. <code>dataframe.describe()</code> will emit useful information about your data.</p> <p>If you want to try out a df you should use the following command before printing the df:</p> <pre><code>import pandas as pd pd.set_option('display.max_columns', None) </code></pre> <p>Otherwise pandas won't print a wide dataframe, but only the columns, which can be quite confusing.</p> <p>Personally, if I have to look at very big dataframes, I tend to export them to csv and look at them in Excel. It's not the best workflow, but the displaying capabilities of python are not the best. Alternatively, you can export a pandas dataframe easily to html, which might be more convenient (a small sketch follows below).</p> <p>Here is my saving function:</p> <pre><code>def save_file(df, file_name): """ saves to a csv file in the german excel format, e.g. semicolon separated :rtype : None :param df: the dataframe to be saved :param file_name: the filename """ assert isinstance(df, pd.DataFrame) df.to_csv(file_name, sep=";") </code></pre> <p>I use this small function because Excel in Germany uses a semicolon (;) as separator. I always forget that when using the default function of pandas and then I have to redo it...</p>
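<p>The HTML export mentioned above can be done along the same lines (just a sketch; it writes a static table that any browser can open):</p>
<pre><code>def save_html(df, file_name):
    """Writes the dataframe as an HTML table for quick visual inspection."""
    with open(file_name, "w") as fh:
        fh.write(df.to_html())
</code></pre>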
python|data-visualization|canopy
3
1,909,457
45,440,255
How can I highlight element on a webpage using Selenium-Python?
<p>I have been handed over a existing selenium framework which uses python for scripting. For debugging(&amp; other) purposes, I would like to highlight the element on which action is being taken currently (input box, link, drop-down etc.)</p> <p>Though I could find solutions to define a function and calling the function wherever I need to highlight the element(as examples given below), what I need is a solution at the framework level.</p> <ol> <li><a href="https://stackoverflow.com/questions/10791866/selenium-webdriver-highlight-element-before-clicking">Selenium webdriver highlight element before clicking</a></li> <li><a href="https://seleniumwithjavapython.wordpress.com/selenium-with-python/intermediate-topics/playing-with-javascript-and-javascript-executor/highlighting-a-web-element-on-webpage/" rel="nofollow noreferrer">https://seleniumwithjavapython.wordpress.com/selenium-with-python/intermediate-topics/playing-with-javascript-and-javascript-executor/highlighting-a-web-element-on-webpage/</a></li> </ol> <p>Is it possible to implement any solution at the framework / script level in Python (or any other language which can be integrated with python scripts), so that I don't have to call the functions explicitly.</p> <p>P.S. I am just beginning to use Python, so excuse me if its a simple/straight forward. Will appreciate if someone can point me to any existing solution or can provide their own solution.</p>
<p>I haven't tried this code, but this should work.</p> <pre><code>import time def highlight(element): """Highlights (blinks) a Selenium Webdriver element""" driver = element._parent def apply_style(s): driver.execute_script("arguments[0].setAttribute('style', arguments[1]);",element, s) original_style = element.get_attribute('style') apply_style("background: yellow; border: 2px solid red;") time.sleep(.3) apply_style(original_style) </code></pre> <p>Hope this helps. Thanks.</p> <p>Source - <a href="https://gist.github.com/dariodiaz/3104601" rel="nofollow noreferrer">https://gist.github.com/dariodiaz/3104601</a></p>
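<p>Usage, at script level, would look roughly like this (a sketch only - the browser, URL and locator are placeholders for whatever your framework already provides):</p>
<pre><code>from selenium import webdriver

driver = webdriver.Firefox()
driver.get("http://example.com")

element = driver.find_element_by_id("some-id")   # hypothetical locator
highlight(element)                               # blink it before acting on it
element.click()
</code></pre>
<p>For the framework-level requirement, the same <code>highlight()</code> call can be placed inside whatever common click/type wrapper functions your framework routes its actions through, so individual test scripts never have to call it explicitly.</p>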
javascript|python|selenium|automation
4
1,909,458
41,524,773
how to import the following function?
<p>Hello I have the following dictionary, with keys and frecuencies: </p> <pre><code>dictFrec = {'22': 21, '25': 9, '47': 21, '1': 22, '28': 20, '21': 12, '10': 136, '12': 106, '17': 20, '19': 39, '33': 89, '31': 40, '48': 52, '30': 37, '37': 18, '41': 114, '36': 49, '42': 30, '7': 22, '8': 29, '18': 22, '4': 18, '14': 49, '38': 16, '34': 37, '6': 11, '2': 19, '44': 16, '35': 69, '26': 52, '39': 30, '27': 16, '40': 24, '0': 31, '3': 21, '32': 71, '5': 17, '23': 27, '24': 36, '20': 26, '46': 19, '11': 28, '29': 50, '13': 19, '9': 101, '49': 44, '15': 23, '43': 17, '45': 37, '16': 72} </code></pre> <p>I order to get the 5 lowest values I designed the following function in a class:</p> <pre><code>import operator class getStrange: def getStrangeD(my_dict): strange=dict(sorted(my_dict.items(), key=operator.itemgetter(1), reverse=True)[:5]) return strange </code></pre> <p>The issue comes when I tried to import it as follows:</p> <pre><code>from tools import getStrange as G newA = G.getStrangeD(dictFrec) </code></pre> <p>I got:</p> <pre><code>Traceback (most recent call last): File "parser.py", line 55, in &lt;module&gt; newA = G.getStrangeD(dictFrec) AttributeError: module 'tools.getStrange' has no attribute 'getStrangeD' </code></pre> <p>So I would like to receive support about this, I am trying to have this function to make more clean my code, However I am not sure how to import this function and where is the issue, thanks for the support, </p> <p>this new class was stored in a file as follows:</p> <pre><code>/tools$ ls getStrange.py __pycache__ </code></pre>
<p>You've got two problems. First, <code>getStrangeD</code> is written as an instance method but doesn't include a <code>self</code> variable. Since it doesn't use instance data, you can define it as a static method instead. Second, since its a member of the class, you need to include the class name when you access it.</p> <p><em>tools/getStrange.py</em></p> <pre><code>import operator class getStrange: @staticmethod def getStrangeD(my_dict): strange=dict(sorted(my_dict.items(), key=operator.itemgetter(1), reverse=True)[:5]) return strange </code></pre> <p>Now to use it</p> <pre><code>&gt;&gt;&gt; from tools import getStrange as G &gt;&gt;&gt; dictFrec = {'22': 21, '25': 9} # .... &gt;&gt;&gt; newA = G.getStrange.getStrangeD(dictFrec) </code></pre>
python-3.x|import
1
1,909,459
57,153,419
Windows-curses install on ubuntu
<p>I am trying to install a package (python-can) by running <code>pip2 install python-can</code> and I get the following errors:</p> <pre><code>Collecting windows-curses (from python-can) ERROR: Could not find a version that satisfies the requirement windows-curses (from python-can) (from versions: none) Error: No matching distribution found for windows-curses (from python-can) </code></pre> <p>Any suggestions? I am on Ubuntu 16.04.</p>
<p><strong>Updated answer:</strong> Added information about installing <strong>python-can</strong> on Windows OS.</p> <p>See <a href="https://python-can.readthedocs.io/en/master/installation.html#windows-dependencies" rel="nofollow noreferrer">https://python-can.readthedocs.io/en/master/installation.html#windows-dependencies</a></p> <p>Please note that because of the IO nature, when using python-can on a Windows OS machine, it will require additional drivers or a back-end engine. First install one of the listed options from the install link. Then try pip installing python-can.</p> <p><strong>Install can with pip:</strong> <code>pip install python-can</code></p> <p>Notice the paragraphs below:<br /> &quot;As most likely you will want to interface with some hardware, you may also have to install platform dependencies. Be sure to check any other specifics for your hardware in CAN Interface Modules.&quot;</p> <blockquote> <p><strong>Windows dependencies:</strong><br /> Kvaser (one of many options)<br /> To install python-can using the Kvaser CANLib SDK as the backend:</p> </blockquote> <blockquote> <p>Install the latest stable release of Python. Install Kvaser’s latest Windows CANLib drivers. Test that Kvaser’s own tools work to ensure the driver is properly installed and that the hardware is working.</p> </blockquote> <p><strong>Original Answer:</strong><br /> (Original answer directly addressed the error message. Provided help to rule out potential error being related to Python's curses library install issue.)</p> <p>If you are using Python 3.6+, curses seem to be a part of the built-in Python for Ubuntu OS so there is nothing to install. You initialize a session to start using it.</p> <pre><code>import curses stdscr = curses.initscr() </code></pre> <p><a href="https://docs.python.org/3/howto/curses.html" rel="nofollow noreferrer">https://docs.python.org/3/howto/curses.html</a></p>
python|python-can
-1
1,909,460
56,958,209
Turning JSON into dataframe with pandas
<p>I'm trying to get a data frame but keep running into various error messages depending on the arguments I specify in read.json after I specify my file.</p> <p>I've run through many of the arguments in the pandas.read_json documentation, but haven't been able to identify a solution.</p> <pre><code>import pandas json_file = "https://gis.fema.gov/arcgis/rest/services/NSS/OpenShelters/MapServer/0/query?where=1%3D1&amp;outFields=*&amp;returnGeometry=false&amp;outSR=4326&amp;f=json" pandas.read_json(json_file) </code></pre> <p>I'm trying to get a data frame but keep running into various error messages depending on the arguments I specify in read.json after I specify my file.</p>
<p>Because the JSON is not directly convertible to <code>DataFrame</code>. <code>read_json</code> works only with a few formats defined by the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html" rel="nofollow noreferrer"><code>orient</code> parameter</a>. Your JSON doesn't follow any of the allowed formats so you need to manipulate the JSON before converting it to a data frame.</p> <p>Let's take a high level look at your JSON:</p> <pre><code>{ "displayFieldName": ..., "fieldAliases": {...}, "fields": {...}, "features": [...] } </code></pre> <p>I'm gonna fathom a guess and assume the <code>features</code> node is what you want. Let's div deeper into <code>features</code>:</p> <pre><code>"features": [ { "attributes": { "OBJECTID": 1, "SHELTER_ID": 223259, ... } }, { "attributes": { "OBJECTID": 2, "SHELTER_ID": 223331, ... } }, ... ] </code></pre> <p><code>features</code> contains a list of objects, each having an <code>attributes</code> node. The data contained in the <code>attributes</code> node is what you actually want.</p> <p>Here's the code</p> <pre><code>import pandas as pd import json from urllib.request import urlopen json_file = "https://gis.fema.gov/arcgis/rest/services/NSS/OpenShelters/MapServer/0/query?where=1%3D1&amp;outFields=*&amp;returnGeometry=false&amp;outSR=4326&amp;f=json" data = urlopen(json_file).read() raw_json = json.loads(data) formatted_json = [feature['attributes'] for feature in raw_json['features']] </code></pre> <p><code>formatted_json</code> is now a list of dictionaries containing the data we are after. It is no longer JSON. To create the data frame:</p> <pre><code>df = pd.DataFrame(formatted_json) </code></pre>
python|json|pandas
1
1,909,461
57,199,231
Printing a string between two characters
<p>I'm writing a python script that will extract specific parts of lines in a text file and print them out. The part of that line is always between two specific characters. For example, I need to print everything between <code>"Ghost: "</code> and <code>"("</code>.</p> <p>I've tried importing <code>re</code> and using it to search a string for characters, but I failed.</p> <p>The file looks something like this:</p> <pre><code>Ghost: john.doe.1 {} ... Ghost: john.walker.4 {} ... Ghost: john.johnny.3 {} ... Ghost: john.craig.6 {} ... ... </code></pre> <p>I'm expecting something like this:</p> <pre><code>john.doe.1 john.walker.4 john.johnny.3 john.craig.6 </code></pre>
<p>Using regex, you can do the following (<code>re.search</code> only returns the first match, so <code>re.finditer</code> is used here to get every occurrence, with <code>text</code> being the content of your file):</p> <pre><code>import re
for m in re.finditer('((?:[a-z][a-z]+))(\\.)((?:[a-z][a-z]+))(\\.)(\\d+)', text):
    print(m.group())
</code></pre> <p>Output:</p> <pre><code>john.doe.1 john.walker.4 john.johnny.3 john.craig.6 </code></pre> <p>Logic:</p> <pre><code>'((?:[a-z][a-z]+))' - looks for a word (a-z) '(\\.)' - looks for a dot '((?:[a-z][a-z]+))' - looks for a word (a-z) '(\\.)' - looks for a dot '(\\d+)' - looks for digits (0-9) </code></pre>
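<p>Since the question anchors on the literal <code>Ghost: </code> prefix, an alternative sketch keys off that marker instead of the name pattern (assuming each entry sits on its own line, as in the sample, and that the file is named something like <code>ghosts.txt</code>):</p>
<pre><code>import re

with open("ghosts.txt") as fh:        # hypothetical file name
    text = fh.read()

for name in re.findall(r'Ghost:\s+(\S+)', text):
    print(name)
</code></pre>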
python|regex
1
1,909,462
25,923,308
Binary operations on Numpy scalars automatically up-casts to float64
<p>I want to do binary operations (like add and multiply) between np.float32 and builtin Python int and float and get a np.float32 as the return type. However, it gets automatically up-casted to a np.float64.</p> <p>Example code:</p> <pre><code>&gt;&gt;&gt; a = np.float32(5) &gt;&gt;&gt; a.dtype dtype('float32') &gt;&gt;&gt; b = a + 2 &gt;&gt;&gt; b.dtype dtype('float64') </code></pre> <p>If I do this with a np.float128, b also becomes a np.float128. This is good, as it thereby preserves precision. However, no up-casting to np.float64 is necessary to preserve precision in my example, but it still occurs. Had I added 2.0 (a Python float (64 bit)) to a instead of 2, the casting would make sense. But even here, I do not want it.</p> <p>So my question is: How can I alter the casting done when applying a binary operator to a np.float32 and a builtin Python int/float? Alternatively, making single precision the standard in all calculations rather than double, would also count as a solution, as I do not ever need double precision. Other people have asked for this, and it seems that no solution has been found.</p> <p>I know about numpy arrays and there dtypes. Here I get the wanted behavior, as an array always preserves its dtype. It is however when I do an operation on a single element of an array that I get the unwanted behavior. I have a vague idea to a solution, involving subclassing np.ndarray (or np.float32) and changing the value of __array_priority__. So far I have not been able to get it working.</p> <p>Why do I care? I am trying to write an n-body code using Numba. This is why I cannot simply do operations on the array as a whole. Changing all np.float64 to np.float32 makes for a speed up of about a factor of 2, which is important. The np.float64-casting behavior serves to ruin this speed up completely, as all operations on my np.float32 array are done in 64-precision and then downcasted to 32-precision.</p>
<p>I'm not sure about the NumPy behavior, or how exactly you're trying to use Numba, but being explicit about the Numba types might help. For example, if you do something like this:</p> <pre><code>@jit def foo(a): return a[0] + 2; a = np.array([3.3], dtype='f4') foo(a) </code></pre> <p>The float32 value in a[0] is promoted to a float64 before the add operation (if you don't mind diving into LLVM IR, you can see this for yourself by running the code using the numba command with the --dump-llvm or --dump-optimized flag: numba --dump-optimized numba_test.py). However, by specifying the function signature, including the return type as float32:</p> <pre><code>@jit('f4(f4[:])') def foo(a): return a[0] + 2; </code></pre> <p>The value in a[0] is not promoted to float64, although the result is cast to a float64 so it can be converted to a Python float object when the function returns to Python land.</p> <p>If you can allocate an array beforehand to hold the result, you can do something like this:</p> <pre><code>@jit def foo(): a = np.arange(1000000, dtype='f4') result = np.zeros(1000000, dtype='f4') for i in range(a.size): result[i] = a[i] + 2 </code></pre> <p>Even though you're doing the looping yourself, the performance of the compiled code should be comparable to a NumPy ufunc, and no casts to float64 should occur (again, this can be verified by looking at the LLVM IR that Numba generates).</p>
python|numpy|numba|single-precision
2
1,909,463
25,752,673
Jython: Parameterize values in quotes
<p>How do I parameterize values in quotes in Jython? This is my calling method:</p> <p><code>BaseSTSSchedulerTask.__init__(self, Test(testId, "Get Service Group by ID"), hostPort, '/SchServices/api/servicegroup/9999', HEADERS)</code></p> <p>I want to replace the value 9999 with a variable which is returned from another method, like id = Data.getID(). </p> <p>I tried doing '/SchServices/api/servicegroup/' + id, but it does not help. Any idea how to handle this?</p>
<p>OK, I just fixed this by doing: <code>'/SchServices/api/servicegroup/' + </code><strong>self.id</strong>. "id" is a built-in function in Jython, so <code>self.id</code> is now an instance variable. </p>
python|jython-2.5
0
1,909,464
44,648,939
Unexpected behaviour when adapting sparse matrix
<p>I have a sparse matrix <code>M</code>and an array <code>a</code> of locations where I want to increase <code>M</code> by one. This array <code>a</code> may contain duplicates and whenever an element is <code>n</code> times in <code>a</code>, I would like to add <code>n</code> to the corresponding position in <code>M</code>. I did this the following way:</p> <pre><code>from scipy import sparse as sp M = sp.csr_matrix((3, 4), dtype=float) M[[0,0,0,0,0], [0,1,0,1,0]] += 1 </code></pre> <p>But when I run this, <code>M[0,0]</code> is only increased by one, is there an easy method to adapt this?</p>
<p>How does MATLAB handle this?</p> <p><code>numpy</code> has a special function to handle this repeated index case, the <code>add.at</code> </p> <p><a href="https://stackoverflow.com/questions/29617292/using-ufunc-at-on-matrix">Using ufunc.at on matrix</a></p> <p>This hasn't been implemented for <code>scipy.sparse</code>.</p> <p>Since <code>sparse</code> sums repeated coordinates when converting from a <code>coo</code> format to <code>csr</code> one, I suspect this problem could be cast in a way that takes advantage of that. In fact the <code>csr</code> matrix has a <code>M.sum_duplicates</code> method.</p> <p>I'd have to play around to work out the details.</p> <hr> <pre><code>In [876]: M = sparse.csr_matrix((3, 4), dtype=float) In [877]: M Out[877]: &lt;3x4 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 0 stored elements in Compressed Sparse Row format&gt; </code></pre> <p>demonstrating the <code>np.add.at</code> action:</p> <pre><code>In [878]: arr = M.A In [879]: arr[[0,0,0,0,0],[0,1,0,1,0]] += 1 In [880]: arr Out[880]: array([[ 1., 1., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]]) In [883]: arr = M.A In [884]: np.add.at(arr,[[0,0,0,0,0],[0,1,0,1,0]],1) In [885]: arr Out[885]: array([[ 3., 2., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]]) </code></pre> <hr> <p>Adding to <code>M</code> produces the same buffered action - with a warning. Changing the sparsity of a matrix is, relatively expensive.</p> <pre><code>In [886]: M[[0,0,0,0,0],[0,1,0,1,0]] += 1 .... SparseEfficiencyWarning) In [887]: M Out[887]: &lt;3x4 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 2 stored elements in Compressed Sparse Row format&gt; In [888]: M.A Out[888]: array([[ 1., 1., 0., 0.], [ 0., 0., 0., 0.], [ 0., 0., 0., 0.]]) </code></pre> <hr> <p>The correct way to do this addition is to make a new sparse matrix with the values that need to be added. We can take advantage of the fact that <code>coo</code> style inputs sum duplicates with converted to <code>csr</code>:</p> <pre><code>In [895]: m = sparse.csr_matrix((np.ones(5,int),([0,0,0,0,0],[0,1,0,1,0])), shape=M.shape) In [896]: m Out[896]: &lt;3x4 sparse matrix of type '&lt;class 'numpy.int32'&gt;' with 2 stored elements in Compressed Sparse Row format&gt; In [897]: m.A Out[897]: array([[3, 2, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]], dtype=int32) </code></pre> <p>now we can add the original and the new one:</p> <pre><code>In [898]: M = sparse.csr_matrix((3, 4), dtype=float) In [899]: M+m Out[899]: &lt;3x4 sparse matrix of type '&lt;class 'numpy.float64'&gt;' with 2 stored elements in Compressed Sparse Row format&gt; </code></pre>
python|scipy|sparse-matrix
0
1,909,465
61,855,760
Can someone tell me what's wrong with my animation code?
<h1>Can someone tell me what's wrong with my animation code?</h1> <p><p>I coded this program that will generate 40 png images, and an animated gif image.</p><p>The animation is supposed to be a sphere that is sliced into 5 segments, and the segments move left and right, but, as you will be able to see, it doesn't really work as I planned</p><p>(I only posted 12 frames of the gif, as 40 frames will be to big to post)</p> <p>Can someone tell me how to correct it?</p></p> <pre><code>import matplotlib.pyplot as plt from numpy import sin,cos,pi,outer,ones,size,linspace from mpl_toolkits.mplot3d import axes3d # Define the x, y, and z lists for the sphere: x = 10*outer(cos(linspace(0, 2*pi)), sin(linspace(0, pi))) y = 10*outer(sin(linspace(0, 2*pi)), sin(linspace(0, pi))) z = 10*outer(ones(size(linspace(0, 2*pi))), cos(linspace(0, pi))) for n in range(40): fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(111, projection='3d') ax.plot_surface(x, y, z, color = ('r')) ax.set_xticks([]) ax.set_yticks([]) ax.set_zticks([]) ax.set_xlim(-10,10) ax.set_ylim(-10,10) ax.set_zlim(-10,10) plt.savefig(str(n)+'.png') #Save the image into a numbered png file print(str(n)+'.png completed.') plt.close() sign = 1 for count in range(5): #Slice the sphere into 5 segments for num in range(len(z)//5*count,len(z)//5*(count+1)): z[num] += sign # Make the segments go positive and negative directions sign *= -1 from PIL import Image images = [] # Open each png file and store in images list for n in range(40): exec('a'+str(n)+'=Image.open("'+str(n)+'.png")') images.append(eval('a'+str(n))) # Create an animated gif file: images[0].save('ball.gif', save_all=True, append_images=images[1:], duration=100, loop=0) </code></pre> <p><a href="https://i.stack.imgur.com/JIAxK.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JIAxK.gif" alt="Part of the resulting animation"></a></p>
<p>The problem is that you are using the same array for all segments, so the plot will stay connected at some vertices. That just how the plotting function works, it does not know that you want to separate the parts.</p> <p>You have to <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.split.html" rel="nofollow noreferrer">split</a> the array before, then modify the parts separately.</p> <p>At best this is done for <code>x</code> and <code>y</code> also, then you don't have to fiddle with the indices and that annoying inner for-loop :)</p> <p>I prepared something for you to start with:</p> <pre><code>import matplotlib.pyplot as plt from numpy import sin,cos,pi,outer,ones,size,linspace from mpl_toolkits.mplot3d import axes3d import numpy as np # Define the x, y, and z lists for the sphere: x = 10*outer(cos(linspace(0, 2*pi)), sin(linspace(0, pi))) y = 10*outer(sin(linspace(0, 2*pi)), sin(linspace(0, pi))) z = 10*outer(ones(size(linspace(0, 2*pi))), cos(linspace(0, pi))) def split_and_array(a): return np.array(np.split(a, 5, axis=0)) x = split_and_array(x) y = split_and_array(y) z = split_and_array(z) for n in range(40): fig = plt.figure(figsize=(10, 10)) ax = fig.add_subplot(111, projection='3d') for k in range(5): ax.plot_surface(x[k], y[k], z[k], color = ('r')) ax.set_xticks([]) ax.set_yticks([]) ax.set_zticks([]) ax.set_xlim(-10,10) ax.set_ylim(-10,10) ax.set_zlim(-10,10) plt.savefig(str(n)+'.png') #Save the image into a numbered png file print(str(n)+'.png completed.') plt.close() sign = 1 for count in range(5): #Slice the sphere into 5 segments z[count] += sign # Make the segments go positive and negative directions sign *= -1 </code></pre>
python|matplotlib|animation|python-imaging-library|animated-gif
2
1,909,466
64,471,019
Why won't Python accept that print is outside of the While loop?
<pre><code>#Tristan Ledlie #CSC1194C1 #10/21/2020 #Initialize variables first = True enter = &quot;&quot; #While the user hasn't ended the program while (enter != &quot;END&quot;): #Take input for enter enter = float(input(&quot;Please input a value(Or END to end the program)... &quot;)) #If this is the first loop: if (first == True): minimum = enter first = False #If enter is lower than the minimum number: elif (enter &lt; minimum): minimum = enter else (): print(minimum) i = input(&quot;Press Enter to close the window...&quot;) </code></pre> <p>The interpreter keeps saying that it expected an indent before print. I have no idea what is wrong. It makes no sense to me.</p>
<p>You're missing a statement after the <code>else</code>; if you don't want anything to happen, use <a href="https://docs.python.org/3/reference/simple_stmts.html#grammar-token-pass-stmt" rel="nofollow noreferrer"><code>pass</code></a>.</p> <pre class="lang-py prettyprint-override"><code>while condition: # ...your code... if condition: # ...your code... elif condition: # ...your code... else: pass print(statement) </code></pre> <p>Also, there should be no parentheses after <code>else</code>.</p>
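<p>For illustration, here is a minimal sketch of how the posted loop could look without the empty <code>else</code> (the variable names are taken from the question; the input is only converted to a number after the END check, since <code>float("END")</code> would otherwise raise an error):</p>
<pre class="lang-py prettyprint-override"><code>first = True
enter = ""

while enter != "END":
    enter = input("Please input a value (or END to end the program)... ")
    if enter == "END":
        break
    value = float(enter)
    # First loop: initialise the minimum
    if first:
        minimum = value
        first = False
    # Otherwise keep the smaller of the two - no else branch is needed
    elif value &lt; minimum:
        minimum = value

print(minimum)
</code></pre>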
python-3.x|syntax-error
1
1,909,467
69,978,188
How to use Reddit API to scrape only videos?
<p>I am trying to use <a href="https://praw.readthedocs.io/en/stable/" rel="nofollow noreferrer">PRAW</a> to fetch posts from a particular subreddit.</p> <p>This is my code:</p> <pre><code>import praw reddit = praw.Reddit(client_id=PERSONAL_USE_SCRIPT, client_secret=SECRET, user_agent=&quot;useragent&quot;) for submission in reddit.subreddit(&quot;funnyvideos&quot;).hot(limit=10): print(submission.title) </code></pre> <p>The output consists of all top 10 hot topics with their title. This includes posts which has both videos and non-videos posts.</p> <p>How can I apply a filter so that it <em><strong>only fetches the posts which has a video in it</strong></em> and exclude all others?</p>
<p>To filter out posts to only those containing videos you could do this:</p> <pre><code>for submission in reddit.subreddit(&quot;funnyvideos&quot;).hot(limit=10): if submission.is_video: # submission.url will give the url to the video print(submission.title) </code></pre>
python|praw
0
1,909,468
55,578,411
clicking through all rows in a table from angular using selenium python web driver
<p>I'm trying to iterate through a certain column of rows on a table/grid of an HTML page with I assume is a dynamic angular element. </p> <p>I have tried to iterate through the rows by creating a list of common xpaths between each row. This only help me achieve 32 rows and not the full amount which is 332. I also tried waiting to see if the webpage would load and then have the full amount of web-elements. Then I tried to run a loop on searching for similar xpaths by scrolling down to the last element in the list. None of these ways helped me to iterate through the rows. Also I will not be able to share the website since the website is private. </p> <h3>python</h3> <pre><code>webelement = [] driver.implicitly_wait(20) ranSleep() for webelement in driver.find_elements_by_xpath('//a[@class="ng-pristine ng-untouched ng-valid ng-binding ng-scope ng-not-empty"]'): driver.implicitly_wait(20) </code></pre> <h3>html for the rows</h3> <pre><code>&lt;a ng-model="row.entity.siteCode" ng-click="grid.appScope.openSite(row.entity)" style="cursor:pointer" class="ng-pristine ng-untouched ng-valid ng-binding ng-scope ng-not-empty"&gt; Albuquerque&amp;nbsp; &lt;span title="Open defect(s) on site" ng-show="row.entity.openDeficiencies" style="background-color:yellow; color:#000;" class="ng-hide"&gt; &amp;nbsp;!&amp;nbsp; &lt;/span&gt; &lt;/a&gt; </code></pre> <p>I expect to be able to click all the links in each row once this is solved</p> <p>Here is the snippet of the html code </p> <pre class="lang-html prettyprint-override"><code>&lt;div id="table1" class="container-fluid"&gt; &lt;div ui-i18n="en" class="grid advanceSearch ui-grid ng-isolate-scope grid1554731599680" id="grid1" ui-grid="gridOptions" ui-grid-expandable="" ui-grid-rowedit="" ui-grid-resize-columns="" ui-grid-selection="" ui-grid-edit="" ui-grid-move-columns=""&gt; &lt;!-- TODO (c0bra): add "scoped" attr here, eventually? --&gt; &lt;style ui-grid-style="" class="ng-binding"&gt; .grid1554731599680 { /* Styles for the grid */ } </code></pre> <p><a href="https://i.stack.imgur.com/QDjIJ.png" rel="nofollow noreferrer">here is how the page looks with the table format</a></p> <p><img src="https://i.stack.imgur.com/QDjIJ.png" alt=""></p> <p><a href="https://i.stack.imgur.com/gyL2x.png" rel="nofollow noreferrer">Here is the rows that I want to click through all of them</a></p> <p><img src="https://i.stack.imgur.com/gyL2x.png" alt=""></p>
<p>You might still be able to increment through each link by appending to the class name, as the class names seem to be fairly unique and use a character from the alphabet as the last part of the name. Perhaps something like the code below could work :) Expanding on the class name's last character, in case there are more than 26 rows, should solve that problem.</p> <p>Steps taken: increment class names &gt; append successes to a list &gt; move to each link in the list &gt; click the link</p> <pre><code>import string from selenium import webdriver from selenium.common.exceptions import NoSuchElementException from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC alpha = string.ascii_uppercase successfulIncs = [] for char in alpha: className = 'ng-pristine.ng-scope.ui-grid-coluiGrid-000' + char try: driver.find_element_by_class_name(className) successfulIncs.append(className) except NoSuchElementException: print("Element not found") ### First move to our element for line in successfulIncs: link = WebDriverWait(driver, 3).until(EC.visibility_of_element_located((By.CLASS_NAME, line))) ActionChains(driver).move_to_element(link).perform() #Click ActionChains(driver).move_to_element(link).click(link).perform() </code></pre>
python|html|angularjs|selenium|selenium-webdriver
0
1,909,469
73,380,452
Save pandas dataframe to json by column value
<p>I have a dataframe with 10000 rows that look like below:</p> <pre><code> import numpy as np df=pd.DataFrame(np.array([['facebook', '15',&quot;women tennis&quot;], ['facebook', '20',&quot;men basketball&quot;], ['facebook', '30','club'], ['apple', &quot;10&quot;,&quot;vice president&quot;], ['apple', &quot;100&quot;,'swimming contest']]),columns=['firm','id','text']) </code></pre> <p><a href="https://i.stack.imgur.com/BRgnS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BRgnS.png" alt="enter image description here" /></a></p> <p>I'd like to save each firm into a separate JSON file. So the json file for Facebook looks like below, with the file name written as &quot;firm.json&quot; (e.g. facebook.json). The same will be for other firms, such as Apple.</p> <p><a href="https://i.stack.imgur.com/egYAD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/egYAD.png" alt="enter image description here" /></a></p> <p>Sorry, I am still a beginner to Pandas, is there a way to do so effectively?</p>
<p>You can do:</p> <pre class="lang-py prettyprint-override"><code>json_cols = df.columns.drop('firm').tolist() json_records = df.groupby('firm')[json_cols].apply( lambda x:x.to_json(orient='records')) </code></pre> <p>Then for 'facebook':</p> <pre><code>facebook_json = json_records['facebook'] '[{&quot;id&quot;:&quot;15&quot;,&quot;text&quot;:&quot;women tennis&quot;}, {&quot;id&quot;:&quot;20&quot;,&quot;text&quot;:&quot;men basketball&quot;}, {&quot;id&quot;:&quot;30&quot;,&quot;text&quot;:&quot;club&quot;}]' </code></pre> <p>for 'apple':</p> <pre><code>apple_json = json_records['apple'] '[{&quot;id&quot;:&quot;10&quot;,&quot;text&quot;:&quot;vice president&quot;},{&quot;id&quot;:&quot;100&quot;,&quot;text&quot;:&quot;swimming contest&quot;}]' </code></pre> <p>Save all at once</p> <pre><code>for col, records in json_records.iteritems(): with open(f&quot;{col}.json&quot;, &quot;w&quot;) as file: file.write(records) </code></pre>
python|json|pandas
3
1,909,470
49,940,900
Install NWRFC with PyRFC in Windows
<p>I am trying to install PyRFC with NWRFC SAP library... After a lot of work, and problems, I install everything, but now when I start python</p> <pre><code>import pyrfc </code></pre> <p>I get </p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Users\MARKOZ~1\Desktop\p36env\lib\site-packages\pyrfc-1.9.7-py3.6-win-amd64.egg\pyrfc\__init__.py", line 22, in &lt;module&gt; from pyrfc._pyrfc import get_nwrfclib_version, Connection, TypeDescription, FunctionDescription, Server ImportError: DLL load failed: The specified module could not be found. </code></pre> <p>In this <a href="https://github.com/SAP/PyRFC/issues/19" rel="nofollow noreferrer">link</a> I found that I should try to launch <code>rfcexec.exe</code> </p> <blockquote> <p>After the SAP NW RFC Library installed on Windows system and lib subfolder added to path, you may start the rfcexec.exe test program, from the bin subfolder, to verify the SAP NW RFC Lib installation.</p> </blockquote> <p>But when I check in this folder I can see <code>rfcexec</code> file but not <code>rfcexec.exe</code> ( and yes, I check if extensions are shown)</p> <p>I also found that this error could be produced by not have this <a href="https://blogs.sap.com/2014/08/01/quick-and-easy-install-of-pyrfc-on-windows/" rel="nofollow noreferrer">library in path</a></p> <blockquote> <p>Obviously put another path in if your path isn’t C:\Python27\nwrfcsdk\lib If you forget to set the Path, then your Python code won’t be able to use the C-connector and you’ll get an error message: β€œImportError: DLL load failed”</p> </blockquote> <p>So: </p> <p>I have mwrfcsdk folder with :</p> <pre><code>-&gt; bin | -&gt; rfcexec (without exe) -&gt; startrfc (without exe) -&gt; demo -&gt; doc -&gt; include | -&gt; sapdecf.h -&gt; sapnwrfc.h -&gt; sapuc.h -&gt; sapuc.h -&gt; sapucx.h -&gt; lib | -&gt; libicudata34.a -&gt; libicudecnumber.so -&gt; libicui18n34.a -&gt; libicuuc34.a -&gt; libsapnwrfc.so -&gt; libsapucum.so -&gt; META-INF -&gt; nwrfc750P_0.manifest </code></pre> <p>I install pyrfc with : </p> <pre><code>easy_install pyrfc-1.9.7-cp36-cp36m-win_amd64.whl </code></pre> <p>What do I miss?</p> <p>EDIT:</p> <hr> <p>I found this page : <a href="https://wiki.scn.sap.com/wiki/display/ABAPConn/Download+and+Installation+of+NW+RFC+SDK" rel="nofollow noreferrer">link</a> where I can see, that when unsar from .sar file I don't get the same print in cmd.... do anyone know why</p> <pre><code>SAPCAR: processing archive NWRFC_48-20004559.SAR (version 2.01) x nwrfcsdk x nwrfcsdk/bin x nwrfcsdk/bin/rfcexec x nwrfcsdk/bin/startrfc x nwrfcsdk/demo x nwrfcsdk/demo/companyClient.c x nwrfcsdk/demo/readme.txt x nwrfcsdk/demo/rfcexec.cpp x nwrfcsdk/demo/rfcexec.h x nwrfcsdk/demo/sapnwrfc.ini x nwrfcsdk/demo/sflightClient.c x nwrfcsdk/demo/sso2sample.c x nwrfcsdk/demo/startrfc.cpp x nwrfcsdk/demo/startrfc.h x nwrfcsdk/demo/stfcDeepTableServer.c x nwrfcsdk/doc x nwrfcsdk/include x nwrfcsdk/include/sapdecf.h x nwrfcsdk/include/sapnwrfc.h x nwrfcsdk/include/sapuc.h x nwrfcsdk/include/sapucx.h x nwrfcsdk/lib x nwrfcsdk/lib/libicudata34.a x nwrfcsdk/lib/libicudecnumber.so x nwrfcsdk/lib/libicui18n34.a x nwrfcsdk/lib/libicuuc34.a x nwrfcsdk/lib/libsapnwrfc.so x nwrfcsdk/lib/libsapucum.so x SIGNATURE.SMF SAPCAR: 29 file(s) extracted </code></pre>
<p>It shows that you are missing the DLL files. You can get the nwrfcsdk at this link: <a href="https://github.com/mikewolfli/sapnwrfcsdk" rel="nofollow noreferrer">SAP nwrfcsdk 7.2</a>. After that you can follow the pyrfc installation:</p> <p>Windows: 1. Create a directory, e.g. c:\nwrfcsdk. 2. Unpack the SAR archive to it, so that e.g. c:\nwrfcsdk\lib exists. 3. Add the lib directory to the library search path on Windows, i.e. extend the PATH environment variable.</p> <p>Then you can use pyrfc:</p> <pre><code>import pyrfc </code></pre>
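<p>As a quick sanity check, a minimal sketch is shown below (the SDK path is an assumption - point it at wherever you unpacked the archive). It extends PATH from within the script before importing and prints the library version via <code>get_nwrfclib_version</code>, which the traceback in the question shows pyrfc exposes:</p>
<pre><code>import os

# Assumed SDK location - change this to your own nwrfcsdk\lib folder
os.environ['PATH'] = r'C:\nwrfcsdk\lib' + os.pathsep + os.environ['PATH']

import pyrfc
print(pyrfc.get_nwrfclib_version())
</code></pre>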
python|dll|sap|importerror|pyrfc
1
1,909,471
65,126,796
Swap values in the columns and rows under specific conditions
<p>I have the following pandas dataframe:</p> <p><a href="https://i.stack.imgur.com/ivFFq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ivFFq.png" alt="enter image description here" /></a></p> <p>and I want to check if the value in the column <code>'A start'</code> is negative. If it is negative, then swap the values in the columns <code>'start'</code> and <code>'end'</code> and in the columns <code>'A start'</code> and <code>'A end'</code> in the row where <code>'A start'</code> has the negative value. So the result should be:</p> <p><a href="https://i.stack.imgur.com/g1qQE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g1qQE.png" alt="enter image description here" /></a></p> <p>I tried to solve it with <code>where</code> but it didn't work.</p> <p>I'm using Python 3.8.</p> <p>Thank you very much for your help.</p>
<p>Here's the simplest method using <code>where</code></p> <pre><code> df = pd.DataFrame() df['start'] = [1,5,7,2] df['end'] = [4,6,8,9,] df['A start'] = [234, -475, -765, 113] df['A end'] = [-654, 312, 987, -553] df[['A start','A end']] = df[['A end','A start']].where(df['A start'] &lt; 0 , df[['A start','A end']].values) df </code></pre> <p>Output: <a href="https://i.stack.imgur.com/6zJtY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6zJtY.png" alt="enter image description here" /></a></p>
python|pandas|dataframe|conditional-statements
1
1,909,472
71,639,380
conditional highlight one column of a pandas dataframe
<p>I have a basic pandas dataframe.</p> <p>I am trying to conditional highlight <strong>just one column</strong>.</p> <p>I have tried to follow the docs: <a href="https://pandas.pydata.org/docs/user_guide/style.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/style.html</a></p> <p>I have got to this point for the dataframe <code>df</code>:</p> <pre class="lang-py prettyprint-override"><code>df.style.applymap(lambda x: &quot;background-color: blue&quot; if x&gt;0 else &quot;background-color: green&quot;) </code></pre> <p>I am able to generate a rule for the <strong>whole table</strong> like this:</p> <p><a href="https://i.stack.imgur.com/ZXebN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZXebN.png" alt="enter image description here" /></a></p> <p>But how can i apply the rule to <strong>just one column</strong> for example the <code>wav</code> column ?</p>
<p>With the <code>subset</code> argument I have now managed to do this:</p> <pre class="lang-py prettyprint-override"><code>def format_df(x): if x&gt;0: return &quot;background-color: blue&quot; else: return &quot;background-color: green&quot; diffs.style.applymap(lambda x: format_df(x), subset=['wav']) </code></pre> <p>Which produces this:</p> <p><a href="https://i.stack.imgur.com/3Ns2B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Ns2B.png" alt="enter image description here" /></a></p> <p>Is there a way to calibrate this so that it produces a heatmap?</p>
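<p>Regarding the heatmap question, one possible direction (just a sketch, assuming the <code>wav</code> column is numeric) is pandas' built-in <code>Styler.background_gradient</code>, which accepts the same <code>subset</code> argument and shades cells by value:</p>
<pre class="lang-py prettyprint-override"><code># Gradient shading of a single column; the colormap name is only an example
diffs.style.background_gradient(cmap='RdYlGn', subset=['wav'])
</code></pre>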
python|pandas|dataframe|formatting
0
1,909,473
10,507,011
django serialize foreign key objects
<ul> <li><a href="https://stackoverflow.com/questions/4015229/serialize-django-model-with-foreign-key-models">Serialize django model with foreign key models</a></li> <li><a href="https://stackoverflow.com/questions/3753359/serializing-foreign-key-objects-in-django">Serializing Foreign Key objects in Django</a></li> <li><a href="https://stackoverflow.com/questions/2832997/get-foreign-key-objects-in-a-single-query-django">get foreign key objects in a single query - Django</a></li> </ul> <p>There are a couple of questions asking for the same thing already, but they are from 2010 and didn't help me much. So I figure there may have been some updates on this front since 2010?</p> <p>On Google I found this <a href="http://mohtasham.info/article/serializing-foreign-key-django-model/" rel="nofollow noreferrer">link</a>, which explains the usage of <a href="http://docs.djangoproject.com/en/dev/topics/serialization/#natural-keys" rel="nofollow noreferrer">natural keys</a>. However, my problem concerns getting foreign objects from <code>django.contrib.auth.models.User</code>, so it doesn't help.</p> <p>My problem is as follows. I want to serialize the QuerySet so that I also get the foreign key objects, because I want to pass it as JSON to the client. The serializer from <code>django.core</code> doesn't do that. So in my case, to simplify the problem, I added another field to the model to contain the value I need from the foreign object. However, that introduces redundant data.</p> <p>My example model contains the <code>username</code>, which I would like to remove if possible and instead get via the foreign key.</p> <pre><code> user = models.ForeignKey(User) username = models.CharField(max_length=100, null=False) </code></pre>
<p>One potential way around this is to construct your own dictionary object based on the returns of a queryset. You'd do something like this:</p> <pre><code>queryset = Model.objects.all() list = [] #create list for row in queryset: #populate list list.append({'title':row.title, 'body': row.body, 'name': row.user.username}) recipe_list_json = json.dumps(list) #dump list as JSON return HttpResponse(recipe_list_json, 'application/javascript') </code></pre> <p>You need to import json for this to work.</p> <pre><code>import json </code></pre>
python|django|django-models|django-serializer
6
1,909,474
10,672,215
parsing ERF capture files in python
<p>What is the best way of parsing ERF (endace) capture files in python? I found a libpcap wrapper for python but I do not think that lipcap supports ERF format.</p> <p>Thanks!</p>
<p>Here's a simplistic ERF record parser which returns a dict per packet (I just hacked it together, so it's not extremely well tested; not all flag fields are decoded, but the ones that aren't, aren't widely applicable):</p> <p>NB:</p> <ul> <li>ERF record types: 1 = HDLC, 2 = Ethernet, 3 = ATM, 4 = Reassembled AAL5, 5-7 multichannel variants with extra headers not processed here.</li> <li><code>rlen</code> can be less than <code>wlen+len(header)</code> if the snaplength is too short.</li> <li>The interstitial loss counter is the number of packets lost between this packet and the previous captured packet as noted by the Dag packet processor when its input queue overflows.</li> <li>Comment out the two scapy lines if you don't want to use scapy.</li> </ul> <p>Code:</p> <pre><code>import struct import scapy.layers.all as sl def erf_records( f ): """ Generator which parses ERF records from file-like ``f`` """ while True: # The ERF header is fixed length 16 bytes hdr = f.read( 16 ) if hdr: rec = {} # The timestamp is in Intel byte-order rec['ts'] = struct.unpack( '&lt;Q', hdr[:8] )[0] # The rest is in network byte-order rec.update( zip( ('type', # ERF record type 'flags', # Raw flags bit field 'rlen', # Length of entire record 'lctr', # Interstitial loss counter 'wlen'), # Length of packet on wire struct.unpack( '&gt;BBHHH', hdr[8:] ) ) ) rec['iface'] = rec['flags'] &amp; 0x03 rec['rx_err'] = rec['flags'] &amp; 0x10 != 0 rec['pkt'] = f.read( rec['rlen'] - 16 ) if rec['type'] == 2: # ERF Ethernet has an extra two bytes of pad between ERF header # and beginning of MAC header so that IP-layer data are DWORD # aligned. From memory, none of the other types have pad. rec['pkt'] = rec['pkt'][2:] rec['pkt'] = sl.Ether( rec['pkt'] ) yield rec else: return </code></pre>
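<p>For completeness, a minimal usage sketch (the file name is just an example): since <code>erf_records</code> is a generator over any binary file-like object, you can simply iterate over it:</p>
<pre><code># Hypothetical capture file; any file opened in binary mode works
with open('capture.erf', 'rb') as f:
    for rec in erf_records(f):
        print(rec['type'], rec['wlen'], rec['ts'])
</code></pre>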
python
3
1,909,475
5,153,531
Django, want to update the form without having to open the form again
<p>I have a form called edit order. From my form there is a bit at the end where it updates the current item status date. It updates, but only if I was to close the form after clicking on the button called <code>save_status</code>. Is there a way where if I was to click on the button called <code>save_status</code>, then could the web page refresh itself without having to close the form? Some sort of auto-reload?</p> <pre><code>if request.method == 'POST': form = forms.ItemForm(request.POST, instance = item) if form.is_valid() and save_item is not None: form.save(True) request.user.message_set.create(message = "Item {0} has been updated successfully.".format(item.tiptop_id)) return HttpResponse("&lt;script language=\"javascript\" type=\"text/javascript\"&gt;window.opener.location = window.opener.location; window.close();&lt;/script&gt;") if request.POST.get('save_status'): item.current_item_status_date = date.today() item.save() </code></pre>
<p>I can't understand where you are having problems. You've updated the form - so now just send the updated form back to the context, rather than redirecting to a new page.</p>
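<p>To illustrate, a rough sketch of what that could look like in the posted view is below (the template name <code>'edit_order.html'</code> is only a placeholder - use whatever template renders this form). The fragment is meant to slot into the existing view in place of the window-closing script:</p>
<pre><code>from django.shortcuts import render

if request.method == 'POST':
    form = forms.ItemForm(request.POST, instance=item)
    if form.is_valid():
        form.save(True)
        request.user.message_set.create(
            message="Item {0} has been updated successfully.".format(item.tiptop_id))
    if request.POST.get('save_status'):
        item.current_item_status_date = date.today()
        item.save()
    # Re-render the same page with the (bound) form instead of returning
    # the window-closing script, so the form never has to be reopened
    return render(request, 'edit_order.html', {'form': form, 'item': item})
</code></pre>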
python|html|django|forms
2
1,909,476
5,331,016
Using self-defined Cython code from other Cython code
<p>I am currently trying to optimize my Python program and got started with Cython in order to reduce the function calling overhead and perhaps later on include optimized C-libraries functions.</p> <p>So I ran into the first problem:</p> <p>I am using composition in my code to create a larger class. So far I have gotten one of my Python classes converted to Cython (which was difficult enough). Here's the code:</p> <pre><code>import numpy as np cimport numpy as np ctypedef np.float64_t dtype_t ctypedef np.complex128_t cplxtype_t ctypedef Py_ssize_t index_t cdef class bendingForcesClass(object): cdef dtype_t bendingRigidity cdef np.ndarray matrixPrefactor cdef np.ndarray bendingForces def __init__(self, dtype_t bendingRigidity, np.ndarray[dtype_t, ndim=2] waveNumbersNorm): self.bendingRigidity = bendingRigidity self.matrixPrefactor = -self.bendingRigidity * waveNumbersNorm ** 2 cpdef np.ndarray calculate(self, np.ndarray membraneHeight): cdef np.ndarray bendingForces bendingForces = self.matrixPrefactor * membraneHeight return bendingForces </code></pre> <p>From my composed Python/Cython class I am calling the class-method <code>calculate</code>, so that in my composed class I have the following (reduced) code:</p> <pre><code>from bendingForcesClass import bendingForcesClass cdef class membraneClass(object): def __init__(self, systemSideLength, lowerCutoffLength, bendingRigidity): self.bendingForces = bendingForcesClass(bendingRigidity, self.waveNumbers.norm) def calculateForces(self, heightR): return self.bendingForces.calculate(heightR) </code></pre> <p>I have found out that <code>cpdef</code> makes the method/functions callable from Python and Cython, which is great and works, as long as I don't try to define the type of <code>self.bendingForces</code> beforehand - which according to <a href="https://github.com/cython/cython/wiki/EarlyBindingForSpeed" rel="noreferrer">the documentation (Early Binding For Speed)</a> is necessary in order to remove the function-calling overhead. I have tried the following, which does not work:</p> <pre><code>from bendingForcesClass import bendingForcesClass from bendingForcesClass cimport bendingForcesClass cdef class membraneClass(object): cdef bendingForcesClass bendingForces def __init__(self, systemSideLength, lowerCutoffLength, bendingRigidity): self.bendingForces = bendingForcesClass(bendingRigidity, self.waveNumbers.norm) def calculateForces(self, heightR): return self.bendingForces.calculate(heightR) </code></pre> <p>With this I get this error, when trying to build <code>membraneClass.pyx</code> with Cython:</p> <pre><code>membraneClass.pyx:18:6: 'bendingForcesClass' is not a type identifier building 'membraneClass' extension </code></pre> <p>Note that the declarations are in two separate files, which makes this more difficult.</p> <p>So I how do I get this done? I would be very thankful if someone could give me a pointer, as I can't find any information about this, besides the link given above.</p> <p>Thanks and best regards!</p>
<p><em>Disclaimer:</em> This question is very old and I am not sure the current solution would work for 2011 Cython code.</p> <p>In order to cimport an extension class (cdef class) from another file you need to provide a <a href="https://cython.readthedocs.io/en/latest/src/tutorial/pxd_files.html" rel="noreferrer">.pxd</a> file (also known as a definitions file) declaring all C classes, attributes and methods. See <a href="https://cython.readthedocs.io/en/latest/src/userguide/sharing_declarations.html#sharing-extension-types" rel="noreferrer">Sharing Extension Types</a> in the documentation for reference.</p> <p>For your example, you would need a file <code>bendingForcesClass.pxd</code>, which declares the class you want to share, as well as all cimports, module level variables, typedefs, etc.:</p> bendingForcesClass <strong>.pxd</strong> <pre><code># cimports cimport numpy as np # typedefy you want to share ctypedef np.float64_t dtype_t ctypedef np.complex128_t cplxtype_t ctypedef Py_ssize_t index_t cdef class bendingForcesClass: # declare C attributes cdef dtype_t bendingRigidity cdef np.ndarray matrixPrefactor cdef np.ndarray bendingForces # declare C functions cpdef np.ndarray calculate(self, np.ndarray membraneHeight) # note that __init__ is missing, it is not a C (cdef) function </code></pre> <p>All imports, variables, and attributes that now are declared in the <code>.pxd</code> file can (and have to be) removed from the <code>.pyx</code> file:</p> bendingForcesClass <strong>.pyx</strong> <pre><code>import numpy as np cdef class bendingForcesClass(object): def __init__(self, dtype_t bendingRigidity, np.ndarray[dtype_t, ndim=2] waveNumbersNorm): self.bendingRigidity = bendingRigidity self.matrixPrefactor = -self.bendingRigidity * waveNumbersNorm ** 2 cpdef np.ndarray calculate(self, np.ndarray membraneHeight): cdef np.ndarray bendingForces bendingForces = self.matrixPrefactor * membraneHeight return bendingForces </code></pre> <p>Now your cdef class <code>bendingForcesClass</code> can be cimported from other Cython modules, making it a valid type identifier, which should solve your problem.</p>
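<p>For building, a minimal <code>setup.py</code> sketch could look like the following (an assumption on my part - adjust the file names to your project layout; <code>numpy.get_include()</code> is needed because the modules cimport numpy):</p>
<pre><code>from setuptools import setup
from Cython.Build import cythonize
import numpy

setup(
    ext_modules=cythonize(['bendingForcesClass.pyx', 'membraneClass.pyx']),
    include_dirs=[numpy.get_include()],
)
</code></pre>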
python|class|methods|composition|cython
7
1,909,477
62,481,515
prevent double booking in python (schedule checker)
<p>This part of the code is supposed to make sure we're not booking events for employees that are already booked in a specified time frame.</p> <p>If Marc is booked from 9AM to 11AM, it shouldn't be possible to book Marc from 9AM to 10AM, or 7AM to 9AM, etc.</p> <p>These are the variables involved:</p> <ol> <li><code>(currentStart, CurrentEnd)</code> = start and end of the new appointment.</li> <li><code>event['start']['dateTime']</code> and <code>event['end']['dateTime']</code> = start and end of the appointment already registered in the calendar.</li> </ol> <p>These are the conditions where a second appointment shouldn't be allowed: </p> <pre><code>if str2datetime(currentStart) &gt;= str2datetime(event['start']['dateTime'].split('+')[0]) and str2datetime(currentEnd) &lt;= str2datetime(event['end']['dateTime'].split('+')[0]): event_done = False break elif str2datetime(currentStart) &lt;= str2datetime(event['start']['dateTime'].split('+')[0]) and str2datetime(currentEnd) &lt;= str2datetime(event['end']['dateTime'].split('+')[0]): event_done = False break elif str2datetime(currentStart) &gt;= str2datetime(event['start']['dateTime'].split('+')[0]) and str2datetime(currentEnd) &gt; str2datetime(event['end']['dateTime'].split('+')[0]): event_done = False break </code></pre>
<p>It's a fairly simple condition to check if two datetime ranges overlap. Given 2 datetime ranges <code>a</code> and <code>b</code> - if the start of <code>a</code> is before the end of <code>b</code> and the end of <code>a</code> is after the start of <code>b</code> then they overlap</p> <pre><code>a.start &lt; b.end and a.end &gt; b.start </code></pre>
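<p>Applied to the variables from the question, a sketch could look like this (assuming the same <code>str2datetime</code> helper and the surrounding loop over <code>event</code> from the posted code):</p>
<pre><code>def overlaps(a_start, a_end, b_start, b_end):
    # Two ranges overlap if each one starts before the other ends
    return a_start &lt; b_end and a_end &gt; b_start

new_start = str2datetime(currentStart)
new_end = str2datetime(currentEnd)
booked_start = str2datetime(event['start']['dateTime'].split('+')[0])
booked_end = str2datetime(event['end']['dateTime'].split('+')[0])

if overlaps(new_start, new_end, booked_start, booked_end):
    event_done = False
</code></pre>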
python
0
1,909,478
62,715,583
Import functions from python classes located in python file into Robot Framework
<p>Consider I have this file hierarchy for my project:</p> <ul> <li>my_project (root folder) <ul> <li><code>Resources</code> <ul> <li><code>python_file.py</code> containing two classes: <code>FirstClass</code> and <code>SecondClass</code></li> </ul> </li> <li><code>Tests</code> <ul> <li><code>test_suite.robot</code></li> </ul> </li> </ul> </li> </ul> <p>Here are the classes how they look like</p> <pre><code> class FirstClass: def my_method: pass class SecondClass: variable = { key1: value1, key2: value } </code></pre> <p>Firstly, I want to use <code>my_method</code> in <code>.robot</code> file by importing whole class <code>FirstClass</code> using relative path.</p> <p>Secondly, I want to use a dictionary from <code>SecondClass</code> by importing this class somehow (also using relative path).</p> <p>Is this possible in RF?</p> <p>Thanks in advance.</p>
<p>While a class might not be directly accessible, its methods can be accessed using objects/instances within Python. To do so, in your Settings section you would have to add the python file -</p> <pre><code>*** Settings *** Library ../Resources/python_file.py </code></pre> <p>Now within your Test Cases you can invoke the function like any other keyword, although it is advisable to prefix it with the file name as well to avoid any ambiguity, as follows (note that the test case needs a name of its own):</p> <pre><code>*** Test Cases *** Example Test ${result}= python_file.foo #Calls the new function foo that calls my_method </code></pre> <p>And within the python_file.py create another method -</p> <pre><code>def foo(): obj = FirstClass() return obj.my_method() </code></pre> <p>Or</p> <pre><code>def bar(): return SecondClass.variable </code></pre>
python|robotframework
0
1,909,479
61,757,017
How to convert datetime format using Pandas
<p>I'm reading a csv file with Pandas. In the file there is a column with dates in <strong>dd/mm/yyyy</strong> format.</p> <pre><code>def load_csv(): mydateparser = lambda x: dt.datetime.strptime(x, "%d/%m/%Y") return pd.read_csv('myfile.csv', delimiter=';', parse_dates=['data'], date_parser=mydateparser) </code></pre> <p>Using this parser the column 'data' type becomes <strong>data datetime64[ns]</strong>, but the format is changed to <strong>yyyy-mm-dd</strong>.</p> <p>I need the column 'data' type to be <strong>datetime64[ns]</strong> and formatted as <strong>dd/mm/yyyy</strong>.</p> <p>How can it be done?</p> <p>Regards, Elio Fernandes</p>
<p>Date is not stored in <code>yyyy-mm-dd</code> format or <code>dd/mm/yyyy</code> format. It's stored in <strong>datetime</strong> format. Python by default chooses to show it in <code>yyyy-mm-dd</code> format. But don't get it wrong, it is still stored in datetime format.<br> You will get a better idea if you add a time to the data and then try to display it.<br> The way to achieve what you wish is by changing the date to a string right before displaying, so that it remains datetime in the dataframe but you get the specified string format when you display it.<br><br> The following uses <strong>Series.dt.strftime()</strong> to change to string. Documentation <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.strftime.html" rel="nofollow noreferrer">here</a>. </p> <pre><code>df['data'].dt.strftime('%d/%m/%Y') </code></pre> <p>or<br><br> The following uses <strong>datetime.strftime()</strong> to change to string. Documentation <a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.strftime" rel="nofollow noreferrer">here</a>.</p> <pre><code>df['data'].apply(lambda x: x.strftime('%d/%m/%Y')) </code></pre> <p>For further reference check out <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">strftime-and-strptime-behavior</a>.</p> <p><br><br> This question will be of great help to understand how datetime is stored in Python: <a href="https://stackoverflow.com/questions/52357460/how-does-python-store-datetime-internally">How does Python store datetime internally?</a></p>
pandas|datetime
0
1,909,480
61,719,810
1D FitzHugh Nagumo model
<p>I am going to solve 1D FitzHugh Nagoma with homogeneous Neumann boundary conditions. How to plot U and V separately. Here a1=2, a0=-0.03 , ep= 0.01 Du= 1, Dv=4 I was confused by plotting the figure </p> <pre><code>U_t=Du U_xx +U -U^3 - V V_t=Dv V_xx + ep(U-a1V - a0) import numpy as np import matplotlib.pyplot as plt #matplotlib inline Du = 0.001 Dv = 0.005 tau = 1 k = -.00 ep = 0.01 a1 = 2 a0 = -0.03 L = 2 N= 10 x = np.linspace(0, L, N+1) dx = x[1]-x[0] T = 45 # total time dt = .01 # time step size = N n = int(T / dt) # number of iterations U = np.random.rand(size) V = np.random.rand(size) def laplacian(Z): Ztop = Z[0:-2] Zbottom = Z[2:] Zcenter = Z[1:-1] return (Ztop + Zbottom - 2 * Zcenter) / dx**2 def show_patterns(U, ax=None): ax.imshow(U, cmap=plt.cm.copper, interpolation='bilinear', extent=[-1, 1]) ax.set_axis_off() fig, axes = plt.subplots(3, 3, figsize=(16, 16)) step_plot = n // 9 # We simulate the PDE with the finite difference # method. for i in range(n): # We compute the Laplacian of u and v. deltaU = laplacian(U) deltaV = laplacian(V) # We take the values of u and v inside the grid. Uc = U[1:-1] Vc = V[1:-1] # We update the variables. U[1:-1], V[1:-1] = \ Uc + dt * (Du * deltaU + Uc - Uc**3 - Vc),\ Vc + dt * (Dv * deltaV + ep*(Uc - a1*Vc - a0)) / tau # Neumann conditions: derivatives at the edges # are null. for Z in (U, V): Z[0] = Z[1] Z[-1] = Z[-2] # Z[:, 0] = Z[:, 1] # Z[:, -1] = Z[:, -2] # We plot the state of the system at # 9 different times. fig, ax = plt.subplots(1, 1, figsize=(8, 8)) show_patterns(U,ax=None) </code></pre> <p>I got an error 'NoneType' object has no attribute 'imshow' and could not solve it</p>
<p>This line</p> <pre><code>show_patterns(U,ax=None) </code></pre> <p>passed in a <code>None</code> to the <code>ax</code> parameter.</p> <p>I don't know what <code>ax</code> should be but it needs to be properly initialised.</p>
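<p>In the posted code an Axes object is already created right before the call, so one possible fix (a sketch reusing the question's own variables) is simply to pass it in:</p>
<pre><code>fig, ax = plt.subplots(1, 1, figsize=(8, 8))
show_patterns(U, ax=ax)   # pass the created Axes instead of None
plt.show()
</code></pre>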
python|numpy|matplotlib
0
1,909,481
64,588,691
How to search through an array with an if statement and loop in JavaScript?
<p>How can I write code like this Python code:</p> <pre class="lang-py prettyprint-override"><code>array = [[&quot;item1&quot;, &quot;item2&quot;], [1, 2]] x = [&quot;item1&quot;, 3] for i in range(len(array)): for j in range(len(x)): if x[j] in array[i]: # do something </code></pre> <p>with JavaScript?</p> <p>I did write this code with JavaScript:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>var arr = [["item1", "item2"], [1, 2]], x = ["item1", 3]; for (var i = 0; i &lt;arr.length; i++) { for (var j = 0; j &lt; x.length; j++) { if (x[j] in arr[i]) { // do something } } }</code></pre> </div> </div> </p> <p>But the code didn't give me the result I want.</p>
<p>Look at this code and see if it works for you. In this code we are checking if there is an element which is the same in <code>array1</code> and in <code>array2</code>.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>var array1 = ["element1", "element2"]; var array2 = ["element0", "element2", "element3"]; for (let i = 0; i &lt; array1.length; i++) { for (let j = 0; j &lt; array2.length; j++) { if (array1[i] === array2[j]) console.log("Element: " + array1[i] + " is in array2."); } }</code></pre> </div> </div> </p>
javascript|python|arrays|for-loop|if-statement
0
1,909,482
70,245,475
How to load shelf in Houdini programatically
<p>I have recently started digging into tool development for Houdini.</p> <p>Is there a way to load a shelf in Houdini programmatically, like we can load a shelf in Maya?</p>
<p>While not strictly programatic I would recommend looking into the Houdini <a href="https://www.sidefx.com/docs/houdini/ref/plugins.html" rel="nofollow noreferrer">shelf/toolbar</a> format as well as <a href="https://www.sidefx.com/docs/houdini/shelf/config_file.html" rel="nofollow noreferrer">packages</a>.</p> <p>Houdini will load any .shelf files that it finds in the predefined toolbar search paths, and packages allow you to package and distribute collections of tools and environments (including the aforementioned toolbar search paths) in a nice self-contained way. You can think of packages as the Houdini equivalent of Maya's modules.</p> <p>So if you want to distribute a tool with its own shelves you could have a package containing a &quot;toolbar&quot; folder, inside of which would be any number of .shelf files for the shelves you want to add.</p>
python|houdini
1
1,909,483
63,350,880
Read excel without formatting date
<p>Pandas 1.1.0, Python 3.8</p> <p>When I try to read an Excel file with the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">read_excel method</a>,</p> <p>the file is loaded fine, but the Excel file has date formatting in some columns and pandas changes the format automatically.</p> <p>I want to avoid this behavior.</p>
<p>You can set the <code>dtype</code> parameter in the <code>read_excel</code> function so the date columns are read as plain strings, like this:</p> <pre><code>df=pd.read_excel('excel.xlsx', dtype={'datecolumnname1':str, 'datecolumnname2':str,...}) </code></pre>
python|python-3.x|excel|pandas
1
1,909,484
56,459,613
AttributeError: 'numpy.ndarray' object has no attribute 'sin'?
<p>Below is my short code, but it has an error: <code>"AttributeError: 'numpy.ndarray' object has no attribute 'sin'"</code>. I don't understand why and how to fix. Please guide me!</p> <p>Thanks a lot in advance!</p> <pre><code>import numpy as np w1 = 0.3 w2 = 0.2 w0 = 0.4 x1 = np.linspace(0, 10, 50) x2 = np.linspace(0, 10, 50) X, Y = np.meshgrid(x1, x2) A = np.array([1,X,Y],dtype=object) w = np.array([[w0],[w1],[w2]]) Z = np.sin(A.dot(w)) print (Z) </code></pre>
<p>Because you define <code>A</code> with <code>dtype=object</code>, the result of <code>A.dot(w)</code> will be of type object as well. As a consequence of this, <code>numpy.sin</code> tries to call <code>sin</code> as a member function of the elements in the result of <code>A.dot(w)</code>, which is not defined.</p> <p>Produces error: <code>np.sin(np.array([np.array(1)], dtype=object))</code><br> No error: <code>np.sin(np.array([np.array(1)]))</code></p> <p>As @Adelin has mentioned above, simply call <code>np.sin(A.dot(w)[0])</code>.</p>
python|numpy
1
1,909,485
17,957,461
Fabric is blocking command with pty=False
<p>I'm trying to start a remote service with fabric, and as pointed out in <a href="https://stackoverflow.com/questions/6379484/fabric-appears-to-start-apache2-but-doesnt">this question</a> my command is:</p> <pre><code>sudo('service jboss start', pty=False) </code></pre> <p>but unfortunately this command hangs on my terminal and I'm not able to close the fabric command, even with CTRL+C.</p> <p>The only way I could find to work around this issue is with a timeout option, but if I have more tasks after that they don't run, because the timeout is raised and the fab process exits with an error.</p> <p>Am I doing something wrong?</p>
<p>It's most likely related to these two FAQ points:</p> <p><a href="http://docs.fabfile.org/en/1.7/faq.html#init-scripts-don-t-work" rel="nofollow">http://docs.fabfile.org/en/1.7/faq.html#init-scripts-don-t-work</a></p> <p><a href="http://docs.fabfile.org/en/1.7/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang" rel="nofollow">http://docs.fabfile.org/en/1.7/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang</a></p>
python|jboss|fabric
-1
1,909,486
61,120,745
chart_studio.exceptions.PlotlyRequestError: Authentication credentials were not provided
<p>Here is my code can you please help me to execute my code. Currently it fails with:</p> <blockquote> <p>chart_studio.exceptions.PlotlyRequestError: Authentication credentials were not provided</p> </blockquote> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns train=pd.read_csv('titanic_train.csv') #sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='cividis') import cufflinks as cf import chart_studio.plotly as py cf.go_offline() train['Fare'].iplot(kind='hist',bins=30) sns.set_style('whitegrid') print(plt.show()) </code></pre> <p>Full traceback is:</p> <blockquote> <p>C:\Users\admin\PycharmProjects\untitled\venv\Scripts\python.exe C:/Users/admin/.PyCharmCE2018.3/config/scratches/me.py Traceback (most recent call last): File "C:/Users/admin/.PyCharmCE2018.3/config/scratches/me.py", line 711, in train['Fare'].iplot(kind='hist',bins=30) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\cufflinks\plotlytools.py", line 1218, in _iplot dimensions=dimensions,display_image=kwargs.get('display_image',True)) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\cufflinks\plotlytools.py", line 1471, in iplot filename=filename) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\chart_studio\plotly\plotly.py", line 135, in iplot url = plot(figure_or_data, **plot_options) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\chart_studio\plotly\plotly.py", line 280, in plot auto_open=False, File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\chart_studio\plotly\plotly.py", line 1087, in upload file_info = _create_or_overwrite_grid(payload) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\chart_studio\plotly\plotly.py", line 1550, in _create_or_overwrite_grid res = api_module.create(data) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\chart_studio\api\v2\grids.py", line 18, in create return request("post", url, json=body) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\retrying.py", line 49, in wrapped_f return Retrying(*dargs, **dkw).call(f, *args, **kw) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\retrying.py", line 206, in call return attempt.get(self._wrap_exception) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\retrying.py", line 247, in get six.reraise(self.value[0], self.value[1], self.value[2]) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\six.py", line 693, in reraise raise value File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\retrying.py", line 200, in call attempt = Attempt(fn(*args, **kwargs), attempt_number, False) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\chart_studio\api\v2\utils.py", line 180, in request validate_response(response) File "C:\Users\admin\PycharmProjects\untitled\venv\lib\site-packages\chart_studio\api\v2\utils.py", line 82, in validate_response raise exceptions.PlotlyRequestError(message, status_code, content) chart_studio.exceptions.PlotlyRequestError: Authentication credentials were not provided.</p> <p>Process finished with exit code 1</p> </blockquote>
<p>iplot on its own no longer works from PyCharm, but that doesn't affect the rest of the work. I also found the code for this; simply use the code below and it will work from PyCharm: <code>plot(train['Fare'].iplot(kind='hist',bins=30))</code>. It plots the graph according to the requirement in a new browser window.</p>
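<p>A slightly more explicit sketch of the same offline approach is below (this is not the exact code from above: <code>asFigure=True</code> is a cufflinks option that makes <code>iplot</code> return a figure object instead of trying to render it, so <code>plotly.offline.plot</code> can open it in the browser without any chart studio credentials; <code>train</code> is the dataframe from the question):</p>
<pre><code>import cufflinks as cf
from plotly.offline import plot

cf.go_offline()

# asFigure=True returns a Figure instead of rendering inline
fig = train['Fare'].iplot(kind='hist', bins=30, asFigure=True)
plot(fig)  # opens the chart in the default browser
</code></pre>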
python|plotly
0
1,909,487
65,927,470
Displayed data from firestore is out of order using Python
<p>This is my first question so I hope the format is OK.</p> <p>I have a Firestore collection called DeviceEvents. Each document has three fields always in the same order:</p> <ol> <li>'DeviceID' : integer</li> <li>'EventCode' : String</li> <li>TimeStamp : Timestamp</li> </ol> <p>I am accessing the data base and printing the contents of my DeviceEvents DB with this code,</p> <pre><code>db = Client(&quot;MyDatabase&quot;, creds) docs = db.collection(u'DeviceEvents').stream() for doc in docs: print(f'{doc.id} =&gt; {doc.to_dict()}') </code></pre> <p>But the lines that get printed have the fields in no particular order like this (sorry for the mess),</p> <pre><code>0VvJXxPVQ9gi26Cidk5A =&gt; {'DeviceID': 2885359593, 'TIMESTAMP': DatetimeWithNanoseconds(2020, 12, 12, 2, 44, 28, 658000, tzinfo=&lt;UTC&gt;), 'EventCode': '1'} 4Moup5WQqRVsfvjQDlgl =&gt; {'EventCode': '1', 'DeviceID': 2885359593, 'TIMESTAMP': DatetimeWithNanoseconds(2020, 12, 12, 2, 44, 3, 260000, tzinfo=&lt;UTC&gt;)} Y8BwdwTlVy65NpzOTLYV =&gt; {'TIMESTAMP': DatetimeWithNanoseconds(2020, 12, 22, 18, 39, 50, 221000, tzinfo=&lt;UTC&gt;), 'DeviceID': 2885451321, 'EventCode': '1'} Z4ekuzD6gsUpVXrHdmnC =&gt; {'TIMESTAMP': DatetimeWithNanoseconds(2020, 12, 20, 17, 29, 54, 819000, tzinfo=&lt;UTC&gt;), 'DeviceID': 2885451321, 'EventCode': '1'} vWdJQr6gWfrtlpfXWklD =&gt; {'EventCode': '1', 'TIMESTAMP': DatetimeWithNanoseconds(2020, 12, 10, 20, 21, 6, 941000, tzinfo=&lt;UTC&gt;), 'DeviceID': 2885359593} </code></pre> <p>If I run the code 10 times the order of the fields will be different every time. Does anyone know what may be going on?</p>
<p>I solved my issue. The solution is to format the print command in the order I want. I still don't understand why the fields were randomly out of order before, but this bit of print code works:</p> <pre><code>for doc in docs: print(u'{} =&gt; DeviceID:{}, EventCode: {}, TimeStamp: {}'.format(doc.id, doc.to_dict()[&quot;DeviceID&quot;], doc.to_dict()[&quot;EventCode&quot;], doc.to_dict()[&quot;TIMESTAMP&quot;])) </code></pre>
python|google-cloud-firestore|field
0
1,909,488
69,216,352
How to Extract Text from Specific Spans Using BeautifulSoup?
<p>I have the following code snippet.</p> <pre><code> &lt;div id=&quot;job-location-name&quot; itemprop=&quot;address&quot; itemscope=&quot;&quot; itemtype=&quot;https://schema.org/PostalAddress&quot; data-reactid=&quot;49&quot;&gt; &lt;span itemprop=&quot;addressLocality&quot; data-reactid=&quot;50&quot;&gt;Cupertino&lt;/span&gt; &lt;span data-reactid=&quot;51&quot;&gt;, &lt;/span&gt; &lt;span class=&quot;break--on--small&quot; data-reactid=&quot;52&quot;&gt;&lt;/span&gt; &lt;span itemprop=&quot;addressRegion&quot; data-reactid=&quot;53&quot;&gt;California&lt;/span&gt; &lt;span data-reactid=&quot;54&quot;&gt;, &lt;/span&gt; &lt;span class=&quot;break--on--small&quot; data-reactid=&quot;55&quot;&gt;&lt;/span&gt; &lt;span itemprop=&quot;addressCountry&quot; data-reactid=&quot;56&quot;&gt;United States&lt;/span &lt;/div&gt; </code></pre> <p>I'm trying to extract the city and state:</p> <p>('Cupertino', 'California')</p> <p>But I'm only getting a text from the first span:</p> <p>('Cupertino')</p> <p>How can I select specific spans to get the state text as well? Here's my code:</p> <pre><code> job_soup = BeautifulSoup(browser.page_source, &quot;lxml&quot;) locations = job_soup.find('div', {'id': 'job-location-name'}).find_all('span') for location in locations: location = location.text </code></pre>
<p>I'm getting output using css selectors like as follows:</p> <p>Code:</p> <pre><code>doc =''' &lt;div id=&quot;job-location-name&quot; itemprop=&quot;address&quot; itemscope=&quot;&quot; itemtype=&quot;https://schema.org/PostalAddress&quot; data-reactid=&quot;49&quot;&gt; &lt;span itemprop=&quot;addressLocality&quot; data-reactid=&quot;50&quot;&gt;Cupertino&lt;/span&gt; &lt;span data-reactid=&quot;51&quot;&gt;, &lt;/span&gt; &lt;span class=&quot;break--on--small&quot; data-reactid=&quot;52&quot;&gt;&lt;/span&gt; &lt;span itemprop=&quot;addressRegion&quot; data-reactid=&quot;53&quot;&gt;California&lt;/span&gt; &lt;span data-reactid=&quot;54&quot;&gt;, &lt;/span&gt; &lt;span class=&quot;break--on--small&quot; data-reactid=&quot;55&quot;&gt;&lt;/span&gt; &lt;span itemprop=&quot;addressCountry&quot; data-reactid=&quot;56&quot;&gt;United States&lt;/span &lt;/div&gt; ''' from bs4 import BeautifulSoup job_soup = BeautifulSoup(doc, &quot;lxml&quot;) p = job_soup.select_one('div#job-location-name') t = p.select('span:nth-child(1)') + p.select('span:nth-child(4)') for location in t: print(location.text) </code></pre> <p>Output:</p> <pre><code>Cupertino California </code></pre>
python|beautifulsoup
1
1,909,489
69,217,209
Google Colab - can't open any notebooks
<p>After successfully using Google Colab on Chrome for a day the error message pasted below started appearing whenever I try to open any google colab notebook (even the tutorial notebook <a href="https://colab.research.google.com/notebooks/welcome.ipynb" rel="nofollow noreferrer">https://colab.research.google.com/notebooks/welcome.ipynb</a>).</p> <p>I have deactivated all ad-blockers, tried to reinstall Chrome and tried incognito-mode. I experience the same issue on Firefox but everything seems fine on Edge.</p> <p>Thanks for any help.</p> <pre><code>Notebook loading error There was an error loading this notebook. Ensure that the file is accessible and try again. Check dependency list! Synchronous require cannot resolve module 'vs/platform/quickinput/common/quickInput'. This is the first mention of this module! Check dependency list! Synchronous require cannot resolve module 'vs/platform/quickinput/common/quickInput'. This is the first mention of this module! Error: Check dependency list! Synchronous require cannot resolve module 'vs/platform/quickinput/common/quickInput'. This is the first mention of this module! at s.synchronousRequire (https://colab.research.google.com/v2/external/js/monaco_editor/vs/loader.js?vrz=colab-20210915-060048-RC00_396786732:29:74) at i (https://colab.research.google.com/v2/external/js/monaco_editor/vs/loader.js?vrz=colab-20210915-060048-RC00_396786732:35:493) at xa.program_ (https://colab.research.google.com/v2/external/external_polymer_binary_l10n__en_gb.js?vrz=colab-20210915-060048-RC00_396786732:2359:485) at za (https://colab.research.google.com/v2/external/external_polymer_binary_l10n__en_gb.js?vrz=colab-20210915-060048-RC00_396786732:19:336) at xa.next_ (https://colab.research.google.com/v2/external/external_polymer_binary_l10n__en_gb.js?vrz=colab-20210915-060048-RC00_396786732:17:503) at Aa.next (https://colab.research.google.com/v2/external/external_polymer_binary_l10n__en_gb.js?vrz=colab-20210915-060048-RC00_396786732:20:206) at f (https://colab.research.google.com/v2/external/external_polymer_binary_l10n__en_gb.js?vrz=colab-20210915-060048-RC00_396786732:62:101) </code></pre>
<p>This is a newly occurring bug being tracked at <a href="https://github.com/googlecolab/colabtools/issues/2250" rel="noreferrer">https://github.com/googlecolab/colabtools/issues/2250</a>.</p> <p>I'd recommend following up there with the maintainers in order to accelerate the diagnosis.</p>
python|jupyter-notebook|google-colaboratory
6
1,909,490
68,903,379
Control access to a page in UpdateView
<p><br>I want to control that only the superuser and for example the teacher, can access a page inherited from <code>UpdateView</code> and redirect others to the <code>/404</code> page.</p> <p>My class in <code>views.py</code> :</p> <pre class="lang-py prettyprint-override"><code>class Edit_news(UpdateView): model = News form_class = edit_NewsForm template_name = &quot;profile/Edit_news.html&quot; </code></pre> <p>URL of class in <code>urls.py</code> :</p> <pre class="lang-py prettyprint-override"><code>from django.urls import path from .views import Edit_news urlpatterns = [ path(&quot;edit-news/&lt;pk&gt;&quot;, Edit_news.as_view()), ] </code></pre>
<h2>Solution 1</h2> <p>You can override <a href="https://docs.djangoproject.com/en/3.2/ref/class-based-views/base/#django.views.generic.base.View.dispatch" rel="nofollow noreferrer">View.dispatch()</a> which is the entrypoint of any HTTP methods to check first the user.</p> <blockquote> <p><strong><code>dispatch(request, *args, **kwargs)</code></strong></p> <p>The view part of the view – the method that accepts a request argument plus arguments, and returns a HTTP response.</p> <p>The default implementation will inspect the HTTP method and attempt to delegate to a method that matches the HTTP method; a GET will be delegated to <code>get()</code>, a POST to <code>post()</code>, and so on.</p> </blockquote> <pre><code>class Edit_news(UpdateView): ... def dispatch(self, request, *args, **kwargs): if ( request.user.is_superuser or request.user.has_perm('app.change_news') # If ever you attach permissions to your users or request.user.email.endswith('@teachers.edu.org') # If the email of teachers are identified by that suffix or custom_check_user(request.user) # If you have a custom checker ): return super().dispatch(request, *args, **kwargs) return HttpResponseForbidden() ... </code></pre> <p>If you want it to be different for HTTP GET and HTTP POST and others, then override instead the <a href="https://docs.djangoproject.com/en/3.2/ref/class-based-views/generic-editing/#django.views.generic.edit.BaseUpdateView" rel="nofollow noreferrer">specific method</a>.</p> <blockquote> <p><strong><code>class django.views.generic.edit.BaseUpdateView</code></strong></p> <p>Methods</p> <p><code>get(request, *args, **kwargs)</code></p> <p><code>post(request, *args, **kwargs)</code></p> </blockquote> <p>Related reference:</p> <ul> <li><a href="https://stackoverflow.com/questions/10326938/add-object-level-permission-to-generic-view">Add object level permission to generic view</a></li> </ul> <h2>Solution 2</h2> <p>You can also try <a href="https://docs.djangoproject.com/en/3.2/topics/auth/default/#django.contrib.auth.mixins.UserPassesTestMixin" rel="nofollow noreferrer">UserPassesTestMixin</a> (the class-based implementation of <a href="https://docs.djangoproject.com/en/3.2/topics/auth/default/#django.contrib.auth.decorators.user_passes_test" rel="nofollow noreferrer">user_passes_test</a>).</p> <pre><code>class Edit_news(UserPassesTestMixin, UpdateView): # Some mixins in django-auth is required to be in the leftmost position (though for this mixin, it isn't explicitly stated so probably it is fine if not). raise_exception = True # To not redirect to the login url and just return 403. For the other settings, see https://docs.djangoproject.com/en/3.2/topics/auth/default/#django.contrib.auth.mixins.AccessMixin def test_func(self): return ( self.request.user.is_superuser or self.request.user.has_perm('app.change_news') # If ever you attach permissions to your users or self.request.user.email.endswith('@teachers.edu.org') # If the email of teachers are identified by that suffix or custom_check_user(self.request.user) # If you have a custom checker ) ... </code></pre>
python|django|django-class-based-views
1
1,909,491
73,145,049
How to add obj to db with sqlmodel
<p>I am new to FastAPI. How can i create record in db using sqlmodel and databases packages?</p> <pre><code>class Hero(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) name: str = Field(index=True) secret_name: str age: Optional[int] = Field(default=None, index=True) team_id: Optional[int] = Field(default=None, foreign_key=&quot;team.id&quot;) team: Optional[Team] = Relationship(back_populates=&quot;heroes&quot;) </code></pre> <pre><code>@app.post(&quot;/hero&quot;, description=&quot;Create new hero&quot;) async def create_hero(data: Hero): hero = Hero(**data.dict()) q = insert(Hero).values(**hero.dict()) h_id = await db.execute(q) </code></pre> <p>When i finally try to do this, it shows me:</p> <pre><code>asyncpg.exceptions.NotNullViolationError: null value in column &quot;id&quot; of relation &quot;hero&quot; violates not-null constraint DETAIL: Failing row contains (null, spider, black, 18, null). </code></pre> <p>Referring to the sqlmodel docs, id will be set automatically, but using <strong>sqlmodel.Session</strong>. How to do the same thing with</p> <pre><code>import databases db = databases.Database(&quot;postgresql+asyncpg://postgres:postgres@localhost:5432/testdb&quot;) </code></pre>
<p>As some comments suggested, you should probably not use <code>databases</code> together with <code>SQLModel</code>. The beauty of <code>SQLModel</code> is (among many other things) that the database inserts are so simple: You just add your model objects to a database session and commit.</p> <p>Another suggestion is to make use of the FastAPI <a href="https://fastapi.tiangolo.com/tutorial/dependencies/dependencies-with-yield/" rel="nofollow noreferrer">dependencies</a> to automatically initialize, start, inject and finally close a session in every route.</p> <p>Here is a working example:</p> <pre class="lang-py prettyprint-override"><code>from typing import Optional from fastapi import FastAPI, Depends from sqlalchemy.ext.asyncio.engine import create_async_engine from sqlalchemy.ext.asyncio.session import AsyncSession from sqlalchemy.orm.session import sessionmaker from sqlmodel import Field, SQLModel api = FastAPI() db_uri = &quot;postgresql+asyncpg://postgres:postgres@localhost:5432/testdb&quot; engine = create_async_engine(db_uri, future=True) session_maker = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession) @api.on_event('startup') async def initialize_db(): async with engine.begin() as conn: await conn.run_sync(SQLModel.metadata.drop_all) await conn.run_sync(SQLModel.metadata.create_all) async def get_session() -&gt; AsyncSession: session = session_maker() try: yield session finally: await session.close() class Hero(SQLModel, table=True): id: Optional[int] = Field(default=None, primary_key=True) name: str = Field(index=True) secret_name: str age: Optional[int] = Field(default=None, index=True) ... @api.post(&quot;/hero&quot;, description=&quot;Create new hero&quot;) async def create_hero(hero: Hero, session: AsyncSession = Depends(get_session)): session.add(hero) await session.commit() await session.refresh(hero) return hero </code></pre> <p>Note that for testing purposes I simply drop and (re-)create all database tables on startup.</p> <p>Hope this helps.</p>
python|database|fastapi|pydantic|sqlmodel
0
1,909,492
63,299,066
Regression train dataset in loop with different algorithms
<p>I have read the data from pandas and want to pass in different algorithms to find the score.</p> <p><strong>Option 1</strong></p> <pre><code>data=pd.read_excel(&quot;El Nino.xlsx&quot;) # Scaled the data further and then split into independent and dependent variables. R1= LinearRegression.fit(X_train,y_train) R2= Ridge.fit(X_train,y_train) R3=Perceptron.fit(X_train,y_train) </code></pre> <p><strong>Option 2</strong> I would like to pass it in loop ...something like this</p> <pre><code>estimators = [&quot;LinearRegression&quot;, &quot;Ridge&quot;, &quot;PassiveAggressiveClassifier&quot;,&quot;Perceptron&quot;] for i in estimators: reg= i.fit(X_train,y_train) Score= reg.score(X_train,y_train) print(Score) </code></pre> <p>It doesn't work as the items in the list are strings. Can you please suggest the way I can pass it in loop to train and calculate the score?</p>
<p>Edit: I misunderstood the problem at first. Shouldn't you simply pass your already-fitted models into the estimators loop, like <code>estimators = [R1, R2, R3]</code>?</p> <p>Alternatively, if you want to keep the loop as written, you have to put the estimator objects themselves in the list (for example <code>LinearRegression()</code>), not strings:</p> <pre><code>lr = LinearRegression() rg = Ridge() pac = PassiveAggressiveClassifier() ptn = Perceptron() estimators = [lr, rg, pac, ptn] </code></pre> <p>Then call <code>fit</code> and <code>score</code> on each object in the loop, just as you did before.</p>
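<p>To make that concrete, here is a minimal sketch of the loop (it assumes the same <code>X_train</code>/<code>y_train</code> split as in the question; the list of estimators is just an example):</p> <pre><code>from sklearn.linear_model import LinearRegression, Ridge, PassiveAggressiveClassifier, Perceptron

# Instantiate the estimator objects (not strings) and loop over them
estimators = [LinearRegression(), Ridge(), PassiveAggressiveClassifier(), Perceptron()]

for est in estimators:
    est.fit(X_train, y_train)            # train on the training split
    score = est.score(X_train, y_train)  # R^2 for regressors, accuracy for classifiers
    print(type(est).__name__, score)
</code></pre>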
python|machine-learning|scikit-learn
0
1,909,493
63,301,367
Converting different time formats into same format and finding difference between 2 times
<p>Hope you're all doing good.</p> <p>I've been working this particular problem of finding the time difference between 2 times (that are in all different formats). I have partially solved it for some cases, however cannot understand at the moment how to create a solution for all cases.</p> <p>My step by step process includes:</p> <ul> <li><p>Converting all data (originally in String format) into datetime format</p> </li> <li><p>Finding cases where times have been expressed differently in String format to properly convert into datetime format without losing accuracy of PM is still PM and not converted into AM if it's '4' and not 4PM or 16:00 already</p> </li> <li><p>Then... calculating the difference between 2 times (once in datetime format)</p> </li> </ul> <p>Some specifics include finding the time difference for the following times (stored as Strings originally and in this example):</p> <ul> <li>16-19</li> <li>19.30-20.00</li> <li>5PM-6PM</li> <li>4-5</li> <li>5-5.10PM</li> <li>16 - 18 (yes the space between the numbers and hyphen is intentional, although some string manipulation should resolve this quite simply)</li> <li>12 - 14</li> </ul> <p>I've managed to convert the 16-19 into: 16:00:00 and 19:00:00, however for the 19.30-20.00 example, i receive the following error: <code>ValueError(&quot;unconverted data remains: %s&quot; % ValueError: unconverted data remains: .30</code>.</p> <p>I'm assuming this is due to my code implementation:</p> <p><code>theDate1, theDate2 = datetime.strptime(temp[0: temp.find('-')], &quot;%H&quot;), datetime.strptime(temp[temp.find('-') + 1: len(temp)], &quot;%H&quot;)</code></p> <p>...where the 19.30-20.00 includes a %M part and not only a %H so the code doesn't say how to deal with the :30 part. I was going to try a conditional part where if the string has 16-19, run some code and if the string has 19.30-20.00 run some other code (and the same for the other examples).</p> <p>Apologies if my explanation is a bit all over the place... I'm in the headspace of hacking a solution together and trying all different combinations.</p> <p>Thanks for any guidance with this!</p> <p>Have a good day.</p>
<p>here's a way to parse the example strings using <code>dateutil</code>'s parser, after a little pre-processing:</p> <pre><code>from dateutil import parser strings = ['16-19', '19.30-20.00', '5PM-6PM', '4-5' ,'5-5.10PM', '16 - 18', '12 - 14'] datepfx = '2020-07-21 ' # will prefix this so parser.parse works correctly for s in strings: # split on '-', strip trailing spaces # replace . with : as time separator, ensure upper-case letters parts = [part.strip().replace('.',':').upper() for part in s.split('-')] # if only one number is given, assume hour and add minute :00 parts = [p+':00' if len(p)==1 else p for p in parts] # check if AM or PM appears in only one of the parts ampm = [i for i in ('AM', 'PM') for p in parts if i in p] if len(ampm) == 1: parts = [p+ampm[0] if not ampm[0] in p else p for p in parts] print(f&quot;\n'{s}' processed to -&gt; {parts}&quot;) print([parser.parse(datepfx + p).time() for p in parts]) </code></pre> <p>gives</p> <pre><code>'16-19' processed to -&gt; ['16', '19'] [datetime.time(16, 0), datetime.time(19, 0)] '19.30-20.00' processed to -&gt; ['19:30', '20:00'] [datetime.time(19, 30), datetime.time(20, 0)] '5PM-6PM' processed to -&gt; ['5PM', '6PM'] [datetime.time(17, 0), datetime.time(18, 0)] '4-5' processed to -&gt; ['4:00', '5:00'] [datetime.time(4, 0), datetime.time(5, 0)] '5-5.10PM' processed to -&gt; ['5:00PM', '5:10PM'] [datetime.time(17, 0), datetime.time(17, 10)] '16 - 18' processed to -&gt; ['16', '18'] [datetime.time(16, 0), datetime.time(18, 0)] '12 - 14' processed to -&gt; ['12', '14'] [datetime.time(12, 0), datetime.time(14, 0)] </code></pre>
python|python-3.x|string|datetime
1
1,909,494
63,035,700
how is Matrix Multiplication with Numpy
<p>I have a 2D matrix, for example <code>X = np.ones((97, 2))</code>, and a vector, for example <code>Theta = np.zeros((2, 1))</code>. When I multiply the vector Theta and the matrix X, it raises an error</p> <pre><code>'operands could not be broadcast together with shapes (2,1) (97,2)' </code></pre> <p>but when I multiply <code>Theta.T * X</code>, it outputs a new <code>(97, 2)</code> matrix.</p> <p>How did that work? How does <code>(1,2) * (97,2)</code> work, and if the shapes don't have to match, why does it raise an error for <code>Theta * X</code>? Could anyone explain it, please?</p>
<p>With NumPy arrays, <code>*</code> is element-wise multiplication, not matrix multiplication. NumPy tries to broadcast (i.e. replicate) the smaller array across the larger one so the shapes match, comparing dimensions from the right: <code>(1, 2)</code> against <code>(97, 2)</code> lines up (the 1 is stretched to 97), so <code>Theta.T * X</code> produces a <code>(97, 2)</code> element-wise product, while <code>(2, 1)</code> against <code>(97, 2)</code> cannot be aligned, which is why <code>Theta * X</code> raises the broadcasting error. If you actually want a matrix product, use <code>X @ Theta</code> or <code>np.dot(X, Theta)</code>, which gives a <code>(97, 1)</code> result.</p> <p><a href="https://numpy.org/doc/stable/user/basics.broadcasting.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/user/basics.broadcasting.html</a></p>
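<p>A short sketch reproducing the shapes from the question:</p> <pre><code>import numpy as np

X = np.ones((97, 2))
Theta = np.zeros((2, 1))

# Theta * X would raise: operands could not be broadcast together with shapes (2,1) (97,2)
print((Theta.T * X).shape)  # (97, 2) -- element-wise product, (1, 2) broadcast over the rows
print((X @ Theta).shape)    # (97, 1) -- an actual matrix product
</code></pre>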
python|numpy|matrix-multiplication
1
1,909,495
62,449,753
Unable to serialize PySpark UDF
<p>I am trying to create a UDF in PySpark. The function takes an input string containing XML, parses it with lxml, and returns a list of dictionaries with the attributes. I created the function <code>parse_xml</code>, but when I run the line <code>spark.udf.register("parse_xml", parse_xml)</code>, it gives the error: <code>PicklingError: Could not serialize object: TypeError: can't pickle lxml.etree.XMLParser objects</code>. </p> <p>It seems as if lxml objects are not serializable, but the input is a string and the output is a list/dictionary -- is there any way to create a UDF like this?</p>
<p>PySpark serializes UDFs (and everything they reference) with <code>cloudpickle</code>, and objects backed by C extensions, such as <code>lxml.etree.XMLParser</code>, cannot be pickled.</p> <p>If you want to parse XML at scale, consider the spark-xml data source (originally from Databricks) instead of a hand-rolled UDF; it is also considerably faster.</p>
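<p>If you still want to keep the lxml-based UDF, a common workaround is to make sure no C-level parser object is captured by the function that gets pickled, for example by constructing everything from <code>lxml</code> inside the function body. A minimal sketch of that idea (the return schema and parsing logic here are illustrative, not your original <code>parse_xml</code>):</p> <pre><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, MapType, StringType

spark = SparkSession.builder.getOrCreate()

def parse_xml(xml_string):
    # Import and build the parser inside the function so that only the
    # function's code is pickled, never a live lxml parser object.
    from lxml import etree
    root = etree.fromstring(xml_string.encode('utf-8'))
    return [dict(elem.attrib) for elem in root.iter()]

parse_xml_udf = udf(parse_xml, ArrayType(MapType(StringType(), StringType())))
# or register it for SQL use:
# spark.udf.register('parse_xml', parse_xml, ArrayType(MapType(StringType(), StringType())))
</code></pre>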
python|apache-spark|pyspark|lxml|user-defined-functions
0
1,909,496
58,851,498
Wxpython passing data from one frame to another
<p>I have a text control, a choice control, and a button on one from. I would like to pass the values of both the text control and the choice control to another frame when button is pressed. In the on_press method in the code below, I returned both values as a list. How can I access this list in PanelTwo. Here's the code:</p> <pre><code>import wx class PanelOne(wx.Panel): def __init__(self, parent): wx.Panel.__init__(self, parent) box = wx.BoxSizer(wx.VERTICAL) self.label = wx.StaticText(self,label = "Your Name:" ,style = wx.ALIGN_CENTRE) box.Add(self.label, 1 , wx.EXPAND |wx.ALIGN_CENTER_HORIZONTAL |wx.ALL, 20) self.inputTxtOne = wx.TextCtrl(self,wx.ID_ANY,value='') box.Add(self.inputTxtOne, 1 , wx.EXPAND |wx.ALIGN_CENTER_HORIZONTAL |wx.ALL, 20) languages = ['ZIRA', 'DAVID'] chlbl = wx.StaticText(self,label = "Voice control",style = wx.ALIGN_CENTRE) box.Add(chlbl,1,wx.EXPAND|wx.ALIGN_CENTER_HORIZONTAL|wx.ALL,5) self.choice = wx.Choice(self,choices = languages) box.Add(self.choice,1,wx.EXPAND|wx.ALIGN_CENTER_HORIZONTAL|wx.ALL,5) btn = wx.Button(self, label='Submit',style=wx.TRANSPARENT_WINDOW) btn.SetFont(wx.Font(15,wx.FONTFAMILY_DEFAULT,wx.FONTSTYLE_NORMAL,wx.FONTWEIGHT_NORMAL)) btn.Bind(wx.EVT_BUTTON, self.on_press) box.Add(btn,1,wx.EXPAND|wx.ALIGN_CENTER_HORIZONTAL|wx.ALL,5) self.SetSizer(box) btn.Bind(wx.EVT_BUTTON, self.on_press) def on_press(self,event): name = self.inputTxtOne.GetValue() voice_choice = self.choice.GetValue() nv_list = [name,voice_choice] parent_frame = self.GetParent() parent_frame.Close() frame = FrameTwo() frame.Show() return nv_list class PanelTwo(wx.Panel): def __init__(self, parent): wx.Panel.__init__(self, parent) box = wx.BoxSizer(wx.VERTICAL) self.label = wx.StaticText(self,label = "Your Name is "+name ,style = wx.ALIGN_CENTRE) box.Add(self.label, 1 , wx.EXPAND |wx.ALIGN_CENTER_HORIZONTAL |wx.ALL, 20) self.SetSizer(box) class FrameOne(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, title="First Frame") panel = PanelOne(self) self.Show() class FrameTwo(wx.Frame): def __init__(self): wx.Frame.__init__(self, None, title="Second Frame") panel = PanelTwo(self) self.Show() if __name__ == "__main__": app = wx.App(False) frame = FrameOne() app.MainLoop() </code></pre>
<p>There are several errors in your code. Let's go step by step.</p> <p>First of all, the variable called "name" in PanelOne must be an attribute of the class instead of a local variable of the "on_press" method:</p> <p>Change:</p> <pre><code>name = self.inputTxtOne.GetValue() voice_choice = self.choice.GetValue() </code></pre> <p>to:</p> <pre><code># Notice the use of self. This way, we can access it from anywhere in the class. self.name = self.inputTxtOne.GetValue() # wx.Choice has no GetValue(); read the selection with GetSelection(). voice_choice = self.choice.GetSelection() </code></pre> <p>Now we need to access this variable from another class (PanelTwo). For this, we pass the PanelOne instance to the second frame (FrameTwo) and store it there as a member variable (again using self, as in the previous example).</p> <pre><code># When you instantiate FrameTwo in PanelOne, pass the PanelOne instance (self) frame = FrameTwo(self) </code></pre> <p>Then in the <code>__init__</code> method of FrameTwo:</p> <pre><code>def __init__(self, parent): wx.Frame.__init__(self, None, title="Second Frame") # Now the reference to PanelOne is a member variable of FrameTwo. self.panelOne = parent # Pass the FrameTwo instance (self) to PanelTwo in its constructor: panel = PanelTwo(self) # Rest of your code... </code></pre> <p>Finally in PanelTwo:</p> <pre><code>class PanelTwo(wx.Panel): def __init__(self, parent): wx.Panel.__init__(self, parent) # parent is the FrameTwo instance. # panelOne is a member variable of FrameTwo that references the PanelOne instance. # name is a member variable of PanelOne, so it is accessible here. print(parent.panelOne.name) </code></pre> <p>A recommendation: use wxGlade to generate the boilerplate; with it you can learn a lot about how things are done in wxPython.</p> <p>Good luck!</p>
python|wxpython
0
1,909,497
58,980,752
Run same testbench with different parameter files in VUnit
<p>I have a project with the following structure:</p> <pre><code>tb_top β”œβ”€β”€ run.py └── src β”œβ”€β”€ tb_top.vhd β”œβ”€β”€ test_a β”‚ β”œβ”€β”€ top_parameter.vhd β”‚ β”œβ”€β”€ input.csv β”‚ └── output.csv β”œβ”€β”€ test_b β”‚ β”œβ”€β”€ top_parameter.vhd β”‚ β”œβ”€β”€ input.csv β”‚ └── output.csv └── ... src β”œβ”€β”€ bar.vhd β”œβ”€β”€ foo.vhd └── top.vhd </code></pre> <p><code>top.vhd</code> includes <code>foo.vhd</code>, <code>bar.vhd</code> as well as <code>top_parameter.vhd</code> of each test case. In <code>run.py</code>, first the files at <code>src/</code> folder are compiled and then the <code>top_parameter.vhd</code> for each test case. All files are in the same library. When running <code>run.py</code> the following error is displayed:</p> <pre><code>tb_top/src/tb_top.vhd:44:20: entity "top" is obsoleted by package "top_parameter" /usr/local/bin/ghdl: compilation error </code></pre> <p>Obviously, <code>top.vhd</code> should be recompiled everytime when <code>top_parameter.vhd</code> gets recompiled, but I can't figure out how to structure my <code>run.py</code>. Is there are way to compile the tests correctly, without:</p> <ol> <li>recompiling <code>foo.vhd</code> and <code>bar.vhd</code> for each test?</li> <li>duplicating identical files like <code>tb_top.vhd</code> and <code>run.py</code> for each test?</li> </ol> <p>I'm using VUnit 4.2.0 and the current master of ghdl. If needed, I can prepare a MWE.</p>
<p>The compiled libraries can't change between test runs in VUnit. The preferred solution is to pass the parameters as top-level generics instead of using a separate file.</p> <p>However, I didn't want to change the file structure. So in my case, <code>top_parameter.vhd</code> and all dependent files got compiled into a separate library for each test case. To choose the correct library, an extra generic was passed to the testbench.</p> <p>More details and other options can be found <a href="https://github.com/VUnit/vunit/issues/590" rel="nofollow noreferrer">here</a>.</p>
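<p>For the preferred approach (top-level generics), a minimal <code>run.py</code> sketch could look like the following. The library name, glob patterns and generic name are illustrative, and <code>tb_top.vhd</code> would need a matching generic (for example a string pointing at the test case's data directory):</p> <pre><code>from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("src/*.vhd")
lib.add_source_files("tb_top/src/tb_top.vhd")

tb = lib.test_bench("tb_top")
for name in ["test_a", "test_b"]:
    # One configuration per test case; the per-test parameters are passed
    # as top-level generics instead of compiling a separate top_parameter.vhd
    tb.add_config(name=name, generics=dict(data_dir="tb_top/src/" + name))

vu.main()
</code></pre>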
python|ghdl|vunit
2
1,909,498
73,263,820
C6011 in C++ and Python project
<p>I am in my third CS class online and I have done ok until now but I am really struggling with this. My code runs fine through the menu and input validation just fine but then as soon as I call a function from the python file I get the dereferencing null pointer message as follows &quot; EXCEPTION UNHANDLED: Unhandled exception at 0x1DD09F27 (python36.dll) in moduleSixCppAndPython.exe: 0xC0000005: Access violation reading location 0x00000004. occurred&quot;.</p> <p>I'm going to try to include my c++ code so forgive me if i mess this it up.. this is my first time using stackoverflow. the sections underlined in my IDE are the line as follows:</p> <p>pFunc = PyDict_GetItemString(pDict, procname); // this one get like a red X next to it ... Py_DECREF(pValue); // this line is underlined</p> <p>all issues are coming from the &quot;int callIntFunc(string proc, int param)&quot; funtion</p> <p>main is not really finished yet so i'm not super concerned with that unless that's where my problem is coming from...</p> <p>any guidance at all would be very greatly appreciated!</p> <pre><code>#include &lt;Python.h&gt; #include &lt;iostream&gt; #include &lt;Windows.h&gt; #include &lt;cmath&gt; #include &lt;string&gt; #include &lt;conio.h&gt; using namespace std; bool CONTINUE_RUN = true; int displayMenu() { int userInput = 0; while (true) { cout &lt;&lt; &quot;1: Display a Multiplication Table&quot; &lt;&lt; endl; cout &lt;&lt; &quot;2: Double a Value&quot; &lt;&lt; endl; cout &lt;&lt; &quot;3: Exit&quot; &lt;&lt; endl; cout &lt;&lt; &quot;Enter your selection as a number 1, 2, or 3.&quot; &lt;&lt; endl; while (!(cin &gt;&gt; userInput)) { system(&quot;cls&quot;); cout &lt;&lt; &quot;ERROR: Please enter 1, 2, or 3&quot; &lt;&lt; endl; cout &lt;&lt; &quot;1: Display a Multiplication Table&quot; &lt;&lt; endl; cout &lt;&lt; &quot;2: Double a Value&quot; &lt;&lt; endl; cout &lt;&lt; &quot;3: Exit&quot; &lt;&lt; endl; cout &lt;&lt; &quot;Enter your selection as a number 1, 2, or 3.&quot; &lt;&lt; endl; cin.clear(); cin.ignore(123, '\n'); } if (userInput == 1) { break; } if (userInput == 2) { break; } if (userInput == 3) { CONTINUE_RUN = false; break; } else { system(&quot;cls&quot;); cout &lt;&lt; &quot;ERROR: Please enter 1, 2, or 3&quot; &lt;&lt; endl; continue; } } return userInput; } int userData() { int pickNum; system(&quot;cls&quot;); cout &lt;&lt; &quot;Please enter an integer: &quot; &lt;&lt; endl; while (!(cin &gt;&gt; pickNum)) { system(&quot;cls&quot;); cout &lt;&lt; &quot;ERROR: Please enter an INTEGER:&quot;; cin.clear(); cin.ignore(123, '\n'); } return pickNum; } int callIntFunc(string proc, int param) { char* procname = new char[proc.length() + 1]; std::strcpy(procname, proc.c_str()); PyObject* pName, * pModule, * pDict, * pFunc, * pValue = nullptr, * presult = nullptr; // Initialize the Python Interpreter Py_Initialize(); // Build the name object pName = PyUnicode_FromString((char*)&quot;PythonCode&quot;); // Load the module object pModule = PyImport_Import(pName); // pDict is a borrowed reference pDict = PyModule_GetDict(pModule); // pFunc is also a borrowed reference pFunc = PyDict_GetItemString(pDict, procname); if (PyCallable_Check(pFunc)) { pValue = Py_BuildValue(&quot;(i)&quot;, param); PyErr_Print(); presult = PyObject_CallObject(pFunc, pValue); PyErr_Print(); } else { PyErr_Print(); } //printf(&quot;Result is %d\n&quot;, _PyLong_AsInt(presult)); Py_DECREF(pValue); // Clean up Py_DECREF(pModule); Py_DECREF(pName); // Finish the Python Interpreter Py_Finalize(); // clean delete[] 
procname; return _PyLong_AsInt(presult); } int main(){ while (CONTINUE_RUN == true) { int userNumber = 0; int menuValue = displayMenu(); if (menuValue == 1) { userNumber = userData(); system(&quot;cls&quot;); int token = callIntFunc(&quot;MultiplicationTable&quot;, userNumber); cout &lt;&lt; &quot;Press any key to continue&quot; &lt;&lt; endl; _getch(); } if (menuValue == 2) { userNumber = userData(); system(&quot;cls&quot;); cout &lt;&lt; callIntFunc(&quot;DoubleValue&quot;, userNumber); cout &lt;&lt; &quot;Press any key to continue&quot; &lt;&lt; endl; _getch(); } } cout &lt;&lt; &quot;GOODBYE&quot; &lt;&lt; endl; } </code></pre>
<p>OK, so a big THANK YOU to @PaulMcKenzie in the comments for pointing me in the right direction...<br /> It turns out the issue was not in my <code>.cpp</code> file but in the <code>.py</code> file that it was reading.</p> <p>I had used the Python syntax:</p> <pre class="lang-py prettyprint-override"><code>print(namedVariable + &quot;someString&quot; + (evaluatedCalculation)) </code></pre> <p>Concatenation with <code>+</code> only works when every operand is a string, so while this happened to work in some cases, it raised an exception inside the embedded interpreter whenever the calculation produced a number, and that exception surfaced back in my <code>.cpp</code> file. Because the error was flagged in the <code>.cpp</code> file, I had tunnel vision and looked for the bug only there and nowhere else. PLEASE forgive a rookie his rookie mistake!</p> <p>To fix this, I changed the syntax in my <code>.py</code> file to:</p> <pre class="lang-py prettyprint-override"><code>print(namedVariable, &quot;someString&quot;, (evaluatedCalculation)) </code></pre> <p>The real error here was not just in the logic I applied when writing my Python code... but in the logic I applied when looking for the source of the error.<br /> I learned much more from finding the flaw in my thinking than from finding the flaw in this code. But hey... live and learn, right?</p> <p>Anyway,<br /> happy coding!<br /> Love and Respect!</p>
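<p>For reference, here is a minimal sketch of what <code>PythonCode.py</code> could look like with the corrected <code>print</code> calls. The function names come from the C++ code in the question, but the bodies are illustrative; the important points are the commas in <code>print</code> and returning an integer so that <code>_PyLong_AsInt</code> has something valid to convert:</p> <pre class="lang-py prettyprint-override"><code># PythonCode.py - illustrative sketch, not the original file
def MultiplicationTable(n):
    for i in range(1, 11):
        # Commas (not +) let print mix strings and integers safely
        print(n, &quot;x&quot;, i, &quot;=&quot;, n * i)
    return 0  # the C++ side converts the return value with _PyLong_AsInt

def DoubleValue(n):
    return n * 2
</code></pre>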
python|c++|visual-studio|nullpointerexception|dereference
0
1,909,499
31,457,740
Python Selenium Webdriver Wait.Until is showing error takes exactly 2 arguments 3 given
<p>I am using Python Webdriver. I am having trouble clicking on an Add button. I am using Webdriver Wait, I am getting the following error when i run my code.</p> <pre><code>Traceback (most recent call last): File "C:\Webdriver\ClearCore 501\TestCases\AdministrationPage_TestCase.py", line 164, in test_add_Project administration_page.add_project(project_name) File "C:\Webdriver\ClearCore 501\Pages\admin.py", line 63, in add_project element = wait.until(EC.element_to_be_clickable(By.XPATH, '//span[@class="gwt-InlineLabel" and contains(text(), "Projects")]/following-sibling::*/div[@class="gwt-HTML" and contains(text(), "Add...")]')) TypeError: __init__() takes exactly 2 arguments (3 given) </code></pre> <p>My webdriver code is</p> <pre><code>wait = WebDriverWait(self.driver, 60) element = wait.until(EC.element_to_be_clickable(By.XPATH, '//span[@class="gwt-InlineLabel" and contains(text(), "Projects")]/following-sibling::*/div[@class="gwt-HTML" and contains(text(), "Add...")]')) element.click() </code></pre> <p>What am i doing wrong? The Xpath is valid as i have checked it in Firefox, Firepath. It finds the button.</p>
<p>You have to enclose the locator in a tuple:</p> <pre><code>element = wait.until(EC.element_to_be_clickable((By.XPATH, '//span[@class="gwt-InlineLabel" and contains(text(), "Projects")]/following-sibling::*/div[@class="gwt-HTML" and contains(text(), "Add...")]'))) </code></pre>
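<p>For completeness, a short sketch with the usual imports (assuming the same page-object method and driver as in the question; the XPath is unchanged):</p> <pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(self.driver, 60)
# element_to_be_clickable takes a single (By, value) tuple as its locator
locator = (By.XPATH, '//span[@class="gwt-InlineLabel" and contains(text(), "Projects")]'
                     '/following-sibling::*/div[@class="gwt-HTML" and contains(text(), "Add...")]')
element = wait.until(EC.element_to_be_clickable(locator))
element.click()
</code></pre>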
python|selenium|xpath|selenium-webdriver
4