Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, length 10 to 150) | question (string, length 21 to 64.2k) | answer (string, length 19 to 59.4k) | tags (string, length 5 to 112) | score (int64, -10 to 17.3k)
---|---|---|---|---|---|---
1,906,500 | 66,423,911 |
Input 0 of layer sequential_43 is incompatible with the layer: : expected min_ndim=5, found ndim=4. Full shape received: (None, 32, 32, 100000)
|
<p>My error:</p>
<pre><code>Input 0 of layer sequential_43 is incompatible with the layer:
: expected min_ndim=5, found ndim=4. Full shape received: (None, 32, 32, 100000)
</code></pre>
<p>The shapes of my input:</p>
<p><code>samples.shape</code> gives <code>(32,32,32,100000)</code></p>
<p><code>labels.shape</code> gives <code>(100000,)</code></p>
<p>The code I'm now trying to run is the following:</p>
<pre><code>model = keras.models.Sequential()
layers = tf.keras.layers
model.add(layers.Conv3D(filters=5, kernel_size=(4,4,4), strides=2, activation='relu', input_shape=(8,32,32,32,1)))
model.add(layers.Conv3D(filters=5, kernel_size=(4,4,4), strides=1, activation='relu'))
model.add(layers.Conv3D(filters=5, kernel_size=(4,4,4), strides=1, activation='relu'))
model.add(layers.Conv3D(filters=5, kernel_size=(4,4,4), strides=1, activation='relu'))
model.add(layers.Conv3D(filters=5, kernel_size=(4,4,4), strides=2, activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(1, activation='relu'))
model.compile(optimizer=Adam(learning_rate=0.0001),loss='mape',metrics=['accuracy'])
model.fit(x=samples,y=labels,validation_split=0.1,epochs=1,shuffle=True,verbose=2)
</code></pre>
<p>Everywhere I look the syntax is <code>(batchsize,dim1,dim2,dim3,dim4)</code>. I set the batch size to 8, the data as a 32x32x32 cube, and the colour to 1 dimension. Even if I remove the batch size from the <code>input_shape</code> and pass it to <code>model.fit</code> as <code>batch_size=8</code>, it gives the same error. Does anyone know why?</p>
|
<p>As stated in your question, the order of the dimensions is <code>(batchsize,dim1,dim2,dim3,dim4)</code>, so you need to reshape your <code>samples</code> array to match that order.</p>
<p>You can transpose your array to move the number of samples to the first dimension, and expand it to add the channel dimension (or colour, to reuse your term) of size 1.</p>
<pre><code>>>> samples.shape
TensorShape([32, 32, 32, 100000])
>>> samples = tf.expand_dims(tf.transpose(samples,[3,0,1,2]), axis=-1)
>>> samples.shape
TensorShape([100000, 32, 32, 32, 1])
</code></pre>
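<p>One more detail worth fixing: in Keras, <code>input_shape</code> describes a single sample and must not include the batch size. A minimal sketch of the corrected first layer, assuming the reshaped <code>samples</code> above:</p>
<pre><code>model.add(layers.Conv3D(filters=5, kernel_size=(4,4,4), strides=2, activation='relu',
                        input_shape=(32,32,32,1)))
</code></pre>
<p>The batch size can then be passed separately, e.g. <code>model.fit(..., batch_size=8)</code>.</p>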
|
python|tensorflow|keras|deep-learning|conv-neural-network
| 0 |
1,906,501 | 66,528,828 |
Web scraper using BeautifulSoup not working
|
<p>I have coded a web scraper for Stack Overflow, but it doesn't work. Apparently, in my soup there are NoneType objects that just came from nowhere. Here's the web scraper code:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from bs4 import BeautifulSoup
url = 'https://stackoverflow.com/questions?tab=newest&page='
r = requests.post(url)
soup = BeautifulSoup(r.text, 'lxml').find('div', id='questions').find_all('div')
for summary in soup:  # FIXME: Prints each question twice
    try:
        print(f'Question: {summary.h3.text}')
        print(f'Tags: {", ".join(summary.find("div", class_="tags").text[1:].split(" "))}')
    except Exception as e:
        print(e)  # Prints "'NoneType' has no attribute 'text'" which shouldn't be in the soup
</code></pre>
<p>The error I get (if you didn't read the comment) is "'NoneType' has no attribute 'text'", which confuses me a lot because I don't see how NoneType objects end up in the soup.</p>
<p>I am using:</p>
<ul>
<li>Windows 10</li>
<li>Python 3.8</li>
</ul>
|
<p>There aren't <code>None</code> type objects in <code>soup</code>, rather the calls to</p>
<pre class="lang-py prettyprint-override"><code>... summary.h3 ...
... summary.find("div", class_="tags") ...
</code></pre>
<p>within your loop will return <code>None</code> when <code>summary</code> does not have an <code>h3</code> element, or when there are no <code>div</code> elements with a class attribute of <code>tags</code> in the current <code>summary</code>, respectively.</p>
<p>An <code>Exception</code> is naturally thrown in these cases as you attempt to access the <code>text</code> attribute of a <code>None</code> type element.</p>
<p>It could be helpful to use <a href="https://docs.python.org/3.8/library/functions.html#enumerate" rel="nofollow noreferrer"><code>enumerate</code></a> to show the indices of <code>soup</code> which are causing this <code>Exception</code> to be thrown.</p>
<p>e.g.</p>
<pre class="lang-py prettyprint-override"><code>for i, summary in enumerate(soup):
try:
print(f'Question: {summary.h3.text}')
print(f'Tags: {", ".join(summary.find("div", class_="tags").text[1:].split(" "))}')
except Exception as e:
print(f'Exception raised during handling of soup[{i}]')
print(e)
</code></pre>
<p>It is probably advisable to use the above to gain a deeper understanding of the response structure so that you might best ignore these elements during parsing.</p>
<p>Of course you could always do something lazy like the below to <code>pass</code> over any <code>AttributeError</code>s thrown within the loop but still print any other kind of <code>Exception</code> thrown - though this is obviously not the most efficient approach.</p>
<pre class="lang-py prettyprint-override"><code>for summary in soup:
try:
print(f'Question: {summary.h3.text}')
print(f'Tags: {", ".join(summary.find("div", class_="tags").text[1:].split(" "))}')
except AttributeError as ee:
pass
except Exception as e:
print(e)
</code></pre>
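<p>As a middle ground, a guard clause can skip elements that lack the expected children without relying on exception handling at all. A minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>for summary in soup:
    tags_div = summary.find("div", class_="tags")
    if summary.h3 is None or tags_div is None:
        continue  # not a question summary, so skip it
    print(f'Question: {summary.h3.text}')
    print(f'Tags: {", ".join(tags_div.text[1:].split(" "))}')
</code></pre>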
|
python|python-3.x|web-scraping|beautifulsoup|python-requests
| 0 |
1,906,502 | 65,012,858 |
Print a mixture of text from a file and save output in another text file
|
<p>I have this little python script running fine:</p>
<pre><code>with open('xxxxxxx.txt', 'r') as searchfile:
    for line in searchfile:
        if 'Neuro' in line:
            print line
</code></pre>
<p>But I'd like to print all lines containing 'neuro' or 'brain' or 'whatever' etc., and then save the output to a text file. Any tips much appreciated!</p>
|
<p>You can open a new file with the write flag enabled:</p>
<pre><code>strings = ("neuro", "brain", "whatever")
with open('xxxxxxx.txt', 'r') as searchfile, open('target_file.txt', 'w') as output:
    for line in searchfile:
        if any(s in line.lower() for s in strings):
            output.write(line)
</code></pre>
|
python
| 0 |
1,906,503 | 64,074,101 |
how to handle list that contains emoji in Python3
|
<p>I've been writing a function that takes a list containing only emoji, converts each one to its Unicode representation, and returns the resulting list. My current code seems to pass multiple arguments to <code>ord()</code> and returns an error. I'm new to handling emoji. Could you give me some tips?</p>
<pre><code># main.py
def encode_emoji(emoji_list):
    result = []
    for i in range(len(emoji_list)):
        emoji = str(emoji_list[i])
        d_ord = format(ord(":{}:","#08x").format(emoji))
        result.append(str(d_ord))
        break
    return result

encode_emoji(["","",""])
</code></pre>
<pre><code>Result of above code:

Traceback (most recent call last):
  File "main.py", line 11, in <module>
    encode_emoji(["","",""])
  File "main.py", line 5, in encode_emoji
    d_ord = format(ord(":{}:","#08x").format(emoji))
TypeError: ord() takes exactly one argument (2 given)
</code></pre>
|
<blockquote>
<p>TypeError: ord() takes exactly one argument (2 given)</p>
</blockquote>
<p>I think the error is self-explanatory. That function takes one argument, but you are passing it two:</p>
<ol>
<li>":{}"</li>
<li>"#08x"</li>
</ol>
<p><a href="https://www.w3schools.com/python/ref_func_ord.asp" rel="nofollow noreferrer">Here</a> some docs to read in case you need.</p>
|
python-3.x|unicode|emoji
| 1 |
1,906,504 | 68,527,840 |
Scrapy: return value if element belongs to a specific class
|
<p>I'm currently working on a Web Scraping project to scrape Data from a Newsletter Forum. For this I need to show if the comment was written by a staff member or by a reader/ customer. If the comment was written by a staff member, I want to write "Admin", if not "Customer".</p>
<p>The comment was written by a staff member if the <code>span</code> element contains <code>username--staff</code>. I already tried to use an if statement in the items file, but this didn't work.</p>
<p>As you may have noticed I'm pretty new to this stuff, so please forgive me if this was an incredible dumb question or if I did something wrong. I would be really thankful if someone could help me. :)</p>
<p>Here is my code:</p>
<p>Spider:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>class ComputerBaseSpider(scrapy.Spider):
name = "ComputerBase"
start_urls = [
'https://www.computerbase.de/forum/threads/in-eigener-sache-auch-wir-kommen-an-einem-consent-dialog-nicht-vorbei.1973328/',
]
def parse(self, response):
comments = response.xpath( '//*/div[@class = "message-cell message-cell--main"]')
for comment in comments:
loader = ItemLoader(item = CBCrawlerItem(), selector = comment)
loader.add_xpath('Comment_no','.//ul[@class="message-attribution-opposite message-attribution-opposite--list "]/li[2]/a/text()')
loader.add_xpath('Datetime','.//time[@class="u-dt"]/@datetime')
loader.add_xpath('Comment', './/*/article[@class = "message-body js-selectToQuote"]//div[@class = "bbWrapper"]//text()[not(ancestor::*[@class="js-extraPhrases"])]')
loader.add_xpath('Admin', '..//span[contains(@class, "username--staff")]')
yield loader.load_item()
next_page = response.xpath('//*/a[@class="pageNav-jump pageNav-jump--next"]/@href').extract_first()
if next_page is not None:
yield response.follow(next_page, callback=self.parse)</code></pre>
</div>
</div>
</p>
<p>items file:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>def clean_data(data):
data = data.strip()
return data
def remove_quotes(text):
#strip the unicode quotes
text = text.strip(u'\u201c'u'\u201d')
return text
def user_check(user):
if response.xpath( '//*/div[@class = "message-cell message-cell--main"]..//span[contains(@class, "username--staff")]') is true:
return "Admin"
else:
return "Customer"
class CBCrawlerItem(scrapy.Item):
# define the fields for your item here like:
Comment_no = scrapy.Field(
input_processor=MapCompose(clean_data),
output_processor=TakeFirst()
)
Datetime = scrapy.Field(
input_processor=MapCompose(),
output_processor=TakeFirst()
)
Comment = scrapy.Field(
input_processor=MapCompose(clean_data, remove_quotes),
output_processor=Identity()
)
Admin = scrapy.Field(
input_processor=MapCompose(user_check),
output_processor=TakeFirst()
)
pass</code></pre>
</div>
</div>
</p>
<p>This was my try:</p>
<pre><code>loader.add_xpath('Admin', '..//span[contains(@class, "username--staff")]')

def user_check(user):
    if response.xpath('//*/div[@class = "message-cell message-cell--main"]..//span[contains(@class, "username--staff")]') is true:
        return "Admin"
    else:
        return "Customer"
</code></pre>
|
<p>Thanks for the help. :) The solution was a for loop:</p>
<pre><code>def Check_Author(Authors):
    for Author in Authors:
        return 'Admin'
</code></pre>
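<p>For completeness: the XPath only matches staff comments, so with the above the field simply stays empty for everyone else. An alternative sketch that decides directly in the spider (assuming the <code>comment</code> selector from the question and a plain <code>Admin</code> field with no input processor):</p>
<pre><code>is_staff = comment.xpath('..//span[contains(@class, "username--staff")]').extract_first()
loader.add_value('Admin', 'Admin' if is_staff else 'Customer')
</code></pre>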
|
python|if-statement|xpath|scrapy
| 0 |
1,906,505 | 62,765,301 |
Pandas: dynamically append dataframe column into a running total dataframe within a loop
|
<p>I am writing a simulation and I am trying to append the result of each iteration into a dataframe that keeps track of all the iterations.</p>
<p>While everything works fine with collecting the results, I cannot find a way to append the results into a new column each time. I have been banging my head on that issue for a while now and cannot unblock the problem.</p>
<p>I have built a simplified version of what I am doing to best explain my issue:</p>
<pre><code>import simpy
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import pandas as pd
###dataframe for the simulation
df = pd.DataFrame({'Id' : ['1183', '1187']})
df['average_demand'] = [7426,989]
df['lead_time'] = [1.5, 1.5]
df['sale_price'] = [1.98, 2.01]
df['buy_price'] = [0.11, 0.23]
df['beg_inventory'] = [1544,674]
df['margin'] = df['sale_price'] - df['buy_price']
df['holding_cost'] = 0.2/12
df['aggregate_order_placement_cost'] = 1000
df['review_time'] = 0
df['periods'] = 30
#df['cap_ts'] = 1.5
df['min_ts'] = 1
df['low_demand'] = [300, 30]#,3000,350,220,40,42,40,10,25,240]
df['high_demand'] = [1000, 130]#,12000,700,500,100,90,210,135,200,800]
df['low_sd'] = [160,30]#,3400,100,90,10,5,50,26,45,170]
df['high_sd'] = [400,90]#,5500,200,160,60,50,100,78,113,300]
cap_ts = 0
big_df = pd.DataFrame(df)
for i in df.index:
    for cap_ts in range(1, 12, 1):

        def warehouse_run(env, df):
            df['inventory'] = df['beg_inventory']
            df['balance'] = 0.0
            df['quantity_on_order'] = 0
            df['count_order_placed'] = 0
            df['commands_on_order'] = 0
            df['demand'] = 0
            df['safety_stock'] = 0
            df['stockout_occurence'] = 0
            df['inventory_position'] = 0
            while True:
                interarrival = generate_interarrival()
                yield env.timeout(interarrival)
                df['balance'] -= df['inventory'] * df['holding_cost'] * interarrival
                df['demand'] = generate_demand()
                if df['demand'].loc[i] < df['inventory'].loc[i]:
                    df['balance'] += df['sale_price'] * df['demand']
                    df['inventory'] -= df['demand']
                    print('{:.2f} sold {}'.format(env.now, df['demand'].loc[i]))
                else:
                    df['balance'] += df['sale_price'] * df['inventory']
                    df['inventory'] = 0
                    df['stockout_occurence'] += 1
                    print('{:.2f} demand {} but inventory{}'.format(env.now, df['demand'].loc[i], df['inventory'].loc[i]))
                    print('{:.2f} sold {} ( nb stockout)'.format(env.now, df['stockout_occurence'].loc[i]))
                if df['demand'].loc[i] > df['inventory'].loc[i]:
                    env.process(handle_order(env, df))
                    df['count_order_placed'] += 1
                print("inventory", df['inventory'].loc[i])
                print("number of orders placed", df['count_order_placed'].loc[i])

        def handle_order(env, df):
            df['quantity_ordered'] = cap_ts * df['average_demand']
            df['quantity_on_order'] += df['quantity_ordered']
            df['commands_on_order'] += 1
            print("{:.2f} placed order for {}".format(env.now, df['quantity_ordered'].loc[i]))
            df['balance'] -= df['buy_price'] * df['quantity_ordered'] + df['aggregate_order_placement_cost']
            yield env.timeout(df['lead_time'].loc[i], 0)
            df['inventory'] += df['quantity_ordered']
            df['quantity_on_order'] -= df['quantity_ordered']
            df['commands_on_order'] -= 1
            print('{:.2f} receive order,{} in inventory'.format(env.now, df['inventory'].loc[i]))

        # number of orders per month
        def generate_interarrival():
            return np.random.exponential(1. / 1)

        # quantity of demand per months
        def generate_demand():
            return np.random.randint(df['low_demand'].loc[i], df['high_demand'].loc[i])

        def generate_standard_deviation():
            return np.random.randint(df['low_sd'].loc[i], df['high_sd'].loc[i])

        obs_time = []
        inventory_level = []
        demand_level = []
        safety_stock_level = []
        inventory_position_level = []

        def observe(env, df):
            while True:
                obs_time.append(env.now)
                inventory_level.append(df['inventory'].loc[i])
                demand_level.append(df['demand'].loc[i])
                safety_stock_level.append(df['safety_stock'].loc[i])
                inventory_position_level.append(df['inventory_position'].loc[i])
                yield env.timeout(0.1)

        np.random.seed(0)
        env = simpy.Environment()
        env.process(warehouse_run(env, df))
        env.process(observe(env, df))
        # #RUN FOR 12 MONTHS
        env.run(until=36.0)
        recap = pd.DataFrame(df.loc[i])
        recap = recap.transpose()
        #big_df.append(recap)
        big_df['Iteration {}'.format(i)] = recap
        print(recap)
</code></pre>
<p>So in this code, the issue is in appending the results contained in <code>recap</code> to <code>big_df</code>. Ideally, at the end of the simulation <code>big_df</code> should contain 24 columns, one column of results for each iteration of the simulation. Any help on this would be greatly appreciated, thank you.</p>
<p>UPDATE: thanks to wnsfan40 I have been able to get a df that concatenates the result for each iteration, but <code>big_df</code> resets at each iteration and does not continually append each new df.</p>
<p>The expected output looks something like this:</p>
<pre><code> Id result_columns
0 11198 x
1 11198 x
2 11198 x
3 11198 x
4 11198 x
5 11198 x
6 11198 x
7 11198 x
8 11198 x
9 11198 x
10 11198 x
11 11198 x
12 11187 y
13 11187 y
14 11187 y
15 11187 y
16 11187 y
17 11187 y
18 11187 y
19 11187 y
20 11187 y
21 11187 y
22 11187 y
23 11187 y
</code></pre>
<p>where <code>result_columns</code> is a shorthand for all the columns containing results for each row.</p>
|
<p>Assign <code>df</code>'s index as the index of <code>big_df</code> when it is initialized, using</p>
<pre><code>big_df = pd.DataFrame(index = df.index)
</code></pre>
<p>Try changing from append to assigning a column value, such as</p>
<pre><code>big_df['Iteration {}'.format(i)] = recap
</code></pre>
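<p>If the goal is the stacked layout shown in the expected output (one block of rows per iteration), another option is to collect each <code>recap</code> in a plain Python list and concatenate once at the end, which avoids <code>big_df</code> being overwritten on every pass. A minimal sketch, assuming the loop structure from the question:</p>
<pre><code>results = []
for i in df.index:
    for cap_ts in range(1, 12, 1):
        # ... run the simulation for this Id and cap_ts ...
        recap = df.loc[[i]].copy()  # one-row frame holding this iteration's results
        results.append(recap)

big_df = pd.concat(results, ignore_index=True)
</code></pre>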
|
python|pandas|numpy|dataframe|simpy
| 0 |
1,906,506 | 62,772,719 |
Azure Function - Pandas dataframe to Excel, write to outputBlob stream
|
<p>I am trying to write a DataFrame to an outputBlob from an Azure Function. I'm having trouble figuring out which io stream to use.</p>
<p>My function looks like this:</p>
<pre><code>import io
import logging

import azure.functions as func
import pandas as pd
import xlrd

def main(myblob: func.InputStream, outputBlob: func.Out[func.InputStream]):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n"
                 f"Blob Size: {myblob.length} bytes")

    input_file = xlrd.open_workbook(file_contents = myblob.read())
    df = pd.read_excel(input_file)

    if not df.empty:
        output = io.BytesIO()
        outputBlob.set(df.to_excel(output))
</code></pre>
<p>How do we save the DataFrame to a stream that is recognisable by the Azure Function to write the excel to a Storage Container?</p>
|
<p>If you want to save a DataFrame as excel to Azure blob storage, please refer to the following example.</p>
<ol>
<li>SDK</li>
</ol>
<pre><code>azure-functions==1.3.0
numpy==1.19.0
pandas==1.0.5
python-dateutil==2.8.1
pytz==2020.1
six==1.15.0
xlrd==1.2.0
XlsxWriter==1.2.9
</code></pre>
<ol start="2">
<li>Code</li>
</ol>
<pre><code>import logging
import io
import xlrd
import pandas as pd
import xlsxwriter
import azure.functions as func
async def main(myblob: func.InputStream, outputblob: func.Out[func.InputStream]):
    logging.info(f"Python blob trigger function processed blob \n"
                 f"Name: {myblob.name}\n")

    input_file = xlrd.open_workbook(file_contents = myblob.read())
    df = pd.read_excel(input_file)

    if not df.empty:
        xlb = io.BytesIO()
        writer = pd.ExcelWriter(xlb, engine='xlsxwriter')
        df.to_excel(writer, index=False)
        writer.save()
        xlb.seek(0)
        outputblob.set(xlb)
        logging.info("OK")
</code></pre>
<p><a href="https://i.stack.imgur.com/y9TcB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y9TcB.png" alt="enter image description here" /></a></p>
|
python|azure|azure-functions|azure-blob-storage
| 0 |
1,906,507 | 71,256,642 |
Matplotlib histogram missing bars
|
<p>I am trying to plot a histogram using the code below:</p>
<pre><code>plt.subplots(figsize=(10, 6))
lbins = [0, 85, 170, 255, 340, 425]
plt.hist(flt_data['tree_dbh'], bins=lbins)
plt.gca().set(title='Tree diameter histogram', ylabel='Frequency')
</code></pre>
<p>The output is as follows:
<a href="https://i.stack.imgur.com/Z9goO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z9goO.jpg" alt="Histogram" /></a></p>
<p>The output does not include all the data in the histogram.</p>
<p>The following are the descriptive statistics of the column:</p>
<p><a href="https://i.stack.imgur.com/u4WuU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/u4WuU.png" alt="Descriptive statistics" /></a></p>
|
<p>You could set a logarithmic y-axis to better show the tiny bars. You can also try seaborn's <code>sns.boxenplot(flt_data['tree_dbh'])</code> to better visualize the distribution.</p>
<p>Here is an example with simulated data. <code>df.describe()</code> shows:</p>
<pre><code>count 65000.000000
mean 12.591938
std 13.316495
min 0.000000
25% 2.000000
50% 9.000000
75% 18.000000
max 150.000000
Name: data, dtype: float64
</code></pre>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
np.random.seed(2402)
df = pd.DataFrame({'data': (np.random.normal(3, 2, 65000) ** 2).astype(int)})
df['data'].describe()
lbins = [0, 85, 170, 255, 340, 425]
fig, (ax1, ax2, ax3) = plt.subplots(ncols=3, figsize=(14, 4))
ax1.hist(df['data'], bins=lbins, fc='skyblue', ec='black')
ax1.set_title('histogram with scalar y-axis')
ax2.hist(df['data'], bins=lbins, fc='skyblue', ec='black')
ax2.set_yscale('log')
ax2.set_title('histogram with log y-axis')
sns.boxenplot(x=df['data'], color='skyblue', ax=ax3)
ax3.set_title('sns.boxenplot')
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/LQnoz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LQnoz.png" alt="comparing histplot with sns.boxenplot" /></a></p>
|
python|matplotlib|histogram
| 1 |
1,906,508 | 71,095,652 |
Is it possible to get user input with micro python?
|
<p>How do I get user input with micro python? Whenever I try using</p>
<pre><code>input()
</code></pre>
<p>I get an error saying it is not a valid command. How do I fix this?</p>
|
<p>Use <code>sys.stdin.readline()</code>:</p>
<pre><code>import sys

print("What is the Answer to the Ultimate Question of Life, the Universe, and Everything?")
answer = sys.stdin.readline()
if answer == "42\n":
    print("correct!")
else:
    print("incorrect!")
</code></pre>
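<p>Note the comparison against <code>"42\n"</code>: <code>readline()</code> keeps the trailing newline. A slightly more forgiving sketch strips the whitespace off first:</p>
<pre><code>answer = sys.stdin.readline().strip()
if answer == "42":
    print("correct!")
</code></pre>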
|
micropython
| 1 |
1,906,509 | 70,287,448 |
TypeError: line() got an unexpected keyword argument 'markers'
|
<p>I'm trying to use "markers" for the line function with plotly 5.4.0</p>
<pre><code>import plotly.express as px

df_temp = df_temp[df_temp.date_str.isin(selected_dates)]
fig = px.line(df_temp, x='trialID', y='reaction_time', color='date_str', markers=True,
              title=f'Graph2. Intervention status: {status}')
</code></pre>
<p>i get the error:</p>
<pre><code>TypeError: line() got an unexpected keyword argument 'markers'
</code></pre>
<p>I read that it might be about an update, but I don't think it's that. Does anyone know what it could be?</p>
|
<ul>
<li>You have not provided an MWE, so I have simulated <code>df_temp, selected_dates, status</code>.</li>
<li>With this, it runs without issue with <strong>plotly</strong> 5.4.0.</li>
</ul>
<pre><code>import plotly.express as px
import pandas as pd
import numpy as np

df_temp = pd.DataFrame(
    {
        "date_str": np.repeat(pd.date_range("1-jan-2021", periods=10), 10).astype(str),
        "trialID": np.tile(np.arange(10), 10),
        "reaction_time": np.random.uniform(0, 2, 100),
    }
)
selected_dates = pd.Series(df_temp["date_str"].unique()).sample(5).values
status = "my favorite status"

df_temp = df_temp[df_temp.date_str.isin(selected_dates)]
fig = px.line(
    df_temp,
    x="trialID",
    y="reaction_time",
    color="date_str",
    markers=True,
    title=f"Graph2. Intervention status: {status}",
)
fig
</code></pre>
<p><a href="https://i.stack.imgur.com/WY5ew.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WY5ew.png" alt="enter image description here" /></a></p>
|
python|plotly|markers|plotly-express
| 0 |
1,906,510 | 11,135,871 |
imagefield(upload_to = 'myfolder') not save image in folder - 'myfolder' in django
|
<p>form html:</p>
<pre><code><form action='/register/' method = 'post'>{% csrf_token %}
...
<label>Avatar: </label><input type='file' name='avatar' value='' /><br />
<input type = 'submit' name='submit' value='Sign up' />
</form>
</code></pre>
<p>models.py</p>
<pre><code>class Employee(models.Model):
    ...
    avatar = models.ImageField(upload_to='avatar', blank=True, null=True)
    ...
</code></pre>
<p>and views.py</p>
<pre><code>def register(request):
    success = False
    message = ''
    try:
        newE = Employee.objects.create(...
            avatar = request.POST['avatar'])
        success = True
        message = 'Register successful!'
        return HttpResponse(json.dumps({'success': str(success).lower(), 'message': message}))
    except:
        Employee.objects.filter(email = request.POST['email']).delete()
        message = 'Can\'t create a new account!'
        return HttpResponse(json.dumps({'success': str(success).lower(), 'message': message}))
</code></pre>
<p>settings.py</p>
<pre><code>MEDIA_ROOT = '/home/dotcloud/data/media/'
MEDIA_URL = '/media/'
</code></pre>
<p>When I use the Django admin page, the image is uploaded and saved:
<a href="http://training-hongquan156.dotcloud.com/media/" rel="nofollow">http://training-hongquan156.dotcloud.com/media/</a><strong>avatar</strong>/image.png
But when I use the HTML form to upload photos, the image is not uploaded and saved in the folder '<strong><em>avatar</em></strong>'; only the path is saved:
<a href="http://training-hongquan156.dotcloud.com/media/image.png" rel="nofollow">http://training-hongquan156.dotcloud.com/media/image.png</a>
and I can't load the image.
What's the problem?</p>
|
<p>You should correct</p>
<pre><code><form action="/register/" method="post">
</code></pre>
<p>to</p>
<pre><code><form action="/register/" enctype="multipart/form-data" method="post">
</code></pre>
<p>handle the uploaded file with</p>
<pre><code>request.FILES['avatar']
</code></pre>
<p>then find it in the <code>upload_to</code> directory, <code>/home/dotcloud/data/media/avatar</code>.</p>
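<p>A minimal sketch of the corrected view code, keeping the other fields of the <code>create()</code> call as they were:</p>
<pre><code>newE = Employee.objects.create(
    # ... the other form fields ...
    avatar = request.FILES['avatar'])
</code></pre>
<p>With <code>request.POST['avatar']</code> only the file <em>name</em> arrives as a plain string, which is why nothing ever lands in the <code>avatar</code> folder.</p>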
|
python|django|django-models
| 4 |
1,906,511 | 11,157,853 |
Python list to XML and vice versa
|
<p>I have some python code that I wrote to convert a python list into an XML element. It's meant for interacting with LabVIEW, hence the weird XML array format. Anyways, here's the code:</p>
<pre><code>def pack(data):
    # create the result element
    result = xml.Element("Array")
    # report the dimensions
    ref = data
    while isinstance(ref, list):
        xml.SubElement(result, "Dimsize").text = str(len(ref))
        ref = ref[0]
    # flatten the data
    while isinstance(data[0], list):
        data = sum(data, [])
    # pack the data
    for d in data:
        result.append(pack_simple(d))
    # return the result
    return result
</code></pre>
<p>Now I need to write an unpack() method to convert the packed XML Array back into a python list. I can extract the array dimensions and data just fine:</p>
<pre><code>def unpack(element):
    # retrieve the array dimensions and data
    lengths = []
    data = []
    for entry in element:
        if entry.tag == "Dimsize":
            lengths.append(int(entry.text))
        else:
            data.append(unpack_simple(entry))
    # now what?
</code></pre>
<p>But I am not sure how to unflatten the array. What would be an efficient way to do that?</p>
<p>Edit: Here's what the python list and corresponding XML look like. Note: the arrays are n-dimensional.</p>
<pre><code>data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
</code></pre>
<p>And then the XML version:</p>
<pre><code><Array>
  <Dimsize>2</Dimsize>
  <Dimsize>2</Dimsize>
  <Dimsize>2</Dimsize>
  <I32>
    <Name />
    <Val>1</Val>
  </I32>
  ... 2, 3, 4, etc.
</Array>
</code></pre>
<p>The actual format isn't important though, I just don't know how to unflatten the list from:</p>
<pre><code>data = [1, 2, 3, 4, 5, 6, 7, 8]
</code></pre>
<p>back into:</p>
<pre><code>data = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
</code></pre>
<p>given:</p>
<pre><code>lengths = [2, 2, 2]
</code></pre>
<p>Assume pack_simple() and unpack_simple() do the same as pack() and unpack() for the basic data types (int, long, string, boolean).</p>
|
<p>Start from the inside out:</p>
<pre><code>def group(seq, k):
    return [seq[i:i+k] for i in range(0, len(seq), k)]

unflattened = group(group(data, 2), 2)
</code></pre>
<p>Your example would be easier to check if the dimensions were not all the same, but I think the above code should work.</p>
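<p>For the general case, the same idea can be driven by the <code>lengths</code> list directly, grouping by each dimension except the outermost, innermost first. A sketch:</p>
<pre><code>def unflatten(seq, lengths):
    # apply group() once per inner dimension, starting with the innermost
    for k in reversed(lengths[1:]):
        seq = group(seq, k)
    return seq

data = [1, 2, 3, 4, 5, 6, 7, 8]
print(unflatten(data, [2, 2, 2]))
# [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
</code></pre>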
|
python|xml|labview
| 2 |
1,906,512 | 55,981,931 |
Finding edges represented as lists
|
<p>Suppose we have a bipartite graph in which the nodes are represented as lists. For example, suppose a bipartite graph has the nodes <code>l1 = [1,2,3,4,5]</code> and <code>l2 = [6,7,8,9,10]</code> in its two partitions. The edges are <code>[[1,8,'a'], [4,9,'b']]</code>, represented as a list of lists, as given in figure <a href="https://i.stack.imgur.com/OTjf3.png" rel="nofollow noreferrer">1</a>.</p>
<p><a href="https://i.stack.imgur.com/XbHIN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XbHIN.png" alt="figure 2"></a></p>
<p>If somehow we have merged the nodes of the bipartite graph so that it is now represented, as in <a href="https://i.stack.imgur.com/OTjf3.png" rel="nofollow noreferrer">1</a>, by <code>[[1,2,3], [4, 5]]</code> and <code>[[6,7], [8, 9, 10]]</code>, then in this new graph we would have an edge between two groups if there is an edge between any pair from those groups in the original graph. For example, in the above there would be an <code>a</code> edge between groups <code>[1,2,3]</code> and <code>[8,9,10]</code>, since there is an edge between 1 and 8 in the original; this is depicted in figure <a href="https://i.stack.imgur.com/OTjf3.png" rel="nofollow noreferrer">1</a>. How do we find the edges in the new graph in Python, what would be a suitable data structure representation, and how do we find the resulting edges from the original graph?</p>
<p><a href="https://i.stack.imgur.com/OTjf3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OTjf3.png" alt="figure 1"></a></p>
<p>I have used lists for this, but the problem is that to find these edges I am having to iterate over all the nodes in the new graph to check whether there should be an edge. Is there a more efficient way to do this?</p>
<p>Code I have tried :</p>
<pre><code>l1 = [1,2,3,4,5]
l2 = [6,7,8,9,10]
l3 = [[1,2,3], [4, 5]]
l4 = [[6,7] , [8, 9, 10]]
edges = [[1,8, 'a'], [4,9,'b']]
for e in edges:
    for l in l3:
        for k in l4:
            if e[0] in l and e[1] in k:
                print(e[0], e[1], e[2])
</code></pre>
|
<p>Let's start by getting the index of the group that contains the specific value.</p>
<pre class="lang-py prettyprint-override"><code>def idx_group_in_list(value, list_) -> int:
"""e.g. value=2, list_=[[1,2],[3,4]] -> 0
because the value 2 is in the first (idx=0) inner list"""
for idx, l in enumerate(list_):
if value in l:
return idx
return -1
</code></pre>
<p>In the following, I am working with a dictionary-based solution. This makes it easier to check if edges already exist.</p>
<pre><code>l3 = [[1, 2, 3], [4, 5]]
l4 = [[6, 7], [8, 9, 10]]
edges = [[1, 8, 'a'], [4, 9, 'b']]

new_edges = {}
for e in edges:
    # left and right group indices
    l_idx = idx_group_in_list(e[0], l3)
    r_idx = idx_group_in_list(e[1], l4)
    if (l_idx, r_idx) in new_edges:
        pass  # two edges are squeezed. Maybe add some special stuff here
    new_edges[(l_idx, r_idx)] = e[2]

print(new_edges)

expected_output = {(0, 1): 'a', (1, 1): 'b'}
print(expected_output == new_edges)
</code></pre>
<h2>Edit:</h2>
<p>I've made some very simple performance tests. Most of the code stayed the same; I've just changed the way the lists are generated.</p>
<pre><code>num_nodes_per_side = 1000
left = [[i] for i in range(num_nodes_per_side)]
right = [[i] for i in range(num_nodes_per_side, num_nodes_per_side*2)]
edges = [[i, j, 'a'] for i, j in zip(range(num_nodes_per_side), range(num_nodes_per_side, num_nodes_per_side*2))]
# result for num_nodes_per_side = 3
>>> left
[[0], [1], [2]]
>>> right
[[3], [4], [5]]
>>> edges
[[0, 3, 'a'], [1, 4, 'a'], [2, 5, 'a']]
</code></pre>
<p>This means every left group has one edge to a right group.
In the following are my <code>timeit</code> results, based on <code>num_nodes_per_side</code>:</p>
<ul>
<li>10: 2.0693999999987778e-05</li>
<li>100: 0.0004394410000000404</li>
<li>1000: 0.042664883999999986</li>
<li>10000: 4.629786907</li>
</ul>
|
python|list
| 2 |
1,906,513 | 56,631,334 |
eos_vlan and ansible, "cli command "vlan 777" failed: invalid command"
|
<p>I am a newbie at Ansible and networking; however, I have started a job at a networking company, where we have started using Ansible for automating configurations for network nodes. Juniper devices show no problem, however Arista switches pose issues when trying to relay simple commands.</p>
<p>So what I am trying to achieve is to create VLANs on Arista switches using Ansible.
I am using an eAPI connection over HTTPS (sensitive data replaced with xxx):</p>
<pre><code>show management api http-commands
Enabled: Yes
HTTPS server: running, set to use port 443
HTTP server: shutdown, set to use port 80
VRF: MGMT
Hits: 318
Last hit: 766 seconds ago
Bytes in: 25241
Bytes out: 3985523
Requests: 40
Commands: 80
Duration: 79.082 seconds
User Hits Bytes in Bytes out Last hit
---------- ---------- -------------- --------------- ---------------
xxx 40 25241 3985523 766 seconds ago
URLs
-------------------------------------
Vlan2 : https://xxx
</code></pre>
<p>here is my task:</p>
<pre><code>- name: create vlan
  eos_vlan:
    vlan_id: "{{ vlan_id }}"
    name: "{{ vlan_descr }}"
    state: present
    authorize: yes
    auth_pass: "{{ auth_password }}"
    transport: "{{ transport }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: false
    ssh_keyfile: "{{ ssh_keyfile }}"
</code></pre>
<p>As you can see, I use the authorize password; all variables are stored in another file. The problem is that I receive this error:</p>
<p><code>changed": false, "code": 1002, "msg": "CLI command 2 of 2 'vlan 777' failed: invalid command"</code></p>
<p>This server runs on:</p>
<pre><code> CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-957.21.2.el7.x86_64
Architecture: x86-64
</code></pre>
<p>Using Ansible:</p>
<pre><code> config file = /opt/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, May 2 2019, 20:40:44) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
</code></pre>
<p>Arista version:</p>
<pre><code>Arista DCS-7050QX-32-F
Hardware version: 02.00
Software image version: 4.13.5F
Architecture: i386
</code></pre>
<p>I have tried this in a test environment using the same tasks, but with a different Ansible VM and Arista image:</p>
<pre><code>Arista vEOS
Hardware version:
Serial number:
Software image version: 4.21.1.1F
Architecture: i386
Internal build version: 4.21.1.1F-10146868.42111F
</code></pre>
<p>and all seemed OK; I used the same documentation for setting up users and so on.</p>
<p>I believe it may be due to the user not having privileges, but I include the admin password, and both the eapi and ansible users have network-admin roles.</p>
<p>Here is more detailed output:</p>
<pre><code>fatal: [x.x.x.x]: FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"code": 1002,
"invocation": {
"module_args": {
"aggregate": null,
"associated_interfaces": null,
"auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"authorize": true,
"delay": 10,
"host": "x.x.x.x",
"interfaces": null,
"name": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"provider": {
"auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"authorize": true,
"host": "x.x.x.x",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"ssh_keyfile": "/root/.ssh/id_rsa.pub",
"timeout": 45,
"transport": "eapi",
"use_proxy": true,
"use_ssl": true,
"username": "eapi",
"validate_certs": false
},
"purge": false,
"ssh_keyfile": "/root/.ssh/id_rsa.pub",
"state": "present",
"timeout": 45,
"transport": "eapi",
"url_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"url_username": "eapi",
"use_ssl": true,
"username": "eapi",
"validate_certs": false,
"vlan_id": 777
}
},
"msg": "CLI command 2 of 2 'vlan 777' failed: invalid command"
}
</code></pre>
<p>Another thing worth noticing is that if I use the vlan that already exists, and do not include description (so that no changes are made), Ansible returns success:</p>
<pre><code>ok: [x.x.x.x] => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"commands": [],
"invocation": {
"module_args": {
"aggregate": null,
"associated_interfaces": null,
"auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"authorize": true,
"delay": 10,
"host": "x.x.x.x",
"interfaces": null,
"name": null,
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"provider": {
"auth_pass": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"authorize": true,
"host": "x.x.x.x",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"port": 443,
"ssh_keyfile": "/root/.ssh/id_rsa.pub",
"timeout": 45,
"transport": "eapi",
"use_proxy": true,
"use_ssl": true,
"username": "eapi",
"validate_certs": false
},
"purge": false,
"ssh_keyfile": "/root/.ssh/id_rsa.pub",
"state": "present",
"timeout": 45,
"transport": "eapi",
"url_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"url_username": "eapi",
"use_ssl": true,
"username": "eapi",
"validate_certs": false,
"vlan_id": 777
}
}
}
</code></pre>
<p>Could someone point me towards where I should look for answers, or offer any suggestions at all? I would greatly appreciate that.</p>
<p>Thanks</p>
|
<p>The answer was in my question all along:
it turns out it was due to the software version:</p>
<pre><code> Arista DCS-7050QX-32-F
Hardware version: 02.00
Software image version: 4.13.5F
Architecture: i386
</code></pre>
<p>You need at least <code>4.15.5F</code> to run with <code>Ansible 2.8.1</code>.</p>
|
python|ansible|eos
| 0 |
1,906,514 | 18,196,932 |
python numpy set elements of list by condition
|
<p>I want to set specific elements of my array to a specific value with low overhead.
For example, if I have <code>a = numpy.array([1,2,3,0,4,0])</code>, I want to change every 0 value to 10; in the end I want to have [1, 2, 3, 10, 4, 10].</p>
<p>In Matlab you can do this easily with <code>a(a==0) = 10</code>. Is there an equivalent in numpy?</p>
|
<p>Remarkably similar to Matlab:</p>
<pre><code>>>> a[a == 0] = 10
>>> a
array([ 1, 2, 3, 10, 4, 10])
</code></pre>
<p>There's a really nice <a href="http://wiki.scipy.org/NumPy_for_Matlab_Users#head-13d7391dd7e2c57d293809cff080260b46d8e664" rel="nofollow">"NumPy for Matlab Users"</a> guide at the SciPy website.</p>
<p>I should note, this doesn't work on regular Python lists. NumPy arrays are a different datatype that work a lot more like a Matlab matrix than a Python list in terms of access and math operators.</p>
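<p>If you'd rather keep the original array untouched, <code>numpy.where</code> builds a new array with the same replacement:</p>
<pre><code>>>> numpy.where(a == 0, 10, a)
array([ 1,  2,  3, 10,  4, 10])
</code></pre>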
|
python|list|numpy
| 6 |
1,906,515 | 60,964,775 |
Pandas - create a new string column based on the values of a datetime column
|
<p>The below code creates a new column season that assigns a 'season' depending on the value of the date column. </p>
<pre><code>df2.season = None
df2.loc[(df2.date=='2010-02-22 00:00:00'), 'season'] = '2009/2010'
df2.loc[(df2.date=='2011-02-22 00:00:00'), 'season'] = '2010/2011'
df2.loc[(df2.date=='2014-09-19 00:00:00'), 'season'] = '2014/2015'
df2.loc[(df2.date=='2012-02-22 00:00:00'), 'season'] = '2011/2012'
df2.loc[(df2.date=='2013-09-20 00:00:00'), 'season'] = '2013/2014'
df2.loc[(df2.date=='2015-09-10 00:00:00'), 'season'] = '2015/2016'
df2.head()
</code></pre>
<p>Is there a way of doing this in fewer lines of code with a loop? </p>
<p>I've tried <code>pd.cut()</code> but that throws up an error due to the <code>datetime</code> values in the date column. </p>
<p>I imagine there's a way of zipping date and season values and then using a loop but I don't know how to go about this. Thanks!</p>
|
<p>If the logic is that seasons split at the end of June each year, use:</p>
<pre><code>df["date"] = pd.to_datetime(df["date"])
y = df["date"].dt.year
y1 = y.astype(str)
y2 = (y - 1).astype(str)
y3 = (y + 1).astype(str)
df["season"] = np.where(df["date"].dt.month > 6, y1 + '/' + y3, y2 + '/' + y3)
print (df)
date season
0 2010-02-22 2009/2011
1 2011-02-22 2010/2012
2 2014-09-19 2014/2015
3 2012-02-22 2011/2013
4 2013-09-20 2013/2014
5 2015-09-10 2015/2016
</code></pre>
|
python|pandas
| 0 |
1,906,516 | 69,201,761 |
Python selenium chrome driver SSL: CERTIFICATE_VERIFY_FAILED unable to get local issuer certificate
|
<p>When trying to run undetected-chromedriver I was running into the following error:</p>
<pre><code>urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate
</code></pre>
|
<p>If you're using macOS, go to Macintosh HD > Applications > Python3.9 folder (or whatever version of Python you're using) and double-click the "Install Certificates.command" file.</p>
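<p>Under the hood that command is a small script that installs the <code>certifi</code> package and points Python's default certificate file at its bundle, so on other platforms a roughly equivalent first step is <code>pip install --upgrade certifi</code> (how Python picks up the bundle varies by platform, so treat this as a starting point rather than a guaranteed fix).</p>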
|
python|macos|selenium-chromedriver|undetected-chromedriver
| 3 |
1,906,517 | 72,511,659 |
Values are not going to the database with signals
|
<p>I'm using a signal to register a user in the Profile table every time a User is registered in Django's User model. I'm able to get the username and email; however, the first and last name are not being assigned.</p>
<p>Model Perfil</p>
<pre><code>class Perfil(models.Model):
    username = models.OneToOneField(User, on_delete=models.CASCADE)
    first_name = models.CharField(max_length=30, verbose_name="first name", blank=True)
    last_name = models.CharField(max_length=30, verbose_name="last name", blank=True)
    email = models.EmailField(max_length=75, verbose_name="email address", blank=True)

    def __str__(self):
        return self.username.username
</code></pre>
<p>Signal new_profile</p>
<pre><code>def new_profile(sender, instance, created, **kwargs):
    user_db = User.objects.filter(username__exact=instance).values().first()
    if created:
        Perfil.objects.create(
            username = instance
        )
        Perfil.objects.filter(username__exact=instance).update(
            first_name = user_db['first_name'],
            last_name = user_db['last_name'],
            email = user_db['email']
        )

post_save.connect(new_profile, sender=User)
</code></pre>
<p>Note: I already tried creating with everything together, and my measure of desperation was to try updating after having created.</p>
<p>Screenshot of the table, without the last_name and the first_name but with the email:</p>
<p><a href="https://i.stack.imgur.com/k5HZv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k5HZv.png" alt="enter image description here" /></a></p>
|
<p>I ended up resolving the situation. The problem was in the way Django injects data into the database: apparently it creates the row first and updates it afterwards. Here's the solution:</p>
<pre><code>@receiver(post_save, sender=User)
def new_profile(sender, instance, created, **kwargs):
    if created:
        Perfil.objects.create(
            username = instance,
            email = instance.email,
        )
    else:
        if not hasattr(instance, "Perfil"):
            Perfil.objects.filter(username__exact=instance).update(
                last_name = instance.last_name,
                first_name = instance.first_name,
            )
</code></pre>
|
python|django|django-rest-framework|signals
| 0 |
1,906,518 | 68,403,653 |
Read Multitag XML file and store in data frame in Python
|
<p>Hi, I have a big XML file in the format below. I want all the tags and their values stored in a data frame, with tags as columns and values as records.</p>
<pre><code><?xml version='1.0' ?>
<positioning_record>
  <record>
    <reference_id>20190714121625000001</reference_id>
    <positioning_request_time utc_off="+0530">20190714121625</positioning_request_time>
    <positioning_response_time utc_off="+0530">201990714121625</positioning_response_time>
    <network type="MSG"></network>
    <cell_identity>
      <gsm_cell_identity>
        <mcc>6789</mcc>
        <mnc>5677</mnc>
        <lac>4645645</lac>
        <cid>24564</cid>
      </gsm_cell_identity>
    </cell_identity>
    <client type="Emxxxxr5tiyoerw"></client>
    <requested_QoS>
      <horizontalAccuracy>3456</horizontalAccuracy>
      <responseTime type="delay_tolerant"></responseTime>
    </requested_QoS>
    <MSUE_capability>
      <AGPS_capability type="None"></AGPS_capability>
    </MSUE_capability>
    <response_data type="Success">
      <position_data>
        <PositioningMethodAndUsage method="e-cid" locationReturn="YES">
          <positionresultCode>1</positionresultCode>
          <position_estimate>
            <pointWithUncertaintyEllipse>
              <geographicalCoordinates>
                <latitudeSign type="defgrt"></latitudeSign>
                <latitude>456789</latitude>
                <longitude>987654</longitude>
              </geographicalCoordinates>
              <uncertaintyEllipse>
                <uncertaintySemiMajor>456</uncertaintySemiMajor>
                <uncertaintySemiMinor>876</uncertaintySemiMinor>
                <orientationOfMajorAxis>2345</orientationOfMajorAxis>
              </uncertaintyEllipse>
              <confidence>1234</confidence>
            </pointWithUncertaintyEllipse>
          </position_estimate>
          <obtainedAccuracy>
            <obtainedhorAccuracy>54321</obtainedhorAccuracy>
          </obtainedAccuracy>
          <timeStamp utc_off="+0530">20190714121625</timeStamp>
        </PositioningMethodAndUsage>
      </position_data>
    </response_data>
  </record>
  <record>
    <reference_id>20190714121625000002</reference_id>
    <positioning_request_time utc_off="+0530">20190714121625</positioning_request_time>
    <positioning_response_time utc_off="+0530">20190714121625</positioning_response_time>
    <network type="Gde"></network>
    <cell_identity>
      <gsm_cell_identity>
        <mcc>34567</mcc>
        <mnc>8765</mnc>
        <lac>87654</lac>
        <cid>1234</cid>
      </gsm_cell_identity>
    </cell_identity>
    <client type="Emergency"></client>
    <requested_QoS>
      <horizontalAccuracy>2342</horizontalAccuracy>
      <responseTime type="delay_tolerant"></responseTime>
    </requested_QoS>
    <MSUE_capability>
      <AGPS_capability type="Both"></AGPS_capability>
    </MSUE_capability>
    <response_data type="Failure">
      <cause>5</cause>
      <position_data>
      </position_data>
    </response_data>
  </record>
</positioning_record>
</code></pre>
<p>I want all the tags between <code>record</code> tags to be kept as one record,
including the <code>response_data</code> type (Success or Failure).</p>
<p>I have tried the code below:</p>
<pre><code>import xml.etree.cElementTree as ET
import re
import pandas as pd

tree = ET.parse('sample.xml')
root = tree.getroot()
data = []

def get_tail(root):
    for child in root:
        #print(child.tag, child.text)
        tag = re.sub(r"[\n\t\s]*", "", str(child.tag))
        value = re.sub(r"[\n\t\s]*", "", str(child.text))
        if not value:
            value = "NAN"
        #temp = [tag, value]
        #data.append(temp)
        get_tail(child)
        temp = [tag, value]
        data.append(temp)
    #print(data)

get_tail(root)
print(data)

cols = ("records","reference_id","positioning_request_time","positioning_response_time","network","cell_identity","gsm_cell_identity","mcc","mnc","lac","cid","client","gsm_cell_identity","horizontalAccuracy","responseTime","MSUE_capability","AGPS_capability","response_data","position_data","PositioningMethodAndUsage","positionresultCode","position_estimate","pointWithUncertaintyEllipse","geographicalCoordinates","latitudeSign","latitude","longitude","uncertaintyEllipse","uncertaintySemiMajor","uncertaintySemiMinor","orientationOfMajorAxis","confidence","obtainedAccuracy","obtainedhorAccuracy","timeStamp")
df = pd.DataFrame(data).T
df.columns = cols
print(df)
</code></pre>
<p>It gives me the output, but as 2 rows and multiple columns. I also tried not transposing the DataFrame, but that didn't work either.</p>
<p>The expected output will be:</p>
<pre><code>"records","reference_id","positioning_request_time","positioning_response_time","network","cell_identity","gsm_cell_identity","mcc","mnc","lac","cid","client","gsm_cell_identity","horizontalAccuracy","responseTime","MSUE_capability","AGPS_capability","response_data","position_data","PositioningMethodAndUsage","positionresultCode","position_estimate","pointWithUncertaintyEllipse","geographicalCoordinates","latitudeSign","latitude","longitude","uncertaintyEllipse","uncertaintySemiMajor","uncertaintySemiMinor","orientationOfMajorAxis","confidence","obtainedAccuracy","obtainedhorAccuracy","timeStamp"
'NAN', '20210714121625000001', '20210714121625', '20210714121625', 'None', 'NAN', 'NAN', '405', '67', '5035', '21621', 'None', 'NAN', '13', 'None', 'NAN', 'None', 'NAN', 'NAN', 'NAN', '1', 'NAN', 'NAN', 'NAN', 'None', '2245323', '4112646', 'NAN', '35', '26', '90', '80', 'NAN', '162', '20210714121625'
'NAN', '20210714121625000002', '20210714121625', '20210714121625', 'None', 'NAN', 'NAN', '404', '05', '5823', '2373', 'None', 'NAN', '13', 'None', 'NAN', 'None', 'NAN', '5', 'NAN'
'NAN', '20210714121625000003', '20210714121625', '20210714121625', 'None', 'NAN', 'NAN', '404', '15', '10021', '35846', 'None', 'NAN', '13', 'None', 'NAN', 'None', 'NAN', 'NAN', 'NAN', '1', 'NAN', 'NAN', 'NAN', 'None', '2496594', '3774627', '0', '37', '45', '89', '80', 'NAN', '330', '20210714121625'
'NAN', '20210714121626000004', '20210714121626', '20210714121626', 'None', 'NAN', 'NAN', '404', '15', '61995', '43038', 'None', 'NAN', '25', 'None', 'NAN', 'None', 'NAN', 'NAN', 'NAN', '1', 'NAN', 'NAN', 'NAN', 'None', '2445796', '3802090', 'NAN', '43', '35', '75', '80', 'NAN', '365', '20210714121626'
'NAN', '20210714121626000005', '20210714121626', '20210714121626', 'None', 'NAN', 'NAN', '405', '67', '5035', '21621', 'None', 'NAN', '13', 'None', 'NAN', 'None', 'NAN', 'NAN', 'NAN', '1', 'NAN', 'NAN', 'NAN', 'None', '2245323', '4112646', 'NAN', '35', '26', '90', '80', 'NAN', '162', '20210714121626'
</code></pre>
|
<p>I got the answer with the code below. It provides the output as desired:</p>
<pre><code>data = []
records = {}
start = "<record>"
end = "</record>"
########################################
with open('sample.xml') as file:
    for line in file:
        tag, value = "", ""
        try:
            temp = re.sub(r"[\n\t\s]*", "", line)
            if temp == start:
                records.clear()
            elif temp == end:
                data.append(records.copy())
                print("end records ", records)
            else:
                line = re.sub(r'[^\w]', ' ', temp)  # /\W+/g
                tag = line.split()[0]
                if tag in {"positioning_request_timeutc_off", "positioning_response_timeutc_off", "timeStamputc_off"}:
                    value = line.split()[2]
                else:
                    value = line.split()[1]
                records[tag] = value
        except Exception:
            pass
</code></pre>
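<p>With <code>data</code> collected as a list of per-record dictionaries, building the frame is then a single call, and pandas fills tags missing from a record (e.g. <code>cause</code> on success records) with <code>NaN</code>:</p>
<pre><code>df = pd.DataFrame(data)
print(df.head())
</code></pre>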
|
python-3.x|pandas
| 0 |
1,906,519 | 59,195,280 |
Text choices attribute not recognised
|
<p>I'm trying to create a model with <code>django-multiselectfield</code>, but when I run</p>
<p><code>python manage.py runserver</code></p>
<p>I get an error saying :</p>
<p><code>AttributeError: module 'django.db.models' has no attribute 'TextChoices'</code>.</p>
<p>I successfully installed <code>django-multiselectfield-0.1.10</code> and I can't figure out why I get this error. Thanks for any help!</p>
<pre><code>from django.db import models
from multiselectfield import MultiSelectField

class MovieGenre(models.TextChoices):
    Action = 'Action'
    Horror = 'Horror'
    Comedy = 'Comedy'

genre = MultiSelectField(
    choices=MovieGenre.choices,
    max_choices=3,
    min_choices=1
)

def __str__(self):
    return self.question_text
</code></pre>
|
<p>Which version of Django do you use?</p>
<p><strong>models.TextChoices</strong> is new in Django 3.0</p>
<blockquote>
<p>New in Django 3.0: The TextChoices, IntegerChoices, and Choices
classes were added.</p>
</blockquote>
<p><a href="https://docs.djangoproject.com/en/3.0/ref/models/fields/#django.db.models.Field.choices" rel="nofollow noreferrer">https://docs.djangoproject.com/en/3.0/ref/models/fields/#django.db.models.Field.choices</a></p>
<p>Your error has nothing to do with "django-multiselectfield-0.1.10"</p>
<p>You can also define your choices as:</p>
<pre><code>MOVIE_GENRE = (
    ('action', 'Action'),
    ('horror', 'Horror'),
    ('comedy', 'Comedy'),
)
</code></pre>
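<p>That tuple plugs into the same field definition, e.g.:</p>
<pre><code>genre = MultiSelectField(
    choices=MOVIE_GENRE,
    max_choices=3,
    min_choices=1
)
</code></pre>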
|
python|django|django-models|module|multi-select
| 4 |
1,906,520 | 62,963,441 |
Indexing a matrix using "floor division" and "modulus" operators
|
<p>I saw some Python code where a value from a matrix could be grabbed using an index and both of the Python operators "floor division" and "modulus".</p>
<p>Given the (3,3) matrix below.</p>
<pre class="lang-py prettyprint-override"><code>>>> m = np.array([['0>','1>','2>'],['3>','4>','5>'],['6>','7>','8>']])
>>> m
array([['0>', '1>', '2>'],
['3>', '4>', '5>'],
['6>', '7>', '8>']], dtype='<U2')
</code></pre>
<p>If we "flat" the given matrix we will have:</p>
<pre class="lang-py prettyprint-override"><code>>>> m.reshape(-1)
array(['0>', '1>', '2>', '3>', '4>', '5>', '6>', '7>', '8>'], dtype='<U2')
</code></pre>
<p>Let's suppose that I want to read the value '3>' that is the value in the 4th position in the array.</p>
<p>If I use the index <code>3</code> I can get the respective value from the matrix, using:</p>
<pre class="lang-py prettyprint-override"><code>>>> idx = int(np.where(m.reshape(-1) == '3>')[0])
>>> idx
3
>>> x = idx // m.shape[0]
>>> y = idx % m.shape[0]
>>>
>>> m[x][y]
'3>'
>>>
</code></pre>
<p>I can't see how this works.</p>
<p>What is the explanation for that?</p>
|
<p>If you read the array like a book (left to right, row by row from top to bottom), then each characters position from the start (i.e. the index once flattened) corresponds to an <em>x</em> and <em>y</em> index in the shaped matrix like so:</p>
<p>Position from start of flat: 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 ... etc</p>
<p><em>y</em> index in your matrix <em>m</em> : 0 | 0 | 0 | 1 | 1 | 1 | 2 | 2 | 2 | 3 | 3 ... etc</p>
<p><em>x</em> index in your matrix <em>m</em> : 0 | 1 | 2 | 0 | 1 | 2 | 0 | 1 | 2 | 0 | 1 ... etc</p>
<p>So a pattern makes itself apparent. Consider your problem in reverse.</p>
<p>Given the row and column index, the 'book' (i.e. flattened) index is <em>i = x + ny</em> where n is the number of elements in a row, in your case 3. This general pattern holds anywhere. This equation doesn't really answer your question in full, though hopefully it sheds some light.</p>
<p>We can construct two more equations looking at 2 rows at a time in the collection of 3 rows above.</p>
<p>Looking at <em>id</em> and <em>x</em>, we see that dividing <em>id</em> by the number of elements in a row consistently yields the <em>x</em> address as the remainder.</p>
<p>Similarly looking at <em>id</em> and <em>y</em> we see that the value stays the same for 3 elements, increasing by one in a periodic way. That's just what you get if you keep taking the floor function of sequential integers with respect to 3 (and of course this generalizes.)</p>
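<p>Python's built-in <code>divmod</code> expresses this conversion in one step, since it returns the floor-division quotient and the remainder together:</p>
<pre><code>>>> x, y = divmod(idx, m.shape[0])
>>> m[x][y]
'3>'
</code></pre>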
<p>I hope this answers your question. I learned this logic when constructing a chess board in Excel, and wanted to store pieces with respect to their 'flat index' but computing move possibilities was so much easier with respect to x/y coordinates. Drawing it out and labelling the coordinates made it become apparent so that is what I aimed to do here.</p>
|
python|numpy|deep-learning|linear-algebra|reinforcement-learning
| 2 |
1,906,521 | 62,146,496 |
DataFrame.duplicated() error in function recursion TypeError: duplicated() got multiple values for argument 'keep'
|
<p>I am using Python 3 on Jupyter notebook.</p>
<p><strong>Use case:</strong></p>
<p>Process all the records of an excel file.</p>
<p><strong>Problem</strong>:</p>
<p>The excel file contains duplicate records against the Login id column, while the underlying processing cannot handle a data set with duplicate records for a login id. So I am trying to process the records in batches, filtering and creating sub data sets for the duplicate records using a recursive function.</p>
<p><strong>Test data set</strong>:</p>
<p><a href="https://i.stack.imgur.com/cVAS3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cVAS3.png" alt="Test data set"></a></p>
<p>Python code:</p>
<pre><code># process withdrawal of duplicate entries with a recursive function
def withdrw_user_balance(withdraw_records):
    # create a new data frame holding the duplicates
    duplicateRowsDF = withdraw_records.duplicated(['Login'], keep = "first")
    duplicateRowsDF.head(10)

    # remove duplicates from the original data frame
    withdraw_records.drop_duplicates(['Login'], keep='first', inplace=True)

    # process withdraw request for the non-duplicated row data frame
    # processWithdraw()

    # update the status
    for row in withdraw_records.itertuples():
        withdraw_records.at[row.Index, 'status'] = 1

    # write the processed data frame to excel
    # writeExcel(withdraw_records)

    # clear the object withdraw_records
    withdraw_records = None

    # check the size of the new dataframe; if greater than 0, recurse into
    # withdrw_user_balance() to find more duplicate records, else return
    if duplicateRowsDF.size > 0:
        print("recrusion called")
        withdrw_user_balance(duplicateRowsDF)
    else:
        return True
</code></pre>
<p>The next code is to execute the recursive function:</p>
<pre><code># import the excel file
withdraw_records_excel = pd.read_excel("batch-withdraw-duplicate-login.xlsx")
withdraw_records_excel.tail()
withdraw_records_excel.size
withdrw_user_balance(withdraw_records_excel)
</code></pre>
<p>Out put:</p>
<pre><code>recrusion called
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-bae5054c7199> in <module>
4
5 withdraw_records_excel.size
----> 6 withdrw_user_balance(withdraw_records_excel)
<ipython-input-5-2ecdc243bfb2> in withdrw_user_balance(withdraw_records)
27 if duplicateRowsDF.size > 0:
28 print("recrusion called")
---> 29 withdrw_user_balance(duplicateRowsDF)
30 else:
31 return True;
<ipython-input-5-2ecdc243bfb2> in withdrw_user_balance(withdraw_records)
4 def withdrw_user_balance(withdraw_records):
5 #create new data frame using the
----> 6 duplicateRowsDF = withdraw_records.duplicated(['Login'], keep = "first")
7 duplicateRowsDF.head(10)
8
TypeError: duplicated() got multiple values for argument 'keep'
</code></pre>
<p>I think that the error occurs because the function parameters are <strong>passed by reference</strong> in Python, so the data frame somehow keeps the reference from the previous function call, and hence the <code>DataFrame.duplicated()</code> method throws the error.</p>
<p>To fix that, I nullified the DataFrame object with <strong>withdraw_records = None</strong> after processing, but it didn't help.
Note that I am a beginner in Python, so I may have wrong assumptions about types and object references.</p>
<p>Thanks for you help.</p>
|
<p>Finally, I was able to fix the issue in the code.
Rather than</p>
<pre><code>#create new data frame using the
duplicateRowsDF = withdraw_records.duplicated(['Login'], keep = "first")
</code></pre>
<p>I need to write:</p>
<pre><code>duplicateRowsDF = withdraw_records[withdraw_records.duplicated(['Login'], keep = "first")]
</code></pre>
<p>It will return the sub-DataFrame of duplicate records in each call to the function, which can be passed as a parameter in the consecutive recursive call to create batches of unique records.</p>
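<p>To see concretely why the recursion blew up, here is a minimal sketch (with made-up data): <code>duplicated()</code> alone returns a boolean Series, and recursing on that Series means the next <code>duplicated(['Login'], keep=...)</code> call passes <code>['Login']</code> into its <code>keep</code> parameter, which is exactly the TypeError above.</p>
<pre><code>import pandas as pd

# hypothetical data with a duplicate Login
df = pd.DataFrame({'Login': ['u1', 'u2', 'u1'], 'amount': [10, 20, 30]})

mask = df.duplicated(['Login'], keep='first')   # boolean Series
sub = df[mask]                                  # sub-DataFrame of duplicate rows
print(type(mask).__name__, type(sub).__name__)  # Series DataFrame
# mask.duplicated(['Login'], keep='first') would raise the TypeError,
# because a Series' duplicated() only takes the keep argument
</code></pre>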
<p>I have used the PixieDust debug tool to debug this program, which helped a lot in identifying the issue. See this link:
<a href="https://medium.com/codait/the-visual-python-debugger-for-jupyter-notebooks-youve-always-wanted-761713babc62" rel="nofollow noreferrer">Use pixidust debugger with Jupyter notebook</a></p>
|
python|dataframe|recursion
| 1 |
1,906,522 | 58,641,042 |
Convert .doc/.docx to text with preserving tables
|
<p>I want to convert doc/docx files to text files. My requirement is that tables should as it is.</p>
<p>I have tried Python tika, but it flattens the table, emitting each cell on its own line.</p>
<p>For example table in input doc/docx file</p>
<p><a href="https://i.stack.imgur.com/YImul.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YImul.png" alt="enter image description here" /></a></p>
<p>Above the table is converted to text like below</p>
<pre><code>LANGUAGE
UNDERSTAND
LEARN
HINDI
YES
NO
MARATHI
YES
NO
ENGLISH
YES
NO
</code></pre>
<p>Desired output is like(preserve table format)</p>
<pre><code> LANGUAGE UNDERSTAND LEARN
HINDI YES NO
MARATHI YES NO
ENGLISH YES NO
</code></pre>
<p>Please let me know if it is possible.</p>
|
<p>As @ilmiacs suggested <code>pandoc</code> can do this for you.<br>
Using <code>python</code> you need to install <code>pypandoc</code>.<br>
Test document:</p>
<p><a href="https://i.stack.imgur.com/9723i.png" rel="noreferrer"><img src="https://i.stack.imgur.com/9723i.png" alt="enter image description here"></a></p>
<pre><code>import pypandoc
print(pypandoc.convert_file("Untitled 1.docx", "plain+simple_tables", format="docx", extra_args=(), encoding='utf-8', outputfile=None))
</code></pre>
<p>gives you:</p>
<p><a href="https://i.stack.imgur.com/VmvAp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VmvAp.png" alt="enter image description here"></a></p>
<p>Clearly, you also have the option of using <code>subprocess</code> to bang this onto the command line.</p>
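<p>For reference, a rough subprocess sketch (assuming <code>pandoc</code> is on your PATH; the file names are placeholders):</p>
<pre><code>import subprocess

# docx -&gt; plain text with simple tables, written to out.txt
subprocess.run(["pandoc", "Untitled 1.docx", "-t", "plain+simple_tables",
                "-o", "out.txt"], check=True)
</code></pre>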
|
python|file|text|apache-tika
| 5 |
1,906,523 | 31,432,956 |
How to export a mesh's movement from Blender?
|
<p>I have a simple UV Sphere in Blender which moves from point A to point B (I've set it up by pressing "I" button on location A, clicked on LocRot then clicked on LocRot on point B as well).</p>
<p>I'd like to export its movement.</p>
<p>For exporting a camera's movement I use
Blender Foundation\Blender\2.74\scripts\addons\io_anim_camera.py
(this is the Cameras & Markers (.py) exporter).</p>
<p>It produces the following output:</p>
<pre><code>...
...
# new frame
scene.frame_set(1 + frame)
obj = cameras['Camera.003']
obj.location = -272.1265563964844, -155.54611206054688, -121.49121856689453 <-- here is the current coordinate of the camera, this is what I need for spheres
obj.scale = 0.9999998807907104, 0.9999998807907104, 0.9999999403953552
obj.rotation_euler = -1.6492990255355835, 0.00035389664117246866, 0.009288366883993149
obj.keyframe_insert('location')
obj.keyframe_insert('scale')
obj.keyframe_insert('rotation_euler')
data = obj.data
data.lens = 35.0
data.keyframe_insert('lens')
...
...
</code></pre>
<p>I'm looking for the same thing for meshes.
So here is the code of the basic exporter for cameras:</p>
<pre><code># ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# <pep8 compliant>
bl_info = {
"name": "Export Camera Animation",
"author": "Campbell Barton",
"version": (0, 1),
"blender": (2, 57, 0),
"location": "File > Export > Cameras & Markers (.py)",
"description": "Export Cameras & Markers (.py)",
"warning": "",
"wiki_url": "http://wiki.blender.org/index.php/Extensions:2.6/Py/"
"Scripts/Import-Export/Camera_Animation",
"support": 'OFFICIAL',
"category": "Import-Export",
}
import bpy
def write_cameras(context, filepath, frame_start, frame_end, only_selected=False):
data_attrs = (
'lens',
'shift_x',
'shift_y',
'dof_distance',
'clip_start',
'clip_end',
'draw_size',
)
obj_attrs = (
'hide_render',
)
fw = open(filepath, 'w').write
scene = bpy.context.scene
cameras = []
for obj in scene.objects:
if only_selected and not obj.select:
continue
if obj.type != 'CAMERA':
continue
cameras.append((obj, obj.data))
frame_range = range(frame_start, frame_end + 1)
fw("import bpy\n"
"cameras = {}\n"
"scene = bpy.context.scene\n"
"frame = scene.frame_current - 1\n"
"\n")
for obj, obj_data in cameras:
fw("data = bpy.data.cameras.new(%r)\n" % obj.name)
for attr in data_attrs:
fw("data.%s = %s\n" % (attr, repr(getattr(obj_data, attr))))
fw("obj = bpy.data.objects.new(%r, data)\n" % obj.name)
for attr in obj_attrs:
fw("obj.%s = %s\n" % (attr, repr(getattr(obj, attr))))
fw("scene.objects.link(obj)\n")
fw("cameras[%r] = obj\n" % obj.name)
fw("\n")
for f in frame_range:
scene.frame_set(f)
fw("# new frame\n")
fw("scene.frame_set(%d + frame)\n" % f)
for obj, obj_data in cameras:
fw("obj = cameras['%s']\n" % obj.name)
matrix = obj.matrix_world.copy()
fw("obj.location = %r, %r, %r\n" % matrix.to_translation()[:])
fw("obj.scale = %r, %r, %r\n" % matrix.to_scale()[:])
fw("obj.rotation_euler = %r, %r, %r\n" % matrix.to_euler()[:])
fw("obj.keyframe_insert('location')\n")
fw("obj.keyframe_insert('scale')\n")
fw("obj.keyframe_insert('rotation_euler')\n")
# only key the angle
fw("data = obj.data\n")
fw("data.lens = %s\n" % obj_data.lens)
fw("data.keyframe_insert('lens')\n")
fw("\n")
# now markers
fw("# markers\n")
for marker in scene.timeline_markers:
fw("marker = scene.timeline_markers.new(%r)\n" % marker.name)
fw("marker.frame = %d + frame\n" % marker.frame)
# will fail if the cameras not selected
if marker.camera:
fw("marker.camera = cameras.get(%r)\n" % marker.camera.name)
fw("\n")
from bpy.props import StringProperty, IntProperty, BoolProperty
from bpy_extras.io_utils import ExportHelper
class CameraExporter(bpy.types.Operator, ExportHelper):
"""Save a python script which re-creates cameras and markers elsewhere"""
bl_idname = "export_animation.cameras"
bl_label = "Export Camera & Markers"
filename_ext = ".py"
filter_glob = StringProperty(default="*.py", options={'HIDDEN'})
frame_start = IntProperty(name="Start Frame",
description="Start frame for export",
default=1, min=1, max=300000)
frame_end = IntProperty(name="End Frame",
description="End frame for export",
default=250, min=1, max=300000)
only_selected = BoolProperty(name="Only Selected",
default=True)
def execute(self, context):
write_cameras(context, self.filepath, self.frame_start, self.frame_end, self.only_selected)
return {'FINISHED'}
def invoke(self, context, event):
self.frame_start = context.scene.frame_start
self.frame_end = context.scene.frame_end
wm = context.window_manager
wm.fileselect_add(self)
return {'RUNNING_MODAL'}
def menu_export(self, context):
import os
default_path = os.path.splitext(bpy.data.filepath)[0] + ".py"
self.layout.operator(CameraExporter.bl_idname, text="Cameras & Markers (.py)").filepath = default_path
def register():
bpy.utils.register_module(__name__)
bpy.types.INFO_MT_file_export.append(menu_export)
def unregister():
bpy.utils.unregister_module(__name__)
bpy.types.INFO_MT_file_export.remove(menu_export)
if __name__ == "__main__":
register()
</code></pre>
<p>So is it possible? How can I change this code to export the movement of spheres?</p>
|
<p>Well, if you want to export the movement not as a Python script but in text or CSV format, then look at the following code snippet.</p>
<p>CSV export can work for camera like this:</p>
<pre><code>fw("# frame camera x y z qx qy qz qw\n")
for f in frame_range:
scene.frame_set(f)
for obj, obj_data in cameras:
matrix = obj.matrix_world.copy()
fw("%d," % f)
fw("%s," % obj.name)
fw("%r,%r,%r," % matrix.to_translation()[:])
fw("%r,%r,%r,%r" % matrix.to_quaternion()[:])
fw("\n")
from bpy.props import StringProperty, IntProperty, BoolProperty
from bpy_extras.io_utils import ExportHelper
class CameraExporter(bpy.types.Operator, ExportHelper):
"""Export camera trajectory to a file"""
bl_idname = "export_trajectory.cameras"
bl_label = "Export camera trajectory"
filename_ext = ".csv"
filter_glob = StringProperty(default="*.csv", options={'HIDDEN'})
</code></pre>
<p>In my opinion it should work similarly for any other objects in the scene, though I have not yet tried it for anything but cameras; see the sketch below.</p>
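<p>As an untested sketch of that idea, the camera-collecting loop from the original exporter could be pointed at meshes instead (variable names follow the original script; a UV Sphere is an object of type <code>'MESH'</code>):</p>
<pre><code>meshes = []
for obj in scene.objects:
    if only_selected and not obj.select:
        continue
    if obj.type != 'MESH':  # was 'CAMERA' in the original exporter
        continue
    meshes.append(obj)

fw("# frame object x y z qx qy qz qw\n")
for f in frame_range:
    scene.frame_set(f)
    for obj in meshes:
        matrix = obj.matrix_world.copy()
        fw("%d," % f)
        fw("%s," % obj.name)
        fw("%r,%r,%r," % matrix.to_translation()[:])
        fw("%r,%r,%r,%r" % matrix.to_quaternion()[:])
        fw("\n")
</code></pre>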
|
python|blender
| 1 |
1,906,524 | 31,437,083 |
Python: Get data BeautifulSoup
|
<p>I need help with BeautifulSoup, I'm trying to get the data:</p>
<p><code><font face="arial" font-size="16px" color="navy">001970000521</font></code></p>
<p>They are many and I need to get the value inside "font"</p>
<pre><code><div id="accounts" class="elementoOculto">
<table align="center" border="0" cellspacing=0 width="90%"> <tr><th align="left" colspan=2> permisos </th></tr><tr>
<td colspan=2>
<table width=100% align=center border=0 cellspacing=1>
<tr>
<th align=center width="20%">cuen</th>
<th align=center>Mods</th>
</tr>
</table>
</td>
</tr>
</table>
<table align="center" border="0" cellspacing=1 width="90%">
<tr bgcolor="whitesmoke" height="08">
<td align="left" width="20%">
<font face="arial" font-size="16px" color="navy">001970000521</font>
</td>
<td>......
<table align="center" border="0" cellspacing=1 width="90%">
<tr bgcolor="whitesmoke" height="08">
<td align="left" width="20%">
<font face="arial" font-size="16px" color="navy">001970000521</font>
</td>
</code></pre>
<p>I hope you can help me, thanks.</p>
|
<p>How about this?</p>
<pre><code>from bs4 import BeautifulSoup
str = '''<div id="accounts" class="elementoOculto">
<table align="center" border="0" cellspacing=0 width="90%"> <tr><th align="left" colspan=2> permisos </th></tr><tr>
<td colspan=2>
<table width=100% align=center border=0 cellspacing=1>
<tr>
<th align=center width="20%">cuen</th>
<th align=center>Mods</th>
</tr>
</table>
</td>
</tr>
</table>
<table align="center" border="0" cellspacing=1 width="90%">
<tr bgcolor="whitesmoke" height="08">
<td align="left" width="20%">
<font face="arial" font-size="16px" color="navy">001970000521</font>
</td>
<td>......
<table align="center" border="0" cellspacing=1 width="90%">
<tr bgcolor="whitesmoke" height="08">
<td align="left" width="20%">
<font face="arial" font-size="16px" color="navy">001970000521</font>
</td>'''
bs = BeautifulSoup(html, 'html.parser')
print(bs.font.string)
</code></pre>
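<p>Since you mention there are many such values, a small follow-up sketch collects them all instead of just the first:</p>
<pre><code>values = [f.string for f in bs.find_all('font')]
print(values)  # ['001970000521', '001970000521'] for the snippet above
</code></pre>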
|
python|web-scraping|beautifulsoup
| 1 |
1,906,525 | 31,614,068 |
Not able to set terminal title with Django runserver
|
<p>I am using the following <code>set-title</code> function from this post on <a href="https://unix.stackexchange.com/a/186167/20895">How to rename terminal tab title in gnome-terminal?</a></p>
<pre><code>function set-title() {
if [[ -z "$ORIG" ]]; then
ORIG=$PS1
fi
TITLE="\[\e]2;$@\a\]"
PS1=${ORIG}${TITLE}
}
</code></pre>
<p>I have the following in my <code>.env</code> file</p>
<pre><code>set-title SERVER
python manage.py runserver
</code></pre>
<p>I am running the above as follows:</p>
<pre><code>. .env
</code></pre>
<p>The problem is that it doesn't work when <code>python manage.py runserver</code> is present. But when I kill the currently running server instance, the title changes automatically to what I want.</p>
<p>Why does this happen, when <code>set-title</code> clearly executes first?</p>
|
<p>I usually have a script that launches a <strong>new</strong> gnome terminal using the <code>gnome-terminal</code> command, optionally with several tabs (for example, if I need two django servers to run in parallel, or a django server and a DB console).</p>
<p>The drawback is that this is a new terminal; however, if you need several tabs it can start a fresh terminal with all of them in place, and you only need to write your script once.</p>
<p>Man page is <a href="http://manpages.ubuntu.com/manpages/raring/man1/gnome-terminal.1.html" rel="nofollow">here</a></p>
<pre><code> gnome-terminal --tab -t django1 --working-directory="dir1" -e "python manage.py runserver 8000" \
--tab -t django2 --working-directory="dir2" -e "python manage.py runserver 8002"
</code></pre>
|
python|django|bash
| 1 |
1,906,526 | 15,965,770 |
Make a web crawler in python to download pdf
|
<p>I want to make a web crawler using Python and then download PDF files from the crawled URLs.
Can anyone help me? How do I start?</p>
|
<p>A good site to start is <a href="https://scraperwiki.com/" rel="nofollow">ScraperWiki</a>, a site where you can write and execute scrapers/crawlers online. Besides other languages it supports <a href="https://scraperwiki.com/docs/python/" rel="nofollow">Python</a>. It provides a lot of useful <a href="https://scraperwiki.com/docs/python/python_intro_tutorial/" rel="nofollow">tutorials</a> and libraries for a fast start.</p>
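<p>As a bare-bones starting point, here is a minimal sketch (the URL is a placeholder) that fetches one page with <code>requests</code>, finds links ending in <code>.pdf</code> with BeautifulSoup, and downloads them:</p>
<pre><code>import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

start_url = "http://example.com/"  # placeholder: page to crawl
html = requests.get(start_url).text
soup = BeautifulSoup(html, "html.parser")

for a in soup.find_all("a", href=True):
    href = urljoin(start_url, a["href"])  # resolve relative links
    if href.lower().endswith(".pdf"):
        pdf = requests.get(href)
        with open(os.path.basename(href), "wb") as f:
            f.write(pdf.content)
</code></pre>
<p>A real crawler would also keep a queue of pages to visit and a set of seen URLs, but the download step stays the same.</p>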
|
python|pdf|web-crawler
| 2 |
1,906,527 | 49,228,842 |
Checking for duplicates in list of list and sorting them
|
<p>I have a table containing:</p>
<pre><code>table = [[5, 7],[4, 3],[3, 3],[2, 3],[1, 3]]
</code></pre>
<p>The first value in each list (5, 4, 3, 2, 1) can be said to be the ID of a person, and the second value (7, 3, 3, 3, 3) would be a score. What I'm trying to do is detect duplicate values in the second column, which in this case is the 3s. Because four lists have 3 as the second value, I now want to sort those lists by their first value.</p>
<p>In the table, notice that [1,3] has 1 as the first value; hence, it should take [4,3]'s position in the table, and [2,3] should replace [3,3] in turn.</p>
<pre><code>Expected output: [[5,7],[1,3],[2,3],[3,3],[4,3]]
</code></pre>
<p>I attempted:</p>
<pre><code>def checkDuplicate(arr):
i = 0
while (i<len(arr)-1):
if arr[i][1] == arr[i+1][1] and arr[i][0] > arr[i+1][0]:
arr[i],arr[i+1] = arr[i+1],arr[i]
i+=1
return arr
checkDuplicate(table)
</code></pre>
<p>The code doesn't produce the output I want, and I would appreciate some help on this matter.</p>
|
<p>You can use <code>sorted</code> with a key.</p>
<pre><code>table = [[5, 7], [4, 3], [3, 3], [2, 3], [1, 3]]
# Sorts by second index in decreasing order and then by first index in increasing order
sorted_table = sorted(table, key=lambda x: (-x[1], x[0]))
# sorted_table: [[5, 7], [1, 3], [2, 3], [3, 3], [4, 3]]
</code></pre>
|
python
| 5 |
1,906,528 | 70,923,824 |
Prefect Local Agent Troubleshooting
|
<p>I am running a flow on a local agent in docker on EC2 (not ECS). Prefect Cloud is configured to provide a UI for monitoring. The flow executes every 5mins, and for about an hour or so, it does just fine. However, the flows eventually fall behind, before failing to execute entirely, and I get the 'cannot find a heartbeat' error.</p>
<p>Is there a way to run the local agent continuously? Why does it suddenly stop?</p>
<p>I apologise for the simplicity of the question, but I am new to Prefect.</p>
<p>Cheers</p>
|
<p>When the local or docker agent runs within a container itself (rather than as a local process), your flow runs end up deployed not as individual containers but within the agent container. You effectively have a single agent container spinning up new containers inside itself (docker in docker), which may have many unintended consequences, such as issues with scale and resource utilization.</p>
<p>To solve this, I would recommend running the local agent as a local process monitored by supervisord; a minimal sketch follows. This <a href="https://docs.prefect.io/orchestration/agents/local.html#using-with-supervisor" rel="nofollow noreferrer">documentation page</a> provides more information.</p>
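<p>As a rough sketch, a supervisord program entry for the agent could look like this (the command and options are assumptions for a Prefect 1.x local agent; check the linked docs for your setup):</p>
<pre><code>[program:prefect-agent]
command=prefect agent local start
autostart=true
autorestart=true
</code></pre>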
<p>If you want more environment isolation for this agent process, you can run it within a virtual environment.</p>
<p>To learn more about flow's heartbeat, check out <a href="https://docs.prefect.io/orchestration/agents/local.html#using-with-supervisor" rel="nofollow noreferrer">this page</a> and <a href="https://discourse.prefect.io/t/can-i-run-a-docker-agent-in-a-container/50" rel="nofollow noreferrer">this one</a> for running local or docker agent in a container.</p>
|
python|prefect
| 2 |
1,906,529 | 2,970,871 |
Is processing a dead project?
|
<p>Look at the last updated release, Python 2.5??</p>
<p><a href="http://pypi.python.org/pypi/processing" rel="nofollow noreferrer">http://pypi.python.org/pypi/processing</a></p>
|
<p><a href="http://www.python.org/dev/peps/pep-0371/" rel="nofollow noreferrer">It became</a> <a href="http://docs.python.org/library/multiprocessing.html" rel="nofollow noreferrer"><code>multiprocessing</code></a>.</p>
|
python
| 5 |
1,906,530 | 3,128,385 |
simplexml_load_string equivalent Python / Django
|
<p>I'm trying to find an XML-parsing function (like simplexml_load_string) in Python, but with no success :/</p>
<p>Let's say I have xml in a string</p>
<pre><code>my_xml_string = """
<root>
<content>
<one>A value</one>
<two>Here goes for ...</two>
</content>
</root>"""
</code></pre>
<p>To read a value in PHP I would normally do something like this:</p>
<pre><code>// read into object
$xml = simplexml_load_string(my_xml_string);
// print some values
echo $xml->root->content->one
echo $xml->root->content->two
</code></pre>
<p>Is there an equivalent object in Python/Django?</p>
<p>Thanks</p>
|
<p>The nearest is probably <a href="http://docs.python.org/library/xml.etree.elementtree.html" rel="nofollow noreferrer">ElementTree</a> which is part of the python standard library (or an extended version <a href="http://codespeak.net/lxml/" rel="nofollow noreferrer">lxml</a>)</p>
<pre><code>import xml.etree.ElementTree
element = xml.etree.ElementTree.XML(my_xml_string)
</code></pre>
<p>sets up element which is of class Element and this can be treated as lists of XML elements</p>
<p>e.g.</p>
<pre><code># for your example
print(element[0][0].tag)   # one
print(element[0][0].text)  # A value
print(element[0][1].text)  # Here goes for ...
</code></pre>
<p>You can also search by XPaths if you want to use names.</p>
<p>lxml also has an <a href="http://codespeak.net/lxml/objectify.html" rel="nofollow noreferrer">objectify</a> model that allows access of elements as "if you were dealing with a normal Python object hierarchy." Which matches the php useage more exactly</p>
|
python|xml|django|simplexml
| 4 |
1,906,531 | 68,012,578 |
I'm getting this error in python project with Tensorflow object_detection
|
<p>Create TF records</p>
<p>Code:</p>
<pre><code>!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x {IMAGE_PATH + '/train'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/train.record'}
!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x{IMAGE_PATH + '/test'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/test.record'}
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "Tensorflow/scripts/generate_tfrecord.py", line 29, in <module>
from object_detection.utils import dataset_util, label_map_util
ModuleNotFoundError: No module named 'object_detection'
Traceback (most recent call last):
File "Tensorflow/scripts/generate_tfrecord.py", line 29, in <module>
from object_detection.utils import dataset_util, label_map_util
ModuleNotFoundError: No module named 'object_detection'
</code></pre>
<p>I followed Nicholas Renotte's video
<a href="https://youtu.be/pDXdlXlaCco" rel="nofollow noreferrer">https://youtu.be/pDXdlXlaCco</a> @ 22:30.</p>
<p>I downloaded the TensorFlow object_detection models via git clone from <a href="https://github.com/tensorflow/models" rel="nofollow noreferrer">https://github.com/tensorflow/models</a></p>
<p>And then I ran</p>
<pre><code>python -m install pip
</code></pre>
|
<p>If you have installed the relevant dependencies, like protoc, as below:</p>
<pre><code>%cd /content/models/research
!protoc object_detection/protos/*.proto --python_out=.
</code></pre>
<p>then you need to set up the environment like this:</p>
<pre><code>!pip install tf_slim
import os
pwd = os.getcwd()
os.environ['PYTHONPATH'] += f':{pwd}:{pwd}/slim'
</code></pre>
|
python|tensorflow|pip|object-detection-api|tfrecord
| 0 |
1,906,532 | 67,064,959 |
Migration error in django, what error am I running into?
|
<p>I am attempting to migrate my changes with the command</p>
<p><code>python manage.py migrate</code></p>
<p>I have already ran the command,</p>
<p><code>python manage.py makemigrations accounts</code></p>
<p>which successfully returns,</p>
<pre><code>Migrations for 'accounts':
accounts/migrations/0001_initial.py
- Create model CustomUser
</code></pre>
<p>Then in attempting to run <code>python manage.py migrate</code> I recieve this error...</p>
<pre><code>Traceback (most recent call last):
File "/home/mycroft/C/news/manage.py", line 22, in <module>
main()
File "/home/mycroft/C/news/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/home/mycroft/.local/share/virtualenvs/news-ToLZWqxe/lib/python3.9/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/home/mycroft/.local/share/virtualenvs/news-ToLZWqxe/lib/python3.9/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/mycroft/.local/share/virtualenvs/news-ToLZWqxe/lib/python3.9/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/mycroft/.local/share/virtualenvs/news-ToLZWqxe/lib/python3.9/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/home/mycroft/.local/share/virtualenvs/news-ToLZWqxe/lib/python3.9/site-packages/django/core/management/base.py", line 85, in wrapped
res = handle_func(*args, **kwargs)
File "/home/mycroft/.local/share/virtualenvs/news-ToLZWqxe/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 95, in handle
executor.loader.check_consistent_history(connection)
File "/home/mycroft/.local/share/virtualenvs/news-ToLZWqxe/lib/python3.9/site-packages/django/db/migrations/loader.py", line 302, in check_consistent_history
raise InconsistentMigrationHistory(
django.db.migrations.exceptions.InconsistentMigrationHistory: Migration admin.0001_initial is applied before its dependency accounts.0001_initial on database 'default'.
</code></pre>
<p>I am very new to django and can't seem to figure out just what I am doing wrong. Any help would be greatly appreciated!</p>
|
<p>Maybe you made some changes to your models.py file and these changes aren't sitting well with the initial db structure. If you can track and undo the recent changes it may help; otherwise you'll have to drop your database and delete all migration files in your <code>accounts</code> app, then repeat the makemigrations, migrate, and createsuperuser processes.</p>
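<p>Concretely, that reset could look like the following (assuming the default SQLite database; back up anything you need first):</p>
<pre><code>rm db.sqlite3
rm accounts/migrations/0*.py
python manage.py makemigrations accounts
python manage.py migrate
python manage.py createsuperuser
</code></pre>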
|
python|django|django-rest-framework|web-applications|pipenv
| 0 |
1,906,533 | 66,812,137 |
How to write to a csv file in a next column in python
|
<p>I currently have a csv file which has four columns</p>
<p><a href="https://i.stack.imgur.com/D3ZFz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D3ZFz.png" alt="enter image description here" /></a></p>
<p>The next time I write to the file, I want to start writing from E1. I've searched for solutions but none seem to work.</p>
<pre><code>with open(file_location,"w") as csv_file:
csv_writer = csv.writer(csv_file)
csv_writer.writerows(list_of_parameters)
</code></pre>
<p>where list_of_parameters is a zip of all the four columns.</p>
<pre><code> list_of_parameters = zip(timestamp_list,request_count_list,error_rate_list,response_time_list)
</code></pre>
<p>Anyone have any idea to implement this? Appreciate your help.</p>
|
<p>The Python library Pandas is very good for these sorts of things. Here is how you could do this in Pandas:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Read file as a DataFrame
df = pd.read_csv(file_location)
# Create a second DataFrame with your new data,
# you want the dict keys to be your column names
new_data = pd.DataFrame({
'Timestamp': timestamp_list,
'Key Request': request_count_list,
'Failure Rate': error_rate_list,
'Response': response_time_list
})
# Concatenate the existing and new data
# along the column axis (adding to E1)
df = pd.concat([df, new_data], axis=1)
# Save the combined data
df.to_csv(file_location, index=False)
</code></pre>
|
python|csv
| 1 |
1,906,534 | 63,779,073 |
How to find all combination from multiple sets of elements?
|
<p>I want to form all possible 6-element combinations out of 3 groups of elements. The composition is as follows:</p>
<ul>
<li>take 2 from group A (<code>a,b,c,d,e</code>),</li>
<li>take 2 from group B (<code>f,g,h,i,j</code>), and</li>
<li>take 2 from group C (<code>k,l,m,n,o,p,q,r,s,t</code>).</li>
</ul>
<p>After this, I want to export the result as a CSV file that looks like this:</p>
<pre><code>Column 1 Column 2 Column 3 Column 4 Column 5 Column 6
a b f g k l
</code></pre>
<p>and so on...</p>
<p>P.S. Numbers are fine if letters are not allowed.
I've tried using <code>itertools</code> and lists, but got nothing to work. I hope you can help me out.</p>
|
<p>Here is the code for your combinations:</p>
<pre class="lang-py prettyprint-override"><code># Print CSV header
print("Column 1,Column 2,Column 3,Column 4,Column 5,Column 6")
A = ['a', 'b', 'c', 'd', 'e']
B = ['f', 'g', 'h', 'i', 'j']
C = ['k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't']
for a1 in range(0, len(A)):
for a2 in range(a1 + 1, len(A)):
for b1 in range(0, len(B)):
for b2 in range(b1 + 1, len(B)):
for c1 in range(0, len(C)):
for c2 in range(c1 + 1, len(C)):
print(A[a1] + "," + A[a2] + "," + B[b1] + "," + B[b2] + "," + C[c1] + "," + C[c2])
</code></pre>
<p>It will print 4500 lines for the input specified in the question.<br />
It assumes the order does not matter, so if <code>(a, b)</code> appears, <code>(b, a)</code> will not. It also never outputs a letter twice, so <code>(a, a)</code> will not appear.</p>
<p>If you want to allow double letters, replace <code>a1 + 1</code> with <code>a1</code> (for <code>b</code> and <code>c</code> as well).<br />
If the order matters (so you want both <code>(a, b)</code> <strong>and</strong> <code>(b, a)</code>), replace <code>a1 + 1</code> with <code>0</code> (for <code>b</code> and <code>c</code> as well).</p>
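<p>Since the question mentions <code>itertools</code>, an equivalent sketch using <code>combinations</code> and <code>product</code>, writing straight to a CSV file (the file name is a placeholder), would be:</p>
<pre><code>import csv
from itertools import combinations, product

A = ['a', 'b', 'c', 'd', 'e']
B = ['f', 'g', 'h', 'i', 'j']
C = ['k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't']

with open('combinations.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Column %d' % i for i in range(1, 7)])
    # one 2-combination from each group, all groups crossed
    for pa, pb, pc in product(combinations(A, 2), combinations(B, 2), combinations(C, 2)):
        writer.writerow(pa + pb + pc)
</code></pre>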
|
python|combinations
| 0 |
1,906,535 | 63,890,188 |
How to do `for line.split() in line:` as list comprehension
|
<p>I'm trying to use python comprehension.</p>
<p>I have a list which is in the format:</p>
<pre><code>name_a:surname_a
name_b:surname_b
name_c:surname_c
</code></pre>
<p>My code initially to split each pair in a line into its own variable was:</p>
<pre><code>for line in self.account:
    a, b = line.split(':')
</code></pre>
<p>I tried to use this comprehension, but it didn't seem to work:</p>
<pre><code>(a,b) = [line.split(':') for line in self.account]
</code></pre>
|
<p>As mentioned by @MattDMo, the colons are missing in your list comprehension. If you add them, an additional problem will appear. If you print the list returned from the list comprehension, it will probably look like this:</p>
<pre><code>[['name_a', 'surname_a\n'], ['name_b', 'surname_b\n'], ['name_c', 'surname_c\n']]
</code></pre>
<p>The problem is that you can't assign it to two variables, because the list contains as many elements as there are lines in the file.</p>
<p>To get the desired result, you have to transpose the two dimensional list, for example by using zip and unpacking ('*'):</p>
<pre><code>>>> with open('test_file.txt') as f:
... (a, b) = zip(*[line.split(':') for line in f])
...
>>> a
('name_a', 'name_b', 'name_c')
>>> b
('surname_a\n', 'surname_b\n', 'surname_c\n')
</code></pre>
|
python|split|list-comprehension
| 1 |
1,906,536 | 42,996,725 |
extract and split the content of a specific column into a new csv file with new columns
|
<p>I have a column (the second column, called <code>character_position</code>) in my csv file which represents a list of characters and their positions, as follows:</p>
<p>Each line of this column contains a list of character positions; overall I have 300 lines in this column, each with such a list.</p>
<pre><code>character_position = [['1', 1890, 1904, 486, 505, '8', 1905, 1916, 486, 507, '4', 1919, 1931, 486, 505, '1', 1935, 1947, 486, 505, '7', 1950, 1962, 486, 505, '2', 1965, 1976, 486, 505, '9', 1980, 1992, 486, 507, '6', 1995, 2007, 486, 505, '/', 2010, 2022, 484, 508, '4', 2025, 2037, 486, 505, '8', 2040, 2052, 486, 505, '3', 2057, 2067, 486, 507, '3', 2072, 2082, 486, 505, '0', 2085, 2097, 486, 507, '/', 2100, 2112, 484, 508, 'Q', 2115, 2127, 486, 507, '1', 2132, 2144, 486, 505, '7', 2147, 2157, 486, 505, '9', 2162, 2174, 486, 505, '/', 2175, 2189, 484, 508, 'C', 2190, 2204, 487, 505, '4', 2207, 2219, 486, 505, '1', 2241, 2253, 486, 505, '/', 2255, 2268, 484, 508, '1', 2271, 2285, 486, 507, '5', 2288, 2297, 486, 505], ['D', 2118, 2132, 519, 535, '.', 2138, 2144, 529, 534, '2', 2150, 2162, 516, 535, '0', 2165, 2177, 516, 535, '4', 2180, 2192, 516, 534, '7', 2196, 2208, 516, 534, '0', 2210, 2223, 514, 535, '1', 2226, 2238, 516, 534, '8', 2241, 2253, 514, 534, '2', 2256, 2267, 514, 535, '4', 2270, 2282, 516, 534, '0', 2285, 2298, 514, 535]]
</code></pre>
<p>Each character has four values: left, top, right, bottom. For instance,
character <code>'1'</code> has left=1890, top=1904, right=486, bottom=505.</p>
<p>I want to create another csv file with five columns:</p>
<pre><code>column 1: character, column 2 : left , column 3 : top, column 4 : right, column 5 : bottom.
</code></pre>
<p>Here is what l have done :</p>
<pre><code>import pandas as pd

character = []
left_position = []
right_position = []
top_position = []
bottom_position = []
temp_df = pd.DataFrame({'character': character, 'left': left_position, 'top': top_position, 'right': right_position, 'bottom': bottom_position})
temp_df.to_csv('character_position.csv')
</code></pre>
<p>My file, which contains the character positions in one column, can be read as follows:</p>
<pre><code>import csv
with open('list_characters.csv') as csvfile:
readCSV = csv.reader(csvfile, delimiter=',')
for row in readCSV:
content = list(row[i] for i in second_column)
</code></pre>
<p>How can I extract and split the content of the <code>character_position</code> column of the file <code>list_characters.csv</code> into a new csv file <code>character_position.csv</code> with five columns?</p>
<p>The first row of the character_position column is as follows:</p>
<pre><code>[['m', 38, 104, 2456, 2492, 'i', 40, 102, 2442, 2448, 'i', 40, 100, 2402, 2410, 'l', 40, 102, 2372, 2382, 'm', 40, 102, 2312, 2358, 'u', 40, 102, 2292, 2310, 'i', 40, 104, 2210, 2260, 'l', 40, 104, 2180, 2208, 'i', 40, 104, 2140, 2166, 'l', 40, 104, 2124, 2134]]
</code></pre>
<p>The second row of the character_position column is as follows:</p>
<pre><code>[['A', 73, 91, 373, 394, 'D', 93, 112, 373, 396, 'R', 115, 133, 373, 396, 'E', 136, 153, 373, 396, 'S', 156, 172, 373, 396, 'S', 175, 192, 373, 396, 'E', 195, 211, 373, 396, 'D', 222, 241, 373, 396, 'E', 244, 261, 373, 396, 'L', 272, 285, 375, 396, 'I', 288, 293, 375, 396, 'V', 296, 314, 375, 396, 'R', 317, 334, 373, 396, 'A', 334, 354, 375, 396, 'I', 357, 360, 373, 396, 'S', 365, 381, 373, 396, 'O', 384, 405, 373, 396, 'N', 408, 425, 373, 394]]
</code></pre>
<p>And here is how I access the character_position column:</p>
<pre><code>df = pd.read_csv(filepath_or_buffer='list_characters.csv', header=None, usecols=[1], names=['character_position'])
df
character_position
0 [['m', 38, 104, 2456, 2492, 'i', 40, 102, 2442...
1 [['.', 203, 213, 191, 198, '3', 235, 262, 131,...
2 [['A', 275, 347, 147, 239, 'M', 363, 465, 145,...
3 [['A', 73, 91, 373, 394, 'D', 93, 112, 373, 39...
4 [['D', 454, 473, 663, 685, 'O', 474, 495, 664,...
5 [['A', 108, 129, 727, 751, 'V', 129, 150, 727,...
6 [['N', 34, 51, 949, 970, '/', 52, 61, 948, 970...
7 [['S', 1368, 1401, 43, 85, 'A', 1406, 1446, 43...
8 [['S', 1437, 1457, 112, 138, 'o', 1458, 1476, ...
9 [['h', 1686, 1703, 315, 339, 't', 1706, 1715, ...
10 [['N', 1331, 1349, 370, 391, 'C', 1361, 1379, ...
11 [['N', 1758, 1775, 370, 391, 'D', 1785, 1803, ...
12 [['D', 2166, 2184, 370, 391, 'A', 2186, 2205, ...
13 [['2', 1395, 1415, 427, 454, '0', 1416, 1434, ...
14 [['I', 1533, 1545, 487, 541, 'I', 1548, 1551, ...
15 [['P', 1659, 1677, 490, 514, '2', 1680, 1697, ...
16 [['1', 1890, 1904, 486, 505, '8', 1905, 1916, ...
17 [['B', 1344, 1361, 583, 607, 'O', 1364, 1386, ...
18 [['B', 1548, 1580, 979, 1015, 'T', 1586, 1619,...
19 [['Q', 169, 190, 1291, 1312, 'U', 192, 210, 12...
20 [['1', 296, 305, 1492, 1516, 'S', 339, 357, 14...
21 [['G', 339, 362, 1815, 1840, 'S', 365, 384, 18...
22 [['2', 1440, 1455, 2047, 2073, '9', 1458, 1475...
23 [['R', 339, 360, 2137, 2163, 'e', 363, 378, 21...
24 [['R', 339, 360, 1860, 1885, 'e', 363, 380, 18...
25 [['0', 1266, 1283, 1951, 1977, ',', 1287, 1290...
26 [['1', 2207, 2217, 1492, 1515, '0', 2225, 2240...
27 [['1', 2364, 2382, 1552, 1585], [], ['E', 2369...
28 [['S', 2369, 2382, 1833, 1866]]
29 [['0', 2243, 2259, 1951, 1977, '0', 2271, 2288...
.. ...
70 [['1', 296, 305, 1492, 1516, 'S', 339, 357, 14...
71 [['G', 339, 362, 1815, 1840, 'S', 365, 384, 18...
72 [['2', 1440, 1455, 2047, 2073, '9', 1458, 1475...
73 [['R', 339, 360, 2137, 2163, 'e', 363, 378, 21...
74 [['R', 339, 360, 1860, 1885, 'e', 363, 380, 18...
75 [['0', 1266, 1283, 1951, 1977, ',', 1287, 1290...
76 [['1', 2207, 2217, 1492, 1515, '0', 2225, 2240...
77 [['1', 2364, 2382, 1552, 1585], [], ['E', 2369...
78 [['S', 2369, 2382, 1833, 1866]]
79 [['0', 2243, 2259, 1951, 1977, '0', 2271, 2288...
80 [['0', 2243, 2259, 2227, 2253, '0', 2271, 2286...
81 [['D', 76, 88, 2580, 2596, 'é', 91, 100, 2580,...
82 [['ü', 1474, 1489, 2586, 2616, '3', 1541, 1557...
83 [['E', 1440, 1461, 2670, 2697, 'U', 1466, 1488...
84 [['2', 1685, 1703, 2670, 2697, '.', 1707, 1712...
85 [['1', 2202, 2213, 2668, 2695, '3', 2220, 2237...
86 [['c', 88, 118, 2872, 2902]]
87 [['N', 127, 144, 2889, 2910, 'D', 156, 175, 28...
88 [['E', 108, 129, 3144, 3172, 'C', 133, 156, 31...
89 [['5', 108, 126, 3204, 3231, '0', 129, 147, 32...
90 [[]]
91 [['1', 480, 492, 3202, 3229, '6', 500, 518, 32...
92 [['P', 217, 234, 3337, 3360, 'A', 235, 255, 33...
93 [[]]
94 [['I', 954, 963, 2892, 2934, 'M', 969, 1011, 2...
95 [['E', 1385, 1407, 2970, 2998, 'U', 1410, 1433...
96 [['T', 2067, 2084, 2889, 2911, 'O', 2088, 2106...
97 [['1', 2201, 2213, 2970, 2997, '6', 2219, 2238...
98 [['M', 1734, 1755, 3246, 3267, 'O', 1758, 1779...
99 [['L', 923, 935, 3411, 3430, 'A', 941, 957, 34...
</code></pre>
<p>When I try:</p>
<pre><code>df['character_position'][1]
"[['.', 203, 213, 191, 198, '3', 235, 262, 131, 198]]"
df['character_position'][2]
"[['A', 275, 347, 147, 239, 'M', 363, 465, 145, 239, 'S', 485, 549, 145, 243, 'U', 569, 631, 145, 241, 'N', 657, 733, 145, 239]]"
</code></pre>
<p>Then I apply, as you suggested:</p>
<pre><code>df=pd.DataFrame(np.array(list(chain.from_iterable( df['character_position'][2]))).reshape(-1,5),
columns=cols)
</code></pre>
<p>l get this error : </p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.5/code.py", line 91, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
ValueError: cannot reshape array of size 127 into shape (5)
</code></pre>
<p>In order to check, I did this:</p>
<pre><code>Y=df['character_position'][2]
"[['A', 275, 347, 147, 239, 'M', 363, 465, 145, 239, 'S', 485, 549, 145, 243, 'U', 569, 631, 145, 241, 'N', 657, 733, 145, 239]]"
</code></pre>
<p>then :</p>
<pre><code>Y[0] gives '['
</code></pre>
<p>But we are supposed to get</p>
<pre><code>['A', 275, 347, 147, 239, 'M', 363, 465, 145, 239, 'S', 485, 549, 145, 243, 'U', 569, 631, 145, 241, 'N', 657, 733, 145, 239]
</code></pre>
<p>since it's a nested list.</p>
<p>I think we need a loop which iterates over all the rows of that column:</p>
<pre><code>df.ix[:len(df)]
df=pd.DataFrame(np.array(list(chain.from_iterable( df['character_position'].ix[:len(df)]))).reshape(-1,5),
columns=cols)
</code></pre>
<p>After executing the following code:</p>
<pre><code>cols = ['char','left','top','right','bottom']
df1 = df.character_position.str.strip('[]').str.split(', ', expand=True)
df1.columns = [df1.columns % 5, df1.columns // 5]
df1 = df1.stack().reset_index(drop=True)
df1.columns = cols
df1[cols[1:]] = df1[cols[1:]].astype(int)
</code></pre>
<p>I got this error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/lib/python3.5/code.py", line 91, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/usr/local/lib/python3.5/dist-packages/pandas/core/generic.py", line 3054, in astype
raise_on_error=raise_on_error, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/pandas/core/internals.py", line 3189, in astype
return self.apply('astype', dtype=dtype, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/pandas/core/internals.py", line 3056, in apply
applied = getattr(b, f)(**kwargs)
File "/usr/local/lib/python3.5/dist-packages/pandas/core/internals.py", line 461, in astype
values=values, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/pandas/core/internals.py", line 504, in _astype
values = _astype_nansafe(values.ravel(), dtype, copy=True)
File "/usr/local/lib/python3.5/dist-packages/pandas/types/cast.py", line 534, in _astype_nansafe
return lib.astype_intsafe(arr.ravel(), dtype).reshape(arr.shape)
File "pandas/lib.pyx", line 980, in pandas.lib.astype_intsafe (pandas/lib.c:17409)
File "pandas/src/util.pxd", line 93, in util.set_value_at_unsafe (pandas/lib.c:72104)
ValueError: invalid literal for int() with base 10: "['E'"
</code></pre>
<p>The code runs until <code>df1.columns = cols</code>. When I print df1 (<code>print(df1)</code>) it works, and then I saved it to a csv file:</p>
<pre><code>df1.to_csv('positions.csv')
</code></pre>
|
<p>I think you first need to convert the <code>strings</code> to <code>lists</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow noreferrer"><code>strip</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a>.</p>
<p>Then use chain to flatten the values, and reshape. Last, if necessary, convert the columns to int.</p>
<pre><code>from itertools import chain
#temporaly display content
with pd.option_context('display.max_colwidth', 120):
print (df)
character_position
0 [['m', 38, 104, 2456, 2492, 'i', 40, 102, 2442, 222]]
1 [['.', 203, 213, 191, 198, '3', 235, 262, 131, 3333]]
2 [['A', 275, 347, 147, 239, 'M', 363, 465, 145, 3334]]
3 [['A', 73, 91, 373, 394, 'D', 93, 112, 373, 39]]
4 [['D', 454, 473, 663, 685, 'O', 474, 495, 664, 33]]
5 [['A', 108, 129, 727, 751, 'V', 129, 150, 727, 444]]
df.character_position = df.character_position.str.strip('[]').str.split(', ')
cols = ['char','left','top','right','bottom']
df1 = pd.DataFrame(np.array(list(chain.from_iterable(df.character_position))).reshape(-1,5),
columns=cols)
df1[cols[1:]] = df1[cols[1:]].astype(int)
print (df1)
  char left top right bottom
0 'm' 38 104 2456 2492
1 'i' 40 102 2442 222
2 '.' 203 213 191 198
3 '3' 235 262 131 3333
4 'A' 275 347 147 239
5 'M' 363 465 145 3334
6 'A' 73 91 373 394
7 'D' 93 112 373 39
8 'D' 454 473 663 685
9 'O' 474 495 664 33
10 'A' 108 129 727 751
11 'V' 129 150 727 444
</code></pre>
<p>Another solution with <code>MultiIndex</code> and reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a>:</p>
<pre><code>cols = ['char','left','top','right','bottom']
df1 = df.character_position.str.strip('[]').str.split(', ', expand=True)
df1.columns = [df1.columns % 5, df1.columns // 5]
df1 = df1.stack().reset_index(drop=True)
df1.columns = cols
df1[cols[1:]] = df1[cols[1:]].astype(int)
print (df1)
char left top right bottom
0 'm' 38 104 2456 2492
1 'i' 40 102 2442 222
2 '.' 203 213 191 198
3 '3' 235 262 131 3333
4 'A' 275 347 147 239
5 'M' 363 465 145 3334
6 'A' 73 91 373 394
7 'D' 93 112 373 39
8 'D' 454 473 663 685
9 'O' 474 495 664 33
10 'A' 108 129 727 751
11 'V' 129 150 727 444
</code></pre>
|
python-3.x|csv|pandas|dataframe
| 1 |
1,906,537 | 72,143,122 |
Python: how to compare two columns of data in one dataframe?
|
<p>So I have grouped data from this column:
<a href="https://i.stack.imgur.com/psmy8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/psmy8.png" alt="enter image description here" /></a></p>
<p>I want to compare two country types, 'US' and 'GB', in one dataframe so I can make a visualization from this data.</p>
<p>So I created a new variable for comparing US and GB:</p>
<pre><code>df_compare = df_new.loc[
(df_new['country'].isin(["US", "GB"]))][['ID','name','main_category','usd_goal_real','usd_pledged_real','backers','country','state']]
</code></pre>
<p>But I want to make a visualization that compares US and GB. When I use my pandas code to plot, the counts of "main_category" get combined across 'US' and 'GB'.</p>
<p>This is the code that I used:</p>
<pre><code>fig = plt.figure(figsize=(20, 15))
barWidth= 0.5
dictionary3 = df_compare['main_category'].value_counts().to_dict()
x = dictionary3.keys()
y = dictionary3.values()
plot1 = plt.bar(x, y, width=barWidth)
plt.title('Class Main Category', fontsize = '20')
plt.xlabel('Main Category', fontsize = '20')
plt.ylabel('Count', fontsize = '20')
for bar in plot1:
plt.annotate(bar.get_height(),xy=(bar.get_x(), bar.get_height()+200,), fontsize=10)
</code></pre>
<p>And this is the result:
<a href="https://i.stack.imgur.com/xiRtk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xiRtk.png" alt="enter image description here" /></a></p>
|
<p>Try this:</p>
<pre><code>df_new = df_new.reset_index()
df_new_new = df_new.loc[(df_new['country'].isin(["US", "GB"])), ['ID', 'name', 'main_category', 'usd_goal_real', 'usd_pledged_real', 'backers', 'country', 'state']]
</code></pre>
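<p>If the goal is to show the US and GB counts side by side rather than combined, one possible sketch (column names taken from your snippet) is to group by both columns and unstack the country before plotting:</p>
<pre><code>counts = df_compare.groupby(['main_category', 'country']).size().unstack('country')
counts.plot(kind='bar', figsize=(20, 15))  # one bar per country within each category
</code></pre>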
|
python|pandas
| 0 |
1,906,538 | 65,682,028 |
Vlookup/Map value from one dataframe to another dataframe in Python
|
<p>I want to do something similar to vlookup in Python.
Here is the dataframe I want to look up the value 'Flow_Rate_Lupa' from:<a href="https://i.stack.imgur.com/NwK3j.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NwK3j.jpg" alt="enter image description here" /></a></p>
<p>And here is the dataframe I want to fill, matching on the same month+day to fill the missing values. Can anyone help me work out how to do this? QAQ<img src="https://i.stack.imgur.com/Jqmvr.jpg" alt="enter image description here" /></p>
|
<p>I usually merge the two data frames with an indicator defined, then filter for the rows where the indicator says 'both', meaning the data is present in both data frames.</p>
<pre><code>import pandas as pd
mergedData = pd.merge(df1, df2, how='left', left_on='Key1', right_on='Key2', indicator='Exists')
filteredData = mergedData[mergedData['Exists'] == 'both']
</code></pre>
|
python|pandas
| 2 |
1,906,539 | 65,749,742 |
Pandas how to index a a list from JSON and put it into a dataframe?
|
<p>How can I index a list inside a dataframe?</p>
<p>I have this code here that will get data from JSON and insert it into a dataframe</p>
<p>Here's what the JSON looks like</p>
<pre><code>{"text_sentiment": "positive", "text_probability": [0.33917574607174916, 0.26495590980799744, 0.3958683441202534]}
</code></pre>
<p>Here's my code.</p>
<pre><code>input_c = pd.DataFrame(columns=['Comments','Result'])
for i in range(input_df.shape[0]):
url = 'http://classify/?text='+str(input_df.iloc[i])
r = requests.get(url)
result = r.json()["text_sentiment"]
proba = r.json()["text_probability"]
input_c = input_c.append({'Comments': input_df.loc[i].to_string(index=False),'Result': result, 'Probability': proba}, ignore_index = True)
st.write(input_c)
</code></pre>
<p>Here's what the results look like
<a href="https://i.stack.imgur.com/JbBym.png" rel="nofollow noreferrer">result</a></p>
<pre><code> Comments Result Probability
0 This movie is good in my eyes. neutral [0.26361889609129974, 0.4879752378104797, 0.2484058660982205]
1 This is a bad movie it's not good. negative [0.5210904912792065, 0.22073131008688818, 0.25817819863390534]
2 One of the best performance in this year. positive [0.14644707145500369, 0.3581522311734714, 0.49540069737152503]
3 The best movie i've ever seen. positive [0.1772046003747405, 0.026468108571479156, 0.7963272910537804]
4 The movie is meh. neutral [0.24349393167653663, 0.6820982528652574, 0.07440781545820596]
5 One of the best selling artist in the world. positive [0.07738688706903311, 0.3329095061233371, 0.5897036068076298]
</code></pre>
<p>The data in the Probability column is what I want to index into.</p>
<p>For example: if the value in Result is "positive", then I want to take index 2 of the probability list, and if the result is "neutral", index 1.</p>
<p>Like this</p>
<pre><code> Comments Result Probability
0 This movie is good in my eyes. neutral [0.4879752378104797]
1 This is a bad movie it's not good. negative [0.5210904912792065]
2 One of the best performance in this year. positive [0.49540069737152503]
3 The best movie i've ever seen. positive [0.7963272910537804]
4 The movie is meh. neutral [0.6820982528652574]
5 One of the best selling artist in the world. positive [0.5897036068076298]
</code></pre>
<p>Is there any way to do it?</p>
|
<p>In your code, you already decided the <code>Result</code> content, whether it's negative, neutral, or positive, so you only need to store the maximum value of the probability list in the data frame <code>input_c</code>.</p>
<p>This means, change <code>'Probability': proba</code> to <code>'Probability': max(proba)</code>, so modify:</p>
<pre><code> input_c = input_c.append({'Comments': input_df.loc[i].to_string(index=False),'Result': result, 'Probability': proba}, ignore_index = True)
</code></pre>
<p>to</p>
<pre><code>    input_c = input_c.append({'Comments': input_df.loc[i].to_string(index=False), 'Result': result, 'Probability': max(proba)}, ignore_index = True)
</code></pre>
<p>Then, to set the index of <code>input_c</code> to the <code>Probability</code> column, use:</p>
<pre><code>input_c.set_index('Probability')
</code></pre>
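<p>Alternatively, if you want to pick the probability by the label position you described (index 2 for "positive", index 1 for "neutral", and presumably index 0 for "negative"), a small sketch:</p>
<pre><code># assumed label order: [negative, neutral, positive]
LABEL_INDEX = {'negative': 0, 'neutral': 1, 'positive': 2}
proba_for_result = proba[LABEL_INDEX[result]]
</code></pre>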
|
python|json|pandas|numpy|sentiment-analysis
| 0 |
1,906,540 | 50,865,770 |
Python behavior differs in script vs command line
|
<p>When I type "python" to get to the python command line and execute this script it works perfectly and does exactly what I need it to do:</p>
<pre><code>import json
import datetime
dictstr = {'FeedbackScore': '884', 'IDVerified': 'false', 'eBayGoodStanding': 'true', 'AboutMePage': 'false', 'UserSubscription': ['FileExchange'], 'UserIDChanged': 'false', 'PayPalAccountType': 'Business', 'PositiveFeedbackPercent': '100.0', 'Email': 'xxxxxxxxx', 'EIASToken': 'xxxxxxxxxxxxxxx', 'PayPalAccountStatus': 'Active', 'UniquePositiveFeedbackCount': '66', 'UniqueNeutralFeedbackCount': '0', 'SellerInfo': {'CheckoutEnabled': 'true', 'TransactionPercent': '69.0', 'StoreOwner': 'false', 'AllowPaymentEdit': 'false', 'RecoupmentPolicyConsent': None, 'PaymentMethod': 'PayPal', 'GoodStanding': 'true', 'SafePaymentExempt': 'true', 'SellerGuaranteeLevel': 'NotEligible', 'LiveAuctionAuthorized': 'false', 'MerchandizingPref': 'OptIn', 'CIPBankAccountStored': 'false', 'QualifiesForB2BVAT': 'false', 'SchedulingInfo': {'MaxScheduledMinutes': '30240', 'MaxScheduledItems': '3000', 'MinScheduledMinutes': '0'}, 'CharityRegistered': 'false'}, 'UserIDLastChanged': datetime.datetime(2014, 8, 6, 0, 37, 37), 'EnterpriseSeller': 'false', 'Status': 'Confirmed', 'PayPalAccountLevel': 'Verified', 'UserID': 'xxxxxxxxxxxx', 'eBayWikiReadOnly': 'false', 'FeedbackRatingStar': 'Purple', 'UniqueNegativeFeedbackCount': '0', 'VATStatus': 'NoVATTax', 'MotorsDealer': 'false', 'RegistrationDate': datetime.datetime(2003, 1, 12, 0, 21, 42), 'BusinessRole': 'FullMarketPlaceParticipant', 'Site': 'US', 'EBaySubscription': 'FileExchange', 'FeedbackPrivate': 'false', 'NewUser': 'false'}
f = open('json_output.txt','w')
dump_file = json.dumps(dictstr, indent=4, default=str)
f.write(dump_file)
f.close()
</code></pre>
<p>However when I run this as a script the content of my output file are all on one line:</p>
<pre><code>import json
import datetime
import ebaysdk
from ebaysdk.trading import Connection as Trading
def getUser():
api = Trading(config_file='ebay.yaml')
f = open('json_output.txt','w')
api.execute('GetUser', {'UserID': 'xxxxxxxxx'})
dictstr = "%s" % api.response.reply.User
print dictstr
dump_file = json.dumps(dictstr, indent=4, default=str)
f.write(dump_file)
f.close()
if __name__ == "__main__":
getUser()
</code></pre>
<p>the file is all on one line, like the original declaration of the dictionary.</p>
<p>I'm using the contents of stdout in the non-working example to populate the dictstr variable in the working example. I am doing this because I want to make sure I'm using the exact same data.</p>
<p>Any of you experts know what I'm doing wrong here?</p>
|
<p>I'm glad it was something so simple. The original code built a plain string with <code>"%s" % api.response.reply.User</code>, so <code>json.dumps</code> just re-encoded that one string on a single line instead of pretty-printing a dictionary; <code>api.response.dict()</code> hands it a real dict to indent.</p>
<p>Corrected code:</p>
<pre><code>import json
import datetime
import ebaysdk
from ebaysdk.trading import Connection as Trading
def getUser():
api = Trading(config_file='ebay.yaml')
f = open('json_output.txt','w')
api.execute('GetUser', {'UserID': 'xxxxxxxxx'})
dictstr = api.response.dict()
dump_file = json.dumps(dictstr, indent=4, default=str)
f.write(dump_file)
f.close()
if __name__ == "__main__":
getUser()
</code></pre>
|
python|json|python-2.7|dictionary|ebay-api
| 0 |
1,906,541 | 50,368,386 |
Strange behavior with DataFrame copy
|
<p>Consider this code:</p>
<pre><code>In [16]: data = [['Alex',10],['Bob',12],['Clarke',13]]
In [17]: df = pd.DataFrame(data,columns=['Name','Age'])
Out[18]:
Name Age
0 Alex 10
1 Bob 12
2 Clarke 13
In [19]: df_new = df
In [20]: df_new['Age'] = df_new['Age'] * 90 / 100
In [21]: df_new
Name Age
0 Alex 9.0
1 Bob 10.8
2 Clarke 11.7
In [22]: df
Name Age
0 Alex 9.0
1 Bob 10.8
2 Clarke 11.7
</code></pre>
<p>When I assigned new values to the <em>Age</em> columns of the new DataFrame (<em>df_new</em>), the <em>Age</em> column of the original DataFrame (<em>df</em>) changed as well. </p>
<p>Why does it happen? Does it have something to do with the way I create a copy of the original DataFrame? Seem like they are chained together.</p>
|
<p>Use - </p>
<pre><code>df_new = df.copy()
</code></pre>
<p>OR</p>
<pre><code>df_new = df.copy(deep=True)
</code></pre>
<p>This is the standard way of making a copy of a <code>pandas</code> object’s indices and data.</p>
<p>From the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html" rel="nofollow noreferrer"><code>pandas documentation</code></a></p>
<blockquote>
<p>When deep=True (default), a new object will be created with a copy of
the calling object’s data and indices. Modifications to the data or
indices of the copy will not be reflected in the original object</p>
</blockquote>
<p><strong>Explanation</strong></p>
<p>If you see the object IDs of the various DataFrames you create, you can clearly see what is happening.</p>
<p>When you write df_new = df, you are creating a variable named <code>df_new</code>, and binding it to an object with the same id as that of <code>df</code>.</p>
<p><strong>Example</strong></p>
<pre><code>data = [['Alex',10],['Bob',12],['Clarke',13]]
df = pd.DataFrame(data,columns=['Name','Age'])
df_new = df
df_copy = df.copy()
print("ID of old df: {}".format(id(df)))
print("ID of new df: {}".format(id(df_new)))
print("ID of copy df: {}".format(id(df_copy)))
</code></pre>
<p><strong>Output</strong></p>
<pre><code>ID of old df: 113414664
ID of new df: 113414664
ID of copy df: 113414832
</code></pre>
|
python|pandas|dataframe
| 1 |
1,906,542 | 26,517,674 |
Passing newline within string into a python script from the command line
|
<p>I have a script that I run from the command line, into which I would like to pass string arguments, as in</p>
<pre><code>script.py --string "thing1\nthing2"
</code></pre>
<p>such that the program would interpret the '\n' as a new line. If <code>string="thing1\nthing2"</code> I want to get </p>
<pre><code>print string
</code></pre>
<p>to return:</p>
<pre><code>thing1
thing2
</code></pre>
<p>rather than <code>thing1\nthing2</code></p>
<p>If I simply hard-code the string "thing1\nthing2" into the script, it does this, but if it's entered as a command line argument via getopt, it doesn't recognize it. I have tried a number of approaches to this: reading in the cl string as <code>r"%s" % arg</code>, various ways of specifying it on the commandline, etc, and nothing seems to work. Ideas? Is this completely impossible?</p>
|
<p>From <a href="https://stackoverflow.com/a/4918413/478656">https://stackoverflow.com/a/4918413/478656</a> in Bash, you can use:</p>
<pre><code>script.py --string $'thing1\nthing2'
</code></pre>
<p>e.g.</p>
<pre><code>$ python test.py $'1\n2'
1
2
</code></pre>
<p>But that's Bash-specific syntax.</p>
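<p>If you want the script to work regardless of the shell, a sketch of decoding the escapes on the Python side (Python 3; assumes ASCII-only input):</p>
<pre><code>import codecs

# arg is the raw string from getopt, e.g. "thing1\\nthing2"
string = codecs.decode(arg, 'unicode_escape')
print(string)
</code></pre>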
|
python|bash|getopt
| 16 |
1,906,543 | 45,096,568 |
Django model unit test
|
<p>I started learning Django and I'm having problems with a unit test. I've been trying to find the problem, but I can't identify what is wrong in my code. Can someone identify the problem, or give me some advice?</p>
<p><strong>Error:</strong></p>
<pre><code>Creating test database for alias 'default'...
System check identified no issues (0 silenced).
E
======================================================================
ERROR: test_create_user (user_api.tests.ModelUseProfileTest)
Users can register
----------------------------------------------------------------------
Traceback (most recent call last):
File "/vagrant/src/api/user_api/tests.py", line 26, in test_create_user
user = UserProfile.objects.get(id=1)
File "/home/ubuntu/.virtualenvs/venv/lib/python3.5/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/ubuntu/.virtualenvs/venv/lib/python3.5/site-packages/django/db/models/query.py", line 379, in get
self.model._meta.object_name
user_api.models.DoesNotExist: UserProfile matching query does not exist.
----------------------------------------------------------------------
Ran 1 test in 0.040s
FAILED (errors=1)
Destroying test database for alias 'default'...
</code></pre>
<p><strong>My test:</strong></p>
<pre><code>from django.test import TestCase
from rest_framework import status
from django.core.urlresolvers import reverse
from .models import UserProfile
# Create your tests here.
class ModelUseProfileTest(TestCase):
""" Test class define the test suite for the UserProfile model."""
def test_create_user(self):
"""Users can register"""
# Create an instance of a GET request.
response = self.client.post("/api/v1/register/", {
"name": "Walter",
"lasT_name": "White",
"email": "heisenberg@email.com",
"password": "secret", })
user = UserProfile.objects.get(id=1)
self.assertEqual(user.name, "Walter")
self.assertEqual(user.last_name, "White")
self.assertEqual(user.email, "heisenberg@email.com")
self.assertEqual(response.status_code, 201)
</code></pre>
<p><strong>My model:</strong></p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractBaseUser
from django.contrib.auth.models import PermissionsMixin
from django.contrib.auth.models import BaseUserManager
class UserProfileManager(BaseUserManager):
"""Helps Django work with our custom user model"""
def create_user(self, name, last_name, email, password=None):
"""Create a new user profile object."""
if not email:
raise ValueError('Users must have an email address.')
if not name:
raise ValueError('Users must have an name.')
if not last_name:
raise ValueError('Users must have an last name.')
email = normalize_email(email)
user = self.model(name=name, last_name=last_name, email=email)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, name, last_name, email, password=None):
"""Create and saves a new superuser with given details."""
user = self.create_user(name, last_name, email, password)
user.is_superuser = True
user.is_staff = True
user.save(using=self._db)
return user
class UserProfile(AbstractBaseUser, PermissionsMixin):
"""Represents a "user profile" inside our system."""
name = models.CharField(max_length=500)
last_name = models.CharField(max_length=500)
email = models.EmailField(max_length=255, unique=True)
is_active = models.BooleanField(default=True)
is_staff = models.BooleanField(default=False)
objects = UserProfileManager()
USERNAME_FIELD = 'email'
REQUIRED_FIELDS = ['name', 'last_name']
def get_full_name(self):
"""Used to get a users full name."""
return self.name
def get_short_name(self):
"""Used to get a users short name."""
return self.name
def __str__(self):
"""Django uses this when it needs to convert the object to a string"""
return self.email
</code></pre>
|
<p>Although the test client starts from a blank database each time, that's no reason to assume that the primary key is 1; sequences are not reset when the tables are emptied after each run. Instead of explicitly getting pk=1, you should just query for the first item:</p>
<pre><code> user = UserProfile.objects.first()
</code></pre>
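<p>If you would rather pin down the exact row instead of relying on ordering, a hedged alternative is to look the user up by a field you control in the test, e.g. the email you posted:</p>
<pre><code>user = UserProfile.objects.get(email="heisenberg@email.com")
</code></pre>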
|
django|python-3.x|unit-testing|django-rest-framework|testcase
| 2 |
1,906,544 | 45,227,575 |
How do I make a variable inside an if statement global (python3)?
|
<p>Is there a way to make a variable inside an if statement global to the whole program? In the following code, I want to change my favorite color to purple, but it only does it for the if statement. For example, Apple's Siri knows your name. Let's say you for some reason want to change it, so you say "Siri change my name to Bob". Then from that point on, Siri calls you bob. I'm trying to model the same thing but with color.</p>
<pre><code>color = "red"
command = input()#the variables have been defined
if command == "What is my favorite color":
print(color, "is your favorite color")#this prints red
if command == "Change my favorite color":
color = input()
print(color, "is your new favorite color")#this prints whatever
#color I put, purple
#now if I ask what is my favorite color again, it will still say red, but I want it to say purple from now until I change it to another color and so on
</code></pre>
|
<p>The other answers aren't quite covering your issue because they assume that you're executing the whole code only once and somehow getting both of those <code>print</code>s to fire (which would mean that you would have to have changed the contents of <code>command</code> between your two <code>if</code>s).</p>
<p>Your issue is that you're running the program multiple times, and expecting the new value of <code>color</code> to somehow (magically?) propagate over to the next time you run the program.</p>
<p>In order to run the same bit of code multiple times, you should put it in a loop. For example:</p>
<pre><code>while command != "exit":
# your code goes here
</code></pre>
<p>Don't forget to indent the code that goes inside the loop.</p>
<p>However, if that's all you do, you're still going to have an issue (actually two, but we'll fix the <code>command is not defined</code> error in a sec), because the first line of your code sets <code>color</code> to <code>"red"</code>, and that's going to keep setting your <code>color</code> variable back to <code>"red"</code> every time it loops.</p>
<p>To fix this part of the issue, you want to initialize your variables outside of the loop (what do you know, this fixes the other issue too).</p>
<p>Python has a special <code>None</code> that fits this situation well, so let's use that:</p>
<pre><code>command = None # initialized but contains nothing
favcolor = input("What is your favorite color?") # let's initialize this with some user data
while command != "exit"
command = input("Please enter a command:")
# the rest of your code goes here
</code></pre>
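<p>Putting it all together, a minimal sketch of the full loop (the "exit" command word is an assumption; pick whatever quit word you like):</p>
<pre><code>command = None
color = input("What is your favorite color? ")

while command != "exit":
    command = input("Please enter a command: ")
    if command == "What is my favorite color":
        print(color, "is your favorite color")
    elif command == "Change my favorite color":
        color = input("Enter the new color: ")
        print(color, "is your new favorite color")
</code></pre>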
<p>If you really need to exit the program and have the user data from last time come back, you'll have to save that data to a file or database or something, then load it back into the program next time you run it.</p>
<p>Hope this clears things up a bit for you.</p>
|
python-3.x
| 0 |
1,906,545 | 61,590,752 |
Django generating multiple urls for the same subcategory
|
<p>In my app, there's a list of categories and subcategories with a ForeignKey relationship. Say, there're:</p>
<ul>
<li>Subcategory1 related to Category1</li>
<li>Subcategory2 related to Category2</li>
</ul>
<p>I expect to get the following subcategory urls:</p>
<ul>
<li><a href="http://127.0.0.1:8000/cat1/subcat1" rel="nofollow noreferrer">http://127.0.0.1:8000/cat1/subcat1</a> </li>
<li><a href="http://127.0.0.1:8000/cat2/subcat2" rel="nofollow noreferrer">http://127.0.0.1:8000/cat2/subcat2</a></li>
</ul>
<p>These urls work fine. However, django also generates these urls that I don't need:</p>
<ul>
<li><a href="http://127.0.0.1:8000/cat1/subcat2" rel="nofollow noreferrer">http://127.0.0.1:8000/cat1/subcat2</a></li>
<li><a href="http://127.0.0.1:8000/cat2/subcat1" rel="nofollow noreferrer">http://127.0.0.1:8000/cat2/subcat1</a></li>
</ul>
<p>Why do they appear in my app? How do I get rid of them? Thanks in advance!</p>
<p><em>models.py:</em></p>
<pre><code>class Category(models.Model):
categoryslug = models.SlugField(max_length=200, default="",unique=True)
def get_absolute_url(self):
return reverse("showrooms_by_category",kwargs={'categoryslug': str(self.categoryslug)})
class Subcategory(models.Model):
subcategoryslug = models.SlugField(max_length=200, default="",unique=True)
category = models.ForeignKey('Category', related_name='subcategories',
null=True, blank=True, on_delete = models.CASCADE)
def get_absolute_url(self):
return reverse("showrooms_by_subcategory",
kwargs={'categoryslug': str(self.category.categoryslug), 'subcategoryslug': str(self.subcategoryslug)})
</code></pre>
<p>views.py:</p>
<pre><code>class ShowroomCategoryView(DetailView):
model = Category
context_object_name = 'showrooms_by_category'
template_name = "website/category.html"
slug_field = 'categoryslug'
slug_url_kwarg = 'categoryslug'
class ShowroomSubcategoryView(DetailView):
model = Subcategory
context_object_name = 'showrooms_by_subcategory'
template_name = "website/subcategory.html"
slug_field = 'subcategoryslug'
slug_url_kwarg = 'subcategoryslug'
</code></pre>
<p>urls.py:</p>
<pre><code>urlpatterns = [
path('<slug:categoryslug>/<slug:subcategoryslug>/', views.ShowroomSubcategoryView.as_view(), name='showrooms_by_subcategory'),
path('<slug:categoryslug>/', views.ShowroomCategoryView.as_view(), name='showrooms_by_category'),
]
</code></pre>
|
<p>I think the reason for this is the <code>ForeignKey</code>. You could use a one-to-one field instead to get the target, like:</p>
<pre><code>subcategoryslug = models.SlugField(max_length=200, default="",unique=True)
category = models.OneToOneField('Category', related_name='subcategories',null=True, blank=True, on_delete = models.CASCADE)
</code></pre>
<p><em>Note:</em> please also make sure you understand the logic behind this; it is worth researching further.</p>
|
python|django|django-urls
| 0 |
1,906,546 | 57,939,472 |
what is the best way to extract data from pdf
|
<p>I have thousands of pdf file that I need to extract data from.This is an example <a href="https://firebasestorage.googleapis.com/v0/b/honeybox-catalogue.appspot.com/o/test.pdf?alt=media&token=f540b8f2-9d99-4d29-8bca-8f7d7070f9f9" rel="noreferrer">pdf</a>. I want to extract this information from the example pdf.</p>
<p><a href="https://i.stack.imgur.com/BuU3d.png" rel="noreferrer"><img src="https://i.stack.imgur.com/BuU3d.png" alt="enter image description here"></a> </p>
<p>I am open to Node.js, Python or any other effective method, though I have little knowledge of either.
I attempted it in Python with this code:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>import PyPDF2
try:
pdfFileObj = open('test.pdf', 'rb')
pdfReader = PyPDF2.PdfFileReader(pdfFileObj)
pageNumber = pdfReader.numPages
page = pdfReader.getPage(0)
print(pageNumber)
pagecontent = page.extractText()
print(pagecontent)
except Exception as e:
print(e)</code></pre>
</div>
</div>
</p>
<p>but I got stuck on how to find the procurement history. What is the best way to extract the procurement history from the pdf?</p>
|
<p>I did something similar to <strong>scrape</strong> my grades a long time ago. The easiest (not pretty) solution I found was to convert the pdf to html, then parse the html.</p>
<p>To do so I used pdf2txt/pdf2html (<a href="https://pypi.org/project/pdf-tools/" rel="nofollow noreferrer">https://pypi.org/project/pdf-tools/</a>) and lxml's <code>html</code> module.<br />
I also used codecs, though I don't remember exactly why anymore.</p>
<p>A quick and dirty summary:</p>
<pre><code>from lxml import html
import codecs
import os
# First convert the pdf to text/html
# You can skip this step if you already did it
os.system("pdf2txt -o file.html file.pdf")
# Open the file and read it
file = codecs.open("file.html", "r", "utf-8")
data = file.read()
# We know we're dealing with html, let's load it
html_file = html.fromstring(data)
# As it's an html object, we can use xpath to get the data we need
# In the following I get the text from <div><span>MY TEXT</span><div>
extracted_data = html_file.xpath('//div//span/text()')
# It returns an array of elements, let's process it
for elm in extracted_data:
# Do things
file.close()
</code></pre>
<p>Just check the result of pdf2text or pdf2html, then using xpath you should extract your information easily.</p>
<p>I hope it helps!</p>
<p>EDIT: comment code</p>
<p>EDIT2:
The following code is printing your data</p>
<pre><code># Assuming you're only giving the page 4 of your document
# os.system("pdf2html test-page4.pdf > test-page4.html")
from lxml import html
import codecs
import os
file = codecs.open("test-page4.html", "r", "utf-8")
data = file.read()
html_file = html.fromstring(data)
# I updated xpath to your need
extracted_data = html_file.xpath('//div//p//span/text()')
for elm in extracted_data:
line_elements = elm.split()
# Just observed that what you need starts with a number
if len(line_elements) > 0 and line_elements[0].isdigit():
print(line_elements)
file.close()
</code></pre>
|
python|node.js|pdf|pdf-scraping
| 2 |
1,906,547 | 56,056,073 |
How do I filter inbox and only download emails that have attachments
|
<p>I am filtering a fairly large inbox using exchangelib. I only want to look at emails that have attachments. </p>
<p>I tried to add the attachments=True clause in the filter but it does not work. Below is my code. Could someone tell me what is the correct way of doing this?</p>
<pre class="lang-py prettyprint-override"><code>account.inbox.filter(datetime_received__range=(start_time, end_time), sender=some_sender, attachments=True)
</code></pre>
|
<p>You probably want to filter on the <code>has_attachments</code> filter instead. I don't think you can filter directly on an attachment. Also, you may want to only select the fields that you need, using <code>.only()</code>:</p>
<pre><code>account.inbox.filter(
datetime_received__range=(start_time, end_time),
sender=some_sender,
has_attachments=True,
).only('sender', 'subject', 'attachments')
</code></pre>
|
python|exchangelib
| 0 |
1,906,548 | 55,271,601 |
List of 10 random points between -2 and 2 with uniform distribution
|
<p>How do I make a list of 10 random points between -2 and 2 with a uniform distribution? Python keeps telling me that the range is negative or too small</p>
<p>This is what I have so far: </p>
<pre><code>import random
randint = random.uniform((1, 10),10)
print (randint)
</code></pre>
|
<p><code>random.uniform</code> only takes 2 arguments: the lower and upper bounds. So you need to call this 10 times:</p>
<pre><code>[random.uniform(-2, 2) for _ in range(10)]
</code></pre>
<p>You can also use Numpy's version (assuming <code>import numpy as np</code>), where you can specify the number of elements:</p>
<pre><code>np.random.uniform(-2, 2, 10)
</code></pre>
|
python|random|integer|sample|uniform
| 1 |
1,906,549 | 42,380,788 |
Adding a variable in Content disposition response file name-python/django
|
<p>I am looking to add a a variable into the file name section of my below python code so that the downloaded file's name will change based on a user's input upon download. </p>
<p>So instead of "Data.xlsx" it would include my variable (based on user input) + "Data.xlsx". I saw a similar question but based in PHP and couldn't figure out how to adapt it to python. </p>
<pre><code>response = HttpResponse(content_type='application/vnd.ms-excel')
response['Content-Disposition'] = 'attachment; filename= "Data.xlsx"'
</code></pre>
<p>Thanks in advance!</p>
|
<p>Using <a href="https://docs.python.org/3/library/stdtypes.html#str.format" rel="noreferrer"><code>str.format</code></a>:</p>
<pre><code>response['Content-Disposition'] = 'attachment; filename= "{}"'.format(filename)
</code></pre>
<p>Using <a href="https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting" rel="noreferrer"><code>printf</code>-style formatting</a>:</p>
<pre><code>response['Content-Disposition'] = 'attachment; filename= "%s"' % filename
</code></pre>
<p>or concatenating strings:</p>
<pre><code>response['Content-Disposition'] = 'attachment; filename= "' + filename + '"'
</code></pre>
|
python|django|string|download|django-views
| 13 |
1,906,550 | 59,280,298 |
How can I change the shape of this numpy array to (3058, 480, 640, 3)?
|
<p>I am trying to read 3058 images from a folder. I want my picture to be read as np array with size (3158, 480, 640, 3) dtype as uint8. I store all image to a list (image_list). After changing the list to array, I get an array (3158, ). Below is my code</p>
<pre><code>import numpy as np
import cv2 as cv
DIR = mydir
takenFrames = 6
counter = 0
for filename in glob.glob(DIR + '/*.png'):
counter += 1
# if counter >= no frames, open image, add img and img_label to list.
if (counter >= takenFrames):
im = cv.imread(filename) #im.shape is 480, 640
image_list.append(im)
#im = np.resize(im, (-1, 490, 640, 3))
image_list = np.array(image_list, dtype='uint8').reshape(-1, 480, 640, 3) / 255.0
</code></pre>
<p>Whenever I try to do this, I get the following error:</p>
<pre><code>image_list = np.array(image_list, dtype='uint8').reshape(-1, 480, 640, 3) / 255.0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
    image_list = np.array(image_list, dtype='uint8').reshape(-1, 480, 640, 3) / 255.0
ValueError: setting an array element with a sequence.
</code></pre>
<p>I tried accessing the images from a single folder, and the line below works:</p>
<pre><code>x = np.array([np.array(Image.open(file)) for file in filename]) # x.shape = (55, 480, 640, 3)
</code></pre>
<p>I then tried appending to an empty numpy array whenever I access a different folder, to collect all 3058 images:</p>
<pre><code>data = np.array([])
#I tried to append numpy array as
if(data.size == 0):
data = im
else:
data = np.append(data, im, axis = 0)
</code></pre>
<p>but that doesn't work either</p>
|
<p>I just figured this out. Some of the images have a different shape; that is the reason. I solved it by changing the image shape before appending to the list.</p>
<pre><code>import numpy as np
import cv2 as cv
DIR = mydir
takenFrames = 6
counter = 0
for filename in glob.glob(DIR + '/*.png'):
counter += 1
# if counter >= no frames, open image, add img and img label to list.
if (counter >= takenFrames):
im = cv.imread(filename)
#some images are of size (480, 640, 3) whereas some are (490, 640, 3). I resized all images to (480, 640, 3)
im = np.resize(im, (480, 640, 3))
image_list.append(im)
labels.append(label)
#converting image_list to numpy array changes my list to (3058, 480, 640, 3)
import numpy as np
image_list = np.asarray(image_list)
</code></pre>
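<p>One caveat worth noting: <code>np.resize</code> repeats or truncates the raw data to reach the target shape rather than interpolating pixels. If you want a true image resize, OpenCV's resize is a hedged alternative (note that it takes <code>(width, height)</code>):</p>
<pre><code>im = cv.resize(im, (640, 480))  # interpolates to 480 rows x 640 columns
</code></pre>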
|
image|numpy|resize|reshape|numpy-ndarray
| 0 |
1,906,551 | 59,455,258 |
<Exercise_2.Line object at 0x7ff1ddbfeba8> in bash what does it mean? linux
|
<p>I wrote code that should return the equation of a line passing through two points, and it works, but this extra line also appears in bash and I don't know what it means. The code is saved as Exercise_2.py and there are two classes in it (Point and Line).</p>
<p>This is my code:</p>
<pre><code>class Point:

    def __init__(self, x, y):
        self.x = x
        self.y = y


class Line:

    def __init__(self, m, q):
        self.m = m
        self.q = q
        print(f"y={m}x+{q}")
</code></pre>
<p>This is what I see if I run the code:</p>
<pre><code>y=3x+2
<Exercise_2.Line object at 0x7ff1ddbfeba8>
</code></pre>
<p>Thanks to whoever will explain it to me and thanks for your time.</p>
<p>P.S. I may be onto the problem: I should use the <code>__repr__</code> method, but I'm trying to figure out why.</p>
|
<p>When you pass an object to the print function it prints that object's representation. You can modify what is displayed by adding a <code>__repr__</code> method to your class that returns a string.</p>
<pre><code>class Line:
def __init__(self,m,q):
self.m=m
self.q=q
print(f"y={m}x+{q}")
def __repr__(self):
return f"y={self.m}x+{self.q}"
print(Line(3,2))
</code></pre>
<p>This will, however, print your output twice, since the print also runs in the <code>__init__</code> method, so that one can be removed.</p>
<p>As stated in my comment above, when you create an object from a class, it runs your <code>__init__</code> method if one exists. Since your <code>__init__</code> method contained a print call, it printed that; however, since you also printed the object itself, Python attempted to print some representation of it. Since none was defined, it printed Python's default representation of the object.</p>
<p>If you need more detail. I can attempt and explain a little bit better.</p>
|
python-3.x|linux|bash|oop
| 0 |
1,906,552 | 54,065,531 |
How to send python post requests with a $ in the form data
|
<p>I am trying to send form data with a post request, but one of my data headers has a $ in the name which python doesn't like. How can I get around this?</p>
<pre><code>payload = dict(ctl00_ContentPlaceHolder1_TabContainer1_ClientState='{"ActiveTabIndex":3,"TabState":[true,true,true,true]}',
ctl00$ContentPlaceHolder1$TabContainer1$TC1TP1$DropDownList1_1='250 per page')
s = requests.Session()
donor_page = s.post(url, files=payload)
</code></pre>
<p>I need to send that second data field with the $ included. I don't know if I am going about this the complete wrong way or what, I'm new to python and requests. Any help is appreciated!</p>
|
<p>Use a <strong>dictionary literal</strong> or set such header after constructing the dict using <code>[]</code>:</p>
<pre><code># dictionary literal
payload = {
'ctl00_ContentPlaceHolder1_TabContainer1_ClientState': '{"ActiveTabIndex":3,"TabState":[true,true,true,true]}',
'ctl00$ContentPlaceHolder1$TabContainer1$TC1TP1$DropDownList1_1': '250 per page'
}
# or assign later
payload = dict(ctl00_ContentPlaceHolder1_TabContainer1_ClientState='{"ActiveTabIndex":3,"TabState":[true,true,true,true]}')
payload['ctl00$ContentPlaceHolder1$TabContainer1$TC1TP1$DropDownList1_1'] = '250 per page'
s = requests.Session()
donor_page = s.post(url, files=payload)
</code></pre>
|
python|python-requests
| 1 |
1,906,553 | 58,401,040 |
Getting min and max datime for each date in csv
|
<p>I'm kind of new to data science and Python.</p>
<p>First of all, do you suggest using any library other than pandas when dealing with huge datasets (100K+ rows)?</p>
<p>Second of all, let me expose to you my current problem.</p>
<p>I have a Dataset in which I have a Datetime column, to make it easy to understand, let's say I only have a Datetime column named <code>date_col</code>.</p>
<p>Here's what my <code>date_col</code> values looks like:</p>
<pre><code>df=pd.DataFrame({'dt_col': ["2019-03-13 08:12:23", "2019-03-13 07:10:18", "2019-03-13 08:12:23", "2019-03-15 10:35:53", "2019-03-20 11:12:23", "2019-03-20 08:12:23"]})
dt_col
0 2019-03-13 08:12:23
1 2019-03-13 07:10:18
2 2019-03-13 08:12:23
3 2019-03-15 10:35:53
4 2019-03-20 11:12:23
5 2019-03-20 08:12:23
</code></pre>
<p>I want to extract, for each day, the minimum and the maximum hour or <code>datetime</code>; for example for <code>2019-03-13</code>, I want to extract <code>2019-03-13 07:10:18</code> and <code>2019-03-13 08:12:23</code>.</p>
<p>I thought about:</p>
<ol>
<li>Getting distinct dates without the time from my DataFrame</li>
<li>For each of these dates, getting the min and max corresponding datetime from my DataFrame</li>
</ol>
<p>I'm kind of stuck at step 2 as I don't know how to really achieve this in Python, I mean I can do it the "old way" with some loops but I don't think that it will do the job with a large Dataset.</p>
<p>Btw, here's what I've done for step 1:</p>
<pre><code>dates=pd.to_datetime(df.dt_col)
distinc_dates=dates.dt.strftime("%Y-%m-%d").unique()
</code></pre>
<p>Once I got those min and max, I want to generate datetime rows between each min and max datetime, for example between <code>2019-03-13 07:10:18</code> and <code>2019-03-13 08:12:23</code>, I want to get <code>2019-03-13 07:10:18</code>, <code>2019-03-13 07:10:19</code>, <code>2019-03-13 07:10:20</code>, <code>2019-03-13 07:10:21</code>, <code>2019-03-13 07:10:22</code>,..... until <code>2019-03-13 08:12:23</code>.</p>
<p>I think this can be achieved using <code>pd.date_range</code>. So once I have my min and max, I'm thinking of using <code>pd.date_range</code> to do something like this:</p>
<pre><code>dates=[]
for index,row in df.iterrows():
dates.append(pd.date_range(start=row['min'], end=row['max'], freq='1S'))
print(dates)
</code></pre>
<p>But I know that <code>iterrows</code> is very slow, so I'm asking for the best way to achieve this with a huge dataset.</p>
|
<p>In case <code>dt_col</code> is not of dtype <code>datetime</code>, you need to convert it to datetime:</p>
<pre><code>df.dt_col = pd.to_datetime(df.dt_col)
</code></pre>
<p>Next, try this</p>
<pre><code>df1 = df.groupby(df.dt_col.dt.date).dt_col.agg(['min', 'max'])
Out[555]:
min max
dt_col
2019-03-13 2019-03-13 07:10:18 2019-03-13 08:12:23
2019-03-15 2019-03-15 10:35:53 2019-03-15 10:35:53
2019-03-20 2019-03-20 08:12:23 2019-03-20 11:12:23
</code></pre>
<hr>
<p>After getting min and max, you may create a range in seconds with <code>pd.date_range</code> or by resampling. I think <code>pd.date_range</code> with a list comprehension may be faster than resampling in your case. Here it is:</p>
<pre><code>time_arr = [pd.date_range(df1.loc[ix,'min'], df1.loc[ix,'max'], freq='S')
for ix in df1.index]
</code></pre>
<p>Or</p>
<pre><code>time_arr = [pd.date_range(x[0], x[1], freq='S') for x in df1.values]
Out[640]:
[DatetimeIndex(['2019-03-13 07:10:18', '2019-03-13 07:10:19',
'2019-03-13 07:10:20', '2019-03-13 07:10:21',
'2019-03-13 07:10:22', '2019-03-13 07:10:23',
'2019-03-13 07:10:24', '2019-03-13 07:10:25',
'2019-03-13 07:10:26', '2019-03-13 07:10:27',
...
'2019-03-13 08:12:14', '2019-03-13 08:12:15',
'2019-03-13 08:12:16', '2019-03-13 08:12:17',
'2019-03-13 08:12:18', '2019-03-13 08:12:19',
'2019-03-13 08:12:20', '2019-03-13 08:12:21',
'2019-03-13 08:12:22', '2019-03-13 08:12:23'],
dtype='datetime64[ns]', length=3726, freq='S'),
DatetimeIndex(['2019-03-15 10:35:53'], dtype='datetime64[ns]', freq='S'),
DatetimeIndex(['2019-03-20 08:12:23', '2019-03-20 08:12:24',
'2019-03-20 08:12:25', '2019-03-20 08:12:26',
'2019-03-20 08:12:27', '2019-03-20 08:12:28',
'2019-03-20 08:12:29', '2019-03-20 08:12:30',
'2019-03-20 08:12:31', '2019-03-20 08:12:32',
...
'2019-03-20 11:12:14', '2019-03-20 11:12:15',
'2019-03-20 11:12:16', '2019-03-20 11:12:17',
'2019-03-20 11:12:18', '2019-03-20 11:12:19',
'2019-03-20 11:12:20', '2019-03-20 11:12:21',
'2019-03-20 11:12:22', '2019-03-20 11:12:23'],
dtype='datetime64[ns]', length=10801, freq='S')]
</code></pre>
<hr>
<p><strong>Note</strong>: if you dataset is too big and you create range by seconds, you may run out of memory and crash.</p>
|
python|pandas|dataset|data-science
| 0 |
1,906,554 | 65,286,739 |
TKinter application runs perfectly fine in Visual Studio Code but crashes as an .exe
|
<p>I have a TKinter application which i put together in Visual Studio Code. It runs perfectly fine in Visual Studio Code and does not freeze or anything like that. Because I want my program to run on other pcs as well, I created an .exe file using pyinstaller by running</p>
<pre><code>pyinstaller --onefile -w "main.py"
</code></pre>
<p>This creates the desired .exe file without any problems. Sadly, when run as an .exe, my program crashes a lot; by crashing I mean that the window stops responding or just closes itself after some time.
I don't know if this is a common problem, but I honestly don't know what to do.
Apparently there are no problems in my code, because it runs perfectly fine in Visual Studio Code.</p>
<p>Is there anything I can do?</p>
<p><strong>Edit 1:</strong>
My window freezes around these lines of my code:
I am trying to create 4 scales with a for loop:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(4):
scale = tk.Scale(self.root, state = "disabled", from_ = 100, to = 0)
scale.place(rely=0.2,relx=i*0.25,relwidth=0.25, relheight=0.8)
self.scales.append(scale)
</code></pre>
<p>Also I try to put my scales into the list <code>self.scales</code> so I can work with my scales later on. The program creates the first three scales without any problem but often fails to create the 4th.</p>
<p><strong>EDIT 2:</strong>
I think I found a solution: maybe the for loop is just too fast for Tkinter and it can't create the GUI items quickly enough. I added</p>
<pre class="lang-py prettyprint-override"><code>time.sleep(0.1)
</code></pre>
<p>to my for loop, and for now this seems to work. But I don't really know if that's how it's supposed to be.</p>
<p><strong>EDIT 3:</strong>
Never mind, this did not solve the problem. The problem has something to do with creating the scales;
I don't really know what to do.</p>
|
<p>Check out <a href="https://docs.python.org/3/library/logging.html" rel="nofollow noreferrer">Logging</a>, and the <a href="https://docs.python.org/3/howto/logging.html" rel="nofollow noreferrer">tutorial</a> that comes with. After formatting to write to a file, you can put a few <code>logging.info()</code> here and there which can help determine the root cause of your error.</p>
<pre><code>import logging
logging.basicConfig(filename='log.txt',
filemode='a',
format='%(asctime)s %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=logging.INFO)
def my_func(x, y):
logging.info('Accessing Function: my_func()')
z = x + y
    logging.info(f'Function my_func() Successfully Completed, Variable Values: {x}, {y}, {z}')
my_func(2, 7)
</code></pre>
<p>Now you have a handy <code>log.txt</code> file which you can diagnose.</p>
<p><strong>EDIT</strong></p>
<p>Using the lines of code you provided, I tried out the following:</p>
<pre><code>import tkinter as tk
class Application(tk.Frame):
def __init__(self, root=None):
super().__init__(root)
self.root = root
self.root.geometry('720x450')
self.pack()
self.scales = []
self.create_scale()
def create_scale(self):
for i in range(4):
scale = tk.Scale(self.root, state="disabled", from_=100, to=0)
scale.place(rely=0.2, relx=i*0.25, relwidth=0.25, relheight=0.8)
self.scales.append(scale)
gui = tk.Tk()
app = Application(root=gui)
app.mainloop()
</code></pre>
<p>I made an exe with PyInstaller 4.1, Python 3.9 and all four scales are properly produced on my screen so the problem isn't with those specific lines of code but how they interact with the rest of your code.</p>
<p>I can try finding out the root of your problem but I would need to see your full code or a <a href="https://stackoverflow.com/help/minimal-reproducible-example">minimal, reproducible example</a> that successfully mimics the same problem you have.</p>
|
python|windows|tkinter|pyinstaller|exe
| 0 |
1,906,555 | 22,527,579 |
Adding data field to pivot table in python
|
<p>I found a python code to create pivot tables in Excel <a href="http://pythonexcels.com/automating-pivot-tables-with-python/" rel="nofollow">here</a>. And this is how values are added to the data field: </p>
<pre><code>wb.ActiveSheet.PivotTables(tname).AddDataField(
wb.ActiveSheet.PivotTables(tname).PivotFields(sumvalue[7:]),
sumvalue,
win32c.xlSum)
</code></pre>
<p>What do the parameters mean? I need to modify them to suit my application. Please help out.</p>
|
<p>Figured that out. <code>wb.ActiveSheet.PivotTables(tname).PivotFields(sumvalue[7:])</code> is the pivot field object you want to add as the data field. <code>sumvalue[7:]</code> can be any field name. <code>sumvalue</code> is the title that will show at the top of your pivot table. I believe <code>win32c.xlSum</code> is the aggregation constant that Excel uses (you can see it when you record a macro in Excel).</p>
|
python|excel
| 0 |
1,906,556 | 28,706,806 |
Pelican external image in restructured text
|
<p>I want to put an external image on my blog created with <a href="http://blog.getpelican.com/" rel="nofollow">Pelican</a>.</p>
<p>So i try:</p>
<pre><code>some text
.. image:: http://example.com/image.png
:alt: Alt text
</code></pre>
<p>After running "pelican -s pelicanconf.py" I've got error:</p>
<pre><code>ERROR: Better Fig. Error: image not found: /Users/kendriu/sources/pelican-blog/contenthttp:/example.com/image.png
ERROR: Could not process ./4_joe.rst
| [Errno 2] No such file or directory: u'/Users/kendriu/sources/pelican-blog/contenthttp:/example.com/image.png'
</code></pre>
<p>And no image in my post.</p>
<p>Questions is: How put external images in my blog.</p>
|
<p>Pelican expects paths to be within the content directory. In general, this is a good idea because it is not kind to pull content and use other people's bandwidth. </p>
<p>You will have to do this using the raw tag like so: </p>
<pre><code>.. raw:: html
<img src="http://example.com/image.jpg" alt="alt text">
</code></pre>
<p>The best solution is to download the image (assuming you have permission) and then include it in your content folder and then host it yourself, a la default behaviour.</p>
<h2>update</h2>
<p>For hosting images from google drive see this discussion: <a href="https://stackoverflow.com/questions/10311092/displaying-files-e-g-images-stored-in-google-drive-on-a-website">Displaying files (e.g. images) stored in Google Drive on a website</a></p>
<p>The solution by Mori works for me:</p>
<pre><code>https://drive.google.com/uc?id=FILE-ID
</code></pre>
<p>Which translates into this in the above example:</p>
<pre><code>.. raw:: html
<img src="https://drive.google.com/uc?id=0B9o1MNFt5ld1N3k1cm9tVnZxQjg">
</code></pre>
<p>Works like a charm using the Google Drive API.</p>
|
python|image|restructuredtext|pelican
| 0 |
1,906,557 | 14,512,845 |
Two Turtle at the Same Position, How to determine this?
|
<p>So here is the code I have. Is there a way to detect when the ISS and rocket meet (are at the same position), then destroy the window and create a new Tkinter window?</p>
<pre><code>from turtle import *
def move(thing, distance):
thing.circle(250, distance)
def main():
rocket = Turtle()
ISS = Turtle()
rocket.speed(10)
ISS.speed(10)
counter = 1
title("ISS")
screensize(750, 750)
ISS.hideturtle()
rocket.hideturtle()
ISS.penup()
ISS.left(90)
ISS.fd(250)
ISS.left(90)
ISS.showturtle()
ISS.pendown()
rocket.penup()
rocket.fd(250)
rocket.left(90)
rocket.showturtle()
rocket.pendown()
while counter == 1:
move(ISS, 3)
move(rocket, 4)
</code></pre>
<p>Thank You!!</p>
|
<p><a href="http://docs.python.org/2/library/turtle.html#turtle.position" rel="nofollow">http://docs.python.org/2/library/turtle.html#turtle.position</a></p>
<p>"Return the turtle’s current location (x,y) (as a Vec2D vector)."</p>
<p>However, due to floating point errors, you should consider them overlapping even if they are very slightly apart, e.g.</p>
<pre><code>epsilon = 0.000001
if abs(ISS.xcor() - rocket.xcor()) < epsilon and abs(ISS.ycor() - rocket.ycor()) < epsilon:
do stuff
</code></pre>
<p>If you want to pretend they are circles and ISS has a radius of r1 and the rocket has a radius of r2, then you can measure distance like:</p>
<pre><code>sqrt((ISS.xcor() - rocket.xcor())**2 + (ISS.ycor() - rocket.ycor())**2) < (r1 + r2)
</code></pre>
<p>If this is true, they are overlapping circles.</p>
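<p>For what it's worth, turtle also has a built-in <code>distance()</code> that accepts another turtle, so a minimal sketch of the main loop (the 10-pixel tolerance is an assumption) could be:</p>
<pre><code>while ISS.distance(rocket) > 10:
    move(ISS, 3)
    move(rocket, 4)
# they have met: destroy this window and open the new Tkinter window here
</code></pre>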
|
python|turtle-graphics
| 2 |
1,906,558 | 14,634,709 |
sorting a python multi-dimensional array?
|
<p>I declared a multidimensional array that can accept different data types using <code>numpy</code></p>
<p><code>count_array = numpy.empty((len(list), 2), dtype = numpy.object)</code></p>
<p>The first array has got strings and the second has got numbers. I want to sort both the columns on the basis of the numbers ...</p>
<p>Is there any easier way like <code>sort()</code> method to do this ?</p>
|
<p>Consider making your array a <em>structured array</em> instead:</p>
<pre><code>count_array = np.empty((len(list),), dtype=[('str', 'S10'), ('num', int)])
</code></pre>
<p>Then you can just sort by a specific key:</p>
<pre><code>np.sort(arr, order='num')
</code></pre>
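<p>A quick, self-contained usage example (the field values are made up):</p>
<pre><code>import numpy as np

arr = np.array([('b', 3), ('a', 1), ('c', 2)],
               dtype=[('str', 'S10'), ('num', int)])
print(np.sort(arr, order='num'))
# [(b'a', 1) (b'c', 2) (b'b', 3)] -- rows stay paired while sorting by 'num'
</code></pre>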
|
python|numpy
| 4 |
1,906,559 | 41,554,225 |
Quadratic timecomplexity: why is the following code calculated this way?
|
<p>Could someone please explain why the complexity of the following code is calculated this way: 1+2+3+4+...+(n-2)+(n-1)+n = O(n^2)?</p>
<pre><code>def CalculateAveragesTotElementOverzicht(inputlist):
resultlist = [0]*len(inputlist)
for i in range(0,len(inputlist)):
som = 0
for j in range(0,i+1):
som += inputlist[j]
average = som/(i+1)
resultlist[i] = average
return resultlist
</code></pre>
|
<p>It's not <code>O(n)</code> but O(n<sup>2</sup>). That's because you have two loops: the outer one iterates from <code>0</code> to <code>n</code>, and the inner one from <code>0</code> to the loop variable <code>i</code>, which across the iterations generates the following ranges:</p>
<pre><code>range(0,0+1) # 1 iteration
range(0,1+1) # 2 iterations
range(0,2+1) # 3 iterations
...
</code></pre>
<p>Therefore, in the end you'll have 1 + 2 + 3 + ... + n iterations, whose complexity computes as follows:</p>
<p>n * (n+1) / 2 = 1/2 n<sup>2</sup> + 1/2n = O(n<sup>2</sup>)</p>
|
python|python-2.7|time-complexity
| 8 |
1,906,560 | 6,439,978 |
pip install for Multiple Python Distributions on Mac
|
<p>I am fine having multiple distributions of Python on my system, given the advice found <a href="https://stackoverflow.com/questions/1218891/multiple-versions-of-python-on-os-x-leopard">here</a>.</p>
<p>However: I cannot get <code>easy_install</code> nor <code>pip install</code> to install to the distribution associated with <code>/usr/bin/python</code> on Mac. They will only install modules to the distribution associated with <code>/Library/Python/2.6/</code>.</p>
<p>This is a problem because both my default <code>python</code> calls and XCode compiles are associated with <code>/usr/bin/python</code>.</p>
<p>So, for example, when I try to <code>pip install appscript</code>, I get back a cheeky</p>
<p><code>Requirements already satisfied</code></p>
<p>But, then, when I open up <code>python</code> or XCode and try to <code>import appscript</code>, I get </p>
<p><code>ImportError: No module named appscript</code></p>
<p>How do I force <code>pip</code> to install to whatever distribution is associated with <code>/usr/bin/python</code>?</p>
|
<p>It turned out that <code>easy_install</code> (and <code>pip</code>) was not associated with Python 2.7 (the version used by my default <code>python</code> and XCode). Per <a href="https://stackoverflow.com/users/60711/vartec">vartec</a>'s instructions on an Answer that has now been deleted, I downloaded and installed <code>easy_install</code> for the correct version of python:</p>
<p><code>sh setuptools-0.6c11-py2.7.egg</code></p>
<p>(<code>easy_install</code> is part of <code>setuptools</code>)</p>
<p>After doing this, my default call to <code>easy_install</code> suddenly switched to installing packages for the distribution used by <code>python</code> and XCode.</p>
<p>Both <code>python</code> and XCode have access to <code>appscript</code> now, so whatever, I guess. Thanks for the help everyone, especially <a href="https://stackoverflow.com/users/60711/vartec">vartec</a>.</p>
|
python|macos|pip|easy-install
| 4 |
1,906,561 | 6,714,375 |
Using list comprehension instead of for loop when working with Django QuerySets
|
<p>I'm trying to tweak an application that is suffering in speed department. Because of that, I've started converting all the for-loop statements to list comprehensions when possible.</p>
<p>Currently, I'm working on a function that needs to iterate through a dictionary of Django querysets. The old code uses for-loop statement to iterate through this and it works fine. My code that uses list comprehension returns django querysets instead of my model object.</p>
<p>Here is the code:</p>
<pre><code>def get_children(parent):
# The following works
children = []
for value in get_data_map(parent).itervalues():
children += list(value)
# This part doesn't work as intended.
booms = [value for value in get_data_map(parent).itervalues() if value]
import pdb
pdb.set_trace()
(Pdb) type(children[0])
<class 'site.myapp.models.Children'>
(Pdb) type(booms[0])
<class 'django.db.models.query.QuerySet'>
</code></pre>
<p>Note that get_data_map returns a dictionary whose values are <code><class 'django.db.models.query.QuerySet'></code></p>
<p>This part of code is one of the most time consuming part of the application. If I get this working on list comprehensions, the application speed will hopefully get faster by two times.</p>
<p>Any idea how I can speed up this part of the code?</p>
|
<p>Your problem isn't <code>for</code> loops vs. list comprehensions (better would be generators). Your problem is <em>way too many queries to the database</em>.</p>
<p>Since you're trying to get one list, you should be trying to get it from one query. If you explained what was in your typical QuerySet, we could show you how best to merge them. Maybe use <code>OR</code> merges on Q objects. Or maybe building up a set of integers and feeding it to an <code>__in=</code> filter.</p>
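<p>As a hedged sketch of the single-query idea (this assumes every queryset in the map is over the same model, so they can be combined with the <code>|</code> operator into one query):</p>
<pre><code>import operator

querysets = get_data_map(parent).itervalues()
# OR the querysets together; Django builds a single SQL query from the result
children = list(reduce(operator.or_, querysets))
</code></pre>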
|
python|django|list|list-comprehension|django-queryset
| 5 |
1,906,562 | 57,152,697 |
Python float64index to float
|
<p>I am dealing with some pandas dataframes where all elements are of type <code>Float64Index</code>. Everything I try to convert those elements to a simple <code>float</code> fails.</p>
<p>I have tried <code>pandas.to_numeric</code> and <code>astype(float)</code>.</p>
<p>Here is a reproducible dataframe that can be recreated from the following <code>dict</code></p>
<pre><code>{0: {44: 492.0, 45: 492.0},
1: {44: Float64Index([506.76], dtype='float64'),
45: Float64Index([506.76], dtype='float64')},
2: {44: Float64Index([516.8952], dtype='float64'),
45: Float64Index([516.8952], dtype='float64')},
3: {44: Float64Index([527.233104], dtype='float64'),
45: Float64Index([527.233104], dtype='float64')},
4: {44: Float64Index([537.77776608], dtype='float64'),
45: Float64Index([537.77776608], dtype='float64')}}
</code></pre>
<p>I would like to have <code>dtypes</code> return <code>float</code> for all the columns.</p>
|
<p>Your data frame looks a little crazy, but here's a try:</p>
<pre><code>df = pd.DataFrame({0z: {44: 492.0, 45: 492.0},
1: {44: pd.Float64Index([506.76], dtype='float64'),
45: pd.Float64Index([506.76], dtype='float64')},
2: {44: pd.Float64Index([516.8952], dtype='float64'),
45: pd.Float64Index([516.8952], dtype='float64')},
3: {44: pd.Float64Index([527.233104], dtype='float64'),
45: pd.Float64Index([527.233104], dtype='float64')},
4: {44: pd.Float64Index([537.77776608], dtype='float64'),
45: pd.Float64Index([537.77776608], dtype='float64')}}
)
df.stack().apply(lambda x: pd.Series(x))[0].unstack()
</code></pre>
<p>Output:</p>
<pre><code> 0 1 2 3 4
44 492.0 506.76 516.8952 527.233104 537.777766
45 492.0 506.76 516.8952 527.233104 537.777766
</code></pre>
|
python|pandas
| 1 |
1,906,563 | 54,212,508 |
Decompress .xls file in python
|
<p>I'm searching for a method to decompress/unzip .xls files in python. By opening Excel files with 7-Zip you can see the directories I'd like to extract.</p>
<p>I already tried renaming the Excel to ".zip" and then extracting it</p>
<pre><code>myExcelFile = zipfile.ZipFile("myExcel.zip")
myExcelFile.extractall()
</code></pre>
<p>but it throws</p>
<pre><code>zipfile.BadZipFile: File is not a zip file
</code></pre>
<p><a href="https://i.stack.imgur.com/ogbAE.png" rel="nofollow noreferrer">.xls in 7-Zip</a></p>
|
<blockquote>
<p>.xls files use the BIFF format. .xlsx files use Office Open XML, which is a zipped XML format. BIFF is not a zipped format; files using that format are not recognized by zip libraries. – shmee </p>
</blockquote>
<p>a conversion to .xlsx is the <a href="https://stackoverflow.com/questions/9918646/how-to-convert-xls-to-xlsx">solution</a></p>
<pre><code>import win32com.client as win32
fname = "full+path+to+xls_file"
excel = win32.gencache.EnsureDispatch('Excel.Application')
wb = excel.Workbooks.Open(fname)
wb.SaveAs(fname+"x", FileFormat = 51) #FileFormat = 51 is for .xlsx extension
wb.Close() #FileFormat = 56 is for .xls extension
excel.Application.Quit()
</code></pre>
|
python|excel|compression|unzip|zip
| 2 |
1,906,564 | 25,748,151 |
Python force a condition
|
<p>Is there a way to force a condition to be true in Python? I've seen it done in Haskell before and am wondering if you can do it in Python. For example:</p>
<pre><code>>>> 2+2==5
True
</code></pre>
|
<p>Well you just need to set four equal to five.</p>
<pre><code>import ctypes
def deref(addr, typ):
return ctypes.cast(addr, ctypes.POINTER(typ))
deref(id(4), ctypes.c_int)[6] = 5
2 + 2
#>>> 5
2 + 2 == 5
#>>> True
</code></pre>
<p>Obviously...</p>
|
python
| 8 |
1,906,565 | 44,483,695 |
TF run session: operation precedence
|
<p>I do not understand why when running initialisation of a Variable as well as the assign method in one run call, the value does not get assigned? Is it something to do with parallel execution, or is there no operation precedence? TF <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/client/session_management" rel="nofollow noreferrer">session management</a> does not explain it.</p>
<p>Example:</p>
<pre><code>import tensorflow as tf
W = tf.Variable(10)
with tf.Session() as sess:
sess.run([W.initializer, W.assign(20)])
print W.eval() #
>> returns 10, but I would expect 20
#running it separately:
sess.run(W.initializer)
sess.run(W.assign(20))
print W.eval()
>> returns 20
</code></pre>
|
<p>I suspect this happens because, in the first example, <code>W.initializer</code> is executed after <code>W.assign(20)</code>, overwriting its value back to the initial state.</p>
<p>As you can see, changing the order in the second example:</p>
<pre><code>import tensorflow as tf
W = tf.Variable(10)
with tf.Session() as sess:
sess.run(W.assign(20))
sess.run(W.initializer)
print W.eval()
</code></pre>
<p>gives you the same results as you got in the first one.</p>
|
python|tensorflow
| 0 |
1,906,566 | 44,675,918 |
Assigning lists to eachother in Python
|
<p>Can someone explain to me why this Python code prints 6 instead of -10?</p>
<p>Why does s2/list2 not change throughout this piece of code?</p>
<pre><code>def f(s1,s2):
s2 = s1
s1[2] = -7
s1 = s2
s2[2] = -10
list1 = [1,2,3]
list2 = [4,5,6]
f(list1,list2)
print(list2[2])
</code></pre>
|
<p>After the line <code>s2 = s1</code> inside the function, the <code>s2</code> you passed in (i.e. <code>list2</code>) is no longer reachable inside the function.</p>
<p>That line rebinds the local name <code>s2</code> to the same list object as <code>s1</code>, so every later mutation through <code>s2</code> affects <code>list1</code> and never touches <code>list2</code>.</p>
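<p>Tracing it step by step makes the aliasing visible:</p>
<pre><code># inside f(list1, list2):
# s2 = s1      -> both names now point at list1
# s1[2] = -7   -> list1 becomes [1, 2, -7]
# s1 = s2      -> a no-op: both names already point at list1
# s2[2] = -10  -> list1 becomes [1, 2, -10]
# list2 is never touched, so list2[2] is still 6
</code></pre>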
|
python|list|python-3.x
| 1 |
1,906,567 | 20,487,618 |
Dictionary output is not as expected in PYTHON
|
<p>Here <code>prsnobj.result</code> contains the data coming from a query. I want the expected output below, but it's not what I get. I am using Python 2.5.</p>
<pre><code>for row in prsnobj.result:
ansdb = {row[0] : row[1]}
print ansdb
</code></pre>
<p>Actual: {1L: 3L} {2L: 4L} {3L: 2L} {4L: 2L} {5L: 2L}</p>
<p>Expected: {1: 3, 2: 4, 3: 2, 4: 2, 5: 2} </p>
|
<p>Integers fetched from a database are commonly returned as <code>Long</code> by the various database interfaces.</p>
<p>If you want the output you're after:</p>
<pre><code>ansdb = {}
for row in prsnobj.result:
ansdb[int(row[0])] = int(row[1])
</code></pre>
<p>Perhaps the <a href="http://mysql-python.sourceforge.net/MySQLdb.html" rel="nofollow">MySQL documentation</a> for Python could help clear things up</p>
|
python|dictionary
| 1 |
1,906,568 | 35,843,704 |
jinja/ansible convert string to boolean
|
<p>I need a simple thing: if the variable is <code>false</code> or an empty string, then evaluate to <code>false</code>; otherwise evaluate to <code>true</code>.</p>
<p>I tried <code>bool(var)</code> but I'm getting:</p>
<pre><code>UndefinedError: 'bool' is undefined
</code></pre>
<p>Then I tried <code>var | bool</code>, but even though var is non-empty, that evaluates to <code>false</code>. How do I make that condition work?</p>
|
<p>I've found a possible solution in ruby style:</p>
<pre><code> when: not not var
</code></pre>
<p>But it's rather ugly. I forgot to say that without <code>not not</code> the var evaluates to a string, so Ansible errors out. I hope for a better answer, so please add another answer if you have one.</p>
|
python-3.x|ansible|jinja2
| 4 |
1,906,569 | 60,931,790 |
Big difference between val-acc and prediction accuracy in Keras Neural Network
|
<p>I have a dataset that I used for making NN model in Keras, i took 2000 rows from that dataset to have them as validation data, those 2000 rows should be added in <code>.predict</code> function.</p>
<p>I wrote code for a Keras NN and so far it works well, but I noticed something very strange. It gives me very good accuracy of more than 83%, loss around 0.12, but when I make a prediction with unseen data (those 2000 rows), it only predicts correctly 65% of the time on average.
When I add a Dropout layer, it only decreases accuracy.</p>
<p>Then I have added <code>EarlyStopping</code>, and it gave me accuracy around 86%, loss is around 0.10, but still when I make prediction with unseen data, I get final prediction accuracy of 67%.</p>
<p>Does this mean that the model made a correct prediction in 87% of situations? I'm going with the logic that if I feed 100 samples to my <code>.predict</code> function, the program should make a good prediction for about 87/100 samples, or somewhere in that range (let's say more than 80). I have tried feeding 100, 500, 1000, 1500 and 2000 samples to my <code>.predict</code> function, and it always predicts correctly for 65-68% of the samples.</p>
<p>Why is that? Am I doing something wrong?
I have tried playing with the number of layers, the number of nodes, different activation functions and different optimizers, but it only changes the results by 1-2%.
My dataset looks like this:</p>
<pre><code>DataFrame shape (59249, 33)
x_train shape (47399, 32)
y_train shape (47399,)
x_test shape (11850, 32)
y_test shape (11850,)
testing_features shape (1000, 32)
</code></pre>
<p>This is my NN model:</p>
<pre><code>model = Sequential()
model.add(Dense(64, input_dim = x_train.shape[1], activation = 'relu')) # input layer requires input_dim param
model.add(Dropout(0.2))
model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.2))
model.add(Dense(16, activation = 'relu'))
model.add(Dense(1, activation='sigmoid')) # sigmoid instead of relu for final probability between 0 and 1
# compile the model, adam gradient descent (optimized)
model.compile(loss="binary_crossentropy", optimizer= "adam", metrics=['accuracy'])
# call the function to fit to the data training the network)
es = EarlyStopping(monitor='val_loss', min_delta=0.0, patience=1, verbose=0, mode='auto')
model.fit(x_train, y_train, epochs = 15, shuffle = True, batch_size=32, validation_data=(x_test, y_test), verbose=2, callbacks=[es])
scores = model.evaluate(x_test, y_test)
print(model.metrics_names[0], round(scores[0]*100,2), model.metrics_names[1], round(scores[1]*100,2))
</code></pre>
<p>These are the results:</p>
<pre><code>Train on 47399 samples, validate on 11850 samples
Epoch 1/15
- 25s - loss: 0.3648 - acc: 0.8451 - val_loss: 0.2825 - val_acc: 0.8756
Epoch 2/15
- 9s - loss: 0.2949 - acc: 0.8689 - val_loss: 0.2566 - val_acc: 0.8797
Epoch 3/15
- 9s - loss: 0.2741 - acc: 0.8773 - val_loss: 0.2468 - val_acc: 0.8849
Epoch 4/15
- 9s - loss: 0.2626 - acc: 0.8816 - val_loss: 0.2416 - val_acc: 0.8845
Epoch 5/15
- 10s - loss: 0.2566 - acc: 0.8827 - val_loss: 0.2401 - val_acc: 0.8867
Epoch 6/15
- 8s - loss: 0.2503 - acc: 0.8858 - val_loss: 0.2364 - val_acc: 0.8893
Epoch 7/15
- 9s - loss: 0.2480 - acc: 0.8873 - val_loss: 0.2321 - val_acc: 0.8895
Epoch 8/15
- 9s - loss: 0.2450 - acc: 0.8886 - val_loss: 0.2357 - val_acc: 0.8888
11850/11850 [==============================] - 2s 173us/step
loss 23.57 acc 88.88
</code></pre>
<p>And this is for prediction:</p>
<pre><code>#testing_features are 2000 rows that i extracted from dataset (these samples are not used in training, this is separate dataset thats imported)
prediction = model.predict(testing_features , batch_size=32)
res = []
for p in prediction:
res.append(p[0].round(0))
# Accuracy with sklearn - also much lower
acc_score = accuracy_score(testing_results, res)
print("Sklearn acc", acc_score)
result_df = pd.DataFrame({"label":testing_results,
"prediction":res})
result_df["prediction"] = result_df["prediction"].astype(int)
s = 0
for x,y in zip(result_df["label"], result_df["prediction"]):
if x == y:
s+=1
print(s,"/",len(result_df))
acc = s*100/len(result_df)
print('TOTAL ACC:', round(acc,2))
</code></pre>
<p>The problem is that now I get 52% accuracy with sklearn and 52% with <code>my_acc</code>.
Why do I get such low accuracy here, when the validation accuracy is reported as much higher?</p>
|
<p>The training data you posted gives high validation accuracy, so I'm a bit confused as to where you get that 65% from, but in general when your model performs much better on training data than on unseen data, that means you're <a href="https://en.wikipedia.org/wiki/Overfitting" rel="nofollow noreferrer">over fitting</a>. This is a big and recurring problem in machine learning, and there is no method guaranteed to prevent this, but there are a couple of things you can try:</p>
<ul>
<li>regularizing the weights of your network, e.g. using l2 regularization (see the sketch after this list)</li>
<li>using stochastic regularization techniques such as drop-out during training</li>
<li><a href="https://en.wikipedia.org/wiki/Early_stopping" rel="nofollow noreferrer">early stopping</a></li>
<li>reducing model complexity (but you say you've already tried this)</li>
</ul>
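<p>As a minimal sketch of the first suggestion, l2 weight regularization on a Dense layer looks like this in Keras (the 0.01 penalty factor is an assumption you would tune):</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense
from keras import regularizers

model = Sequential()
# the l2 penalty discourages large weights, which tends to reduce over fitting
model.add(Dense(64, input_dim=32, activation='relu',
                kernel_regularizer=regularizers.l2(0.01)))
</code></pre>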
|
python|machine-learning|keras|neural-network
| 2 |
1,906,570 | 21,096,172 |
How can i jump to next page in Scrapy Rules
|
<p>I have set Rules to get the next pages from the start_url, but it's not working, it only crawls the start_urls page, and the links in that page (with parseLinks). It doesn't go to the next page set in Rules.</p>
<p>any help ?</p>
<pre><code>from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy import log
from urlparse import urlparse
from urlparse import urljoin
from scrapy.http import Request
class MySpider(CrawlSpider):
name = 'testes2'
allowed_domains = ['example.com']
start_urls = [
'http://www.example.com/pesquisa/filtro/?tipo=0&local=0'
]
rules = (Rule(SgmlLinkExtractor(restrict_xpaths=('//a[@id="seguinte"]/@href')), follow=True),)
def parse(self, response):
sel = Selector(response)
urls = sel.xpath('//div[@id="btReserve"]/../@href').extract()
for url in urls:
url = urljoin(response.url, url)
self.log('URLS: %s' % url)
yield Request(url, callback = self.parseLinks)
def parseLinks(self, response):
sel = Selector(response)
titulo = sel.xpath('h1/text()').extract()
morada = sel.xpath('//div[@class="MORADA"]/text()').extract()
email = sel.xpath('//a[@class="sendMail"][1]/text()')[0].extract()
url = sel.xpath('//div[@class="contentContacto sendUrl"]/a/text()').extract()
telefone = sel.xpath('//div[@class="telefone"]/div[@class="contentContacto"]/text()').extract()
fax = sel.xpath('//div[@class="fax"]/div[@class="contentContacto"]/text()').extract()
descricao = sel.xpath('//div[@id="tbDescricao"]/p/text()').extract()
gps = sel.xpath('//td[@class="sendGps"]/@style').extract()
print titulo, email, morada
</code></pre>
|
<p>You should not override the <code>parse</code> method from <code>CrawlSpider</code>, otherwise the <code>Rule</code>s will not be followed.</p>
<p>See the warning at <a href="http://doc.scrapy.org/en/latest/topics/spiders.html#crawling-rules" rel="nofollow noreferrer">http://doc.scrapy.org/en/latest/topics/spiders.html#crawling-rules</a></p>
<blockquote>
<p>When writing crawl spider rules, avoid using parse as callback, since the CrawlSpider uses the parse method itself to implement its logic. So if you override the parse method, the crawl spider will no longer work.</p>
</blockquote>
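<p>A minimal sketch of the usual fix: rename your method and point the rule at it with <code>callback</code> (the method name is arbitrary). Note that <code>restrict_xpaths</code> should select the link element itself, not its <code>@href</code> attribute:</p>
<pre><code>rules = (
    Rule(SgmlLinkExtractor(restrict_xpaths=('//a[@id="seguinte"]',)),
         callback='parse_start_page', follow=True),
)

def parse_start_page(self, response):
    # same body as your old parse(); CrawlSpider's own parse() stays intact
    ...
</code></pre>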
|
python|web-scraping|web-crawler|scrapy
| 5 |
1,906,571 | 21,233,089 |
How do I use the @shared_task decorator for class based tasks?
|
<p>As seen in the <a href="http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html#using-the-shared-task-decorator" rel="nofollow noreferrer">documentation</a> the <code>@shared_task</code> decorator lets you create tasks without having any concrete app instance. The given examples show how to decorate a function based task.</p>
<p>Can you help me understand how to decorate a class based task?</p>
|
<p>Quoting Ask from a celery-users thread where he explained the difference between @task and @shared_task. <a href="https://groups.google.com/forum/#!topic/celery-users/XiSDiNjBR6k" rel="noreferrer">Here is a link to the thread</a>.</p>
<p>TL;DR; @shared_task will create the independent instance of the task for each app, making task reusable. </p>
<p>There is a difference between @task(shared=True) and @shared_task </p>
<p>The task decorator will share tasks between apps by default so that if you do: </p>
<pre><code>app1 = Celery()
@app1.task
def test():
pass
app2 = Celery()
</code></pre>
<p>the test task will be registered in both apps: </p>
<pre><code> assert app1.tasks[test.name]
assert app2.tasks[test.name]
</code></pre>
<p>However, the name ‘test’ will always refer to the instance bound to the ‘app1’
app, so it will be configured using app1’s configuration: </p>
<pre><code>assert test.app is app1
</code></pre>
<p>The @shared_task decorator returns a proxy that always uses the task instance
in the current_app: </p>
<pre><code>app1 = Celery()
@shared_task
def test():
pass
assert test.app is app1
app2 = Celery()
assert test.app is app2
</code></pre>
<p>This makes the @shared_task decorator useful for libraries and reusable apps,
since they will not have access to the app of the user. </p>
<p>In addition the default Django example project defines the app instance
as part of the Django project: </p>
<pre><code>from proj.celery import app
</code></pre>
<p>and it makes no sense for a Django reusable app to depend on the project module,
as then it would not be reusable anymore. </p>
|
python|celery
| 38 |
1,906,572 | 70,083,003 |
list.clear() is modifying the existing data in python
|
<p>I am trying to convert all the elements of a given list into floats, but the output is surprising me. I iterate through the elements using two for loops, convert every element to float, and add the converted element to another list "c". Once the inner loop finishes I append the list c to b and clear c for the next iteration. While clearing "c", the content of b also gets removed and replaced with the next iteration's elements, and finally b is appended with the current contents of c again. So b ends up holding only the last iteration's elements.</p>
<p>I thought list.clear() affects the referenced object when we assign it to another variable, so whatever we do to it also affects the other reference (clearing c affects b). This can be avoided if we reinitialize c, i.e. c=[], and simpler approaches like using range give better results.
However, I would like a proper explanation of this clear() behaviour.</p>
<pre><code>a=[['4','5',6],['1','2','3']]
b=[]
c=[]
for i in a:
c.clear()
# c=[]
for j in i:
c.append(float(j))
b.append(c)
print(b)
</code></pre>
|
<p>Your <code>b.append(c)</code> stores a reference to the very same list object, not a snapshot of its contents, so clearing <code>c</code> also empties what <code>b</code> holds. The straightforward fix is to append a copy instead: <code>b.append(c.copy())</code>. It will look like this:</p>
<pre><code>a = [['4', '5', 6], ['1', '2', '3']]
b = []
c = []
for i in a:
c.clear()
# c=[]
for j in i:
c.append(float(j))
b.append(c.copy())
print(b)
</code></pre>
<p>However, this makes an extra pass to copy each sublist. If you just want a single flat list of floats at the end, you can use itertools, for example:</p>
<pre><code>import itertools
a = [['4', '5', 6], ['1', '2', '3']]
b=[]
for item in itertools.chain.from_iterable(a):
b.append(float(item))
print(b)
</code></pre>
<p>There are simpler solutions than using two nested loops; add a comment if you need more suggestions. I mentioned itertools in order to introduce you to it, as it comes in very handy, but this problem can be solved without itertools too.</p>
|
python|list
| 1 |
1,906,573 | 53,765,942 |
Add Color To 3D Scatter Plot
|
<p>I have a list of x,y,z points and a list of values assigned to each 3D point.
Now the question is: how can I color each point in a 3D scatter plot according to the list of values?</p>
<p>The colors should be typical engineering -> RGB -> lowest blue to highest red</p>
<p>Thanks a lot</p>
<p>Basically I am searching for an equivalent to: scatter3(X,Y,Z,S,C)
See here: <a href="https://ch.mathworks.com/help/matlab/ref/scatter3.html" rel="nofollow noreferrer">https://ch.mathworks.com/help/matlab/ref/scatter3.html</a></p>
<p>I tried:</p>
<pre><code>col = [i/max(values)*255 for i in values]
ax.scatter(sequence_containing_x_vals, sequence_containing_y_vals, sequence_containing_z_vals,c=col, marker='o')
pyplot.show()
</code></pre>
<p>...but I don't get the desired result.</p>
|
<p>Note the recommended way of producing scatters with colors is to supply the values directly to <code>c</code>:</p>
<pre><code>ax.scatter(x, y, z, c=values, marker='o', cmap="Spectral")
</code></pre>
<p>Minimal example:</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
x = y = z = values = [1,2,3,4,5]
ax = plt.subplot(projection="3d")
sc = ax.scatter(x, y, z, c=values, marker='o', s=100, cmap="Spectral")
plt.colorbar(sc)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/aBgJJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aBgJJ.png" alt="enter image description here"></a></p>
|
python-3.x|matplotlib|colors
| 1 |
1,906,574 | 54,820,744 |
Python: Run-Length Encoding
|
<p>I'm getting an error if the input contains a character without a number attached to it. For example, if the user were to input "a2bc", the output should be "aabc"; I have to handle that run-length format. The decode function works if the input is "a2b1c1", but it doesn't recognize single characters without a count. I played with the conditions and the debugger, but couldn't get it working.</p>
<p>The code displayed below were my attempts. I commented on the block where I tried to fix my problems.</p>
<pre><code>def decode(user_input):
if not user_input:
return ""
else:
char = user_input[0]
num = user_input[1]
if num.isdigit():
result = char * int(num)
# elif num.isalpha():
# # this should skip to the next two characters
else:
result = char * int(num)
return result + decode(user_input[2:])
test1 = decode("a2b3c1")
test2 = decode("a2b3c")
print(test1)
print(test2)
</code></pre>
<p>(Note: the output for test2 should be <code>"aabbbc"</code>)
<br>Thank you so much. </p>
|
<p>You should advance by 1 instead of 2 when the next character is not a digit (i.e. the 1 is implicit):</p>
<pre><code>def decode(user_input):
    if len(user_input) < 2:
        return user_input
    multiplier, skip = (int(user_input[1]), 2) if user_input[1].isdigit() else (1, 1)
return user_input[0] * multiplier + decode(user_input[skip:])
</code></pre>
<p><em>note that doing this recursively will constrain the size of the input string that you can process because of the maximum recursion limit.</em></p>
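<p>If that limit is a concern, an iterative sketch of the same idea (assuming, like the recursive version, single-digit counts) avoids recursion entirely:</p>
<pre><code>def decode_iter(user_input):
    out = []
    i = 0
    while i < len(user_input):
        char = user_input[i]
        if i + 1 < len(user_input) and user_input[i + 1].isdigit():
            out.append(char * int(user_input[i + 1]))  # explicit count
            i += 2
        else:
            out.append(char)  # implicit count of 1
            i += 1
    return ''.join(out)
</code></pre>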
|
python|python-3.x|string|sequence|run-length-encoding
| 1 |
1,906,575 | 33,259,968 |
Python partial equivalent in Javascript / jQuery
|
<p>What is the equivalent of Python's <a href="https://docs.python.org/2/library/functools.html#functools.partial" rel="noreferrer">functools.partial</a> in JavaScript or jQuery?</p>
|
<h2>ES6 solution</h2>
<p>Here is a simple solution that works for <code>ES6</code>. However, since javascript doesn't support named arguments, you won't be able to skip arguments when creating a partial.</p>
<pre><code>const partial = (func, ...args) => (...rest) => func(...args, ...rest);
</code></pre>
<p><strong>Example</strong></p>
<pre><code>const greet = (greeting, person) => `${greeting}, ${person}!`;
const greet_hello = partial(greet, "Hello");
>>> greet_hello("Universe");
"Hello, Universe!"
</code></pre>
|
javascript|jquery|python
| 6 |
1,906,576 | 33,173,418 |
How to cut a very "deep" json or dictionary in Python?
|
<p>I have a json object which is very deep. In other words I have a dictionary, containing dictionaries containing dictionaries and so on many times. So, one can imagine it as a huge tree in which some nodes are very far from the root node.</p>
<p>Now I would like to cut this tree so that I have in it only nodes that are separated not more than N steps from the root. Is there a simple way to do it?</p>
<p>For example if I have:</p>
<pre><code>{'a':{'d':{'e':'f', 'l':'m'}}, 'b':'c', 'w':{'x':{'z':'y'}}}
</code></pre>
<p>And I want to keep only nodes that are 2 steps from the root, I should get:</p>
<pre><code>{'a':{'d':'o1'}, 'b':'c', 'w':{'x':'o2'}}
</code></pre>
<p>So, I just replace the far standing dictionaries by single values.</p>
|
<p>Given that your data is very deep, you may very well run into stack limits with recursion. Here's an iterative approach that you might be able to clean up and polish a bit:</p>
<pre><code>import collections
def cut(dict_, maxdepth, replaced_with=None):
"""Cuts the dictionary at the specified depth.
If maxdepth is n, then only n levels of keys are kept.
"""
queue = collections.deque([(dict_, 0)])
# invariant: every entry in the queue is a dictionary
while queue:
parent, depth = queue.popleft()
for key, child in parent.items():
if isinstance(child, dict):
if depth == maxdepth - 1:
parent[key] = replaced_with
else:
queue.append((child, depth+1))
</code></pre>
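<p>Applied to the example from the question (note that the function mutates the dictionary in place):</p>
<pre><code>d = {'a': {'d': {'e': 'f', 'l': 'm'}}, 'b': 'c', 'w': {'x': {'z': 'y'}}}
cut(d, 2, replaced_with='o')
print(d)
# {'a': {'d': 'o'}, 'b': 'c', 'w': {'x': 'o'}}
</code></pre>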
|
python|json|dictionary|recursion
| 5 |
1,906,577 | 73,648,369 |
Kivy for mobile and use drive files
|
<p>I want to access a Google Drive file from my mobile, in an APK made with Kivy. On PC it works perfectly, but not on mobile: it crashes while loading. This is my code:</p>
<pre><code>from pydrive2.auth import GoogleAuth
from pydrive2.drive import GoogleDrive
gauth = GoogleAuth()
# Try to load saved client credentials
gauth.LoadCredentialsFile("mycreds.txt")
if gauth.credentials is None:
# Authenticate if they're not there
# This is what solved the issues:
gauth.GetFlow()
gauth.flow.params.update({'access_type': 'offline'})
gauth.flow.params.update({'approval_Prompt': 'force'})
gauth.LocalWebserverAuth()
elif gauth.access_token_expired:
# Authenticate if they're not there
gauth.LocalWebserverAuth()
elif gauth.access_token_expired:
# Refresh them if expired
gauth.Refresh()
# gauth.LocalWebserverAuth()
else:
# Initialize the saved creds
gauth.Authorize()
# gauth.LocalWebserverAuth()
# Save the current credentials to a file
gauth.SaveCredentialsFile("mycreds.txt")
drive = GoogleDrive(gauth)
#print(type(self.drive))
# PUT YOUR FILE ID AND ANY-NAME HERE
file_id = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' #(my id is ocult here)
file_name = "compra.xlsx" # You can use existing drive file name / totally different name
# Get contents of your drive file into the desired file. Here contents are stored in the file specified by 'file_name'
downloaded = drive.CreateFile({'id': file_id})
downloaded.GetContentFile(file_name)
</code></pre>
<p>is posible to do run in android?
thanks!!</p>
|
<p>You may need to request <code>INTERNET</code> permissions. Try something like this at the top of your main python file:</p>
<pre><code>from kivy.utils import platform
if platform == 'android':
perms = ['android.permission.INTERNET']
from android.permissions import request_permissions
request_permissions(perms, None)
</code></pre>
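<p>If the APK is built with buildozer (an assumption on my part), the permission can also be declared in <code>buildozer.spec</code>:</p>
<pre><code>android.permissions = INTERNET
</code></pre>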
|
python|mobile|kivy|google-oauth|pydrive
| 0 |
1,906,578 | 13,187,619 |
Sublist in a List
|
<p>Created a list <code>flowers</code></p>
<pre><code>>>> flowers = ['rose','bougainvillea','yucca','marigold','daylilly','lilly of the valley']
</code></pre>
<p>Then,</p>
<p>I had to assign to list <code>thorny</code> the sublist of list <code>flowers</code> consisting of the first three objects in the list.</p>
<p>This is what I tried:</p>
<pre><code>>>> thorny = []
>>> thorny = flowers[1-3]
>>> thorny
'daylilly'
>>> thorny = flowers[0-2]
>>> thorny
'daylilly'
>>> flowers[0,1,2]
Traceback (most recent call last):
File "<pyshell#76>", line 1, in <module>
flowers[0,1,2]
TypeError: list indices must be integers, not tuple
>>> thorny = [flowers[0] + ' ,' + flowers[1] + ' ,' + flowers[2]]
>>> thorny
['rose ,bougainvillea ,yucca']
</code></pre>
<p>How can I get just the first 3 objects of list flowers, while maintaining the look of a list inside a list? </p>
|
<p>Slicing notation is <code>[:3]</code> not <code>[0-3]</code>:</p>
<pre><code>In [1]: flowers = ['rose','bougainvillea','yucca','marigold','daylilly','lilly of the valley']
In [2]: thorny=flowers[:3]
In [3]: thorny
Out[3]: ['rose', 'bougainvillea', 'yucca']
</code></pre>
|
python|list|sublist
| 17 |
1,906,579 | 21,653,554 |
Number of active threads
|
<p>Python version: '2.7.3 (default, Apr 10 2013, 06:20:15) \n[GCC 4.6.3]'</p>
<p>I have this:</p>
<pre><code>#!/usr/bin/env python
import time, threading, os
def f1(arg1):
for i in xrange(arg1):
time.sleep(1)
print "i is: ", i
print threading.active_count()
print threading.enumerate()
if __name__ == '__main__':
t = threading.Thread(name="MyThread1", target=f1, args=(5,))
t.start()
</code></pre>
<p>My question is: why is the number of active threads reported as 2, and why does the list generated by <code>enumerate</code> contain the main thread as well?</p>
<p>I was thinking that the main thread terminates after spawning "MyThread1".</p>
<p>$ ./threadeg.py</p>
<pre><code>i is: 0
2
[<_MainThread(MainThread, stopped 139858183157504)>, <Thread(MyThread1, started 139858153768704)>]
i is: 1
2
[<_MainThread(MainThread, stopped 139858183157504)>, <Thread(MyThread1, started 139858153768704)>]
i is: 2
2
[<_MainThread(MainThread, stopped 139858183157504)>, <Thread(MyThread1, started 139858153768704)>]
i is: 3
2
[<_MainThread(MainThread, stopped 139858183157504)>, <Thread(MyThread1, started 139858153768704)>]
i is: 4
2
[<_MainThread(MainThread, stopped 139858183157504)>, <Thread(MyThread1, started 139858153768704)>]
</code></pre>
|
<blockquote>
<p>threading.activeCount() which returns the number of threads active
(that were started with that module). <a href="https://mail.python.org/pipermail/python-list/2002-May/173585.html" rel="nofollow noreferrer">[Source]</a></p>
</blockquote>
<p>Since the threading module is a pure python module built on top of the thread module, it's pretty easy to view the source code.</p>
<p>Here is the source code for <code>active_count</code></p>
<pre><code>def activeCount():
"""Return the number of Thread objects currently alive.
The returned count is equal to the length of the list returned by
enumerate().
"""
with _active_limbo_lock:
return len(_active) + len(_limbo)
</code></pre>
<p>Upon further investigation it should be noted that a _MainThread instance is stored in _active (_active and _limbo are both dictionaries tracking thread instances; _active is keyed by thread identifier, as the code below shows), and it is deleted from _active when <code>_exitfunc</code> is called.</p>
<p>Here is the source for _MainThread, </p>
<pre><code>class _MainThread(Thread):
def __init__(self):
Thread.__init__(self, name="MainThread")
self._Thread__started.set()
self._set_ident()
with _active_limbo_lock:
_active[_get_ident()] = self
def _set_daemon(self):
return False
def _exitfunc(self):
self._Thread__stop()
t = _pickSomeNonDaemonThread()
if t:
if __debug__:
self._note("%s: waiting for other threads", self)
while t:
t.join()
t = _pickSomeNonDaemonThread()
if __debug__:
self._note("%s: exiting", self)
self._Thread__delete()
</code></pre>
<p>Once <code>_exitfunc</code> is called, <code>_MainThread</code> waits for all non-daemon threads to join and then calls <code>Thread.__delete</code>, which in this case has been <a href="http://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references" rel="nofollow noreferrer">mangled</a> to <code>_Thread__delete</code>, which in turn removes <code>_MainThread</code> from the <code>_active</code> dictionary.</p>
<p><code>_exitfunc</code> is assigned to <code>_shutdown</code> on line 1201.</p>
<pre><code>_shutdown = _MainThread()._exitfunc
</code></pre>
<p><code>_shutdown</code> is called from <a href="http://hg.python.org/releasing/2.7.6/file/4913d0e9be30/Python/pythonrun.c#l1703" rel="nofollow noreferrer">pythonrun.c</a>, which in turn is called by <a href="http://docs.python.org/2/c-api/init.html#Py_Finalize" rel="nofollow noreferrer"><code>Py_Finalize</code></a>. <code>Py_Finalize</code> is called by <a href="http://hg.python.org/releasing/2.7.6/file/4913d0e9be30/Python/pythonrun.c#l1775" rel="nofollow noreferrer">Py_Exit</a>, which exits the main process (at this point only daemons are left).</p>
<p><a href="http://docs.python.org/2.7/c-api/sys.html#Py_Exit" rel="nofollow noreferrer">Py_Exit's documentation</a>.</p>
<blockquote>
<p>Exit the current process. This calls Py_Finalize() and then calls the standard C library function exit(status).</p>
</blockquote>
<p>Here is an example to get you the behavior you expect.</p>
<pre><code>import threading, time
def f():
time.sleep(1) #wait for the interpreter to "shutdown"
print threading.enumerate()
if __name__ == '__main__':
t = threading.Thread(target=f)
t.daemon = True
t.start()
threading._shutdown() #simulate an interpreter shutdown
</code></pre>
<p>Another great <a href="https://stackoverflow.com/a/12639040/322909">answer</a> describing how threads are shutdown.</p>
|
python|multithreading
| 1 |
1,906,580 | 21,834,702 |
I'm making a python script that strips whitespace off of the end of lines, but it is only working on the first line
|
<p>Like I said in the title, my script only seems to work on the first line.<br>
Here is my script: </p>
<pre><code>#!/usr/bin/python
import sys
def main():
a = sys.argv[1]
f = open(a,'r')
lines = f.readlines()
w = 0
for line in lines:
spot = 0
cp = line
for char in reversed(cp):
x = -1
if char == ' ':
del line[x]
w += 0
if char != '\n' or char != ' ':
lines[spot] = line
spot += 1
break
x += 1
f.close()
f = open(a,'w')
f.writelines(lines)
print("White Space deleted: "+str(w))
if __name__ == "__main__":
main()
</code></pre>
<p>I'm not too experienced when it comes to loops.</p>
|
<p>The following script does the same thing as your program, more compactly:</p>
<pre><code>import fileinput
deleted = 0
for line in fileinput.input(inplace=True):
stripped = line.rstrip()
    deleted += len(line) - len(stripped) - 1  # don't count the trailing newline
print(stripped)
print("Whitespace deleted: {}".format(deleted))
</code></pre>
<p>Here <a href="http://docs.python.org/3/library/stdtypes.html#str.rstrip" rel="nofollow"><code>str.rstrip()</code></a> removes <em>all</em> whitespace from the end of a line (newlines, spaces and tabs).</p>
<p>The <a href="http://docs.python.org/3/library/fileinput.html" rel="nofollow"><code>fileinput</code> module</a> takes care of handling <code>sys.argv</code> for you, opening files one by one if you name more than one file.</p>
<p>Using <code>print()</code> will add the newline back on to the end of the stripped lines.</p>
|
python|for-loop|text-files|whitespace
| 3 |
1,906,581 | 38,305,033 |
Why can't I write a dataframe to the DB?
|
<p>I have 32 GB RAM and I use Jupyter and pandas. My dataframe isn't very big, but when I want to write it to the Arctic database I get a "MemoryError":</p>
<pre><code>df_q.shape
(157293660, 10)
def memory(df):
mem = df.memory_usage(index=True).sum() / (1024 ** 3)
print(mem)
memory(df_q)
12.8912200034
</code></pre>
<p>And I want to write it:</p>
<pre><code>from arctic import Arctic
import arctic as arc
store = Arctic('.....')
lib = store['myLib']
lib.write('quotes', df_q)
</code></pre>
<blockquote>
<p>MemoryError Traceback (most recent call
last) in ()
1 memory(df_q)
----> 2 lib.write('quotes', df_q)</p>
<p>/usr/local/lib/python2.7/dist-packages/arctic/decorators.pyc in
f_retry(*args, **kwargs)
48 while True:
49 try:
---> 50 return f(*args, **kwargs)
51 except (DuplicateKeyError, ServerSelectionTimeoutError) as e:
52 # Re-raise errors that won't go away.</p>
<p>/usr/local/lib/python2.7/dist-packages/arctic/store/version_store.pyc
in write(self, symbol, data, metadata, prune_previous_version,
**kwargs)
561
562 handler = self._write_handler(version, symbol, data, **kwargs)
--> 563 mongo_retry(handler.write)(self._arctic_lib, version, symbol, data, previous_version, **kwargs)
564
565 # Insert the new version into the version DB</p>
<p>/usr/local/lib/python2.7/dist-packages/arctic/decorators.pyc in
f_retry(*args, **kwargs)
48 while True:
49 try:
---> 50 return f(*args, **kwargs)
51 except (DuplicateKeyError, ServerSelectionTimeoutError) as e:
52 # Re-raise errors that won't go away.</p>
<p>/usr/local/lib/python2.7/dist-packages/arctic/store/_pandas_ndarray_store.pyc
in write(self, arctic_lib, version, symbol, item, previous_version)
301 def write(self, arctic_lib, version, symbol, item, previous_version):
302 item, md = self.to_records(item)
--> 303 super(PandasDataFrameStore, self).write(arctic_lib, version, symbol, item, previous_version, dtype=md)
304
305 def append(self, arctic_lib, version, symbol, item, previous_version):</p>
<p>/usr/local/lib/python2.7/dist-packages/arctic/store/_ndarray_store.pyc
in write(self, arctic_lib, version, symbol, item, previous_version,
dtype)
385 version['type'] = self.TYPE
386 version['up_to'] = len(item)
--> 387 version['sha'] = self.checksum(item)
388
389 if previous_version:</p>
<p>/usr/local/lib/python2.7/dist-packages/arctic/store/_ndarray_store.pyc
in checksum(self, item)
370 def checksum(self, item):
371 sha = hashlib.sha1()
--> 372 sha.update(item.tostring())
373 return Binary(sha.digest())
374 </p>
<p>MemoryError:</p>
</blockquote>
<p>WTF ?
If I use df_q.to_csv() I will wait for years....</p>
|
<p>Your issue actually is not a memory issue. If you read your errors, it seems that your library is having trouble accessing your data...</p>
<p>1st Error: Says your server has timed out. (<code>ServerSelectionTimeoutError</code>)</p>
<p>2nd Error: Trying to update MongoDB version.</p>
<p>3rd Error: Retries accessing your server, fails.(<code>ServerSelectionTimeoutError</code>)</p>
<p>etc. So essentially your problem lies in the Arctic package itself (the last error occurs in its checksum step). You can also deduce this from the fact that <code>df_q.to_csv()</code> works; however, it is very slow since it is not optimized like Arctic. I would suggest trying to reinstall the Arctic package.</p>
|
database|python-2.7|pandas
| 0 |
1,906,582 | 38,107,752 |
Split string based off multiple possible delimiters but keep delimiters
|
<p>I am cleaning addresses. I am looking to strip everything after specific words (avenue, ave, road, place, etc etc etc).</p>
<p>I was looking at doing something like this but I believe this will return everything before the word. That means "1 first avenue" would return "1 first".</p>
<p>How can I append this (or do it differently?) so that it would return everything up to AND INCLUDING the pattern words?</p>
<pre><code>patterns = ["ave", "avenue", "road", "street" etc etc etc]
reduce(lambda s, pat: s.split(pat, 1)[0], patterns, string)
</code></pre>
|
<p>I think this is what you want.</p>
<pre><code>pattern = ['ave', 'street', 'road']
address = 'Imaginary ave, Fantasy Island'
for i in pattern:
if i in address:
print address[:address.find(i) + len(i)]
</code></pre>
<p>or if there are a list of addresses</p>
<pre><code>print [address[:address.find(i) + len(i)] for i in pattern if i in address]
</code></pre>
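<p>Since the question mentions wanting a regex, here is a rough <code>re</code> equivalent; the word boundaries are my own addition so that "ave" does not match inside "avenue":</p>
<pre><code>import re

pattern = re.compile(r'^(.*?\b(?:avenue|ave|street|road)\b)', re.IGNORECASE)
m = pattern.match('Imaginary ave, Fantasy Island')
if m:
    print m.group(1)  # Imaginary ave
</code></pre>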
|
python
| 2 |
1,906,583 | 30,760,750 |
Understanding inheritance in practice. Printing values of an instance
|
<p>I have a class of Agents with a working <code>__str__</code> method.
I have a class of <code>Family(Agents)</code>.
The <code>Family</code> is structured as a dictionary with the Agent ID as key.
After allocating agents to families, I iterate over <code>my_families</code>:
When I print <code>members.keys()</code>, I get the correct keys.
When I print <code>members.values()</code>, I get a list of instances.</p>
<p>But I cannot access the values inside the instances themselves.
When I try to use the method <code>get_money()</code> I get an error saying the Family class does not have that method. Any help?</p>
<pre><code>for family in my_families:
print family.members.values()
</code></pre>
<p>Thanks</p>
<p>Providing more information.
Family class</p>
<pre><code>class Family(agents.Agent):
"""
Class for a set of agents
"""
def __init__(self, family_ID):
self.family_ID = family_ID
# _members stores agent set members in a dictionary keyed by ID
self.members = {}
def add_agent(self, agent):
"Adds a new agent to the set."
self.members[agent.get_ID()] = agent
</code></pre>
<p>Class of agents</p>
<pre><code>class Agent():
"Class for agent objects."
def __init__(self, ID, qualification, money):
# self._ID is unique ID number used to track each person agent.
self.ID = ID
self.qualification = qualification
self.money = money
def get_ID(self):
return self.ID
def get_qual(self):
return self.qualification
def get_money(self):
return self.money
def __str__(self):
        return 'Agent Id: %d, Agent Balance: %.2f, Years of Study: %d' % (self.ID, self.money, self.qualification)
#Allocating agents to families
def allocate_to_family(agents, families):
dummy_agent_index = len(agents)
for family in families:
dummy_family_size = random.randrange(1, 5)
store = dummy_family_size
while dummy_family_size > 0 and dummy_agent_index >= 0 and (dummy_agent_index - dummy_family_size) > 0:
family.add_agent(agents[dummy_agent_index - dummy_family_size])
dummy_family_size -= 1
dummy_agent_index -= store
</code></pre>
<p>Finally the printing example that gets me instance objects but not their values</p>
<pre><code>for family in my_families:
print family.members.values()
</code></pre>
|
<p>If <code>Family</code> is supposed to contain <code>Agent</code> objects, why does it inherit from <code>Agent</code>? Regardless, you never initialized the parent <code>Agent</code> object in the <code>Family</code> class's <code>__init__</code>, and according to your edit the <code>get_money</code> method is contained in the <code>Agent</code> class, so <code>Family</code> objects don't have a <code>get_money</code> method. To access that, you would need to first access a <code>Family</code> object's <code>members</code> dictionary, then use a key to access the desired <code>Agent</code> object, then access that object's <code>get_money</code> method.</p>
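<p>As a sketch of that chain of accesses (assuming <code>my_families</code> is built as in your code):</p>
<pre><code>for family in my_families:
    for agent in family.members.values():  # Agent instances stored in the dict
        print agent.get_ID(), agent.get_money()
</code></pre>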
|
python|class|inheritance|printing|instances
| 1 |
1,906,584 | 40,049,802 |
Pandas - operations on groups using transform
|
<p>Here is my example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'A A': ['one', 'one', 'two', 'two', 'one'] ,
'B': ['Ar', 'Br', 'Cr', 'Ar','Ar'] ,
'C': ['12/15/2011', '11/11/2001', '08/30/2015', '07/3/1999','03/03/2000' ],
'D':[1,7,3,4,5]})
df['C'] = pd.to_datetime(df['C'])
def date_test(x):
key_date = pd.Timestamp(np.datetime64('2015-08-13'))
end_date = pd.Timestamp(np.datetime64('2016-10-10'))
result = False
for i in x.index:
if key_date < x[i] < end_date:
result = True
return result
def int_test(x):
result = False
for i in x.index:
if 1 < x[i] < 9:
result = True
return result
</code></pre>
<p>Now I am grouping by column <code>B</code> and transforming columns <code>C</code> and <code>D</code>.</p>
<p>The following code produces a column of ones:</p>
<pre><code>df.groupby(['B'])['D'].transform(int_test)
</code></pre>
<p>And the following code produces a column of dates:</p>
<pre><code>df.groupby(['B'])['C'].transform(date_test)
</code></pre>
<p>I would expect them both to produce collections of ones and zeros, not dates. My goal is to get a collection of ones and zeros. Any thoughts?</p>
<p><strong>Update</strong>: My main goal is to understand how <code>transform</code> works. </p>
|
<p>For type consistency with whatever subsequent operations you may perform on the result of a <code>transform</code> call, that function tries to cast the resulting Series to the dtype of the selected data it works against. The function's source code performs this dtype cast explicitly.</p>
<p>Your boolean data can be turned into dates, thus you obtain a datetime series. Explicitly cast to <code>int</code> to get the expected type:</p>
<pre><code>df.groupby(['B'])['C'].transform(date_test).astype('int64')
</code></pre>
|
python|pandas|transform
| 2 |
1,906,585 | 58,651,514 |
Python polynomial roots are inaccurate
|
<p>An algorithm that I'm trying to implement requires finding the roots of a 10th degree polynomial, which I created with sympy; it looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import sympy
import numpy as np

z = sympy.symbols('z')  # define the symbol so the snippet is runnable
det = sympy.Poly(1.3339507303385e-16*z**10 + 6.75390067076469e-14*z**9 + 7.18791227134351e-12*z**8 + 2.27504286959352e-10*z**7 + 2.37058998324426e-8*z**6 + 1.63916629439745e-6*z**5 + 3.0608041245671e-5*z**4 + 4.83564348562906e-8*z**3 + 2.0248073519853e-5*z**2 - 4.73126177166988e-7*z + 1.1495883206077e-6)
</code></pre>
<p>For finding the roots of the polynomial, I use the following code:</p>
<pre class="lang-py prettyprint-override"><code>coefflist = det.coeffs()
solutions = np.roots(coefflist)
print(coefflist)
[1.33395073033850e-16, 6.75390067076469e-14, 7.18791227134351e-12, 2.27504286959352e-10, 2.37058998324426e-8, 1.63916629439745e-6, 3.06080412456710e-5, 4.83564348562906e-8, 2.02480735198530e-5, -4.73126177166988e-7, 1.14958832060770e-6]
print(solutions)
[-3.70378229e+02+0.00000000e+00j -1.18366138e+02+0.00000000e+00j
2.71097137e+01+5.77011644e+01j 2.71097137e+01-5.77011644e+01j
-3.59084863e+01+1.44819591e-02j -3.59084863e+01-1.44819591e-02j
2.60969082e-03+7.73805425e-01j 2.60969082e-03-7.73805425e-01j
1.42936329e-02+2.49877948e-01j 1.42936329e-02-2.49877948e-01j]
</code></pre>
<p>However, when I substitute <code>z</code> with a root, lets say the first one, the result is not zero, but some number:</p>
<pre class="lang-py prettyprint-override"><code>print(det.subs(z,solutions[0]))
-1.80384169514123e-6
</code></pre>
<p>I would have expected that the result probably isn't the integer <code>0</code>, but <code>1e-6</code> is pretty bad (it should be zero, right?). Is there a mistake in my code? Is this inaccuracy normal? Any thoughts/suggestions would be helpful. Is there a more accurate alternative to compute the roots of a 10th degree polynomial?</p>
|
<p>You do not need sympy; the methods in numpy are completely sufficient. Define the polynomial by its list of coefficients and compute the roots:</p>
<pre class="lang-py prettyprint-override"><code>p=[1.33395073033850e-16, 6.75390067076469e-14, 7.18791227134351e-12, 2.27504286959352e-10, 2.37058998324426e-8, 1.63916629439745e-6, 3.06080412456710e-5, 4.83564348562906e-8, 2.02480735198530e-5, -4.73126177166988e-7, 1.14958832060770e-6]
sol= np.roots(p); sol
</code></pre>
<p>giving the result</p>
<pre><code>array([ -3.70378229e+02 +0.00000000e+00j, -1.18366138e+02 +0.00000000e+00j,
2.71097137e+01 +5.77011644e+01j, 2.71097137e+01 -5.77011644e+01j,
-3.59084863e+01 +1.44819592e-02j, -3.59084863e+01 -1.44819592e-02j,
2.60969082e-03 +7.73805425e-01j, 2.60969082e-03 -7.73805425e-01j,
1.42936329e-02 +2.49877948e-01j, 1.42936329e-02 -2.49877948e-01j])
</code></pre>
<p>and evaluate the polynomials at these approximate roots</p>
<pre><code>np.polyval(p,sol)
</code></pre>
<p>giving the array</p>
<pre><code>array([ 2.28604877e-06 +0.00000000e+00j, 1.30435230e-10 +0.00000000e+00j,
1.05461854e-11 -7.56043461e-12j, 1.05461854e-11 +7.56043461e-12j,
-3.98439686e-14 +6.84489332e-17j, -3.98439686e-14 -6.84489332e-17j,
1.18584613e-20 +1.59976730e-21j, 1.18584613e-20 -1.59976730e-21j,
6.35274710e-22 +1.74700545e-21j, 6.35274710e-22 -1.74700545e-21j])
</code></pre>
<p>Obviously, evaluating a polynomial close to a root involves lots of catastrophic cancellation: the intermediate terms are large, of opposite sign, and cancel out, but their errors are proportional to their original sizes. To get an estimate of the combined error size, replace the polynomial coefficients with their absolute values and evaluate at the absolute values of the roots.</p>
<pre><code>np.polyval(np.abs(p),np.abs(sol))
</code></pre>
<p>resulting in </p>
<pre><code>array([ 1.81750423e+10, 8.40363409e+05,
8.08166359e+03, 8.08166359e+03,
2.44160616e+02, 2.44160616e+02,
2.50963696e-05, 2.50963696e-05,
2.65889696e-06, 2.65889696e-06])
</code></pre>
<p>In the case of the first root, the scale multiplied with the machine constant gives an error scale of <code>1e+10*1e-16=1e-6</code>, which means that the value at the root is as good as zero within the framework of double-precision floating point.</p>
|
python|numpy|sympy|polynomials
| 2 |
1,906,586 | 51,869,924 |
Python - Check if a word "Is Isogram"
|
<p>I could look up the right answer, but I am just so certain I am correct, as this passes all my tests in IDLE; on my online course, however, it only partially passes - any reason why?</p>
<pre><code>def is_isogram(txt):
if len(list(txt)) == len(set(txt)):
return True
else:
return False
</code></pre>
|
<p>It might be that you aren't accounting for a string with upper <strong><em>and</em></strong> lowercase letters. Using either <code>str.upper</code> or <code>str.lower</code> might be the solution. If that is the case, something like this could do it in one pass.</p>
<pre><code>def is_isogram(txt):
seen = set()
for char in txt.lower():
if char in seen:
return False
seen.add(char)
return True
</code></pre>
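<p>If case was indeed the problem, the original one-liner idea also works once the input is lower-cased, e.g.:</p>
<pre><code>def is_isogram(txt):
    txt = txt.lower()
    return len(txt) == len(set(txt))
</code></pre>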
|
python
| 1 |
1,906,587 | 51,949,545 |
skip second row of dataframe while reading csv file in python
|
<p>I am unable to skip the second row of a data file while reading a csv file in python.</p>
<p>I am using the following code:</p>
<pre><code>imdb_data = pd.read_csv('IMDB_data.csv', encoding = "ISO-8859-1",skiprows = 2)
</code></pre>
|
<p>Your code will omit the first two lines of your csv.
If you want the second line to be omitted (but the first one included) just make this minor change:</p>
<pre><code>imdb_data = pd.read_csv('IMDB_data.csv', encoding = "ISO-8859-1",skiprows = [1])
</code></pre>
|
python|python-3.x
| 11 |
1,906,588 | 51,728,978 |
How to reduce memory consumption and processing time of Python's itertools.product?
|
<h3>What I Have</h3>
<p>I have created a function to generate all possible matrices for a given number of rows and columns and a given list of possible values.</p>
<pre><code>def generate_matrices(rows, columns, values):
"""Returns an iterable over all possible matrices for a given
number of rows and columns and a given list of possible
values.
Arguments:
rows -- number of rows desired for each matrix
columns -- number of columns desired for each matrix
values -- list of values desired for iteration
Returns:
returns an iterator over the generated matrices
Dependencies:
requires the itertools library (`import itertools`)
"""
x = itertools.product(values, repeat = columns)
y = itertools.product(x, repeat = rows)
return y
</code></pre>
<h3>What I Need</h3>
<p>This works fine for small inputs (small numbers of rows and columns, few values), but for larger inputs, all system memory is consumed in the processing of the function. </p>
<p><strong>How can I minimize the memory consumption and processing time of this function?</strong></p>
<p><em>The purpose of these matrices is to provide test values to a set of functions to maximize the output of a particular formula. If there is a better way to test all possible inputs for a variable number of variables and variable ranges, please advise.</em></p>
|
<p>It was a little hard to picture what was happening just from reading your code. So here's a small example (something which you should have done for us - if you really want help!):</p>
<pre><code>In [201]: x=itertools.product([1,2],repeat=2)
In [202]: list(x)
Out[202]: [(1, 1), (1, 2), (2, 1), (2, 2)]
In [203]: y=itertools.product(Out[202],repeat=2)
In [204]: list(y)
Out[204]:
[((1, 1), (1, 1)),
((1, 1), (1, 2)),
((1, 1), (2, 1)),
((1, 1), (2, 2)),
((1, 2), (1, 1)),
((1, 2), (1, 2)),
((1, 2), (2, 1)),
((1, 2), (2, 2)),
((2, 1), (1, 1)),
((2, 1), (1, 2)),
((2, 1), (2, 1)),
((2, 1), (2, 2)),
((2, 2), (1, 1)),
((2, 2), (1, 2)),
((2, 2), (2, 1)),
((2, 2), (2, 2))]
</code></pre>
<p>So even if you consume <code>y</code> iteratively, it still has to create the full list of <code>x</code> possibilities.</p>
<p>If I read your problem right, you want to test, sequentially, arrays made from elements of <code>y</code>, e.g.:</p>
<pre><code>In [205]: np.array(Out[204][5])
Out[205]:
array([[1, 2],
[1, 2]])
</code></pre>
<p>For a larger example:</p>
<pre><code>In [206]: x=itertools.product([1,2,3,4],repeat=4)
In [207]: y=itertools.product(x,repeat=3)
In [209]: next(y)
Out[209]: ((1, 1, 1, 1), (1, 1, 1, 1), (1, 1, 1, 1))
In [210]: np.array(_)
Out[210]: array([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]])
</code></pre>
<p>And subsequent <code>next(y)</code> will produce more (3,4) arrays, gradually replacing the 1's with values from [1,2,3,4].</p>
<p>How about generating all matrix values with one product:</p>
<pre><code>In [214]: z = itertools.product([1,2,3,4],repeat=12)
In [215]: np.array(next(z)).reshape(3,4)
Out[215]:
array([[1, 1, 1, 1],
[1, 1, 1, 1],
[1, 1, 1, 1]])
</code></pre>
<p>As far as I can tell it produces the same arrays as your nested generators.</p>
|
python-3.x|numpy|matrix|simulation|itertools
| 0 |
1,906,589 | 19,500,386 |
Python Regex split string into 5 pieces
|
<p>I'm playing around with Python, and I have run into a problem.
I have a large data file where each string is structured like this: </p>
<pre><code>"id";"userid";"userstat";"message";"2013-10-19 06:33:20 (date)"
</code></pre>
<p>I need to split each line into 5 pieces, semicolon being the delimiter. But at the same time within the quotations. </p>
<p>It's hard to explain, so I hope you understand what I mean.</p>
|
<p>That format looks a lot like <code>ssv</code>: semicolon-separated values (like "csv", but with semicolons instead of commas). We can use the <a href="http://docs.python.org/2/library/csv.html" rel="nofollow"><code>csv</code></a> module to handle this:</p>
<pre><code>import csv
with open("yourfile.txt", "rb") as infile:
reader = csv.reader(infile, delimiter=";")
for row in reader:
print row
</code></pre>
<p>produces</p>
<pre><code>['id', 'userid', 'userstat', 'message', '2013-10-19 06:33:20 (date)']
</code></pre>
<p>One advantage of this method is that it will correctly handle the case of semicolons within the quoted data automatically.</p>
|
python|regex|string|split
| 4 |
1,906,590 | 21,942,580 |
Parsing Amazon S3 URIs using regex in Python?
|
<p>I need to write a script to scan an Amazon S3 bucket, looking for newer versions of a software we are testing.
I'm also using <code>s3cmd</code> to return the <code>ls</code> of that bucket, and the output looks like:</p>
<pre><code> DIR s3://foo/versions/4.4.1.2/
DIR s3://foo/versions/4.5.0.10a/
DIR s3://foo/versions/4.5.0.11a/
DIR s3://foo/versions/4.5.0.12a/
DIR s3://foo/versions/4.5.0.13a/
</code></pre>
<p>There's some whitespace in front of DIR. I was using <code>string.strip().split()</code> to break that string into tokens, and it looks like:</p>
<pre><code>[' ', 'DIR s3://foo/versions/4.4.1.2/',
'DIR s3://foo/versions/4.5.0.10a/',
'DIR s3://foo/versions/4.5.0.11a/',
'DIR s3://foo/versions/4.5.0.12a/',
'DIR s3://foo/versions/4.5.0.13a/',
'2014-02-12 00:33 s3://foo/versions/\n']
</code></pre>
<p>What I wanted was to use the module <code>re</code> and parse that string using a regular expression, but I'm not sure how to produce the appropriate regex that would yield me only the version.
What I need at the end is an array of the versions like <code>[4.4.1.2, 4.5.0.10a]</code>. Let's say the pattern for a version is <code>\d*</code>, which could be <code>\d.\d.\d.\d</code> or <code>\d</code> or <code>\d.\d.[A-Za-z]</code> - all possible combinations, but with letters always after the numbers.</p>
|
<p>If your versions always come after <code>versions/</code>, then you can use this example:</p>
<pre><code>input = '''
DIR s3://foo/versions/4.4.1.2/
DIR s3://foo/versions/4.5.0.10a/
DIR s3://foo/versions/4.5.0.11a/
DIR s3://foo/versions/4.5.0.12a/
DIR s3://foo/versions/4.5.0.13a/
'''
print re.findall("versions/([^/]*)/", input)
</code></pre>
|
python|regex|amazon-s3
| 4 |
1,906,591 | 43,714,216 |
Python GUI not working
|
<p>I am trying to create a GUI in Python but for some reason it is not working. The GUI should have an info button and a quit button. I know it is a simple error that I should be able to debug but I can't find it.</p>
<p>Traceback error message:</p>
<blockquote>
<p>in
gu=GUI()</p>
<p>self.info_button = tkinter.Button(self.bottom_frame, text="Show Info",
command=self.showinfo) AttributeError: 'GUI' object has no attribute
'showinfo'</p>
</blockquote>
<p>Here is my code:</p>
<pre><code>import tkinter
import tkinter.messagebox
class GUI:
def __init__(self):
#main window
self.main_window = tkinter.Tk()
#create two frames
self.top_frame= tkinter.Frame()
self.bottom_frame = tkinter.Frame()
#object of StringVar class
self.value = tkinter.StringVar()
#top frame label
self.info_label = tkinter.Label(self.top_frame,textvariable = self.value)
#information button that displays the information about the name and address
self.info_button = tkinter.Button(self.bottom_frame, text="Show Info", command=self.showinfo)
#quit button that closes the program
self.quit_button = tkinter.Button(self.bottom_frame, text ='Quit', command=self.main_window.destroy)
#pack method for packing widgets
self.info_label.pack()
self.info_button.pack(side='left')
self.quit_button.pack(side='left')
self.top_frame.pack()
self.bottom_frame.pack()
#main loop for running the program
tkinter.mainloop()
def showinfo(self):
inf = '\tSteven Marcus\n\t274 Baily Drive\n\tWaynesville, NC 27999'
self.value.set(inf)
#call the main function
gu=GUI()
</code></pre>
|
<p>Indentation in Python matters a lot; your indentation is off. You will need to indent the whole <code>def showinfo(self):</code> block so it goes under the <code>GUI</code> class.</p>
<pre><code>import tkinter
import tkinter.messagebox
class GUI:
def __init__(self):
#main window
self.main_window = tkinter.Tk()
#create two frames
self.top_frame= tkinter.Frame()
self.bottom_frame = tkinter.Frame()
#object of StringVar class
self.value = tkinter.StringVar()
#top frame label
self.info_label = tkinter.Label(self.top_frame,textvariable = self.value)
#information button that displays the information about the name and address
self.info_button = tkinter.Button(self.bottom_frame, text="Show Info", command=self.showinfo)
#quit button that closes the program
self.quit_button = tkinter.Button(self.bottom_frame, text ='Quit', command=self.main_window.destroy)
#pack method for packing widgets
self.info_label.pack()
self.info_button.pack(side='left')
self.quit_button.pack(side='left')
self.top_frame.pack()
self.bottom_frame.pack()
#main loop for running the program
tkinter.mainloop()
def showinfo(self): # indent these:
inf = '\tSteven Marcus\n\t274 Baily Drive\n\tWaynesville, NC 27999'
self.value.set(inf)
#call the main function
gu=GUI()
</code></pre>
|
python|user-interface|tkinter
| 1 |
1,906,592 | 43,769,309 |
Pickling a very large text file, 12Gb
|
<p>I'm trying to pickle a large text file, using the following code:</p>
<pre><code>import pickle
file1=open('/home/mustafa/data/wiki.en.text','r')
obj=[file1.read()]
pickle.dump(obj,open('data.pkl','w'),2)
</code></pre>
<p>I get the following error:</p>
<blockquote>
<pre><code>struct.error: 'i' format requires -2147483648 <= number <= 2147483647
</code></pre>
</blockquote>
<p>I think it might be a multiprocessing issue.</p>
|
<p>For this kind of serialization pickle is not a good option: the older protocols store object sizes in 4-byte fields (which is what the <code>struct.error</code> about the <code>'i'</code> format is complaining about), so even for cPickle, data larger than a few GB is highly problematic. Have you thought about using other alternatives like SQLite or HDF5?</p>
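<p>If moving to Python 3.4+ is an option, pickle protocol 4 was designed to support very large objects; a minimal sketch:</p>
<pre><code>import pickle

with open('/home/mustafa/data/wiki.en.text', 'r') as f:
    obj = [f.read()]

with open('data.pkl', 'wb') as out:  # pickle files must be opened in binary mode
    pickle.dump(obj, out, protocol=4)
</code></pre>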
|
python|pickle
| 1 |
1,906,593 | 54,611,803 |
How to make a custom annotation that the type checker will recognize as str?
|
<p>I want to use the argument annotation to define a parser for the argument.</p>
<p>Here is an example:</p>
<pre><code>def parser(arg: str) -> str:
return do_something(arg)
def foo(a: parser):
# get arg's annotation and parse it
return get_annotation(foo, "a")(a)
</code></pre>
<p>The problem is when I call the function with an argument of type <code>str</code> the type checker warns me that it's not of type <code>parser</code>.</p>
<pre><code>foo("hello")
</code></pre>
<p>My question: How can I get the best of both worlds and use the annotation feature to get <code>parser</code> where the type checker will get <code>str</code>?
(Preferably without using <code>Union</code>)</p>
<p>Maybe if something like this could work:</p>
<pre><code>def foo(a: parser[str]):
# get arg's annotation and parse it
return get_annotation(foo, "a")(a)
foo("hello") # this compiles without any warnings
</code></pre>
<p>Sort of like <code>Optional</code> or <code>Union</code> return one object but the type checker analyzes them as different objects.</p>
|
<p>Python 3.9 added the <a href="https://docs.python.org/3.9/library/typing.html#typing.Annotated" rel="nofollow noreferrer">Annotated</a> type, so you can do this:</p>
<pre><code>from typing import Annotated

def foo(a: Annotated[str, parser]):
    # a is of type str for the type checker, but carries parser as metadata
    pass
</code></pre>
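<p>At runtime the metadata can then be recovered, which is presumably what your <code>get_annotation</code> helper would do; a sketch:</p>
<pre><code>from typing import get_type_hints

hints = get_type_hints(foo, include_extras=True)
parse = hints['a'].__metadata__[0]  # the parser attached via Annotated
</code></pre>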
|
python
| 0 |
1,906,594 | 9,405,192 |
How does one vary the brightness of markers on a Matplotlib basemap?
|
<p>This code varies the transparency of markers on a matplotlib base map via the alpha parameter.</p>
<pre><code>myBaseMap.plot(x_values, y_values, 'x', alpha=0.7, c=(1.,0,0))
</code></pre>
<p>However, how does one vary the brightness of a marker? I do not want semi-transparent markers because I want the markers to cover the content behind them. Thank you!</p>
|
<p>My reading of your question is that you would like to know how to get different transparency for line and markers.</p>
<p>One way to do this is to plot the markers using <code>scatter</code>:</p>
<pre><code>myBaseMap.plot(x_values, y_values, alpha=0.7, c=(1.,0,0), zorder=0)
myBaseMap.scatter(x_values, y_values, marker='x', color=(1.,0,0), zorder=1)
</code></pre>
<p>Lower <code>zorder</code> numbers are drawn first.</p>
<p>Simple example: </p>
<pre><code>import matplotlib.pyplot as plt
plt.plot([1,2,3],[3,2,1],alpha=0.25,c=(1.,0,0),zorder=0)
plt.scatter([1,2,3],[3,2,1],marker='x',color=(1.,0,0),zorder=1,s=75,alpha=1.0)
</code></pre>
<p><img src="https://i.stack.imgur.com/osvUH.png" alt="enter image description here"></p>
|
python|matplotlib
| 1 |
1,906,595 | 9,304,508 |
Python MySQL data import
|
<p>I am using the following script to pull data from a third party tool, create a table in a MySQL database and populate it with the resulting data. The script runs through and I can see the print out of all of the requested data in the Python Shell window. However, when I open the database the table is created with the column names but there are no rows and no data. I have searched around and read that I do not need to use 'conn.commit' for a script that is just retrieving data. Is that the case here? If not does anyone see another reason why the data is not populating the table?</p>
<pre><code>import httplib2, urllib, json, pprint, getpass, string, time, MySQLdb
def usage():
print "Usage: python26 mysql.py or ./mysql.py"
sys.exit(1)
if len(sys.argv) != 1:
usage()
# Connect to the database and create the tables
conn = MySQLdb.connect (host = "localhost",
user = "XXXXXXXXX",
passwd = "XXXXXXXX")
cursor = conn.cursor ()
cursor.execute ("DROP DATABASE IF EXISTS tenable")
cursor.execute ("CREATE DATABASE tenable")
cursor.execute ("USE tenable")
cursor.execute ("""
CREATE TABLE cumvulndata
(
offset BIGINT(10),
pluginName TEXT,
repositoryID SMALLINT(3),
severity TINYINT(2),
pluginID MEDIUMINT(8),
hasBeenMitigated TINYINT(1),
dnsName VARCHAR(255),
macAddress VARCHAR(40),
familyID INT(4),
recastRisk TINYINT(1),
firstSeen DATETIME,
ip VARCHAR(15),
acceptRisk TINYINT(1),
lastSeen DATETIME,
netbiosName VARCHAR(255),
port MEDIUMINT(5),
pluginText MEDIUMTEXT,
protocol TINYINT(3)
)
""")
#
# Security Center organizational user creds
user = 'XXXXXXXXX'
passwd = 'XXXXXXXX'
url = 'https://Security Center Server/request.php'
def SendRequest(url, headers, data):
http = httplib2.Http()
response, content = http.request(url,
'POST',
headers=headers,
body=urllib.urlencode(data))
if 'set-cookie' in response:
headers['Cookie'] = response['set-cookie']
return response, content
headers = {"Content-type": "application/x-www-form-urlencoded"}
input = {'password': passwd,
'username': user}
# Convert input to login JSON
inputjson = json.dumps(input)
data = {"request_id": "8",
"module": "auth",
"action": "login",
"input": inputjson}
# Send Login Request
response, content = SendRequest(url, headers, data)
# Decode JSON to python data structure
result = json.loads(content)
if result["error_code"] == 0:
print "SC4 Login Successful"
token = result['response']['token']
print "Session Token:",token
# Construct the cumulative vuln query JSON
cuminput = {'tool':'vulndetails',
'startOffset':'0',
'endOffset':sys.maxint,
'sortField':'ip',
'sortDir':'asc',
'sourceType':'cumulative',
'filters': [
{'filterName':'lastSeen',
'value':'31',
'operator':'<='},
{"filterName":"severity",
"value":"1,2,3",
"operator":"="}
]}
cuminputjson = json.dumps(cuminput)
#
cumdata = {"request_id": "1",
"module": "vuln",
"action": "query",
"input":cuminputjson,
"token": token}
# Send the cumulative JSON and then populate the table
cumresponse, content = SendRequest(url, headers, cumdata)
resultc = json.loads(content)
off = 0
print "\nFilling cumvulndata table with vulnerabilities from the cumulative database. Please wait..."
for result in resultc['response']['results']:
off += 1
cursor.execute ("""INSERT INTO cumvulndata (offset,pluginName,repositoryID,severity,pluginID,hasBeenMitigated,dnsName,macAddress,familyID,recastRisk,firstSeen,ip,acceptRisk,lastSeen,netbiosName,port,pluginText,protocol)
VALUES
(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,(FROM_UNIXTIME(%s)),%s,%s,(FROM_UNIXTIME(%s)),%s,%s,%s,%s)""", (off,result["pluginName"],result["repositoryID"],result["severity"],result["pluginID"],result["hasBeenMitigated"],result["dnsName"],result["macAddress"],result["familyID"],result["recastRisk"],result["firstSeen"],result["ip"],result["acceptRisk"],result["lastSeen"],result["netbiosName"],result["port"],result["pluginText"],result["protocol"]))
# Close the cursor and connection
cursor.close ()
conn.close ()
print "Done!!"
</code></pre>
|
<p>Try this</p>
<pre><code>import httplib2, urllib, json, pprint, getpass, string, time, MySQLdb
import sys
def usage():
print "Usage: python26 mysql.py or ./mysql.py"
sys.exit(1)
if len(sys.argv) != 1:
usage()
# Connect to the database and create the tables
conn = MySQLdb.connect (host = "localhost",
user = "XXXXXXXXXX",
passwd = "XXXXXXXX")
cursor = conn.cursor ()
cursor.execute ("DROP DATABASE IF EXISTS tenable")
cursor.execute ("CREATE DATABASE tenable")
cursor.execute ("USE tenable")
cursor.execute ("""
CREATE TABLE cumvulndata
(
offset BIGINT(10),
pluginName TEXT,
repositoryID SMALLINT(3),
severity TINYINT(2),
pluginID MEDIUMINT(8),
hasBeenMitigated TINYINT(1),
dnsName VARCHAR(255),
macAddress VARCHAR(40),
familyID INT(4),
recastRisk TINYINT(1),
firstSeen DATETIME,
ip VARCHAR(15),
acceptRisk TINYINT(1),
lastSeen DATETIME,
netbiosName VARCHAR(255),
port MEDIUMINT(5),
pluginText MEDIUMTEXT,
protocol TINYINT(3)
)
""")
cursor.execute ("""INSERT INTO cumvulndata (offset,pluginName,repositoryID,severity,pluginID,hasBeenMitigated,dnsName,macAddress,familyID,recastRisk,firstSeen,ip,acceptRisk,lastSeen,netbiosName,port,pluginText,protocol)
VALUES
(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)""", ('123','plugin','10','1','12','1',"dnsName","macAddress",'15','1','2011-2-2',"ip",'9','2012-5-2',"netbiosName",'123',"pluginText","2"))
#Commit the changes.
conn.commit()
cursor.close()
conn.close()
</code></pre>
<p>Please <code>commit</code> the changes; then you will get the inserted data.</p>
|
python|mysql|json|mysql-python|python-db-api
| 0 |
1,906,596 | 55,210,271 |
After deploying a Django app on DigitalOcean I can't see my changes on the server
|
<p>I deployed my web app, written in Django, on DigitalOcean. Now I have changed and added some files: I changed the template folder and also added some photos to the static folder. After the changes I pulled them from GitHub into the app directory on my server, but I can't see the changes. I stopped the nginx server. I used these commands:</p>
<pre><code>sudo systemctl start nginx
sudo systemctl stop nginx
sudo systemctl restart nginx
</code></pre>
<p>Then I started it again, but there are no changes. What should I do?</p>
|
<p>In order to reload the Django app, restart your <code>gunicorn</code> service; nginx only proxies requests to gunicorn, so restarting nginx alone will not pick up code changes.</p>
<pre><code>sudo systemctl restart gunicorn
</code></pre>
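<p>Since static files were changed as well, they may also need to be collected again so nginx serves the new versions (assuming the usual <code>STATIC_ROOT</code>/collectstatic setup):</p>
<pre><code>python manage.py collectstatic
</code></pre>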
|
python|django
| 0 |
1,906,597 | 52,783,224 |
Add line break after each file merge
|
<p>I have a lot of JSON files like the following:</p>
<p>E.g.</p>
<p><code>1.json</code></p>
<pre><code>{"name": "one", "description": "testDescription...", "comment": ""}
</code></pre>
<p><code>test.json</code></p>
<pre><code>{"name": "test", "description": "testDescription...", "comment": ""}
</code></pre>
<p><code>two.json</code></p>
<pre><code>{"name": "two", "description": "testDescription...", "comment": ""}
...
</code></pre>
<p>I want to merge them all in one JSON file like:</p>
<p><code>merge_json.json</code></p>
<pre><code>{"name": "one", "description": "testDescription...", "comment": ""},
{"name": "test", "description": "testDescription...", "comment": ""},
{"name": "two", "description": "testDescription...", "comment": ""}
</code></pre>
<p>I have the following code:</p>
<pre><code>import json
import glob
result = []
for f in glob.glob("*.json"):
with open(f, "r") as infile:
try:
result.append(json.load(infile))
except ValueError as e:
print(f,e)
result = '\n'.join(result)
with open("merged.json", "w", encoding="utf8") as outfile:
json.dump(result, outfile)
</code></pre>
<p>I can merge all the files, but everything ends up on one line. How can I add a line break after each file?</p>
<p>instead of:</p>
<pre><code>{"name": "one", "description": "testDescription...", "comment": ""},{"name": "test", "description": "testDescription...", "comment": ""},{"name": "two", "description": "testDescription...", "comment": ""}
</code></pre>
<p>Have them like:</p>
<p><code>merge_json.json</code></p>
<pre><code>{"name": "one", "description": "testDescription...", "comment": ""},
{"name": "test", "description": "testDescription...", "comment": ""},
{"name": "two", "description": "testDescription...", "comment": ""}
</code></pre>
<p>Any help is appreciated.</p>
|
<pre><code>import json
import glob

result = []
for f in glob.glob("*.json"):
    with open(f, "r") as infile:
        try:
            result.append(json.load(infile))
        except ValueError as e:
            print(f, e)

with open("merged.json", "w", encoding="utf8") as outfile:  # "w" rewrites the file on each run
    for i in result:
        json.dump(i, outfile)  # one JSON object...
        outfile.write('\n')    # ...followed by a line break
</code></pre>
|
python
| 1 |
1,906,598 | 47,626,803 |
Heroku App Not Compatible with Python Buildpack
|
<p>I'm trying to deploy a Django/Python application that runs locally, but will not deploy to Heroku. When trying to deploy, I receive the error:</p>
<pre><code>App not compatible with buildpack: https://codon-
buildpacks.s3.amazonaws.com/buildpacks/heroku/python.tgz
</code></pre>
<p>I've tried multiple solutions to this issue. Currently my buildpack is set to the Python buildpack (<code>heroku buildpacks</code> returns heroku/python). I have a Procfile, requirements.txt, runtime.txt, and Pipfile.lock, all of which usually resolve this issue.</p>
<p>Procfile:</p>
<pre><code>web: gunicorn foodForThought.wsgi:application --log-file -
</code></pre>
<p>requirements.txt:</p>
<pre><code>Django==1.11.8
pytz==2017.3
</code></pre>
<p>runtime.txt:</p>
<pre><code>python-3.6.0
</code></pre>
<p>Pipfile.lock:</p>
<pre><code>[requires]
python_full_version = "3.6.0"
</code></pre>
<p>All the aforementioned files are located in my home directory, and I'm also working in a virtual environment. Why is this error occurring?</p>
|
<p>So here's what I've come up with. I was also stuck with this problem until I found this command.</p>
<pre><code>heroku buildpacks:add --index 1 heroku/python
</code></pre>
<p>The trick is to <strong>add</strong> this buildpack before you commit to the <em><strong>Heroku</strong></em> git repo. You can follow the steps like so:</p>
<pre><code>git add .
git commit -am "First Commit"
heroku git:remote -a yourapp
heroku buildpacks:add --index 1 heroku/python
git push heroku master
</code></pre>
<p>I found this through the Heroku <a href="https://devcenter.heroku.com/articles/git" rel="nofollow noreferrer" title="The git command cli">docs</a>!</p>
|
python|django|heroku|deployment|gunicorn
| 0 |
1,906,599 | 47,853,024 |
Opening a QWidget from another QWidget in a different file
|
<p>I have this file, let's say widgetA.py:</p>
<pre><code>import sys, requests, json
from PyQt5.QtWidgets import (
QApplication, QWidget, QDesktopWidget
)
class main(QWidget):
def __init__(self, parent=None):
super(main, self).__init__(parent)
self.initUI()
def initUI(self):
self.setFixedSize(254, 380)
self.center()
self.show()
def center(self):
qr = self.frameGeometry()
cp = QDesktopWidget().availableGeometry().center()
qr.moveCenter(cp)
self.move(qr.topLeft())
if __name__ == "__main__":
app = QApplication(sys.argv)
x = main()
sys.exit(app.exec_())
</code></pre>
<p>How can I open that widget from another widget, let's say widgetB.py, using a QPushButton?</p>
<p>widgetB.py has exactly the same structure, with the addition of a button:</p>
<pre><code>import widgetA
...
class main(QWidget):
....
def initUI(self):
openwidget = QPushbutton('open', self)
openwidget.clicked.connect(widgetA.show)
self.setFixedSize(254, 380)
self.center()
self.show()
....
....
</code></pre>
|
<p>nvm, I'm such an idiot: it seems you can't declare the QWidget twice, so I changed one of them to a QDialog. I'm still learning, so please bear with me X__X</p>
<p>widgeta.py :</p>
<pre><code>class widgetA(QDialog):
def __init__(self, parent=None):
super(widgetA, self).__init__(parent)
self.initUI()
</code></pre>
<p>then this is in widgetb.py :</p>
<pre><code>from widgeta import widgetA

class widgetB(QWidget):
    def __init__(self, parent=None):
        super(widgetB, self).__init__(parent)
        self.initUI()
        self.show()

    def initUI(self):
        self.setFixedSize(254, 380)
        self.center()
        button = QPushButton('open', self)
        button.clicked.connect(self.openwindow)
        # use a name different from the method, otherwise this attribute
        # assignment would shadow openwindow() on the instance
        self.dialog = widgetA()

    def openwindow(self):
        self.dialog.exec_()
</code></pre>
|
python|pyqt5
| 0 |