Unnamed: 0 (int64, 0 to 1.91M) | id (int64, 337 to 73.8M) | title (string, length 10 to 150) | question (string, length 21 to 64.2k) | answer (string, length 19 to 59.4k) | tags (string, length 5 to 112) | score (int64, -10 to 17.3k) |
---|---|---|---|---|---|---|
1,909,000 | 68,678,603 |
How to drop the columns by using pandas.Series.str.contains
|
<p>DataFrame like this:</p>
<pre><code>import pandas
df = pandas.DataFrame({'id':[1,2,3,4,5,6],'name':['test1','test2','test','D','E','F'],'sex':['man','woman','woman','man','woman','man']},index=['a','b','c','d','e','f'])
print(df)
print('*'*100)
</code></pre>
<p>I can drop the rows by index label:</p>
<pre><code>df.drop(df[df.name.str.contains('test')|df.sex.str.contains('woman')].index,inplace=True)
print(df)
</code></pre>
<p>How can I find the column labels that contain 'test' or 'woman' and remove those columns?</p>
|
<p>Use a <code>bitwise</code> ampersand <code>&</code> for an AND condition and just re-assign the dataframe.</p>
<p>You can invert conditions with <code>~</code>.</p>
<p>It's recommended not to use <code>inplace</code> anymore; see <a href="https://stackoverflow.com/questions/43893457/understanding-inplace-true">this post</a>.</p>
<pre><code>df1 = df[~(df['name'].str.contains('test'))
         & ~(df['sex'].str.contains('woman'))]
print(df1)
id name sex
d 4 D man
f 6 F man
</code></pre>
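The question itself asks about dropping *columns*; since none of the example's column labels contain 'test' or 'woman', one possible reading is that "contains" refers to a column's values. A hedged sketch under that assumption:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 4, 5, 6],
                   'name': ['test1', 'test2', 'test', 'D', 'E', 'F'],
                   'sex': ['man', 'woman', 'woman', 'man', 'woman', 'man']},
                  index=['a', 'b', 'c', 'd', 'e', 'f'])

# True for each column whose string values contain 'test' or 'woman' anywhere
mask = df.apply(lambda col: col.astype(str).str.contains('test|woman').any())
df2 = df.loc[:, ~mask]
print(df2.columns.tolist())  # ['id']
```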
|
python|pandas
| 0 |
1,909,001 | 41,335,491 |
Cherrypy URLs with unknown application mount path
|
<p>I have a CherryPy webapp that is hosted by the user. Typically the main application is mounted as such:</p>
<pre><code>cherrypy.tree.mount(root,
                    '/',
                    root.conf
                    )
</code></pre>
<p>However, in order to have it work behind a reverse proxy such as nginx, it needs to be able to mount elsewhere, to whatever path the user chooses:</p>
<pre><code>mount = '/my_application'
cherrypy.tree.mount(root,
                    mount,
                    root.conf
                    )
</code></pre>
<p>Where mount can be anything the user chooses.</p>
<p>The problem now is that links become broken.</p>
<pre><code>raise cherrypy.HTTPRedirect("/news")
</code></pre>
<p>No longer works. It will redirect to address:port/news when I need to redirect to address:port/my_application/news.</p>
<p>I could go through and create a conditional for each url that prepends the application path:</p>
<pre><code>if mount != '/':
    url = mount + '/news'
raise cherrypy.HTTPRedirect(url)
</code></pre>
<p>But there must be a better way to do this. I've looked at Request Dispatching but I couldn't get it to re-write urls on the fly.</p>
<p>What is the best way to handle this situation?</p>
|
<p>You can use the helper function <code>cherrypy.url</code> to generate the urls relative to the <code>script_name</code> (<code>/my_application</code>).</p>
<p>You will end up with something like: </p>
<pre><code>raise cherrypy.HTTPRedirect(cherrypy.url('/news'))
</code></pre>
<p>Link for the <code>cherrypy.url</code> source: <a href="https://github.com/cherrypy/cherrypy/blob/master/cherrypy/_helper.py#L194" rel="nofollow noreferrer">https://github.com/cherrypy/cherrypy/blob/master/cherrypy/_helper.py#L194</a></p>
|
python-2.7|url-routing|cherrypy
| 1 |
1,909,002 | 41,668,112 |
How to take an element and break it up across multiple versions of the same list?
|
<p>If I have this list:</p>
<pre><code>a = ["1","2","t","fthdf","u"]
</code></pre>
<p>and I want to create 5 new lists, each with the same elements except for the element at index 3, which I want to split up like so:</p>
<pre><code>a = ["1","2","t","f","u"]
b = ["1","2","t","t","u"]
c = ["1","2","t","h","u"]
d = ["1","2","t","d","u"]
e = ["1","2","t","f","u"]
</code></pre>
<p>How can I achieve this? Thank you.</p>
<p>Thank you very much for all your answers.</p>
|
<p>You could use a simple list comprehension like</p>
<pre><code>lists = [a[:3] + [elem] + a[4:] for elem in a[3]]
# [['1', '2', 't', 'f', 'u'],
# ['1', '2', 't', 't', 'u'],
# ['1', '2', 't', 'h', 'u'],
# ['1', '2', 't', 'd', 'u'],
# ['1', '2', 't', 'f', 'u']]
</code></pre>
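A runnable version of the comprehension above with the question's data, unpacking the result into named lists:

```python
a = ["1", "2", "t", "fthdf", "u"]

# one new list per character of a[3], everything else unchanged
lists = [a[:3] + [ch] + a[4:] for ch in a[3]]
a_new, b, c, d, e = lists
print(b)  # ['1', '2', 't', 't', 'u']
```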
|
python|drop-down-menu
| 2 |
1,909,003 | 41,240,678 |
create a dict of dict from dict of lists in python
|
<p>I have a Python object which is a dict whose keys are hostnames and whose values are lists of users and their disk usage. I have pasted my dict below, as the explanation may seem confusing. Each host is a key, and under each host there may be several users, some common across hosts and some unique. I am struggling to check the following conditions.</p>
<ol>
<li>Check if that user exists in each host.</li>
<li>If yes, add the total disk he is utilising in each host.</li>
<li>If not, append the unique user to the dict.</li>
<li>Now in the big dict sort the users in the order of their disk usage.</li>
</ol>
<p>Achieved so far:</p>
<ol>
<li>Log in to each of the hosts</li>
<li>Get the users and their disk usage</li>
<li>Result is stored in a dict with hostnames as keys and values being a list of users and their disk usage.</li>
</ol>
<p>If I can make this a dict of dict, I hope my problem is solved.</p>
<pre><code>{
'localhost': [
'alice: 1491916K',
'bob: 423576K'
],
'10.252.136.241': [
'alice: 3491916K',
'bob: 4235K',
'chaplin: 3456K'
]
}
</code></pre>
<p>This is a sample output from 2 hosts. Now I have the result object, which is a dict in the above form. I want to iterate over each host, see if the user 'alice' exists on each host, add up his disk space into a single entry in the dict for 'alice', do the same for 'bob', and leave 'chaplin' as is in the new dict. I don't want it host-specific; I want the total usage at the end. </p>
<p>I am just stuck at iterating. I can manage to sum up and create the big dict of 'user': 'total_space' once I can iterate. </p>
<p><strong>[UPDATE]</strong>
My expected output is</p>
<pre><code>expected_output = { 'alice': '4983832K', 'bob': '427811K', 'chaplin': '3456K' }
</code></pre>
<p>Here the usage of alice is summed across the hosts, the same for bob, and chaplin is just carried over because he is not present on all the hosts.</p>
|
<p>For this task you can use combination of such tools as <a href="https://docs.python.org/2/library/re.html" rel="nofollow noreferrer">regex</a> and <a href="https://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow noreferrer">itertools.groupby</a>:</p>
<pre><code>values = {
'localhost': [
'alice: 1491916K',
'bob: 423576K'
],
'10.252.136.241': [
'alice: 3491916K',
'bob: 4235K',
'chaplin: 3456K'
]
}
import re
import itertools
numbers = re.compile(r'\d+')
parsed_list = [(el.split(': ')[0], int(numbers.findall(el)[0])) for k, v in values.items() for el in v]
print({k: sum([el[1] for el in v]) for k, v in itertools.groupby(sorted(parsed_list), key=lambda x: x[0])})
</code></pre>
<p>Output:</p>
<pre><code>{'alice': 4983832, 'bob': 427811, 'chaplin': 3456}
</code></pre>
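An alternative sketch without regex or sorting, using `collections.defaultdict` to accumulate per-user totals directly:

```python
from collections import defaultdict

values = {
    'localhost': ['alice: 1491916K', 'bob: 423576K'],
    '10.252.136.241': ['alice: 3491916K', 'bob: 4235K', 'chaplin: 3456K'],
}

totals = defaultdict(int)
for users in values.values():
    for entry in users:
        name, usage = entry.split(': ')
        totals[name] += int(usage.rstrip('K'))

print(dict(totals))  # {'alice': 4983832, 'bob': 427811, 'chaplin': 3456}
```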
|
python|list|python-3.x|dictionary|iteration
| 1 |
1,909,004 | 57,191,483 |
Include a DLL in python module (pypi)
|
<p>I have python module that wraps functions from a DLL in the same directory, and loads the library using ctypes.</p>
<pre><code>__lib = cdll.LoadLibrary("deviceSys.dll")
</code></pre>
<p>Here's my directory Layout: </p>
<pre><code>deviceSys
- wrapper.py
- deviceSys.dll
- __init__.py
</code></pre>
<p>I'm following the package guidelines, but I'm not sure how to load the dll once my code is a module on PyPi. For instance, if I use ctypes to load the library, it produces an error, because it's searching locally:
<code>OSError: [WinError 126] The specified module could not be found</code></p>
<p>I need to somehow embed my dll or search for the file within included resources for the package. Is there a way to do this?</p>
|
<p>I figured it out. You need to add the DLL to the <code>package_data</code> in the <code>setup.py</code>:</p>
<pre><code>include_package_data=True,
package_data={"devsys": ['deviceSystem.dll']},
</code></pre>
<p>To get the file from within <code>wrapper.py</code> use the following:</p>
<pre><code>dir = os.path.dirname(sys.modules["devsys"].__file__)
path = os.path.join(dir, "deviceSystem.dll")
__lib = cdll.LoadLibrary(path)
</code></pre>
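An equivalent, slightly simpler sketch uses the module's own `__file__` instead of going through `sys.modules` (the DLL name is the question's; the actual load line is commented out so only the path logic runs, and a hypothetical fallback is used for interactive sessions where `__file__` is undefined):

```python
import os

# inside wrapper.py this would simply be __file__
module_file = globals().get('__file__', 'wrapper.py')
# the DLL ships next to the module, so locate it package-relative
dll_path = os.path.join(os.path.dirname(os.path.abspath(module_file)), "deviceSystem.dll")
print(dll_path)
# __lib = cdll.LoadLibrary(dll_path)
```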
|
python|dll|module|pypi
| 2 |
1,909,005 | 57,105,879 |
How to call Go function from Python
|
<p>I am trying to call a Go function from Python.
When I run my Python program I see the following error.
I am referring to the <a href="https://medium.com/learning-the-go-programming-language/calling-go-functions-from-other-languages-4c7d8bcc69bf" rel="nofollow noreferrer">Go to Python</a> link. </p>
<p><a href="https://i.stack.imgur.com/dFZr3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dFZr3.png" alt="error"></a></p>
<p>Python Program</p>
<pre><code>from ctypes import *

def call_go_function():
    lib = cdll.LoadLibrary("./awesome.so")
    lib.Add.argtypes = [c_longlong, c_longlong]
    print(lib.Add(12, 99))

call_go_function()
</code></pre>
<p>Go Program</p>
<pre><code>package main

import "C"

import (
    "sync"
)

var count int
var mtx sync.Mutex

//export Add
func Add(a, b int) int { return a + b }

func main() {}
</code></pre>
|
<p>From the Python path it looks like this is a 32-bit Python version. You cannot mix 32-bit and 64-bit user-space code.</p>
<p>So I guess you need to either:</p>
<ul>
<li>Rebuild your Go code as a 32-bit DLL (see GOARCH=386) or</li>
<li>Install and run a 64-bit Python version.</li>
</ul>
|
python|go|.so
| 2 |
1,909,006 | 64,487,173 |
NOT NULL constraint failed: home_profile.user_id
|
<h2>I have written an extension of the Django <code>auth.user</code> model which adds <em>middle_name</em> and <em>phone</em> fields to the model.</h2>
<pre><code>class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    middle_name = models.CharField(max_length=20, blank=True)
    phone = models.CharField(max_length=10, blank=True)

    def __str__(self):
        return f'{self.user.username} profile'
</code></pre>
<hr />
<h2>I am using the following forms to accept input from the user.</h2>
<pre><code>class RegistrationForm(UserCreationForm):
    email = forms.EmailField()

    class Meta:
        model = User
        fields = ['first_name', 'last_name', 'email', 'username', 'password1', 'password2']


class ProfileForm(forms.ModelForm):
    phone = forms.CharField(max_length=10, min_length=10)
    middle_name = forms.CharField(max_length=20, required=True)

    class Meta:
        model = Profile
        fields = ['middle_name', 'phone']
</code></pre>
<hr />
<h2>The view for the register route is as follows:</h2>
<pre><code>def register(request):
    if request.method == 'POST':
        user_form = RegistrationForm(request.POST)
        profile_form = ProfileForm(request.POST)
        if user_form.is_valid() and profile_form.is_valid():
            username = user_form.cleaned_data.get('username')
            user_form.save()
            profile_form.save()
            messages.success(request, f'Welcome { username }! Your account has been created. Sign in to continue.')
            return redirect('login')
        else:
            return render(request, 'home/register.html', { 'user_form': user_form, 'profile_form': profile_form })
    else:
        user_form = RegistrationForm()
        profile_form = ProfileForm()
        return render(request, 'home/register.html', { 'user_form': user_form, 'profile_form': profile_form })
</code></pre>
<hr />
<h2>This is contents of the signals.py file</h2>
<pre><code>from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver
from .models import Profile


@receiver(post_save, sender=User)
def create_profile(sender, instance, created, **kwargs):
    if created:
        Profile.objects.create(user=instance)


@receiver(post_save, sender=User)
def save_profile(sender, instance, **kwargs):
    instance.profile.save()
</code></pre>
<hr />
<p>The issue is that while a row for the user is getting created in the Profile model, data is not getting written into the fields of the row. I am seeing an <code>Exception Value: NOT NULL constraint failed: home_profile.user_id</code></p>
|
<p>By using a signal, you will create a <code>Profile</code> object, but the <code>profile_form</code> will be unaware of that, and thus create <em>another</em> one that does not link to the user. You thus might want to remove the signals. <a href="https://lincolnloop.com/blog/django-anti-patterns-signals/" rel="nofollow noreferrer">Signals are often an <em>anti-pattern</em></a>, and furthermore there are several ORM calls that can <em>circumvent</em> the signals.</p>
<p>If you <em>remove</em> the signals, you will still need to link the <code>Profile</code> object wrapped in your <code>profile_form</code> to the user, you do this with:</p>
<pre><code># remove the signal that constructs a Profile

def register(request):
    if request.method == 'POST':
        user_form = RegistrationForm(request.POST)
        profile_form = ProfileForm(request.POST)
        if user_form.is_valid() and profile_form.is_valid():
            <b>user =</b> user_form.save()  # ← assign the user to a variable
            profile_form<b>.instance.user = user</b>  # ← set it as user of the profile
            profile_form.save()
            messages.success(request, f'Welcome {user.username}! Your account has been created. Sign in to continue.')
            return redirect('login')
    else:
        user_form = RegistrationForm()
        profile_form = ProfileForm()
    return render(request, 'home/register.html', { 'user_form': user_form, 'profile_form': profile_form })</code></pre>
|
python|django
| 1 |
1,909,007 | 69,696,503 |
How to group and sum data and return the biggest sum in Python?
|
<p>Let's say my data looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>news_title</th>
<th>company</th>
</tr>
</thead>
<tbody>
<tr>
<td>string</td>
<td>Facebook</td>
</tr>
<tr>
<td>string</td>
<td>Facebook</td>
</tr>
<tr>
<td>string</td>
<td>Amazon</td>
</tr>
<tr>
<td>string</td>
<td>Apple</td>
</tr>
<tr>
<td>string</td>
<td>Amazon</td>
</tr>
<tr>
<td>string</td>
<td>Facebook</td>
</tr>
</tbody>
</table>
</div>
<p>How can I group the companies and get name and the number for the company with the biggest sum?</p>
<p>I want to be able to print something like:</p>
<p>Facebook was mentioned in the news the most - 24 times.</p>
<p>I tried this but it did not work the way I wanted:</p>
<pre><code>df.groupby("company").sum()
</code></pre>
|
<p>Use <code>value_counts</code>:</p>
<pre><code>>>> df.company.value_counts().head(1)
Facebook 3
Name: company, dtype: int64
</code></pre>
<p><strong>Update</strong>:</p>
<blockquote>
<p>Could you please tell me how I could go about printing it out in a sentence?</p>
</blockquote>
<pre><code>company, count = list(df.company.value_counts().head(1).items())[0]
print(f'{company} was mentioned in the news the most - {count} times.')
# Output:
Facebook was mentioned in the news the most - 3 times.
</code></pre>
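Equivalently, `idxmax`/`max` on the value counts give the name and count directly; a sketch with the six rows from the question's table:

```python
import pandas as pd

df = pd.DataFrame({'company': ['Facebook', 'Facebook', 'Amazon',
                               'Apple', 'Amazon', 'Facebook']})

counts = df['company'].value_counts()
top, n = counts.idxmax(), counts.max()
print(f'{top} was mentioned in the news the most - {n} times.')
# Facebook was mentioned in the news the most - 3 times.
```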
|
python|pandas|sum
| 2 |
1,909,008 | 69,688,115 |
How to find XML element fast using Python?
|
<p>I am quite new to XML and to what makes code effective, and the code I am using takes quite a long time to run.</p>
<p>So I want to extract the elevation from given lat, long-values as fast as possible (I have a lot of lat,long-points). This is how I tried it:</p>
<pre><code>import xml.etree.ElementTree as ET
from urllib.request import urlopen
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
def elevation(lat, long):
    query = ('http://openwps.statkart.no/skwms1/wps.elevation2?request=Execute&service=WPS&version=1.0.0'
             f'&identifier=elevation&datainputs=lat={lat};lon={long};epsg=4326')
    parsing = "{http://www.opengis.net/wps/1.0.0}"
    with urlopen(query) as f:
        tree = ET.parse(f)
    root = tree.getroot()
    return float(root.findall(f".//{parsing}Data/*")[0].text)
</code></pre>
<p>Using this function on the data set I have extracted from a CSV file, with several datasets within the same file separated by a "new_sheep" line:</p>
<pre><code>df = pd.read_csv("/Users/ninsalv/Documents/Sheepdata/Data.csv", delimiter=';',
                 dtype={"Initial start": "str", "Start": "str", "Stop": "str"})
print(df.head())

dataset = 1
Lat = []
Long = []
temp = 0
for i in range(len(df)):
    if "new_sheep" in df.iloc[i][0]:
        temp += 1
        continue
    if temp == dataset:
        Lat.append(df.iloc[i][3])
        Long.append(df.iloc[i][4])
    if temp > dataset:
        break

step = np.linspace(0, len(Lat), len(Lat))
altitude = []
for i in range(len(Lat)):
    altitude.append(elevation(Lat[i], Long[i]))
    if (i % 100) == 0:
        print("round number ", i)

plt.plot(step, altitude)
</code></pre>
<p>This works, but it takes almost a minute to find every 100 altitudes, and I have about 7000-15000 points to check in my dataset. Does anybody know a technique, in XML parsing, pandas, or something else, that may make my code faster?</p>
|
<p>What you need to do is fetch the data (the HTTP requests) in <strong>parallel</strong>. You can use multithreading for that.</p>
<p>See the example below.</p>
<pre><code>import requests
from requests.sessions import Session
import time
from threading import Thread, local
from queue import Queue

url_list = []  # TODO long list of urls to be populated by your code
q = Queue(maxsize=0)  # Use a queue to store all URLs
for url in url_list:
    q.put(url)
thread_local = local()  # The thread_local will hold a Session object

def get_session() -> Session:
    if not hasattr(thread_local, 'session'):
        thread_local.session = requests.Session()  # Create a new Session if not exists
    return thread_local.session

def download_link() -> None:
    '''download link worker, get URL from queue until no url left in the queue'''
    session = get_session()
    while not q.empty():
        url = q.get()
        with session.get(url) as response:
            print(f'Read {len(response.content)} from {url}')
        q.task_done()  # tell the queue, this url downloading work is done

def download_all(urls) -> None:
    '''Start 10 threads, each thread as a wrapper of downloader'''
    thread_num = 10
    for i in range(thread_num):
        t_worker = Thread(target=download_link)
        t_worker.start()
    q.join()  # main thread wait until all url finished downloading

print("start work")
start = time.time()
download_all(url_list)
end = time.time()
print(f'download {len(url_list)} links in {end - start} seconds')
</code></pre>
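`concurrent.futures` is a simpler alternative to hand-rolled threads and queues, and results come back in input order. A sketch where `fetch` is a hypothetical pure stand-in for the `elevation()` call (so it runs without network access):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(point):
    # placeholder for elevation(lat, long); a real version would do the HTTP call
    lat, lon = point
    return lat + lon

points = [(1, 2), (3, 4), (5, 6)]
with ThreadPoolExecutor(max_workers=10) as ex:
    results = list(ex.map(fetch, points))  # preserves input order
print(results)  # [3, 7, 11]
```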
|
python|pandas|xml|parsing
| 0 |
1,909,009 | 66,729,277 |
Cannot perform reduction function min on tensor with no elements because the operation does not have an identity at THCTensorMathReduce.cu:64
|
<p>I am configuring a GitHub repo in which the author stated that you have to install pytorch=0.4 and python=3.7. Now, I have CUDA 11.0 and the PyTorch version is conflicting with CUDA. After installing PyTorch it gives the error below. Any hint?</p>
<p>My Conda List</p>
<pre><code># Name Version Build Channel
_libgcc_mutex 0.1 main
_pytorch_select 0.1 cpu_0
blas 1.0 mkl
ca-certificates 2021.1.19 h06a4308_1
certifi 2020.12.5 py37h06a4308_0
cffi 1.14.0 py37h2e261b9_0
cuda100 1.0 0 pytorch
cudatoolkit 9.2 0
freetype 2.10.4 h5ab3b9f_0
intel-openmp 2019.4 243
jpeg 9b h024ee3a_2
lcms2 2.11 h396b838_0
libedit 3.1.20210216 h27cfd23_1
libffi 3.2.1 hf484d3e_1007
libgcc-ng 9.1.0 hdf63c60_0
libmklml 2019.0.5 0
libpng 1.6.37 hbc83047_0
libstdcxx-ng 9.1.0 hdf63c60_0
libtiff 4.2.0 h3942068_0
libwebp-base 1.2.0 h27cfd23_0
lz4-c 1.9.3 h2531618_0
mkl 2020.2 256
mkl-service 2.3.0 py37he8ac12f_0
mkl_fft 1.3.0 py37h54f3939_0
mkl_random 1.1.1 py37h0573a6f_0
ncurses 6.2 he6710b0_1
ninja 1.10.2 py37hff7bd54_0
numpy 1.20.1 pypi_0 pypi
numpy-base 1.19.2 py37hfa32c7d_0
olefile 0.46 py37_0
openssl 1.1.1j h27cfd23_0
pillow 8.1.2 py37he98fc37_0
pip 21.0.1 py37h06a4308_0
pycparser 2.20 py_2
python 3.7.1 h0371630_7
pytorch 1.2.0 py3.7_cuda9.2.148_cudnn7.6.2_0 pytorch
readline 7.0 h7b6447c_5
scipy 1.6.1 pypi_0 pypi
setuptools 52.0.0 py37h06a4308_0
six 1.15.0 py37h06a4308_0
sqlite 3.33.0 h62c20be_0
tk 8.6.10 hbc83047_0
torchaudio 0.8.0 pypi_0 pypi
torchvision 0.4.0 py37_cu92 pytorch
typing 3.7.4.3 py37h06a4308_0
typing-extensions 3.7.4.3 hd3eb1b0_0
typing_extensions 3.7.4.3 pyh06a4308_0
wheel 0.36.2 pyhd3eb1b0_0
xz 5.2.5 h7b6447c_0
zlib 1.2.11 h7b6447c_3
zstd 1.4.5 h9ceee32_0
</code></pre>
<p><strong>Error</strong></p>
<pre><code>Model size: 44.17957M
==> Epoch 1/800 lr:0.0003
Traceback (most recent call last):
  File "/media/khawar/HDD_Khawar1/hypergraph_reid/main_video_person_reid_hypergraphsage_part.py", line 369, in <module>
    main()
  File "/media/khawar/HDD_Khawar1/hypergraph_reid/main_video_person_reid_hypergraphsage_part.py", line 230, in main
    train(model, criterion_xent, criterion_htri, optimizer, trainloader, use_gpu)
  File "/media/khawar/HDD_Khawar1/hypergraph_reid/main_video_person_reid_hypergraphsage_part.py", line 279, in train
    htri_loss = criterion_htri(features, pids)
  File "/home/khawar/anaconda3/envs/hypergraph_reid/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/khawar/HDD_Khawar1/hypergraph_reid/losses.py", line 86, in forward
    dist_an.append(dist[i][mask[i] == 0].min())
RuntimeError: invalid argument 1: cannot perform reduction function min on tensor with no elements because the operation does not have an identity at /opt/conda/conda-bld/pytorch_1565287025495/work/aten/src/THC/generic/THCTensorMathReduce.cu:64
</code></pre>
|
<p>As the error message suggests, the argument for <code>min</code> function is empty.<br />
The behavior of <code>torch.min([])</code> is undefined.</p>
<p>Check that <code>dist[i][mask[i] == 0]</code> is not empty, before taking <code>min</code> of it.</p>
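The same guard applies in plain Python, where `min()` of an empty sequence also raises; a hedged sketch using the `default` argument:

```python
def safe_min(values, default=float('inf')):
    # min() raises ValueError on an empty sequence unless a default is given
    return min(values, default=default)

print(safe_min([3, 1, 2]))  # 1
print(safe_min([]))         # inf
```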
|
python|pytorch|torch|torchvision
| 1 |
1,909,010 | 66,465,135 |
Iterating one array over another in Numpy Python
|
<p>I am trying to create a numpy function that adds every element of <code>second_Numbers</code> to each element of <code>first_Numbers</code>. So for the first element of <code>first_Numbers</code> the result is <code>(6, 10, 14)</code>, made by <code>(3+3=6, 3+7=10, 3+11=14)</code>. For the second element of <code>first_Numbers</code> it is <code>(8, 12, 16)</code>, made by <code>(5+3=8, 5+7=12, 5+11=16)</code>.</p>
<pre><code>first_Numbers = np.array([3, 5])
second_Numbers = np.array([3, 7, 11])
</code></pre>
<p>Expected Output:</p>
<pre><code>[6,10,14]
[8,12,16]
</code></pre>
|
<pre><code>In [1]: first_Numbers = np.array([3, 5])
   ...: second_Numbers = np.array([3, 7, 11])
</code></pre>
<p>Make the first array (2,1) shape:</p>
<pre><code>In [2]: first_Numbers[:,None]
Out[2]:
array([[3],
       [5]])
</code></pre>
<p>Then by <code>broadcasting</code> we can add them, getting a (2,3) result:</p>
<pre><code>In [3]: first_Numbers[:,None]+second_Numbers
Out[3]:
array([[ 6, 10, 14],
       [ 8, 12, 16]])
</code></pre>
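`np.add.outer` computes the same (2, 3) sum table in one call, for comparison with the broadcasting form above:

```python
import numpy as np

first_Numbers = np.array([3, 5])
second_Numbers = np.array([3, 7, 11])

# outer "sum table": out[i, j] = first_Numbers[i] + second_Numbers[j]
out = np.add.outer(first_Numbers, second_Numbers)
print(out)
# [[ 6 10 14]
#  [ 8 12 16]]
```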
|
python|arrays|function|numpy|sum
| 2 |
1,909,011 | 64,921,505 |
Positional argument follows keyword - menubar.add_cascade(label = "File", menu.filemenu) - calculator
|
<p>I'm having trouble making a calculator for college work, and I'm new to Python.</p>
<pre><code>from tkinter import*
import math
import parser
import tkinter.messagebox
root = Tk()
root.title("Joshua's Scientific Calculator")
root.configure(background ="powder blue")
root.resizable(width =False, height =False)
root.geometry("480x568+0+0")
calc = Frame(root)
calc.grid()
menubar = Menu(calc)
filemenu = Menu(menubar, tearoff =0)
menubar.add_cascade(label = "File", menu.filemenu)
filemenu.add_command(label = "Standard")
filemenu.add_command(label = "Scientific")
filemenu.add_seperator()
filemenu.add_command(label = "Quit")
editmenu = Menu(menubar, tearoff =0)
menubar.add_cascade(label = "Edit", menu.editmenu)
editmenu.add_command(label = "Cut")
editmenu.add_command(label = "Copy")
editmenu.add_seperator()
editmenu.add_command(label = "Paste")
helpmenu = Menu(menubar, tearoff =0)
menubar.add_cascade(label = "Help", menu.helpmenu)
helpmenu.add_command(label = "View Help")
root.config(menu=menubar)
root.mainloop()
</code></pre>
<p>This is what I've done so far, but no matter what I do, it keeps raising the error <code>positional argument follows keyword argument</code> at <code>menubar.add_cascade(label = "File", menu.filemenu)</code>.</p>
<p>Help?</p>
|
<p>It's because you are using the following construct</p>
<pre><code>menubar.add_cascade(label = "File", menu.filemenu)
</code></pre>
<p>here, since you use <code>label=</code> it makes that argument a "keyword argument", and <code>menu.filemenu</code> is a "positional argument" because you didn't specify which parameter takes that argument.</p>
<p>Python's syntax doesn't allow that order, positional arguments MUST precede all keyword arguments.</p>
<p>You may use all arguments as positional or as keyword, depending on the signature of the function you call</p>
<p>Checkout python's <a href="https://docs.python.org/3.8/glossary.html" rel="nofollow noreferrer">glossary</a> for more details about <a href="https://docs.python.org/3.8/glossary.html#term-argument" rel="nofollow noreferrer">arguments</a> and <a href="https://docs.python.org/3.8/glossary.html#term-parameter" rel="nofollow noreferrer">parameters</a>.</p>
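A minimal, tkinter-free illustration of the rule; the fix in the question's code is to pass <code>menu=filemenu</code> as a keyword:

```python
# stand-in for menubar.add_cascade; the parameter names mirror tkinter's
def add_cascade(label=None, menu=None):
    return (label, menu)

# add_cascade(label="File", "filemenu")  # SyntaxError: positional argument follows keyword argument
print(add_cascade(label="File", menu="filemenu"))  # ('File', 'filemenu')
```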
|
python|tkinter
| 1 |
1,909,012 | 63,816,791 |
Pandas Mean of Repeating Measurement
|
<p>I performed a measurement where I changed a parameter and measured a physical quantity. I performed multiple measurements and saved the data to a pandas dataframe. The result looks something like this:</p>
<pre><code> parameter measured_value
0 10 1.10
1 20 1.21
2 30 1.29
3 40 1.42
4 50 1.54
5 10 1.14
6 20 1.22
7 30 1.32
8 40 1.41
9 50 1.52
</code></pre>
<p>In that example I repeated the measurement twice and varied the parameter from 10 to 50 in steps of 10. Is there a way to average the measured values, such that I get the following result:</p>
<pre><code> parameter mean_measured_value
0 10 1.10
1 20 1.20
2 30 1.30
3 40 1.40
4 50 1.50
</code></pre>
<p>I analyze my data typically with matlab. Basically, I could use numpy to do data analysis like matlab, but this looks quite inelegant:</p>
<pre><code>meas_value = np.asarray(df['measured_value'])
mean_meas_value = np.mean(np.reshape(meas_value, (5,2)), axis=1)
</code></pre>
<p>Is there an elegant way with pandas?</p>
|
<p>If I understood you right, group by the parameter and take the mean (not the sum):</p>
<pre><code>mean_measured_value = df.groupby('parameter', as_index=False)['measured_value'].mean()
</code></pre>
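A runnable sketch with the question's data; note the real means differ slightly from the idealized expected output (e.g. 1.12 rather than exactly 1.10):

```python
import pandas as pd

df = pd.DataFrame({
    'parameter':      [10, 20, 30, 40, 50, 10, 20, 30, 40, 50],
    'measured_value': [1.10, 1.21, 1.29, 1.42, 1.54, 1.14, 1.22, 1.32, 1.41, 1.52],
})

result = df.groupby('parameter', as_index=False)['measured_value'].mean()
print(result)
```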
|
python|database|pandas|mean
| 0 |
1,909,013 | 65,329,435 |
Error b64encode a bytes-like object is required, not 'str' python3
|
<p>I am getting an error when base64-encoding in Python 3, and I can't find the solution.</p>
<pre><code>uniq_token_hash = hashlib.sha256(uniq_token_string.encode('UTF-8')).hexdigest()
print ('UNIQ HASH: %s' % uniq_token_hash)
auth_token = b64encode('%s;%s;%s' % (devapp,unix_timestamp, uniq_token_hash))
print ('AUTH TOKEN: %s' % auth_token)
line 58, in b64encode
encoded = binascii.b2a_base64(s, newline=False)
TypeError: a bytes-like object is required, not 'str'
</code></pre>
|
<p>There are two kinds of strings in Python. The first is made of Unicode codepoints (characters), and it's called <code>str</code>. The second is a sequence of bytes (small integers from 0 to 255), and it's called <code>bytes</code>. <code>b64encode</code> requires <code>bytes</code> as input. You already know how to convert <code>str</code> to <code>bytes</code>, because you've done it once with <code>.encode</code>.</p>
<pre><code>auth_token = b64encode(('%s;%s;%s' % (devapp,unix_timestamp, uniq_token_hash)).encode('UTF-8'))
</code></pre>
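A round-trip sketch of the fix, using hypothetical placeholder values for `devapp`, the timestamp, and the hash:

```python
from base64 import b64encode, b64decode

# placeholder values standing in for devapp / unix_timestamp / uniq_token_hash
token = '%s;%s;%s' % ('devapp', 1608000000, 'abc123')
encoded = b64encode(token.encode('utf-8'))   # bytes in, bytes out
decoded = b64decode(encoded).decode('utf-8')
print(encoded, decoded)
```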
|
python|python-3.x|token
| 1 |
1,909,014 | 65,271,213 |
Is there a better way to write this short script?
|
<pre><code>string = "rRdDbB"

for letter in string:
    print(letter)

for letter in string:
    for subletter in string:
        print(letter + subletter)

for letter in string:
    for subletter in string:
        for subsubletter in string:
            print(letter + subletter + subsubletter)
</code></pre>
<p>I would like to continue this until I get string lengths of 10, but my approach seems very tedious.</p>
|
<p>That's a <a href="//wikipedia.org/wiki/Power_set" rel="nofollow noreferrer">power set</a>. Here is an implementation:</p>
<pre><code>def powerset(in_a):
    if len(in_a) <= 1:
        yield in_a
        yield []
    else:
        for out_a in powerset(in_a[1:]):
            yield [in_a[0]] + out_a
            yield out_a

s = 'rRdDbB'
a = list(powerset(list(s)))
print(a)
</code></pre>
<p>Result:</p>
<pre><code>[['r', 'R', 'd', 'D', 'b', 'B'], ['R', 'd', 'D', 'b', 'B'], ['r', 'd', 'D',
'b', 'B'], ['d', 'D', 'b', 'B'], ['r', 'R', 'D', 'b', 'B'], ['R', 'D', 'b',
'B'], ['r', 'D', 'b', 'B'], ['D', 'b', 'B'], ['r', 'R', 'd', 'b', 'B'], ['R',
'd', 'b', 'B'], ['r', 'd', 'b', 'B'], ['d', 'b', 'B'], ['r', 'R', 'b', 'B'],
['R', 'b', 'B'], ['r', 'b', 'B'], ['b', 'B'], ['r', 'R', 'd', 'D', 'B'], ['R',
'd', 'D', 'B'], ['r', 'd', 'D', 'B'], ['d', 'D', 'B'], ['r', 'R', 'D', 'B'],
['R', 'D', 'B'], ['r', 'D', 'B'], ['D', 'B'], ['r', 'R', 'd', 'B'], ['R', 'd',
'B'], ['r', 'd', 'B'], ['d', 'B'], ['r', 'R', 'B'], ['R', 'B'], ['r', 'B'],
['B'], ['r', 'R', 'd', 'D', 'b'], ['R', 'd', 'D', 'b'], ['r', 'd', 'D', 'b'],
['d', 'D', 'b'], ['r', 'R', 'D', 'b'], ['R', 'D', 'b'], ['r', 'D', 'b'], ['D',
'b'], ['r', 'R', 'd', 'b'], ['R', 'd', 'b'], ['r', 'd', 'b'], ['d', 'b'], ['r',
'R', 'b'], ['R', 'b'], ['r', 'b'], ['b'], ['r', 'R', 'd', 'D'], ['R', 'd',
'D'], ['r', 'd', 'D'], ['d', 'D'], ['r', 'R', 'D'], ['R', 'D'], ['r', 'D'],
['D'], ['r', 'R', 'd'], ['R', 'd'], ['r', 'd'], ['d'], ['r', 'R'], ['R'],
['r'], []]
</code></pre>
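If the intent of the original nested loops was every string of a given length *with repetition* (rather than subsets), `itertools.product` generates exactly that, and `repeat` can go up to 10:

```python
from itertools import product

s = "rRdDbB"
# all strings of length 1..3 built from s, repetition allowed;
# change range(1, 4) to range(1, 11) for lengths up to 10
strings = [''.join(combo) for n in range(1, 4) for combo in product(s, repeat=n)]
print(len(strings))  # 6 + 36 + 216 = 258
```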
|
python|for-loop
| 0 |
1,909,015 | 68,528,757 |
group_list("g", ["a","b","c"]) should return "g: a, b, c". I have tried the following; does anyone know how to remove the trailing ", "?
|
<pre><code>def group_list(group, users):
    members = ""
    for x in users:
        members = members + x + ", "
    return "{}: ".format(group) + members

print(group_list("Marketing", ["Mike", "Karen", "Jake", "Tasha"]))  # Should be "Marketing: Mike, Karen, Jake, Tasha"
print(group_list("Engineering", ["Kim", "Jay", "Tom"]))  # Should be "Engineering: Kim, Jay, Tom"
print(group_list("Users", ""))  # Should be "Users:"
</code></pre>
<p>My results are:</p>
<ol>
<li>Marketing: Mike, Karen, Jake, Tasha,</li>
<li>Engineering: Kim, Jay, Tom,</li>
<li>Users:</li>
</ol>
<p>I want to know how to remove the trailing ", ". Thank you!</p>
|
<p>Instead, you can use <code>.join()</code></p>
<pre><code>def group_list(group, users):
    return "{}: ".format(group) + ', '.join(users)

print(group_list("Marketing", ["Mike", "Karen", "Jake", "Tasha"]))  # Should be "Marketing: Mike, Karen, Jake, Tasha"
print(group_list("Engineering", ["Kim", "Jay", "Tom"]))  # Should be "Engineering: Kim, Jay, Tom"
print(group_list("Users", ""))
</code></pre>
|
python
| 1 |
1,909,016 | 68,765,888 |
Predicted data not similar with actual data after training model
|
<p>I'm new to machine learning and am now working on a project about time series forecasting. I'm confused about why the predicted data after training the model isn't similar to the actual data.</p>
<p><a href="https://i.stack.imgur.com/qnSwO.png" rel="nofollow noreferrer">see the data here</a></p>
<p>I'm using TensorFlow.js with ReactJS. Can anyone help me find what is wrong with the model? Below is the code for the model.</p>
<p>Anyone who help me will appreciated..</p>
<pre><code>...
let x = tf.tensor2d(labels, [labels.length, 1]);
let y = tf.tensor2d(inputs, [inputs.length, 1]);

let inputSteps = inputs.length / 3;
let labelSteps = labels.length / 3;
let xs = x.reshape([3, inputSteps, 1]);
let ys = y.reshape([3, labelSteps, 1]);

const epochs = 30;
const window_size = 10;
const batchSize = 3;
const shuffle = true;

const input_layer_neurons = [inputSteps, 1];
const rnn_input_shape = input_layer_neurons;

model.add(tf.layers.dense({ inputShape: input_layer_neurons, units: 512 }));

let lstm_cells = [];
for (let i = 0; i < hiddenLayers; i++) {
    lstm_cells.push(tf.layers.lstmCell({ units: 20 }));
}

model.add(tf.layers.lstm({
    cell: lstm_cells,
    units: 50,
    activation: 'relu',
    inputShape: rnn_input_shape,
    returnSequences: true
}));

model.add(tf.layers.dense({ units: 1, returnSequences: true }));

model.compile({
    optimizer: tf.train.adam(0.0005),
    loss: tf.losses.meanSquaredError,
    metrics: ['mse'],
});

const lossError = [], quantityEpochs = [];
await model.fit(xs, ys, { shuffle, batchSize, epochs, callbacks: {
    onEpochEnd: async (epoch, log) => {
        console.log('loss : ' + log.loss);
        lossError.push(log.loss);
        quantityEpochs.push(epoch);
    }
}});

const outps = model.predict(ys);
...
</code></pre>
|
<p>I don't see anything wrong here.</p>
<p><a href="https://i.stack.imgur.com/qnSwO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qnSwO.png" alt="enter image description here" /></a></p>
<p>Your model is working just fine. Predicted values will never be the same as actual, unless you overfit the hell out of your model (and then it won't generalize). In any case, your graph shows that the model is learning.</p>
<p>Here is what you can do to get better results -</p>
<ol>
<li>Train for more epochs to reduce the loss further.</li>
<li>If the loss doesn't go further down, the model needs more capacity to learn better: add more trainable parameters (more layers, larger layers, etc.).</li>
</ol>
|
machine-learning|deep-learning|time-series|forecasting|tensorflow.js
| 0 |
1,909,017 | 68,646,271 |
How to check for a new entry in postgres through python?
|
<p>How to check for new entry in postgres with sqlalchemy? What will the possible logic be using timestamp or primary id?</p>
|
<p>Possible logic could be:</p>
<pre><code>from time import sleep
import pyodbc
import pandas as pd
While True:
conn_sql = pyodbc.connect('DSN=YOUR_DSN; UID=YOUR_USER; PWD=YOUR_PASSWORD')
# get actual records count
actual_records_count = pd.read_sql("SELECT COUNT(*) FROM dbo.your_table", conn_sql)
# wait 5 seconds
sleep(5)
# get "new" actual records count
new_records_count = pd.read_sql("SELECT COUNT(*) FROM dbo.your_table", conn_sql)
If actual_records_count == new_records_count:
print("There's no new records here")
else:
print("New row added")
</code></pre>
<p><strong>DISCLAIMER</strong> :<code>while True</code> means loop forever. The while statement takes an expression and executes the loop body while the expression evaluates to (boolean) <code>"true"</code>. True always evaluates to boolean <code>"true"</code> and thus executes the loop body indefinitely.</p>
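<p>The question also mentions using a timestamp or primary id. A minimal sketch of that logic, using the stdlib <code>sqlite3</code> module for illustration (the table and column names here are made up; the same <code>WHERE id &gt; ?</code> query works against Postgres through SQLAlchemy):</p>

```python
import sqlite3

# Remember the highest primary key seen so far; anything above it is new.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("INSERT INTO events (payload) VALUES ('first')")

last_seen_id = conn.execute("SELECT COALESCE(MAX(id), 0) FROM events").fetchone()[0]

# ...later, another process inserts a row...
conn.execute("INSERT INTO events (payload) VALUES ('second')")

new_rows = conn.execute(
    "SELECT id, payload FROM events WHERE id > ?", (last_seen_id,)
).fetchall()
print(new_rows)  # [(2, 'second')]
```

<p>This avoids counting the whole table on every poll and also tells you <em>which</em> rows are new, not just that the count changed.</p>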
|
python|postgresql
| 0 |
1,909,018 | 71,466,999 |
Dataframe: How to remove dot in a string
|
<p>I want to use categorical features directly with a CatBoost model, so I need to declare my object columns as categorical in the CatBoost model. I have a column in my data frame which is an object containing NACE codes looking like this:</p>
<p>NACE_code</p>
<pre><code>5632       81.101
8060       41.200
15147      43.120
24644      68.100
29144      86.909
37122          68
39853          43
59268          43
108633     70.220
108693     56.102
175820     43.320
184606     41.200
Name: NACE_code, dtype: object
</code></pre>
<p>Python doesn't accept this column as a categorical column. Instead it tells me that this is a float, since some of the values have dots. I am relatively new to Python and I have tried different ways to remove the dot from those values, but my last attempt changes all the values without a dot to NaN.</p>
<pre><code>df['NACE_code'].str.replace(r"(\d)\.", r"\1")

5632       81101
8060       41200
15147      43120
24644      68100
29144      86909
37122        NaN
39853        NaN
59268        NaN
108633     70220
108693     56102
175820     43320
184606     41200
Name: NACE_KODE, dtype: object
</code></pre>
<p>How do I get my column to look like this? I appreciate any help I can get!</p>
<pre><code>5632       81101
8060       41200
15147      43120
24644      68100
29144      86909
37122         68
39853         43
59268         43
108633     70220
108693     56102
175820     43320
184606     41200
</code></pre>
|
<pre><code># Cast to string first, then strip the dots.
# regex=False makes '.' a literal dot; by default pandas would treat it
# as a regular expression where '.' matches any character.
df.NACE_code = df.NACE_code.astype(str)
df.NACE_code = df.NACE_code.str.replace('.', '', regex=False)
</code></pre>
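<p>As a quick sanity check, the same per-element replacement in plain Python shows that values without a dot pass through unchanged rather than becoming NaN:</p>

```python
# each value keeps its digits; values with no dot are left as-is
codes = ["81.101", "41.200", "68", "43", "70.220"]
cleaned = [c.replace(".", "") for c in codes]
print(cleaned)  # ['81101', '41200', '68', '43', '70220']
```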
|
python|string|dataframe
| 1 |
1,909,019 | 10,758,668 |
numpy.loadtxt gives "not iterable" error
|
<p>I'm trying to use <code>numpy.loadtxt</code> to read the data in a file that looks like this:</p>
<pre><code>## 14 line of header
3 0 36373.7641026
3 1 36373.7641026
3 2 36373.7641026
...
</code></pre>
<p>And when I give it this:</p>
<pre><code>>>> chunk, power = numpy.loadtxt(bf,skiprows=14,usecols=(1,2),unpack=True)
</code></pre>
<p>Or even this:</p>
<pre><code>>>> power = numpy.loadtxt(bf,skiprows=14,usecols=(2))
</code></pre>
<p>It says, <code>TypeError: 'int' object is not iterable</code></p>
<p>I assumed it was because the first two columns were clearly integers not float, but now I'm not even sure which int object it's referring, because it won't even read just the floats. How do I make <code>loadtxt</code> work?</p>
<p>Related: How do I specify the format of multiple columns using <code>dtype = ?</code> I'm having trouble figuring it out via google.</p>
|
<p>In your second example, the problem is likely <code>usecols=(2)</code>. <code>usecols</code> must be a sequence. <code>(2)</code> is the integer 2, not a one-element tuple containing 2, and is likely what the error message is complaining about: <code>loadtxt()</code> is trying to iterate over an <code>int</code>. Use <code>(2,)</code> (or <code>[2]</code> if you prefer). </p>
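<p>The distinction is easy to verify in plain Python: <code>(2)</code> is just a parenthesized integer, while the trailing comma is what makes a tuple:</p>

```python
cols_int = (2)     # parentheses alone don't make a tuple; this is the int 2
cols_tuple = (2,)  # the trailing comma does
print(type(cols_int).__name__, type(cols_tuple).__name__)  # int tuple
```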
|
python|input|numpy
| 10 |
1,909,020 | 62,877,111 |
Saving all used packages and versions of a python project in a text file on windows?
|
<p>I am trying to save the packages and versions used in my python project. I installed pipreqs but I don't know why it's not working. Maybe I'm not using it correctly. What I did is the following in my jupyter notebook:</p>
<pre><code>pipreqs C:\Users\Documents\myproject\gitlab\Scripts\packages
</code></pre>
<p>When I run it, it says syntax error. I tried replacing <code>path = r'C:\Users\Documents\myproject\gitlab\Scripts\packages'</code> and it still didn't work.
I want the requirements.txt to be saved in this directory. Am I missing something? Can someone please guide me through how pipreqs works ? Thanks</p>
|
<p>Under Windows, running pipreqs with normal Windows paths works, e.g.</p>
<pre><code>pipreqs.exe c:\MyProject
</code></pre>
<p>Works as requested, using python 3.7.8 and pipreqs 0.4.10.</p>
<p>The syntax error comes from one of your scripts, which pipreqs actually loads.
You should try to run your scripts one by one to find which of them has the syntax error.</p>
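<p>One way to find the offending file without running anything is the stdlib <code>py_compile</code> module, which reports a <code>SyntaxError</code> with the file and line. The script below is a deliberately broken placeholder written to a temp directory just for illustration:</p>

```python
import os
import py_compile
import tempfile

# write a deliberately broken placeholder script
src = os.path.join(tempfile.gettempdir(), "suspect_script.py")
with open(src, "w") as f:
    f.write("def broken(:\n")

try:
    py_compile.compile(src, doraise=True)
    result = "ok"
except py_compile.PyCompileError as exc:
    result = "syntax error"
    print(exc.msg)  # names the file and line of the SyntaxError

print(result)  # syntax error
```

<p>Run this over each of your project's scripts and the first one that raises is the one tripping pipreqs.</p>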
<p>Good luck.</p>
|
python
| 0 |
1,909,021 | 62,588,981 |
'TypeError: Invalid shape (10000, 28, 28) for image data' in tensor flow Fashion MNIST image prediction
|
<p>I was following the Fashion MNIST image prediction tutorial on TensorFlow; built and trained the model, but after writing the functions to plot the predictions and while attempting to plot the predicted image, it throws the error:</p>
<blockquote>
<p>TypeError: Invalid shape (10000, 28, 28) for image data</p>
</blockquote>
<p>The entire code:</p>
<pre><code>import tensorflow as tf
import numpy as np
from tensorflow import keras
import matplotlib.pyplot as plt
fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt',
'Sneaker', 'Bag', 'Ankle boot']
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid = False
train_images = train_images/255
test_images = test_images/255
plt.figure(figsize=(10,10))
for i in range(25):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid=False
plt.imshow(train_images[i], cmap=plt.cm.binary)
plt.xlabel(class_names[train_labels[i]])
model = keras.Sequential([keras.layers.Flatten(input_shape=(28,28)),keras.layers.Dense(128,
activation ='relu'),keras.layers.Dense(10)])
model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=50)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print('Test Accuracy:', test_acc)
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
predictions = probability_model.predict(test_images)
predictions[0]
np.argmax(predictions[0])
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_labels, image=predictions_array, true_label[i], img[i]
plt.grid=False
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel('{}{:2.0f}%({})'.format(class_names[predicted_label], 100*np.max[predictions_array], class_names[true_label]),color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid=False
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color='#777777')
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label].set_color('blue')
i = 0
plt.figure(figsize=(6,3))
plt.subplot(1,2,1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1,2,2)
plot_value_array(i, predictions[i], test_labels)
</code></pre>
|
<p>Here is the corrected plot function. The original passed the whole <code>img</code> array of shape (10000, 28, 28) to <code>plt.imshow</code> instead of the single image <code>img[i]</code>, and also indexed <code>np.max[...]</code> with square brackets instead of calling <code>np.max(...)</code>:</p>
<pre><code>def plot_image(i, predictions_array, true_label, img):
predictions_array, true_labels, img = predictions_array, true_label[i], img[i]
plt.grid=False
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_labels:
color = 'blue'
else:
color = 'red'
print(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_labels])
plt.xlabel('{}{:2.0f}%({})'.format(class_names[predicted_label], 100*np.max(predictions_array), class_names[true_labels]), color=color)
</code></pre>
<p>working example: <a href="https://colab.research.google.com/drive/1owyRzS5lRW6DDc3p7D13D8Ih0yoj6Rz5?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1owyRzS5lRW6DDc3p7D13D8Ih0yoj6Rz5?usp=sharing</a></p>
|
python|python-3.x|tensorflow|machine-learning|neural-network
| 1 |
1,909,022 | 67,327,432 |
How to put implement chart.js into an existing page django
|
<p>I am making a website with a function that shows the total hours worked for an employee. When it shows the total hours worked, I also want it to show the hours worked for each day in that month. I managed to build the chart for that, but the chart shows on a separate page. How do I show the chart and the total hours worked on the same page?</p>
<p>code:</p>
<pre><code>def checkTotalHours(response):
empName = employeeName.objects.all()
if response.method == "POST":
if 'checkName' in response.POST:
n = response.POST.get("check")
emp = employeeName.objects.get(employee=n)
time = emp.total_Hours_Worked
messages.info(response, time + ' hours')
f = Name.objects.filter(name=emp)
allDate = []
cIn = []
today = datetime.today()
datem = today.month
for date in f:
allDate.append(date.date)
totalTime = (datetime.combine(date.date, date.timeOut)- datetime.combine(date.date, date.timeIn)).seconds/3600
cIn.append(totalTime)
hours = int(totalTime)
minutes = (totalTime*60) % 60
seconds = (totalTime*3600) % 60
time = "%d:%02d:%02d" % (hours, minutes, seconds)
#cIn.append(time)
global cInUpdated
global allDateUpdated
cInUpdated = []
allDateUpdated = []
for item in range(len(allDate)):
if allDate[item].month == datem:
cInUpdated.append(cIn[item])
allDateUpdated.append(allDate[item])
print("Here:: ", allDate[item].month," ", cIn[item] )
else:
continue
for item in range(len(cInUpdated)):
print("Work: ", allDateUpdated[item], " ",cInUpdated[item])
return redirect('/totalHours')
else:
form = CreateNewList()
return render(response, "main/checkTotalHours.html", {"empName":empName})
</code></pre>
<p>HTML:</p>
<pre><code>{% extends 'main/base.html' %}
{% block title %}Filter{% endblock %}
{% block content %}
<h1 class="p1">Filter</h1>
<hr class="mt-0 mb-4">
<h2 class="p2">by Total Hours Worked</h3>
<div class="pad">
<form method="post">
{% csrf_token %}
<div class="input-group mb-2">
<div class="input-group-prepend">
<input list="check" name="check" placeholder="enter name...">
<datalist id="check">
{% for empName in empName %}
<option value="{{empName.employee}}">
{% endfor %}
</datalist>
<button type="submit" name="checkName" value="checkName" class="button2" id="example2">Check</button>
</div>
</div>
</form>
</div>
{% for message in messages %}
<div class="container-fluid p-0">
<div class="alert {{ message.tags }} alert-info" role="alert" >
{{ message }}
</div>
</div>
{% endfor %}
{% endblock %}
</code></pre>
<p>and I implemented chart.js on another page with this code</p>
<pre><code>class HomeView(View):
def get(self, request, *args, **kwargs):
return render(request, 'main/index.html')
class ChartData(APIView):
def get(self, request, format = None):
chartLabel = allDateUpdated
chartdata = cInUpdated
data ={
"labels":allDateUpdated,
"chartLabel":chartLabel,
"chartdata":chartdata,
}
return Response(data)
</code></pre>
<p>HTML for the graph:</p>
<pre><code><!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
<meta charset="utf-8">
<title>chatsjs</title>
<!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css">
<!-- jQuery library -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<!-- Latest compiled JavaScript -->
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/js/bootstrap.min.js"></script>
</head>
<body class="container-fluid">
<center class="row">
<h1>implementation of <b>chartJS</b> using <b>django</b></h1>
</center>
<hr />
<div class="row">
<div class="col-md-6">
<canvas id="myChartline"></canvas>
</div>
<div class="col-md-6">
<canvas id="myChartBar"></canvas>
</div>
</div>
<script src="https://cdn.jsdelivr.net/npm/chart.js@2.8.0"></script>
<script>
var endpoint = '/api';
$.ajax({
method: "GET",
url: endpoint,
success: function(data) {
drawLineGraph(data, 'myChartline');
drawBarGraph(data, 'myChartBar');
console.log("drawing");
},
error: function(error_data) {
console.log(error_data);
}
})
function drawLineGraph(data, id) {
var labels = data.labels;
var chartLabel = data.chartLabel;
var chartdata = data.chartdata;
var ctx = document.getElementById(id).getContext('2d');
var chart = new Chart(ctx, {
// The type of chart we want to create
type: 'line',
// The data for our dataset
data: {
labels: labels,
datasets: [{
label: chartLabel,
backgroundColor: 'rgb(255, 100, 200)',
borderColor: 'rgb(55, 99, 132)',
data: chartdata,
}]
},
// Configuration options go here
options: {
scales: {
xAxes: [{
display: true
}],
yAxes: [{
ticks: {
beginAtZero: true
}
}]
}
}
});
}
function drawBarGraph(data, id) {
var labels = data.labels;
var chartLabel = data.chartLabel;
var chartdata = data.chartdata;
var ctx = document.getElementById(id).getContext('2d');
var myChart = new Chart(ctx, {
type: 'bar',
data: {
labels: labels,
datasets: [{
label: chartLabel,
data: chartdata,
backgroundColor: [
'rgba(255, 99, 132, 0.2)',
'rgba(54, 162, 235, 0.2)',
'rgba(255, 206, 86, 0.2)',
'rgba(75, 192, 192, 0.2)',
'rgba(153, 102, 255, 0.2)',
'rgba(255, 159, 64, 0.2)'
],
borderColor: [
'rgba(255, 99, 132, 1)',
'rgba(54, 162, 235, 1)',
'rgba(255, 206, 86, 1)',
'rgba(75, 192, 192, 1)',
'rgba(153, 102, 255, 1)',
'rgba(255, 159, 64, 1)'
],
borderWidth: 1
}]
},
options: {
scales: {
yAxes: [{
scaleLabel: {
display: true,
labelString: 'Time'
}
}]
}
}
});
}
</script>
</body>
</html>
</code></pre>
<p>I am wondering how to make it so that both shows in one page?</p>
<p>urls:</p>
<pre><code>path("totalHours/", views.checkTotalHours, name="totalHours"),
path('api/', views.ChartData.as_view()),
path('chart/', views.HomeView.as_view(), name = "chart")
</code></pre>
|
<p>You simply need to combine the code of your two templates. If needed, you can pass your data to your JS via the template by embedding values inside a <code><script></script></code> tag. Apart from that, there's really nothing else that needs to be done...</p>
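<p>For example, the chart data can be serialized in the same view that renders the totals page and dropped into the template context, where the template prints it for Chart.js. This sketch uses made-up variable names standing in for <code>allDateUpdated</code> / <code>cInUpdated</code> from the question:</p>

```python
import json
from datetime import date

# stand-ins for the per-day dates and hours computed in checkTotalHours()
all_date_updated = [date(2021, 4, 1), date(2021, 4, 2)]
c_in_updated = [7.5, 8.0]

# context to pass to render(); the template can echo these inside a script tag
context = {
    "chart_labels": json.dumps([d.isoformat() for d in all_date_updated]),
    "chart_data": json.dumps(c_in_updated),
}
print(context["chart_labels"])  # ["2021-04-01", "2021-04-02"]
print(context["chart_data"])    # [7.5, 8.0]
```

<p>With the data already in the page, the separate <code>/api</code> endpoint and the AJAX call are no longer needed for this view.</p>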
|
python|html|css|django|chart.js
| 1 |
1,909,023 | 60,636,225 |
Python 3.8.1 win 10 cmd sys args issue
|
<p>Does anyone know why I can't send arguments when executing a script?
I have installed python 3.8.1, Windows 10 x64. I have an environment variable (the folder where my scripts are). I can execute the scripts like this:</p>
<pre><code>nameScript.py
</code></pre>
<p>and it works but if I put</p>
<pre><code>nameScript.py 1
</code></pre>
<p>and in this script I use that '1', <code>sys.argv[1]</code> gives an error: <code>Index out of bounds</code>.</p>
<p>If I execute the script like:</p>
<pre><code>python D/path_to_script/nameScript.py 1
</code></pre>
<p>it works.</p>
|
<p>When you run the script as</p>
<pre><code>nameScript.py 1
</code></pre>
<p>Windows launches it through the <code>.py</code> file association. If that association's command template doesn't end with <code>%*</code> (e.g. <code>"...\python.exe" "%1" %*</code>), the extra arguments are silently dropped: <code>sys.argv</code> then contains only the script path, so <code>sys.argv[1]</code> raises an <code>IndexError</code>. Running it explicitly as <code>python nameScript.py 1</code> bypasses the association, which is why that form works.</p>
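<p>A defensive pattern is to check <code>len(sys.argv)</code> before indexing, which makes the dropped-argument case visible instead of crashing. The two lists below simulate <code>sys.argv</code> for the two launch styles:</p>

```python
def get_first_arg(argv):
    # return the first command-line argument, or None if it wasn't passed
    return argv[1] if len(argv) > 1 else None

print(get_first_arg(["nameScript.py"]))       # None (the argument was dropped)
print(get_first_arg(["nameScript.py", "1"]))  # 1
```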
|
python
| 1 |
1,909,024 | 64,286,893 |
Can't set weight for graph with networkx
|
<p>Can't set weight for edges in graph.
My dataset</p>
<p><a href="https://i.stack.imgur.com/DSlnE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DSlnE.png" alt="enter image description here" /></a></p>
<pre><code>import pandas as pd

dict_value = {'Источник': [10301.0, 10301.0, 10301.0, 10301.0, 10329.0, 10332.0, 10333.0, 10334.0, 174143.0, 1030408.0, 10306066.0],
              'Собеседник': [300.0, 315.0, 343.0, 344.0, 300.0, 300.0, 300.0, 300.0, 300.0, 300.0, 300.0],
              'Частота': [164975000, 164975000, 164437500, 164975000, 164975000, 164975000, 164975000, 164975000, 164975000, 164975000, 164975000],
              'БС LAC': [9, 9, 1, 9, 9, 9, 9, 9, 9, 9, 9],
              'Длительность': [20, 3, 2, 2, 3, 3, 2, 3, 3, 3, 3]}
session_graph = pd.DataFrame(dict_value)
</code></pre>
<p>My code:</p>
<pre><code>G = nx.MultiDiGraph()
for row in session_graph.itertuples():
if row[4]==1:
G.add_edge(row[1], row[2],label=row[3],color="green",weight=0.9)
if row[4]==9:
G.add_edge(row[1], row[2],label=row[3],color="red",weight=0.4)
p=nx.drawing.nx_pydot.to_pydot(G)
p.write_png('multi.png')
Image(filename='multi.png')
</code></pre>
<p><a href="https://i.stack.imgur.com/JIQlL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JIQlL.png" alt="enter image description here" /></a></p>
<p>The weight doesn't change! What am I doing wrong? Can you help me?</p>
|
<p>If you want to change edge thickness, add <code>penwidth</code> to your arguments</p>
<pre class="lang-py prettyprint-override"><code>G = nx.MultiDiGraph()
for row in session_graph.itertuples():
if row[4]==1:
G.add_edge(row[1], row[2],label=row[3],color="green",weight=0.9, penwidth = 5)
if row[4]==9:
G.add_edge(row[1], row[2],label=row[3],color="red",weight=0.4, penwidth = 1)
</code></pre>
<p>If you draw your graph in <code>dot</code> format you will see that the problem is in GraphViz: it ignores the <code>weight</code> argument but honors the <code>penwidth</code> parameter, so you need to pass that to the drawing library.<br>
See <a href="https://stackoverflow.com/a/2363990/5260876">Graphviz, changing the size of edge</a> question for details.</p>
<p><a href="https://i.stack.imgur.com/85Rg5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/85Rg5.png" alt="resulting graph" /></a></p>
|
python|graph|networkx
| 0 |
1,909,025 | 11,152,855 |
Python : big csv file import
|
<p>I'm currently unsuccessfully trying to import a big CSV dataset with Python. Basically, I've got a big CSV file made of stock quotations (one stock per column, with for each stock another column which contains the dividends). I'm using the csv module, but I can't get a np.array whose columns are the stock quotations. Python creates the np.array by rows and I would like it by columns. How can I do this?</p>
<p>thanks for you help!!</p>
|
<p>I would recommend using <a href="http://pandas.pydata.org/" rel="nofollow">Pandas</a> library. It also enables you to read big csv files by smaller chuncks. Here's an examle from the docs:</p>
<p>Data:</p>
<pre><code>year indiv zit xit
0 1977 A 1.2 0.60
1 1977 B 1.5 0.50
2 1977 C 1.7 0.80
3 1978 A 0.2 0.06
4 1978 B 0.7 0.20
5 1978 C 0.8 0.30
6 1978 D 0.9 0.50
</code></pre>
<p>Specify chunk size (you get an iterable):</p>
<pre><code>import pandas as pd

reader = pd.read_table('tmp.sv', sep='|', chunksize=4)
for chunk in reader:
    print(chunk)
</code></pre>
<p>Output:</p>
<pre><code>year indiv zit xit
0 1977 A 1.2 0.60
1 1977 B 1.5 0.50
2 1977 C 1.7 0.80
3 1978 A 0.2 0.06
year indiv zit xit
0 1978 B 0.7 0.2
1 1978 C 0.8 0.3
2 1978 D 0.9 0.5
</code></pre>
<p>NB! If you need to further manipulate your stock data, Pandas is the best way to go anyway.</p>
|
python|csv|time-series|financial|spyder
| 2 |
1,909,026 | 55,770,737 |
TypeError: 'Product' object is not subscriptable in Flask
|
<p>I am new to python and flask, I am learning to build Flask-rest-api. I am using SQLAlchemy as the db. I tried to post data using postman to the api and I get the TypeError: Object is not subscriptable. I am completely new to this and have no idea how to resolve this and what it means. </p>
<p>I went through some solutions for this in stackoverflow but I couldn't resolve my problem with those solutions. Sorry if I couldn't explain this properly.</p>
<p>This is my code:</p>
<pre><code>from flask import Flask, request, jsonify
from flask_sqlalchemy import SQLAlchemy
from flask_marshmallow import Marshmallow
import os
app = Flask(__name__)
basedir = os.path.abspath(os.path.dirname(__file__))
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///' +
os.path.join(basedir, 'db.sqlite')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)
ma = Marshmallow(app)
class Product(db.Model):
id = db.Column(db.Integer, primary_key=True, unique=True)
name = db.Column(db.String(100))
description = db.Column(db.String(200))
price = db.Column(db.Float)
qty = db.Column(db.Integer)
def __init__(self, name, description, price, qty):
self.name = name
self.description = description
self.price = price
self.qty = qty
class ProductSchema(ma.Schema):
class Meta:
fields = ('id', 'name', 'description', 'price', 'qty')
product_schema = ProductSchema(strict = True)
product_schema = ProductSchema(many=True, strict = True)
@app.route('/product', methods=['POST'])
def add_product():
name = request.json['name']
description = request.json['description']
price = request.json['price']
qty = request.json['qty']
new_product = Product(name, description, price, qty)
db.session.add(new_product)
db.session.commit()
return product_schema.jsonify(new_product)
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>The error that I get when I send is 'TypeError: 'Product' object is not subscriptable'
<a href="https://i.stack.imgur.com/y9l1A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y9l1A.png" alt="Error traceback"></a></p>
<pre><code> * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [20/Apr/2019 10:58:19] "POST /product HTTP/1.1" 400 -
Product
127.0.0.1 - - [20/Apr/2019 10:58:46] "POST /product HTTP/1.1" 500 -
Traceback (most recent call last):
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\app.py", line 1741, in handle_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\_compat.py", line 35, in reraise
raise value
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\_compat.py", line 35, in reraise
raise value
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask\app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "D:\workspace\backend\flask_sqlalchemy_rest\app.py", line 50, in
add_product
return product_schema.jsonify(new_product)
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\flask_marshmallow\schema.py", line 41, in jsonify
data = self.dump(obj, many=many).data
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\marshmallow\schema.py", line 508, in dump
self._update_fields(processed_obj, many=many)
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\marshmallow\schema.py", line 777, in _update_fields
ret = self.__filter_fields(field_names, obj, many=many)
File "C:\Users\Asus\.virtualenvs\flask_sqlalchemy_rest-UOz-c0A6\lib\site-
packages\marshmallow\schema.py", line 824, in __filter_fields
obj = obj[0]
TypeError: 'Product' object is not subscriptable
</code></pre>
<p>Can anyone help me in resolving this error? Thanks in advance.</p>
|
<p>The problem is that you have overwritten the value of <code>product_schema</code>: the second assignment replaces the single-object schema with one that expects a list of objects (<code>many=True</code>). If you give the second schema a different name, e.g. keep <code>product_schema = ProductSchema(strict=True)</code> for single objects and use <code>products_schema = ProductSchema(many=True, strict=True)</code> for lists, then your code should work.</p>
|
python|flask|flask-sqlalchemy|marshmallow
| 1 |
1,909,027 | 56,708,657 |
Combine paired rows after pandas groupby, give NaN value if ID didn't occur twice in df
|
<p>I have a single dataframe containing an ID column <code>id</code>, and I know that the ID will exist either exactly in one row ('mismatched') or two rows ('matched') in the dataframe.</p>
<ul>
<li>In order to select the mismatched rows and the pairs of matched rows I can use a <code>groupby</code> on the ID column.</li>
<li>Now for each group, I want to take some columns from the second (pair) row, rename them, and copy them to the first row. I can then discard all the second rows and return a single dataframe containing all the modified first rows (for each and every group).</li>
<li>Where there is no second row (mismatched) - it's fine to put NaN in its place.</li>
</ul>
<p>To illustrate this see table below <code>id=1</code> and <code>3</code> are a matched pair, but <code>id=2</code> is mismatched:</p>
<pre><code>entity id partner value
A 1 B 200
B 1 A 300
A 2 B 600
B 3 C 350
C 3 B 200
</code></pre>
<p>The resulting transformation should leave me with the following:</p>
<pre><code>entity id partner entity_value partner_value
A 1 B 200 300
A 2 B 600 NaN
B 3 C 350 200
</code></pre>
<p>What's baffling me is how to come up with a generic way of getting the matching <code>partner_value</code> from row 2, copied into row 1 after the groupby, in a way that also works when there is no matching id.</p>
|
<p>Solution (this was tricky):</p>
<pre><code>dfg = df.groupby('id', sort=False)

# Create 'entity','id','partner','entity_value' from the first row...
df2 = dfg[['entity','id','partner','value']].first().rename(columns={'value': 'entity_value'})

# Now insert 'partner_value' from those groups that have a second row...
df2['partner_value'] = pd.np.nan
df2['partner_value'] = dfg['value'].nth(n=1)
entity id partner entity_value partner_value
id
1 A 1 B 200 300.0
2 A 2 B 600 NaN
3 B 3 C 350 200.0
</code></pre>
<p>This was tricky to get working. The short answer is that although <code>pd.groupby(...).agg(...)</code> in principle allows you to specify a list of tuples of <code>(column, aggregate_function)</code>, and you <a href="https://stackoverflow.com/a/27076865/202229">could then chain those into a rename</a>, that won't work here since we're trying to do two separate aggregate operations both on <code>value</code> column, and rename both their results (you get <code>pandas.core.base.SpecificationError: Function names must be unique, found multiple named value</code>).</p>
<p>Other complications:</p>
<ul>
<li>We can't <em>directly</em> use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.nth.html" rel="nofollow noreferrer"><code>groupby.nth(n)</code></a> which sounds useful at first glance, except it's only on a DataFrame not a Series like <code>df['value']</code>, and also it silently drops groups which don't have an n'th element, not what we want. (But it does keep the index, so we can use it by first initializing the column as all-NaNs, then selectively inserting on that column, as above).</li>
<li>In any case the <code>pd.groupby.agg()</code> syntax won't even let you call <code>nth()</code> by just passing 'nth' as the agg_func name, since <code>nth()</code> is missing its <code>n</code> argument; you'd have to declare a lambda.</li>
<li>I tried defining the following function <code>second_else_nan</code> to use inside an <code>agg()</code> as above, but after much struggling I couldn't get this as this to work for multiple reasons, only one of which is you can't do two aggs on the same column:</li>
</ul>
<p>Code:</p>
<pre><code>def second_else_nan(v):
if v.size == 2:
return v[1]
else:
return pd.np.nan
</code></pre>
<p>(i.e. the equivalent on a list of the <code>dict.get(key, default)</code> builtin)</p>
|
python|python-3.x|pandas|pandas-groupby
| 3 |
1,909,028 | 70,012,451 |
TypeError: strptime() argument 1 must be str, not Timestamp
|
<p>I want to add a time interval of 1 day to a timestamp <code>end</code>, of format <code>0 2020-12-03 Name: Test Date, dtype: datetime64[ns]</code>.</p>
<p><code>df</code>:</p>
<pre><code> Date Id Value
0 2020-12-03 5 050
1 2020-04-07 12 051
2 2020-05-05 6 052
3 2020-05-19 6 059
</code></pre>
<p>I used:</p>
<pre><code>import datetime
...
end = df.loc[df.col== i, 'Date']
print(end)
start = (datetime.datetime.strptime(end,'%Y-%m-%d') - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
...
</code></pre>
<p>and it caught error:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-161-d9b441080f1a> in <module>
10 print(end)
---> 11 start = (datetime.datetime.strptime(end,'%Y-%m-%d %H:%M:%S') - datetime.timedelta(days=1)).strftime('%Y-%m-%d')
12
13
TypeError: strptime() argument 1 must be str, not Timestamp
</code></pre>
|
<p>In your program, <code>end</code> is a pandas <code>Timestamp</code>, not a <code>str</code>, but <code>strptime()</code> expects a string.</p>
<p>Try this:</p>
<p><code>end = str(end)</code></p>
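<p>Alternatively, since a <code>Timestamp</code> already behaves like a <code>datetime</code>, you can skip the <code>strptime</code> round-trip entirely and subtract the <code>timedelta</code> directly; shown here with a plain <code>datetime</code> standing in for the value from the frame:</p>

```python
from datetime import datetime, timedelta

end = datetime(2020, 12, 3)  # a pandas Timestamp supports the same subtraction
start = (end - timedelta(days=1)).strftime('%Y-%m-%d')
print(start)  # 2020-12-02
```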
|
python|pandas|datetime
| 0 |
1,909,029 | 17,764,201 |
AMPL vs. Python - Importing tables (multi-dimensional dictionaries?)
|
<p>I am an AMPL user trying to write a linear programming optimization model using Python (my first Python code). I am trying to find how to declare indexed parameters over compound sets. For example, in AMPL, I would say:</p>
<pre><code>Set A
Set B
Set C
param x{A, B, C}
param y{A, B, C}
param z{A, B, C}
</code></pre>
<p>The above sets and parameters can be easily read from a database via AMPL.</p>
<p>The table that I read from the database has six fields i.e. A, B, C, x, y, z. Three of them are primary keys (A, B, C) and the rest (x,y,z) are values indexed over the primary keys.</p>
<p>PYTHON PART:
I am using PYODBC module to connect with SQL Server. I tried "dict" but it can index over only one key.
I am not sure which python feature should I use to declare the first three fields as a compound set and x, y and z as values indexed over the compound set. </p>
<pre><code>import pyodbc
con = pyodbc.connect('Trusted_Connection=yes', driver = '{SQL Server Native Client 10.0}', server = 'Server', database='db')
cur = con.cursor()
cur.execute("execute dbo.SP @Param =%d" %Param)
result = cur.fetchall()
Comp_Key, x, y, z= dict((A, B, C, [x,y,z]) for A, B, C, x, y, z in result)
</code></pre>
<p>Of course it is not correct. I cannot think of a way to do this. </p>
<p>Please help me :)
Thanks in advance! </p>
|
<p>In place of this:</p>
<pre><code>Comp_Key, x, y, z= dict((A, B, C, [x,y,z]) for A, B, C, x, y, z in result)
</code></pre>
<p>You could do this (if A, B, C != B, A, C, i.e. order of A B C matters):</p>
<pre><code>final_result = dict(((A, B, C), [x, y, z]) for A, B, C, x, y, z in result)
</code></pre>
<p>OR</p>
<pre><code>final_result = {(A, B, C): [x, y, z] for A, B, C, x, y, z in result} # more readable than first one
</code></pre>
<p>Or you could do this (if A, B, C == B, A, C, i.e. order of A B C does not matter):</p>
<pre><code>final_result = dict((frozenset((A, B, C)), [x, y, z]) for A, B, C, x, y, z in result)
</code></pre>
<p>OR</p>
<pre><code>final_result = {frozenset((A, B, C)): [x, y, z] for A, B, C, x, y, z in result} # more readable than first one
</code></pre>
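Either way, lookups then use the compound key directly. A small sketch with made-up rows standing in for `cursor.fetchall()`:

```python
# Rows as they might come back from cursor.fetchall(): (A, B, C, x, y, z)
result = [(1, 'a', 'p', 0.5, 1.5, 2.5),
          (2, 'b', 'q', 3.0, 4.0, 5.0)]

final_result = {(A, B, C): [x, y, z] for A, B, C, x, y, z in result}

x_val, y_val, z_val = final_result[(1, 'a', 'p')]
print(x_val, y_val, z_val)  # 0.5 1.5 2.5
```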
|
python|set|mathematical-optimization|ampl|gurobi
| 2 |
1,909,030 | 68,926,758 |
Python class scope lost in nested recursion function
|
<p>When I attempted to write a recursive nested function within a python class, every time the recursive function completed and returned back to the first function my class property reverted back to the original state.</p>
<pre><code>def main():
    input = [
        [1,3,1],
        [1,5,1],
        [4,2,1]
    ]
    sln = Solution()
    sln.findPath(input)
    print("Output: " + str(sln.minPathSum))
    print(*sln.minPath, sep = "->")

class Solution():
    minPath = []
    minPathSum = None
    grid = []

    def findPath(self, grid):
        self.minPath = []
        self.minPathSum = None
        self.grid = grid
        self.recurse(0,0,[])

    def recurse(self, xCoord, yCoord, currentPath):
        if(len(self.grid) <= yCoord):
            return
        if(len(self.grid[yCoord]) <= xCoord):
            return
        currentValue = self.grid[yCoord][xCoord]
        currentPath.append(currentValue)
        if(len(self.grid) - 1 == yCoord and len(self.grid[yCoord]) - 1 == xCoord):
            currentPathSum = sum(currentPath)
            if(self.minPathSum == None or currentPathSum < self.minPathSum):
                self.minPathSum = currentPathSum
                self.minPath = currentPath
        else:
            #right
            self.recurse(xCoord + 1, yCoord, currentPath)
            #down
            self.recurse(xCoord, yCoord + 1, currentPath)
        currentPath.pop()
        return

if __name__ == "__main__":
    main()
</code></pre>
<p>Running this results in:</p>
<pre><code>Output: 7
</code></pre>
<p>Debugging the code within VSCode does indicate that <code>self.minPath</code> is getting set within the recurse function; however, it appears to be losing scope to the original class instance.</p>
<p>In addition I attempted to recreate the same nested situation with separate code and ended up with the following.</p>
<pre><code>def main():
    tst = ScopeTest()
    tst.run()
    print(*tst.tstArray)
    print(tst.tstNumber)

class ScopeTest():
    tstArray = []
    tstNumber = 0

    def run(self):
        self.otherFunc()

    def otherFunc(self):
        self.tstArray = [1,2,3,4,5]
        self.tstNumber = 7

if __name__ == "__main__":
    main()
</code></pre>
<p>The above did return the expected outcome which leads me to think this has something to do with the recursion.</p>
<p>Just a heads up, I'm fairly new to python so I may be making a rookie mistake, but I can't seem to figure out why this is happening.</p>
|
<p>Your <code>recurse()</code> method builds up the <code>currentPath</code> list, and when a path is found to be the best so far you execute: <code>self.minPath = currentPath</code>. Unfortunately, that just binds <code>self.minPath</code> to the very same list object as <code>currentPath</code>, which is later mutated by the <code>pop()</code> calls as the recursion unwinds.</p>
<p>Your line should be: <code>self.minPath = currentPath[:]</code></p>
<p>You then see some contents in <code>self.minPath</code></p>
<p>Mandatory link to <a href="https://nedbatchelder.com/text/names.html" rel="nofollow noreferrer">Ned Batchelder</a></p>
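A minimal illustration of the difference between binding another name to the same list and taking a copy:

```python
current_path = [1, 2, 3]
alias = current_path        # just another name for the same list object
snapshot = current_path[:]  # a new, independent list with the same contents

current_path.pop()          # mutates the shared object
print(alias)     # [1, 2]
print(snapshot)  # [1, 2, 3]
```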
<p>Also you can remove these lines:</p>
<pre><code>minPath = []
minPathSum = None
grid = []
</code></pre>
<p>from just below <code>class Solution():</code></p>
|
python|recursion|scope
| 1 |
1,909,031 | 68,234,071 |
How can I append another dataframe to a dataframe of the same structure and eliminate duplicates?
|
<p>I have a dataframe with words, an id and a code for the language. I have run this dataframe through a spellchecking algorithm - which returned a dataframe with only the words that needed a correction - and now I need to add the corrected words back to the original dataframe but this time the corrected word should replace the wrong one (I'm guessing I need to specify this with the id somehow).</p>
<p>Does anyone know how to solve this?</p>
<p>I have just appended the new dataframe back to the original for now and there's a number at the end for some reason...</p>
|
<p>Without a sample, it's hard to answer.</p>
<pre><code>pd.concat([original_df, fixed_df]).drop_duplicates(['id', 'code'], keep='last')
</code></pre>
<pre><code>>>> df1
words id code
0 helo 1 A
1 world 2 B
>>> df2
words id code
0 hello 1 A
>>> pd.concat([df1, df2]).drop_duplicates(['id', 'code'], keep='last')
words id code
1 world 2 B
0 hello 1 A
</code></pre>
|
python|pandas|dataframe
| 0 |
1,909,032 | 68,247,240 |
Tradingview pinescript's RMA (Moving average used in RSI. It is the exponentially weighted moving average with alpha = 1 / length) in python, pandas
|
<p>I've been trying to get the same results as TradingView's RMA method, but I don't know how to accomplish it.</p>
<p>In their page RMA is computed as:</p>
<pre><code>plot(rma(close, 15))
//the same on pine
pine_rma(src, length) =>
alpha = 1/length
sum = 0.0
sum := na(sum[1]) ? sma(src, length) : alpha * src + (1 - alpha) * nz(sum[1])
plot(pine_rma(close, 15))
</code></pre>
<p>To test, I used an input and their result: this is the input column and the same input after applying TradingView's <code>rma(input,14)</code>:</p>
<pre><code>data = [[588.0,519.9035093599585],
[361.98999999999984,508.62397297710436],
[412.52000000000055,501.7594034787397],
[197.60000000000042,480.0337318016869],
[208.71999999999932,460.6541795301378],
[380.1100000000006,454.90102384941366],
[537.6599999999999,460.8123792887413],
[323.5600000000013,451.0086379109742],
[431.78000000000077,449.6351637744761],
[299.6299999999992,438.9205092191563],
[225.1900000000005,423.65404427493087],
[292.42000000000013,414.28018396957873],
[357.64999999999964,410.23517082889435],
[692.5100000000003,430.3976586268306],
[219.70999999999916,415.34854015348543],
[400.32999999999987,414.2757872853794],
[604.3099999999995,427.849659622138],
[204.29000000000087,411.8811125062711],
[176.26000000000022,395.0510330415374],
[204.1800000000003,381.41738782428473],
[324.0,377.3161458368358],
[231.67000000000007,366.91284970563316],
[184.21000000000092,353.8626461552309],
[483.0,363.08674285842864],
[290.6399999999994,357.911975511398],
[107.10000000000036,339.996834403441],
[179.0,328.49706051748086],
[182.36000000000058,318.05869905194663],
[275.0,314.98307769109323],
[135.70000000000073,302.17714357030087],
[419.59000000000015,310.56377617242225],
[275.6399999999994,308.06922073153487],
[440.48999999999984,317.5278478221396],
[224.0,310.8472872634153],
[548.0100000000001,327.78748103031415],
[257.0,322.73123238529183],
[267.97999999999956,318.82043007205664],
[366.51000000000016,322.2268279240526],
[341.14999999999964,323.57848307233456],
[147.4200000000001,310.9957342814536],
[158.78000000000063,300.12318183277836],
[416.03000000000077,308.4022402732943],
[360.78999999999917,312.14422311091613],
[1330.7299999999996,384.90035003156487],
[506.92000000000013,393.61603931502464],
[307.6100000000006,387.4727507925229],
[296.7299999999996,380.991125735914],
[462.0,386.7774738976345],
[473.8099999999995,392.9940829049463],
[769.4200000000002,419.88164841173585],
[971.4799999999997,459.2815306680404],
[722.1399999999994,478.0571356203232],
[554.9799999999996,483.5516259331572],
[688.5,498.19079550936027],
[292.0,483.462881544406],
[634.9500000000007,494.2833900055199]]
# Create the pandas DataFrame
dfRMA = pd.DataFrame(data, columns = ['input', 'wantedrma'])
dfRMA['try1'] = dfRMA['input'].ewm( alpha=1/14, adjust=False).mean()
dfRMA['try2'] = numpy_ewma_vectorized(dfRMA['input'],14)
dfRMA
</code></pre>
<p><code>ewm</code> does not give me the same results, so I searched and found the following, but it just replicates an ordinary ewma:</p>
<pre><code>def numpy_ewma_vectorized(data, window):
    alpha = 1/window
    alpha_rev = 1-alpha
    scale = 1/alpha_rev
    n = data.shape[0]
    r = np.arange(n)
    scale_arr = scale**r
    offset = data[0]*alpha_rev**(r+1)
    pw0 = alpha*alpha_rev**(n-1)
    mult = data*pw0*scale_arr
    cumsums = mult.cumsum()
    out = offset + cumsums*scale_arr[::-1]
    return out
</code></pre>
<p>I am getting these results</p>
<p><a href="https://i.stack.imgur.com/5qrY4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5qrY4.png" alt="enter image description here" /></a></p>
<p>Do you know how to translate Pine Script's <strong>rma method into pandas</strong>?</p>
<p>I realized that pandas <code>ewm</code> seems to converge: the last rows get closer and closer to the expected value. Is this correct?</p>
<p><a href="https://i.stack.imgur.com/78dtY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/78dtY.png" alt="enter image description here" /></a></p>
<p>...
<a href="https://i.stack.imgur.com/me6pl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/me6pl.png" alt="enter image description here" /></a></p>
|
<pre class="lang-js prettyprint-override"><code>const cloneArray = (input) => [...input]
const pluck = (input, key) => input.map(element => element[key])
const pineSma = (source, length) => {
let sum = 0.0
for (let i = 0; i < length; ++i) {
sum += source[i] / length
}
return sum
}
const pineRma = (src, length, last) => {
const alpha = 1.0 / length
return alpha * src[0] + (1.0 - alpha) * last
}
const calculatePineRma = (candles, sourceKey, length) => {
const results = []
for (let i = 0; i <= candles.length - length; ++i) {
const sourceCandles = cloneArray(candles.slice(i, i + length)).reverse()
const closes = pluck(sourceCandles, sourceKey)
if (i === 0) {
results.push(pineSma(closes, length))
continue
}
results.push(pineRma(closes, length, results[results.length - 1]))
}
return results
}
</code></pre>
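For a Python/NumPy route, a direct port of Pine's seed-then-recurse definition might look like the sketch below. This is an assumption-laden sketch, not TradingView's code: it seeds with the SMA of the first `length` bars, so it only reproduces TradingView's plotted values when fed the same history from the first bar onward.

```python
import numpy as np

def pine_rma(src, length):
    """Port of Pine's rma(): SMA seed, then alpha*src + (1-alpha)*prev."""
    alpha = 1.0 / length
    src = np.asarray(src, dtype=float)
    out = np.full(len(src), np.nan)
    if len(src) < length:
        return out
    # Seed with the SMA, like Pine's na(sum[1]) branch
    out[length - 1] = src[:length].mean()
    for i in range(length, len(src)):
        out[i] = alpha * src[i] + (1 - alpha) * out[i - 1]
    return out

print(pine_rma([1, 2, 3, 4], 2))
```

Note that pandas' `ewm(alpha=1/length, adjust=False)` differs from this only in the seed (it starts from the first value instead of an SMA), which is why its output converges toward these values as more bars accumulate.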
|
python|pandas|algorithm|pine-script|moving-average
| 0 |
1,909,033 | 59,087,807 |
What is Python's Equivalent for Excel's "Value Of" Solver?
|
<p>How do I run an optimization for a particular value in Python? I'm looking for the equivalent for Excel's "Solver" tool wherein one can set the objective function as a "value of x", such that some parameters <code>P</code> are <em>changed</em> subject to <code>N</code> constraints, to get a value of <code>x</code>.</p>
<p>I'm aware of SciPy's optimize framework, but have only really seen applications for minimizing or maximizing x as opposed to solving for a particular value of x.</p>
<h2>New example</h2>
<p>How do I solve for the return on a portfolio (<code>x</code>) such that the weights in any number of stocks <code>i ... K</code> is between 0 and 1 inclusive, and the sum of all weights is equal to 1 (that is, <code>sum_weights_{i=1}^K == 1</code>).</p>
<p>I have also found a workable solution using Matrix algebra on R demonstrated in this book (<a href="https://faculty.washington.edu/ezivot/econ424/portfolioTheoryMatrix.pdf" rel="nofollow noreferrer">https://faculty.washington.edu/ezivot/econ424/portfolioTheoryMatrix.pdf</a>, page 13). However, have been unable to replicate this in Python.</p>
<h2>Previous example (please ignore)</h2>
<p>For example, how do I solve for the number of work hours required (<code>P</code>) subject to a minimum and maximum hours (<code>N_1</code>, <code>N_2</code>) so that the profit earned is $10,000 (<code>x = 10_000</code>)?</p>
|
<p>Maybe like this:</p>
<pre><code>In [2]: P = sympy.Symbol('P', real=True, negative=False)
In [3]: sympy.solve([sympy.Eq(P * 560, 10_000), 10 < P, P < 100], P)
Out[3]: Eq(P, 125/7)
</code></pre>
<p>Here I'm assuming the wages equal 560.</p>
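In SciPy terms, the closest analogue of Excel's "value of" target for a scalar is a root finder such as `scipy.optimize.brentq`, applied to `f(P) - target`. A dependency-free sketch of the same idea with plain bisection (assuming `f` is monotonic on the bracket):

```python
def solve_for_value(f, target, lo, hi, tol=1e-9):
    """Find x in [lo, hi] with f(x) == target via bisection."""
    g = lambda x: f(x) - target
    if g(lo) * g(hi) > 0:
        raise ValueError("target is not bracketed by [lo, hi]")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# Work-hours example: solve 560 * P == 10_000 with 10 < P < 100
hours = solve_for_value(lambda p: 560 * p, 10_000, 10, 100)
print(round(hours, 4))  # 17.8571
```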
|
python|scipy|sympy|solver
| 2 |
1,909,034 | 63,088,121 |
Configuring databricks-connect using python OS module
|
<p>I want to run <code>databricks-connect configure</code> through Python's <code>os</code> module after installing databricks-connect via <code>os.system("pip install databricks-connect==6.5")</code>.</p>
<p>Once databricks-connect is successfully install we need to configure it by passing the following values:</p>
<pre><code>host= "https://<location>.azuredatabricks.net",
port= "8787",
token = "<Token>",
cluster_id = "<ClusterId>",
org_id = "<OrgId>"
</code></pre>
<p>If we type <code>databricks-connect configure</code> in the terminal, it will ask for the above parameters one by one, as shown in the figure:</p>
<p><a href="https://i.stack.imgur.com/4KjmK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4KjmK.png" alt="enter image description here" /></a></p>
<p>Now I want same thing to be run using python os.system</p>
<pre><code>os.system("pip install databricks-connect")
os.system("databricks-connect configure")
</code></pre>
<p>After this, how do I pass the host, port, token, etc.?<br />
After every value we also have to press <code>enter</code>.</p>
<p>When I run this in the terminal it works fine:</p>
<pre><code>echo -e 'https://adb-661130381327abc.11.azuredatabricks.net\nxxxxx\n0529-yyyy-twins608\n6611303813275431\n15001' | databricks-connect configure
</code></pre>
<p>but it gives me an error when I try to run it via Python's <code>os</code> module:</p>
<pre><code>os.system("echo -e 'https://adb-661130381327abc.11.azuredatabricks.net\nxxxxx\n0529-yyyy-twins608\n6611303813275431\n15001' | databricks-connect configure")
</code></pre>
<p>Error:
<code>"New host value must start with https://, e.g., https://demo.cloud.databricks.com"</code></p>
|
<p>You can just pipe the data as stdin to the program.</p>
<pre class="lang-py prettyprint-override"><code>import os
host= "https://<location>.azuredatabricks.net"
port= "8787"
token = "<Token>"
cluster_id = "<ClusterId>"
org_id = "<OrgId>"
stdin_list = [host, port, token, cluster_id, org_id]
stdin_string = '\n'.join(stdin_list)
command = "echo '{}' | {}".format(stdin_string, "databricks-connect configure")
os.system(command)
</code></pre>
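If shell quoting around the token ever becomes a problem, `subprocess.run` can feed stdin directly without going through a shell at all. A sketch (the databricks-connect call is left commented out since it requires the tool on PATH; a stand-in program demonstrates the stdin plumbing):

```python
import subprocess
import sys

def run_with_stdin(cmd, answers):
    # Each answer on its own line acts like typing the value and pressing Enter
    payload = "\n".join(answers) + "\n"
    return subprocess.run(cmd, input=payload, text=True,
                          capture_output=True, check=True)

# Hypothetical call for databricks-connect (needs the tool on PATH):
# run_with_stdin(["databricks-connect", "configure"],
#                [host, port, token, cluster_id, org_id])

# Demonstration with a stand-in program that echoes its stdin:
echo = [sys.executable, "-c", "import sys; sys.stdout.write(sys.stdin.read())"]
print(run_with_stdin(echo, ["host", "8787", "token"]).stdout)
```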
|
python|bash|subprocess|os.system|databricks-connect
| 2 |
1,909,035 | 35,449,709 |
Django : URL is not serving the correct page
|
<p>I'm learning Django and in my app I have a <strong>urls.py</strong> which has <strong>2 url patterns</strong> in it. <strong>No matter</strong> which URL I use in the browser, it <strong>serves the first url</strong> only.</p>
<p><strong>My urls</strong></p>
<pre><code>urlpatterns = [
url(r'^', views.index, name='index'),
url(r'^contact/', views.contact, name='contact')
]
</code></pre>
<p><strong>my views.py</strong></p>
<pre><code>def index(request):
    title = 'welcome'
    form = SignupForm(request.POST or None)
    context={'title':title,'form':form}
    if form.is_valid():
        instance=form.save(commit=False)
        email= form.cleaned_data['email']
        email_save,created=Signup.objects.get_or_create(email=email)
    return render(request,'index.html',context)

def contact(request):
    contact_info=ContactForm(request.POST or None)
    if contact_info.is_valid():
        contact_info.save(commit=False)
        print contact_info.cleaned_data['name']
    return render(request,'contact.html',{'contact_info':contact_info})
</code></pre>
<p><strong>My form</strong></p>
<pre><code>class SignupForm(forms.ModelForm):
    class Meta:
        model = Signup
        fields = ['fullname','email']

class ContactForm(forms.Form):
    name=forms.CharField(max_length=128, required=True,)
    email=forms.EmailField(required=True)
    phone=forms.IntegerField(max_value=10)
    message=forms.TextInput()
</code></pre>
<p>as per my <strong><em>contact url</em></strong> it should give me <strong><em>contact.html</em></strong>, but it renders only <strong><em>index.html</em></strong>
<a href="https://i.stack.imgur.com/dwFif.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dwFif.png" alt="enter image description here"></a></p>
<p>What could be the issue?</p>
|
<p>You need to change the first url pattern to</p>
<pre><code>url(r'^$', views.index, name='index')
</code></pre>
<p>because the unanchored pattern <code>^</code> matches the beginning of <em>every</em> URL, including <code>/contact/</code>, so the first entry always wins and you are sent to the index view. With the change above, the <code>index</code> page is only served when the remaining URL string is empty.</p>
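The underlying reason is plain regex matching, which is easy to check with the `re` module:

```python
import re

# The unanchored pattern matches at the start of every path,
# so the first urlpattern always wins:
assert re.search(r'^', 'contact/') is not None
# The anchored pattern only matches an empty path:
assert re.search(r'^$', 'contact/') is None
assert re.search(r'^$', '') is not None
print("anchoring works as expected")
```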
|
python|django|django-views|django-urls
| 3 |
1,909,036 | 58,768,107 |
WHERE IN psycopg2 clause not formatting
|
<p>I have been having trouble using <code>WHERE $VARIABLE IN</code> clauses in psycopg2:</p>
<pre class="lang-py prettyprint-override"><code>from app.commons.database import conn
from psycopg2 import sql
from psycopg2.extras import DictCursor
query = '''
SELECT
*
FROM
{}.{}
WHERE
{} in %s
'''.format(
sql.Identifier('information_schema'),
sql.Identifier('tables'),
sql.Identifier('table_schema')
)
data = (
'information_schema',
'pg_catalog'
)
with conn.cursor(cursor_factory=DictCursor) as cursor:
cursor.execute(query, data)
print(cursor.fetchall())
</code></pre>
<p>raises</p>
<blockquote>
<p>TypeError: not all arguments converted during string formatting</p>
</blockquote>
<p>I've read the seemingly hundreds of posts on this same topic, and the overwhelming answer has been: "you need to use tuples when submitting data as the second argument to <code>cursor.execute</code>". I've been doing that and still can't seem to determine where the gap is.</p>
|
<p>Check out the psycopg2 documentation on <a href="http://initd.org/psycopg/docs/usage.html#lists-adaptation" rel="nofollow noreferrer">Lists adaptation</a></p>
<p>You are getting that error because psycopg2 is trying to substitute two parameters, but the query only has one placeholder. Also note that <code>sql.Identifier</code> objects must be composed via <code>sql.SQL(...).format(...)</code>; passing them to plain <code>str.format</code> just interpolates their repr into the query string. Try changing to this:</p>
<pre><code>from app.commons.database import conn
from psycopg2 import sql
from psycopg2.extras import DictCursor

query = sql.SQL('''
    SELECT
        *
    FROM
        {}.{}
    WHERE
        {} = ANY(%s)
''').format(
    sql.Identifier('information_schema'),
    sql.Identifier('tables'),
    sql.Identifier('table_schema')
)
data = [
'information_schema',
'pg_catalog'
] # A list now, instead of a tuple
with conn.cursor(cursor_factory=DictCursor) as cursor:
cursor.execute(query, (data, )) # A tuple, containing your list
print(cursor.fetchall())
</code></pre>
|
python-3.x|postgresql|psycopg2
| 1 |
1,909,037 | 73,246,433 |
how to fix 'Column not found: score'?
|
<p>I have used the statement <code>rename(columns={"user_id": "score"}, inplace=True)</code> to rename <code>user_id</code> to <code>score</code>, but I get <code>KeyError: 'Column not found: score'</code> and I do not know how to fix it. I used the code from <a href="https://www.geeksforgeeks.org/building-recommendation-engines-using-pandas/?ref=rp" rel="nofollow noreferrer">https://www.geeksforgeeks.org/building-recommendation-engines-using-pandas/?ref=rp</a>. Why do so many websites give wrong code examples?
A small example that works:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
df.rename(columns={"A": "arr", "B": "c"},inplace=True)
print(df)
</code></pre>
<p>It works .</p>
<pre><code> import pandas as pd
# Get the column names
col_names = ['user_id', 'item_id', 'rating', 'timestamp']
# Load the dataset
path = 'https://media.geeksforgeeks.org/wp-content/uploads/file.tsv'
ratings = pd.read_csv(path, sep='\t', names=col_names)
# Check the head of the data
print(ratings.head())
# Check out all the movies and their respective IDs
movies = pd.read_csv(
'https://media.geeksforgeeks.org/wp-content/uploads/Movie_Id_Titles.csv')
print(movies.head())
# We merge the data
movies_merge = pd.merge(ratings, movies, on='item_id')
movies_merge.head()
pop_movies = movies_merge.groupby("title")
pop_movies["user_id"].count().sort_values(
ascending=False).reset_index().rename(
columns={"user_id": "score"},inplace=True)
pop_movies['Rank'] = pop_movies['score'].rank(
ascending=0, method='first')
pop_movies
</code></pre>
|
<p>Note that <code>movies_merge.groupby("title")</code> does not return a df. Rather it returns a groupby object (see <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>df.groupby</code></a>):</p>
<pre><code>pop_movies = movies_merge.groupby("title")
print(type(pop_movies))
<class 'pandas.core.groupby.generic.DataFrameGroupBy'>
</code></pre>
<p>Hence, the calculation you perform on this object produces a <em>new</em> df, which you first need to assign to a variable for the <code>.rename(columns={"user_id": "score"}, inplace=True)</code> operation to make sense:</p>
<pre><code>pop_movies = pop_movies["user_id"].count().sort_values(
ascending=False).reset_index()
print(type(pop_movies))
<class 'pandas.core.frame.DataFrame'>
</code></pre>
<p>Now, the rest will work:</p>
<pre><code>pop_movies.rename(
columns={"user_id": "score"},inplace=True)
pop_movies['Rank'] = pop_movies['score'].rank(
ascending=0, method='first')
print(pop_movies.head())
title score Rank
0 Star Wars (1977) 584 1.0
1 Contact (1997) 509 2.0
2 Fargo (1996) 508 3.0
3 Return of the Jedi (1983) 507 4.0
4 Liar Liar (1997) 485 5.0
</code></pre>
|
python
| 2 |
1,909,038 | 31,285,469 |
using reserved words in column names in pandas
|
<p>Using reserved names as column names in pandas does not provide an error or warning message. The code just runs through without doing anything. Is there a way to turn on some kind of warning to prevent use of reserved words as column names? </p>
<p>MWE:</p>
<pre><code>import pandas as pd
index_names = ['1','2','3']
col_names = ['median','A']
df = pd.DataFrame(index = index_names, columns = col_names)
df.fillna(0, inplace = True)
df.ix[1].A = 12
df.ix[1].median = 12
print df
</code></pre>
<p>Output:</p>
<pre><code> median A
1 0 0
2 0 12
3 0 0
</code></pre>
|
<p>In Python, unlike many other languages, there really isn't a concept of variables. Python has names that point to objects.</p>
<p>Since Python is not strict typed (unlike Java or C++ for example), you have the flexibility to assign names to other objects, even names that point to functions (since functions are, like everything else, an object).</p>
<p>Although this is extremely flexible (it allows to easily overwrite functionality of objects by overwriting the names of functions), it does mean that Python will not warn you if you try to do something that impacts a <a href="https://docs.python.org/2/library/functions.html" rel="nofollow">built-in name</a>. This is called <em>shadowing</em>.</p>
<p>It is just one of the tradeoffs of having a flexible type system, and something Python programmers need to be aware of.</p>
<p>Here is a canonical example:</p>
<pre><code>>>> type(id)
<type 'builtin_function_or_method'>
>>> id(4)
10677280
>>> id = 5
>>> type(id)
<type 'int'>
>>> id(5)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
</code></pre>
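The same mechanics explain the pandas behaviour in the question: attribute access on a DataFrame resolves class attributes such as the `median` method before column labels, so label-based indexing is the reliable way to reach such a column. A small sketch:

```python
import pandas as pd

df = pd.DataFrame({'median': [0, 0], 'A': [0, 0]})

print(callable(df.median))   # True: the method shadows the column
df.loc[0, 'median'] = 12     # label indexing always reaches the column
print(df.loc[0, 'median'])   # 12
```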
|
python|pandas
| 1 |
1,909,039 | 15,637,784 |
Python: New list grouping repeated elements from existing list
|
<p>I have one list of the form:</p>
<pre><code>>>> my_list = ['BLA1', 'BLA2', 'BLA3', 'ELE1', 'ELE2', 'ELE3', 'PRI1', 'PRI2', 'NEA1', 'NEA2', 'MAU1', 'MAU2', 'MAU3']
</code></pre>
<p>and I want to create a new list, grouping the repeated elements into lists inside my new list, so at the end I will have:</p>
<pre><code>>>> new_list = [['BLA1', 'BLA2', 'BLA3'], ['ELE1', 'ELE2', 'ELE3'], ['PRI1', 'PRI2'], ['NEA1', 'NEA2'], ['MAU1', 'MAU2', 'MAU3']]
</code></pre>
|
<p>Use <a href="http://docs.python.org/3/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a>:</p>
<pre><code>import itertools
[list(group) for key, group in itertools.groupby(my_list, key=lambda v: v[:3])]
</code></pre>
<p>The <code>key</code> argument is needed here to extract just the part of the value you wanted to group <em>on</em>; the first 3 characters.</p>
<p>Result:</p>
<pre><code>>>> my_list = ['BLA1', 'BLA2', 'BLA3', 'ELE1', 'ELE2', 'ELE3', 'PRI1', 'PRI2', 'NEA1', 'NEA2', 'MAU1', 'MAU2', 'MAU3']
>>> [list(group) for key, group in itertools.groupby(my_list, key=lambda v: v[:3])]
[['BLA1', 'BLA2', 'BLA3'], ['ELE1', 'ELE2', 'ELE3'], ['PRI1', 'PRI2'], ['NEA1', 'NEA2'], ['MAU1', 'MAU2', 'MAU3']]
</code></pre>
<p><code>groupby</code> will combine successive keys that are equal into 1 group. If you have disjoint groups (so same value, but with other values in between) it'll create separate groups for those:</p>
<pre><code>>>> my_list = ['a1', 'a2', 'b1', 'b2', 'a3', 'a4']
>>> [list(group) for key, group in itertools.groupby(my_list)]
[['a1', 'a2'], ['b1', 'b2'], ['a3', 'a4']]
</code></pre>
<p>If that is not what you want you will have to sort <code>my_list</code> first.</p>
|
python|list|grouping
| 6 |
1,909,040 | 59,845,781 |
Selenium trying to iterate over a table, but gets stuck on the first row
|
<p>I'm having a bit of a headscratcher here, where I'm working on a table with Python 3 and selenium. I am trying to extract some data from a table (<code>tblGuid</code>), and get some info from a few columns. </p>
<p>While the data is presumably retrieved correctly (the <code>len(rows)</code> prints the expected amount of rows), the iterator seems to get stuck on the first element, only printing the same <code>socket</code> repeatedly, with the amount of prints matching <code>len(rows)</code></p>
<pre><code>vlan = "vlan14"
time.sleep(3)
# Enter filter for vlan
print("Filtered by vlan: " + vlan)
browser.find_element_by_xpath("/html/body/div[1]/div[4]/div[3]/div[4]/div/div[2]/div/div[1]/div[3]/div/table/tfoot/tr/th[13]/input").send_keys(vlan)
# Sort by socket
browser.find_element_by_xpath("/html/body/div[1]/div[4]/div[3]/div[4]/div/div[2]/div/div[1]/div[1]/div/table/thead/tr/th[14]").click()
time.sleep(2)
table = browser.find_element_by_id('tblGuid')
rows = table.find_elements_by_xpath(".//tr")
time.sleep(2)
print("Len: ", len(rows))
for row in rows:
    socket = row.find_element_by_xpath('//td[10]').text
    print("Socket: ", socket)
    # Other stuff of the same nature as the above two lines goes here. Get a different column and assign it to a variable.
browser.quit()
</code></pre>
<p>I am running this code with firefox and not turning on headless mode, to confirm that all clicks, sorts, and filters are applied as intended. The browser output looks as expected, and the data is all there, with socket being a number that varies between 1 and 52 over ~50 rows. It seems to me that the <code>for</code> loop gets stuck on the first element of <code>rows</code>.</p>
<p>I have added a lot of (probably redundant) <code>time.sleep()</code> calls to ensure that the page is loaded properly, and so that I can see the page being updated as the script progresses.</p>
<p>It is probably worth mentioning that the page I am scraping does not contain the table data in HTML, as it is populated by javascript working on a database. At first I thought this was the problem, but the fact that the data being printed as socket matches the first line of the table (as does any other columns) tells me that the data is being retrieved correctly, but I fail to iterate over it.</p>
<p><strong>EDIT - A cleaned up version of the HTML</strong></p>
<pre><code><table id="tblGuid" class="table table-striped table-hover table-condensed detailedTable table-bordered dataTable" style="width: 99.9%;" role="grid" aria-describedby="tblGuid_info">
<tbody>
<tr role="row" class="odd">
<td><button class="tableButton regguid" data-guid="0046ca">Reg.</button></td>
<td>0046ca</td>
<td>0110F17754</td>
<td>A18122</td>
<td><a href="detail?serial=37530" target="_blank">37530</a></td>
<td>05929a</td>
<td>3.0.0</td>
<td>19-12-21 19:56</td>
<td>20-01-19 19:53</td>
<td>20-01-19 19:53</td>
<td>20526661632</td>
<td>1</td>
<td>vlan14</td>
<td class="sorting_1">1</td>
<td>0</td>
<td><a data-node-error="0" data-node-guid="0046ca" href="#"> 0</a></td>
<td><a href="qc?rclId=1279" target="_blank">145811</a></td>
<td>5554</td>
<td>152263</td>
<td>Done</td>
</tr>
<tr role="row" class="even">
<td><button class="tableButton regguid" data-guid="004aa4">Reg.</button></td>
<td>004aa4</td>
<td>0110F17D8D</td>
<td>A19108</td>
<td><a href="detail?serial=37740" target="_blank">37740</a></td>
<td>05936c</td>
<td>3.0.0</td>
<td>19-12-21 20:15</td>
<td>20-01-19 19:54</td>
<td>20-01-19 19:54</td>
<td>20517699584</td>
<td>1</td>
<td>vlan14</td>
<td class="sorting_1">2</td>
<td>0</td>
<td><a data-node-error="0" data-node-guid="004aa4" href="#"> 0</a></td>
<td><a href="qc?rclId=1277" target="_blank">147011</a></td>
<td>5548</td>
<td>152311</td>
<td>Done</td>
</tr>
</tbody>
</table>
</code></pre>
<p>Notes on the above HTML: </p>
<ul>
<li>Around 40 table rows removed for readability.</li>
<li>Table header and footer has been removed.</li>
<li>Some data in the cells have been altered for the purpose of this post. The structure remains the same.</li>
<li>this is how it appears under "inspect element" in firefox.</li>
<li>The xpath referenced in the python code is based on "copy -> xpath" under inspect element.</li>
</ul>
|
<p>Without the table html this was my best guess; with it, the cause is clear: an XPath that starts with <code>//</code> searches from the document root even when called on a row element, so every row returns the first matching cell in the document. Make it relative by prefixing a dot: <code>find_element_by_xpath('.//td[10]').text</code></p>
<pre><code>for row in rows:
    columns = row.find_elements_by_xpath('.//td')
    for column in range(len(columns)):
        print("column::{}:".format(column), columns[column].text)
    #testsocket = columns[9].text
    socket = row.find_element_by_xpath('.//td[10]').text
    print("Socket: ", socket)
    #print("TestSocket: ", testsocket)
</code></pre>
|
python-3.x|selenium|html-table
| 2 |
1,909,041 | 59,790,085 |
Dropping Nulls and Slicing from Pivoted Table in Pandas
|
<p>I'm having trouble slicing my pivoted data frame so that I get results above a certain threshold. I'm trying to filter out results that fall below a minimum value. My data frame looks like so:</p>
<pre><code> Qty
Index Store_Nbr 201712 201801 201802 201803 201804 201805 201806 201807 201808
0 1 356 275 293 256 313 421 493 291 385
1 2 147 316 343 416 361 483 438 136 461
2 3 266 370 162 346 451 414 296 478 295
3 4 322 179 353 241 370 247 423 391 194
4 5 249 389 480 450 102 482 137 251 153
... ... ... ... ... ... ... ... ... ... ...
30 30 0 0 0 0 0 0 0 0 0
31 31 0 0 0 0 0 0 0 0 0
32 32 0 0 0 0 0 0 0 0 0
33 33 392 311 151 488 135 239 212 104 122
34 34 0 0 0 0 0 0 0 0 -1
</code></pre>
<p>After using <code>godzilla = godzilla[godzilla['Qty'] > 150]</code> I get the below data frame where it has converted all of the zero to nulls and hasn't filtered out anything. </p>
<pre><code> Qty
Index Store_Nbr 201712 201801 201802 201803 201804 201805 201806 201807 201808
0 NaN NaN 275 293 256 313 421 493 291 385
1 NaN 147 316 343 416 361 483 438 136 461
2 NaN 266 370 162 346 451 414 296 478 295
3 NaN NaN 179 353 241 370 247 423 391 194
4 NaN 389 480 450 102 482 137 251 153 153
... ... ... ... ... ... ... ... ... ... ...
30 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
31 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
32 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
33 NaN NaN 311 151 488 135 239 212 104 122
34 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
</code></pre>
<p>I've tried doing <code>godzilla.dropna(axis = 0, inplace = True, how = 'any')</code> which returns an empty dataframe, and <code>godzilla = godzilla.dropna( subset = godzilla['Qty'])</code> which returns a KeyError: 'Qty'. I'm baffled that it converted zeroes into nulls and why the slice isn't working. Any words of wisdom when trying to filter/slice pivoted data?</p>
<p>Note** That I have more than Qty being pivoted in the data frame.</p>
|
<p>First, replace every value lower than 150 with None.<br>
Then filter out rows with at least one None value, as described in <a href="https://stackoverflow.com/questions/44548721/">this Answer</a>.</p>
<p>This should do the trick:</p>
<pre><code># replace every value of the DataFrame lower than 150 with None
godzilla = pd.DataFrame([
    [x if not isinstance(x, (int, float)) or x >= 150 else None for x in j]
    for j in godzilla.to_numpy()], columns=godzilla.columns, index=godzilla.index)
# or, for a single column: replace every value of "Qty" lower than 150 with None
df["Qty"] = [x if not isinstance(x, (int, float)) or x >= 150 else None for x in df["Qty"]]
# dropna will drop all rows with at least one null value
df = df.dropna(how='any', axis=0)
</code></pre>
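<p>An alternative sketch that avoids the NaN-producing partial selection: compare the whole Qty block at once and reduce row-wise with <code>all</code>. The column and month names below are made up to mimic the pivoted layout in the question:</p>

```python
import pandas as pd

# Toy frame mimicking the pivoted layout: a top column level "Qty"
# with month sub-columns (names here are illustrative).
cols = pd.MultiIndex.from_product([["Qty"], ["201712", "201801"]])
godzilla = pd.DataFrame([[356, 275], [0, 0], [392, 311]], columns=cols)

# Keep only rows where every Qty sub-column exceeds the threshold;
# row-wise .all() turns the 2-D comparison into one boolean per row.
filtered = godzilla[(godzilla["Qty"] > 150).all(axis=1)]
print(filtered)
```

<p>Because the mask is a plain boolean Series, rows either stay or go; no values are replaced with NaN along the way.</p>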
|
python|pandas|dataframe|pivot|slice
| 0 |
1,909,042 | 49,157,985 |
Call a Python script with parameters from a Python script
|
<p>Let's say I have a Python script named <code>script1.py</code> that has the following code:</p>
<pre><code>import sys
name = sys.argv[1]
print("Bye", name)
</code></pre>
<p>And a second Python script <code>script2.py</code> that calls the first one:</p>
<pre><code>import sys
name = sys.argv[1]
print("Hello", name)
import script1
</code></pre>
<p>How can I pass an argument from the second script to the first one, as in the given examples? </p>
<p>I'd like the output to look like this when I run my script from the terminal:</p>
<pre><code>>> python script2.py Bob
Hello Bob
Bye Bob
</code></pre>
|
<p>A couple of options if you keep your current approach (your example actually works as-is because <code>sys.argv</code> is the same for both scripts, but that breaks down in the generic case where you want to pass arbitrary arguments).</p>
<p>Use <code>os.system</code></p>
<pre><code>import os
os.system('python script1.py {}'.format(name))
</code></pre>
<p>Use <code>subprocess</code></p>
<pre><code>import subprocess
subprocess.Popen(['python', 'script1.py', name])
</code></pre>
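<p>If you also want to wait for the child and check its exit status, <code>subprocess.run</code> (Python 3.5+) is a safer variant. In this self-contained sketch an inline <code>-c</code> snippet stands in for <code>script1.py</code>:</p>

```python
import subprocess
import sys

# subprocess.run waits for the child to finish; check=True raises on a
# non-zero exit code, and sys.executable reuses the current interpreter.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print('Bye', sys.argv[1])", "Bob"],
    capture_output=True, text=True, check=True)
print(result.stdout.strip())  # Bye Bob
```

<p>Unlike <code>Popen</code>, this blocks until the child exits, so <code>script2.py</code>'s output ordering is deterministic.</p>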
<p>A better way, IMO, would be to make script 1's logic into a function</p>
<pre><code># script1.py
import sys
def bye(name):
print("Bye", name)
if __name__ == '__main__': # will only run when script1.py is run directly
bye(sys.argv[1])
</code></pre>
<p>and then in script2.py do</p>
<pre><code># script2.py
import sys
import script1
name = sys.argv[1]
print("Hello", name)
script1.bye(name)
</code></pre>
|
python
| 2 |
1,909,043 | 49,322,696 |
Appium Python WebDriverWait wait.until(expected_conditions.alert_is_present()) random failure
|
<p>I have a Appium test class testing an iOS app with two nearly identical tests inside:</p>
<pre><code>def test_fail(self):
self.log_in('invalid_user_1')
self.wait.until(expected_conditions.alert_is_present())
alert = self.driver.switch_to.alert
assert "Your mobile number is not registered with us" in alert.text
alert.accept()
def test_normal(self):
self.log_in('empty')
self.wait.until(expected_conditions.alert_is_present())
alert = self.driver.switch_to.alert
assert 'Please enter mobile number' in alert.text
alert.accept()
</code></pre>
<p>When I run the tests, <em>test_fail</em> (which runs before <em>test_normal</em>) always fails to catch the warning dialog with this error:</p>
<blockquote>
<p>WebDriverException: Message: An unknown server-side error occurred while processing the command. Original error: An attempt was made to operate on a modal dialog when one was not open.</p>
</blockquote>
<p><em>test_normal</em> works though. I tried commenting out <em>test_normal</em>, but <em>test_fail</em> would still fail with the same message.</p>
<p>I then try to comment out <em>test_fail</em>, but this time <em>test_normal</em> would work. So for some strange reason, <em>test_fail</em> just won't work with <code>self.wait.until(expected_conditions.alert_is_present())</code></p>
<p>However, if I replace the <em>test_fail</em> test wait.until line:</p>
<pre><code>self.wait.until(expected_conditions.alert_is_present())
</code></pre>
<p>with:</p>
<pre><code>self.wait_for('OK')
</code></pre>
<p>Then everything would work.</p>
<p>The self.wait is declared in the the def setUp(self) <code>self.wait = WebDriverWait(self.driver, 120)</code></p>
<p>I'm running Appium 1.7.2 (Appium GUI 1.4.0) on Mac OS X. The test iOS is run on iPhone 7 simulator with OS 11.2.</p>
<p>The error stack trace:</p>
<pre><code>Error
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/case.py", line 331, in run
testMethod()
File "/Users/steve/Desktop/Temp/Appium/Ding_ios_aws/ios_aws/tests/test_invalid_login.py", line 16, in test_invalid_user_login
self.wait.until(expected_conditions.alert_is_present())
File "/Users/steve/venv/Appium/lib/python2.7/site-packages/selenium/webdriver/support/wait.py", line 71, in until
value = method(self._driver)
File "/Users/steve/venv/Appium/lib/python2.7/site-packages/selenium/webdriver/support/expected_conditions.py", line 387, in __call__
alert = driver.switch_to.alert
File "/Users/steve/venv/Appium/lib/python2.7/site-packages/selenium/webdriver/remote/switch_to.py", line 55, in alert
alert.text
File "/Users/steve/venv/Appium/lib/python2.7/site-packages/selenium/webdriver/common/alert.py", line 69, in text
return self.driver.execute(Command.GET_ALERT_TEXT)["value"]
File "/Users/steve/venv/Appium/lib/python2.7/site-packages/selenium/webdriver/remote/webdriver.py", line 312, in execute
self.error_handler.check_response(response)
File "/Users/steve/venv/Appium/lib/python2.7/site-packages/appium/webdriver/errorhandler.py", line 29, in check_response
raise wde
WebDriverException: Message: An unknown server-side error occurred while processing the command. Original error: An attempt was made to operate on a modal dialog when one was not open.
</code></pre>
<p>Can anyone help me figure out what's going on?</p>
<p>test_normal dialog screen image
<a href="https://i.stack.imgur.com/v3p8K.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v3p8K.jpg" alt="image of the dialog of test_normal"></a></p>
<p>test_fail dialog screen image
<a href="https://i.stack.imgur.com/88Oxs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/88Oxs.jpg" alt="image of the dialog of test_fail"></a></p>
|
<p>Most probably you are facing the bug <a href="https://github.com/appium/appium/issues/10286" rel="nofollow noreferrer">https://github.com/appium/appium/issues/10286</a>.</p>
<p>Bug:
on the very first check, if the alert is not yet present, Appium throws the exception immediately instead of waiting for the given time.</p>
<p>This bug was logged recently (about 15 days ago). Try the latest version; I believe it is fixed in the latest beta.</p>
<p>Also refer to <a href="https://github.com/facebook/WebDriverAgent/issues/857" rel="nofollow noreferrer">https://github.com/facebook/WebDriverAgent/issues/857</a>, which says it is an Appium issue, not a WebDriver one.</p>
<p>Temporary solution:
add a 1-3 second sleep to make sure the alert is present before checking the explicit condition.</p>
|
python|selenium-webdriver|appium-ios|webdriverwait
| 1 |
1,909,044 | 60,132,197 |
no value being returned from program
|
<p>I have the following code:</p>
<pre><code>import time
import warnings
import numpy as np
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from tkinter import *
from tkinter import ttk
from tkinter import messagebox
import external
import excelwork
window = Tk()
window.title('My Sample Solution')
window.geometry('700x500')
window.configure(background = 'green')
rows = 0
while rows < 50:
window.rowconfigure(rows, weight = 1)
window.columnconfigure(rows, weight = 1)
rows += 1
found = ['']
#Label Input:
Label(window, text='Search For: ').grid(column = 0, row = 0)
#Dropdown select search
def DisplayDevNum():
search_for = Srch.get()
found = excelwork.srch_knums(int(search_for))
#This is where you choose what to do with the information
return found
def DisplayDevAdd():
search_for = Srch.get()
found = excelwork.srch_add(str(search_for))
#This is where you choose what to do with the information
return found
def DisplayDevCty():
search_for = Srch.get()
found = excelwork.srch_cty(str(search_for))
#This is where you choose what to do with the information
return found
def DisplayDevStt():
search_for = Srch.get()
found = excelwork.srch_stt(str(search_for))
#This is where you choose what to do with the information
return found
def DisplayStrNum():
search_for = Srch.get()
found = excelwork.srch_snums(int(search_for))
#This is where you choose what to do with the information
return found
def DisplayStrName():
search_for = Srch.get()
found = excelwork.srch_stnm(str(search_for))
#This is where you choose what to do with the information
return found
def chng_srch():
srch_choices.get()
if srch_choices == options[0]:
DisplayDevNum()
return found()
if srch_choices == options[1]:
DisplayDevAdd()
return found()
if srch_choices == options[2]:
DisplayDevCty()
return found()
if srch_choices == options[3]:
DisplayDevStt()
return found()
if srch_choices == options[4]:
DisplayStrNum()
return found()
if srch_choices == options[5]:
DisplayStrName()
return found()
options = ['Device Number','Device Address','Device City','Device State','Store Number','Store Name']
srch_choices = ttk.Combobox(window, values = options)
srch_choices.grid(row = 0, column = 1)
#Input Entry
Srch = Entry(window)
Srch.grid(column = 2, row = 0)
display_tabs = ttk.Notebook(window)
display_tabs.grid(row = 3, column = 0, columnspan = 50, rowspan = 47, sticky = 'NESW')
tab1 = ttk.Frame(display_tabs)
display_tabs.add(tab1, text = 'Info')
Label(tab1, text = 'Kiosk ID: ').grid(column = 0, row = 0)
Label(tab1, text = found[0]).grid(column = 1, row = 0)
#Go Button
Button(window, text = 'Go', command = chng_srch).grid(column = 3, row = 0)
window.mainloop()
</code></pre>
<p>I am trying to print the value of my functions as results of a search. However, I am not getting any output from my returns. My excelwork import is a personally written file and that works. I have tested it and it returns the values expected when directly run. That said, I'm not entirely sure where I went wrong. Can somebody help?</p>
|
<p>There are a few separate issues with your current code.</p>
<p>First, simply changing the <code>found</code> variable will not update the <code>Label</code>. You can instead make a Tkinter <code>StringVar</code> to hold the text you want to display, which will cause the <code>Label</code> to be updated whenever it changes. See the Patterns section of the <code>Label</code> <a href="https://effbot.org/tkinterbook/label.htm" rel="nofollow noreferrer">docs</a> for an example of this.</p>
<p>The second issue, is that when you run the <code>DisplayDevNum</code> function, you need to save the output - running the function by itself doesn't actually do anything. So instead of doing</p>
<pre><code>if srch_choices == options[0]:
DisplayDevNum()
return found()
</code></pre>
<p>you need to do something like this:</p>
<pre><code>if srch_choices == options[0]:
return DisplayDevNum()
</code></pre>
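<p>The core issue can be seen without any Tkinter at all; the function name below is hypothetical, echoing the question's <code>DisplayDevNum</code>:</p>

```python
# Minimal illustration: calling a function without capturing its
# return value does nothing visible.
def display_dev_num():
    return "device 42"

display_dev_num()            # return value is silently discarded
found = display_dev_num()    # captured, so it can actually be used
print(found)  # device 42
```

<p>In the GUI, the captured value would then be pushed into the <code>StringVar</code> (or via <code>Label.config(text=...)</code>) so the widget updates.</p>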
<p>However, simply returning a value from <code>chng_srch</code> will not update either <code>found</code> or the Label contents. To change the <code>found</code> variable (assuming that <code>found</code> is now a <code>StringVar</code> used by the <code>Label</code>), you don't need to return anything from your functions, but you do need to <a href="https://pythonprogramming.net/global-local-variables/" rel="nofollow noreferrer">mark <code>found</code> as a global variable</a> (note that many people consider global variables bad practice in most situations). Another option would be to keep the <code>Label</code> as a variable, and have the <code>chng_srch</code> function explicitly update the <code>Label</code> contents.</p>
|
python|tkinter|return|return-value
| 0 |
1,909,045 | 67,955,172 |
How to delete a message when clicking on another button in Telebot?
|
<p>When I click on an inline button, the bot sends me a text, and with each subsequent press it sends the text again.</p>
<p>I want to make it so that when you click on a new button, the old message is deleted and a new one appears.</p>
<p>I hope I explained clearly :D</p>
<p>Just in case, I'll leave the source code :</p>
<pre><code>from telebot import TeleBot # ΠΠΎΠ΄ΠΊΠ»ΡΡΠ°Π΅ΠΌ Π±ΠΈΠ±Π»ΠΈΠΎΡΠ΅ΠΊΡ
from telebot import types
import time # ΠΠΎΠ΄ΠΊΠ»ΡΡΠ°Π΅ΠΌ ΠΌΠΎΠ΄ΡΠ»Ρ Π²ΡΠ΅ΠΌΠ΅Π½ΠΈ
# ΠΠ°ΠΏΠΈΡΡΠ²Π°Π΅ΠΌ ΡΠΎΠΊΠ΅Π½
bot = TeleBot('CODE')
bad_words = ["ΠΆΠΎΠΏΠ°", "Π΄ΡΡΠ°ΠΊ", "ΠΌΡΠ΄Π°ΠΊ", "durak", "Π±Π»Ρ", "Ρ
ΡΠΉ",
"Ρ
ΡΡ"] # Π‘Π»ΠΎΠ²Π°ΡΡ Π΄Π»Ρ ΡΡΠ°Π· ΠΊΠΎΡΠΎΡΡΠ΅ ΠΌΡ Π±ΡΠ΄Π΅ΠΌ Π°Π²ΡΠΎΠΌΠ°ΡΠΈΡΠ΅ΡΠΊΠΈ ΡΠ΄Π°Π»ΡΡΡ ΠΈΠ· ΡΠ°ΡΠ°
other_lang = ["c#", "c++", "Π΄Π΅Π»ΡΡΠΈ", " ΡΠ²Π° ", "java", "php", " ΠΏΡ
ΠΏ", "swift", " ΡΠ²ΠΈΡΡ", " go ",
"javascript", "kotlin", " ΠΊΠΎΡΠ»ΠΈΠ½", "rust", " ΡΠ°ΡΡ ", "basic", " Π±Π΅ΠΉΡΠΈΠΊ", " ΠΏΠ°ΡΠΊΠ°Π»Ρ",
"golang", "pascal", "delphi", "perl", " ΠΏΠ΅ΡΠ» ", "1c", " Π΄Π΅Π»ΡΠΈ", " ΡΠΈ "]
# Π‘Π»ΠΎΠ²Π°ΡΡ Π΄Π»Ρ ΡΡΠ°Π· Π½Π° ΠΊΠΎΡΠΎΡΡΠ΅ΠΌ ΠΌΡ Π±ΡΠ΄Π΅ΠΌ ΡΠ΅Π°Π³ΠΈΡΠΎΠ²Π°ΡΡ ΡΡΠΈΠΊΠ΅ΡΠΎΠΌ
# ΠΡΠ΅ ΠΎΠ΄ΠΈΠ½ ΡΠ»ΠΎΠ²Π°ΡΡ Π΄Π»Ρ ΡΡΠ°Π· Π½Π° ΠΊΠΎΡΠΎΡΡΠ΅ΠΌ ΠΌΡ Π±ΡΠ΄Π΅ΠΌ ΡΠ΅Π°Π³ΠΈΡΠΎΠ²Π°ΡΡ ΡΡΠΈΠΊΠ΅ΡΠΎΠΌ
other_bot = ["aiogram", "Π°ΠΈΠΎΠ³ΡΠ°ΠΌ"]
@bot.message_handler(commands=['bot'])
def button(message):
markup = types.InlineKeyboardMarkup(row_width=2)
item = types.InlineKeyboardButton(
'ΠΡΠΏΠΈΡΡ Π»ΠΎΠ³ΠΈ ', callback_data='question1')
item2 = types.InlineKeyboardButton(
'ΠΡΠ°Π²ΠΈΠ»Π° ββοΈ', callback_data='goodbye')
item3 = types.InlineKeyboardButton(
'ΠΠ°Π½ΡΠ°Π»Ρ ', callback_data='manual')
item4 = types.InlineKeyboardButton(
'Π Π°Π·Π²Π»Π΅ΠΊΡΡ
Π° ', callback_data='game')
markup.add(item, item2, item3, item4)
bot.send_message(message.chat.id, 'ΠΠ΄ΡΠ°Π²ΡΡΠ²ΡΠΉ, {first} {last}'.format(
first=message.from_user.first_name, last=message.from_user.last_name), reply_markup=markup)
@ bot.callback_query_handler(func=lambda call: True)
def callback(call):
if call.message:
if call.data == 'question1':
bot.send_message(call.message.chat.id,
'''ΠΠ°ΠΊ ΠΊΡΠΏΠΈΡΡ Π»ΠΎΠ³ΠΈ?
Π¨α΄α΄¦ 1οΈβ£
Πα΄α΄©ΚΡΚ Π΄α΄α΄§α΄Κ, ΚΡ Π΄α΄α΄§ΠΆΠ½Ρ Ια΄ΠΉα΄ΠΈ Κ Π±α΄α΄α΄ - @reimannlogs_bot ΠΈ ᴨᴩα΄α΄¨ΠΈα΄α΄α΄Ρ /startββοΈ
Π¨α΄α΄¦ 2οΈβ£
Πα΄ΠΆΠΈΚα΄α΄Κ "Πα΄α΄¨α΄α΄§Π½ΠΈα΄Ρ" ΠΈ ᴨα΄α΄¨α΄α΄§Π½Ρα΄Κ Π½α΄ α΄α΄α΄α΄§Ρα΄α΄,
Π½α΄ α΄α΄α΄α΄§Ρα΄α΄ ΚΡ Ρ
α΄α΄ΠΈα΄α΄ α΄Ρᴨиα΄Ρ α΄§α΄α΄¦α΄Κ.
Πα΄α΄Ρα΄α΄§ΡΠ½α΄Ρ Ρα΄Π½α΄ Ια΄ 1 α΄§α΄α΄¦ - 59 α΄©ΡΠ±
Π¨α΄α΄¦ 3οΈβ£
Πα΄ΠΆΠΈΚα΄α΄Κ "ΠΡᴨиα΄Ρ", ΚΡΠ±ΠΈα΄©α΄α΄Κ "Πα΄Π±Ρᴦα΄α΄© Rα΄dLinα΄"
ΠΈ "POPULAR GAMES"
Π¨α΄α΄¦ 4οΈβ£
ΠΠ±α΄©α΄Π±α΄α΄ΡΚα΄α΄Κ α΄§α΄α΄¦ ΠΈ ᴨα΄α΄§ΡΡα΄α΄Κ PROFIT!
(Πα΄Π½Ρα΄α΄§ ᴨᴠα΄Π±α΄©α΄Π±α΄α΄α΄α΄ Κα΄ΠΆΠ½α΄ ᴨα΄α΄§ΡΡΠΈα΄Ρ, ᴨᴩα΄α΄¨ΠΈα΄α΄Κ
α΄α΄Κα΄Π½Π΄Ρ - /bot)
''')
elif call.data == 'goodbye':
bot.send_message(call.message.chat.id,
'''Πα΄©α΄ΚΠΈα΄§α΄ Ρα΄α΄α΄
=======================
Πα΄α΄¨α΄©α΄Ρα΄Π½α΄:
- α΄§ΡΠ±α΄Ρ α΄α΄ΚΚα΄α΄©ΡΠΈΡ Κ Ρα΄α΄α΄ (ᴨα΄α΄Ρᴨα΄α΄/ᴨᴩα΄Π΄α΄ΠΆα΄)
- Ρα΄α΄ΙΡΚα΄α΄Ρ ΠΈα΄§ΠΈ ᴨα΄Κα΄Ρα΄α΄Ρ Π΄α΄©Ρᴦиᴠα΄α΄Π½α΄α΄§Ρ ΠΈα΄§ΠΈ Π±α΄α΄Ρ
- α΄©α΄α΄α΄§α΄Κα΄ ΠΈα΄§ΠΈ Ρᴨα΄ΚΠΈΠ½α΄Π½ΠΈα΄ ᴨα΄Ρ
α΄ΠΆΠΈΡ
α΄©α΄α΄Ρα΄©α΄α΄Κ/Ι―α΄α΄¨α΄Κ/Π½α΄ΠΉΚα΄Κ Κ α΄§ΡΠ±α΄Κ α΄α΄Π½α΄α΄α΄α΄α΄α΄
- ᴨα΄α΄¨α΄©α΄Ι―α΄ΠΉΠ½ΠΈΡα΄α΄α΄Κα΄
- Ια΄§α΄Ρᴨα΄α΄α΄©α΄Π±α΄§α΄Π½ΠΈα΄ "CAPS LOCK"
- Κα΄α΄α΄ΠΈ α΄α΄Π±Ρ Π½α΄α΄Π΄α΄α΄Κα΄α΄Π½α΄ Κ Ρα΄α΄α΄ ΠΈ α΄©α΄ΙΚα΄Π΄ΠΈα΄Ρ "α΄α΄©α΄Ρ"
- α΄α΄α΄α΄α΄©Π±α΄§α΄Π½ΠΈα΄ "ΠΌα΄Π΄α΄α΄©α΄ΡΠΈΠΈ/ᴨᴩα΄α΄α΄α΄α΄/Ι―α΄α΄¨α΄" - Π±α΄Π½ βοΈ
- α΄α΄α΄¨α΄©α΄Κα΄§Ρα΄Ρ α΄α΄α΄©ΠΈΚα΄α΄©Ρ, α΄©α΄α΄Ρα΄§α΄Π½α΄Π½α΄Ρ, α΄Κα΄α΄α΄ΠΈα΄Ρ, Π½α΄ΡΠΈΙΚ, α΄α΄Π½α΄α΄Π½α΄
- α΄ΙΈα΄α΄©Κα΄§Ρα΄Ρ α΄©α΄Ια΄§ΠΈΡΠ½Ρα΄ α΄α΄α΄©α΄Ρ, α΄Π±ΡΙΠΈα΄Ρ α΄©α΄ΙΈα΄α΄©α΄α΄§ΡΠ½ΡΡ α΄ΠΈα΄α΄α΄ΚΡ, α΄α΄α΄Κ ΠΈ α΄Π±Κα΄Π½ ᴨα΄α΄§ΡΙα΄Κα΄α΄α΄α΄§α΄ΠΉ
- ᴨᴩα΄α΄¨α΄α΄¦α΄Π½Π΄α΄ ᴨα΄α΄§ΠΈα΄ΠΈα΄ΠΈ
- ΙΈα΄§ΡΠ΄\α΄α΄¨α΄Κ α΄Π΄ΠΈΠ½α΄α΄α΄ΚΡΚΠΈ Ια΄ α΄α΄Π½α΄α΄α΄α΄α΄α΄Κ α΄α΄§α΄Κα΄ΚΠΈ ΠΈα΄§ΠΈ ᴨᴩα΄Π΄α΄§α΄ΠΆα΄Π½ΠΈΡΚΠΈ (1 ᴨᴩα΄Π΄Ρᴨᴩα΄ΠΆΠ΄α΄Π½ΠΈα΄, ᴨα΄α΄α΄§α΄ - Ι―α΄α΄©α΄ΙΈ) ''')
elif call.data == 'manual':
bot.send_message(call.message.chat.id,
'''
ΠΡΠ°ΡΠΊΠΈΠΉ ΠΌΠ°Π½ΡΠ°Π» ΠΎ ΡΠΎΠΌ ΠΊΠ°ΠΊ ΠΎΠ±ΡΠ°Π±Π°ΡΡΠ²Π°ΡΡ Π»ΠΎΠ³ΠΈ - https://telegra.ph/Kak-obrabatyvat-logi-05-30
ΠΠ° ΠΏΡΠΈΠ²Π»Π΅ΡΠ΅Π½ΠΈΠ΅ Π½ΠΎΠ²ΠΎΠΉ Π°ΡΠ΄ΠΈΡΠΎΡΠΈΠΈ, Π²ΡΠ΄Π°Ρ Π»ΠΎΠ³ΠΈ
''')
elif call.data == 'game':
bot.send_message(call.message.chat.id, '''
ΠΠΎΡ ΡΠΏΠΈΡΠΎΠΊ Π·Π°Π±Π°Π²Π½ΡΡ
ΠΊΠΎΠΌΠ°Π½Π΄
"Π‘ΠΏΠΎΠΉ, ΠΏΡΠΈΡΠΊΠ°!" - ΠΏΠΎΠΏΡΠ³Π°ΠΉ ΡΡΡ ΡΠΊΠ°ΠΆΠ΅Ρ :3
"CΠΊΠ°ΠΉΠ½Π΅Ρ Π²ΠΎΡΡΡΠ°ΡΡ!" - ΠΠΎ Π²ΠΎΡΡΡΠ°Π½Π΅Ρ Π²Π΅Π»ΠΈΠΊΠΈΠΉ Π‘ΠΊΠ°ΠΉΠ½Π΅Ρ!!!
''')
@bot.message_handler(
    content_types=['new_chat_members'])  # Handler describing the bot's behaviour when a new user is added
def greeting(message):  # Start of the handler's main function
    # Print the new user's name to the console
    print("User " + message.new_chat_member.first_name + " added")
    try:  # Try to execute the commands below
        bot.reply_to(message, text='Приветствую тебя в нашем серпентарии. Будь вежливым, и мы постараемся тебе помочь!',
                     disable_notification=True)  # Send the greeting to the chat
    except OSError:  # Ignore the timeout error if Telegram dropped the connection since the last session
        # Print the error to the console
        print("GreetingError - Sending again after 5 seconds!!!")
        # Pause for 3 seconds and run the commands below
        time.sleep(3)
        bot.reply_to(message, text='Приветствую тебя в нашем серпентарии. Будь вежливым, и мы постараемся тебе помочь!',
                     disable_notification=True)  # Send the greeting to the chat


@bot.message_handler(
    content_types=['left_chat_member'])  # Handler describing the bot's behaviour when a user leaves the chat
def not_greeting(message):  # Start of the handler's main function
    # Print the departed user's name to the console
    print("User " + message.left_chat_member.first_name + " left")
    try:  # Try to execute the commands below
        bot.reply_to(message, text='Как жаль, что вы наконец-то уходите...',
                     disable_notification=True)  # Send the farewell to the chat
    except OSError:  # Ignore the timeout error if Telegram dropped the connection since the last session
        # Print the error to the console
        print("LeftError - Sending again after 5 seconds!!!")
        # Pause for 3 seconds and run the commands below
        time.sleep(3)
        bot.reply_to(message, text='Как жаль, что вы наконец-то уходите...',
                     disable_notification=True)  # Send the farewell to the chat


# Handler describing the bot's behaviour on /start
@bot.message_handler(commands=['start'])
def starting(message):  # Start of the handler's main function
    try:  # Try to execute the commands below
        bot.reply_to(message, text='Ты мне тут не стартуй!',
                     disable_notification=True)  # Reply to the /start command
    except OSError:  # Ignore the timeout error if Telegram dropped the connection since the last session
        # Print the error to the console
        print("StartingError - Sending again after 3 seconds!!!")
        # Pause for 3 seconds and run the commands below
        time.sleep(3)
        bot.reply_to(message, text='Ты мне тут не стартуй!',
                     disable_notification=True)  # Reply to the /start command
# Handler describing the bot's behaviour on /command1
@bot.message_handler(commands=['command1'])
def bui(message):  # Start of the handler's main function
    try:  # Try to execute the commands below
        # Open the sticker and assign it to a variable
        bui_pic = open('bui.webp', 'rb')
        bot.send_sticker(message.chat.id, bui_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the sticker
    except OSError:  # Ignore the timeout error if Telegram dropped the connection since the last session
        # Print the error to the console
        print("BuiError - Sending again after 3 seconds!!!")
        # Pause for 3 seconds and run the commands below
        time.sleep(3)
        # Open the sticker and assign it to a variable
        bui_pic = open('bui.webp', 'rb')
        bot.send_sticker(message.chat.id, bui_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the sticker


# Handler describing the bot's behaviour on /command2
@bot.message_handler(commands=['command2'])
def zvezda(message):  # Start of the handler's main function
    try:  # Try to execute the commands below
        # Open the sticker and assign it to a variable
        zv_pic = open('zvezda.webp', 'rb')
        bot.send_sticker(message.chat.id, zv_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the sticker
    except OSError:
        print("ZvezdaError - Sending again after 3 seconds!!!")
        time.sleep(3)
        zv_pic = open('zvezda.webp', 'rb')
        bot.send_sticker(message.chat.id, zv_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the sticker


# Handler describing the bot's behaviour on /command3
@bot.message_handler(commands=['command3'])
def jigurda(message):  # Start of the handler's main function
    try:  # Try to execute the commands below
        # Open the sticker and assign it to a variable
        jig_pic = open('jig.webp', 'rb')
        bot.send_sticker(message.chat.id, jig_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the sticker
    except OSError:
        print("JigurdaError - Sending again after 3 seconds!!!")
        time.sleep(3)
        jig_pic = open('jig.webp', 'rb')
        bot.send_sticker(message.chat.id, jig_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the sticker


# Handler describing the bot's behaviour on /help
@bot.message_handler(commands=['help'])
def helper(message):  # Start of the handler's main function
    try:  # Try to execute the commands below
        # Reply to the /help command
        bot.reply_to(message, text='Гугл в помощь!', disable_notification=True)
    except OSError:
        print("HelperError - Sending again after 3 seconds!!!")
        time.sleep(3)
        # Reply to the /help command
        bot.reply_to(message, text='Гугл в помощь!', disable_notification=True)
# Handler describing the bot's behaviour on a voice message in the chat
@bot.message_handler(content_types=['voice'])
def voice_msg(message):  # Start of the handler's main function
    try:  # Try to execute the commands below
        # Open the sticker and assign it to a variable
        jpg_pic = open('voice.webp', 'rb')
        bot.send_sticker(message.chat.id, jpg_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the sticker
    except OSError:  # Ignore the timeout error if Telegram dropped the connection since the last session
        # Print the error to the console
        print("Audio_msgError - Sending again after 3 seconds!!!")
        # Pause for 3 seconds and run the commands below
        time.sleep(3)
        jpg_pic = open('voice.webp', 'rb')
        bot.send_sticker(message.chat.id, jpg_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the sticker


@bot.message_handler(
    content_types=['pinned_message'])  # Handler describing the bot's behaviour after a message is pinned
def pinned_msg(message):  # Start of the handler's main function
    try:  # Try to execute the commands below
        bot.reply_to(message, text='Ну, теперь заживем!',
                     disable_notification=True)  # Reply to the pinned message
    except OSError:
        print("PinnedError - Sending again after 3 seconds!!!")
        time.sleep(3)
        bot.reply_to(message, text='Ну, теперь заживем',
                     disable_notification=True)  # Reply to the pinned message


# Handler describing the bot's behaviour when an audio file is posted in the chat
@bot.message_handler(content_types=['audio'])
def audio_msg(message):  # Start of the handler's main function
    try:  # Try to execute the commands below
        # Open the image and assign it to a variable
        jpg_pic = open('002.jpg', 'rb')
        bot.send_sticker(message.chat.id, jpg_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the image
    except OSError:
        print("Audio_msgError - Sending again after 3 seconds!!!")
        time.sleep(3)
        jpg_pic = open('002.jpg', 'rb')
        bot.send_sticker(message.chat.id, jpg_pic, reply_to_message_id=message.message_id,
                         disable_notification=True)  # Send the image
# Handler describing the bot's reaction to text in the chat
@bot.message_handler(content_types=['text'])
def txt(message):  # Start of the handler's main function
    for i in range(0, len(bad_words)):  # Iterate over all elements of the list one by one
        # Check whether each word from our list is present in the message
        if bad_words[i] in message.text.lower():
            try:  # Try to execute the commands below
                # Delete the message
                bot.delete_message(message.chat.id, message.message_id, )
                # Print the deleted message to the console
                print(message.text + " deleted")
            except OSError:  # Ignore the timeout error if Telegram dropped the connection since the last session
                # Print the error to the console
                print("BadWordsError - Sending again after 3 seconds!!!")
                # Pause for 3 seconds and run the commands below
                time.sleep(3)
                # Delete the message
                bot.delete_message(message.chat.id, message.message_id)
                # Print the deleted message to the console
                print(message.text + " deleted")
    for l in range(0, len(other_lang)):  # Iterate over all elements of the list one by one
        # Check whether each word from our list is present in the message
        if other_lang[l] in message.text.lower():
            try:  # Try to execute the commands below
                # Open the sticker and assign it to a variable
                get_pic = open('get_out.webp', 'rb')
                bot.send_sticker(message.chat.id, get_pic, reply_to_message_id=message.message_id,
                                 disable_notification=True)  # Send the sticker
            except OSError:
                print("LangError - Sending again after 3 seconds!!!")
                time.sleep(3)
                get_pic = open('get_out.webp', 'rb')
                bot.send_sticker(message.chat.id, get_pic, reply_to_message_id=message.message_id,
                                 disable_notification=True)  # Send the sticker
    for f in range(0, len(other_bot)):  # Iterate over all elements of the list one by one
        # Check whether each word from our list is present in the message
        if other_bot[f] in message.text.lower():
            try:  # Try to execute the commands below
                # Open the video and assign it to a variable
                pss_pic = open('animation.gif.mp4', 'rb')
                bot.send_animation(message.chat.id, pss_pic, reply_to_message_id=message.message_id,
                                   disable_notification=True)  # Send the video
            except OSError:
                print("AnimError - Sending again after 3 seconds!!!")
                time.sleep(3)
                pss_pic = open('animation.gif.mp4', 'rb')
                bot.send_animation(message.chat.id, pss_pic, reply_to_message_id=message.message_id,
                                   disable_notification=True)  # Send the video
    if message.text == 'Спой, птичка!':  # Look for our phrase in the message text
        try:  # Try to execute the commands below
            bot.reply_to(message, text='Ща спою!')  # Reply to the message
            # Open the audio and assign it to a variable
            sti = open('001.mp3', 'rb')
            bot.send_audio(message.chat.id, audio=sti, reply_to_message_id=message.message_id,
                           disable_notification=True)  # Send the audio
        except OSError:  # Ignore the timeout error if Telegram dropped the connection since the last session
            # Print the error to the console
            print("SongError - Sending again after 3 seconds!!!")
            # Pause for 3 seconds and run the commands below
            time.sleep(3)
            bot.reply_to(message, text='Ща спою!')  # Reply to the message
            # Open the audio and assign it to a variable
            sti = open('001.mp3', 'rb')
            bot.send_audio(message.chat.id, audio=sti, reply_to_message_id=message.message_id,
                           disable_notification=True)  # Send the audio
    elif message.text == "Скайнет восстань!":  # Look for our phrase in the message text
        try:  # Try to execute the commands below
            # Open the image and assign it to a variable
            ver_pic = open('hqdefault.jpg', 'rb')
            bot.send_photo(message.chat.id, ver_pic, reply_to_message_id=message.message_id,
                           disable_notification=True)  # Send the image
        except OSError:
            print("VerError - Sending again after 3 seconds!!!")
            time.sleep(3)
            ver_pic = open('hqdefault.jpg', 'rb')
            bot.send_photo(message.chat.id, ver_pic, reply_to_message_id=message.message_id,
                           disable_notification=True)  # Send the image
    elif " бот " in message.text.lower():  # Look for our phrase in the message text
        try:  # Try to execute the commands below
            bot.reply_to(message, text='Боты не то, чем кажутся...',
                         disable_notification=True)  # Reply to the message
        except OSError:  # Ignore the timeout error if Telegram dropped the connection since the last session
            # Print the error to the console
            print("Stop_wordError - Sending again after 3 seconds!!!")
            # Pause for 3 seconds and run the commands below
time.sleep(3)
bot.reply_to(message, text='ΠΠΎΡΡ Π½Π΅ ΡΠΎ, ΡΠ΅ΠΌ ΠΊΠ°ΠΆΡΡΡΡ...',
disable_notification=True) # ΠΡΠ²Π΅ΡΠ°Π΅ΠΌ Π½Π° ΡΠΎΠΎΠ±ΡΠ΅Π½ΠΈΠ΅
else: # ΠΡΠ»ΠΈ Π½ΠΈΡΠ΅Π³ΠΎ Π½Π΅ ΠΏΠΎΠ΄ΠΎΡΠ»ΠΎ
pass # ΠΠ΄ΡΠΌ Π΄Π°Π»ΡΡΠ΅
if __name__ == '__main__': # ΠΠ»ΠΎΠΊ Π·Π°ΠΏΡΡΠΊΠ° Π±ΠΎΡΠ°
try: # ΠΡΡΠ°Π΅ΠΌΡΡ Π²ΡΠΏΠΎΠ»Π½ΠΈΡΡ ΠΊΠΎΠΌΠ°Π½Π΄Ρ ΠΏΡΠΈΠ²Π΅Π΄Π΅Π½ΡΡ Π½ΠΈΠΆΠ΅
bot.polling(none_stop=True) # ΠΠ°ΠΏΡΡΠΊΠ°Π΅ΠΌ Π±ΠΎΡΠ°
except OSError: # ΠΠ³Π½ΠΎΡΠΈΡΡΠ΅ΠΌ ΠΎΡΠΈΠ±ΠΊΡ ΠΏΠΎ ΡΠ°ΠΉΠΌΠ°ΡΡΡ, Π΅ΡΠ»ΠΈ ΡΠ΅Π»Π΅Π³ΡΠ°ΠΌΠΌ ΡΡΠΏΠ΅Π» ΡΠ°Π·ΠΎΡΠ²Π°ΡΡ ΡΠΎΠ΅Π΄ΠΈΠ½Π΅Π½ΠΈΠ΅ ΡΡ Π²ΡΠ΅ΠΌΠ΅Π½ΠΈ ΠΏΡΠΎΡΠ»ΠΎΠΉ ΡΠ΅ΡΠΈΠΈ
# ΠΡΠ²ΠΎΠ΄ΠΈΠΌ ΠΎΡΠΈΠ±ΠΊΡ Π² ΠΊΠΎΠ½ΡΠΎΠ»Ρ
print("PollingError - Sending again after 5 seconds!!!")
# ΠΠ΅Π»Π°Π΅ΠΌ ΠΏΠ°ΡΠ·Ρ Π² 5 ΡΠ΅ΠΊΡΠ½Π΄ ΠΈ Π²ΡΠΏΠΎΠ»Π½ΡΠ΅ΠΌ ΠΊΠΎΠΌΠ°Π½Π΄Ρ ΠΏΡΠΈΠ²Π΅Π΄Π΅Π½ΡΡ Π½ΠΈΠΆΠ΅
time.sleep(5)
bot.polling(none_stop=True) # ΠΠ°ΠΏΡΡΠΊΠ°Π΅ΠΌ Π±ΠΎΡΠ°
</code></pre>
<p>How do you write the code so that every time the inline button is launched, the old message is deleted and a new one appears?</p>
|
<p>This can be done with <em>edit_message_text</em>, which replaces the text of the old message in place instead of deleting it and sending a new one.</p>
<p>Example</p>
<pre><code>old = bot.send_message(message.chat.id,"some text")
bot.edit_message_text("new text here",old.chat.id,old.message_id)
</code></pre>
|
python|telegram-bot
| 0 |
1,909,046 | 30,580,437 |
Flask Python - Pull information from google datastore and display on own url
|
<p>This is my Flask application running on Google App Engine; the code below is a bit of my views.py file. All I'm trying to do is display the stored database information on its own page, with the URL being the ID. The top route works, but the bottom doesn't.</p>
<pre><code>@app.route('/')
def list_posts():
    posts = Post.all()
    return render_template('list_posts.html', posts=posts)

@app.route('/posts/<int:id>')
def display_posts(id):
    posts = Post.all()
    return render_template('display_posts.html', posts=posts)
</code></pre>
<p>And this is my template:</p>
<pre><code>{% extends "base.html" %}
{% block content %}
<ul>
<h1 id="">Posts registered on the database</h1>
<li>
{{ post.title }} (written by {{ post.author.nickname() }})<br />
{{ post.content }}
</li>
</ul>
{% endblock %}
</code></pre>
<p>This is the error I get:</p>
<pre><code><class 'jinja2.exceptions.UndefinedError'>: 'post' is undefined
Traceback (most recent call last):
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/main.py", line 4, in <module>
run_wsgi_app(app)
File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 99, in run_wsgi_app
run_bare_wsgi_app(add_wsgi_middleware(application))
File "/base/data/home/runtimes/python/python_lib/versions/1/google/appengine/ext/webapp/util.py", line 117, in run_bare_wsgi_app
result = application(env, _start_response)
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/flask/app.py", line 874, in __call__
return self.wsgi_app(environ, start_response)
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/flask/app.py", line 864, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/flask/app.py", line 861, in wsgi_app
rv = self.dispatch_request()
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/flask/app.py", line 696, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/blog/views.py", line 26, in display_posts
return render_template('display_posts.html', posts=posts)
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/flask/templating.py", line 81, in render_template
context, ctx.app)
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/flask/templating.py", line 65, in _render
rv = template.render(context)
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/jinja2/environment.py", line 891, in render
return self.environment.handle_exception(exc_info, True)
File "/base/data/home/apps/s~smart-cove-95709/1.384716942984350887/blog/templates/display_posts.html", line 1, in <module>
{% extends "base.html" %}
</code></pre>
|
<p>Both your <code>render_template</code> calls only pass a <code>posts</code> object:</p>
<pre><code># in list_posts:
return render_template('list_posts.html', posts=posts)
# in display_posts:
return render_template('display_posts.html', posts=posts)
</code></pre>
<p>Your template however only refers to a singular <code>post</code>:</p>
<pre><code>{{ post.title }} (written by {{ post.author.nickname() }})<br />
{{ post.content }}
</code></pre>
<p>So when jinja tries to render the template, it tries to fill in the placeholders using what's stored in the variable <code>post</code>, but there is none.</p>
<p>So for this template to work, you would have to pass <em>a single</em> <code>post</code> to the template, e.g.:</p>
<pre><code>return render_template('list_posts.html', post=single_post)
</code></pre>
<p>However, the way you have written your template, it makes more sense that you intended to show multiple posts in a list. So in that case, you should pass a list of posts as <code>posts</code>βas you do right nowβbut change the template to iterate over that list and generate the output for each single post in that list instead:</p>
<pre><code>{% block content %}
<ul>
<h1 id="">Posts registered on the database</h1>
{% for post in posts %}
<li>
{{ post.title }} (written by {{ post.author.nickname() }})<br />
{{ post.content }}
</li>
{% endfor %}
</ul>
{% endblock %}
</code></pre>
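<p>To see the plural/singular point in isolation, here is a small self-contained Jinja2 sketch (outside Flask; the post dicts are made-up stand-ins for your datastore entities):</p>

```python
from jinja2 import Template

# the loop variable `post` only exists because the template declares it;
# the *context* variable we pass in is the plural `posts`
tmpl = Template(
    "<ul>{% for post in posts %}<li>{{ post.title }}</li>{% endfor %}</ul>"
)

html = tmpl.render(posts=[{"title": "First"}, {"title": "Second"}])
print(html)  # <ul><li>First</li><li>Second</li></ul>
```

<p>Passing <code>post=...</code> instead would leave <code>posts</code> undefined and the loop body would never run — the mirror image of the error in the question.</p>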
<hr>
<p>The <code>/posts/<id></code> route should probably work like this:</p>
<pre><code>@app.route('/posts/<int:id>')
def display_post(id):
    q = Post.all()
    post = q.filter('id =', id).get()  # .get() returns a single entity (or None), not a Query
    return render_template('display_post.html', post=post)
</code></pre>
<p>Note that we only pass a single post, so the template should only expect a single post. You should always try to make it clear whether you are talking about a single or multiple objects by using singular or plural in your variable names, your method names, and your template names (E.g. <code>display_post</code>, <code>display_post.html</code>, and <code>post</code> versus <code>list_posts</code>, <code>list_posts.html</code>, and <code>posts</code>).</p>
|
python|google-app-engine|flask
| 2 |
1,909,047 | 63,983,710 |
CNN model is overfitting to data after reaching 50% accuracy
|
<p>I am trying to identify 3 mental states (classes) based on EEG connectome data. The shape of the data is 99x1x34x34x50x130 (originally graph data, but now represented as a matrix), where the axes respectively represent [subjects, channel, height, width, freq, time series]. For the sake of this study, I can only input a 1x34x34 image of the connectome data. From previous studies, it was found that the alpha band (8-12 Hz) gave the most information, so the dataset was narrowed down to 99x1x34x34x4x130. The testing set accuracy of previous machine learning techniques such as SVMs reached ~75%. Hence, my goal is to achieve a greater accuracy given the same data (1x34x34). Since my data is very limited (1-66 for training and 66-99 for testing; fixed ratios with a 1/3 class distribution), I thought of splitting the data along the time series axis (6th axis) and then averaging the data to a shape of 1x34x34 (e.g. from 1x34x34x4x10, where 10 is a random sample of the time series). This gave me ~1500 samples for training, and 33 for testing (testing is fixed; the class distributions are 1/3).</p>
<p>Model:</p>
<pre><code>SimpleCNN(
  (conv1): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (conv2): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
  (pool1): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  (drop1): Dropout(p=0.25, inplace=False)
  (fc1): Linear(in_features=9248, out_features=128, bias=True)
  (drop2): Dropout(p=0.5, inplace=False)
  (fc2): Linear(in_features=128, out_features=3, bias=True)
)
CrossEntropyLoss()
Adam (
Parameter Group 0
    amsgrad: False
    betas: (0.9, 0.999)
    eps: 1e-08
    lr: 5e-06
    weight_decay: 0.0001
)
</code></pre>
<p>Results:
The training set can reach an accuracy of 100% with enough iterations, but at the cost of the testing set accuracy. After around 20-50 epochs the model starts to overfit to the training set and the test set accuracy starts to decrease (same with the loss).</p>
<p><a href="https://i.stack.imgur.com/4ELAr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4ELAr.jpg" alt="enter image description here" /></a></p>
<p>What I have tried:
I have tried tuning the hyperparameters: lr=0.001-0.000001, weight decay=0.0001-0.00001. Training to 1000 epochs (useless, because it overfits in less than 100 epochs). I have also tried increasing/decreasing the model complexity by adding additional fc layers and varying the number of channels in the CNN layers from 8-64. I have also tried adding more CNN layers, and the model did a bit worse, averaging around ~45% accuracy on the test set. I tried manually scheduling the learning rate every 10 epochs; the results were the same. Weight decay didn't seem to affect the results much; I changed it from 0.1-0.000001.</p>
<p>From previous testing, I have a model that achieves 60% acc on both the testing and the training set. However, when I try to retrain it, the acc instantly goes down to ~40 on both sets (training and testing), which makes no sense. I have tried altering the learning rate from 0.01 to 0.00000001, and also tried weight decay for this.</p>
<p>From training the model and the graphs, it seems like the model doesn't know what it's doing for the first 5-10 epochs and then starts to learn rapidly, to around ~50%-60% accuracy on both sets. This is where the model starts to overfit; from there the model's accuracy increases to 100% on the training set, and the accuracy for the testing set goes down to 33%, which is equivalent to guessing.</p>
<p>Any tips?</p>
<p>Edit:</p>
<p>The model's outputs for the test set are very, very close to each other.</p>
<pre><code>0.33960407972335815, 0.311821848154068, 0.34857410192489624
</code></pre>
<p>The average standard deviation for the whole test set between predictions for each image are (softmax):</p>
<pre><code>0.017695341517654846
</code></pre>
<p>However, the average std for the training set is <code>.22</code> so...</p>
<p>F1 Scores:</p>
<pre><code>Micro Average: 0.6060606060606061
Macro Average: 0.5810185185185186
Weighted Average: 0.5810185185185186
Scores for each class: 0.6875 0.5 0.55555556
</code></pre>
<p>Here is a confusion matrix:
<a href="https://i.stack.imgur.com/gr1Hi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gr1Hi.png" alt="enter image description here" /></a></p>
|
<p>I have some suggestions of what I would try; maybe you've already done some of it:</p>
<ul>
<li>increase the dropout probability, which could decrease overfitting,</li>
<li>I did not see it (or I missed it), but if you don't do so already, shuffle all the samples,</li>
<li>there is not much data; have you thought about using another NN to generate more data for the classes with the lowest scores? I am not sure if it applies here, but even randomly rotating or scaling the images can produce more training examples,</li>
<li>another approach you can take, if you haven't done it already: use transfer learning with another popular CNN and see how it does the job; then you will have some comparison of whether something is wrong with your architecture or it's a lack of examples :)</li>
</ul>
<p>I know these are just suggestions, but maybe, if you haven't tried some of them, they will bring you closer to the solution.
Good luck!</p>
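<p>On the augmentation suggestion: even simple geometric transforms multiply the number of 1x34x34 samples. A minimal numpy sketch (random arrays stand in for the connectome images; whether flips/rotations are physically meaningful for connectivity matrices is a judgment call for this data):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((1, 34, 34))          # stand-in for one 1x34x34 connectome "image"

augmented = [
    img,
    np.flip(img, axis=1),              # flip along height
    np.flip(img, axis=2),              # flip along width
    np.rot90(img, k=1, axes=(1, 2)),   # 90-degree rotation in the image plane
]

print([a.shape for a in augmented])    # all four are (1, 34, 34)
```

<p>Each transform keeps the input shape the model expects, so the augmented copies can be appended straight to the training set.</p>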
|
python|machine-learning|deep-learning|pytorch|conv-neural-network
| 3 |
1,909,048 | 50,999,685 |
Pass argument and function as argument in python
|
<p>The code given here is part of my program. I want to reduce the 3 functions to just 1, as they are exactly the same except for 1 line. I read about passing a function (let this function be <code>Bar</code>) and its arguments as arguments to another function (let this function be <code>Foo</code>).</p>
<p>But in this scenario, <strong>I can't change the function <code>Foo</code></strong>. Here my <code>Foo</code> function is <code>.clicked.connect()</code> and <code>addXMin</code> is my function <code>Bar</code>. I want to pass <code>Bar</code> and its argument <code>num</code> into <code>Foo</code>, where I can't change what's going on in <code>Foo</code>. Is there a way I can reduce the 3 functions into 1 and pass <code>15</code>, <code>10</code> and <code>5</code> as arguments to that single function?</p>
<pre><code>self.add15m.clicked.connect(self.add15Min)
self.add10m.clicked.connect(self.add10Min)
self.add5m.clicked.connect(self.add5Min)

def add15Min(self):
    global mins, secs, time
    time = self.lineEdit.text()
    mins = int((time.split(':'))[0])
    mins += 15  # The only different line
    secs = int((time.split(':'))[1])
    time = str(mins).zfill(2) + ":" + str(secs).zfill(2)
    self.lineEdit.setText(time)

def add10Min(self):
    global mins, secs, time
    time = self.lineEdit.text()
    mins = int((time.split(':'))[0])
    mins += 10  # The only different line
    secs = int((time.split(':'))[1])
    time = str(mins).zfill(2) + ":" + str(secs).zfill(2)
    self.lineEdit.setText(time)

def add5Min(self):
    global mins, secs, time
    time = self.lineEdit.text()
    mins = int((time.split(':'))[0])
    secs = int((time.split(':'))[1])
    mins += 5  # The only different line
    time = str(mins).zfill(2) + ":" + str(secs).zfill(2)
    self.lineEdit.setText(time)
</code></pre>
|
<p>If <code>connect</code> accepts a single argument (the callable), you can use an anonymous function (a <code>lambda</code> in Python) like this:</p>
<pre><code>self.add5m.clicked.connect(lambda: self.addMin(5))

def addMin(self, minutes):
    time = self.lineEdit.text()
    mins = int((time.split(':'))[0])
    secs = int((time.split(':'))[1])
    mins += minutes
    time = str(mins).zfill(2) + ":" + str(secs).zfill(2)
    self.lineEdit.setText(time)
</code></pre>
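<p>One related pitfall worth knowing: if you create such lambdas in a loop, Python's late binding makes every lambda see the loop variable's final value. <code>functools.partial</code> (or a default argument) binds the value immediately. A plain-Python sketch without Qt:</p>

```python
from functools import partial

def add_min(minutes):
    return minutes

# late binding: each lambda looks up m only when it is *called*,
# and by then m has its final value
late = [lambda: add_min(m) for m in (5, 10, 15)]
print([h() for h in late])        # [15, 15, 15]

# partial (or a default argument: lambda m=m: add_min(m)) binds the value now
bound = [partial(add_min, m) for m in (5, 10, 15)]
print([h() for h in bound])       # [5, 10, 15]
```

<p>So if you ever wire up the 5/10/15 buttons in a loop, prefer <code>partial(self.addMin, m)</code> over a bare <code>lambda: self.addMin(m)</code>.</p>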
|
python
| 1 |
1,909,049 | 50,872,449 |
regular expression in matching partially correct keyword
|
<p>I want to match strings against keywords. The keywords may not be matched exactly; the maximum number of unmatched characters is 2.</p>
<p>How to use regular expression to do it?</p>
<p>Thanks.</p>
<p>Here are examples:</p>
<pre><code>string partially matched 'abc technology.com'?
apc technology.om yes(wrong p and miss c)
abctechnologycom yes(miss space and dot)
abc technolog.con yes(miss y and wrong n)
abtechnology.com yes(miss c and space)
abc technology.c yes(miss o and m)
abtechnology.co no(miss c, space and m)
abc technology. no(miss com)
abctechnology.c no(mis space and om)
</code></pre>
|
<p>You can use the <a href="https://pypi.org/project/regex/" rel="nofollow noreferrer">regex</a> library and work with fuzzy matching (which fits your use case), specifying a maximum number of errors, like:</p>
<pre><code>import regex
from pprint import pprint

matcher = regex.compile(r'(abc technology\.com){e<3}')

tests = [
    "apc technology.om",
    "abctechnologycom",
    "abc technolog.con",
    "abtechnology.com",
    "abc technology.c",
    "abtechnology.co",
    "abc technology.",
    "abctechnology.c",
]

for test in tests:
    pprint(matcher.match(test))
</code></pre>
<p><a href="https://repl.it/repls/KindheartedCrushingBusinesses" rel="nofollow noreferrer">Online demo here</a></p>
<p>When the error count is hit, it will return <code>None</code>, otherwise an object that contains a <code>fuzzy_counts</code> tuple, which gives you the total number of substitutions, insertions, deletions. It also contains a <code>fuzzy_changes</code> tuple, which contains the positions of each substitution, insertion, deletion done.</p>
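<p>If installing the third-party <code>regex</code> module is not an option, the same "at most 2 unmatched characters" rule can be implemented with a plain Levenshtein edit distance using only the standard library. This is a sketch, not a drop-in replacement for <code>regex</code>'s fuzzy matching (it compares whole strings rather than searching inside longer text):</p>

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def matches(s, keyword='abc technology.com', max_errors=2):
    return edit_distance(s, keyword) <= max_errors

print(matches('apc technology.om'))   # True  (2 errors)
print(matches('abtechnology.co'))     # False (3 errors)
```

<p>Run against the examples in the question, this reproduces the yes/no column exactly.</p>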
|
python|regex
| 3 |
1,909,050 | 50,980,852 |
Large numpy matrix memory issues
|
<p>I have two questions, but the first takes precedence.</p>
<p>I was doing some timeit testing of some basic numpy operations that will be relevant to me.</p>
<p>I did the following</p>
<pre><code>n = 5000
j = defaultdict()
for i in xrange(n):
    print i
    j[i] = np.eye(n)
</code></pre>
<p>What happened is, python's memory use almost immediately shot up to 6gigs, which is over 90% of my memory. However, numbers printed off at a steady pace, about 10-20 per second. While numbers printed off, memory use sporadically bounced down to ~4 gigs, and back up to 5, back down to 4, up to 6, down to 4.5, etc etc.
At 1350 iterations I had a segmentation fault.</p>
<p>So my question is, what was actually occurring during this time? Are these matrices actually created one at a time? Why is memory use spiking up and down?</p>
<p>My second question is, I may actually need to do something like this in a program I am working on. I will be doing basic arithmetic and comparisons between many large matrices, in a loop. These matrices will sometimes, but rarely, be dense. They will often be sparse.</p>
<p>If I actually need 5000 5000x5000 matrices, is that feasible with 6 gigs of memory? I don't know what can be done with all the tools and tricks available... Maybe I would just have to store some of them on disk and pull them out in chunks? </p>
<p>Any advice for if I have to loop through many matrices and do basic arithmetic between them?</p>
<p>Thank you. </p>
|
<blockquote>
<p>If I actually need 5000 5000x5000 matrices, is that feasible with 6 gigs of memory?</p>
</blockquote>
<p>If they're dense matrices, and you need them all at the same time, not by a long shot. Consider:</p>
<pre><code>5K * 5K = 25M cells
25M * 8B = 200MB (assuming float64)
5K * 200MB = 1TB
</code></pre>
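<p>The back-of-the-envelope arithmetic above is easy to check directly:</p>

```python
cells_per_matrix = 5000 * 5000            # 25 million cells
bytes_per_matrix = cells_per_matrix * 8   # float64 is 8 bytes per cell
total_bytes = 5000 * bytes_per_matrix     # 5000 dense matrices

print(bytes_per_matrix)  # 200000000      (~200 MB per matrix)
print(total_bytes)       # 1000000000000  (~1 TB overall)
```
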
<hr>
<p>The matrices are being created one at a time. As you get near 6GB, what happens depends on your platform. It might start swapping to disk, slowing your system to a crawl. There might be a fixed-size or max-size swap, so eventually it runs out of memory anyway. It may make assumptions about how you're going to use the memory, guessing that there will always be room to fit your actual working set at any given moment into memory, only to segfault when it discovers it can't. But the one thing it isn't going to do is just work efficiently.</p>
<hr>
<p>You say that most of your matrices are sparse. In that case, use one of the <a href="https://docs.scipy.org/doc/scipy/reference/sparse.html" rel="nofollow noreferrer">sparse matrix</a> representations. If you know which of the 5000 will be dense, you can mix and match dense and sparse matrices, but if not, just use the same sparse matrix type for everything. If this means your occasional dense matrices take 210MB instead of 200MB, but all the rest of your matrices take 1MB instead of 200MB, that's more than worthwhile as a tradeoff.</p>
<hr>
<p>Also, do you actually need to work on all 5000 matrices at once? If you only need, say, the current matrix and the previous one at each step, you can generate them on the fly (or read from disk on the fly), and you only need 400MB instead of 1TB.</p>
<hr>
<p>Worst-case scenario, you can effectively swap things manually, with some kind of caching discipline, like least-recently-used. You can easily keep, say, the last 16 matrices in memory. Keep a dirty flag on each so you know whether you have to save it when flushing it to make room for another matrix. That's about as tricky as it's going to get.</p>
|
python|arrays|numpy|matrix|memory-management
| 1 |
1,909,051 | 50,572,602 |
Value given when variable is empty
|
<p>Suppose I had the following <code>Player</code> base class:</p>
<pre><code>from abc import ABC, abstractmethod

class Player(ABC):
    def __init__(self, name, player_type):
        self.name = name
        self.player_type = player_type
</code></pre>
<p><code>Wizard</code>:</p>
<pre><code>from Player import Player

class Wizard(Player):
    def __init__(self, name, player_type = "Wizard"):
        super().__init__(self,name)
</code></pre>
<p><code>Main</code>:</p>
<pre><code>from Player import Player
from Wizard import Wizard

def main():
    gandalf = Wizard("Gandalf")
    print(gandalf.name)
    # Will print gandalf because the parameter assignment was shifted
    # because self was passed from the child to base class.
    print(gandalf.player_type)

if __name__ == "__main__":
    main()
</code></pre>
<p>I'm aware from <a href="https://stackoverflow.com/questions/50437465/declaring-subclass-without-passing-self">this question</a> that you shouldn't pass <code>self</code> from the subclass to the base class. That being said, suppose you made the mistake of doing so: the line <code>print(gandalf.name)</code> prints <code><Wizard.Wizard object at 0x049F20B0></code> because <code>name</code> was never assigned the intended string. But what exactly does this value mean?</p>
|
<p><code>super()</code> does not return a class; it returns a proxy object, so <code>super().__init__(self, name)</code> behaves much the same as <code>foo.__init__(self, name)</code>, where <code>foo</code> is a "real" instance of <code>Wizard</code>. That being the case, the first parameter <code>self</code> of the call to <code>Player.__init__</code> has already been assigned, so your <em>explicit</em> <code>self</code> is assigned to the <em>next</em> parameter, <code>name</code>, and your second argument <code>name</code> is assigned to the third parameter <code>player_type</code>.</p>
<p>Put another way, with <code>Player</code> being the next class in the MRO, <code>super().__init__(self, name)</code> is the same as <code>Player.__init__(self, self, name)</code>.</p>
<p>Put yet another way, <code>super().__init__</code> is a bound method, and so the <code>__init__</code> function's first parameter <code>self</code> has already been supplied.</p>
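<p>For completeness, a corrected version of the classes from the question could look like this (the only change is not passing <code>self</code> explicitly and forwarding <code>player_type</code>):</p>

```python
from abc import ABC

class Player(ABC):
    def __init__(self, name, player_type):
        self.name = name
        self.player_type = player_type

class Wizard(Player):
    def __init__(self, name, player_type="Wizard"):
        # super().__init__ is already bound to this instance,
        # so self must not be passed again
        super().__init__(name, player_type)

gandalf = Wizard("Gandalf")
print(gandalf.name)         # Gandalf
print(gandalf.player_type)  # Wizard
```
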
|
python|python-3.x
| 2 |
1,909,052 | 26,748,711 |
convexityDefects in OpenCV-Python
|
<p>I am new to OpenCV-Python.<br>
I am trying to get the convexityDefects; I have the following code, shown below.</p>
<pre><code>import cv2
import numpy as np
img = cv2.imread('s4.png')
img_gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
ret, thresh = cv2.threshold(img_gray, 127, 255,0)
contours,hierarchy = cv2.findContours(thresh,2,1)
cnt = contours[0]
hull = cv2.convexHull(cnt,returnPoints = False)
defects = cv2.convexityDefects(cnt,hull)
# some codes here.....
</code></pre>
<hr>
<p>But when I run the codes I got an error..</p>
<pre><code>Traceback (most recent call last):
File "C:/Users/jasonc/Desktop/Pyhton/convexityDefects", line 11, in <module>
defects = cv2.convexityDefects(cnt,hull)
error: ..\..\..\OpenCV-2.4.4\modules\imgproc\src\contours.cpp:1969: error: (-215) ptnum > 3
</code></pre>
<p>What's wrong? I searched the internet and most of the example looks the same.</p>
|
<p>So the problem is that cv2.findContours(thresh,2,1) is giving you a list of separate contours instead of a single contour. In the tutorial from which you got this code, the star image is nice and smooth, so the findContours command returns a single contour at contours[0]. Whatever s4.png is, it must not be smooth and continuous, and therefore findContours is returning a list of contours. One solution to this would be to merge all the contours. </p>
<pre><code>numberofpoints = 0
for contour in contours:
    for point in contour:
        numberofpoints = numberofpoints + 1

allcountours = np.zeros((numberofpoints, 1, 2), dtype=np.int32)

count = 0
for contour in contours:
    for point in contour:
        allcountours[count][0] = [point[0][0], point[0][1]]
        count = count + 1

cnt = allcountours
</code></pre>
<p>this should give you one big contour in allcontours. Therefore cnt = allcountours should give you the right solution. </p>
<p>I had the same problem when trying to create a convex hull and defects for an image of a hand.</p>
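<p>As a side note, the merging loops can also be written in one step with <code>np.vstack</code>, since OpenCV contours are just <code>(N, 1, 2)</code> arrays. A sketch with made-up contour arrays (no OpenCV needed):</p>

```python
import numpy as np

# two fake contours in OpenCV's (N, 1, 2) point layout (made-up coordinates)
c1 = np.array([[[0, 0]], [[1, 0]], [[1, 1]]], dtype=np.int32)
c2 = np.array([[[5, 5]], [[6, 5]]], dtype=np.int32)

# stack along the point axis: one big contour containing every point
allcontours = np.vstack([c1, c2])
print(allcontours.shape)  # (5, 1, 2)
```
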
|
python-2.7|opencv
| 0 |
1,909,053 | 26,850,122 |
I'm having trouble with input and output from files
|
<p>I'm having some trouble with my code in Python. I just started learning input and output in class, and how to have Python read in data from text files (barely; I'm still a huge beginner). Anyway, my assignment is to have my program read in data from a file and run it through my program. The problem is, I don't have a good idea of how to do that and was wondering if you could help me out. The text file just contains a huge list of numbers for Python to use in my program. My program finds the mean, median, and standard deviation of a list of numbers that are given to it. Now, instead of user-input data, my professor wants Python to use data from a file that was already pre-written.</p>
<p>My code:</p>
<pre><code>import math

def mean(values):
    average = sum(values)*1.0/len(values)
    return average

def deviation(values):
    length = len(values)
    m = mean(values)
    total_sum = 0
    for i in range(length):
        total_sum += (values[i]-m)**2
    root = total_sum*1.0/length
    return math.sqrt(root)

def median(values):
    if len(values)%2 != 0:
        return sorted(values)[len(values)//2]
    else:
        midavg = (sorted(values)[len(values)//2] + sorted(values)[len(values)//2-1])//2.0
        return midavg

def main():
    x = [15, 17, 40, 16, 9]
    print (mean(x))
    print (deviation(x))
    print (median(x))

main()
</code></pre>
<p>Now, I have to edit my code so it opens the file, takes the data, and reads the data through my equations. Only problem is, I don't have a good idea on how to do that. Could anyone please help?</p>
<p>I've tried basic input and output myself, but it's done no justice in helping me with the bigger picture.</p>
<pre><code>def main():
    total = 0
    input = open('Stats.txt')
    for nextline in input:
        mylist = nextline.split()
        for n in mylist:
            total += int(n)
    print(total)
</code></pre>
|
<p>You have to fill your list from the file.</p>
<p>Open the file and iterate over the lines. Convert the content of each line to an <code>integer</code> and append it to your list. If you don't convert the data you'll get strings, and those won't work with mathematical operations. Close your file.</p>
<p>Now work with your list.</p>
<pre><code>filename = 'newfile.txt'
data = []

source = open(filename)
for line in source:
    data.append(int(line))
source.close()

print(mean(data))
print(deviation(data))
# more stuff with data
</code></pre>
<p>There is a way to let Python close the file for you so you won't have to remember it.</p>
<pre><code>with open(filename) as source:
    for line in source:
        data.append(int(line))
</code></pre>
<p>According to your edit this might not be what you want. If the numbers are in one line, rather than one number per line, you'll have to take a different approach (<code>split</code>).</p>
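<p>For the numbers-on-one-line case, <code>split</code> works like this; the sketch below writes a throwaway <code>Stats.txt</code> in a temp directory just so it is self-contained (the file name and contents are made up):</p>

```python
import os
import tempfile

# create a sample file with all numbers on one line (demo data only)
path = os.path.join(tempfile.mkdtemp(), 'Stats.txt')
with open(path, 'w') as f:
    f.write('15 17 40 16 9')

data = []
with open(path) as source:
    for line in source:
        for token in line.split():   # split() handles many numbers per line
            data.append(int(token))

print(data)  # [15, 17, 40, 16, 9]
```

<p>The same loop also works for one-number-per-line files, since <code>split()</code> on such a line just yields one token.</p>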
|
python|math|python-3.4
| 0 |
1,909,054 | 64,839,094 |
Why axes aren't visible here
|
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
f1,(ax1,ax2) = plt.subplots(num=1,nrows=2, ncols=1)
f2,(ax3,ax4) = plt.subplots(num=2,nrows=2, ncols=1)
plt.close(2)
plt.close(1)
plt.figure(2)
plt.show()
</code></pre>
<p>Figure 2 is shown, but there are no axes on the figure. Why is that?
And how can I make it show axes while figure 1 is closed, and vice versa?</p>
|
<p>So I created an input variable where the user can choose the figure they want to see (in your case this variable can be the output of some calculation or something).</p>
<p><code>glob = int(input('Enter figure number'))</code></p>
<p>Then I used the same plotting code you provided (I added the lists of values to make sure the thing actually works) to define the figures:</p>
<pre><code>f1,(ax1, ax2) = plt.subplots(num=1,nrows=2, ncols=1)
ax1.plot(x,y)
ax2.plot(x,np.exp(x))
f2,(ax3,ax4) = plt.subplots(num=2,nrows=2, ncols=1)
ax3.plot(x,y)
ax4.plot(x,np.sqrt(x))
</code></pre>
<p>Then I used a condition on the user entry to select the figure to plot by closing the other one:</p>
<pre><code>if glob == 1:
    plt.close(2)
elif glob == 2:
    plt.close(1)
else:
    print('Figure non defined')
</code></pre>
<p><strong>Please note</strong> that this solution works only once, and you will have to re-run the program each time you want to choose a figure.
To solve this issue, here is another alternative:</p>
<pre><code>i = 0
while i < 4:
    glob = int(input('Enter figure number'))
    if glob == 1:
        f1, (ax1, ax2) = plt.subplots(num=1, nrows=2, ncols=1)
        ax1.plot(x, y)
        ax2.plot(x, np.exp(x))
        plt.show()
    elif glob == 2:
        f2, (ax3, ax4) = plt.subplots(num=2, nrows=2, ncols=1)
        ax3.plot(x, y)
        ax4.plot(x, np.sqrt(x))
        plt.show()
    else:
        print('Figure non defined')
    i += 1
</code></pre>
<p>Practically, you need to define the figures inside the loop.</p>
|
python|matplotlib
| 0 |
1,909,055 | 57,804,747 |
Why can't python threads run again once they have finished?
|
<p>Assume the following minimal threading example:</p>
<pre><code>from threading import Thread
from time import sleep

def main():
    t = Thread(target=foo)
    t.daemon = True
    t.start()
    sleep(100)
    t.start()

def foo():
    print "foo!"

main()
</code></pre>
<p>This attempts to run <code>t</code> twice.<br>
The first time succeeds, but the second one throws an exception stating <em>"threads can only run once"</em></p>
<p>This behavior makes no sense to me.
I would expect a finished thread to be ready to start again.</p>
<p>My question is <strong>WHY</strong> threads are not allowed to start again once they have finished?</p>
<hr>
<p>This question got a "not clear what you're asking" vote - Please tell me what to explain better</p>
<p>This question got an "opinion based" vote. This is NOT opinion based. I am asking you to explain design decisions of python. I hope they didn't go by gut feeling. I'm pretty sure they didn't.</p>
|
<p>The deeper reason why you can't run the same thread more than one time is the same as the reason why you can't go on the same trip more than once. Even if you stay in all of the same places, and do all of the same things when you go back a second time, it's still a <em>different</em> trip.</p>
<p>A thread (a real <em>thread</em>, not the <code>Thread</code> object that gives you the ability to start and manage the thread) is an execution of your code (i.e., a "trip" through your code).</p>
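<p>Practically, if you want the same work to run again, you construct a fresh <code>Thread</code> around the same target. A small sketch (appending to a list instead of printing, so the effect is visible):</p>

```python
import threading

results = []

def foo():
    results.append("foo!")

# Each Thread object represents exactly one execution,
# so "running it again" means building a new Thread each time.
for _ in range(2):
    t = threading.Thread(target=foo)
    t.start()
    t.join()

print(results)  # ['foo!', 'foo!']
```
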
|
python|multithreading
| 0 |
1,909,056 | 56,090,404 |
How install Flask correctly for python 3.7 to run this github project?
|
<p>I want to run this GitHub code on my macOS machine:
<a href="https://github.com/llSourcell/AI_Startup_Prototype/tree/master/flaskSaaS-master" rel="nofollow noreferrer">https://github.com/llSourcell/AI_Startup_Prototype/tree/master/flaskSaaS-master</a></p>
<p>I have both the latest pip, Python 2.7 and 3.7</p>
<p>I have also installed Flask (for python 3: <a href="https://dev.to/sahilrajput/install-flask-and-create-your-first-web-application-2dba" rel="nofollow noreferrer">https://dev.to/sahilrajput/install-flask-and-create-your-first-web-application-2dba</a> )</p>
<p>and made a hello world in PyCharm</p>
<p>I use the given setup instructions from the Github project:
I go to the folder (I have downloaded it and extracted the zip)
then I run the first setup command in the terminal:</p>
<pre><code>make install && make dev
</code></pre>
<p>And I get the following message:</p>
<pre><code>pip install -r requirements.txt
Collecting Flask==0.10.1 (from -r requirements.txt (line 1))
Could not fetch URL https://pypi.python.org/simple/flask/: There was a problem confirming the ssl certificate: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590) - skipping
Could not find a version that satisfies the requirement Flask==0.10.1 (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for Flask==0.10.1 (from -r requirements.txt (line 1))
make: *** [install] Error 1
</code></pre>
<p>Thank you for the help</p>
|
<p>Seems like an issue with the certificate verification. You can try working around it by adding <code>pypi.org</code> (the registry) and <code>files.pythonhosted.org</code> (the file storage) as trusted hosts. </p>
<p>Try this:</p>
<pre><code>pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org Flask==0.10.1
</code></pre>
<p>The issue may also be fixed by re-installing pip.</p>
|
python|web|github|flask
| 1 |
1,909,057 | 18,479,146 |
Python : comparing tuples
|
<p>I am trying to compare tuple A values with tuple B values, and make a 3rd tuple with common values. This is my code so far. Any attempts I have made to get a 3rd tuple with common values failed. Any help is appreciated.</p>
<pre><code>#1st nr , print divs
x = int(raw_input('x=' ))
divizori = ()
for i in range(1,x):
if x%i == 0:
divizori = divizori + (i,)
print divizori
#2nd nr , print divs
y = int(raw_input('y=' ))
div = ()
for i in range(1,y):
if y%i == 0:
div = div + (i,)
print div
#code atempt to print commom found divs
</code></pre>
|
<p>You can take advantage of set operations:</p>
<pre><code>>>> a = (1,2,3,4)
>>> b = (2,3,4,5)
>>> tuple(set(a).intersection(set(b)))
(2, 3, 4)
</code></pre>
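<p>Note that a <code>set</code> does not preserve order, so the resulting tuple may come out in any order. If the order of the first tuple matters, a hedged alternative is to filter it against a set built from the second:</p>

```python
a = (1, 2, 3, 4)
b = (5, 4, 3, 2)

# Build the set once for O(1) membership tests, then keep a's order
b_set = set(b)
common = tuple(x for x in a if x in b_set)
print(common)  # -> (2, 3, 4)
```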
|
python|tuples
| 4 |
1,909,058 | 18,351,447 |
How can I make a nested query more efficient?
|
<p>Let's say I have 4 tables: <code>A(id, type, protocol), B(id, A_id, info), C(id, B_id, details) and D(id, C_id, port_info)</code>. Table <code>A</code> and Table <code>B</code> are connected via foreign key <code>id</code> from Table <code>A</code> and <code>A_id</code> from Table <code>B</code>. Similarly, Table <code>B</code> and Table <code>C</code> are connected via foreign key <code>id</code> from Table <code>B</code> and <code>B_id</code> from Table <code>C</code>, and, in the same way, Table <code>C</code> and Table <code>D</code> are also connected.</p>
<p>Now, I want to get <code>port_info</code> from Table <code>D</code> of all the <code>protocols</code> from Table <code>A</code>.
I know one method whose time complexity is <code>O(n^4)</code>, which I'm using currently. The method is as follow :</p>
<pre><code>db = MySQLdb.connect(host="localhost", user="root", passwd="", db="mydb")
cur = db.cursor()
cur.execute("SELECT * FROM A")
A_results = cur.fetchall()
for A_row in A_results :
id = A_row[0]
cur.execute("SELECT * FROM B WHERE A_id = %d " % (id ))
B_results = cur.fetchall()
for B_row in B_results :
id = B_row[0]
cur.execute("SELECT * FROM C WHERE B_id = %d " % (id ))
        C_results = cur.fetchall()
for C_row in C_results :
id = C_row[0]
cur.execute("SELECT * FROM D WHERE C_id = %d " % (id ))
D_results = cur.fetchall()
for D_row in D_results :
                print "Port = " + str(D_row[2])  # port_info column
</code></pre>
<p>But this method takes <code>O(n^4)</code>, so is there any efficient way in terms of <code>time complexity</code> , that can solve this problem.</p>
<p>Your suggestions are highly appreciated.</p>
|
<p>Execute it in a single <code>JOIN</code> query and let MySQL do the necessary optimizations while handling large data sets (which, after all, is what the database is best at), providing your application with a single result set. The query looks like this:</p>
<pre><code>SELECT A.protocol, D.port_info
FROM A JOIN B ON A.id = B.A_id
JOIN C ON B.id = C.B_id
JOIN D ON C.id = D.C_id
ORDER BY protocol
</code></pre>
<p>...and then use your cursor to go through that single resultset.</p>
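<p>For illustration, here is the same single-query approach end to end, using an in-memory <code>sqlite3</code> database as a stand-in for MySQL (the sample rows are made up):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Minimal stand-in schema matching tables A, B, C, D from the question
cur.executescript("""
    CREATE TABLE A (id INTEGER PRIMARY KEY, type TEXT, protocol TEXT);
    CREATE TABLE B (id INTEGER PRIMARY KEY, A_id INTEGER, info TEXT);
    CREATE TABLE C (id INTEGER PRIMARY KEY, B_id INTEGER, details TEXT);
    CREATE TABLE D (id INTEGER PRIMARY KEY, C_id INTEGER, port_info TEXT);
    INSERT INTO A VALUES (1, 'x', 'http');
    INSERT INTO B VALUES (1, 1, 'b');
    INSERT INTO C VALUES (1, 1, 'c');
    INSERT INTO D VALUES (1, 1, '80');
""")
# One round trip to the database instead of nested loops of queries
cur.execute("""
    SELECT A.protocol, D.port_info
    FROM A JOIN B ON A.id = B.A_id
           JOIN C ON B.id = C.B_id
           JOIN D ON C.id = D.C_id
    ORDER BY A.protocol
""")
rows = cur.fetchall()
print(rows)  # -> [('http', '80')]
```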
|
python|mysql|database|mysql-python|time-complexity
| 2 |
1,909,059 | 18,249,830 |
Fetch latest related objects in django
|
<p>In my django app I have 'Documents'. Each document has one or more 'Revisions' that are ordered by date created. I'd like a way to get the latest revision for every document. The best I have so far is the code below, but I'm thinking there must be a way to do this with less database queries?</p>
<pre><code>def get_document_templates():
result = []
for d in Document.objects.filter(is_template=True).all():
result.append(d.documentrevision_set.latest())
return result
</code></pre>
<p>I've been investigating 'annotate' and 'aggregate' filters, but can't figure out how to do this more efficiently. I'd rather not get my hands dirty in raw SQL, as the database backend may be changing in the near future. Anyone have any ideas ?</p>
<p>thanks!!</p>
|
<p>I think there are two approaches. One explained this this blog post <a href="http://blog.roseman.org.uk/2010/08/14/getting-related-item-aggregate/" rel="nofollow noreferrer">"Getting the related item in an aggregate"</a>. This will get you each <code>Document</code> with an attached 'Revision' (if you need access to both).</p>
<p>If you only want the <code>Revision</code>, you could try making use of the <code>values()</code> method. Its functionality changes subtly when used with aggregations:</p>
<blockquote>
<p>As with the filter() clause, the order in which annotate() and values() clauses are applied to a query is significant. If the values() clause precedes the annotate(), the annotation will be computed using the grouping described by the values() clause.</p>
<p>However, if the annotate() clause precedes the values() clause, the annotations will be generated over the entire query set. In this case, the values() clause only constrains the fields that are generated on output.</p>
</blockquote>
<p>So you could do a grouping of the <code>Revision</code> by <code>Document</code> and aggregate on <code>date</code></p>
<pre><code>Revision.objects.values('document').annotate(Max('date'))
</code></pre>
|
python|django|orm
| 1 |
1,909,060 | 55,556,080 |
Solution for problem 3 in google code jam 2019
|
<p>I tried solving the google code jam question "Cryptopangrams" yesterday. I was able to pass the example cases, but my solution was not accepted.</p>
<p>The problem statement can be found here:
<a href="https://codingcompetitions.withgoogle.com/codejam/round/0000000000051705/000000000008830b" rel="nofollow noreferrer">https://codingcompetitions.withgoogle.com/codejam/round/0000000000051705/000000000008830b</a></p>
<p>I tried to find the factors of the number using traditional methods (Pollard's rho seemed to be overkill for the constraints) and then sorted all the unique factors into a list. There is a one-to-one correspondence between the letters of the alphabet and the elements in the list.</p>
<p>So then, i substituted the letters into the products, and returned the string.</p>
<p>The code worked when I tried it on my laptop against the two example cases given in the question, but failed when I uploaded it to the website. </p>
<pre class="lang-py prettyprint-override"><code># Function to find the prime factors of n and returns them in a list
def prime_factors(n):
i = 2
factors = []
while i * i <= n:
if n % i:
i += 1
else:
n //= i
factors.append(i)
if n > 1:
factors.append(n)
return factors
# Gets number of cases
cases = int(input())
for case in range(cases):
text = ''
alphabets = 'abcdefghijklmnopqrstuvwxyz'
ch = input().split()
no_of_chars = int(ch[1])
max_prime = int(ch[0])
# gets products from input
products = [int(i) for i in input().split()]
flag = None
factors = []
pairs = []
# For loop to find factors and append them to pairs
for i in range(len(products)):
l = prime_factors(products[i])
a, b = l[0], l[1]
if i > 0:
if pairs[i-1][1] == a:
flag = True
# Swaps elements of pair if not in linked order
else:
a, b = b, a
pairs.append([a, b])
# Adds new factors to list
if a in factors:
flag = True
else:
factors.append(a)
if b in factors:
flag = False
else:
factors.append(b)
# Sorts the factors
factors.sort()
for i in pairs:
text += alphabets[factors.index(i[0])]
text += alphabets[factors.index(pairs[-1][1])]
# Prints output in the required format
    print('Case #{}: {}'.format(case + 1, text.upper()))
</code></pre>
<p>On my laptop, given the input (copy-pasted from : <a href="https://codingcompetitions.withgoogle.com/codejam/round/0000000000051705/000000000008830b" rel="nofollow noreferrer">https://codingcompetitions.withgoogle.com/codejam/round/0000000000051705/000000000008830b</a>)</p>
<pre><code>2
103 31
217 1891 4819 2291 2987 3811 1739 2491 4717 445 65 1079 8383 5353 901 187 649 1003 697 3239 7663 291 123 779 1007 3551 1943 2117 1679 989 3053
10000 25
3292937 175597 18779 50429 375469 1651121 2102 3722 2376497 611683 489059 2328901 3150061 829981 421301 76409 38477 291931 730241 959821 1664197 3057407 4267589 4729181 5335543
</code></pre>
<p>I am getting the output : </p>
<pre><code>Case #1: CJQUIZKNOWBEVYOFDPFLUXALGORITHMS
Case #2: SUBDERMATOGLYPHICFJKNQVWXZ
</code></pre>
<p>which is the same as given on the website.
But when i submitted it, i received the following message.</p>
<pre><code>Wrong Answer! Test Set Skipped
</code></pre>
<p>Could anyone tell me where I went wrong?
Thanks!</p>
|
<pre><code>1
107 29
15 15 15 15 21 49 77 143 221 323 437 667 899 1147 1517 1763 2021 2491 3127 3599 4087 4757 5183 5767 6557 7387 8633 9797 10403
Case #1: ABABACCDEFGHIJKLMNOPQRSTUVWXYZ
</code></pre>
<p>Try this test case, in case your solution gets it wrong.
Explanation:
suppose the semiprimes are of the form pq, pq, qr.
If you decide to start with p, the recovered sequence is p, q, p, but then this p will not divide qr;
if you start with q, the sequence q, p, q, r works.</p>
<p>So the logic is: keep taking the gcd of the first semiprime with the following ones until you get a gcd != the first semiprime; that gcd is one of your primes at a known position.
Then iterate backwards and forwards, dividing it out, to recover the other primes.
For a solution:
<a href="https://shashankmishracoder.wordpress.com/2019/04/07/google-code-jam-2019-qualification-round-problem-c-cryptopangrams/" rel="nofollow noreferrer">https://shashankmishracoder.wordpress.com/2019/04/07/google-code-jam-2019-qualification-round-problem-c-cryptopangrams/</a></p>
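<p>A minimal sketch of that gcd idea (a hypothetical helper; it assumes adjacent products share exactly one prime and that not all products are equal, which holds here since a pangram uses 26 distinct letters):</p>

```python
from math import gcd

def recover_primes(products):
    # products[i] = primes[i] * primes[i+1]. Find the first adjacent pair
    # of products that differ; their gcd is the prime they share.
    i = next(k for k in range(len(products) - 1)
             if products[k] != products[k + 1])
    primes = [0] * (len(products) + 1)
    primes[i + 1] = gcd(products[i], products[i + 1])
    # Walk backwards, then forwards, dividing the known prime out.
    for k in range(i, -1, -1):
        primes[k] = products[k] // primes[k + 1]
    for k in range(i + 1, len(products)):
        primes[k + 1] = products[k] // primes[k]
    return primes

# e.g. primes 3, 5, 3, 7 give products 15, 15, 21
print(recover_primes([15, 15, 21]))  # -> [3, 5, 3, 7]
```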
|
python|python-3.x
| 4 |
1,909,061 | 55,184,622 |
Python - Failed to remove all spaces
|
<p><a href="https://i.stack.imgur.com/SLpum.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SLpum.png" alt="enter image description here"></a></p>
<p>Hello, here is what I tried to do:</p>
<p><strong>Desired result:</strong>
1. Import data from EXCEL
2. Remove all spaces (include <strong>spaces between and new line</strong>s)
3. Group by column 'BOM', and run the distinct count of 'NAME'</p>
<p><strong>Problem</strong>:</p>
<ol>
<li><p>I have tried 2 ways (remove\ join split) found on previous posts on stackoverflow, but failed as seen in below image.</p></li>
<li><p>In the result part, please see the column 'BOM', why above there is no spaces, but with spaces in the 2nd result?</p></li>
</ol>
<p>Many thanks for any advice.</p>
<pre><code>import numpy as np
import pandas as pd
import datetime as datetime
import os
import xlrd
os.chdir('C:/Users/mac/Desktop')
t=pd.read_excel('testdata.xlsx')
# 1st method to remove spaces
#while ' ' in t:
# t.remove(' ')
#2nd method to remove spaces
def remove(t):
return "".join(t.split())
print (t,'\n------')
t1=t.fillna(method='ffill')
t1.groupby(['BOM']).NAME.nunique()
# Group by column "BOM", and then distinct count based on Name
</code></pre>
|
<p>Simply do:</p>
<pre><code>df.NAME = df.NAME.str.replace(' ', '')
df.BOM = df.BOM.str.replace(' ', '')
</code></pre>
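<p>Note that <code>str.replace(' ', '')</code> removes only literal spaces. Since the goal includes newlines too, a hedged variant uses a whitespace regex (the sample data below is made up):</p>

```python
import pandas as pd

df = pd.DataFrame({'NAME': ['te st\n1'], 'BOM': [' B OM ']})
# \s matches spaces, tabs and newlines alike
df['NAME'] = df['NAME'].str.replace(r'\s+', '', regex=True)
df['BOM'] = df['BOM'].str.replace(r'\s+', '', regex=True)
print(df)
```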
|
python
| 0 |
1,909,062 | 55,388,815 |
Access issues with Google Cloud Functions read access to Google Cloud Firestore Collection in Default Database
|
<p>I am trying to write a cloud function in python that would read a collection in Google Cloud Firestore (Native) [not the Realtime Database or Datastore].</p>
<p>I have created a Service Account that has below Roles for the project:
- Project Owner
- Firebase Admin
- Service Account User
- Cloud Functions Developer
- Project Editor</p>
<p>When run on my local I am setting the service account credential in my environment: GOOGLE_APPLICATION_CREDENTIALS</p>
<p>My cloud function is able to access Cloud Storage. I am only having issues with Cloud Firestore.</p>
<ol>
<li><p>I have tried using both the Client Python SDK and the Admin SDK (Python). The Admin SDK seems to only be available for the realtime database as it requires a Database URL to connect.</p></li>
<li><p>I have tried running both from my dev machine and as a cloud function.</p></li>
<li><p>I also changed the Firestore access rules to below for unrestricted access:</p></li>
</ol>
<pre><code>service cloud.firestore {
match /databases/{database}/documents {
match /{document=**} {
allow read, write: if true;
}
}
}
</code></pre>
<p>I am trying to run the same code in the Google Documentation..</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import firestore
def process_storage_file(data, context):
# Add a new document
db = firestore.Client()
doc_ref = db.collection(u'users').document(u'alovelace')
doc_ref.set({
u'first': u'Ada',
u'last': u'Lovelace',
u'born': 1815
})
# Then query for documents
users_ref = db.collection(u'users')
docs = users_ref.get()
for doc in docs:
print(u'{} => {}'.format(doc.id, doc.to_dict()))
</code></pre>
<p>I am not able to get the Cloud Function to connect to Google Cloud Firestore. I get the error:</p>
<p>line 3, in raise_from google.api_core.exceptions.PermissionDenied: <strong>403 Missing or insufficient permissions.</strong></p>
<p>Both the cloud function and Firestore are in the same GCP Project.</p>
|
<p>The service account you specified in the Cloud Function UI configuration needs to have the Datastore User role (Cloud Firestore in Native mode uses the Datastore IAM roles).</p>
|
python-3.x|google-cloud-firestore|google-cloud-functions
| 1 |
1,909,063 | 57,390,014 |
How to click this button using selenium webdriver?
|
<p>I am trying to scrape the data table from nasdaq: <a href="https://www.nasdaq.com/symbol/msft/interactive-chart?timeframe=5d" rel="nofollow noreferrer">https://www.nasdaq.com/symbol/msft/interactive-chart?timeframe=5d</a></p>
<p>What I do is using python and selenium webdriver to click the table button(on top of the chart, with a little table logo) and then scrape.</p>
<pre class="lang-py prettyprint-override"><code>submit = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, '#dataTableBtn')))
submit.click()
</code></pre>
<p>But it does not work. </p>
<p>Button html here:</p>
<pre><code><div id="dataTableBtn" class="btn hideSmallIR stx-collapsible" onclick="dataTableLoader()"><span>Data Table</span></div>
</code></pre>
<p>EC and By</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
</code></pre>
|
<p>The <em>chart</em> and the associated elements are within an <code><iframe></code> so you have to:</p>
<ul>
<li>Induce <em>WebDriverWait</em> for the desired <em>frame to be available and switch to it</em>.</li>
<li>Induce <em>WebDriverWait</em> for the desired <em>element to be clickable</em>.</li>
<li><p>You can use either of the following <a href="https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890">Locator Strategies</a>:</p>
<ul>
<li><p>Using <code>CSS_SELECTOR</code>:</p>
<pre><code>driver.get("https://www.nasdaq.com/symbol/msft/interactive-chart?timeframe=5d")
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.CSS_SELECTOR,"iframe[src*='edgar-chartiq']")))
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "div.hideSmallIR#dataTableBtn>span"))).click()
</code></pre></li>
<li><p>Using <code>XPATH</code>:</p>
<pre><code>driver.get("https://www.nasdaq.com/symbol/msft/interactive-chart?timeframe=5d")
WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,"//iframe[contains(@src, 'edgar-chartiq')]")))
WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='btn hideSmallIR stx-collapsible' and @id='dataTableBtn']/span[text()='Data Table']"))).click()
</code></pre></li>
<li><p><strong>Note</strong> : You have to add the following imports :</p>
<pre><code>from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
</code></pre></li>
</ul></li>
<li><p>Browser Snapshot:</p></li>
</ul>
<p><a href="https://i.stack.imgur.com/ap72t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ap72t.png" alt="nasdaq_datatable"></a></p>
<blockquote>
<p>Here you can find a relevant discussion on <a href="https://stackoverflow.com/questions/53203417/ways-to-deal-with-document-under-iframe">Ways to deal with #document under iframe</a></p>
</blockquote>
|
python-3.x|selenium|xpath|css-selectors|webdriverwait
| 1 |
1,909,064 | 57,369,061 |
Modifying and comparing nested dict
|
<pre><code>for mov_id in user_ratings:
for id_key in mov_id:
</code></pre>
<p>The loop was not working.</p>
|
<p>You should pop <code>id_key</code> from the dict you iterate the keys through instead, which is <code>mov_id</code>.</p>
<p>Change:</p>
<pre><code>user_ratings.pop(id_key)
</code></pre>
<p>to:</p>
<pre><code>mov_id.pop(id_key)
</code></pre>
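<p>Since the question body is truncated, the exact data shape is a guess; but note that popping keys while iterating over the same dict raises a <code>RuntimeError</code> in Python 3. A hedged sketch that iterates over a snapshot of the keys:</p>

```python
# Hypothetical structure: a list of per-user rating dicts
user_ratings = [{'m1': 4, 'm2': 5}, {'m1': 3}]

for mov_id in user_ratings:
    for id_key in list(mov_id):   # copy of the keys, safe to pop from mov_id
        if id_key == 'm1':        # example condition
            mov_id.pop(id_key)

print(user_ratings)  # -> [{'m2': 5}, {}]
```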
|
python|python-3.x|loops
| 0 |
1,909,065 | 58,513,607 |
Kernel dies when running sample python code
|
<p>I was trying to run this code from <a href="https://keras.io/examples/cifar10_cnn/" rel="nofollow noreferrer">https://keras.io/examples/cifar10_cnn/</a>. Initially, it spat out errors regarding TensorFlow, Keras and CUDA. I solved that by updating everything. Now when I run this code in Jupyter Notebook, my kernel dies almost instantly. Mind you, I am not intentionally using CUDA; it just gave some error, which surprised me because I didn't even program it to use that. </p>
<pre><code>from __future__ import print_function
import keras
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
import os
batch_size = 32
num_classes = 10
epochs = 100
data_augmentation = True
num_predictions = 20
save_dir = os.path.join(os.getcwd(), 'saved_models')
model_name = 'keras_cifar10_trained_model.h5'
# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# Convert class vectors to binary class matrices.
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
# initiate RMSprop optimizer
opt = keras.optimizers.RMSprop(learning_rate=0.0001, decay=1e-6)
# Let's train the model using RMSprop
model.compile(loss='categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'])
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
if not data_augmentation:
print('Not using data augmentation.')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
else:
print('Using real-time data augmentation.')
# This will do preprocessing and realtime data augmentation:
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
zca_epsilon=1e-06, # epsilon for ZCA whitening
rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180)
# randomly shift images horizontally (fraction of total width)
width_shift_range=0.1,
# randomly shift images vertically (fraction of total height)
height_shift_range=0.1,
shear_range=0., # set range for random shear
zoom_range=0., # set range for random zoom
channel_shift_range=0., # set range for random channel shifts
# set mode for filling points outside the input boundaries
fill_mode='nearest',
cval=0., # value used for fill_mode = "constant"
horizontal_flip=True, # randomly flip images
vertical_flip=False, # randomly flip images
# set rescaling factor (applied before any other transformation)
rescale=None,
# set function that will be applied on each input
preprocessing_function=None,
# image data format, either "channels_first" or "channels_last"
data_format=None,
# fraction of images reserved for validation (strictly between 0 and 1)
validation_split=0.0)
# Compute quantities required for feature-wise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(x_train)
# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(x_train, y_train,
batch_size=batch_size),
epochs=epochs,
validation_data=(x_test, y_test),
workers=4)
# Save model and weights
if not os.path.isdir(save_dir):
os.makedirs(save_dir)
model_path = os.path.join(save_dir, model_name)
model.save(model_path)
print('Saved trained model at %s ' % model_path)
# Score trained model.
scores = model.evaluate(x_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
</code></pre>
<h1>Error Stack:</h1>
<pre><code>ImportError Traceback (most recent call last)
~\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in <module>
57
---> 58 from tensorflow.python.pywrap_tensorflow_internal import *
59 from tensorflow.python.pywrap_tensorflow_internal import __version__
~\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py in <module>
27 return _mod
---> 28 _pywrap_tensorflow_internal = swig_import_helper()
29 del swig_import_helper
~\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py in swig_import_helper()
23 try:
---> 24 _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
25 finally:
~\Anaconda3\envs\cv_course\lib\imp.py in load_module(name, file, filename, details)
242 else:
--> 243 return load_dynamic(name, filename, file)
244 elif type_ == PKG_DIRECTORY:
~\Anaconda3\envs\cv_course\lib\imp.py in load_dynamic(name, path, file)
342 name=name, loader=loader, origin=path)
--> 343 return _load(spec)
344
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
<ipython-input-1-429941c8ff8d> in <module>
1 from __future__ import print_function
----> 2 import keras
3 from keras.datasets import cifar10
4 from keras.preprocessing.image import ImageDataGenerator
5 from keras.models import Sequential
~\Anaconda3\envs\cv_course\lib\site-packages\keras\__init__.py in <module>
1 from __future__ import absolute_import
2
----> 3 from . import utils
4 from . import activations
5 from . import applications
~\Anaconda3\envs\cv_course\lib\site-packages\keras\utils\__init__.py in <module>
4 from . import data_utils
5 from . import io_utils
----> 6 from . import conv_utils
7
8 # Globally-importable utils.
~\Anaconda3\envs\cv_course\lib\site-packages\keras\utils\conv_utils.py in <module>
7 from six.moves import range
8 import numpy as np
----> 9 from .. import backend as K
10
11
~\Anaconda3\envs\cv_course\lib\site-packages\keras\backend\__init__.py in <module>
87 elif _BACKEND == 'tensorflow':
88 sys.stderr.write('Using TensorFlow backend.\n')
---> 89 from .tensorflow_backend import *
90 else:
91 # Try and load external backend.
~\Anaconda3\envs\cv_course\lib\site-packages\keras\backend\tensorflow_backend.py in <module>
3 from __future__ import print_function
4
----> 5 import tensorflow as tf
6 from tensorflow.python.framework import ops as tf_ops
7 from tensorflow.python.training import moving_averages
~\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\__init__.py in <module>
32
33 # pylint: disable=g-bad-import-order
---> 34 from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
35 from tensorflow.python.tools import module_util as _module_util
36
~\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\python\__init__.py in <module>
47 import numpy as np
48
---> 49 from tensorflow.python import pywrap_tensorflow
50
51 # Protocol buffers
~\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\python\pywrap_tensorflow.py in <module>
72 for some common reasons and solutions. Include the entire stack trace
73 above this error message when asking for help.""" % traceback.format_exc()
---> 74 raise ImportError(msg)
75
76 # pylint: enable=wildcard-import,g-import-not-at-top,unused-import,line-too-long
ImportError: Traceback (most recent call last):
File "C:\Users\Hamza\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Hamza\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Hamza\Anaconda3\envs\cv_course\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Hamza\Anaconda3\envs\cv_course\lib\imp.py", line 243, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Hamza\Anaconda3\envs\cv_course\lib\imp.py", line 343, in load_dynamic
return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
</code></pre>
<p>Even though I have installed all the relevant packages, and they have been working like a clock before.</p>
<p>When I ran this on another machine it simply started downloading CIFAR-10, but that was at uni. I need to complete my work on the home PC and want to know how to fix it.
How can I successfully execute it?</p>
|
<p>You might be trying to grab the GPU by accident, though from the trace it doesn't look like it. You can try adding: </p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"]="-1"
</code></pre>
<p>to the very top of the file you are using to force tf to run on your cpu. </p>
<p>You should also check to make sure you only have one version of tf and python etc. installed in your current environment.</p>
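<p>As a quick sanity check before reinstalling, it can help to confirm which interpreter the notebook kernel is actually using, since mixed conda environments are a common cause of this DLL load error:</p>

```python
import sys
import platform

# If this path is not the environment where TensorFlow was installed,
# the DLL load failure is an environment mix-up, not a broken package.
print(sys.executable)
print(platform.python_version())
```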
|
python|tensorflow|keras|jupyter-notebook|conv-neural-network
| 0 |
1,909,066 | 58,351,760 |
Flask SQLAlchemy query returns a querystring instead of an actual resultselt
|
<p>I have used Flask-SQLAlchemy to create a mixin in a file called <code>itemAbstract.py</code>, to be shared by two model classes: <code>ItemModel</code> and <code>ItemHistoryModel</code> respectively. Below is the code I have written in itemAbstract.py</p>
<pre><code>from databaseHandler import databaseHandler
from sqlalchemy.ext.declarative import declared_attr
# pylint: disable=maybe-no-member
class Item(databaseHandler.Model):
__abstract__ = True
itemName = databaseHandler.Column(databaseHandler.String(80), nullable = False)
price = databaseHandler.Column(databaseHandler.Numeric, nullable = False)
itemImage = databaseHandler.Column(databaseHandler.String(1000), nullable = False)
@classmethod
@declared_attr
def restaurantId(cls):
return databaseHandler.Column(
databaseHandler.Integer, databaseHandler.ForeignKey("restaurant.restaurantId"))
@classmethod
@declared_attr
def restaurant(cls):
return databaseHandler.relationship(
"RestaurantModel", backref=databaseHandler.backref('items', lazy=True))
@classmethod
@declared_attr
def productTypeId(cls):
return databaseHandler.Column(
databaseHandler.Integer, databaseHandler.ForeignKey("product_type.productTypeId"))
@classmethod
@declared_attr
def productType(cls):
return databaseHandler.relationship(
"ProductTypeModel", backref=databaseHandler.backref('items', lazy=True))
</code></pre>
<p>And I have inherited it in the <code>itemModel.py</code> and <code>itemHistoryModel.py</code> like so:</p>
<pre><code>from databaseHandler import databaseHandler
from sqlalchemy import and_, or_
from abstracts.itemAbstract import Item
# pylint: disable=maybe-no-member
class ItemModel(Item):
__tablename__ = 'item'
itemId = databaseHandler.Column(databaseHandler.Integer, primary_key = True)
</code></pre>
<p>And </p>
<pre><code>from databaseHandler import databaseHandler
from sqlalchemy import and_, or_
from abstracts.itemAbstract import Item
# pylint: disable=maybe-no-member
class ItemHistoryModel(Item):
__tablename__ = 'item_history'
historyId = databaseHandler.Column(databaseHandler.Integer, primary_key = True)
</code></pre>
<p>I have a class method in both files that is supposed to help me get a list of items a restaurant sells by passing in the <code>restaurantId</code> as parameter</p>
<pre><code>@classmethod
def findItemsByRestaurant(cls, param):
return cls.query.filter_by(restaurantId = param)
</code></pre>
<p>However, anytime I execute this method it returns a query string in the resultset instead of a list of items. Here is a sample resultset:</p>
<pre><code>SELECT item_history.`itemName` AS `item_history_itemName`, item_history.price AS item_history_price, item_history.`itemImage` AS `item_history_itemImage`, item_history.`historyId` AS `item_history_historyId`
FROM item_history
WHERE false = 1
</code></pre>
<p>Somehow, SQLAlchemy renders my parameter as <code>false</code> and assigns it a value of <code>1</code>, whereas the actual ID of the restaurant is <code>10</code>. What am I doing wrong?</p>
<p>This is the <code>databaseHandler.py</code> file:</p>
<pre><code>from flask_sqlalchemy import SQLAlchemy
databaseHandler = SQLAlchemy()
</code></pre>
|
<p>The <a href="https://docs.sqlalchemy.org/en/13/orm/query.html" rel="nofollow noreferrer">Query object</a> has a number of API methods for getting pythonic objects rather than amending the query:</p>
<ul>
<li>get</li>
<li>all </li>
<li>from_statement </li>
<li>first </li>
<li>one_or_none </li>
<li>one </li>
<li>scalar (as_scalar to be depreciated)</li>
<li>count</li>
</ul>
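<p>For example, with plain SQLAlchemy (Flask-SQLAlchemy exposes the same <code>Query</code> API), printing a <code>Query</code> shows its SQL text, which is what the poster saw, while <code>.all()</code> executes it and returns model instances. The model below is a made-up stand-in for the question's <code>ItemModel</code>:</p>

```python
from sqlalchemy import Column, Integer, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Item(Base):
    __tablename__ = 'item'
    itemId = Column(Integer, primary_key=True)
    restaurantId = Column(Integer)

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([Item(itemId=1, restaurantId=10),
                 Item(itemId=2, restaurantId=11)])
session.commit()

query = session.query(Item).filter_by(restaurantId=10)
print(query)             # the SQL text -- this is what the poster saw
items = query.all()      # actually runs the query, returns Item instances
print([i.itemId for i in items])  # -> [1]
```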
|
python-3.x|flask-sqlalchemy
| 0 |
1,909,067 | 22,842,289 |
Generate 'n' unique random numbers within a range
|
<p>I know how to generate a random number within a range in Python.</p>
<pre><code>random.randint(numLow, numHigh)
</code></pre>
<p>And I know I can put this in a loop to generate n amount of these numbers</p>
<pre><code>for x in range (0, n):
listOfNumbers.append(random.randint(numLow, numHigh))
</code></pre>
<p>However, I need to make sure each number in that list is unique. Other than a load of conditional statements, is there a straightforward way of generating n number of unique random numbers?</p>
<p>The important thing is that each number in the list is different from the others.</p>
<p>So</p>
<p>[12, 5, 6, 1] = good</p>
<p>But</p>
<p>[12, 5, 5, 1] = bad, because the number 5 occurs twice.</p>
|
<p>If you just need sampling without replacement:</p>
<pre><code>>>> import random
>>> random.sample(range(1, 100), 3)
[77, 52, 45]
</code></pre>
<p><a href="https://docs.python.org/2/library/random.html#random.sample" rel="noreferrer">random.sample</a> takes a population and a sample size <code>k</code> and returns <code>k</code> random members of the population.</p>
<p>If you have to control for the case where <code>k</code> is larger than <code>len(population)</code>, you need to be prepared to catch a <code>ValueError</code>:</p>
<pre><code>>>> try:
... random.sample(range(1, 2), 3)
... except ValueError:
... print('Sample size exceeded population size.')
...
Sample size exceeded population size.
</code></pre>
|
python|random|unique
| 539 |
1,909,068 | 45,363,146 |
How to pass rgb color values to python's matplotlib eventplot?
|
<p>I'm simply trying to plot some tick marks with a specific color using matplotlib's eventplot. I'm running Python 3 in Jupyter notebook with %matplotlib inline.</p>
<p>Here's an example code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
spikes = 100*np.random.random(100)
plt.eventplot(spikes, orientation='horizontal', linelengths=0.9, color=[0.3,0.3,0.5])
</code></pre>
<p>It outputs the following error:</p>
<pre><code>ValueError: colors and positions are unequal sized sequences
</code></pre>
<p>The error occurs presumably because I am not providing a list of colors of the same length as the data (but I want them all to just be the same color!). It also gives an error when I use a color string like 'crimson' or 'orchid'. But it works when I use a simple one-letter string like 'r'.</p>
<p>Am I really restricted to just using the extremely limited set of one-letter color strings 'r','b','g','k','m','y', etc... or making a long color list when using this eventplot?</p>
|
<p>According to the <a href="https://matplotlib.org/api/colors_api.html" rel="noreferrer">docs</a>:</p>
<blockquote>
<p>you can pass an (r, g, b) or (r, g, b, a) tuple, where each of r, g, b
and a are in the range [0,1].</p>
</blockquote>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
spikes = 100*np.random.random(100)
plt.eventplot(spikes, orientation='horizontal', linelengths=0.9, color = [(0.3,0.3,0.5)])
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/cz7x8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/cz7x8.png" alt=""></a></p>
|
python|matplotlib|colors
| 17 |
1,909,069 | 41,640,860 |
read and write tfrecords binary file (type missmatch)
|
<p>Hi I'm trying to build an image input pipe. My preprocessed training data is stored in a tfrecords file which I've create with the following lines of codes:</p>
<pre><code>def _bytes_feature(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def _int64_feature(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
</code></pre>
<p>..</p>
<pre><code>img_raw = img.tostring() # typeof(img) = np.Array with shape (50, 80) dtype float64
img_label_text_raw = str.encode(img_lable)
example = tf.train.Example(features=tf.train.Features(feature={
    'height': _int64_feature(height), #height (integer)
'width': _int64_feature(width), #width (integer)
'depth': _int64_feature(depth), #num of rgb channels (integer)
'image_data': _bytes_feature(img_raw), #raw image data (byte string)
'label_text': _bytes_feature(img_label_text_raw), #raw image_lable_text (byte string)
'lable': _int64_feature(lable_txt_to_int[img_lable])})) #label index (integer)
writer.write(example.SerializeToString())
</code></pre>
<p>Now I try to read the binary data to reconstruct a tensor out of it:</p>
<pre><code>def read_and_decode(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
# Defaults are not specified since both keys are required.
features={
'label': tf.FixedLenFeature([], tf.int64),
'height': tf.FixedLenFeature([], tf.int64),
'width': tf.FixedLenFeature([], tf.int64),
'depth': tf.FixedLenFeature([], tf.int64),
'image_data': tf.FixedLenFeature([], tf.string)
})
label = features['label']
height = tf.cast(features['height'], tf.int64)
width = tf.cast(features['width'], tf.int64)
depth = tf.cast(features['depth'], tf.int64)
image_shape = tf.pack([height, width, depth])
image = tf.decode_raw(features['image_data'], tf.float64)
image = tf.reshape(image, image_shape)
images, labels = tf.train.shuffle_batch([image, label], batch_size=2,
capacity=30,
num_threads=1,
min_after_dequeue=10)
return images, labels
</code></pre>
<p>Sadly this is not working. I get this error messag:</p>
<blockquote>
<p>ValueError: Tensor conversion requested dtype string for Tensor with
dtype int64: 'Tensor("ParseSingleExample/Squeeze_label:0", shape=(),
dtype=int64)'
...</p>
<p>TypeError: Input 'bytes' of 'DecodeRaw' Op has type int64 that does not match expected type of string.</p>
</blockquote>
<p>Can some give me a hint on how to fix this?</p>
<p>Thanks in advance!</p>
<p><strong>UPDATE</strong>: complete code listing of "read_and_decode"</p>
<p>@mmry thank you very much. now my code breaks at shuffeling the batch. With:</p>
<blockquote>
<p>ValueError: All shapes must be fully defined:
[TensorShape([Dimension(None), Dimension(None), Dimension(None)]),
TensorShape([])]</p>
</blockquote>
<p>any suggestions?</p>
|
<p>There is no need to use the <code>tf.decode_raw()</code> op in this line:</p>
<pre><code>label = tf.decode_raw(features['label'], tf.int64)
</code></pre>
<p>Instead, you should be able to write:</p>
<pre><code>label = features['label']
</code></pre>
<p>The <a href="https://www.tensorflow.org/api_docs/python/io_ops/converting#decode_raw" rel="noreferrer"><code>tf.decode_raw()</code></a> op only accepts <code>tf.string</code> tensors, and converts the binary representation of some tensor data (as a variable-length string) into a typed representation (as a vector of a particular type of elements). However, you have defined the feature <code>'label'</code> as having type <code>tf.int64</code>, so there is no need to convert that feature if you want to use it as a <code>tf.int64</code>.</p>
|
image-processing|binary|tensorflow
| 5 |
1,909,070 | 41,265,113 |
Install python package from GitHub into Anaconda on Linux Ubuntu
|
<p>I need to install <a href="https://github.com/lucjon/Py-StackExchange" rel="nofollow noreferrer">this package located on GitHub</a> into Anaconda enviroment. How can I achieve this?</p>
|
<p>You can install the latest version from github with pip using</p>
<pre><code>pip install https://github.com/<user-name>/<repo-name>/tarball/<branch-name>
</code></pre>
<p>in your case this would be:</p>
<pre><code>pip install https://github.com/lucjon/Py-StackExchange/tarball/master
</code></pre>
|
python-3.x|ubuntu|pycharm|anaconda
| 2 |
1,909,071 | 41,672,532 |
Django: Images/Absolute URLS from not displaying in html
|
<p>I have a block of code on my template which does not show up on my html nor does it give any error on Chromes console. I am trying to display a list of images which when clicked takes you to the detail of that image.</p>
<p>Here is the key part of my <strong>HTML</strong>(base.html):</p>
<pre><code> <div class="container-fluid">
<h2>Popular</h2> #only this shows up
{% for obj in object_list %}
<img src = "{{ obj.mpost.url}}" width="300"><br>
<a href='/m/{{ obj.id }}/'> {{obj.username}} </a><br/>
{% endfor %}
</div>
</code></pre>
<p><strong>views.py</strong>:</p>
<pre><code>from django.shortcuts import render,get_object_or_404
from .models import m
# Create your views here.
def home(request):
return render(request,"base.html",{})
def m_detail(request,id=None):
instance = get_object_or_404(m,id=id)
context = {
"mpost": instance.mpost,
"instance": instance
}
return render(request,"m_detail.html",context)
def meme_list(request): #list items not showing
queryset = m.objects.all()
context = {
"object_list": queryset,
}
return render(request, "base.html", context)
</code></pre>
<p><strong>urls.py</strong>:</p>
<pre><code>urlpatterns = [
url(r'^$', home, name='home'),
url(r'^m/(?P<id>\d+)/$', m_detail, name='detail'),#issue or not?
]
</code></pre>
<p><strong>models.py</strong>:</p>
<pre><code>class m(models.Model):
username = "anonymous"
mpost = models.ImageField(upload_to='anon_m')
creation_date = models.DateTimeField(auto_now_add=True)
def __str__(self):
return m.username
</code></pre>
<p>Thank you so much for your time. :)</p>
|
<p>I assume the problem is that you don't have a URL at all for the meme list. Either you're showing the <code>home</code> view and need to move the code from the <code>meme_list</code> view into your <code>home</code> view, or you need to add a URL for <code>meme_list</code> (and navigate to that new URL instead).</p>
|
python|html|django|python-3.5|django-1.10
| 2 |
1,909,072 | 56,990,930 |
start reading incoming data from a serial port, using pyserial, when a specific character appears
|
<p>I'm trying to read incoming data from a weight scale (Lexus Matrix One). I want the code to start reading the 8 characters after <code>=</code> appears.</p>
<p>The problem is that sometimes the code does that, and other times it starts reading the data in the middle of a measurement sent by the scale, making it impossible to read properly. I'm using the <code>pyserial</code> module on Python 3 on Windows. </p>
<pre><code>import serial
ser=serial.Serial("COM4", baudrate=9600)
a=0
while a<10:
b=ser.read(8)
a=a+1
print(b)
</code></pre>
<p>the expected result is: <code>b'= 0004.0'</code></p>
<p>but sometimes I get: <code>b'4.0= 000'</code></p>
|
<p>I think we would need a little more information on the format of data coming from your weight scale to provide a complete answer. But your current code only reads the first 80 bytes from the stream, 8 bytes at a time.</p>
<p>If you want to read the next 8 bytes following <em>any</em> equals sign, you could try something like this:</p>
<pre class="lang-py prettyprint-override"><code>import serial
ser = serial.Serial("COM4", baudrate=9600)
readings = []
done = False
while not done:
current_char = ser.read()
# check for equals sign
if current_char == b'=':
reading = ser.read(8)
readings.append(reading)
# this part will depend on your specific needs.
# in this example, we stop after 10 readings
# check for stopping condition and set done = True
if len(readings) >= 10:
done = True
</code></pre>
|
python|pyserial
| 2 |
1,909,073 | 44,704,254 |
Python cassandra driver - SSL: WRONG_VERSION_NUMBER
|
<p>I'm trying to connect to a cassandra node that has SSL enabled, using the datastax cassandra driver, like this:</p>
<pre><code>from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import ssl
ip = <ip>
ap = PlainTextAuthProvider(username=<username>, password=<password>)
ssl_options = {
'ca_certs': <path to PEM file>,
'ssl_version': ssl.PROTOCOL_TLSv1
}
cluster = Cluster([ip], auth_provider=ap, ssl_options=ssl_options)
session = cluster.connect()
</code></pre>
<p>I can successfully connect to the node using <code>pycassa</code>, but I was trying to switch to using <code>datastax driver</code> for this. </p>
<p>The above code throws the following exception: </p>
<pre><code>NoHostAvailable: ('Unable to connect to any servers', {<ip>: error(1, u"Tried connecting to [(<ip>, <port>)]. Last error: [SSL: WRONG_VERSION_NUMBER] wrong version number (_ssl.c:590)")})
</code></pre>
<p>I know the server accepts PROTOCOL_TLSv1, since it's the default protocol in pycassa. I don't understand what I'm doing wrong here ...</p>
|
<p>This error commonly occurs when trying to connect with SSL on a socket that is not negotiating SSL.</p>
<p>Confirm that SSL is enabled in the server, and for the port to which you are connecting. I think this should be evident in the server system log.</p>
|
python|ssl|cassandra|datastax
| 2 |
1,909,074 | 44,603,803 |
How to (gevent) spawn tasks without waiting for it to join
|
<p>I have wrote a sample code of my problem. I am generating a random string and a shuffle function which adds a delay to a message so it comes out in a different order.</p>
<p>However, the scheduled tasks only execute once I call <code>joinall</code> at the end. Is there a way to run already-scheduled tasks while dynamically spawning new ones? When I keep pressing enter, a new task is scheduled, but nothing executes until the random break condition I set is reached. If I put the <code>join/joinall</code> right after the append, it blocks. Is this possible with gevent, or with other asynchronous or non-blocking I/O libraries, or do I have to resort to multithreading?</p>
<pre><code>#!/usr/bin/python
import random
import string
from gevent import sleep, spawn, joinall
def random_string():
digits = "".join( [random.choice(string.digits) for i in xrange(8)] )
chars = "".join( [random.choice(string.letters) for i in xrange(10)] )
return chars
def delay_message(message, delay):
sleep(delay)
print("Shuffled message: {} and time: {}". format(message, delay))
def main():
while True:
s = raw_input("Please continue pressing enter, messages will appear when they are ready")
if s == "":
delay = random.randint(0, 10)
string = random_string()
print("Message: {} and time: {}". format(string, delay))
tasks = []
tasks.append(spawn(delay_message, string, delay))
if (random.randint(0,10) == 5): # random condition in breaking
joinall(tasks, raise_error=True)
break
else:
print("Exiting")
break
if __name__ == "__main__":
main()
</code></pre>
|
<p>Specifically, the work will get done whether or not you join; you only need to join when you must be sure the work is done before you move on. To get status as you go along, you can use lots of methods (queues, shared variables, callbacks, delegate methods). Since you are using greenlets, you don't even really need to lock atomic operations like adding numbers, because they all happen on the same thread.</p>
|
python|scheduled-tasks|scheduler|gevent
| 0 |
1,909,075 | 44,724,461 |
Task output encoding in VSCode
|
<p>I'm learning BeautifullSoup with Visual Studio Code and when I run this script:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
from fake_useragent import UserAgent
ua = UserAgent()
header = {'user-agent':ua.chrome}
google_page = requests.get('https://www.google.com',headers=header)
soup = BeautifulSoup(google_page.content,'lxml') # html.parser
print(soup.prettify())
</code></pre>
<p>And I'm getting the following error:</p>
<blockquote>
<p>Traceback (most recent call last): File
"c:\ ... \intro-to-soup-2.py", line 13, in
print(soup.prettify()) File "C:\ ... \Local\Programs\Python\Python36-32\lib\encodings\cp1252.py",
line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode character
'\U0001f440' in position 515: character maps to </p>
</blockquote>
<p>If I force the encoding to utf-8 in the soup variable, I won't be able to use prettify, as it doesn't work with strings...
Also tried using <code># -*- coding: utf-8 -*-</code> on the first line of code without success.</p>
<p>Here is my tasks.json for this project:</p>
<pre><code>{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "0.1.0",
"command": "python",
"isShellCommand": true,
"args": ["${file}"],
"files.encoding": "utf8",
// Controls after how many characters the editor will wrap to the next line. Setting this to 0 turns on viewport width wrapping (word wrapping). Setting this to -1 forces the editor to never wrap.
"editor.wrappingColumn": 0, // default value is 300
// Controls the font family.
"editor.fontFamily": "Consolas, 'Malgun Gothic', 'λ§μ κ³ λ','Courier New', monospace",
// Controls the font size.
"editor.fontSize": 15,
"showOutput": "always"
}
</code></pre>
<p>The exact same code is running in PyCharm without any problems.
Any ideas how I can fix this in Visual Studio Code?</p>
<p>Here's my "pip freeze" result:</p>
<pre><code>astroid==1.5.3
beautifulsoup4==4.5.3
colorama==0.3.9
fake-useragent==0.1.7
html5lib==0.999999999
isort==4.2.15
lazy-object-proxy==1.3.1
lxml==3.7.2
mccabe==0.6.1
pylint==1.7.1
requests==2.12.5
selenium==3.4.3
six==1.10.0
webencodings==0.5
wrapt==1.10.10
xlrd==1.0.0
XlsxWriter==0.9.6
</code></pre>
<p>Thank you for your time,</p>
<p>Eunito.</p>
|
<p>The problem here seems to be the encoding the python interpreter believes stdout/stderr support. For some reason (arguably, a bug in VSCode) this is set to some platform-specific value (cp1252 in windows for you, I was able to reproduce the issue on OS X and got ascii) instead of utf-8 which the VSCode output window supports. You can modify your task.json to look something like this to address this - it sets an environment variable forcing the Python interpreter to use utf8 for output. </p>
<pre><code>{
// See https://go.microsoft.com/fwlink/?LinkId=733558
// for the documentation about the tasks.json format
"version": "0.1.0",
"command": "python3",
"isShellCommand": true,
"args": ["${file}"],
"showOutput": "always",
"options": {
"env": {
"PYTHONIOENCODING":"utf-8"
}
}
}
</code></pre>
<p>The relevant bit is the "options" dictionary. </p>
|
python|visual-studio-code|vscode-settings|vscode-tasks
| 2 |
1,909,076 | 61,977,077 |
Unexpected result from Python re.search for command line output
|
<p>I'm using <code>subprocess.Popen</code> to run an ffmpeg command (in Windows) and then extract the part of the output that has the frame count with a regex expression using <code>re.search</code>. Sometimes, not always, I get the wrong result from search even if the printed command output string clearly shows what I expect.</p>
<p>When I use <code>re.findall</code> I get 2 results, the "wrong" one and the expected one, but in the output string of the command I still only see one option. I'd like to understand why this is happening.</p>
<p>Here's the code I'm running:</p>
<pre><code>import re
import subprocess
# path to video with 300 frames
cmd = r'ffmpeg -i C:\...\300frames_HUD.avi -map 0:v:0 -c copy -f null -'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output_info = p.communicate()[0]
regex = r'(frame=\s*)([0-9]+)'
search_result = re.search(regex, output_info)
findall_result = re.findall(regex, output_info)
print "SEARCH"
print '0', search_result.group(0)
print '1', search_result.group(1)
print '2', search_result.group(2)
print "FIND ALL"
print findall_result
</code></pre>
<p>Here are the results I get:</p>
<pre><code>SEARCH
0 frame= 293
1 frame=
2 293
FIND ALL
[('frame= ', '293'), ('frame= ', '300')]
</code></pre>
<p>And here is the printed <code>output_info</code>, the ffmpeg command output I'm searching on:</p>
<pre><code>ffmpeg version git-2020-03-15-c467328 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9.2.1 (GCC) 20200122
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuda-llvm --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 42.100 / 56. 42.100
libavcodec 58. 75.100 / 58. 75.100
libavformat 58. 41.100 / 58. 41.100
libavdevice 58. 9.103 / 58. 9.103
libavfilter 7. 77.100 / 7. 77.100
libswscale 5. 6.101 / 5. 6.101
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Input #0, avi, from 'C:\...\300frames_HUD.avi':
Duration: 00:00:10.00, start: 0.000000, bitrate: 373255 kb/s
Stream #0:0: Video: rawvideo, bgr24, 960x540, 374496 kb/s, 30 fps, 30 tbr, 30 tbn, 30 tbc
Metadata:
title : V
Output #0, null, to 'pipe:':
Metadata:
encoder : Lavf58.41.100
Stream #0:0: Video: rawvideo, bgr24, 960x540, q=2-31, 374496 kb/s, 30 fps, 30 tbr, 30 tbn, 30 tbc
Metadata:
title : V
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
frame= 300 fps=0.0 q=-1.0 Lsize=N/A time=00:00:10.00 bitrate=N/A speed=19.4x
video:455625kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
</code></pre>
<p>I'm essentially looking for the 300 number in <code>frame= 300</code>.
I can reproduce this easily when I execute it inside my IDE (pycharm) twice in a row quickly.</p>
|
<p>I figured it out: it's not a regex issue. I'm actually getting more than one result from the command output, but there is a carriage return ('\r'), so only the last one was showing.</p>
<p>I can see it by escaping special characters:</p>
<pre><code>import subprocess
# path to video with 300 frames
cmd = r'ffmpeg -i C:\...\300frames_HUD.avi -map 0:v:0 -c copy -f null -'
p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output_info = str(p.communicate()[0]).encode('string-escape')
print output_info
</code></pre>
<p>The result essentially looks like this:</p>
<pre><code>"\r\nframe= 239 fps=0.0 q=-1.0 size=N/A time=00:00:07.96 bitrate=N/A speed=15.9x \rframe= 300 fps=0.0 q=-1.0 Lsize=N/A time=00:00:10.00 bitrate=N/A speed=16.5x"
</code></pre>
<p>In short, this is a bit of a quirk with the ffmpeg command and using <code>re.findall</code> to get the very last result seems like the right solution.</p>
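<p>A minimal sketch of that approach, using the carriage-return-separated sample output above (the sample string and variable names are illustrative):</p>

```python
import re

# ffmpeg progress lines are separated by '\r'; only the last
# "frame=" entry reflects the final frame count.
output = ("\r\nframe=  239 fps=0.0 q=-1.0 size=N/A time=00:00:07.96 "
          "bitrate=N/A speed=15.9x \rframe=  300 fps=0.0 q=-1.0 "
          "Lsize=N/A time=00:00:10.00 bitrate=N/A speed=16.5x")

frames = re.findall(r'frame=\s*([0-9]+)', output)
total_frames = int(frames[-1])  # take the last progress report
print(total_frames)  # 300
```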
|
python|regex|windows|search|ffmpeg
| 1 |
1,909,077 | 72,050,301 |
Moving item in lists with functions
|
<p>My code runs but I'm expecting my <code>orders</code> to follow 3,2,1.
To my knowledge <code>pop()</code> takes the last entry and returns it.
So on my last call, <code>move_to_old_orders(made_orders)</code>, the list comes out as 1,2,3.
If you look in the output it goes from <strong>3,2,1 / 3,2,1 / 1, 2, 3</strong>.
The print statements at the end are for me to verify the list is empty and has moved.</p>
<p>Code:</p>
<pre><code>unmade_orders = ['order 1' , 'order 2', 'order 3']
made_orders = []
old_orders = []
def make_orders(unmade_orders, made_orders):
''' To make an order '''
while unmade_orders:
current_order = unmade_orders.pop()
print("Order Processing: " + current_order.title() + ".")
made_orders.append(current_order)
def print_orders(made_orders):
''' To print an order '''
for made_order in made_orders:
print("Order Processed: " + made_order.title() + ".")
make_orders(unmade_orders, made_orders)
print_orders(made_orders)
def move_to_old_orders(made_orders):
while made_orders:
current_order_1 = made_orders.pop()
print("Moving Order To Old Orders: " + current_order_1.title() + ".")
old_orders.append(current_order_1)
move_to_old_orders(made_orders)
print(unmade_orders)
print(made_orders)
print(old_orders)
</code></pre>
<p>Output:</p>
<pre><code>Order Processing: Order 3.
Order Processing: Order 2.
Order Processing: Order 1.
Order Processed: Order 3.
Order Processed: Order 2.
Order Processed: Order 1.
Moving Order To Old Orders: Order 1.
Moving Order To Old Orders: Order 2.
Moving Order To Old Orders: Order 3.
</code></pre>
|
<p>Your function <code>make_orders()</code> pops from the end of <code>unmade_orders</code> and appends to <code>made_orders</code>, so after it runs you have <code>made_orders = ["order 3", "order 2", "order 1"]</code>, as it should be: that is simply how <code>pop()</code> and <code>append()</code> combine.</p>
<p>When you then call <code>move_to_old_orders()</code>, <code>pop()</code> again takes the <em>last</em> element of <code>made_orders</code>, which is now <code>order 1</code>.</p>
<p>That's why you get the output <code>1 2 3</code>: each pass of pop-and-append reverses the list.</p>
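<p>The LIFO behaviour can be seen in isolation; appending via <code>pop()</code> reverses a list, and doing it twice restores the original order:</p>

```python
source = ['order 1', 'order 2', 'order 3']
made = []

# pop() takes from the END, so `made` is filled in reverse order
while source:
    made.append(source.pop())
print(made)  # ['order 3', 'order 2', 'order 1']

# popping again reverses once more, restoring the original order
old = []
while made:
    old.append(made.pop())
print(old)  # ['order 1', 'order 2', 'order 3']
```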
|
python|list|function
| 0 |
1,909,078 | 36,009,592 |
Turning a list into a set - Python
|
<p>I wanted to print a set of lines in a file that start with a certain character (here it's "c"), but I get an error whenever I try to convert a list into a set</p>
<p>I have the following code:</p>
<pre><code>z = open("test.txt", "r")
wordList = [line.rstrip().split() for line in z if line.startswith(("c"))]
wordList = set(wordList)
print(wordList)
</code></pre>
<p>Here is the error I get:</p>
<pre><code>Traceback (most recent call last):
wordList = set(wordList)
TypeError: unhashable type: 'list'
</code></pre>
|
<p>If you drop the <code>.split()</code>, you'll end up with your set of lines.</p>
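<p>For example, keeping each line as a single (hashable) string makes the set conversion work. This sketch uses an in-memory file in place of <code>test.txt</code>, so the sample lines are illustrative:</p>

```python
import io

# io.StringIO stands in for open("test.txt", "r") here
z = io.StringIO("cat one\ndog two\ncar three\nbird four\n")

# strings are hashable, so a set of lines is fine
word_set = {line.rstrip() for line in z if line.startswith("c")}
print(word_set)  # {'cat one', 'car three'}
```

<p>It was the <code>.split()</code> that produced lists, and lists are unhashable, hence the <code>TypeError</code>.</p>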
|
python|list|set|list-comprehension|typeerror
| 2 |
1,909,079 | 29,531,764 |
Python: Mime message with Pictures
|
<p>I'm trying to send PNG images using my mail and Python.
Here is a script I found:</p>
<pre><code># Import smtplib for the actual sending function
import smtplib
# Here are the email package modules we'll need
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
COMMASPACE = ', '
# Create the container (outer) email message.
msg = MIMEMultipart()
msg['Subject'] = 'Our family reunion'
# me == the sender's email address
# family = the list of all recipients' email addresses
msg['From'] = me
msg['To'] = COMMASPACE.join(family)
msg.preamble = 'Our family reunion'
# Assume we know that the image files are all in PNG format
for file in pngfiles:
# Open the files in binary mode. Let the MIMEImage class automatically
# guess the specific image type.
with open(file, 'rb') as fp:
img = MIMEImage(fp.read())
msg.attach(img)
# Send the email via our own SMTP server.
s = smtplib.SMTP('localhost')
s.send_message(msg)
s.quit()
</code></pre>
<p>source: <a href="https://docs.python.org/3/library/email-examples.html" rel="nofollow">https://docs.python.org/3/library/email-examples.html</a></p>
<p>My problem is that when I try to specify the pngfiles path I write something like:</p>
<pre><code>pngfiles="/Desktop/Test2"
</code></pre>
<p>which only returns an error message such as:</p>
<pre><code>Traceback (most recent call last):
File "C:\Python27\work\Picture.py", line 46, in <module>
fp = open(file, 'rb')
IOError: [Errno 13] Permission denied: '/'
</code></pre>
<p>It's really a silly problem but I don't know how to write it properly... Any help please? :)</p>
<p>Thanks !</p>
|
<p>You file path:</p>
<pre><code>"/Desktop/Test2"
</code></pre>
<p>and your python path:</p>
<pre><code>"C:\Python27\work\Picture.py"
</code></pre>
<p>are conflicting (1st in Unix, 2nd is Windows).</p>
<p>First thing would be to use a real windows path.</p>
<pre><code>pngfiles_folder = "C:\\Python27\\work\\"
</code></pre>
<p>But you will also need a way to identify which files to attach (png in this case). To do so, you can use the <a href="https://stackoverflow.com/questions/3964681/find-all-files-in-a-directory-with-extension-txt-in-python/3964691#3964691">glob</a> package to create a list of files to attach:</p>
<pre><code>import glob
for file in glob.glob("C:\\Python27\\work\\*.png"):
....
</code></pre>
|
python|path|directory|mime
| 0 |
1,909,080 | 29,567,144 |
My RGB to Hex is giving me "-"
|
<p>I am trying to convert my RGB colors to hex, and when I use
color = "#%02x%02x%02x" %(r,g,b)
it sometimes gives me hex codes with "-" in them. Is there a way to fix this?</p>
|
<p>In this case, it's because one of <code>r, g, b</code> is negative: the dashes are, in fact, the unary <code>-</code> operator, possibly better known as the negative-number sign. This becomes clearer if you add commas and whitespace to the format string:</p>
<pre><code>>>> "#%02x%02x%02x" % (0, -213, 468)
'#00-d51d4'
>>> "%02x, %02x, %02x" % (0, -213, 468)
'00, -d5, 1d4'
</code></pre>
<p>Aside from negative numbers, you also have numbers greater than 255. Note that those aren't valid RGB values, either.</p>
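<p>One common fix is to clamp each channel into the valid 0–255 range before formatting (a sketch, assuming clamping is the behaviour you want rather than rejecting bad input):</p>

```python
def rgb_to_hex(r, g, b):
    # clamp each channel to the valid 0-255 range before formatting
    clamped = (max(0, min(255, c)) for c in (r, g, b))
    return "#%02x%02x%02x" % tuple(clamped)

print(rgb_to_hex(0, -213, 468))  # '#0000ff'
print(rgb_to_hex(255, 128, 0))   # '#ff8000'
```

<p>Alternatively, raise a <code>ValueError</code> for out-of-range input if silently clamping would hide bugs upstream.</p>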
|
python|colors|hex|rgb
| 0 |
1,909,081 | 29,734,260 |
Memory Efficient data structure for Trie Implementation
|
<p>I am implementing a Trie in python. Till now I have come across two different methods to implement it:</p>
<h3>1) use a class Node (similar to struct Node in C++) with data members:</h3>
<p><code>char</code> - to store character<br>
<code>is_end</code> - to store end of word (true or false)<br>
<code>prefix_count</code> - store number of words with current prefix<br></p>
<p><code>child</code> - Node type dict (to store other nodes i.e. for 26 alphabets)</p>
<pre><code>class Node(object):
def __init__(self):
self.char = ''
self.word = ''
self.is_end = False
self.prefix_count = 0
self.child = {}
</code></pre>
<h3>2) use a dictionary to store all the data:</h3>
<p>e.g. for the input <code>words = {'foo', 'bar', 'baz', 'barz'}</code></p>
<p>we derive this dictionary:</p>
<pre><code> {'b': {'a': {'r': {'_end_': '_end_', 'z': {'_end_': '_end_'}},
'z': {'_end_': '_end_'}}},
'f': {'o': {'o': {'_end_': '_end_'}}}}
</code></pre>
<p>Which is the more efficient and standard data structure, one that is both memory efficient and fast for traversal and other trie operations on a big dataset of words?</p>
|
<p>A direct replacement would be a nested <code>list</code>;</p>
<p>However [arguably] more Pythonic, more compact in memory and thus faster for lookup would be a nested <code>tuple</code>.</p>
<p>Of course, updating such a trie becomes O(log N), as you'll have to recreate every ancestor node. Still, if your lookups are a lot more frequent than updates, it may be worth it.</p>
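<p>For reference, insertion and lookup over the nested-dict form from the question need no node class at all (a sketch using the <code>'_end_'</code> sentinel shown above; the helper names are illustrative):</p>

```python
END = '_end_'

def make_trie(*words):
    """Build a nested-dict trie, marking word ends with the END sentinel."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node[END] = END
    return root

def in_trie(trie, word):
    """Return True only for complete words, not bare prefixes."""
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return END in node

trie = make_trie('foo', 'bar', 'baz', 'barz')
print(in_trie(trie, 'bar'))  # True
print(in_trie(trie, 'ba'))   # False (prefix only)
```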
|
python|data-structures|tree|nlp|trie
| 1 |
1,909,082 | 70,276,607 |
Extract Recipients Email Address in Outlook using Python Win32com
|
<p>I'm trying to extract the Recipient email address in Python using Win32com client.</p>
<p>Here's my code so far:</p>
<pre><code>import win32com.client
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
inbox = outlook.Folders["[my email address"].Folders["Inbox"]
def get_email_address():
for message in inbox.Items:
print("========")
print("Subj: " + message.Subject)
print('To:', message.Recipients) #this part does not work
print("Email Type: ", message.SenderEmailType)
if message.Class == 43:
try:
if message.SenderEmailType == "SMTP":
print("Name: ", message.SenderName)
print("Email Address: ", message.SenderEmailAddress)
print('To:', message.Recipients) #this part does not work
print("Date: ", message.ReceivedTime)
elif message.SenderEmailType == "EX":
print("Name: ", message.SenderName)
print("Email Address: ", message.Sender.GetExchangeUser(
).PrimarySmtpAddress)
print('To:', message.Recipients) #this part does not work
print("Date: ", message.ReceivedTime)
except Exception as e:
print(e)
continue
if __name__ == '__main__':
get_email_address()
</code></pre>
<p>As you can see, I can get the sender email address...but how do I get the recipient email address?</p>
|
<p>It is similar to what you do with the sender - loop through recipients in the <code>MailItem.Recipients</code> collection and for each <code>Recipient</code> use the <code>Recipient.AddressEntry</code> property to do what you are already doing with the <code>MailItem.Sender</code>property.</p>
<p>Also note that this is not the most efficient way - opening an address entry can be expensive or outright impossible if the profile does not have the parent Exchange server, e.g. if you are processing a standalone MSG file or a message copied to a PST from an Exchange mailbox. In most cases the SMTP addresses are available on the message directly, e.g. from the <code>PidTagSenderSmtpAddress</code> (DASL name <code>http://schemas.microsoft.com/mapi/proptag/0x5D01001F</code>) which can be accessed using <code>MailItem.PropertyAccessor.GetProperty</code>. Similarly, recipient SMTP address might be available in the <code>PR_SMTP_ADDRESS</code> property (DASL name <code>http://schemas.microsoft.com/mapi/proptag/0x39FE001F</code>, use <code>Recipient.PropertyAccessor.GetProperty</code>) - you can see these properties in <a href="https://www.dimastr.com/outspy" rel="nofollow noreferrer">OutlookSpy</a> (I am its author - click IMessage button property).</p>
|
python|email|outlook|win32com
| 0 |
1,909,083 | 45,762,221 |
How to insert an element at the beginning of a list in Python?
|
<p>I have two lists and the first one is having additional data (102) when compared to the second one.</p>
<p><strong>First List:</strong></p>
<pre><code>['102 1 1 v-csm CSM vCSM enabled active up D:042 H:20 M:14 S:58 5.4.4.4-68.REL']
</code></pre>
<p><strong>Second List:</strong></p>
<pre><code>['2 1 v-csm CSM vCSM enabled active up D:331 H:12 M:20 S:33 5.4.4.4-68.REL']
</code></pre>
<p>I want to insert a string like "xyz" at the first position of the list so that using pandas data can be mapped to correct columns.</p>
|
<p>Let's say that the second list is called <code>array</code>.</p>
<p>You can use <code>array.insert(0, "xyz")</code> or <code>array = ["xyz"] + array</code> to add "xyz" to the beginning of the second list.</p>
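<p>A quick sketch of both options; note that <code>insert</code> mutates the list in place, while concatenation builds a new one:</p>

```python
array = ['2 1 v-csm CSM vCSM enabled active up D:331 H:12 M:20 S:33 5.4.4.4-68.REL']

a = array.copy()
a.insert(0, "xyz")     # in-place mutation

b = ["xyz"] + array    # builds a new list, original untouched
print(a == b)          # True
```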
|
python|list
| 10 |
1,909,084 | 46,051,175 |
Heroku Python3.5 Import Error : No module named ='_tkinter'
|
<p>While deploying in Heroku and adding customized buildpacks such as libspatialindex, another error occurred where Python 3.5 now looks for Tkinter. </p>
<p>Locally, installing with <code>sudo apt-get install tk-dev</code> solves this, but after trying the suggestion from this similar problem: <a href="https://stackoverflow.com/questions/43697460/import-matplotlib-failing-on-heroku">import matplotlib failing on Heroku</a>, the error still persists.</p>
<p>Here are my buildpacks:</p>
<pre><code>https://github.com/heroku/heroku-buildpack-apt
heroku/python
https://github.com/julienfr112/libspatialindex-buildpack.git
</code></pre>
<p>And my Aptfile containing only:</p>
<pre><code>python3-tk
libpq-dev
build-essential
libncursesw5-dev
libreadline5-dev
libssl-dev
libgdbm-dev
libc6-dev
libsqlite3-dev tk-dev
libbz2-dev
</code></pre>
<p>On Heroku push here's the tail of the log:</p>
<pre><code>2017-09-05T08:25:58.903075+00:00 app[web.1]: File "/app/.heroku
/python/lib/python3.5/site-packages/six.py", line 82, in _import_module
2017-09-05T08:25:58.903076+00:00 app[web.1]: __import__(name)
2017-09-05T08:25:58.903076+00:00 app[web.1]: File "/app/.heroku
/python/lib/python3.5/tkinter/__init__.py", line 35, in <module>
2017-09-05T08:25:58.903076+00:00 app[web.1]: import _tkinter
# If this fails your Python may not be configured for Tk
2017-09-05T08:25:58.903077+00:00 app[web.1]: ImportError: No module
named '_tkinter'
</code></pre>
<p>Any ideas?</p>
|
<p>Change the matplotlib backend from tkinter to something else. At the very beginning of the program do this:</p>
<pre><code>import matplotlib
matplotlib.use('Agg')
</code></pre>
<p>This way the rest of the program will use the backend that you set ('Agg', 'SVG', etc, etc)</p>
<p>Another option would be to try and mess with the The matplotlibrc file per: <a href="https://matplotlib.org/users/customizing.html#the-matplotlibrc-file" rel="nofollow noreferrer">https://matplotlib.org/users/customizing.html#the-matplotlibrc-file</a></p>
|
python|python-3.x|heroku|python-3.5
| 1 |
1,909,085 | 33,478,933 |
Python Numpy array conversion
|
<p>I've got 2 numpy arrays x1 and x2. Using python 3.4.3</p>
<pre><code>x1 = np.array([2,4,4])
x2 = np.array([3,5,3])
</code></pre>
<p>I would like to a numpy array like this:</p>
<pre><code>[[2,3],[4,5],[4,3]]
</code></pre>
<p>How would i go about this?</p>
|
<p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.column_stack.html"><code>numpy.column_stack</code></a>:</p>
<pre><code>In [40]: x1 = np.array([2,4,4])
In [41]: x2 = np.array([3,5,3])
In [42]: np.column_stack((x1, x2))
Out[42]:
array([[2, 3],
[4, 5],
[4, 3]])
</code></pre>
|
python|numpy
| 8 |
1,909,086 | 33,248,440 |
In Python, how do I grab the value of an entry in a python list created through a loop
|
<p>I am writing Python code to pull data from a csv file and format it into geoJSON. I want to also grab single entries from the data to include in the properties of the geoJSON. For example, I convert a column of csv timestamps into geoJSON, and I also want to pull out the first and last timestamps in that list for the properties below:</p>
<pre><code> "properties": {
"species": "racoon",
"id": "12345",
"first transmitted": "Aug 12, 2014",
"last transmitted": "Nov 1, 2015"
}
</code></pre>
<p>I have written Python code that converts a column of timestamps from the csv to the geoJSON format:</p>
<pre><code># loop through the csv by row skipping the first
iter = 0
timevals = ""
for row in rawData:
iter += 1
if iter >= 2 and itemid == row[1]:
rawtime = row[4]
rawtime = rawtime.replace("T", " ")
rawtime = rawtime.replace("Z", "")
timevals += template_time % int( time.mktime(time.strptime(rawtime, '%Y-%m-%d %H:%M:%S') ))
</code></pre>
<p>This gives me the list of timestamps from the csv file formatted for my geoJSON file. Now I want to grab the first and last timestamp in that list. I altered it like this in order to try to grab the first:</p>
<pre><code># loop through the csv by row skipping the first
iter = 0
timevals = ""
for row in rawData:
iter += 1
if iter >= 2 and itemid == row[1]:
rawtime = row[4]
rawtime = rawtime.replace("T", " ")
rawtime = rawtime.replace("Z", "")
timeval = int( time.mktime(time.strptime(rawtime, '%Y-%m-%d %H:%M:%S') ))
timevals += template_time % timeval
startTime = timeval.iloc[0]
print startTime
</code></pre>
<p>My error is:</p>
<pre><code>startTime = timeval.iloc[0]
AttributeError: 'int' object has no attribute 'iloc'
</code></pre>
<p>What is the way to write this?</p>
<p><strong>EDIT</strong>
I got the startTime by initializing </p>
<pre><code>startTime = ""
</code></pre>
<p>and then writing</p>
<pre><code>if startTime == "":
startTime = rawtime
</code></pre>
<p>so now I have the following, plus the endTime:</p>
<pre><code># loop through the csv by row skipping the first
iter = 0
startTime = ""
for row in rawData:
iter += 1
if iter >= 2 and itemid == row[1]: # itemid
rawtime = row[4]
rawtime = rawtime.replace("T", " ")
rawtime = rawtime.replace("Z", "")
if startTime == "":
startTime = rawtime
endTime = rawtime
</code></pre>
<p>This ends up producing the first and final values, startTime and endTime, though it's a bit of a trick, and I was wondering whether there is something more Pythonic.</p>
|
<p>Thanks for the clarification edit. Can you easily hold the data in memory? If so, you have a simple path: define a function for the conversion, then apply that only to the first and last items.</p>
<p>I'm assuming that you're skipping the first line because it's header information. If so, then you don't really need to check for it explicitly; its 2nd entry won't match itemid, anyway.</p>
<pre><code>def convert_time(row):
    rawtime = row[4]
    rawtime = rawtime.replace("T", " ")
    rawtime = rawtime.replace("Z", "")
    return rawtime

row_stream = [row for row in rawData if row[1] == itemid]
startTime = convert_time(row_stream[0])
endTime = convert_time(row_stream[-1])
</code></pre>
<p>If you have too much data to hold the filtered list in memory, trade the list comprehension for a generator and iterate over it once, keeping the first and last matching rows as you go (a generator cannot be indexed):</p>
<pre><code>row_stream = (row for row in rawData if row[1] == itemid)
</code></pre>
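<p>A runnable sketch of the whole idea on toy rows (the column layout is assumed from the question: the item id in column 1, the timestamp in column 4):</p>

```python
rawData = [
    ["h0", "h1", "h2", "h3", "h4"],                 # header row
    ["r1", "A1", "", "", "2014-08-12T06:00:00Z"],
    ["r2", "B2", "", "", "2015-01-01T00:00:00Z"],
    ["r3", "A1", "", "", "2015-11-01T19:11:00Z"],
]
itemid = "A1"

def convert_time(row):
    # strip the ISO 'T'/'Z' markers, as in the question
    return row[4].replace("T", " ").replace("Z", "")

matching = [row for row in rawData if row[1] == itemid]
startTime = convert_time(matching[0])
endTime = convert_time(matching[-1])
print(startTime, "->", endTime)  # 2014-08-12 06:00:00 -> 2015-11-01 19:11:00
```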
|
python|for-loop
| 0 |
1,909,087 | 73,810,956 |
Selenium Python - How to load an existing profile in Chrome?
|
<p>I want to use my existing Chrome profile so I can easily log in and fetch some data from a website. I tried this in my current code, but it doesn't load the profile for some reason:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
import time
class ClientDriver:
def __init__(self):
self.options = Options()
self.options.add_argument("--lang=en_US")
self.options.add_argument("--disable-gpu")
self.options.add_argument("--no-sandbox")
self.options.add_argument("--disable-dev-shm-usage")
self.options.add_argument(
r"user-data-dir=C:\Users\User\AppData\Local\Google\Chrome\User Data\Profile 1"
)
self.options.add_argument("--profile-directory=Profile 1")
def check(self):
driver = webdriver.Chrome(
service=Service(ChromeDriverManager().install()), options=self.options
)
driver.get("chrome://version/")
time.sleep(100)
x = ClientDriver()
x.check()
</code></pre>
<p>As you can see in my current code, it redirects to <code>chrome://version/</code> to check the Profile Path, and it's not <code>C:\Users\User\AppData\Local\Google\Chrome\User Data\Profile 1</code> but <code>C:\Users\Users\AppData\Local\Google\Chrome\User Data\Profile 1\Profile 1</code>. Can someone help me out?</p>
|
<p>You're checking <code>chrome://version/</code> from your selenium test, of course you see a wrong path!<br />
You should instead open a normal chrome window, insert <code>chrome://version/</code> in the search bar, press enter and check from there your profile path.<br />
You will see it will be something like <code>C:\Users\User\AppData\Local\Google\Chrome\User Data\Profile 1</code></p>
<p>And so in the code you'll have to write:</p>
<pre><code>self.options.add_argument(
r"user-data-dir=C:\Users\User\AppData\Local\Google\Chrome\User Data"
)
self.options.add_argument("--profile-directory=Profile 1")
</code></pre>
|
python-3.x|selenium|google-chrome|selenium-webdriver
| 0 |
1,909,088 | 73,703,322 |
How to take pandas dataframe values based on string matching?
|
<p>I'm trying to match two dataframes by the strings they have in common.
Here is example code:</p>
<pre><code>import pandas as pd
import numpy as np
A = ['DF-PI-05', 'DF-PI-09', 'DF-PI-10', 'DF-PI-15', 'DF-PI-16',
'DF-PI-19', 'DF-PI-89', 'DF-PI-92', 'DF-PI-93', 'DF-PI-94',
'DF-PI-95', 'DF-PI-96', 'DF-PI-25', 'DF-PI-29', 'DF-PI-30',
'DF-PI-34', 'DF-PI-84']
B = ['PI-05', 'PI-10', 'PI-89', 'PI-90', 'PI-93', 'PI-94', 'PI-95',
'PI-96', 'PI-09', 'PI-15', 'PI-16', 'PI-19', 'PI-91A', 'PI-91b',
'PI-92', 'PI-25-CU', 'PI-29', 'PI-30', 'PI-34', 'PI-84-CU-S1',
'PI-84-CU-S2']
A = pd.DataFrame(A,columns=['ID'])
B = pd.DataFrame(B,columns=['Name'])
B['param_2'] = np.linspace(20,300,21)
A['param_1'] = np.linspace(0,20,17)
</code></pre>
<p>I would like to have ONE pandas dataframe with the shape (21,3) whose columns are ['Name', 'param_1', 'param_2'],
where if the column 'Name' from dataframe B matches the column 'ID' from dataframe A, I get the values from the other columns of A and B; where there is no match, I get NaN.</p>
<p>I have tried the code below but doesn't work.</p>
<pre><code>B['param_1'] = np.nan
for date_B, row_B in B.iterrows():
for date_A, row_A in A.iterrows():
if row_B['Name'] in row_A['ID']:
row_B['param_1'] = row_A['param_1']
break
</code></pre>
<p>I would also prefer a solution without for loops, because I have a large dataset; perhaps the <code>pandas.isin()</code> function could work, but I tried it and didn't manage to get the right result.
Something like in this example:</p>
<p><a href="https://stackoverflow.com/questions/52265419/join-dataframes-based-on-partial-string-match-between-columns">Join dataframes based on partial string-match between columns</a></p>
|
<p>Try left join:</p>
<pre><code>A["_ID"] = A.ID.str[3:]
pd.merge(B,A, how="left", left_on="Name", right_on="_ID")[['Name','param_1', 'param_2']]
</code></pre>
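<p>A self-contained sketch of the behavior on a couple of rows (values are illustrative); unmatched names come through with NaN, as requested:</p>

```python
import numpy as np
import pandas as pd

# Two-row versions of the question's frames.
A = pd.DataFrame({'ID': ['DF-PI-05', 'DF-PI-09'], 'param_1': [0.0, 1.25]})
B = pd.DataFrame({'Name': ['PI-05', 'PI-90'], 'param_2': [20.0, 34.0]})

A["_ID"] = A.ID.str[3:]  # strip the 'DF-' prefix so the keys line up
out = pd.merge(B, A, how="left", left_on="Name", right_on="_ID")
out = out[['Name', 'param_1', 'param_2']]
print(out)
# 'PI-90' has no match in A, so its param_1 is NaN
```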
|
python|string|dataframe|merge|match
| 1 |
1,909,089 | 73,564,117 |
How do I remove overflow along the z-axis for a 3D matplotlib surface?
|
<p>I'm trying to graph a 3d mesh surface with matplotlib and constrain the limits of the graph. The X and Y axes are correctly constrained, but there is overflow in the Z-Axis.</p>
<p>What am I missing? Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10,10))
x = np.linspace(-6,6,100)
y = np.linspace(-6,6,100)
X,Y = np.meshgrid(x,y)
def f(x,y):
return x**2 + 3*y
Z = f(X,Y)
ax = plt.axes(projection = '3d')
ax.plot_surface(X,Y,Z,cmap='viridis')
ax.title.set_text("z=x**2+3y")
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
ax.set_zlim3d(zmin=-3,zmax=5)
ax.set_xlim3d(xmin=-6,xmax=6)
ax.set_ylim3d(ymin=-6,ymax=6)
plt.show()
</code></pre>
<p>The graph:</p>
<p><a href="https://i.stack.imgur.com/LVAvw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LVAvw.png" alt="Overflowing graph" /></a></p>
<p>Edit:</p>
<p>When I add clipping/min/max to the Z values, the graph is a little better, but it sets z values outside the bounds to the bounds themselves. Both of the following suggestions do this. Perhaps it's because I'm on a mac?</p>
<pre class="lang-py prettyprint-override"><code>z_tmp = np.maximum(np.minimum(5,Z),-3)
</code></pre>
<pre class="lang-py prettyprint-override"><code>z_temp = np.clip(Z, -3, 5, None)
</code></pre>
<p><a href="https://i.stack.imgur.com/sniiW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sniiW.png" alt="better graph" /></a></p>
|
<p>Your data is outside the axis boundaries. Try rotating the view and you will notice.</p>
<pre class="lang-py prettyprint-override"><code>z = x**2 + 3*y
</code></pre>
<p>If you want to show only a defined region of the data, you can clamp the Z values with a max()/min() limit so nothing falls outside your chosen range.</p>
<pre class="lang-py prettyprint-override"><code>Z = f(X,Y)
z_tmp = np.maximum(np.minimum(5,Z),-3)
ax = plt.axes(projection = '3d')
ax.plot_surface(X,Y,z_tmp,cmap='viridis')
</code></pre>
<p>I'm not sure matplotlib behaves as it should in your default case.</p>
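<p>The numeric part of both options can be checked without plotting. Clamping pins out-of-range values to the limits (which is what the questioner observed), while masking with NaN hides those cells so the surface simply has gaps there:</p>

```python
import numpy as np

Z = np.array([[-10.0, 0.0],
              [4.0, 12.0]])

clamped = np.clip(Z, -3, 5)                          # pinned to the limits
masked = np.where((Z >= -3) & (Z <= 5), Z, np.nan)   # out-of-range hidden

print(clamped.tolist())  # [[-3.0, 0.0], [4.0, 5.0]]
print(masked.tolist())   # [[nan, 0.0], [4.0, nan]]
```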
|
python|numpy|matplotlib
| 2 |
1,909,090 | 12,879,346 |
How does kalman filter track multiple objects in Opencv python?
|
<p>I am not understanding the process to track and label several moving objects using python. I am able to isolate the moving objects (although the binary image still contains lots of noise) by converting each frame to grayscale, then blurring, then doing BGS. </p>
<p>I have found the contours with <code>cv2.findContours()</code>, which gives me blobs as a list of numpy matrices. I want to use a Kalman filter to track these blobs since it's very good at predicting a blob's position in the presence of noise. However, it seems to me that finding the contours is an unnecessary step given the nature of the KF, especially since the contour function returned a lot of highly questionable blobs.</p>
<p>I looked at the code for the <a href="https://projects.developer.nokia.com/opencv/browser/opencv/opencv-2.3.1/samples/python/kalman.py" rel="nofollow">kalman filter</a>, and I don't see how I can tell it to track blobs, let alone tell the filter where the blobs are (or how to create the blobs using KF alone). </p>
<p>My question is, how does KF handle multiple object tracking if it doesn't know what or where the blobs are beforehand (which is why I got the contours, but this result is somewhat horrible). And, once KF does begin tracking the objects, how does it store the blobs such that it can easily be labeled?</p>
|
<p>The Kalman filter itself doesn't contain multiple object tracking machinery. For this, you need an additional algorithm on top: for example, Multiple Hypothesis Tracking (MHT) in Reid 1979 if you have unknown/varying numbers of objects or Joint Probabilistic Data Association if you have known numbers of objects. </p>
<p>Note, in order to actually implement MHT you need additional improvements introduced in Cox and Hingorani 1996, "An efficient implementation of Reid's multiple hypothesis tracking..."</p>
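<p>For concreteness, here is a minimal numpy sketch of a single constant-velocity Kalman filter fed with one blob's centroids. Multiple-object tracking keeps one such filter per blob, and the data-association layer (MHT/JPDA) decides which measurement updates which filter. The matrices and noise values below are illustrative assumptions, not taken from the question:</p>

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity transition (x, y, vx, vy)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we only measure the blob centroid (x, y)
Q = np.eye(4) * 1e-2                        # process noise covariance (assumed)
R = np.eye(2) * 1e-1                        # measurement noise covariance (assumed)

x = np.zeros(4)                             # state estimate
P = np.eye(4)                               # state covariance

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the measured centroid z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# feed it a blob moving at (2, 1) pixels per frame
for t in range(20):
    z = np.array([2.0 * t, 1.0 * t])
    x, P = kf_step(x, P, z)

print(np.round(x[2:], 2))  # velocity estimate, close to [2. 1.]
```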
|
python|opencv|kalman-filter
| 6 |
1,909,091 | 31,078,778 |
Find one eigenvector
|
<p>Suppose I have a really large (symmetric) matrix M of size N by N and I just want to extract one eigenvector corresponding to one eigenvalue. Is there a way to do this without finding all the eigenvectors? One way I thought of doing this is to first find the eigenvalues (which is fast) and then solve for one eigenvector.</p>
<pre><code>E = np.linalg.eigvalsh(M)
e = E[N/2]
v = np.linalg.solve(E-np.diag([e]*N), 0)
</code></pre>
<p>But of course you can guess that the solution of this is just v=0. I could do an SVD decomposition of M-eI, but this seems slower than just computing all the eigenvectors of M.</p>
|
<p>You can use</p>
<pre><code>from scipy.sparse.linalg import eigs
</code></pre>
<p>Take a look at the docstring - there is a parameter <code>k</code> that sets how many eigenvalues and vectors are to be calculated. If I recall correctly, this is done by an iterative scheme from ARPACK. So make sure you set <code>tol</code> to an appropriate value for your application.</p>
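<p>A small sketch; for a symmetric matrix the <code>eigsh</code> variant is the natural choice (this assumes SciPy is available):</p>

```python
import numpy as np
from scipy.sparse.linalg import eigsh  # symmetric/Hermitian variant of eigs

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
M = (A + A.T) / 2                      # symmetric test matrix

# k=1: ask ARPACK for a single eigenpair (here the largest algebraic eigenvalue)
vals, vecs = eigsh(M, k=1, which='LA', tol=1e-10)
v = vecs[:, 0]

residual = np.linalg.norm(M @ v - vals[0] * v)
print(residual < 1e-6)  # True: (lambda, v) is a genuine eigenpair
```

<p>To target one specific interior eigenvalue <code>e</code> (e.g. one you found with <code>eigvalsh</code>), shift-invert mode <code>eigsh(M, k=1, sigma=e)</code> returns the eigenpair nearest <code>e</code>.</p>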
|
performance|numpy|scipy|eigenvalue|eigenvector
| 1 |
1,909,092 | 31,056,142 |
How to parse dates to get the year in Python
|
<p>I have a bunch of dates from a csv file in this format</p>
<pre><code>1/25/2015 1:51
1/26/2015 11:22
1/26/2015 20:33
1/27/2015 15:29
1/28/2015 19:11
1/28/2015 19:41
1/30/2015 6:20
2/2/2015 16:24
2/4/2015 18:38
2/8/2015 22:22
</code></pre>
<p>Is there an efficient, proper way in Python to parse the date and extract the year?</p>
|
<pre><code>time.strptime("2/8/2015 22:22", "%m/%d/%Y %H:%M").tm_year  # -> 2015
</code></pre>
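<p>For a whole column of such strings, a sketch with <code>datetime.strptime</code> (assuming month/day/year order, as the sample data implies):</p>

```python
from datetime import datetime

rows = ["1/25/2015 1:51", "1/26/2015 11:22", "2/4/2015 18:38"]
years = [datetime.strptime(r, "%m/%d/%Y %H:%M").year for r in rows]
print(years)  # [2015, 2015, 2015]
```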
|
python|date
| 0 |
1,909,093 | 39,885,637 |
How do I distribute all contents of root directory to a directory with that name
|
<p>I have a project named <code>myproj</code> structured like</p>
<pre><code>/myproj
__init__.py
module1.py
module2.py
setup.py
</code></pre>
<p>my <code>setup.py</code> looks like this</p>
<pre><code>from distutils.core import setup
setup(name='myproj',
version='0.1',
description='Does projecty stuff',
author='Me',
author_email='me@domain.com',
packages=[''])
</code></pre>
<p>But this places <code>module1.py</code> and <code>module2.py</code> in the install directory. </p>
<hr>
<p>How do I specify <code>setup</code> such that the directory <code>/myproj</code> and all of it's contents are dropped into the install directory?</p>
|
<p>In your <code>myproj</code> root directory for this project, move <code>module1.py</code> and <code>module2.py</code> into a directory named <code>myproj</code> underneath it, and, if you wish to maintain Python < 3.3 compatibility, add an <code>__init__.py</code> in there.</p>
<pre><code>├── myproj
│   ├── __init__.py
│   ├── module1.py
│   └── module2.py
└── setup.py
</code></pre>
<p>You may also consider using <a href="https://setuptools.readthedocs.io/en/latest/" rel="nofollow"><code>setuptools</code></a> instead of just distutils. <code>setuptools</code> provide a lot more helper methods and additional attributes that make setting up this file a lot easier. This is the bare minimum <code>setup.py</code> I would construct for the above project:</p>
<pre><code>from setuptools import setup, find_packages
setup(name='myproj',
version='0.1',
description="My project",
author='me',
author_email='me@example.com',
packages=find_packages(),
)
</code></pre>
<p>Running the installation you should see lines like this:</p>
<pre><code>copying build/lib.linux-x86_64-2.7/myproj/__init__.py -> build/bdist.linux-x86_64/egg/myproj
copying build/lib.linux-x86_64-2.7/myproj/module1.py -> build/bdist.linux-x86_64/egg/myproj
copying build/lib.linux-x86_64-2.7/myproj/module2.py -> build/bdist.linux-x86_64/egg/myproj
</code></pre>
<p>This signifies that the setup script has picked up the required source files. Run the python interpreter (preferably outside this project directory) to ensure that those modules can be imported (not due to relative import).</p>
<p>On the other hand, if you wish to provide those modules at the root level, you definitely need to declare <a href="https://docs.python.org/distutils/setupscript.html#listing-individual-modules" rel="nofollow"><code>py_modules</code></a> explicitly.</p>
<p>Finally, the <a href="https://packaging.python.org/" rel="nofollow">Python Packaging User Guide</a> is a good resource for more specific questions anyone may have about building distributable python packages.</p>
|
python|distutils|setup.py
| 1 |
1,909,094 | 29,090,828 |
Remove an element from list and return the result
|
<p>I want to generate a list which looks like:</p>
<pre><code>['ret-120','ret-115','ret-110',....'ret-5','ret5',....'ret240']
</code></pre>
<p>Please note, there's no <code>ret0</code> element in the list, so I have to remove it from a list populated by the <code>range</code> function. I've tried:</p>
<pre><code>['ret'+str(x) for x in list(range(-120,241,5)).remove(0)]
</code></pre>
<p>However this gives out an error:</p>
<pre><code>TypeError: 'NoneType' object is not iterable
</code></pre>
<p>Is this possible to accomplish with one line of code?</p>
|
<p>The simplest way to do what you want is to add a conditional inside your list comprehension:</p>
<pre><code>lst = ['ret'+str(x) for x in xrange(-120,241,5) if x != 0]
# This part:                                    ^^^^^^^^^
</code></pre>
<p>I also removed the unnecessary list creation and changed <code>range</code>-><code>xrange</code> (note that this <code>range</code>-><code>xrange</code> change is Python2 only)</p>
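<p>The same one-liner in Python 3, where <code>range</code> is already lazy:</p>

```python
lst = ['ret' + str(x) for x in range(-120, 241, 5) if x != 0]
print(lst[:3], lst[-1])  # ['ret-120', 'ret-115', 'ret-110'] ret240
print('ret0' in lst)     # False
```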
|
python|list
| 3 |
1,909,095 | 52,331,534 |
Python make list of tuples from two lists and a constant
|
<p>I would like to make a list of tuples using two lists and a constant, as follows. I'd like to generate <code>ob</code> in the fastest way possible; I need to make about a hundred thousand of these, so time matters...</p>
<p>On average, A and B each have a length of about 1000.</p>
<pre><code>constant = 3
A = [1,2,3]
B = [0.1,0.2,0.3]
ob = [(3,1,0.1),(3,2,0.2),(3,3,0.3)]
</code></pre>
<p>I know zip(A,B) can create a list of tuples, but I need the constant at the beginning. Here is the code I am using at the moment; I was wondering if there is a faster way to do this:</p>
<pre><code>ob = []
for i in xrange(len(A)):
a = A[i]
b = B[i]
ob.append((constant,a,b))
print ob
</code></pre>
|
<p>You could use <a href="https://docs.python.org/3/library/itertools.html#itertools.repeat" rel="nofollow noreferrer"><strong><code>itertools.repeat</code></strong></a>:</p>
<pre><code>from itertools import repeat
ob = list(zip(repeat(constant), A, B))
</code></pre>
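<p>Putting it together with the question's data:</p>

```python
from itertools import repeat

constant = 3
A = [1, 2, 3]
B = [0.1, 0.2, 0.3]
ob = list(zip(repeat(constant), A, B))  # repeat() is infinite; zip stops at the shortest input
print(ob)  # [(3, 1, 0.1), (3, 2, 0.2), (3, 3, 0.3)]
```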
|
python|list|tuples
| 4 |
1,909,096 | 51,928,090 |
Separate the numbers in a numpy array which are in a single quoted string separated by spaces
|
<p>I am trying to separate the pixel values of an image in Python, which are stored in a numpy array of 'object' data-type as one space-separated string, like this:</p>
<pre><code>['238 236 237 238 240 240 239 241 241 243 240 239 231 212 190 173 148 122 104 92 .... 143 136 132 127 124 119 110 104 112 119 78 20 17 19 20 23 26 31 30 30 32 33 29 30 34 39 49 62 70 75 90']
</code></pre>
<p>The shape of the numpy array comes out as 1.</p>
<p>There are a total of 784 numbers but I cannot access them individually.</p>
<p>I wanted something like:</p>
<p><code>[238, 236, 237, ......, 70, 75, 90]</code> of dtype int or float.</p>
<p>There are 1000 such numpy arrays like the one above.</p>
<p>Thanks in advance.</p>
|
<p>You can use <code>str.split</code></p>
<p><strong>Ex:</strong></p>
<pre><code>l = ['238 236 237 238 240 240 239 241 241 243 240 239 231 212 190 173 148 122 104 92 143 136 132 127 124 119 110 104 112 119 78 20 17 19 20 23 26 31 30 30 32 33 29 30 34 39 49 62 70 75 90']
print( list(map(int, l[0].split())) )
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[238, 236, 237, 238, 240, 240, 239, 241, 241, 243, 240, 239, 231, 212, 190, 173, 148, 122, 104, 92, 143, 136, 132, 127, 124, 119, 110, 104, 112, 119, 78, 20, 17, 19, 20, 23, 26, 31, 30, 30, 32, 33, 29, 30, 34, 39, 49, 62, 70, 75, 90]
</code></pre>
|
python|numpy
| 2 |
1,909,097 | 51,897,241 |
Importing Large csv (175 GB) to MySQL Server with Unusual Delimeters
|
<p>I have a 175 GB csv that I am trying to pull into MySQL.</p>
<p>The table is set up and formatted.</p>
<p>The problem is, the csv uses unorthodox delimeters and line seperators (both are 3 character strings, @%@ and @^@).</p>
<p>After a lot of trial and error I was able to get the process to start in HeidiSQL, but it would freeze up and never actually populate any data.</p>
<p>I would ideally like to use Python, but the parser only accepts 1-character line separators, making this tricky.</p>
<p>Does anyone have any tips on getting this to work?</p>
|
<p>The MySQL <code>LOAD DATA</code> statement will process a csv file with multi-character delimiters.</p>
<p>I'd expect something like this:</p>
<pre><code>LOAD DATA LOCAL INFILE '/dir/my_wonky.csv'
INTO TABLE my_table
FIELDS TERMINATED BY '@%@'
LINES TERMINATED BY '@^@'
( col1
, col2
, col3
)
</code></pre>
<p>I'd use a very small subset of the .csv file and do the load into a test table, just to get it working, make necessary adjustments, verify the results.</p>
<p>I would also want to break up the load into more manageable chunks, and avoid blowing out rollback space in the ibdata1 file. I would use something like <code>pt-fifo-split</code> (part of the Percona toolkit) to break the file up into a series of separate loads, but unfortunately, <code>pt-fifo-split</code> doesn't provide a way to specify the line delimiter character(s). To make use of that, we'd have to pre-process the file, to replace existing new line characters, and replace the line delimiter <code>@^@</code> with new line characters.</p>
<p>(If I had to load the whole file in a single shot, I'd do that into a MyISAM table, and not an InnoDB table, as a staging table. And I'd have a separate process that copied rows (in reasonably sized chunks) from the MyISAM staging table into the InnoDB table.)</p>
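<p>If you do pre-process the file in Python, the delimiter rewrite itself is a plain string replacement. A hypothetical sketch on a tiny sample (it assumes <code>@%@</code> and <code>@^@</code> never occur inside field values; for a 175 GB file you would stream it in chunks, keeping a small tail buffer so a delimiter split across chunk boundaries is not missed):</p>

```python
def convert(raw):
    # field delimiter -> tab, record delimiter -> newline
    return raw.replace("@%@", "\t").replace("@^@", "\n")

sample = "1@%@alice@%@3.5@^@2@%@bob@%@4.0@^@"
rows = convert(sample).splitlines()
print(rows)  # ['1\talice\t3.5', '2\tbob\t4.0']
```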
|
python|mysql|pandas|csv|heidisql
| 3 |
1,909,098 | 51,863,463 |
Identifying parameters in HTTP request
|
<p>I am fairly proficient in Python and have started exploring the requests library to formulate simple HTTP requests. I have also taken a look at Sessions objects that allow me to login to a website and -using the session key- continue to interact with the website through my account.</p>
<p>Here comes my problem: I am trying to build a simple API in Python to perform certain actions that I would be able to do via the website. However, I do not know how certain HTTP requests need to look like in order to implement them via the requests library.</p>
<p>In general, when I know how to perform a task via the website, how can I identify:</p>
<ul>
<li>the type of HTTP request (GET or POST will suffice in my case)</li>
<li>the URL, i.e where the resource is located on the server</li>
<li>the body parameters that I need to specify for the request to be successful</li>
</ul>
|
<p>This has nothing to do with python, but you can use a network proxy to examine your requests.</p>
<ol>
<li>Download a network proxy like Burpsuite</li>
<li>Setup your browser to route all traffic through Burpsuite (default is localhost:8080)</li>
<li>Deactivate packet interception (in the Proxy tab)</li>
<li>Browse to your target website normally</li>
<li>Examine the request history in Burpsuite. You will find every information you need</li>
</ol>
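<p>Once the proxy history shows you the method, URL, and body, reproducing the call with the <code>requests</code> library is mechanical. A hedged sketch; the endpoint and form fields below are placeholders, not a real site:</p>

```python
import requests

# Build the request without sending it, to inspect exactly what would go out.
req = requests.Request(
    "POST",
    "https://example.com/login",             # placeholder URL
    data={"user": "alice", "pw": "secret"},  # placeholder body parameters
)
prepared = req.prepare()

print(prepared.method, prepared.url)  # POST https://example.com/login
print(prepared.body)                  # user=alice&pw=secret
```

<p>To actually send it, pass <code>prepared</code> to <code>requests.Session().send()</code>.</p>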
|
python|api|http|python-requests
| 0 |
1,909,099 | 59,811,362 |
Is it possible to read files generated with C++ std::setw using Python csv reader?
|
<p>I have a data file generated using C++ <code>std::setw</code> e.g. </p>
<pre class="lang-cpp prettyprint-override"><code>file << std::scientific << std::setprecision(data_precision);
for (double data : a_data)
{
file << std::setw(data_width) << data;
}
file << "\n";
</code></pre>
<p>Is it possible to read the data using python csv.reader or similar? I have tried the following: </p>
<pre><code>with data as csvfile:
fieldreader = csv.reader(csvfile)
next(fieldreader)
for row in fieldreader:
values.append(float(row[0]))
</code></pre>
<p>which outputs the entire first row, indicating the whole row is stored as one entry. I have also tried a few different delimiters e.g. <code>\t</code> which didn't help.</p>
<p>Example output below: </p>
<pre class="lang-none prettyprint-override"><code># z phi phi1 Massless
-16.0000000 0.0000000 9.9901854997e-01 1.0910677716e-19
-16.0000000 0.0245437 9.9871759471e-01 1.6545142956e-05
-16.0000000 0.0490874 9.9781493216e-01 3.3051500271e-05
-16.0000000 0.0736311 9.9631097893e-01 4.9477653557e-05
-16.0000000 0.0981748 9.9420658732e-01 6.5784269579e-05
</code></pre>
|
<p>The <a href="https://docs.python.org/3/library/csv.html#csv.reader" rel="nofollow noreferrer"><em><code>csvfile</code></em> argument</a> to the <code>csv.reader</code> initializer "can be any object which supports the iterator protocol and returns a string each time its <code>next()</code> method is called".</p>
<p>This means you could read the file by defining a generator function like the one shown below to preprocess the lines of the file to make them acceptable to the <code>csv.reader</code>:</p>
<pre><code>import csv
def preprocess(file):
for line in file:
yield ','.join(line.split())
values = []
with open('cppfile.txt') as file:
fieldreader = csv.reader(preprocess(file))
next(fieldreader)
for row in fieldreader:
print(f'row={row}')
values.append(float(row[0]))
print()
print(values)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>row=['-16.0000000', '0.0000000', '9.9901854997e-01', '1.0910677716e-19']
row=['-16.0000000', '0.0245437', '9.9871759471e-01', '1.6545142956e-05']
row=['-16.0000000', '0.0490874', '9.9781493216e-01', '3.3051500271e-05']
row=['-16.0000000', '0.0736311', '9.9631097893e-01', '4.9477653557e-05']
row=['-16.0000000', '0.0981748', '9.9420658732e-01', '6.5784269579e-05']
[-16.0, -16.0, -16.0, -16.0, -16.0]
</code></pre>
|
python|c++|csv|setw
| 1 |