Dataset schema (column: dtype, min to max):
Unnamed: 0: int64, 0 to 1.91M
id: int64, 337 to 73.8M
title: string, length 10 to 150
question: string, length 21 to 64.2k
answer: string, length 19 to 59.4k
tags: string, length 5 to 112
score: int64, -10 to 17.3k
1,904,400
12,796,950
Error while installing MySQLdb for Python on Mac OSX 10.6.8 with mysql inside XAMPP
<p>I am trying to import MySQldb in python and call the python script from a php script in XAMPP. Here is what I did:</p> <p>Environment: 1. Mac OSX 10.6.8 2. Python version 2.6 (default)[64bit]</p> <p>Done so far: 1. Installed XAMPP 2. MySQL config path: /Applications/XAMPP/xamppfiles/bin/mysql_config 3. Downloaded MySQL-python-1.2.4b4 4. edited the site.cfg with config path for MySQL 5. Ran following commands </p> <p>sudo python setup.py clean python setup.py build</p> <p>Got the following error:</p> <pre><code>running build running build_py copying MySQLdb/release.py -&gt; build/lib.macosx-10.6-universal-2.6/MySQLdb running build_ext building '_mysql' extension gcc-4.2 -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -fwrapv -Os -Wall - Wstrict-prototypes -DENABLE_DTRACE -pipe -Dversion_info=(1,2,4,'beta',4) - D__version__=1.2.4b4 -I/Applications/XAMPP/xamppfiles/include/mysql - I/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6 -c _mysql.c -o build/temp.macosx-10.6-universal-2.6/_mysql.o -mmacosx-version-min=10.4 -arch i386 -arch ppc -D_P1003_1B_VISIBLE -DSIGNAL_WITH_VIO_CLOSE -DSIGNALS_DONT_BREAK_READ - DIGNORE_SIGHUP_SIGQUIT -DDONT_DECLARE_CXA_PURE_VIRTUAL In file included from _mysql.c:44: /Applications/XAMPP/xamppfiles/include/mysql/my_config.h:1053:1: warning: "HAVE_WCSCOLL" redefined In file included from /System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:8, from _mysql.c:29: </code></pre> <p>/System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyconfig.h:803:1 : warning: this is the location of the previous definition /usr/libexec/gcc/powerpc-apple-darwin10/4.2.1/as: assembler (/usr/bin/../libexec/gcc/darwin/ppc/as or /usr/bin/../local/libexec/gcc/darwin/ppc/as) for architecture ppc not installed Installed assemblers are: /usr/bin/../libexec/gcc/darwin/x86_64/as for architecture x86_64 /usr/bin/../libexec/gcc/darwin/i386/as for architecture i386 In file included from 
_mysql.c:44: /Applications/XAMPP/xamppfiles/include/mysql/my_config.h:1053:1: warning: "HAVE_WCSCOLL" redefined In file included from /System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/Python.h:8, from _mysql.c:29: /System/Library/Frameworks/Python.framework/Versions/2.6/include/python2.6/pyconfig.h:803:1: warning: this is the location of the previous definition _mysql.c:3131: fatal error: error writing to -: Broken pipe compilation terminated. lipo: can't open input file: /var/tmp//ccQsr7Lk.out (No such file or directory) error: command 'gcc-4.2' failed with exit status 1</p>
<p>On a RHEL/CentOS box you would first install the MySQL development headers, then the Python bindings:</p> <p><code>yum install mysql-devel</code></p> <p><code>pip install MySQL-python</code></p> <p>Note that <code>yum</code> does not exist on Mac OS X; there, point <code>site.cfg</code> at XAMPP's <code>mysql_config</code> (as you already did) and install with <code>pip</code> rather than building by hand.</p>
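The build log in the question shows gcc trying to emit a ppc slice whose assembler Snow Leopard's Xcode no longer ships ("assembler ... for architecture ppc not installed"). A commonly suggested workaround, which is an assumption on my part and not part of the answer above, is to restrict the build to the architectures that are actually installed before compiling:

```shell
# The ppc toolchain is gone on Snow Leopard; build only i386/x86_64.
export ARCHFLAGS="-arch i386 -arch x86_64"

# Make sure XAMPP's mysql_config is the one the build picks up
# (path taken from the question).
export PATH="/Applications/XAMPP/xamppfiles/bin:$PATH"

python setup.py clean
python setup.py build
sudo python setup.py install
```

These commands are environment-specific (they assume the XAMPP layout from the question) and are meant as a sketch, not a verified recipe.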
python|macos|xampp|osx-snow-leopard|mysql-python
0
1,904,401
24,553,760
Second loop is not executing in django template
<p>HI I have very strange problem .</p> <p>Here is my template </p> <pre><code>&lt;table class="table table-striped table-condensed tablesorter" id="myTable"&gt; &lt;thead&gt; &lt;tr&gt; &lt;th&gt;Store&lt;/th&gt; &lt;th&gt;Image&lt;/th&gt; &lt;th&gt;Price(USD)&lt;/th&gt; &lt;th&gt;Manufacturer&lt;/th&gt; &lt;th&gt;Model&lt;/th&gt; &lt;th&gt;Shipping&lt;/th&gt; &lt;th&gt;Replacement&lt;/th&gt; &lt;th&gt;Details&lt;/th&gt; &lt;/tr&gt; &lt;/thead&gt; &lt;tbody&gt; {% for x in result_amazon|slice:"1" %} {% if forloop.first %} &lt;tr&gt; &lt;td&gt; &lt;a href="" target="_blank"&gt; &lt;img height="85" width="110" src={% static "images/Amazon-Logo.jpg" %} alt=""&gt; &lt;/a&gt; &lt;/td&gt; &lt;td&gt;&lt;img src={{x.medium_image_url}} alt=""&gt;&lt;/td&gt; &lt;td&gt;&lt;strong&gt;&lt;span class="WebRupee"&gt;&lt;/span&gt; {% for y in x.list_price %} {% if y.price != 'None'%} {{y}} {% endif %} {% endfor %}&lt;/strong&gt; &lt;/td&gt; &lt;td&gt;{{x.manufacturer}}&lt;/td&gt; &lt;td&gt;{{x.model}}&lt;/td&gt; &lt;td&gt;Rs. 
99&lt;/td&gt; &lt;td&gt;Out of Stock&lt;/td&gt; &lt;td&gt; &lt;a href="{{x.detail_page_url}}" class="btn btn-mini btn-primary trackinfo" rel="7#@#17205" title="Visit Store" target="_blank"&gt; Visit Store &lt;/a&gt; &lt;/td&gt; &lt;/tr&gt; {% endif %} {% endfor %} {% for x in result_bestbuy.products %} {% if forloop.first %} &lt;tr&gt; &lt;td&gt; &lt;a href=""&gt; &lt;img height="85" width="110" src={% static "images/bestbuy.gif" %} alt=""&gt; &lt;/a&gt; &lt;/td&gt; &lt;td&gt;&lt;img style="height: 168px;" src={{x.image}} alt=""&gt;&lt;/td&gt; &lt;td&gt;&lt;strong&gt;&lt;span class="WebRupee"&gt;&lt;/span&gt;{{x.regularPrice}}&lt;/strong&gt;&lt;/td&gt; &lt;td&gt;{{x.manufacturer}}&lt;/td&gt; &lt;td&gt;{{x.modelNumber}}&lt;/td&gt; &lt;td&gt;{% if x.freeShipping %}Free Shipping {% else %}{{x.shippingCost }}{% endif %}&lt;/td&gt; &lt;td&gt;14 Days&lt;/td&gt; &lt;td&gt; &lt;a href="{{x.url}}" class="btn btn-mini btn-primary trackinfo" rel="27#@#17205" title="Visit Store" target="_blank"&gt; Visit Store &lt;/a&gt; &lt;/td&gt; &lt;/tr&gt; {% endif %} {% endfor %} {% for x in result_amazon %} {% if not forloop.first %} &lt;tr&gt; &lt;td&gt; &lt;a href=""&gt; &lt;/a&gt; &lt;img height="85" width="110" src={% static "images/Amazon-Logo.jpg" %} alt=""&gt; &lt;/a&gt; &lt;/td&gt; &lt;td&gt;&lt;img src={{x.medium_image_url}} alt=""&gt;&lt;/td&gt; &lt;td&gt;&lt;strong&gt;&lt;span class="WebRupee"&gt;&lt;/span&gt; {% for y in x.list_price %} {% if y.price != 'None'%} {{y}} {% endif %} {% endfor %} &lt;/strong&gt; &lt;/td&gt; &lt;td&gt;{{x.manufacturer}}&lt;/td&gt; &lt;td&gt;{{x.model}}&lt;/td&gt; &lt;td&gt;Rs. 
99&lt;/td&gt; &lt;td&gt;Out of Stock&lt;/td&gt; &lt;td&gt; &lt;a href="{{x.detail_page_url}}" class="btn btn-mini btn-primary trackinfo" rel="7#@#17205" title="Visit Store" target="_blank"&gt; Visit Store &lt;/a&gt; &lt;/td&gt; &lt;/tr&gt; {% endif %} {% endfor %} {% for x in result_bestbuy.products %} {% if not forloop.first %} &lt;tr&gt; &lt;td&gt; &lt;a href=""&gt; &lt;/a&gt; &lt;/td&gt; &lt;td&gt;&lt;img style="height: 168px;" src={{x.image}} alt=""&gt;&lt;/td&gt; &lt;td&gt;&lt;strong&gt;&lt;span class="WebRupee"&gt;&lt;/span&gt;{{x.regularPrice}}&lt;/strong&gt;&lt;/td&gt; &lt;td&gt;{{x.manufacturer}}&lt;/td&gt; &lt;td&gt;{{x.modelNumber}}&lt;/td&gt; &lt;td&gt;{% if x.freeShipping %}Free Shipping {% else %}{{x.shippingCost }}{% endif %}&lt;/td&gt; &lt;td&gt;14 Days&lt;/td&gt; &lt;td&gt; &lt;a href="{{x.url}}" class="btn btn-mini btn-primary trackinfo" rel="27#@#17205" title="Visit Store" target="_blank"&gt; Visit Store &lt;/a&gt; &lt;/td&gt; &lt;/tr&gt; {% endif %} {% endfor %} &lt;/tbody&gt; &lt;/table&gt; </code></pre> <p>I am trying to limit the first loop to execute only once initially then rest all has to be executed hence I have applied the <code>forloop.first</code> condition .</p> <p>My simple question is <code>"Why the third loop(result_amazon) is not printing any data"</code> (There are lot many data present in the result_amazon).</p> <p>Please help me out what might I am doing wrong here .</p>
<p>Check your browser's console for malformed HTML first. If the markup is fine, check the view: the third loop will print nothing if <code>result_amazon</code> is a generator or other one-shot iterator rather than a list, because the first loop already consumed it. Converting it with <code>list(result_amazon)</code> before adding it to the template context rules this out.</p>
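If result_amazon is a generator or other one-shot iterable (an assumption; the question does not show the view code), the symptom is reproducible in plain Python: the first loop consumes it and every later loop over the same name sees nothing.

```python
# Simulates a view result that is a generator (e.g. a lazy API wrapper).
def fetch_amazon_results():
    for i in range(3):
        yield {"model": "item-%d" % i}

result_amazon = fetch_amazon_results()

# The first template loop consumes the generator...
first_loop = [x["model"] for x in result_amazon]
# ...so a later loop over the same name finds it exhausted:
third_loop = [x["model"] for x in result_amazon]

# Materializing to a list before rendering avoids the problem:
result_list = list(fetch_amazon_results())
loop_a = [x["model"] for x in result_list]
loop_b = [x["model"] for x in result_list]
```

The fix in a Django view would be passing `list(...)` into the context so the template can iterate it any number of times.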
python|django|django-templates
0
1,904,402
41,125,121
ValueError, though a check has already been performed for this
<p>Getting a little stuck with NaN data. This program trawls through a folder in an external hard drive loads in a txt file as a dataframe, and should reads the very last value of the last column. As some of the last rows do not complete for what ever reason, i have chosen to take the row before (or that's what i hope to have done. Here is the code and I have commented the lines that I think are giving the trouble:</p> <pre><code>#!/usr/bin/env python3 import glob import math import pandas as pd import numpy as np def get_avitime(vbo): try: df = pd.read_csv(vbo, delim_whitespace=True, header=90) row = next(df.iterrows()) t = df.tail(2).avitime.values[0] return t except: pass def human_time(seconds): secs = seconds/1000 mins, secs = divmod(secs, 60) hours, mins = divmod(mins, 60) return '%02d:%02d:%02d' % (hours, mins, secs) def main(): path = 'Z:\\VBox_Backup\\**\\*.vbo' events = {} customers = {} for vbo_path in glob.glob(path, recursive=True): path_list = vbo_path.split('\\') event = path_list[2].upper() customer = path_list[3].title() avitime = get_avitime(vbo_path) if not avitime: # this is to check there is a number continue else: if event not in events: events[event] = {customer:avitime} print(event) elif customer not in events[event]: events[event][last_customer] = human_time(events[event][last_customer]) print(events[event][last_customer]) events[event][customer] = avitime else: total_time = events[event][customer] total_time += avitime events[event][customer] = total_time last_customer = customer events[event][customer] = human_time(events[event][customer]) df_events = pd.DataFrame(events) df.to_csv('event_track_times.csv') main() </code></pre> <p>I put in a line to check for a value, but I am guessing that NaN is not a null value, hence it hasn't quite worked.</p> <pre><code>C:\Users\rob.kinsey\AppData\Local\Continuum\Anaconda3) c:\Users\rob.kinsey\Pro ramming&gt;python test_single.py BARCELONA 03:52:42 02:38:31 03:21:02 00:16:35 00:59:00 00:17:45 
01:31:42 03:03:03 03:16:43 01:08:03 01:59:54 00:09:03 COTA 04:38:42 02:42:34 sys:1: DtypeWarning: Columns (0) have mixed types. Specify dtype option on import or set low_memory=False. 04:01:13 01:19:47 03:09:31 02:37:32 03:37:34 02:14:42 04:53:01 LAGUNA_SECA 01:09:10 01:34:31 01:49:27 03:05:34 02:39:03 01:48:14 SILVERSTONE 04:39:31 01:52:21 02:53:42 02:10:44 02:11:17 02:37:11 01:19:12 04:32:21 05:06:43 SPA Traceback (most recent call last): File "test_single.py", line 56, in &lt;module&gt; main() File "test_single.py", line 41, in main events[event][last_customer] = human_time(events[event][last_customer]) File "test_single.py", line 23, in human_time </code></pre> <p>The output is starting out correctly, except for the sys:1 error, but at least it carries on, and the final error that stalls the program completely. How can I get past this NaN issue, all variables I am working with should be of float data type or should have been ignored. All data types should only be strings or floats until the time conversion which are integers.</p>
<p>OK, even though no one answered, I feel compelled to answer my own question, as I doubt I am the only person who has hit this problem.</p> <p>There are three main reasons for receiving NaN in a DataFrame, and most of them revolve around infinity: using 'inf' as a value, or dividing by zero, will both produce NaN as a result. The wiki page was the most helpful resource for me in solving this issue: <a href="https://en.wikipedia.org/wiki/NaN" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/NaN</a></p> <p>One other important point about NaN is that it works a little like a virus: anything that touches it in any calculation will itself become NaN, so the problem can grow exponentially worse. What you are really dealing with is missing data, and until you realize that is what it is, NaN is the least useful and most frustrating thing, because it is a datatype, not an error, yet any mathematical operation on it ends in NaN. Beware!</p> <p>The cause on this occasion was that a fixed line number was used to locate the headers when reading in the csv file. Although that worked for the majority of the files, some of them had the headers I was after on a different line; as a result, the header row imported into the DataFrame was either part of the data itself or a null value. Trying to access a column in the DataFrame by header name therefore returned NaN, and, as discussed earlier, this proliferated through the program, causing several problems I had worked around. One workaround is actually acceptable, which is to add this line:</p> <pre><code>df = df.fillna(0) </code></pre> <p>after the first definition of the df variable, in this case:</p> <pre><code>df = pd.read_csv(vbo, delim_whitespace=True, header=90) </code></pre> <p>The bottom line is that if you are receiving this value, the best thing really is to work out why you are getting NaN in the first place; then it is much easier to make an informed decision about whether replacing NaN with 0 is a viable choice.</p> <p>I sincerely hope this helps anyone who finds it. Regards, iFunction</p>
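The "virus" behaviour, and the reason a plain truthiness check like the question's `if not avitime:` never catches NaN, can be shown with the standard library alone:

```python
import math

nan = float("nan")

# NaN propagates through every arithmetic operation:
poisoned = (nan + 1) * 2 - 100
assert math.isnan(poisoned)

# NaN is truthy, and it is the only value not equal to itself, so
# neither `if not x:` nor `x == float('nan')` will detect it:
assert bool(nan) is True
assert nan != nan

# A check that treats NaN (and None) as missing data:
def is_missing(value):
    return value is None or (isinstance(value, float) and math.isnan(value))
```

Using `is_missing(avitime)` instead of `if not avitime:` would have skipped the incomplete rows the question describes.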
python-3.x|nan
0
1,904,403
30,855,991
Django DRF with oAuth2 using DOT (django-oauth-toolkit)
<p>I am trying to make DRF work with oAuth2 (django-oauth-toolkit).</p> <p>I was focusing on <a href="http://httplambda.com/a-rest-api-with-django-and-oauthw-authentication/" rel="noreferrer">http://httplambda.com/a-rest-api-with-django-and-oauthw-authentication/</a></p> <p>First I followed that instruction, but later, after getting authentication errors, I setup this demo: <a href="https://github.com/felix-d/Django-Oauth-Toolkit-Python-Social-Auth-Integration" rel="noreferrer">https://github.com/felix-d/Django-Oauth-Toolkit-Python-Social-Auth-Integration</a></p> <p>Result was the same: I couldn't generate access token using this curl:</p> <pre><code>curl -X POST -d "grant_type=password&amp;username=&lt;user_name&gt;&amp;password=&lt;password&gt;" -u "&lt;client_id&gt;:&lt;client_secret&gt;" http://127.0.0.1:8000/o/token/ </code></pre> <p>I got this error:</p> <pre><code>{"error": "unsupported_grant_type"} </code></pre> <p>The oAuth2 application was set with grant_type password. I changed grant_type to "client credentials" and tried this curl:</p> <pre><code>curl -X POST -d "grant_type=client_credentials" -u "&lt;client_id&gt;:&lt;client_secret&gt;" http://127.0.0.1:8000/o/token/ </code></pre> <p>This worked and I got generated auth token.</p> <p>After that I tried to get a list of all beers:</p> <pre><code>curl -H "Authorization: Bearer &lt;auth_token&gt;" http://127.0.0.1:8000/beers/ </code></pre> <p>And I got this response: </p> <pre><code>{"detail":"You do not have permission to perform this action."} </code></pre> <p>This is the content of <strong>views.py</strong> that should show the beers:</p> <pre><code>from beers.models import Beer from beers.serializer import BeerSerializer from rest_framework import generics, permissions class BeerList(generics.ListCreateAPIView): serializer_class = BeerSerializer permission_classes = (permissions.IsAuthenticated,) def get_queryset(self): user = self.request.user return Beer.objects.filter(owner=user) def 
perform_create(self, serializer): serializer.save(owner=self.request.user) </code></pre> <p>I am not sure what can be the problem here. First with "unsuported grant type" and later with other curl call. This also happen to me when I did basic tutorial from django-oauth-toolkit. I am using Django 1.8.2 and python3.4</p> <p>Thanks for all help!</p> <p>My settings.py looks like this</p> <pre><code>import os BASE_DIR = os.path.dirname(os.path.dirname(__file__)) SECRET_KEY = 'hd#x!ysy@y+^*%i+klb)o0by!bh&amp;7nu3uhg+5r0m=$3x$a!j@9' DEBUG = True TEMPLATE_DEBUG = True ALLOWED_HOSTS = [] TEMPLATE_CONTEXT_PROCESSORS = ( 'django.contrib.auth.context_processors.auth', ) INSTALLED_APPS = ( 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'oauth2_provider', 'rest_framework', 'beers', ) MIDDLEWARE_CLASSES = ( 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.auth.middleware.SessionAuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ) AUTHENTICATION_BACKENDS = ( 'django.contrib.auth.backends.ModelBackend', ) ROOT_URLCONF = 'beerstash.urls' WSGI_APPLICATION = 'beerstash.wsgi.application' DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } LANGUAGE_CODE = 'en-us' TIME_ZONE = 'UTC' USE_I18N = True USE_L10N = True USE_TZ = True STATIC_URL = '/static/' REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': ( 'oauth2_provider.ext.rest_framework.OAuth2Authentication', ) } OAUTH2_PROVIDER = { # this is the list of available scopes 'SCOPES': {'read': 'Read scope', 'write': 'Write scope'} } </code></pre>
<p>I have tried the demo you mentioned and everything was fine.</p> <pre><code>$ curl -X POST -d &quot;grant_type=password&amp;username=superuser&amp;password=123qwe&quot; -u &quot;xLJuHBcdgJHNuahvER9pgqSf6vcrlbkhCr75hTCZ:nv9gzOj0BMf2cdxoxsnYZuRYTK5QwpKWiZc7USuJpm11DNtSE9X6Ob9KaVTKaQqeyQZh4KF3oZS4IJ7o9n4amzfqKJnoL7a2tYQiWgtYPSQpY6VKFjEazcqSacqTx9z8&quot; http://127.0.0.1:8000/o/token/ {&quot;access_token&quot;: &quot;jlLpKwzReB6maEnjuJrk2HxE4RHbiA&quot;, &quot;token_type&quot;: &quot;Bearer&quot;, &quot;expires_in&quot;: 36000, &quot;refresh_token&quot;: &quot;DsDWz1LiSZ3bd7NVuLIp7Dkj6pbse1&quot;, &quot;scope&quot;: &quot;read write groups&quot;} $ curl -H &quot;Authorization: Bearer jlLpKwzReB6maEnjuJrk2HxE4RHbiA&quot; http://127.0.0.1:8000/beers/ [] </code></pre> <p>In your case, I think, you have created an application with the wrong <em>&quot;Authorization grant type&quot;</em>.</p> <p>Use these application settings:</p> <pre><code>Name: just a name of your choice Client Type: confidential Authorization Grant Type: Resource owner password-based </code></pre> <p>This <a href="https://django-oauth-toolkit.readthedocs.org/en/latest/rest-framework/getting_started.html#step-3-register-an-application" rel="nofollow noreferrer">https://django-oauth-toolkit.readthedocs.org/en/latest/rest-framework/getting_started.html#step-3-register-an-application</a> helped me a lot.</p> <p>Here is the database file I've created: <a href="https://www.dropbox.com/s/pxeyphkiy141i1l/db.sqlite3.tar.gz?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/pxeyphkiy141i1l/db.sqlite3.tar.gz?dl=0</a></p> <p>You can try it yourself. No source code changed at all. Django admin username - superuser, password - 123qwe.</p>
python|django|django-rest-framework
13
1,904,404
40,316,476
Codewars: Solving Kata - Highest and Lowest
<p>This should work but I'm getting errors when running the test cases. For some reason the fourth one fails. numbers[0] prints out '-1' but after assigning to highest_number or lowest_number only the '-' prints out. What gives?</p> <p>Code:</p> <pre><code>def high_and_low(numbers): if numbers: highest_number = numbers[0] lowest_number = numbers[0] numbers = numbers.split(" ") print(highest_number) print(lowest_number) print(numbers[0]) for num in numbers: if int(num) &gt; int(highest_number): highest_number = num if int(num) &lt; int(lowest_number): lowest_number = num return highest_number + " " + lowest_number </code></pre> <p>Test Cases:</p> <pre><code>Test.assert_equals(high_and_low("4 5 29 54 4 0 -214 542 -64 1 -3 6 -6"), "542 -214"); Test.assert_equals(high_and_low("1 -1"), "1 -1"); Test.assert_equals(high_and_low("1 1"), "1 1"); Test.assert_equals(high_and_low("-1 -1"), "-1 -1"); Test.assert_equals(high_and_low("1 -1 0"), "1 -1"); Test.assert_equals(high_and_low("1 1 0"), "1 0"); Test.assert_equals(high_and_low("-1 -1 0"), "0 -1"); Test.assert_equals(high_and_low("42"), "42 42"); </code></pre> <p>Error:</p> <pre><code>ValueError: invalid literal for int() with base 10: '-' </code></pre>
<p>Split your numbers first, else you are just assigning the first <em>character</em> of <code>numbers</code> to your variables:</p> <pre><code> numbers = numbers.split(" ") highest_number = numbers[0] lowest_number = numbers[0] </code></pre>
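With the split moved before the indexing, the question's function passes its own test cases; a sketch with the same variable names:

```python
def high_and_low(numbers):
    # Split first: numbers[0] on the raw string would be a single
    # character such as '-', which int() cannot parse.
    numbers = numbers.split(" ")
    highest_number = numbers[0]
    lowest_number = numbers[0]
    for num in numbers:
        if int(num) > int(highest_number):
            highest_number = num
        if int(num) < int(lowest_number):
            lowest_number = num
    return highest_number + " " + lowest_number
```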
python
1
1,904,405
40,098,489
How to Scrape Page HTML and follow Next Link in Selenium
<p>I'm trying to scrape a website for research, and I'm stuck. I want the scraper to read the page source and append it to a local HTML file so I can analyze the data off-campus. I have experimented with <code>BeautifulSoup</code> and <code>Scrapy</code>, but I have found that I need to use <code>Selenium</code> to interact with the page to navigate through my university's authentication system. (I'm not including that code, because it's tangential to my question.)</p> <p>When I run the script it navigates to the page and clicks the link, but it only saves the first page's HTML. It then duplicates and appends that page's HTML every time it clicks on the link.</p> <p>How do I use <code>Selenium</code> to click on the next page link, scrape the HTML, and save to file until I reach the last page?</p> <pre><code>source = driver.page_source while True: with open("test.html", "a") as TestFile: TestFile.write(source) try: driver.implicitly_wait(200) driver.find_element_by_css_selector('li.next').click() except AttributeError: break </code></pre> <p>Edit: I added the <code>AttributeError</code> to the except clause and received the following error.</p> <blockquote> <p>selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document</p> </blockquote> <p>My assumption is that I need to slow down the <code>.click()</code>, which is why I originally had the implicit wait, but that doesn't seem to be doing the trick.</p>
<p>You need to assign <code>page_source</code> to the <code>source</code> variable inside the while loop, after each click, so that every iteration writes the newly loaded page instead of the first one:</p> <pre><code>from selenium.common.exceptions import NoSuchElementException, StaleElementReferenceException source = driver.page_source while True: with open("test.html", "a") as TestFile: TestFile.write(source) try: driver.find_element_by_css_selector('li.next').click() source = driver.page_source except (NoSuchElementException, StaleElementReferenceException): break </code></pre> <p>Two further notes: Selenium signals a missing next link with <code>NoSuchElementException</code>, not <code>AttributeError</code>, so catch the Selenium exceptions to end the loop cleanly; and <code>implicitly_wait</code> sets a session-wide default, so call it once after creating the driver rather than on every iteration.</p>
python|selenium
0
1,904,406
29,263,485
TypeError: unsupported operand type(s) for -: 'int' and 'main'
<p>I am trying to use the value of a click event in calculations. Whenever I try to convert it to an int, it throws this error:</p> <pre><code>TypeError: unsupported operand type(s) for -: 'int' and 'main' </code></pre> <p>Here is a piece of the code that gets the error</p> <pre><code>def goto(self, event): self.ex = int(event.x) self.ey = int(event.y) self.find_distance(self.ex, self.ey) def find_distance(xclick, yclick, self): #distance formula = sqrt((x2 - x1)^2 + (y2 - y1)^2) self.xadd = (xPos - xclick)^2 self.yadd = (yPos - yclick)^2 self.step2 = self.xadd + self.yadd print sqrt(step2) </code></pre>
<p>The error message names your class: <code>'int' and 'main'</code> means an <code>int</code> minus an instance of <code>main</code>. Look at the signature: <code>def find_distance(xclick, yclick, self):</code>. Python always passes the instance as the <em>first</em> positional argument of a method, so here the instance is bound to <code>xclick</code>, <code>self.ex</code> to <code>yclick</code>, and <code>self.ey</code> to <code>self</code>. <code>(xPos - xclick)</code> then subtracts the instance from an int, which is exactly the reported error. Move <code>self</code> to the front: <code>def find_distance(self, xclick, yclick):</code>. Two smaller issues: <code>^</code> is bitwise XOR in Python, so use <code>**2</code> to square; and <code>print sqrt(step2)</code> should be <code>print sqrt(self.step2)</code>. When in doubt, <code>print(type(event.x))</code> and <code>print(type(xclick))</code> will show you what you actually received.</p>
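A minimal reproduction and fix (class and attribute names are illustrative, not taken from the question's full code): with `self` listed last, Python still binds the instance to the first parameter, so the arithmetic mixes an int with the instance.

```python
import math

class Main:
    def __init__(self):
        self.x_pos, self.y_pos = 10, 20

    # Broken signature from the question: the instance lands in `xclick`.
    def find_distance_broken(xclick, yclick, self):
        return (10 - xclick) ** 2  # int - instance -> TypeError

    # Fixed: `self` first, and ** (not ^, which is XOR) to square.
    def find_distance(self, xclick, yclick):
        return math.sqrt((self.x_pos - xclick) ** 2 + (self.y_pos - yclick) ** 2)

m = Main()
try:
    m.find_distance_broken(7, 16)
    broken_raised = False
except TypeError:
    broken_raised = True

distance = m.find_distance(7, 16)  # 3-4-5 triangle
```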
python|python-2.7|tkinter|tkinter-canvas
1
1,904,407
52,055,253
Is the EventSocket library working with Python 3?
<p>I would like to ask whether the <a href="https://pypi.org/project/eventsocket/" rel="nofollow noreferrer">EventSocket</a> library works with Python 3. If not, is Python 3 support planned?</p> <p>Best regards.</p>
<p>I received a response from the author (Alexandre Fiori) that Python 3 support is not yet planned.</p> <p>Cheers</p>
python-3.x|twisted
0
1,904,408
51,566,014
pandas df, creating column of lists with input from two other columns
<p>example df:</p> <pre><code>id start end a 2018-04-01 2018-04-03 b 2018-04-01 2018-04-03 c 2018-04-02 2018-04-03 </code></pre> <p>ideal output A</p> <pre><code>id start end lst a 2018-04-01 2018-04-03 [2018-04-01, 2018-04-02, 2018-04-03] b 2018-04-01 2018-04-03 [2018-04-01, 2018-04-02, 2018-04-03] c 2018-04-02 2018-04-03 [2018-04-02, 2018-04-03] </code></pre> <p>What I have so far (doesn't work)</p> <pre><code>def gen_day_list(s1, s2): for d1 in s1: for d2 in s2: delta = d2 - d1 for i in range(delta.days + 1): return (d1 + dt.timedelta(i)) df[date_list] = df.apply(gen_day_list(df['date1'], df['date2'])) </code></pre> <p>Once I get the ideal output A, I would then try to run the following code to get to ideal output B</p> <pre><code>lst1 = ['a','b','c'] lst2 = ['b','c','d'] lst3 = ['c','d','e'] comp_lst = lst1 + lst2 +lst3 from collections import Counter Counter(comp_lst) </code></pre> <p>ideal output B</p> <pre><code>Counter({'a': 1, 'b': 2, 'c': 3, 'd': 2, 'e': 1}) Counter({'2018-04-01': 2, '2018-04-02': 3, '2018-04-03': 3}) </code></pre> <p>Any help would be greatly appreciated!</p>
<p>IIUC</p> <pre><code>df['lst']=[pd.date_range(start=x,end=y,freq='D').date.astype(str).tolist() for x , y in zip(df.start,df.end)] Counter(sum(df['lst'].tolist(),[])) Out[327]: Counter({'2018-04-01': 2, '2018-04-02': 3, '2018-04-03': 3}) </code></pre>
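The same expansion and the Counter for ideal output B can also be sketched with the standard library, if a pandas `date_range` one-liner is not an option (values taken from the example df):

```python
from collections import Counter
from datetime import date, timedelta

rows = [
    {"id": "a", "start": date(2018, 4, 1), "end": date(2018, 4, 3)},
    {"id": "b", "start": date(2018, 4, 1), "end": date(2018, 4, 3)},
    {"id": "c", "start": date(2018, 4, 2), "end": date(2018, 4, 3)},
]

def day_list(start, end):
    # Inclusive list of ISO-formatted dates from start to end.
    return [(start + timedelta(days=i)).isoformat()
            for i in range((end - start).days + 1)]

for row in rows:
    row["lst"] = day_list(row["start"], row["end"])

date_counts = Counter(d for row in rows for d in row["lst"])
```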
python|python-2.7|pandas|dataframe
0
1,904,409
51,879,782
Passing a python object from a python2 script to python3 script
<p>I ran into an issue where my current traffic generator, Warp17, ships generated Python code that is only Python 2 compatible. We are moving everything else to Python 3, so I have created a CLI tool that wraps our Warp17 API. Is there a better way to pass information from a subscript than just printing the output and loading it as JSON?</p> <pre><code>#!/usr/bin/python import python2_api if __name__ == "__main__": do_some_argparse() result = dict(run_some_code_on_args()) print(result) </code></pre> <pre><code>def run_warp_cli(*args): result = subprocess.Popen('warp_cli', args) return json.loads(result) </code></pre> <p>This is basically what I am doing. I was curious whether there is an interesting way to avoid this and get back an actual Python dictionary object.</p>
<p>The binary representation of dictionaries, and in fact objects in general, has changed quite a bit between Python 2 and Python 3. That means that you will not be able to use the objects directly between versions of Python. Some form of serialization to a mutually acceptable format will be necessary.</p> <p>The JSON format you are using fits this criterion. I would recommend sticking with it as it is flexible, human readable and quite general.</p> <p>If space becomes an issue, you can look into pickling as an alternative, but I would not recommend it as a first choice. While <a href="https://docs.python.org/3/library/pickle.html#pickle.load" rel="nofollow noreferrer"><code>pickle.load</code></a> can understand Python 2 data just fine, there are many caveats to keep in mind with different interpretations of datatypes between Python versions, especially <code>bytes</code>: <a href="https://stackoverflow.com/q/28218466/2988730">Unpickling a python 2 object with python 3</a></p> <p>Here is a resource that may help you with some common errors from unpickling on the Python 3 side: <a href="https://stackoverflow.com/q/46001958/2988730">TypeError: a bytes-like object is required, not &#39;str&#39; when opening Python 2 Pickle file in Python 3</a></p>
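A self-contained sketch of the JSON handoff the answer endorses. The Python 2 wrapper is simulated with an inline child script; in the real setup the command would be the python2 interpreter running warp_cli. Note that printing a dict's repr, as in the question's first snippet, is not valid JSON (single quotes); use json.dumps on the sending side.

```python
import json
import subprocess
import sys

# Stand-in for the python2 wrapper: any process that prints JSON on stdout.
CHILD = "import json; print(json.dumps({'tx_packets': 100, 'rx_packets': 100}))"

def run_child():
    completed = subprocess.run(
        [sys.executable, "-c", CHILD],
        capture_output=True, text=True, check=True,
    )
    # json.loads turns the child's printed output back into a real dict.
    return json.loads(completed.stdout)

stats = run_child()
```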
python|python-3.x|python-2.7
1
1,904,410
69,013,936
Python rounding error on very simple calculation
<p>Yes, I know about floats and I have read the replies to other similar questions (including the link about what every programmer should know about floating point).</p> <p>I understand that float calculations might go wrong, but I was surprised that the following actually happened:</p> <pre><code>Python 3.9.6 (default, Jun 30 2021, 10:22:16) &gt;&gt;&gt; round(2.45, 1) 2.5 # correct &gt;&gt;&gt; round(2.25, 1) 2.2 # wrong &gt;&gt;&gt; round(3.25, 1) 3.2 # wrong </code></pre> <p>I was (still am) bitten by this and I am surprised that this super simple thing does not work in Python.</p> <p>Is this expected behaviour? Do I really need the <code>decimal</code> module to deal with these simple calculations?</p>
<blockquote> <p>[...] if two multiples are equally close, rounding is done toward the even choice (so, for example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2).</p> </blockquote> <p><a href="https://docs.python.org/3/library/functions.html#round" rel="nofollow noreferrer">https://docs.python.org/3/library/functions.html#round</a></p>
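Two effects combine here: round-half-to-even applies on true ties, and binary representation decides whether a literal is a tie at all. The decimal module gives school-book rounding when that is what is required:

```python
from decimal import Decimal, ROUND_HALF_UP

# 2.25 and 3.25 are exactly representable in binary, so they are real
# ties, and ties go to the even last digit:
assert round(2.25, 1) == 2.2
assert round(3.25, 1) == 3.2

# 2.45 is NOT exactly representable; it is stored slightly above 2.45,
# so it is not a tie at all and simply rounds up:
assert Decimal(2.45) > Decimal("2.45")
assert round(2.45, 1) == 2.5

# School-book "halves always round up", via Decimal built from a string:
def round_half_up(value, exponent="0.1"):
    return Decimal(value).quantize(Decimal(exponent), rounding=ROUND_HALF_UP)
```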
python|floating-point|rounding
1
1,904,411
63,477,477
Django upload FileField and ImageField to different servers
<p>I want to make a system where user can upload document files and also images <em>(both for different tasks)</em></p> <p>and i want to store the files in my own ftp server and images in s3 bucket.</p> <p>i am using <code>django-storages</code> package</p> <p>never saw a django approach like this, where <code>FileField</code> and <code>ImageField</code>s can be uploaded to different servers</p> <p>for example, let's say when user uploads a file the file gets uploaded to my ftp server</p> <pre><code>FTP_USER = 'testuser'#os.environ['FTP_USER'] FTP_PASS = 'testpassword'#os.environ['FTP_PASS'] FTP_PORT = '21'#os.environ['FTP_PORT'] DEFAULT_FILE_STORAGE = 'storages.backends.ftp.FTPStorage' FTP_STORAGE_LOCATION = 'ftp://' + FTP_USER + ':' + FTP_PASS + '@192.168.0.200:' + FTP_PORT # Static files (CSS, JavaScript, Images) # https://docs.djangoproject.com/en/1.11/howto/static-files/ STATIC_URL = '/static/' STATICFILES_DIRS = [ os.path.join(BASE_DIR, &quot;static_my_proj&quot;), ] STATIC_ROOT = os.path.join(BASE_DIR, &quot;static_cdn&quot;, &quot;static_root&quot;) MEDIA_URL = 'ftp://192.168.0.200/' MEDIA_ROOT = 'ftp://192.168.0.200/'#os.path.join(BASE_DIR, &quot;static_cdn&quot;, &quot;media_root&quot;) </code></pre> <p><em>but problem is images now goto ftp server also</em></p> <p><strong>because of this</strong></p> <pre><code>DEFAULT_FILE_STORAGE = 'storages.backends.ftp.FTPStorage' </code></pre> <p><strong>yeah i know i can make different directories inside uploaded server root directory like this</strong></p> <pre><code>def get_filename_ext(filepath): base_name = os.path.basename(filepath) name, ext = os.path.splitext(base_name) return name, ext def upload_image_path(instance, filename): # print(instance) #print(filename) new_filename = random.randint(1,3910209312) name, ext = get_filename_ext(filename) final_filename = '{new_filename}{ext}'.format(new_filename=new_filename, ext=ext) return &quot;myapp/{new_filename}/{final_filename}&quot;.format( 
new_filename=new_filename, final_filename=final_filename ) class Product(models.Model): title = models.CharField(max_length=120) slug = models.SlugField(blank=True, unique=True) document = models.FileField(upload_to=upload_image_path, null=True, blank=True) def get_absolute_url(self): #return &quot;/products/{slug}/&quot;.format(slug=self.slug) return reverse(&quot;detail&quot;, kwargs={&quot;slug&quot;: self.slug}) def __str__(self): return self.title </code></pre> <p><strong>but that's not what i want, i want different servers for file and image uploads</strong></p> <p>is that even possible ? i mean there can be only one <code>MEDIA_ROOT</code> so how can i write <em>two server addresses</em>, am i making sense ?</p> <p><strong>EDIT 1:</strong></p> <p><a href="https://stackoverflow.com/users/548562/iain-shelvington">iain shelvington</a> mentioned a great point, that to add storage option for each field for customized storage backend</p> <p>like this</p> <pre><code>from storages.backends.ftp import FTPStorage fs = FTPStorage() class FTPTest(models.Model): file = models.FileField(upload_to='srv/ftp/', storage=fs) class Document(models.Model): docfile = models.FileField(upload_to='documents') </code></pre> <p>and in settings this</p> <pre><code>DEFAULT_FILE_STORAGE = 'storages.backends.ftp.FTPStorage' FTP_STORAGE_LOCATION = 'ftp://user:password@localhost:21 </code></pre> <p>but user uploaded photos also gets uploaded to that ftp server due tp</p> <pre><code>DEFAULT_FILE_STORAGE = 'storages.backends.ftp.FTPStorage' </code></pre> <p>and about the <code>MEDIA_URL</code> and <code>MEDIA_ROOT</code> they can be only one right ? so how can i put two different server address there ?</p> <p>Thanks for reading this, i really appreciate it.</p>
<p>You can set a <strong><code>base_url</code></strong> in the <strong><code>FTPStorage</code></strong> class, as in:</p> <pre><code>from storages.backends.ftp import FTPStorage from django.conf import settings fs = FTPStorage(<b>base_url=settings.FTP_STORAGE_LOCATION</b>) class FTPTest(models.Model): file = models.FileField(upload_to='srv/ftp/', storage=fs) class Document(models.Model): docfile = models.FileField(upload_to='documents')</code></pre> <p>This <code>base_url</code> is used for <em><strong>building the &quot;absolute URL&quot; of the file</strong></em>, and hence you don't need <em><code>MEDIA_URL</code></em> and <em><code>MEDIA_ROOT</code></em> <em>in this case</em>.</p> <hr /> <blockquote> <p>I want different servers for file and image uploads.</p> </blockquote> <p>You can achieve it by specifying the <strong><code>storage</code></strong> parameter of the <code>FileField</code> (or <code>ImageField</code>).</p> <blockquote> <p>but user uploaded photos also gets uploaded to that ftp server due to, <code>DEFAULT_FILE_STORAGE</code> settings.</p> </blockquote> <p>That is your choice: what should the <em>default storage class</em> be in your case? Do you need the media files uploaded to S3, or to local storage where the project runs? Set the value accordingly.</p>
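One way to route the two field types to two different servers without relying on `DEFAULT_FILE_STORAGE` at all is to give each field its own storage instance. The following is only a sketch, not a tested configuration: the host, credentials, and bucket name are placeholders, and it assumes `django-storages` with its S3 backend installed.

```python
# settings.py -- placeholder values, adjust to your environment
FTP_STORAGE_LOCATION = 'ftp://testuser:testpassword@192.168.0.200:21'
AWS_STORAGE_BUCKET_NAME = 'my-image-bucket'   # hypothetical bucket name

# models.py
from django.db import models
from storages.backends.ftp import FTPStorage
from storages.backends.s3boto3 import S3Boto3Storage

ftp_storage = FTPStorage()      # picks up FTP_STORAGE_LOCATION from settings
s3_storage = S3Boto3Storage()   # picks up the AWS_* settings

class Product(models.Model):
    # Documents go to the FTP server...
    document = models.FileField(upload_to='documents/', storage=ftp_storage)
    # ...while images go to S3; the per-field storage overrides the default,
    # so DEFAULT_FILE_STORAGE is never consulted for these two fields.
    image = models.ImageField(upload_to='images/', storage=s3_storage)
```

Since each field carries its own storage, there is no need for two `MEDIA_ROOT` values: `MEDIA_URL`/`MEDIA_ROOT` only matter for fields that fall back to the default storage.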
python|python-3.x|django|django-models|django-views
2
1,904,412
36,487,916
Pandas DataFrame from dictionary with nested lists of dictionaries
<pre><code>my_dict = { 'company_a': [], 'company_b': [ {'gender': 'Male', 'investor': True, 'name': 'xyz', 'title': 'Board Member'} ], 'company_c': [], 'company_m': [ {'gender': 'Male', 'investor': None, 'name': 'abc', 'title': 'Advisor'}, {'gender': 'Male', 'investor': None, 'name': 'opq', 'title': 'Advisor'} ], 'company_x': [], 'company_y': [] } </code></pre> <p>How do I convert the above Python dictionary to a Pandas dataframe with these columns: <code>company, gender, investor, name, title</code></p> <p>The column <code>company</code> will be populated by the top-level keys of <code>my_dict</code>. The other columns will be populated with the values in the dictionaries within the arrays.</p> <p>I've tried <code>pd.DataFrame.from_dict(my_dict, orient='index')</code>, but it doesn't give me what I want.</p>
<p>This version fills all missing values with <code>None</code>:</p> <pre><code>data = {'company': [], 'gender': [], 'investor': [], 'name': [], 'title': []} for k, v in my_dict.items(): for entry in v: data['company'].append(k) if not v: data['company'].append(k) for name in ['gender', 'investor', 'name', 'title']: has_entry = False for entry in v: has_entry = True data[name].append(entry.get(name)) if not has_entry: data[name].append(None) df = pd.DataFrame(data) print(df) </code></pre> <p>Output:</p> <pre><code> company gender investor name title 0 company_a None None None None 1 company_y None None None None 2 company_b Male True xyz Board Member 3 company_c None None None None 4 company_x None None None None 5 company_m Male None abc Advisor 6 company_m Male None opq Advisor </code></pre> <p>You can also replace all <code>None</code> with <code>NaN</code>:</p> <pre><code>print(df.fillna(np.nan)) </code></pre> <p>Output:</p> <pre><code> company gender investor name title 0 company_a NaN NaN NaN NaN 1 company_y NaN NaN NaN NaN 2 company_b Male True xyz Board Member 3 company_c NaN NaN NaN NaN 4 company_x NaN NaN NaN NaN 5 company_m Male NaN abc Advisor 6 company_m Male NaN opq Advisor </code></pre>
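An equivalent, arguably more direct route is to build a list of row dicts and let the `DataFrame` constructor fill the gaps (a sketch over the question's input; fields missing from a row come out as `NaN` automatically):

```python
import pandas as pd

my_dict = {
    'company_a': [],
    'company_b': [{'gender': 'Male', 'investor': True, 'name': 'xyz', 'title': 'Board Member'}],
    'company_c': [],
    'company_m': [{'gender': 'Male', 'investor': None, 'name': 'abc', 'title': 'Advisor'},
                  {'gender': 'Male', 'investor': None, 'name': 'opq', 'title': 'Advisor'}],
    'company_x': [],
    'company_y': [],
}

# One row per person; an empty list still yields one row for the company.
rows = []
for company, people in my_dict.items():
    if people:
        rows.extend({'company': company, **person} for person in people)
    else:
        rows.append({'company': company})

df = pd.DataFrame(rows, columns=['company', 'gender', 'investor', 'name', 'title'])
```

The `columns` argument fixes the column order and guarantees all five columns exist even if no row supplies a given key.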
python|dictionary|pandas|dataframe
3
1,904,413
19,674,441
How to fill a web form on a page that uses JavaScript
<p>I'm trying to write a script that automatically fills a web form using <code>Mechanize</code>. However, the target page uses JavaScript and <code>Mechanize</code> won't find the form I need. Is there any way to get a form out of a <code>div</code> that only displays if JavaScript is enabled and fill it out?</p>
<p>You can analyse the JavaScript (can you post it?) and see if you can solve the problem by scraping a different URL, or reimplementing the logic yourself in Python. If what you really need is to execute JavaScript within the DOM programmatically, try <a href="http://phantomjs.org/" rel="nofollow">PhantomJS</a>, the headless web stack.</p>
javascript|python|html|mechanize
2
1,904,414
17,083,958
Pygame not working properly on ubuntu
<p>My pygame code:</p> <pre><code>bif="images.jpg" mif="point.png" import pygame, sys from pygame.locals import * pygame.init() screen=pygame.display.set_mode((640,360),0,32) background=pygame.image.load(bif).convert() mouse_c=pygame.image.load(mif).convert_alpha() while True: for event in pygame.event.get(): if event.type == QUIT: pygame.quit() sys.exit() screen.blit(background, (0,0)) </code></pre> <p>Error message:</p> <pre><code>Traceback (most recent call last): File "game.py", line 8, in &lt;module&gt; background=pygame.image.load(bif).convert() pygame.error: File is not a Windows BMP file </code></pre> <p>The same code works on a different machine but doesn't work on my machine.</p> <p>Can anyone please help me solve this problem?</p> <p>How can I completely uninstall Python from my Ubuntu machine and then reinstall it?</p>
<p>The error message (&quot;File is not a Windows BMP file&quot;) means your pygame build only has basic image support: without the SDL_image library, pygame can load nothing but uncompressed BMP files. You can check this at runtime with <code>pygame.image.get_extended()</code>, which returns a true value when extended formats are available. A full build loads PNG reliably and usually JPEG as well, though JPEG is the format most often missing. If you want to be absolutely sure that the image will load properly, convert it to PNG (or an uncompressed BMP) in any number of ways, for example using an image editor to save it in a different format, or format conversion software. Alternatively, reinstall pygame on this machine with SDL_image support, which is far simpler than reinstalling Python itself.</p>
python|pygame
1
1,904,415
16,872,673
IPython: Protecting {} brackets in shell invocations
<p>I've been playing a little with IPython and its ability to execute shell commands prefixed with a <code>!</code>. I've run into a problem that can be illustrated by the following example:</p> <pre><code>In [1]: filename="mytxtfile.txt" In [2]: !echo $filename mytxtfile.txt In [3]: !echo ${filename}.bak .txt.bak In [4]: !echo ${filename} .txt </code></pre> <p>I was under the impression that wrapping the name of the python variable in <code>{...}</code> would allow me to append something to it without a whitespace in between. Appending itself works, but apparently <code>${filename}</code> is <strong>different</strong> from <code>$filename</code> in IPython.</p> <p><strong>Why is that and how would I append something to the value of a python variable during a shell invocation?</strong></p>
<p>I've found my mistake: There is no <code>$</code> in front of the protecting <code>{...}</code>.</p> <pre><code>In [5]: !echo {filename}.bak mytxtfile.txt.bak </code></pre> <p>Reference: <a href="http://ipython.org/ipython-doc/stable/interactive/reference.html#manual-capture-of-command-output" rel="nofollow">http://ipython.org/ipython-doc/stable/interactive/reference.html#manual-capture-of-command-output</a></p>
ipython
2
1,904,416
9,044,056
How to do implement a paging solution with a dict object in Python
<p>Is there a better way to implement a paging solution using <code>dict</code> than this?</p> <p>I have a <code>dict</code> with image names and URLs. I need to fetch 16 key-value pairs at a time, depending on the user's request, i.e. the page number. It's a kind of paging solution. I can implement this like:</p> <p>For example:</p> <pre><code>dict = {'g1':'first', 'g2':'second', ... } </code></pre> <p>Now I can create a mapping of the keys to numbers using:</p> <pre><code>ordered={} for i, j in enumerate(dict): ordered[i]=j </code></pre> <p>And then retrieve them:</p> <pre><code>dicttosent={} for i in range(pagenumber, pagenumber+16): dicttosent[ordered[i]] = dict[ordered[i]] </code></pre> <p>Is this a proper method, or will this give random results?</p>
<ul> <li>Store <em>g1</em>, <em>g2</em>, etc in a list called <em>imagelist</em></li> <li>Fetch the pages using <code>imagelist[pagenumber: pagenumber+16]</code>.</li> <li>Use your original dict (image numbers to urls) to lookup the url for each of those 16 imagenames.</li> </ul>
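The three bullets above can be sketched as follows (a minimal illustration with made-up data; the page size of 16 comes from the question):

```python
images = {'g1': 'first', 'g2': 'second', 'g3': 'third', 'g4': 'fourth'}

# Freeze an explicit ordering once; slicing a list is stable,
# unlike iterating over a (pre-3.7) dict.
imagelist = sorted(images)

PAGE_SIZE = 16

def get_page(page_number, page_size=PAGE_SIZE):
    """Return the {name: url} mapping for a 0-based page number."""
    start = page_number * page_size
    names = imagelist[start:start + page_size]
    return {name: images[name] for name in names}
```

`get_page(0)` returns the first 16 entries; an out-of-range page simply yields an empty dict, because list slices never raise.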
python|dictionary|paging
3
1,904,417
39,231,977
keras: multiple w_constraints
<p>I want a set of weights to be constrained to have a fixed norm (as in <code>unitnorm</code>) and non-negative values (as in <code>nonneg</code>). This pair of constraints is useful in some kinds of optical modeling.</p> <p>I'm not a Python expert, so I tried <code>W_constraint = nonneg(), W_constraint = maxnorm(1))</code> and got <code>SyntaxError: keyword argument repeated</code>. Is there a better way? Thanks in advance!</p>
<p>If you look at the topology.py file in the keras source code it has a property:</p> <pre><code> @property def constraints(self): cons = {} for layer in self.layers: for key, value in layer.constraints.items(): if key in cons: raise Exception('Received multiple constraints ' 'for one weight tensor: ' + str(key)) cons[key] = value return cons </code></pre> <p>which raises an exception when multiple constraints are received for one weight tensor. I think the best way to do this would be to implement a custom constraint (say nonneg_and_maxnorm?), you can see examples of implemented constraints in constraints.py in the keras source code.</p>
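The Keras class machinery varies between versions, but the projection such a combined constraint has to perform is simple: clip negative weights to zero, then rescale each column to unit norm. A NumPy sketch of just that projection (the function name is made up; inside a real custom Keras constraint you would apply the same two steps with backend ops in `__call__`):

```python
import numpy as np

def nonneg_unitnorm(w, axis=0, eps=1e-7):
    """Project a weight matrix onto {w >= 0, ||w||_2 = 1 along `axis`}."""
    w = np.clip(w, 0.0, None)  # nonneg: negative weights become 0
    norms = np.sqrt(np.sum(w ** 2, axis=axis, keepdims=True))
    return w / (norms + eps)   # unitnorm: each column scaled to length 1

w = np.array([[0.5, -1.0],
              [1.5,  2.0]])
projected = nonneg_unitnorm(w)
```

The `eps` term only guards against an all-zero column; for any nonzero column the result has norm 1 up to that tiny offset.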
python|constraints|keras
4
1,904,418
39,058,817
Event callback after a Tkinter Entry widget
<p>From the first answer here: <a href="https://stackoverflow.com/questions/6548837/how-do-i-get-an-event-callback-when-a-tkinter-entry-widget-is-modified">StackOverflow #6548837</a> I can call callback when the user is typing: </p> <pre><code>from Tkinter import * def callback(sv): print sv.get() root = Tk() sv = StringVar() sv.trace("w", lambda name, index, mode, sv=sv: callback(sv)) e = Entry(root, textvariable=sv) e.pack() root.mainloop() </code></pre> <p>However, the event occurs on every typed character. How to call the event when the user is done with typing and presses enter, or the Entry widget loses focus (i.e. the user clicks somewhere else)?</p>
<p>I think this does what you're looking for. I found relevant information <a href="http://docs.huihoo.com/tkinter/an-introduction-to-tkinter-1997/intro06.htm" rel="noreferrer">here</a>. The <code>bind</code> method is the key: bind <code>&lt;Return&gt;</code> for the Enter key, and <code>&lt;FocusOut&gt;</code> so the callback also fires when the widget loses focus (e.g. the user clicks somewhere else).</p> <pre><code>from Tkinter import * def callback(sv): print sv.get() root = Tk() sv = StringVar() e = Entry(root, textvariable=sv) e.bind('&lt;Return&gt;', (lambda _: callback(sv))) e.bind('&lt;FocusOut&gt;', (lambda _: callback(sv))) e.pack() root.mainloop() </code></pre>
python|events|tkinter|callback
9
1,904,419
39,399,712
Delete pandas column with no name
<p>I know that to delete a column I can use <code>df = df.drop('column_name', 1)</code>, but how would I remove a column just by its index, i.e. without supplying the name?</p> <p>Sorry if this question is a duplicate; I couldn't seem to find it.</p>
<p>You can delete column by index <code>i</code></p> <p><code>df.drop(df.columns[i], axis=1)</code></p> <p>Adapted from duplicate question:</p> <p><a href="https://stackoverflow.com/a/20301769/3006366">https://stackoverflow.com/a/20301769/3006366</a></p>
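A quick sketch on a frame whose first column has an empty name (positional indexing means the label, empty or not, never has to be typed):

```python
import pandas as pd

df = pd.DataFrame({'': [1, 2, 3], 'b': [4, 5, 6], 'c': [7, 8, 9]})

# Drop the first column (index 0) by position, whatever it is called.
df = df.drop(df.columns[0], axis=1)
```

`df.columns[0]` looks up the actual label at that position, so the same line works for named and unnamed columns alike.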
python|pandas|dataframe
20
1,904,420
52,616,702
pywinauto - How to identify GUI objects
<p>I am fairly new to Python and I am having trouble identifying GUI objects that I am hoping to control using pywinauto.</p> <p>Following <a href="https://pywinauto.github.io/" rel="nofollow noreferrer">this example</a> with the below code, I am able to open Notepad.exe and select "Help" from the Menu object. </p> <pre><code>from pywinauto.application import Application # Run a target application app = Application().start("notepad.exe") # Select a menu item app.UntitledNotepad.menu_select("Help-&gt;About Notepad") # Click on a button app.AboutNotepad.OK.click() # Type a text string app.UntitledNotepad.Edit.type_keys("pywinauto Works!", with_spaces = True) </code></pre> <p>This is pretty cool, but I want to apply this to a more practical example. What I am trying to do is open Excel using <code>app = Application().start(r"C:\Program Files\Microsoft Office\root\Office16\EXCEL.EXE")</code> and select "Blank Workbook" from the menu pane that pops up when you start Excel 2016 - thereby opening a new workbook.</p> <p><a href="https://i.stack.imgur.com/7qUh2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7qUh2.png" alt="enter image description here"></a></p> <p>I've targeted the object using UISpy and identified that the name is "Blank workbook". Using the above example code, what is the line of code I should execute that will select this object to open a new workbook? And more importantly, how do I figure that information out for myself?</p> <p><a href="https://i.stack.imgur.com/ZAA8g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZAA8g.png" alt="enter image description here"></a></p> <p>I am using Python 3.6.1. 
In an unrelated question - I found it interesting that I'm able to open "notepad.exe" without the fully qualified name, but opening Excel requires <code>app = Application().start(r"C:\Program Files\Microsoft Office\root\Office16\EXCEL.EXE")</code> - I'm not sure why this is the case, but that's a question for another day...</p>
<p>You can use external tools like SWAPY or pyinspect to identify the objects' names. A full list can be found here: <a href="https://pywinauto.readthedocs.io/en/latest/getting_started.html" rel="nofollow noreferrer">https://pywinauto.readthedocs.io/en/latest/getting_started.html</a></p> <p>Regarding the Python usage, you can call the <code>print_control_identifiers()</code> function on each of pywinauto's dialog objects. The function shows all the dialog's objects: buttons, labels, checkboxes, etc. You can read about it here: <a href="https://pywinauto.readthedocs.io/en/latest/code/code.html#controls-reference" rel="nofollow noreferrer">https://pywinauto.readthedocs.io/en/latest/code/code.html#controls-reference</a></p> <p>So basically, try <code>app.print_control_identifiers()</code>, check whether the button's name is there, and if it is, just use <code>getattr(app, X).click()</code> where <code>X</code> is the button's name.</p>
python|python-3.6|pywinauto
0
1,904,421
52,890,290
JSONEncoder subclassing went wrong
<p>I need some help regarding extending the standard <code>json.JSONEncoder</code> in Python.</p> <p>I have an object like:</p> <pre><code>temp = { "a": "test/string", "b": { "b1": "one/more/string", "b2": 666 }, "c": 123 } </code></pre> <p>I need to override (extend) the encoding of str to replace <code>/</code> with <code>\/</code>.</p> <p>Standard <code>json.dumps</code> will return:</p> <pre><code>{"a":"test/string","b":{"b1":"one/more/string","b2":666},"c":123} </code></pre> <p>And I need to get:</p> <pre><code>{"a":"test\\/string","b":{"b1":"one\\/more\\/string","b2":666},"c":123} </code></pre> <p>Don't even try to ask me why I need to do this... I have already overridden the <code>default()</code> method, but it gets ignored when I call <code>json.dumps</code> with my subclass.</p> <p>My encoder class:</p> <pre><code>class RetardJSONEncoder(json.JSONEncoder): def default(self, obj): if isinstance(obj, str): return json.JSONEncoder.default(self, obj.replace("/", "\/")) return json.JSONEncoder.default(self, obj) </code></pre>
<p>With the current implementation of the <code>json</code> package, it's impossible to redefine how strings are encoded: <code>JSONEncoder.default()</code> is only called for objects the encoder does <em>not</em> know how to serialize, so it never sees a plain <code>str</code>. You'd be better off applying a fix on the client side, or escaping the slashes before (or after) encoding. Otherwise, you'll end up writing your own JSON library.</p>
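If pre- or post-escaping is acceptable, one workaround that avoids a custom encoder entirely is a post-processing pass over the already-encoded output. This works because `json.dumps` never escapes `/` itself, so every `/` in the output sits inside a string literal, and `\/` is a legal JSON escape for `/` (a sketch, not an officially supported hook):

```python
import json

temp = {
    "a": "test/string",
    "b": {"b1": "one/more/string", "b2": 666},
    "c": 123,
}

# dumps() leaves '/' untouched, so every '/' in the output lives inside a
# string literal and can be escaped globally without breaking the syntax.
encoded = json.dumps(temp, separators=(',', ':')).replace('/', '\\/')
```

Because `\/` decodes back to `/`, `json.loads(encoded)` still returns the original data, so consumers on the other end are unaffected.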
python|json|regex|python-3.x
1
1,904,422
47,947,400
Cannot find element I want to click - selenium
<p>I'm trying to automate updating a service desk ticket upon receipt of an E-mail. I've already figured out the way to 'listen' for an E-mail and launch the website/login, however for the life of me one of the parts I thought would be the easiest is proving to be the most difficult. Essentially I need to click the dropdown in the image below and either type or click the option 'Acknowledged'.</p> <p>Service desk photo with drop down options :</p> <p><a href="https://i.stack.imgur.com/twxph.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/twxph.jpg" alt="enter image description here"></a></p> <p>Here is the snippet of code for the drop down, note: the number in the ID in the code changes based on the ticket (in this example the number is 298)</p> <pre><code>&lt;tr&gt; &lt;td id="td_298_status"&gt; &lt;div id="298_status" class="dropdown-wrapper add-scroll autosuggest get-values-on-open hiddendropdown formField opened"&gt; &lt;select class="selectedKeysValues" style="display:none" id="status" name="status"&gt; &lt;option value="1" selected=""&gt;New&lt;/option&gt; &lt;/select&gt; &lt;span class="dd-description"&gt;&lt;a href="javascript:void(0)" onclick="openAdvancedSearchForComboBox('SelectFilterValues.jsp?func=updateComboBox&amp;amp;fromComboBox=YES&amp;amp;dbValueField=11587&amp;amp;dbCaptionField=12868&amp;amp;dbTable=20130&amp;amp;comboboxId=status&amp;amp;moduleRelevance=16');"&gt;Advanced Search&lt;/a&gt;&lt;/span&gt; &lt;select class="custom_select" style="display:none" name="status_CustomSelect" id="status_CustomSelect"&gt; &lt;/select&gt;&lt;div class="newListSelected status_CustomSelect" tabindex="0"&gt;&lt;input type="text" class="autoSuggestInput" value="" style="width: 156px;"&gt;&lt;div class="selectedTxt"&gt;&lt;span class="defaultText"&gt;New&lt;/span&gt;&lt;/div&gt;&lt;div class="containerContentDiv" style="width: auto; top: 25px;"&gt;&lt;div class="jScrollPaneContainer" style="display: block;"&gt;&lt;div class="scroll_pane" 
id="addScroll_status_CustomSelect" style="display: block; height: 202px; top: 0px; overflow-x: hidden; overflow-y: auto;"&gt;&lt;ul class="newList" style="left: 0px; display: block;"&gt;&lt;li class="option_0_option"&gt;Please select a status&lt;/li&gt;&lt;li class="option_1_option selected hiLite"&gt;New&lt;/li&gt;&lt;li class="option_3_option"&gt;Closed&lt;/li&gt;&lt;li class="option_4_option"&gt;Submit Error&lt;/li&gt;&lt;li class="option_5_option"&gt;Pending&lt;/li&gt;&lt;li class="option_7_option"&gt;Deleted&lt;/li&gt;&lt;li class="option_11_option"&gt;Request Rejected&lt;/li&gt;&lt;li class="option_37_option"&gt;Work In Progress&lt;/li&gt;&lt;li class="option_38_option"&gt;Resolved&lt;/li&gt;&lt;li class="option_39_option"&gt;Acknowledged&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="addedDescription" style="display: block;"&gt;&lt;a href="javascript:void(0)" onclick="openAdvancedSearchForComboBox('SelectFilterValues.jsp?func=updateComboBox&amp;amp;fromComboBox=YES&amp;amp;dbValueField=11587&amp;amp;dbCaptionField=12868&amp;amp;dbTable=20130&amp;amp;comboboxId=status&amp;amp;moduleRelevance=16');"&gt;Advanced Search&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt; &lt;span class="afterSelectJS" style="display:none"&gt;closureInformationCheck();StatusChange();&lt;/span&gt; &lt;/div&gt; &lt;/td&gt; &lt;td id="closureInformationTD" style="display: none;"&gt;&lt;table id="closureInformationTable"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td class="Form_Ctrl_Label"&gt;Closure Information&lt;/td&gt; &lt;td id="td_298_closureInformation" style="padding-left:25px;"&gt; &lt;div id="298_closureInformation" class="dropdown-wrapper add-scroll autosuggest get-values-on-open hiddendropdown formField"&gt; &lt;select class="selectedKeysValues" style="display:none" id="closureInformation" name="closureInformation"&gt; &lt;option value="0" selected=""&gt;None&lt;/option&gt; &lt;/select&gt; &lt;span class="dd-description"&gt;&lt;a href="javascript:void(0)" 
onclick="openAdvancedSearchForComboBox('SelectFilterValues.jsp?func=updateComboBox&amp;amp;fromComboBox=YES&amp;amp;dbValueField=11587&amp;amp;dbCaptionField=12868&amp;amp;dbTable=21229&amp;amp;comboboxId=closureInformation');"&gt;Advanced Search&lt;/a&gt;&lt;/span&gt; &lt;select class="custom_select" style="display:none" name="closureInformation_CustomSelect" id="closureInformation_CustomSelect"&gt; &lt;/select&gt;&lt;div class="newListSelected closureInformation_CustomSelect" tabindex="0"&gt;&lt;input type="text" class="autoSuggestInput" value="" style="display: none;"&gt;&lt;div class="selectedTxt"&gt;&lt;span class="defaultText"&gt;None&lt;/span&gt;&lt;/div&gt;&lt;div class="containerContentDiv"&gt;&lt;div class="jScrollPaneContainer" style="display: none;"&gt;&lt;div class="scroll_pane" id="addScroll_closureInformation_CustomSelect" style="height: 200px; overflow-y: auto; overflow-x: hidden; display: none;"&gt;&lt;ul class="newList" style="left: 0px; display: none;"&gt;&lt;/ul&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class="addedDescription" style="display: none;"&gt;&lt;a href="javascript:void(0)" onclick="openAdvancedSearchForComboBox('SelectFilterValues.jsp?func=updateComboBox&amp;amp;fromComboBox=YES&amp;amp;dbValueField=11587&amp;amp;dbCaptionField=12868&amp;amp;dbTable=21229&amp;amp;comboboxId=closureInformation');"&gt;Advanced Search&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt; &lt;/div&gt; &lt;/td&gt; &lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;/td&gt;&lt;/tr&gt; </code></pre> <p>I've tried a whole bunch fo things like:</p> <pre><code>driver.find_element_by_xpath('//*[@id="298_status"]/div/input').click() </code></pre> <p>or</p> <pre><code>driver.find_element_by_xpath(u"//td[contains(text(), '_status')]") </code></pre> <p>But I always get errors stating that the element cant be found. Any ideas?</p>
<p>It seems that you might want to interact with the <code>select</code> element. They work differently from all other elements, and there is a specific Selenium helper class for handling them. Here is probably the most straightforward way to accomplish what you're trying to do (note that in your markup the <code>option</code> values are numeric, so the option has to be selected by its visible text rather than by value; <code>driver</code> is assumed to be your existing WebDriver instance). Caveat though, I'm not sure why the element is not being found on the web page -- that is the first thing you'll have to get answered. For that, I'd make sure it is scrolled into view, if it is not visible by default when the page is rendered. You may have a timeout issue, etc.</p> <pre><code>from selenium.common.exceptions import TimeoutException from selenium.webdriver.common.by import By from selenium.webdriver.support.select import Select from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC timeout = 15 try: status_select = WebDriverWait(driver, timeout).until( EC.visibility_of_element_located((By.ID, "status")) ) Select(status_select).select_by_visible_text("Acknowledged") except TimeoutException: print("Element was not visible on page after", timeout, "seconds.") </code></pre>
python|selenium
0
1,904,423
47,652,019
Amount of memory needed for training with faster_rcnn and rfcn models
<p>I've been trying to run some trainings with faster_rcnn- and rfcn-based models using Google's Object Detection API, but after some training steps I get errors regarding what I assume to be memory issues. What is considered a good amount of free RAM before starting a training with the above models?</p> <p>Here is one of the log errors:</p> <p>InvalidArgumentError (see above for traceback): assertion failed: [maximum box coordinate value is larger than 1.010000: ] [1.0111111] [[Node: Loss/ToAbsoluteCoordinates/Assert/AssertGuard/Assert = Assert[T=[DT_STRING, DT_FLOAT], summarize=3, _device="/job:localhost/replica:0/task:0/cpu:0"](Loss/ToAbsoluteCoordinates/Assert/AssertGuard/Assert/Switch/_1307, Loss/ToAbsoluteCoordinates/Assert/AssertGuard/Assert/data_0, Loss/ToAbsoluteCoordinates/Assert/AssertGuard/Assert/Switch_1/_1309)]]</p>
<p>This is not an OOM error; something is wrong with your bounding boxes. Check whether any xmax or ymax value is bigger than the image width or height: the assertion fires because a normalized box coordinate ends up larger than 1.0. A genuine out-of-memory failure looks like this instead: <a href="https://i.stack.imgur.com/m1Zoe.jpg" rel="nofollow noreferrer">example OOM traceback</a></p>
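Before retraining, it can help to scan the annotations for exactly this problem. A small sketch (the tuple layout `(xmin, ymin, xmax, ymax, width, height)` is an assumption about how your annotations are stored; adapt it to your CSV/XML parser):

```python
def find_bad_boxes(annotations):
    """Return (index, reason) pairs for boxes that spill past the image."""
    bad = []
    for i, (xmin, ymin, xmax, ymax, width, height) in enumerate(annotations):
        # These are the boxes that normalize to a coordinate > 1.0.
        if xmax > width:
            bad.append((i, 'xmax %.1f > image width %d' % (xmax, width)))
        if ymax > height:
            bad.append((i, 'ymax %.1f > image height %d' % (ymax, height)))
    return bad

# Example: the second box spills one pixel past a 640x480 image.
boxes = [
    (10, 10, 200, 200, 640, 480),
    (0, 0, 641, 300, 640, 480),
]
problems = find_bad_boxes(boxes)
```

Running this over the whole dataset pinpoints the offending records, which is much faster than waiting for the training assertion to trip.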
tensorflow|object-detection|object-detection-api
0
1,904,424
34,134,108
Exception in Python email block
<p>My below code is for sending email with an attachment.</p> <pre><code>def email(location): COMMASPACE = ', ' fromaddr = &quot;jeff@abc.com&quot; toaddr = (&quot;jeff@def.com&quot;) outer = MIMEMultipart() outer['Subject'] = 'Dashboard' outer['From'] = fromaddr outer['To'] = toaddr msg = MIMEBase('text','plain') msgtext = 'Please find dashboard for the current reporting week.' msg.set_payload(msgtext) Encoders.encode_base64(msg) outer.attach(msg) outer.epilogue = ' ' ctype, encoding = mimetypes.guess_type(&quot;parse.mht&quot;) maintype, subtype = ctype.split('/', 1) fp = open(location,'rb') msg = MIMEBase(maintype, subtype) msg.set_payload(fp.read()) fp.close() Encoders.encode_base64(msg) msg.add_header('Content-Disposition', 'attachment', filename='Dashboard.mht') outer.attach(msg) s = smtplib.SMTP() s.connect('mailhost.dev.ch3.s.com') s.sendmail(fromaddr, toaddr, outer.as_string(False)) s.close() try: email(location) appendlog.write(&quot;Email has been sent, clearing down files... \n&quot;) print &quot;Email has been sent, clearing down files...&quot; success = 1 except Exception,e: print repr(e) print &quot;Email has failed to send.&quot; </code></pre> <p>I am getting an exception as</p> <blockquote> <p>Typerror(&quot;Expected list, got type 'str' &quot;,)</p> <p>Email has failed to send</p> </blockquote> <p>Can someone tell me what is wrong in the code?</p>
<p><code>toaddr</code> should be a <code>list</code>, not a <code>tuple</code> or <code>str</code>, if you use SMTP. Note that <code>("jeff@def.com")</code> in your code is not a tuple at all, just a parenthesized string.</p> <p>Or you can use yagmail, where you can also just send a string, and which also makes it much easier to send attachments:</p> <pre><code>import yagmail fromaddr = "jeff@abc.com" toaddr = "jeff@def.com" yag = yagmail.SMTP(fromaddr, 'yourpassword', host='mailhost.dev.ch3.s.com') yag.send(toaddr, 'Subject', ['Please find dashboard for the current reporting week.', '/path/to/local/parse.mht']) </code></pre> <p>Note that adding the filename as a string will convert it to an attachment.</p>
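For completeness, the smallest change to the original smtplib approach is the recipient handling; a sketch (the addresses and host are the question's own placeholders, and the actual network call is left commented out since it needs a reachable SMTP server):

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

fromaddr = "jeff@abc.com"
toaddrs = ["jeff@def.com"]        # a real list, not ("jeff@def.com") which is a str

outer = MIMEMultipart()
outer['Subject'] = 'Dashboard'
outer['From'] = fromaddr
outer['To'] = ', '.join(toaddrs)  # the header wants a display string...
outer.attach(MIMEText('Please find dashboard for the current reporting week.'))

# ...while sendmail() wants the list of envelope recipients:
# s = smtplib.SMTP('mailhost.dev.ch3.s.com')
# s.sendmail(fromaddr, toaddrs, outer.as_string())
# s.quit()
message = outer.as_string()
```

The distinction matters: the `To:` header is cosmetic, while the list passed to `sendmail()` determines who actually receives the message.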
python|email|mime
1
1,904,425
65,919,522
Arduino - Raspberry Pi, Bluetooth Connection using D-BUS API
<p>The task is to automate the pairing and connection process between an Arduino and a Raspberry Pi over Bluetooth, using the D-Bus API from a Python script.</p> <p>The Bluetooth module connected to the Arduino is: <a href="https://wiki.seeedstudio.com/Grove-Serial_Bluetooth_v3.0/" rel="nofollow noreferrer">Grove - Serial Bluetooth v3.0.</a></p> <p>I am able to automate the pairing process. The pairing script does the following in order:</p> <ol> <li>It looks for a Bluetooth module named <strong>Slave</strong> by creating an adapter object and using the <strong>StartDiscovery</strong> method (the naming is done on the Arduino).</li> <li>Registers a <strong>Bluetooth agent</strong>.</li> <li>Creates a device object and pairs via the <strong>Pair</strong> method if the device <strong>is not already paired</strong>.</li> </ol> <p>The part of the code that does the above steps is given below:</p> <pre><code>register_agent() start_discovery() time.sleep(10) for i in range(len(address_list)): new_dbus_device = get_device(address_list[i]) dev_path = new_dbus_device.object_path device_properties = dbus.Interface(new_dbus_device, &quot;org.freedesktop.DBus.Properties&quot;) pair_status = device_properties.Get(&quot;org.bluez.Device1&quot;, &quot;Paired&quot;) if not pair_status: new_dbus_device.Pair(reply_handler=pair_reply, error_handler=pair_error, timeout=60000) </code></pre> <p>Here is what <strong>register_agent()</strong> does, as requested:</p> <pre><code>def register_agent(): agent = Agent(bus, path) capability = &quot;NoInputNoOutput&quot; obj = bus.get_object(BUS_NAME, &quot;/org/bluez&quot;); manager = dbus.Interface(obj, &quot;org.bluez.AgentManager1&quot;) manager.RegisterAgent(path, capability) </code></pre> <p>However, when I try to call the <strong>Connect</strong> method as documented in the <a href="https://github.com/bluez/bluez/blob/master/doc/device-api.txt" rel="nofollow noreferrer">device-api</a> of BlueZ, it always fails. 
I have created a custom <strong>serial port profile</strong> and tried <strong>ConnectProfile</strong> method but it failed again.</p> <p>The communication over Bluetooth works if I use deprecated <strong>rfcomm tool</strong>, or it works if I use <strong>python socket module.</strong> However I want to avoid using rfcomm due to being deprecated. Regarding using python socket library, the connection works only in <strong>rfcomm channel 1</strong>, others channels produce <strong>Connection Refused</strong> error.</p> <p>Adding <strong>MRE</strong>, to clarify the question better:</p> <pre><code>import dbus import dbus.service import dbus.mainloop.glib import sys import time import subprocess from bluezutils import * from bluetooth import * from gi.repository import GObject, GLib from dbus.mainloop.glib import DBusGMainLoop DBusGMainLoop(set_as_default=True) path = &quot;/test/agent&quot; AGENT_INTERFACE = 'org.bluez.Agent1' BUS_NAME = 'org.bluez' bus = dbus.SystemBus() device_obj = None dev_path = None def set_trusted(path2): props = dbus.Interface(bus.get_object(&quot;org.bluez&quot;, path2), &quot;org.freedesktop.DBus.Properties&quot;) props.Set(&quot;org.bluez.Device1&quot;, &quot;Trusted&quot;, True) class Rejected(dbus.DBusException): _dbus_error_name = &quot;org.bluez.Error.Rejected&quot; class Agent(dbus.service.Object): exit_on_release = True def set_exit_on_release(self, exit_on_release): self.exit_on_release = exit_on_release @dbus.service.method(AGENT_INTERFACE, in_signature=&quot;&quot;, out_signature=&quot;&quot;) def Release(self): print(&quot;Release&quot;) if self.exit_on_release: mainloop.quit() @dbus.service.method(AGENT_INTERFACE, in_signature=&quot;os&quot;, out_signature=&quot;&quot;) def AuthorizeService(self, device, uuid): print(&quot;AuthorizeService (%s, %s)&quot; % (device, uuid)) return @dbus.service.method(AGENT_INTERFACE, in_signature=&quot;o&quot;, out_signature=&quot;s&quot;) def RequestPinCode(self, device): set_trusted(device) 
return &quot;0000&quot; @dbus.service.method(AGENT_INTERFACE, in_signature=&quot;o&quot;, out_signature=&quot;u&quot;) def RequestPasskey(self, device): set_trusted(device) return dbus.UInt32(&quot;0000&quot;) @dbus.service.method(AGENT_INTERFACE, in_signature=&quot;ou&quot;, out_signature=&quot;&quot;) def RequestConfirmation(self, device, passkey): set_trusted(device) return @dbus.service.method(AGENT_INTERFACE, in_signature=&quot;o&quot;, out_signature=&quot;&quot;) def RequestAuthorization(self, device): return @dbus.service.method(AGENT_INTERFACE, in_signature=&quot;&quot;, out_signature=&quot;&quot;) def Cancel(self): print(&quot;Cancel&quot;) def pair_reply(): print(&quot;Device paired and trusted&quot;) set_trusted(dev_path) def pair_error(error): err_name = error.get_dbus_name() if err_name == &quot;org.freedesktop.DBus.Error.NoReply&quot; and device_obj: print(&quot;Timed out. Cancelling pairing&quot;) device_obj.CancelPairing() else: print(&quot;Creating device failed: %s&quot; % (error)) mainloop.quit() def register_agent(): agent = Agent(bus, path) capability = &quot;NoInputNoOutput&quot; obj = bus.get_object(BUS_NAME, &quot;/org/bluez&quot;); manager = dbus.Interface(obj, &quot;org.bluez.AgentManager1&quot;) manager.RegisterAgent(path, capability) def start_discovery(): global pi_adapter pi_adapter = find_adapter() scan_filter = dict({&quot;DuplicateData&quot;: False}) pi_adapter.SetDiscoveryFilter(scan_filter) pi_adapter.StartDiscovery() def stop_discovery(): pi_adapter.StopDiscovery() def get_device(dev_str): # use [Service] and [Object path]: device_proxy_object = bus.get_object(&quot;org.bluez&quot;,&quot;/org/bluez/hci0/dev_&quot;+dev_str) # use [Interface]: device1 = dbus.Interface(device_proxy_object,&quot;org.bluez.Device1&quot;) return device1 def char_changer(text): text = text.replace(':', r'_') return text def slave_finder(device_name): global sublist_normal sublist_normal = [] sublist= [] address = [] edited_address = None sub = 
subprocess.run([&quot;hcitool scan&quot;], text = True, shell = True, capture_output=True) print(sub.stdout) #string type sublist = sub.stdout.split() for i in range(len(sublist)): if sublist[i] == device_name: print(sublist[i-1]) sublist_normal.append(sublist[i-1]) edited_address = char_changer(sublist[i-1]) address.append(edited_address) return address def remove_all_paired(): for i in range(len(sublist_normal)): sub = subprocess.run([&quot;bluetoothctl remove &quot; + sublist_normal[i]], text = True, shell = True, capture_output=True) time.sleep(1) if __name__ == '__main__': pair_status = None address_list = slave_finder('Slave') #Arduino bluetooth module named as &quot;Slave&quot;, here we are finding it. time.sleep(2) remove_all_paired() #rfcomm requires repairing after release print(sublist_normal) if address_list: register_agent() start_discovery() time.sleep(10) for i in range(len(address_list)): new_dbus_device = get_device(address_list[i]) dev_path = new_dbus_device.object_path device_properties = dbus.Interface(new_dbus_device, &quot;org.freedesktop.DBus.Properties&quot;) pair_status = device_properties.Get(&quot;org.bluez.Device1&quot;, &quot;Paired&quot;) if not pair_status: new_dbus_device.Pair(reply_handler=pair_reply, error_handler=pair_error, timeout=60000) mainloop = GLib.MainLoop() mainloop.run() </code></pre> <p><strong>sudo btmon output:</strong></p> <pre><code>Bluetooth monitor ver 5.50 = Note: Linux version 5.4.83-v7l+ (armv7l) 0.832473 = Note: Bluetooth subsystem version 2.22 0.832478 = New Index: DC:A6:32:58:FE:13 (Primary,UART,hci0) [hci0] 0.832481 = Open Index: DC:A6:32:58:FE:13 [hci0] 0.832484 = Index Info: DC:A6:32:5.. 
(Cypress Semiconductor Corporation) [hci0] 0.832487 @ MGMT Open: bluetoothd (privileged) version 1.14 {0x0001} 0.832490 @ MGMT Open: btmon (privileged) version 1.14 {0x0002} 0.832540 </code></pre> <p>So the question is: why do the <strong>Connect</strong> and <strong>ConnectProfile</strong> methods always fail, and what do I need to do to establish Bluetooth communication based on the D-Bus API between the Arduino and the Raspberry Pi?</p>
<p>I think you had answered your own question. The reason that a <code>Connect</code> doesn't work is because you have no Serial Port Profile (SPP) running on the Raspberry Pi. If you have <code>btmon</code> running you will see that the client disconnects because there is no matching profile with what the Arduino is offering.</p> <p>The port number used in the Python Socket connection needs to match the port number on the Arduino Bluetooth module. This is probably why only <code>1</code> is working.</p> <p>For reference I tested this with a Raspberry Pi and HC-06 I had laying around. I removed all the scanning code to try to get to the minimum runnable code. Here is the code I used:</p> <pre class="lang-py prettyprint-override"><code>import socket from time import sleep import dbus import dbus.service import dbus.mainloop.glib from gi.repository import GLib BUS_NAME = 'org.bluez' AGENT_IFACE = 'org.bluez.Agent1' AGNT_MNGR_IFACE = 'org.bluez.AgentManager1' ADAPTER_IFACE = 'org.bluez.Adapter1' AGENT_PATH = '/ukBaz/bluezero/agent' AGNT_MNGR_PATH = '/org/bluez' DEVICE_IFACE = 'org.bluez.Device1' CAPABILITY = 'KeyboardDisplay' my_adapter_address = '11:22:33:44:55:66' my_device_path = '/org/bluez/hci0/dev_00_00_12_34_56_78' dbus.mainloop.glib.DBusGMainLoop(set_as_default=True) bus = dbus.SystemBus() class Agent(dbus.service.Object): @dbus.service.method(AGENT_IFACE, in_signature='o', out_signature='s') def RequestPinCode(self, device): print(f'RequestPinCode {device}') return '0000' class Device: def __init__(self, device_path): dev_obj = bus.get_object(BUS_NAME, device_path) self.methods = dbus.Interface(dev_obj, DEVICE_IFACE) self.props = dbus.Interface(dev_obj, dbus.PROPERTIES_IFACE) self._port = 1 self._client_sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM) def connect(self): # self.methods.Connect() self._client_sock.bind((my_adapter_address, self._port)) self._client_sock.connect((self.address, self._port)) def 
disconnect(self): self.methods.Disconnect() def pair(self): self.methods.Pair(reply_handler=self._pair_reply, error_handler=self._pair_error) def _pair_reply(self): print(f'Device trusted={self.trusted}, connected={self.connected}') self.trusted = True print(f'Device trusted={self.trusted}, connected={self.connected}') while self.connected: sleep(0.5) self.connect() print('Successfully paired and connected') def _pair_error(self, error): err_name = error.get_dbus_name() print(f'Creating device failed: {err_name}') @property def trusted(self): return bool(self.props.Get(DEVICE_IFACE, 'Trusted')) @trusted.setter def trusted(self, value): self.props.Set(DEVICE_IFACE, 'Trusted', bool(value)) @property def paired(self): return bool(self.props.Get(DEVICE_IFACE, 'Paired')) @property def connected(self): return bool(self.props.Get(DEVICE_IFACE, 'Connected')) @property def address(self): return str(self.props.Get(DEVICE_IFACE, 'Address')) if __name__ == '__main__': agent = Agent(bus, AGENT_PATH) agnt_mngr = dbus.Interface(bus.get_object(BUS_NAME, AGNT_MNGR_PATH), AGNT_MNGR_IFACE) agnt_mngr.RegisterAgent(AGENT_PATH, CAPABILITY) device = Device(my_device_path) if device.paired: device.connect() else: device.pair() mainloop = GLib.MainLoop() try: mainloop.run() except KeyboardInterrupt: agnt_mngr.UnregisterAgent(AGENT_PATH) mainloop.quit() </code></pre>
python|arduino|bluetooth|raspberry-pi|dbus
0
1,904,426
65,925,114
Troubles using pyuic5
<p>I had python 3.8.5 installed on my machine, and I have pyuic5 installed and used it to convert .ui to .py files (using cmd). I recently upgraded to python 3.8.7, and whenever I try to convert now, I get this error:</p> <pre><code>C:\Users\....&gt;pyuic5 -x Windows.ui -o Windows.py Fatal error in launcher: Unable to create process using '&quot;d:\program files\python3.8.5\python.exe&quot; &quot;D:\Program Files\Python 3.8.7\Scripts\pyuic5.exe&quot; -x Windows.ui -o Windows.py': The system cannot find the file specified. </code></pre> <p>I made sure that pyqt5 and pyuic5 have been installed correctly. I am no expert, but I believe that cmd is looking for the python.exe file from python3.8.5 folder (the older version I had installed) instead of the one in python3.8.7 folder.</p>
<p>UPDATE: I uninstalled and re-installed the following packages with pip, and that solved my issue: PyQt5, PyQt5-tools, PyQt5-sip and PyQtWebEngine.</p>
python|pyqt5|pyuic
0
1,904,427
39,839,409
When plotting with Bokeh, how do you automatically cycle through a color pallette?
<p>I want to use a loop to load and/or modify data and plot the result within the loop using Bokeh (I am familiar with <a href="http://matplotlib.org/1.2.1/examples/api/color_cycle.html" rel="noreferrer">Matplotlib's <code>axes.color_cycle</code></a>). Here is a simple example</p> <pre><code>import numpy as np from bokeh.plotting import figure, output_file, show output_file('bokeh_cycle_colors.html') p = figure(width=400, height=400) x = np.linspace(0, 10) for m in xrange(10): y = m * x p.line(x, y, legend='m = {}'.format(m)) p.legend.location='top_left' show(p) </code></pre> <p>which generates this plot</p> <p><img src="https://i.stack.imgur.com/9OtK9.png" alt="bokeh plot"></p> <p>How do I make it so the colors cycle without coding up a list of colors and a modulus operation to repeat when the number of colors runs out? </p> <p>There was some discussion on GitHub related to this, issues <a href="https://github.com/bokeh/bokeh/issues/351" rel="noreferrer">351</a> and <a href="https://github.com/bokeh/bokeh/issues/2201" rel="noreferrer">2201</a>, but it is not clear how to make this work. The four hits I got when searching the <a href="http://bokeh.pydata.org/en/latest/search.html?q=cycle+colors" rel="noreferrer">documentation</a> for <code>cycle color</code> did not actually contain the word <code>cycle</code> anywhere on the page.</p>
<p>It is probably easiest to just get the list of colors and cycle it yourself using <a href="https://docs.python.org/3.5/library/itertools.html" rel="noreferrer"><code>itertools</code></a>: </p> <pre><code>import numpy as np from bokeh.plotting import figure, output_file, show # select a palette from bokeh.palettes import Dark2_5 as palette # itertools handles the cycling import itertools output_file('bokeh_cycle_colors.html') p = figure(width=400, height=400) x = np.linspace(0, 10) # create a color iterator colors = itertools.cycle(palette) for m, color in zip(range(10), colors): y = m * x p.line(x, y, legend='m = {}'.format(m), color=color) p.legend.location='top_left' show(p) </code></pre> <p><a href="https://i.stack.imgur.com/L7l9G.png" rel="noreferrer"><img src="https://i.stack.imgur.com/L7l9G.png" alt="enter image description here"></a></p>
python|plot|bokeh
38
1,904,428
39,697,604
AWS lambda_handler error for set_contents_from_string to upload in S3
<p>Recently started working on python scripting for encrypting the data and uploading to S3 using aws lambda_handler function. From local machine to S3 it was executing fine (note: Opens all permissions for anyone from bucket side) when the same script executing from aws Lambda_handler (note: Opens all permissions for anyone from bucket side) getting the below error.</p> <pre><code>{ "stackTrace": [ [ "/var/task/enc.py", 62, "lambda_handler", "up_key = up_bucket.new_key('enc.txt').set_contents_from_string(buf.readline(),replace=True,policy='public-read',encrypt_key=False)" ], [ "/var/task/boto/s3/key.py", 1426, "set_contents_from_string", "encrypt_key=encrypt_key)" ], [ "/var/task/boto/s3/key.py", 1293, "set_contents_from_file", "chunked_transfer=chunked_transfer, size=size)" ], [ "/var/task/boto/s3/key.py", 750, "send_file", "chunked_transfer=chunked_transfer, size=size)" ], [ "/var/task/boto/s3/key.py", 951, "_send_file_internal", "query_args=query_args" ], [ "/var/task/boto/s3/connection.py", 668, "make_request", "retry_handler=retry_handler" ], [ "/var/task/boto/connection.py", 1071, "make_request", "retry_handler=retry_handler)" ], [ "/var/task/boto/connection.py", 940, "_mexe", "request.body, request.headers)" ], [ "/var/task/boto/s3/key.py", 884, "sender", "response.status, response.reason, body)" ] ], "errorType": "S3ResponseError", "errorMessage": "S3ResponseError: 403 Forbidden\n&lt;?xml version=\"1.0\" encoding=\"UTF-8\"?&gt;\n&lt;Error&gt;&lt;Code&gt;AccessDenied&lt;/Code&gt;&lt;Message&gt;Access Denied&lt;/Message&gt;&lt;RequestId&gt;4B09C24C4D79C147&lt;/RequestId&gt;&lt;HostId&gt;CzhDhtYDERh9E/e4tVHek35G3CEMh0qFifcnd06fKN/oyLHtj9bWg87zZOajBNQDfqIC2QrldsA=&lt;/HostId&gt;&lt;/Error&gt;" } </code></pre> <p>Here is the script i am executing</p> <pre><code>def lambda_handler(event, context): cipher = AESCipher(key='abcd') print "ready to connect S3" conn = boto.connect_s3() print "connected to download" bucket = conn.get_bucket('s3download') key = 
bucket.get_key("myinfo.json") s3file = key.get_contents_as_string() lencp = cipher.encrypt(s3file) buf = StringIO.StringIO(lencp) print lencp print "connected to upload" up_bucket = conn.get_bucket("s3upload") up_key = up_bucket.new_key('enc.txt').set_contents_from_string(buf.readline(),replace=True,policy='public-read') print "completed upload" return </code></pre>
<p>Solved the problem: it was due to <code>policy='public-read'</code>. After removing this argument the upload works. Also note that even if the IAM role enables all S3 actions (i.e. PutObject, GetObject), the upload still can't work on its own; you need to create a bucket policy for this particular role, and only then does the upload work smoothly.</p>
python|amazon-web-services|amazon-s3|boto|aws-lambda
1
1,904,429
16,050,174
TypeError: unsupported operand type(s) for +: 'int' and 'str'
<p>This code is for a trivia game, and every time someone gets a question right, they are awarded with a certain amount of points per question. I do not know how to get the code to add the points together after each right answer. Every time I try something different, I always get this error message.</p> <pre><code>import sys def open_file(file_name, mode): """Open a file.""" try: the_file = open(file_name, mode) except IOError as e: print("Unable to open the file", file_name, "Ending program.\n", e) input("\n\nPress the enter key to exit.") sys.exit() else: return the_file def next_line(the_file): """Return next line from the trivia file, formatted.""" line = the_file.readline() line = line.replace("/", "\n") return line def next_block(the_file): """Return the next block of data from the trivia file.""" category = next_line(the_file) question = next_line(the_file) answers = [] for i in range(4): answers.append(next_line(the_file)) correct = next_line(the_file) if correct: correct = correct[0] explanation = next_line(the_file) points = next_line(the_file) return category, question, answers, correct, explanation, points def welcome(title): """Welcome the player and get his/her name.""" print("\t\tWelcome to Trivia Challenge!\n") print("\t\t", title, "\n") def main(): trivia_file = open_file("trivia_points.txt", "r") title = next_line(trivia_file) welcome(title) score = 0 # get first block category, question, answers, correct, explanation, points = next_block(trivia_file) while category: # ask a question print(category) print(question) for i in range(4): print("\t", i + 1, "-", answers[i]) # get answer answer = input("What's your answer?: ") # check answer if answer == correct: print("\nRight!", end=" ") total = sum(points + points) score = total else: print("\nWrong.", end=" ") print(explanation) print("Score:", score, "\n\n") points = int(points) # get next block category, question, answers, correct, explanation, points = next_block(trivia_file) trivia_file.close() 
print(&quot;That was the last question!&quot;) print(&quot;Your final score is&quot;, score) main() input(&quot;\n\nPress the enter key to exit.&quot;) </code></pre>
<p>You're getting the points out as a string, and need to convert them to an int <em>before</em> doing any computation with it, ideally as soon as possible:</p> <pre><code>points = int(next_line(the_file)) </code></pre> <p>in next_block should do the trick. Also, you are not adding the points to the score, you're replacing it. </p> <pre><code>total = sum(points + points) score = total </code></pre> <p>should be</p> <pre><code>score += points </code></pre> <p>to add 'points' to 'score'.</p>
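The fix described above can be sketched end to end. This is a minimal stand-in (not the asker's full program) showing why converting the field once and accumulating with `+=` behaves differently from the original `sum(points + points)` / `score = total` pair:

```python
def next_block_points(raw_points):
    # Convert the points field to an int as soon as it is read from the file.
    return int(raw_points.strip())

def play(raw_scores, answers_correct):
    score = 0
    for raw, correct in zip(raw_scores, answers_correct):
        points = next_block_points(raw)
        if correct:
            score += points  # accumulate, don't replace
    return score

print(play(["5\n", "10\n", "3\n"], [True, False, True]))  # 8
```

With the original `score = total` pattern, every right answer would throw away the points already earned; accumulation keeps the running total.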
python-3.x
0
1,904,430
31,711,327
How to run a database search inside template in django
<p>I am new and trying to display the image of the user after he logs in. I am using the django user model, which is linked by a one-to-one field to my extended user model called Consumer. I am using auth.views to authenticate the user and generate the view. I am able to get the username but not able to retrieve the image URL, as I am not able to get the Consumer instance. Where and how can I extract the Consumer details for the logged-in user without writing my own authentication code?</p> <p>Please help. Please see my code below.</p> <p>Models.py</p> <pre><code>class Consumer(models.Model): user = models.OneToOneField(User) cimage = models.ImageField(blank=True) company = models.CharField(max_length=32) </code></pre> <p>urls.py</p> <pre><code>(r'^login/$', 'django.contrib.auth.views.login'), </code></pre> <p>screen.html </p>
<p>If you're getting the username via <code>{{ user.username }}</code> you can just do the same thing to get to the consumer: <code>{{ user.consumer.cimage }}</code>.</p>
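A minimal template sketch putting that together (field names taken from the question's Consumer model; since `cimage` is an `ImageField`, its `.url` attribute gives the path for the `src`; this assumes a Consumer row exists for the logged-in user, otherwise the lookup silently renders empty):

```html
{% if user.is_authenticated %}
  <p>Welcome, {{ user.username }} ({{ user.consumer.company }})</p>
  {% if user.consumer.cimage %}
    <img src="{{ user.consumer.cimage.url }}" alt="user image">
  {% endif %}
{% endif %}
```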
python|django|templates
0
1,904,431
31,722,238
Installing caffe on ubuntu 15.04 with anaconda 3 for python 3.4 - no module caffe found
<p>I am trying to install caffe on my ubuntu 15.04 with anaconda 3 (for python 3.4). I managed to install all requirements and I followed the instructions from official website. So I downloaded caffe-master and did:</p> <pre><code>cd ./caffe-master make all make pycaffe </code></pre> <p>It completes fine, no errors (finally). But after that if I go into anaconda and do</p> <pre><code>import caffe </code></pre> <p>I get no module caffe is found. What am I doing wrong? Any ideas?</p>
<p>Finally solved. Honestly, the issue was an incorrect makefile.config: I needed to be extremely careful in adjusting it to specify all the paths to the Anaconda folders, and I had incorrectly specified the path to the python3.4 libraries.</p> <p>The point is: when setting up Caffe with Anaconda and facing issues, go over makefile.config one more time - you have most likely misconfigured something.</p>
python|ubuntu|anaconda|caffe
2
1,904,432
40,565,188
Django DisallowedHost error
<p>I'm running django with apache and I'm getting the following error in my apache error.log:</p> <pre><code>django.core.exceptions.DisallowedHost: Invalid HTTP_HOST header: 'example.com'. You may need to add 'example.com' to ALLOWED_HOSTS., referer: http://example.com/ </code></pre> <p>In my settings.py I have:</p> <pre><code>ALLOWED_HOSTS = ['*'] </code></pre> <p>This should allow any host shouldn't it?</p> <p>Edit: After some more investigating I've found out that no matter what I set ALLOWED_HOSTS to, it always results in the above error. I can reach the website just fine when I'm using the local IP address of the server. The only thing that's not working is the remote URL.</p>
<p>I finally found the solution to the problem.</p> <p>The wsgi.py that's connecting django with apache was overriding the ALLOWED_HOSTS setting in my settings.py.</p> <p>wsgi has its own ALLOWED_HOSTS that can be set independently from the django settings. Checking all the possible configuration files was crucial for finding this error.</p>
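The override mechanism is worth spelling out. A wsgi.py typically selects the settings module via `os.environ.setdefault`, so a value already present in the environment (set by mod_wsgi configuration or a second entry point) silently wins over the settings file you edited - including its ALLOWED_HOSTS. A small stand-alone sketch of the `setdefault` semantics (module names are hypothetical):

```python
# Stand-in dicts for os.environ, to show setdefault semantics.
env = {}
env.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
print(env['DJANGO_SETTINGS_MODULE'])  # myproject.settings

# If something already set the variable, setdefault keeps the old value,
# so a different settings file (with its own ALLOWED_HOSTS) gets loaded.
env2 = {'DJANGO_SETTINGS_MODULE': 'myproject.settings_prod'}
env2.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')
print(env2['DJANGO_SETTINGS_MODULE'])  # myproject.settings_prod
```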
python|django|apache
2
1,904,433
40,500,216
How to Transpose each element in a 3D np array
<p>Given a 3D array a, I want to call np.transpose on each element along its first axis.<br> For example, given the array:</p> <pre><code>array([[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2]], [[3, 3, 3, 3], [3, 3, 3, 3], [3, 3, 3, 3]]]) </code></pre> <p>I want:</p> <pre><code>array([[[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]], [[2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2]], [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]]]) </code></pre> <p>Essentially I want to transpose each element inside the array. I tried to reshape it but I can't find a good way of doing it. Looping through it and calling transpose on each would be too slow. Any advice? </p>
<p>You can use the built-in numpy <code>transpose</code> method and directly specify the axes to transpose</p> <pre><code>&gt;&gt;&gt; a = np.array([[[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], [[2, 2, 2, 2], [2, 2, 2, 2], [2, 2, 2, 2]], [[3, 3, 3, 3], [3, 3, 3, 3], [3, 3, 3, 3]]]) &gt;&gt;&gt; print(a.transpose((0, 2, 1))) [[[1 1 1] [1 1 1] [1 1 1] [1 1 1]] [[2 2 2] [2 2 2] [2 2 2] [2 2 2]] [[3 3 3] [3 3 3] [3 3 3] [3 3 3]]] </code></pre>
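For this particular case (exchanging the last two axes) `np.swapaxes` is an equivalent spelling that reads a little more directly:

```python
import numpy as np

a = np.array([[[1, 1, 1, 1]] * 3,
              [[2, 2, 2, 2]] * 3,
              [[3, 3, 3, 3]] * 3])

# Exchanging axes 1 and 2 is the same as a.transpose((0, 2, 1)) here.
b = np.swapaxes(a, 1, 2)
print(b.shape)  # (3, 4, 3)
```

Both forms return a view, not a copy, so neither loops in Python and both are effectively free.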
python|numpy
15
1,904,434
9,823,143
Check if a directory is a (file system) root
<p>I have a script that searches for a directory containing a specific file, starting from the current directory and going up the tree (think of trying to find out where the <code>.git</code> directory sits).</p> <p>My method looks like this:</p> <pre><code>def getDir(self,cwd): path = os.path.abspath(cwd) if not os.path.isdir(path): raise RuntimeError("getDir should not be run on files") if FILE in os.listdir(path): return path parent = os.path.join(path, os.path.pardir) if os.access(parent, os.R_OK) and os.access(parent, os.X_OK): return self.getDir(parent) else return None </code></pre> <p>Now the problem with this method is that, if it cannot find the directory, it loops (and eventually stack-overflows) because apparently joining <code>/</code> and <code>..</code> gives you <code>/</code> again. I tried comparing <code>path</code> with <code>parent</code> or their <code>repr</code>s, but that did not work (they were always distinct). My workaround for now is to include a depth counter in the recursive method and stop at some random maximum threshold.</p> <p>My question is thus, is there a reliable cross-platform way to check whether I have reached a root in the file system?</p>
<pre><code>if os.path.dirname(path) == path: # you have yourself root. # works on Windows and *nix paths. # does NOT work on Windows shares (\\server\share) </code></pre>
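Using that termination test, the original recursive search can be written as a loop that provably stops. A sketch (the question's `FILE` constant replaced by a `name` parameter):

```python
import os

def find_dir_containing(start, name):
    """Walk upwards from start and return the first directory that
    contains `name`, or None once the filesystem root has been reached."""
    path = os.path.abspath(start)
    while True:
        if name in os.listdir(path):
            return path
        parent = os.path.dirname(path)
        if parent == path:  # dirname(root) == root: nowhere left to go
            return None
        path = parent
```

Because `os.path.dirname` never appends `..`, the loop cannot bounce on `/` the way `os.path.join(path, os.path.pardir)` does.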
python
24
1,904,435
26,033,781
Converting python.io object to std::ostream when using boost::python
<p>I have seen answers related to converting a python.io object to std::istream. Is there any way this can be achieved for std::ostream using boost::iostreams::sink? In my case I have a C++ function:</p> <pre><code>void writeToStream(std::ostream&amp;) </code></pre> <p>How can I expose this function to Python?</p>
<p>As was done in <a href="https://stackoverflow.com/a/24984472/1053968">this</a> answer, one should consider implementing a device type to be used by Boost.IOStream to perform delegation rather than converting to a <code>std::ostream</code>. In this case, the Boost.IOStream's <a href="http://www.boost.org/doc/libs/1_57_0/libs/iostreams/doc/index.html" rel="nofollow noreferrer">Device</a> concept will need to support the <a href="http://www.boost.org/doc/libs/1_57_0/libs/iostreams/doc/concepts/sink.html" rel="nofollow noreferrer">Sink</a> concept. It can also be extended to support additional functionality, such as flushing the buffer, by modeling the <a href="http://www.boost.org/doc/libs/1_53_0/libs/iostreams/doc/concepts/flushable.html" rel="nofollow noreferrer">Flushable</a> concept.</p> <p>A concept specifies what a type must provide. In the case of the Sink concept, a model can be defined as follows:</p> <pre class="lang-cpp prettyprint-override"><code>struct Sink { typedef char char_type; typedef sink_tag category; std::streamsize write(const char* s, std::streamsize n) { // Write up to n characters from the buffer // s to the output sequence, returning the // number of characters written } }; </code></pre> <p>The Flushable concept is a bit less direct in the document, and can be determined upon examining the <a href="http://www.boost.org/doc/libs/1_53_0/libs/iostreams/doc/functions/flush.html#flush_device" rel="nofollow noreferrer"><code>flush()</code></a> function semantics. In this case, the model can be defined as follows:</p> <pre class="lang-cpp prettyprint-override"><code>struct Flushable { typedef flushable_tag category; bool flush() { // Send all buffered characters downstream. On error, // throw an exception. Otherwise, return true. } }; </code></pre> <p>Here is a basic type that models the Sink and Flushable concept and delegates to a Python object using <a href="http://en.wikipedia.org/wiki/Duck_typing" rel="nofollow noreferrer">duck typing</a>. 
The python object is:</p> <ul> <li>Required to have a <code>write(str)</code> method that returns either <code>None</code> or the number of bytes written.</li> <li>An optional <code>flush()</code> method.</li> </ul> <pre class="lang-cpp prettyprint-override"><code>/// @brief Type that implements the Boost.IOStream's Sink and Flushable /// concept for writing data to Python object that support: /// n = object.write(str) # n = None or bytes written /// object.flush() # if object.flush exists, then it is callable class PythonOutputDevice { public: // This class models both the Sink and Flushable concepts. struct category : boost::iostreams::sink_tag, boost::iostreams::flushable_tag {}; explicit PythonOutputDevice(boost::python::object object) : object_(object) {} // Sink concept. public: typedef char char_type; std::streamsize write(const char* buffer, std::streamsize buffer_size) { namespace python = boost::python; // Copy the buffer to a python string. python::str data(buffer, buffer_size); // Invoke write on the python object, passing in the data. The following // is equivalent to: // n = object_.write(data) python::extract&lt;std::streamsize&gt; bytes_written( object_.attr("write")(data)); // Per the Sink concept, return the number of bytes written. If the // Python return value provides a numeric result, then use it. Otherwise, // such as the case of a File object, use the buffer_size. return bytes_written.check() ? bytes_written : buffer_size; } // Flushable concept. public: bool flush() { // If flush exists, then call it. boost::python::object flush = object_.attr("flush"); if (!flush.is_none()) { flush(); } // Always return true. If an error occurs, an exception should be thrown. 
return true; } private: boost::python::object object_; }; </code></pre> <p>Notice that in order to support multiple concepts, a nested <code>category</code> struct was created that inherits from the multiple category tags that the model implements.</p> <p>With the Boost.IOStream Device available, the last step is to expose an auxiliary function that will create the stream with a Python object, then invoke the existing <code>writeToStream()</code> function.</p> <pre class="lang-cpp prettyprint-override"><code>/// @brief Use an auxiliary function to adapt the legacy function. void aux_writeToStream(boost::python::object object) { // Create an ostream that delegates to the python object. boost::iostreams::stream&lt;PythonOutputDevice&gt; output(object); writeToStream(output); }; BOOST_PYTHON_MODULE(example) { namespace python = boost::python; python::def("writeToStream", &amp;aux_writeToStream); } </code></pre> <hr> <p>Here is a complete minimal example:</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;iosfwd&gt; // std::streamsize #include &lt;iostream&gt; #include &lt;boost/python.hpp&gt; #include &lt;boost/iostreams/concepts.hpp&gt; // boost::iostreams::sink #include &lt;boost/iostreams/stream.hpp&gt; /// @brief Legacy function. void writeToStream(std::ostream&amp; output) { output &lt;&lt; "Hello World"; output.flush(); } /// @brief Type that implements the Boost.IOStream's Sink and Flushable /// concept for writing data to Python object that support: /// n = object.write(str) # n = None or bytes written /// object.flush() # if flush exists, then it is callable class PythonOutputDevice { public: // This class models both the Sink and Flushable concepts. struct category : boost::iostreams::sink_tag, boost::iostreams::flushable_tag {}; explicit PythonOutputDevice(boost::python::object object) : object_(object) {} // Sink concept. 
public: typedef char char_type; std::streamsize write(const char* buffer, std::streamsize buffer_size) { namespace python = boost::python; // Copy the buffer to a python string. python::str data(buffer, buffer_size); // Invoke write on the python object, passing in the data. The following // is equivalent to: // n = object_.write(data) python::extract&lt;std::streamsize&gt; bytes_written( object_.attr("write")(data)); // Per the Sink concept, return the number of bytes written. If the // Python return value provides a numeric result, then use it. Otherwise, // such as the case of a File object, use the buffer_size. return bytes_written.check() ? bytes_written : buffer_size; } // Flushable concept. public: bool flush() { // If flush exists, then call it. boost::python::object flush = object_.attr("flush"); if (!flush.is_none()) { flush(); } // Always return true. If an error occurs, an exception should be thrown. return true; } private: boost::python::object object_; }; /// @brief Use an auxiliary function to adapt the legacy function. void aux_writeToStream(boost::python::object object) { // Create an ostream that delegates to the python object. boost::iostreams::stream&lt;PythonOutputDevice&gt; output(object); writeToStream(output); }; BOOST_PYTHON_MODULE(example) { namespace python = boost::python; python::def("writeToStream", &amp;aux_writeToStream); } </code></pre> <p>Interactive usage:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import example &gt;&gt;&gt; import io &gt;&gt;&gt; with io.BytesIO() as f: ... example.writeToStream(f) ... print f.getvalue() ... Hello World &gt;&gt;&gt; with open('test_file', 'w') as f: ... example.writeToStream(f) ... &gt;&gt;&gt; $ cat test_file Hello World </code></pre>
c++|boost-python
1
1,904,436
1,542,803
Is there a version of os.getcwd() that doesn't dereference symlinks?
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://stackoverflow.com/questions/123958/how-to-get-set-logical-directory-path-in-python">How to get/set logical directory path in python</a> </p> </blockquote> <p>I have a Python script that I run from a symlinked directory, and I call os.getcwd() in it, expecting to get the symlinked path I ran it from. Instead it gives me the "real" path, and in this case that's not helpful. I need it to actually give me the symlinked version.</p> <p>Does Python have a command for that?</p>
<p>Workaround: <code>os.getenv('PWD')</code></p>
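Note that `PWD` is just an environment variable maintained by the shell, so it can be absent (e.g. under cron) or stale if the process changed directory after startup. A guarded sketch that prefers the logical path but falls back to `os.getcwd()`:

```python
import os

def logical_cwd():
    """Prefer the shell's logical $PWD (symlinks preserved) when it still
    refers to the actual current directory; otherwise use os.getcwd()."""
    pwd = os.environ.get('PWD')
    if pwd and os.path.realpath(pwd) == os.path.realpath(os.getcwd()):
        return pwd
    return os.getcwd()

print(logical_cwd())
```

The `realpath` comparison is what protects against a stale `PWD`: both spellings must resolve to the same physical directory before the symlinked one is trusted.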
python|path|symlink
16
1,904,437
44,189,293
Loop unexpectedly stops after one iteration
<p>I'm iterating over a list of lists with length 4, but it stops after one iteration.</p> <pre><code> print((data)) formatted_exec_dictionary_list = [] for entry in data: print(entry) try: formatted_exec = { 'minute': self.validate_type_and_range(entry[0], 0, 59), 'hour': self.validate_type_and_range(entry[1], 0, 23), 'path': entry[2] } formatted_exec_dictionary_list.append(formatted_exec) except IndexError as exception: print(exception) if self.logger: self.logger.error(""" Could not successfully parse entry... Exception: %s""", exception) </code></pre> <p>In the topmost print in the snippet this is the output:</p> <pre><code>[[u'30', u'1', u'/bin/run_me_daily'], [u'45', u'*', u'/bin/run_me_hourly'], [u'*', u'*', u'/bin/run_me_every_minute'], [u'*', u'19', u'/bin/run_me_sixty_times']] </code></pre> <p>But the for-loop wrapped print only returns the first entry, then it stops. </p> <p>No exception seems to be thrown as the print isn't firing and the rest of the program functions fine. It's as if all the rest of the entries in the list are just ignored...</p> <p>Is this something common? Any clue why this would be happening?</p> <p>For clarification NOTHING in particular happens in the loop that seems to be throwing an exception. The function <code>validate_type_and_range</code> prints the expected data, but only for the one iteration.</p>
<p>When I replace your 'validate_type_and_range(...)' with some strings, I get the following output:</p> <pre><code>[[u'30', u'1', u'/bin/run_me_daily'], [u'45', u'*', u'/bin/run_me_hourly'], [u'*', u'*', u'/bin/run_me_every_minute'], [u'*', u'19', u'/bin/run_me_sixty_times']] [u'30', u'1', u'/bin/run_me_daily'] [u'45', u'*', u'/bin/run_me_hourly'] [u'*', u'*', u'/bin/run_me_every_minute'] [u'*', u'19', u'/bin/run_me_sixty_times'] </code></pre> <p>I think there is something going on in your 'validate_type_and_range'</p> <pre><code>print((data)) formatted_exec_dictionary_list = [] for entry in data: print(entry) try: formatted_exec = { 'minute': "foo", 'hour': "bar", 'path': entry[2] } formatted_exec_dictionary_list.append(formatted_exec) except IndexError as exception: print(exception) if self.logger: self.logger.error(""" Could not successfully parse entry... Exception: %s""", exception) </code></pre>
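One concrete way this can bite: the loop only catches IndexError, so if `validate_type_and_range` raises anything else (for instance a ValueError from `int('*')` on the second entry, whose minute field is `'45'` but whose hour field is `'*'`), the exception escapes the `try` block and ends the loop early. The real implementation is not shown in the question, so the following defensive sketch is an assumption about its intent - cron-style fields where `'*'` is a wildcard:

```python
def validate_type_and_range(value, low, high):
    # '*' is a wildcard in cron-like entries: pass it through unchanged
    # instead of letting int('*') raise ValueError.
    if value == '*':
        return value
    n = int(value)
    if not low <= n <= high:
        raise ValueError('%r is outside the range [%d, %d]' % (n, low, high))
    return n

print(validate_type_and_range('30', 0, 59))  # 30
print(validate_type_and_range('*', 0, 59))   # *
```

If the real function does something similar but raises on `'*'`, widening the `except` clause (or handling the wildcard) should restore all four iterations.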
python|python-3.x|loops
1
1,904,438
44,022,029
In Keras, how could I input two images to a Dense layer?
<p>I would like to use Keras and TensorFlow to do the folowing:</p> <p>I have 10.000 sets of three images, each 8x8 bytes with a color value of 0 to 16 in my training data. I call them image A, B and C. (actually they are represented as three strings of 64 hexadecimal chars in a csv file)</p> <p>C is a sort of combination of A and B.</p> <p>What I would like to do is train a network so that it gets A and B as inputs and C as a label. Then feed two images (A,B) the network has not seen before and generate C.</p> <p>What I have done to read the csv file is:</p> <pre><code>import pandas as pds training_data_filename = 'training_data.csv' dataframeX = pds.read_csv(training_data_filename, usecols=[2, 3]) dataframeY = pds.read_csv(training_data_filename, usecols=[4]) print(dataframeX.head()) print(dataframeY.head()) </code></pre> <p>The output is:</p> <pre><code> A \ 0 0x7ff10d4d9fc927fc0fdfcd43e1715a34b7def9d77299... 1 0xe6da2b370a3b5b42cf2bc0a082e2e0165f2f321bd10c... 2 0xf179a821c24164fcacd6e75b15892fa8c8e42cb1571a... 3 0x27eb1825923ef55e80491199ec68438b94857f02ed47... 4 0x77bc45a7728b4578066639ef4525f0fe5d80d450595c... B 0 0xd9b40a45b5df976433ebdfddcfe290a66ecf576d18ae... 1 0xf765866850270f595d8239acee9d0d8634249b9d4ac6... 2 0x61cc8870ca4690953184f3b680bcdc38039215d2174b... 3 0x515f6afe8e6f9098abc0da8807ee1070947a4686edd0... 4 0x8ffb621c0398e392d0be0000e3088649eced85dd0c45... C 0 0x35c1f6ebc3ef2d424ced351b65be8c396f8396d69411... 1 0x4421ce104498ad13b639d6f26b105b4be092f3786f3b... 2 0xcbab77b6c025fe66f967f46ff6138f034dc28fedcb50... 3 0x1f6c372b5d2dfecf29592f71a49dfda5fb930e4e90d5... 4 0x2618f3b170046de775727421ed122174f31ae15fa555... </code></pre> <p>But I don't understand how to feed (A,B) to a Dense layer and use C as the desired output. How do I build my model?</p>
<p>You cannot feed strings into a dense layer. The dense layer expects a vector of numbers as input. So the first step would be to convert your input images to a numpy array (I am not sure why your images are hexadecimal strings). </p> <p>E.g.:</p> <pre><code>import cv2 numpy_array = cv2.imread(&quot;img.jpg&quot;) </code></pre> <p>From there it is pretty easy to feed the numpy array to a dense layer and perform classification, for example. For a code example see: <a href="https://github.com/fchollet/keras/blob/master/examples/mnist_mlp.py" rel="nofollow noreferrer">Training a simple deep NN on the MNIST dataset</a></p> <p>That said, if I understand you correctly, you want to generate an image C from two images A and B. This is not a job for a plain dense network. If this is what you want to do, have a look at Convolutional Neural Networks and Generative Adversarial Networks.</p>
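Since the question's images arrive as 64-character hex strings in a CSV rather than as image files, `cv2.imread` does not apply directly. A sketch of the parsing step is below; the exact encoding is an assumption - here each hex digit (0-15) is taken as one pixel value in an 8x8 grid:

```python
import numpy as np

def hex_to_image(hex_string):
    # Drop the '0x' prefix if present, then map each hex digit to a pixel.
    s = hex_string[2:] if hex_string.startswith('0x') else hex_string
    pixels = [int(c, 16) for c in s]
    return np.array(pixels, dtype=np.float32).reshape(8, 8)

img = hex_to_image('0x' + '0f' * 32)
print(img.shape)  # (8, 8)
```

Flattening the parsed A and B arrays and concatenating them into one 128-element vector gives the shape a Dense input layer expects, with the flattened C array as the target.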
python|tensorflow|neural-network|deep-learning|keras
0
1,904,439
43,965,659
How to get the amount of 'strings'(?) and print
<p>I have a list of different words divided with ':' in a .txt, as such: </p> <pre><code>banana:pinapple apple:grapes orange:nuts ... </code></pre> <p>How can I get the number of lines that have a word on the left of the colon and print that number?</p> <p>I am using this to separate them:</p> <p><code>string1, string2 = line.split(':')</code></p> <p>I want to print the number sort of like this:</p> <pre><code>print(number of lines where string1 exists) </code></pre>
<pre><code>fileName = 'myfile.txt' counterDict = {} with open(fileName,'r') as fh: for line in fh: left,right = line.rstrip('\n').split(':') counterDict[left] = counterDict.get(left,0)+1 print(counterDict) </code></pre> <p><strong># cat myfile.txt</strong></p> <pre><code>banana:pinapple apple:grapes orange:nuts banana:pinapple apple:grapes orange:nuts banana:pinapple apple:grapes orange:nuts </code></pre> <p><strong># Result</strong></p> <pre><code>{'banana':3,'apple':3,'orange':3} </code></pre>
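The same per-word tally can be written with the standard library's collections.Counter, which replaces the manual dict bookkeeping (the list below is a stand-in for the file's lines):

```python
from collections import Counter

lines = ["banana:pinapple", "apple:grapes", "orange:nuts",
         "banana:pinapple", "apple:grapes"]   # stand-in for open(fileName)

# Count how many lines start with each left-hand word.
counts = Counter(line.split(":", 1)[0] for line in lines)
```

With a real file, the generator expression works unchanged over the open file handle, e.g. `Counter(line.split(":", 1)[0] for line in fh if ":" in line)`.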
python
0
1,904,440
32,668,514
'int' object is not callable Error
<pre><code>t=(input()).rsplit(' ') t1=int(t[0]) t2=int(t[1]) k=[] for i in range(0,t1): k.append(input().rsplit(' ')) k6=int(input()) for i in range(0,t1): k[i][k6]=int(k[i][k6]) k.sort(key=k6) for i in range(0,t1): for j in range(0,t2): print(k[i][j],end=' ') print() </code></pre> <p>So I was solving a problem related to sorting and came by this error,</p> <pre><code>Traceback (most recent call last): File "solution.py", line 10, in &lt;module&gt; k.sort(key=k6) TypeError: 'int' object is not callable </code></pre> <p>Please can anyone help me with this. The question is this <a href="https://hackerrank-challenge-pdfs.s3.amazonaws.com/8203-python-sort-sort-English?AWSAccessKeyId=AKIAJAMR4KJHHUS76CYQ&amp;Expires=1442671240&amp;Signature=Xhd3JWwVfN7maARBP3t7OXlIp8E%3D&amp;response-content-disposition=inline%3B%20filename%3Dpython-sort-sort-English.pdf&amp;response-content-type=application%2Fpdf" rel="nofollow">Question</a></p>
<p>From the documentation:</p> <blockquote> <p>The value of the key parameter should be a function that takes a single argument and returns a key to use for sorting purposes. This technique is fast because the key function is called exactly once for each input record.</p> </blockquote> <p>For example:</p> <pre><code>student_tuples = [ ('john', 'A', 15), ('jane', 'B', 12), ('dave', 'B', 10), ] sorted(student_tuples, key=lambda student: student[2]) # sort by age [('dave', 'B', 10), ('jane', 'B', 12), ('john', 'A', 15)] </code></pre> <p>So basically what you can do is write a lambda function like this, though you will have to fix several other issues to get the expected result.</p>
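Applied to the question's code, the fix is to pass a callable that picks out the column at index k6, rather than the int k6 itself (the data below is a made-up stand-in for the parsed input):

```python
# Rows as parsed in the question, with the column at index k6
# already converted to int.
k = [["b", 2], ["a", 3], ["c", 1]]
k6 = 1

# key must be a function of one row, not an int.
k.sort(key=lambda row: row[k6])
```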
python|sorting
0
1,904,441
13,971,240
Sqlalchemy Nested Query Returns Wrong data
<p>I have this SQL expression:</p> <pre><code>SELECT count(*), date_trunc('day', data) from ( select max(hc.acesso) as data from historico_comunicacao hc join cliente c on (c.id = hc.id_cliente) group by c.id having max(hc.acesso) between ('2012-11-30') and ('2012-12-07 23:59:59') order by max(hc.acesso) desc ) as x GROUP BY 2 ORDER BY 2 DESC; </code></pre> <p><strong>So, I did so in SqlAlchemy:</strong></p> <pre><code>nestedQuery = session.query(func.max(MapComunicacao.acesso).label('data'))\ .join(MapCliente)\ .group_by(MapCliente.id)\ .having(func.max(MapComunicacao.acesso).between(dataini, dataFinal))\ .order_by(desc(func.max(MapComunicacao.acesso))) query = session.query( func.count(nestedQuery.subquery().columns.data), extract('day', nestedQuery.subquery().columns.data) )\ .group_by('anon_3.data') result = query.all() </code></pre> <p>No error occurs, but the data returned are wrong for me.</p> <p>I have two tables: Cliente (Customer) and Historico_Acesso (Access_History). What I want to know is the total number of customers who have talked to my DataBase for the last time, grouped by date. </p> <p>Like this:</p> <p>Total Date</p> <p>19; "2012-12-07 00:00:00+00"</p> <p>16; "2012-12-06 00:00:00+00" </p> <p>20; "2012-12-05 00:00:00+00" </p> <p>06; "2012-12-04 00:00:00+00" </p> <p>06; "2012-12-03 00:00:00+00"</p> <p>01; "2012-12-02 00:00:00+00" </p> <p>04; "2012-12-01 00:00:00+00" </p> <p>09; "2012-11-30 00:00:00+00"</p>
<p>Wild guess: try to extract <code>nestedQuery.subquery()</code> into a variable, so that it's called only once.</p> <p>If that doesn't work, SQLAlchemy will gladly print the generated SQL for you with <code>print query</code>. You can then compare with your handcrafted SQL query.</p>
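Along those lines, here is a runnable sketch of the fix against an in-memory SQLite database, with a simplified stand-in for the question's schema (the join to cliente is dropped and all names are illustrative). The key point is that .subquery() is called once and the result reused for every column reference, so both references point at the same aliased subquery:

```python
import datetime as dt
from sqlalchemy import create_engine, Column, Integer, DateTime, func
from sqlalchemy.orm import declarative_base, Session  # SQLAlchemy >= 1.4

Base = declarative_base()

# Simplified stand-in for historico_comunicacao.
class Historico(Base):
    __tablename__ = "historico"
    id = Column(Integer, primary_key=True)
    id_cliente = Column(Integer)
    acesso = Column(DateTime)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Historico(id_cliente=1, acesso=dt.datetime(2012, 12, 5)),
        Historico(id_cliente=1, acesso=dt.datetime(2012, 12, 7)),
        Historico(id_cliente=2, acesso=dt.datetime(2012, 12, 5)),
    ])
    session.commit()

    # Last access per client, like the question's inner query.
    nested = session.query(func.max(Historico.acesso).label("data")) \
                    .group_by(Historico.id_cliente)

    sub = nested.subquery()   # built ONCE, then reused below
    day = func.date(sub.c.data)
    rows = session.query(func.count(sub.c.data), day) \
                  .group_by(day).order_by(day.desc()).all()
```

Printing the generated SQL (`print(query)`) on both variants shows whether the two column references ended up aliased to the same subquery.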
python|sql|postgresql|sqlalchemy|nested
1
1,904,442
14,054,193
Use a comment to improve code analyst function in PyDev like ZendStudio
<p>In Zend Studio, I can use</p> <pre><code>/** @return CUser */ function getUser() { } </code></pre> <p>to tell the IDE that the type of the return value of the getUser function is CUser.</p> <p>But I can't do the same thing in PyDev. How can I achieve the same thing there?</p>
<p>PyDev does have predefined completion modules. It doesn't look like it works as nicely as ZendStudio, but it's similar to the feature you're looking for.</p> <p>Take a look here: <a href="http://pydev.org/manual_101_interpreter.html#PyDevInterpreterConfiguration-PredefinedCompletions" rel="nofollow">http://pydev.org/manual_101_interpreter.html#PyDevInterpreterConfiguration-PredefinedCompletions</a></p> <p>You basically define a module that acts like an interface/documentation of your module.</p>
python|pydev|zend-studio
1
1,904,443
14,075,397
How to count occurrences at the end of the list
<p>I would like to count identical occurrences at the end of a list in Python. It's fairly trivial to do, but I'm interested in your more interesting solutions as well. The list can contain only the items '1' or '2'. The result must be in [3,4,5]: if the trailing count is less than 3, quit; if it is more than 5, return 5.</p> <p><strong>Examples:</strong></p> <p>Let's have </p> <pre><code> L = [1,1,2] Result: None (quit) L = [1,2,1,1] Result: None (quit) L = [1,2,1,1,1] Result: 3 L = [1,1,2,2,2,2] Result: 4 L = [1,2,1,1,1,1,1,1] Result: 5 </code></pre>
<p>I fulfil the boring job of giving a readable answer. ;) It works with all kinds of elements, not just <code>1</code>s and <code>2</code>s. </p> <pre><code>In [1]: def list_end_counter(lst): ....: counter = 0 ....: for elem in reversed(lst): ....: if elem == lst[-1]: ....: counter += 1 ....: else: ....: break ....: if counter &lt; 3: ....: return None ....: elif counter &gt; 5: ....: return 5 ....: return counter </code></pre> <p>A slight modification to save some lines:</p> <pre><code>In [1]: def list_end_counter(lst): ....: def stop(): ....: raise StopIteration() ....: counter = sum(1 if elem == lst[-1] else stop() for elem in reversed(lst)) ....: return None if counter &lt; 3 else 5 if counter &gt; 5 else counter </code></pre> <p>Both give the correct results:</p> <pre><code>In [2]: print list_end_counter([1,1,2]) None In [3]: print list_end_counter([1,2,1,1]) None In [4]: print list_end_counter([1,2,1,1,1]) 3 In [5]: print list_end_counter([1,1,2,2,2,2]) 4 In [6]: print list_end_counter([1,2,1,1,1,1,1,1]) 5 </code></pre>
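The same logic can also be written with itertools.takewhile, which stops at the first non-matching element for you, a compact variant of the loop above:

```python
from itertools import takewhile

def list_end_counter(lst):
    # Count trailing elements equal to the last one, then clamp:
    # None below 3, capped at 5.
    if not lst:
        return None
    n = sum(1 for _ in takewhile(lambda x: x == lst[-1], reversed(lst)))
    return None if n < 3 else min(n, 5)

results = [list_end_counter(l) for l in
           ([1,1,2], [1,2,1,1], [1,2,1,1,1],
            [1,1,2,2,2,2], [1,2,1,1,1,1,1,1])]
```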
python
2
1,904,444
14,189,937
Separate mixture of gaussians in Python
<p>There is a result of some physical experiment, which can be represented as a histogram <code>[i, amount_of(i)]</code>. I suppose that result can be estimated by a mixture of 4 - 6 Gaussian functions.</p> <p>Is there a package in Python which takes a histogram as an input and returns the mean and variance of each Gaussian distribution in the mixture distribution?</p> <p>Original data, for example:</p> <p><img src="https://i.stack.imgur.com/ze2G9.png" alt="Sample data"></p>
<p>This is a <a href="http://en.wikipedia.org/wiki/Mixture_model">mixture of gaussians</a>, and can be estimated using an <a href="http://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm">expectation maximization</a> approach (basically, it estimates the means and variances of the components at the same time as it estimates how they are mixed together).</p> <p>This is implemented in the <a href="http://www.pymix.org/pymix/">PyMix</a> package. Below I generate an example of a mixture of normals, and use PyMix to fit a mixture model to them, including figuring out what you're interested in, which is the size of subpopulations:</p> <pre><code># requires numpy and PyMix (matplotlib is just for making a histogram) import random import numpy as np from matplotlib import pyplot as plt import mixture random.seed(010713) # to make it reproducible # create a mixture of normals: # 1000 from N(0, 1) # 2000 from N(6, 2) mix = np.concatenate([np.random.normal(0, 1, [1000]), np.random.normal(6, 2, [2000])]) # histogram: plt.hist(mix, bins=20) plt.savefig("mixture.pdf") </code></pre> <p>All the above code does is generate and plot the mixture. 
It looks like this:</p> <p><img src="https://i.stack.imgur.com/M337Q.png" alt="enter image description here"></p> <p>Now to actually use PyMix to figure out what the percentages are:</p> <pre><code>data = mixture.DataSet() data.fromArray(mix) # start them off with something arbitrary (probably based on a guess from the figure) n1 = mixture.NormalDistribution(-1,1) n2 = mixture.NormalDistribution(1,1) m = mixture.MixtureModel(2,[0.5,0.5], [n1,n2]) # perform expectation maximization m.EM(data, 40, .1) print m </code></pre> <p>The output model of this is:</p> <pre><code>G = 2 p = 1 pi =[ 0.33307859 0.66692141] compFix = [0, 0] Component 0: ProductDist: Normal: [0.0360178848449, 1.03018725918] Component 1: ProductDist: Normal: [5.86848468319, 2.0158608802] </code></pre> <p>Notice it found the two normals quite correctly (one <code>N(0, 1)</code> and one <code>N(6, 2)</code>, approximately). It also estimated <code>pi</code>, which is the fraction in each of the two distributions (you mention in the comments that's what you're most interested in). We had 1000 in the first distribution and 2000 in the second distribution, and it gets the division almost <em>exactly</em> right: <code>[ 0.33307859 0.66692141]</code>. If you want to get this value directly, do <code>m.pi</code>.</p> <p>A few notes:</p> <ul> <li>This approach takes a vector of values, not a histogram. It should be easy to convert your data into a 1D vector (that is, turn <code>[(1.4, 2), (2.6, 3)]</code> into <code>[1.4, 1.4, 2.6, 2.6, 2.6]</code>)</li> <li>We had to guess the number of gaussian distributions in advance (it won't figure out a mix of 4 if you ask for a mix of 2).</li> <li>We had to put in some initial estimates for the distributions. If you make even remotely reasonable guesses it should converge to the correct estimates.</li> </ul>
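If installing PyMix is a problem (it is an old, Python 2 era package), scikit-learn ships the same EM-based estimator as sklearn.mixture.GaussianMixture. A sketch on similarly simulated data, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(10713)
# 1000 samples from N(0, 1) and 2000 from N(6, 2), as in the PyMix example.
mix = np.concatenate([rng.normal(0, 1, 1000),
                      rng.normal(6, 2, 2000)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(mix)

order = np.argsort(gmm.means_.ravel())   # sort components by mean
weights = gmm.weights_[order]            # mixing fractions, ~1/3 and ~2/3
means = gmm.means_.ravel()[order]        # ~0 and ~6
```

`weights` plays the role of PyMix's `pi`, i.e. the subpopulation fractions.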
python|statistics|normal-distribution
18
1,904,445
27,297,690
Tkinter button not working (Python 3.x)
<p>I'm working on my final project for my computing I class. </p> <p>The problem that I am having is: <br> When I click on the new entry button, hit the back button and click on the new entry button once again it <strong>does not work</strong>. </p> <p>If you guys could tell me why that is? </p> <p>The command on the button seems to be only working once. Thanks for your help.</p> <p><strong>Code:</strong></p> <pre><code>from tkinter import * import tkinter.filedialog class App(Tk): def __init__(self): Tk.__init__(self) self.title("Entry Sheet") self.font = ("Helvetica","13") self.header_font = ("Helvetica","18") self.exercise_font = ("Helvetica","13","bold") self.delete = 'a' self.new_user() def new_user(self): if self.delete == 'b': self.delete = 'c' self.hide() self.delete = 'b' self.new_entry = Button(self, text = 'New Entry', command = self.entry, width = 15) self.new_entry.grid(row = 1, column = 0, columnspan = 3, padx = 10, pady = 5) self.look_entry = Button(self, text = 'See Entries', command = self.see_entries, width = 15) self.look_entry.grid(row = 2, column =0, columnspan = 3, padx = 10, pady = 5) def entry(self): print(1) self.delete = 'b' self.hide() self.entry = Label(self, text = 'New Entry', font = self.header_font) self.entry.grid(row = 0, column = 0, columnspan = 2) self.numberlbl = Label(self, text = 'Please choose a muscle?', font = self.font) self.numberlbl.grid(row = 1, column= 0, columnspan = 2, sticky = 'w' ) self.muscle_chosen = IntVar() self.chest = Radiobutton(self, text = "chest", variable = self.muscle_chosen, value = 1, font = self.font) self.bicep = Radiobutton(self, text = "bicep", variable = self.muscle_chosen, value = 2, font = self.font) self.chest.grid(row = 2, column = 0) self.bicep.grid(row = 2, column = 1) self.exerciseslbl = Label(self, text = 'Please enter the number of exercises: ', font = self.font) self.exerciseslbl.grid(row = 3, column = 0, columnspan = 3) self.exercises_spinbox = Spinbox(self, from_= 1, to_= 50, width = 
5, font = self.font) self.exercises_spinbox.grid(row = 4, column = 0) self.back_button = Button(self, text = 'Back', command = self.new_user, width = 10) self.back_button.grid(row =5, column=0, pady =10) def see_entries(self): print("Goes through") def hide(self): if self.delete == 'b': self.new_entry.grid_remove() self.look_entry.grid_remove() elif self.delete == 'c': self.entry.grid_remove() self.numberlbl.grid_remove() self.chest.grid_remove() self.bicep.grid_remove() self.exerciseslbl.grid_remove() self.exercises_spinbox.grid_remove() self.back_button.grid_remove() def main(): app = App() app.mainloop() if __name__=="__main__": main() </code></pre>
<p>In your <code>entry</code> function you overwrite <code>self.entry</code>, which is the name of the method, with a reference to a <code>Label</code>. When the button then calls <code>self.entry</code>, it is no longer a function but a <code>Label</code>, so nothing happens.<br> Simply call the <code>Label</code> something else, e.g. <code>self.entry_label</code>.</p>
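A minimal, tkinter-free sketch of the same shadowing problem (the class and strings here are made up for illustration):

```python
class App:
    def entry(self):
        # Assigning to self.entry shadows the bound method: from now on,
        # attribute lookup on the instance finds this value first.
        self.entry = "pretend this is a Label"

app = App()
app.entry()              # first "click": the method still resolves and runs
shadowed = app.entry     # a second "click" would try to call this string
```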
python|python-3.x|tkinter
1
1,904,446
27,312,126
python array of decimals into binary numbers efficiently
<p>I made a code that reads 256x256 matrix in a file, convert them to binary numbers, read portions of them to make two other files of 256x256 matrix. I think it's doing what I want, but it takes several minutes to process one file. Any suggestions for more efficient code?</p> <p>Here's my code.</p> <pre><code>#Read in file f = open(path, 'r') l = [list(map(int, line.split(' '))) for line in f] a = np.array(l) A = a.ravel() #Convert to binary bnry = () for data in A: bnry = np.append(bnry, bin(data)) #Extract two data from each number and store them to 'THL' and 'THH' thl = () thh = () THL = () THH = () for ths in bnry: thl = int(ths[2:14], 2) THL = np.append(THL, thl) thh = int(ths[-12:], 2) THH = np.append(THH, thh) #Save 'THL' and 'THH' to text files np.savetxt('THL.txt', np.reshape(THL, (256,256)), fmt='%.4g', delimiter=' ') np.savetxt('THH.txt', np.reshape(THH, (256,256)), fmt='%.4g', delimiter=' ') </code></pre> <p>Solution Much thanks to Oliver W., I was able to get a much shorter code performing whole lot faster than my several minutes long code. They are basically</p> <pre><code>thl = np.bitwise_and(a, 2**12 - 1) thh = np.bitwise_and(a, (2**12-1) &lt;&lt; (12+4)) &gt;&gt; (12+4) </code></pre> <p>I still need to study more and familiarize with how they work, but they are working great!</p>
<p>You're trying to extract the first 12 bits of a series of integers and also the last 12. These are 2 distinct problems. The latter is the easiest.</p> <p>First, let's start by using <code>numpy</code>'s convenience methods to read in the data from the text file, rather than concocting our own (more likely less optimal) functions:</p> <pre><code>a = np.fromfile(file, dtype=np.uint32, sep=' ') # alternative: np.genfromtxt </code></pre> <p>Next, let's tackle the easy problem: extracting the lowest 12 bits from a:</p> <pre><code>THH = np.bitwise_and(a, 2**12 - 1) </code></pre> <p>This is a vectorized operation, so it's very efficient and blazingly fast.</p> <p>Getting the first 12 bits is a whole other problem however, as it depends on the numbers themselves. My guess is that this is not actually what you want to do, but that someone has asked you to extract the 12 "highest" bits from some integers, when they are interpreted as having a fixed number of bits. An explanation with "the 3 highest bits" will make it more clear:</p> <p>If you need the 3 highest bits of a number, let's say 20, you could interpret 20 as being written like <code>0b10100</code>, in which case the 3 highest bits are <code>101</code>, which is the decimal number 5. However, if you interpreted 20 as <code>0b0010100</code>, so with 2 leading zeroes, the 3 highest bits would be <code>001</code>, which is the decimal number 1. There aren't many operations that actually require you to take the length of the (smallest) binary representation into account, and I guess this is also why you say "I think it's doing what I want". </p> <p>So, more likely, your numbers are to be interpreted in binary format with a fixed length. Looking at the largest value you mention, 268374015, that length is probably 28 digits (and even that seems unlikely, 32 bits is more common). 
But, assuming it is 28 bits, and you want only to extract the first 12, you could use the same trick as before:</p> <pre><code>np.bitwise_and(a, (2**12 - 1)&lt;&lt;16) &gt;&gt; 16 </code></pre> <p>Note that <code>&gt;&gt;</code> and <code>&lt;&lt;</code> are <a href="https://wiki.python.org/moin/BitwiseOperators" rel="nofollow">bitshift operations</a>.</p> <p>I am making several assumptions about your data though (that you're dealing only with integers is clear, as you're converting everything with <code>int()</code>) and of course, your <em>real</em> problem.</p> <p>However, if you <em>really</em> want the first 12 bits from those numbers, starting from the most significant bit, then the following code shows you:</p> <ol> <li>a reimplementation of your for-loops, slightly more efficient (<code>method_old()</code>)</li> <li>a vectorized implementation of that algorithm (<code>method_new()</code>)</li> </ol> <p>And their execution times on my machine.</p> <pre><code>import numpy as np a = np.random.random_integers(0, 268374016, (512,512)) def method_new(nbr_select_bits=12): high_bits = np.empty_like(a) for bit in range(32, nbr_select_bits -1, -1): mask = a &lt; 2**bit shift = bit - nbr_select_bits bitmask = (2**nbr_select_bits - 1) &lt;&lt; shift high_bits[mask] = np.bitwise_and(a[mask], bitmask) &gt;&gt; shift low_bits = np.bitwise_and(a, 2**nbr_select_bits - 1) return high_bits, low_bits def method_old(nbr_select_bits=12): thl = np.empty(a.size, dtype=np.uint32) thh = np.empty(a.size, dtype=np.uint32) for enu, data in enumerate(a.ravel()): thl[enu] = int(bin(data)[2:2+nbr_select_bits], 2) try: thh[enu] = int(bin(data)[-nbr_select_bits:], 2) except ValueError: # because 'int("b11", 2)' fails thh[enu] = int(bin(data)[-(nbr_select_bits+1):], 2) return thl.reshape(a.shape), thh.reshape(a.shape) from timeit import timeit new_t = timeit('method_new()', setup='from __main__ import method_new', number=1) old_t = timeit('method_old()', setup='from __main__ import 
method_old', number=1) print('Old method ran in {:.2f}ms,\nNew method ran in {:.2f}ms.\n' 'Speedup: {:.1f}x'.format(old_t*1000, new_t*1000, old_t/new_t)) # Output: # Old method ran in 473.97ms, # New method ran in 60.18ms. # Speedup: 7.9x </code></pre>
python|python-3.x|binary|data-conversion
0
1,904,447
23,292,284
Accessing feinCMS urls in django templates
<p>I want to create a home page containing a list of feincms pages, each linked to its URL.</p> <p>I am trying</p> <pre><code>{% url page_obj.get_absolute_url %} </code></pre> <p>which is throwing an error:</p> <pre><code>Reverse for '/test/' with arguments '()' </code></pre> <p>The feincms page resides at localhost/test/.</p> <p>I have included the following in the urls.py:</p> <pre><code>urlpatterns += patterns('', url(r'', include('feincms.contrib.preview.urls')), url(r'', include('feincms.urls')), ) </code></pre>
<p><code>get_absolute_url()</code> already returns the final URL. There's no need to pass it to the <code>{% url %}</code> templatetag. Just do a variable substitution in your template:</p> <pre><code>&lt;a href="{{ page_obj.get_absolute_url }}"&gt; </code></pre>
python|django|feincms
1
1,904,448
23,054,398
Materialized path relationship in declarative SQLAlchemy
<p>I have a hierarchical categories model, where hierarchy is maintained using materialized path (one character per level):</p> <pre><code>class Category(Base): __tablename__ = 'categories' id = Column(SmallInteger, primary_key=True) path = Column(String, unique=True, nullable=False) # problematic relationship all_subcats = relationship('Category', lazy='dynamic', viewonly=True, primaryjoin=foreign(path).like(remote(path).concat('%'))) </code></pre> <p>When trying to define "all subcategories" relationship I run into a problem:</p> <pre><code>sqlalchemy.exc.ArgumentError: Can't determine relationship direction for relationship 'Category.all_subcats' - foreign key columns within the join condition are present in both the parent and the child's mapped tables. Ensure that only those columns referring to a parent column are marked as foreign, either via the foreign() annotation or via the foreign_keys argument. </code></pre> <p>SQLAlchemy is confused, because I'm joining on the <em>same column</em>. All examples I've managed to find always join on different columns.</p> <p>Is this sort of relationship possible at all? I want to query through this join, so custom @property is not acceptable.</p>
<p>Use the latest git master or version 0.9.5 or greater of SQLAlchemy. Then:</p> <pre><code>from sqlalchemy import * from sqlalchemy.orm import * from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Element(Base): __tablename__ = 'element' path = Column(String, primary_key=True) related = relationship('Element', primaryjoin= remote(foreign(path)).like( path.concat('/%')), viewonly=True, order_by=path) e = create_engine("sqlite://", echo=True) Base.metadata.create_all(e) sess = Session(e) sess.add_all([ Element(path="/foo"), Element(path="/foo/bar1"), Element(path="/foo/bar2"), Element(path="/foo/bar2/bat1"), Element(path="/foo/bar2/bat2"), Element(path="/foo/bar3"), Element(path="/bar"), Element(path="/bar/bat1") ]) e1 = sess.query(Element).filter_by(path="/foo/bar2").first() print [e.path for e in e1.related] </code></pre> <p>note that this model, whether you deal with "descendants" or "anscestors", uses collections. You want to keep <code>remote()</code> and <code>foreign()</code> together so that the ORM considers it as one to many.</p>
python|sqlalchemy|relationship|declarative|materialized-path-pattern
2
1,904,449
23,138,446
text being cut in pygame
<p>I seem to have problems with displaying text on the screen. The code draws text on the screen, but half of the 'S' in 'Score' gets cut off for some reason. However, if I change screen.blit(text, self.score_rect, self.score_rect) to screen.blit(text, self.score_rect), it works fine. I would like to know why this is happening and how I can fix it.</p> <p>Thanks.</p> <p>Here's the code:</p> <pre><code>class Score(object): def __init__(self, bg, score=100): self.score = score self.score_rect = pygame.Rect((10,0), (200,50)) self.bg = bg def update(self): screen = pygame.display.get_surface() font = pygame.font.Font('data/OpenSans-Light.ttf', 30) WHITE = (255, 255, 255) BG = (10, 10, 10) score = "Score: " + str(self.score) text = font.render(score, True, WHITE, BG) text.set_colorkey(BG) screen.blit( self.bg, self.score_rect, self.score_rect) screen.blit(text, self.score_rect, self.score_rect) def main(): pygame.init() #initialize pygame pygame.init() screen = pygame.display.set_mode((640, 480)) pygame.display.set_caption('Score Window') #initialize background bg = pygame.Surface((screen.get_size())).convert() bg.fill((30, 30, 30)) screen.blit(bg, (0, 0)) #initialize scoreboard score_board = Score(bg) while True: for event in pygame.event.get(): if event.type == QUIT: exit(0) score_board.update() pygame.display.flip() </code></pre>
<p>Well - it looks like the third parameter on the call to <code>blit</code>, where you repeat the <code>score_rect</code> parameter, is designed exactly to do that: it selects a rectangular area on the source image (in this case your rendered text) to be pasted in the destination (in this case, the screen).</p> <p>Text in Pygame is rendered with nice margins, so you should not need the source-crop parameter at all - and if you think you do, you should pass it a suitable set of coordinates, relevant inside the rendered text, not the rectangle with the destination coordinates on the screen.</p> <p>From <a href="http://www.pygame.org/docs/ref/surface.html#pygame.Surface.blit" rel="nofollow">http://www.pygame.org/docs/ref/surface.html#pygame.Surface.blit</a>:</p> <blockquote> <p>blit() draw one image onto another blit(source, dest, area=None, special_flags = 0) -> Rect Draws a source Surface onto this Surface. The draw can be positioned with the dest argument. Dest can either be pair of coordinates representing the upper left corner of the source. A Rect can also be passed as the destination and the topleft corner of the rectangle will be used as the position for the blit. The size of the destination rectangle does not effect the blit.</p> <p>An optional area rectangle can be passed as well. This represents a smaller portion of the source Surface to draw. ...</p> </blockquote>
python|pygame
0
1,904,450
23,352,073
Pandas accessing groupby column data
<p>I've got a dataframe that has been grouped by two columns, named <code>grouped</code>; let's say the headings look like this:</p> <pre><code> A, B, C, D, E, F IdxA, IdxB derp foo 1 5 6 3 2 1 derp bar 2 3 4 1 9 0 ... </code></pre> <p>For each IdxB I want to get a list of all the unique value pairs in cols E and F and a list of unique values from D. Currently I am using a loop that goes something like this: </p> <pre><code>for (IdxA, IdxB), tbl in grouped: pairValues = tbl[['E', 'F']].drop_duplicates() E_unique = tbl['D'].unique() print IdxB print E_unique for _, row in pairValues.iterrows(): print row['E'] + ' ' + row['F'] print </code></pre> <p>I feel like there is a better way to do this, but I'm a bit of a noob to Pandas... Is there a better way or did I do it a sufficiently "pythonic" way?</p> <p>Note: the cells actually contain text data not numbers, I just used numbers for simplicity.</p> <p>An example output: </p> <pre><code>IdxB Name (eg. foo) List of unique values belonging to IdxB (content is IP addresses) List of unique string pairs from ['E','F'] belonging to IdxB (content is strings) </code></pre> <p>Thanks very much</p>
<p>One starting point is to reset the index and then group by <code>IdxB</code>. Say your dataframe is called df:</p> <pre><code>def gimmeStuff(group): data = group.drop_duplicates(['E', 'F']) return data[['D', 'E', 'F']] df.reset_index(inplace=True) results = df.groupby('IdxB').apply(gimmeStuff) </code></pre> <p>Since there is no real data given from your side, I can't do a real test - there might be typos or so, but this is the way I would lay it down. This will give you a dataset indexed by <code>IdxB</code> containing the columns D, E, F. D will contain the same value repeatedly for every IdxB, and E,F will be the unique combinations.</p> <p><strong>Update</strong></p> <p>You can actually also group directly by the index level, if you don't want to reset the index:</p> <pre><code>results = df.groupby(level=1).apply(gimmeStuff) </code></pre>
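A runnable sketch of the same approach on made-up data shaped like the question's frame (IP-address strings in D, string pairs in E/F; all values are stand-ins):

```python
import pandas as pd

df = pd.DataFrame(
    {"D": ["10.0.0.1", "10.0.0.1", "10.0.0.2"],
     "E": ["x", "x", "y"],
     "F": ["p", "p", "q"]},
    index=pd.MultiIndex.from_tuples(
        [("derp", "foo"), ("derp", "foo"), ("derp", "bar")],
        names=["IdxA", "IdxB"]))

def gimme_stuff(group):
    # Keep one row per unique (E, F) pair within the group.
    return group.drop_duplicates(["E", "F"])[["D", "E", "F"]]

results = df.reset_index().groupby("IdxB").apply(gimme_stuff)
```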
python|pandas
0
1,904,451
8,259,976
Flask-SQLAlchemy: Photo column type
<p>In a web application I'm coding with Flask/SQLAlchemy, several of my models need a "Photo" column type, which would handle storing the original image somewhere on the filesystem, and creating different thumbnail sizes of the image. Ideally, I'd want something like:</p> <pre><code>class MyModel(Base): id = Column(Integer, primary_key=True) photo = Column(Photo(root="/path/to/photos/", formats={ "big" : "800x600", "small" : "400x300", "thumbnail": "100x75" })) </code></pre> <p>and then, I could access the URI/URL of the file like this: model.photo.big etc...</p> <p>So, my question is: how to add setters/getters on the model.photo object so that I can access URIs/URLs with the mentioned syntax? By the way, if someone has a good tutorial/resource (other than the official doc) on user defined types with SQLAlchemy, I would be grateful if he could share it.</p> <p>Thanks. </p>
<p>Have you looked at <a href="http://flask-uploads.readthedocs.org/en/latest/" rel="nofollow">Flask-Uploads</a>? It seems to be exactly what you were looking for.</p>
python|sqlalchemy|flask|flask-sqlalchemy
2
1,904,452
413,228
PyGreSQL vs psycopg2
<p>What is the difference between these two APIs? Which one is faster and more reliable when used through the Python DB API?</p> <p><strong>Update:</strong> I see two PostgreSQL drivers for Django. The first one is psycopg2. What is the second one? PyGreSQL?</p>
<p>For what it's worth, django uses psycopg2.</p>
python|postgresql
5
1,904,453
41,926,447
List sensors in ev3dev with python
<p>Using the python binding, it is possible to get a list of motors using list_motors(), however I cannot find a similar function for listing sensors.</p> <p>How would I list all the available sensors?</p>
<p>Using the code for list_motors, I have come up with the following function for listing sensors:</p> <pre><code>def list_sensors(name_pattern=ev3.Sensor.SYSTEM_DEVICE_NAME_CONVENTION, **kwargs): classpath = abspath(ev3.Device.DEVICE_ROOT_PATH + '/' + ev3.Sensor.SYSTEM_CLASS_NAME) return (ev3.Sensor(name_pattern=name, name_exact=True) for name in ev3.list_device_names(classpath, name_pattern, **kwargs)) </code></pre>
python
0
1,904,454
41,837,235
Python - Name parsing for email checking
<p>I'm trying to build a script that create different variations of a person's name to test its email. Basically, what I want the script to do is:</p> <ul> <li><p>If I input "John Smith" I need to get in return a list containing <code>[john, johnsmith, john.smith, john_smith, smith, jsmith, j.smith, smithj, smith.j, j_smith, smith_j,smithjohn, smith.john, smith_john, etc]</code></p></li> <li><p>If I input "John May Smith" I need to get in return a list containing <code>[john, johnmay, johnsmith, john.may, john.smith, john_may, john_smith, jmay, jsmith, j.may, j.smith, j_may, j_smith, johnmaysmith, john.may.smith, john_may_smith, jms, johnms, john.m.s, john_m_s, jmsmith, j.m.smith, j_m_smith, j.m.s, j_m_s, jmays, j.may.s, j_may_s, etc]</code>. Technically, it would be three lists with name parts: <code>[j, john][m, may][s, smith]</code> that would mix in different orders and the parts could be separated or not by "." or "_".</p></li> <li><p>John Smith and John May Smith are only examples, I should be able to enter any name, decompose it and mix its parts, initials and separators ('.' and '_').</p></li> </ul> <p>To decompose a name I'm using the following:</p> <pre><code>import nameparser name="John May Smith" name=nameparser.HumanName(name) parts=[] for i in name: j=[i[0],i] parts.append(j) </code></pre> <p>This way <code>parts</code> gets like this:</p> <pre><code>[['j', 'john'], ['m', 'may'], ['s', 'smith']] </code></pre> <p>Note that the list in this case has three sublists, however it could have been 2, 4, 5 or 6.</p> <p>I created another list called separators:</p> <pre><code>separators=['.','_'] </code></pre> <p>My question is: What is the best way to mix those lists to create a list of possible email local-parts* as described in the example above? I've been burning my brain to find a way to do it for a few days but haven't been able to.</p> <p>*Local-part is what comes before the @ (in jmaysmith@apple.com, the local part would be "jmaysmith").</p>
<p>the following code should do what you want</p> <pre class="lang-py prettyprint-override"><code>from nameparser import HumanName from itertools import product, chain, combinations def name_combinations(name): name=HumanName(name) parts=[] ret=[] for i in name: j=[i[0].lower(),i.lower()] ret.append(i.lower()) parts.append(j) separators=['','.','_'] for r in range(2,len(parts)+1): for c in combinations(parts,r): ret = chain(ret,map(lambda l: l[0].join(l[1:]),product(separators,*c))) return ret print(list(name_combinations(name))) </code></pre> <p>In your examples I have not seen <code>jms</code>, <code>j.s</code> or <code>js</code> in your examples. If that is intentional feel free to clarify what should be excluded.</p> <p>For reference: The output is</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; print(list(name_combinations("John Smith"))) ['john', 'smith', 'js', 'jsmith', 'johns', 'johnsmith', 'j.s', 'j.smith', 'john.s', 'john.smith', 'j_s', 'j_smith', 'john_s', 'john_smith'] &gt;&gt;&gt; print(list(name_combinations("John May Smith"))) ['john', 'may', 'smith', 'jm', 'jmay', 'johnm', 'johnmay', 'j.m', 'j.may', 'john.m', 'john.may', 'j_m', 'j_may', 'john_m', 'john_may', 'js', 'jsmith', 'johns', 'johnsmith', 'j.s', 'j.smith', 'john.s', 'john.smith', 'j_s', 'j_smith', 'john_s', 'john_smith', 'ms', 'msmith', 'mays', 'maysmith', 'm.s', 'm.smith', 'may.s', 'may.smith', 'm_s', 'm_smith', 'may_s', 'may_smith', 'jms', 'jmsmith', 'jmays', 'jmaysmith', 'johnms', 'johnmsmith', 'johnmays', 'johnmaysmith', 'j.m.s', 'j.m.smith', 'j.may.s', 'j.may.smith', 'john.m.s', 'john.m.smith', 'john.may.s', 'john.may.smith', 'j_m_s', 'j_m_smith', 'j_may_s', 'j_may_smith', 'john_m_s', 'john_m_smith', 'john_may_s', 'john_may_smith'] </code></pre>
python|string|parsing|combinations
1
1,904,455
41,876,734
How to implement a static counter in python
<p>I am calling a method and I need a static counter within this method. It's required to parse the elements of the list. The counter will tell which position of the list to look up.</p> <p>For example:</p> <pre><code>static_var_with_position = 0 noib_list = [3, 2, 2, 2, 2, 1, 2, 2] def foo(orig_output, NOB): # tried two ways #static_var_with_position += 1 # doesn't work #global static_var_with_position #static_var_with_position += 1 # doesn't work either bit_required = noib_list[static_var_with_position] converted_output = convert_output(orig_output, NOB, bit_required) </code></pre> <p>The static_var_with_position value is never incremented. I have commented out the two ways I tried to increment the value.</p> <p>In C++ it's a piece of cake, but I couldn't find anything similar in Python so far. Any help will be appreciated :)</p> <p>Thanks!</p>
<p>Instead of using a global/static counter variable, you could use an <a href="https://docs.python.org/3.5/library/functions.html#iter" rel="nofollow noreferrer"><code>iterator</code></a>:</p> <pre><code>iterator = iter(noib_list) def foo(orig_output, NOB): bit_required = next(iterator) converted_output = convert_output(orig_output, NOB, bit_required) </code></pre> <p>The iterator will automatically keep track of the <a href="https://docs.python.org/3.5/library/functions.html#next" rel="nofollow noreferrer"><code>next</code></a> element internally.</p> <p>When the iterator is exhausted (i.e. when you have reached the end of the list), <code>next</code> will raise a <code>StopIteration</code> error, so if you do not know when the end is reached, you can use <code>bit_required = next(iterator, None)</code> instead; then just test whether the value is <code>None</code> and you know that the list is exhausted (or just use <code>try/except</code>).</p>
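A runnable sketch of the sentinel variant mentioned above, draining the list until <code>next(iterator, None)</code> signals exhaustion:

```python
noib_list = [3, 2, 2, 2, 1]
iterator = iter(noib_list)

drained = []
while True:
    bit_required = next(iterator, None)  # None means the list is exhausted
    if bit_required is None:
        break
    drained.append(bit_required)

print(drained)  # [3, 2, 2, 2, 1]
```

Every further `next(iterator, None)` keeps returning `None`, so the check is safe to repeat.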
python|static
2
1,904,456
42,087,695
Error in dividing the columns and adding the result into a new column
<p>I have a dataframe 'keep_df' as shown:</p> <pre><code> DateTime Modeled Flow(cfs) Observed Flow(cfs) Seconds Event Event 1 2016-08-15 15:35:00 11.85926 0.0 300.00 Event 1 2016-08-15 10:05:00 30.05923 0.0 300.00 Event 1 2016-08-15 10:00:00 31.10118 0.0 300.00 Event 1 2016-08-15 09:55:00 32.17444 0.0 300.00 Event 1 2016-08-15 09:50:00 33.25405 0.0 300.00 </code></pre> <p>I want to create new columns obtained by dividing the Modeled Flow (cfs) and Observed Flow (cfs) columns by the Seconds column, as shown below:</p> <pre><code>keep_df['Modelled Volume(f3)'] = keep_df['Modeled Flow(cfs)']/keep_df['Seconds'] keep_df['Observed Volume(f3)'] = keep_df['Observed Flow(cfs)']/keep_df['Seconds'] keep_df() </code></pre> <p>But after running the above I'm getting an error like this:</p> <pre><code>Type error: 'Dataframe' object is not callable </code></pre> <p>Since the 'Seconds' column holds a constant value, I tried this:</p> <pre><code>keep_df['Modelled Volume(f3)'] = 'Modeled Flow (cfs)'/300.00 keep_df['Observed Volume(f3)'] = 'Observed Flow (cfs)'/300.00 keep_df() </code></pre> <p>But I'm still getting this error:</p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-91-b268c3093f01&gt; in &lt;module&gt;() ----&gt; 1 keep_df['Modelled Volume(f3)'] = 'Modeled Flow (cfs)'/300.00 2 keep_df['Observed Volume(f3)'] = 'Observed Flow (cfs)'/300.00 3 keep_df() TypeError: unsupported operand type(s) for /: 'str' and 'float' </code></pre> <p>What could I possibly do?</p>
<p>The <code>TypeError: 'DataFrame' object is not callable</code> comes from the last line, <code>keep_df()</code> — a DataFrame is not a function, so display it as <code>keep_df</code> without parentheses. For the division you can also use <code>.div</code>:</p> <pre><code>keep_df['Modelled Volume(f3)'] = keep_df['Modeled Flow(cfs)'].div(keep_df['Seconds']) keep_df['Observed Volume(f3)'] = keep_df['Observed Flow(cfs)'].div(keep_df['Seconds']) keep_df </code></pre> <p>Also, for the second part, divide the column itself rather than its name as a string:</p> <pre><code>keep_df['Modelled Volume(f3)'] = keep_df['Modeled Flow(cfs)']/300.00 keep_df['Observed Volume(f3)'] = keep_df['Observed Flow(cfs)']/300.00 keep_df </code></pre>
python|pandas
2
1,904,457
47,403,047
migration errors for project using django-csvimport
<p>I am having trouble moving my Django 1.11 app to production.</p> <p>There's another question that looks like a similar issue, but I am unable to make the answers suggested in the comments work as desired: <a href="https://stackoverflow.com/questions/46918822/django-import-errors-for-app-csvimport">Django import errors for app csvimport</a></p> <p>if I comment out the code in settings.py to remove the django-csvimport library stuff like so:</p> <pre><code>INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'rest_framework', # 'csvimport.app.CSVImportConf', 'custom_app_name', ] </code></pre> <p>then my migrations work fine, and the app runs(sans csvimport app). Then if I comment the csvimport APP line back in and run the migrations, they fail with the following:</p> <pre><code>Operations to perform: Apply all migrations: admin, auth, contenttypes, csvimport, sessions, custom_app_name Running migrations: Applying csvimport.0002_test_models...Traceback (most recent call last): File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute return self.cursor.execute(sql, params) psycopg2.ProgrammingError: syntax error at or near "csvtests_country" LINE 1: ...CONSTRAINT "csvtests_item_country_id_5f8b06b9_fk_"csvtests_c... 
^ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "manage.py", line 22, in &lt;module&gt; execute_from_command_line(sys.argv) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line utility.execute() File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/__init__.py", line 356, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/base.py", line 283, in run_from_argv self.execute(*args, **cmd_options) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/base.py", line 330, in execute output = self.handle(*args, **options) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/core/management/commands/migrate.py", line 204, in handle fake_initial=fake_initial, File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/migrations/executor.py", line 115, in migrate state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/migrations/executor.py", line 145, in _migrate_all_forwards state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/migrations/executor.py", line 244, in apply_migration state = migration.apply(state, schema_editor) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/base/schema.py", line 93, in __exit__ self.execute(sql) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/base/schema.py", line 120, in execute cursor.execute(sql, params) File 
"/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/utils.py", line 80, in execute return super(CursorDebugWrapper, self).execute(sql, params) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute return self.cursor.execute(sql, params) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/utils.py", line 94, in __exit__ six.reraise(dj_exc_type, dj_exc_value, traceback) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/utils/six.py", line 685, in reraise raise value.with_traceback(tb) File "/home/www-root/envs/django_env_1/lib/python3.4/site-packages/django/db/backends/utils.py", line 65, in execute return self.cursor.execute(sql, params) django.db.utils.ProgrammingError: syntax error at or near "csvtests_country" LINE 1: ...CONSTRAINT "csvtests_item_country_id_5f8b06b9_fk_"csvtests_c... </code></pre> <p>Any advice would be very helpful.<br> Thanks!</p>
<p>So I was able to fix this issue based on the stack trace error which read:</p> <pre><code>django.db.utils.ProgrammingError: syntax error at or near "csvtests_country" LINE 1: ...CONSTRAINT "csvtests_item_country_id_5f8b06b9_fk_"csvtests_c... </code></pre> <p>The issue was that there was a piece of generated code which added double quotes inside of single quotes, like so:</p> <pre><code> options={ 'managed': True, 'db_table': '"csvtests_country"', }, </code></pre> <p>To find the migration I went to my virtual environment's python install:</p> <pre><code>django_env/lib/python3.4/site-packages/csvimport/migrations/ </code></pre> <p>and removed the double quotes based on the fact that none of the other 'db_table' : 'sometable_name' calls used double-quotes. I guess this is a bug that I'll be reporting to the csvimport github page. I will post any updates to this question if I find more information. </p> <p>Hopefully this helps someone in a similar situation.</p> <p>Thanks!</p> <hr> <p>UPDATE:</p> <p>I did not see this in the issue list from the csvimport github page before. It appears to be a known bug: <a href="https://github.com/edcrewe/django-csvimport/issues/77" rel="nofollow noreferrer">https://github.com/edcrewe/django-csvimport/issues/77</a></p>
python|django|python-3.x
2
1,904,458
58,535,825
Combine overlapping ranges of numbers
<p>I need to combine overlapping ranges of numbers into a single range. So I have a list with sub-lists of something like:</p> <pre><code>[[83,77],[103,97],[82,76],[101,95],[78,72],[97,91],[72,66],[89,83],[63,57],[78,72],[53,47],[65,59],[41,35],[50,44],[28,22],[34,28],[14,8],[16,10]] </code></pre> <p>So 83 to 77 overlaps 82 to 76 and they will become 76 to 83. If any other range overlaps this range, it would add its minimum or maximum to this range, and when no others overlap, the method should go to the next one in the list and try to merge that with its overlaps. </p> <p>I hope this makes sense.</p>
<p>Use an IntervalTree <a href="https://en.wikipedia.org/wiki/Interval_tree" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Interval_tree</a></p> <p>There is an implementation available in python:</p> <pre><code>pip install intervaltree </code></pre> <pre><code>import intervaltree intervals = [ [77, 83], [97, 103], [76, 82], [95, 101], [72, 78], [91, 97], [66, 72], [83, 89], [57, 63], [72, 78], [47, 53], [59, 65], [35, 41], [44, 50], [22, 28], [28, 34], [8, 14], [10, 16], ] tree = intervaltree.IntervalTree.from_tuples(intervals) print(tree) tree.merge_overlaps() print(tree) tree.merge_overlaps(strict=False) print(tree) </code></pre> <p>note that I had to make your points be <code>(start, end)</code> instead of <code>(end, start)</code>.</p> <pre><code>IntervalTree([Interval(8, 14), Interval(10, 16), Interval(22, 28), Interval(28, 34), Interval(35, 41), Interval(44, 50), Interval(47, 53), Interval(57, 63), Interval(59, 65), Interval(66, 72), Interval(72, 78), Interval(76, 82), Interval(77, 83), Interval(83, 89), Interval(91, 97), Interval(95, 101), Interval(97, 103)]) </code></pre> <p>is merged to</p> <pre><code>IntervalTree([Interval(8, 16), Interval(22, 28), Interval(28, 34), Interval(35, 41), Interval(44, 53), Interval(57, 65), Interval(66, 72), Interval(72, 83), Interval(83, 89), Interval(91, 103)]) </code></pre> <p>and with <code>strict=False</code> allowing touching intervals to be merged</p> <pre><code>IntervalTree([Interval(8, 16), Interval(22, 34), Interval(35, 41), Interval(44, 53), Interval(57, 65), Interval(66, 89), Interval(91, 103)]) </code></pre>
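If pulling in <code>intervaltree</code> is not an option, the same merge can be done over (start, end) pairs with a plain sort-and-sweep — a minimal sketch; the <code>strict</code> flag mirrors <code>merge_overlaps</code> (with <code>strict=False</code>, touching intervals merge too):

```python
def merge_overlaps(intervals, strict=True):
    # Sort by start, then sweep left to right, extending the last merged
    # interval whenever the next one overlaps (or touches, if not strict) it.
    merged = []
    for start, end in sorted(intervals):
        touches = merged and (start < merged[-1][1]
                              or (not strict and start <= merged[-1][1]))
        if touches:
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(pair) for pair in merged]

print(merge_overlaps([(77, 83), (76, 82), (95, 101), (97, 103)]))
# [(76, 83), (95, 103)]
```

This is O(n log n) from the sort, versus building a full interval tree, and needs no third-party package.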
python|numbers|range|overlap
2
1,904,459
33,686,452
Sorting a list based on date in mongodb using python and tornado
<p>I am trying to sort a list of students from a table in MongoDB. I am using Tornado and Python, with <code>MotorClient</code> to connect to the db. I get a proper result when using <code>students.find()</code>:</p> <pre><code>Sid = self.body['Sid'] data = [] _id = db.students.find({"Sid": Sid},{'_id': False,"status": False,"Dateofadmission":False}) for document in (yield _id.to_list(length=100)): data.append(document) return[{"status code": 1,"studentInfo": data }] </code></pre> <p>Now when I try to sort and list, it gives me an internal server error, with no error logged in the terminal.</p> <pre><code>_id = db.students.find({"Sid": Sid},{'_id': False,"status": False,"Dateofadmission":False}).sort({'Dateofadmission' : -1}) </code></pre> <p>The date is stored in MongoDB as:</p> <pre><code>{ "_id" : ObjectId("56443dc03f32df1bf0e8b4e8"), "Dateofadmission" : ISODate("2015-10-22T00:00:00Z"), "Sid" : "56443dc03f32df1bf0e8b4e8", "Name" : "Ram" } </code></pre> <p>Someone please guide me on how I can sort the list of students based on <code>Dateofadmission</code>.</p>
<p><a href="http://api.mongodb.org/python/current/api/pymongo/cursor.html?_ga=1.182210983.1014514819.1447393203#pymongo.cursor.Cursor.sort" rel="nofollow"><code>sort()</code></a> in <code>pymongo</code> takes two parameters - a key (or a list of keys) and the direction. Replace:</p> <pre><code>db.students.find({"Sid": Sid}, {'_id': False,"status": False,"Dateofadmission":False}).sort({'Dateofadmission' : -1}) </code></pre> <p>with:</p> <pre><code>db.students.find({"Sid": Sid}, {'_id': False,"status": False,"Dateofadmission":False}).sort('Dateofadmission', pymongo.DESCENDING) </code></pre>
python|mongodb|sorting|tornado|tornado-motor
3
1,904,460
33,906,535
Convert python to objective-c
<p>So I'm trying to convert some Python to its Objective-C equivalent, but I'm not having much luck.</p> <p>The Python code is as follows:</p> <pre><code>def get_next_guess(passwords): scores = {} for candidate in passwords: results = [] remainder = (x for x in passwords if x != candidate) for correct_pw in remainder: no_alternatives = len(refine(remainder, candidate, distance(candidate, correct_pw))) print(no_alternatives) results.append(no_alternatives) scores[candidate] = max(results) print(scores) return min(scores, key = lambda x: scores[x]) </code></pre> <p>And my current Objective-C code is:</p> <pre><code>void get_next_guess(NSMutableArray * passwords) { NSMutableDictionary * scores = [NSMutableDictionary new]; for (NSString* candidate in passwords) { NSMutableArray * results = [NSMutableArray new]; NSMutableArray * remainder = [NSMutableArray new]; for (NSString * x in passwords) { if (x != candidate) { [remainder addObject:x]; } } for (NSString * correct_pw in remainder) { NSUInteger no_alternatives = [refine(remainder, candidate, distance(candidate, correct_pw)) count]; NSNumber *n = [NSNumber numberWithInteger:no_alternatives]; [results addObject:n]; } NSArray *sorted_Array = [results sortedArrayUsingDescriptors: @[[NSSortDescriptor sortDescriptorWithKey:@"intValue" ascending:YES]]]; [scores setObject:[sorted_Array lastObject] forKey:candidate]; } NSLog(@"table: %@", scores); } </code></pre> <p>I appreciate that the Objective-C code is very rough; I'm just trying to get something that will work. The code is part of a puzzle solver I'm trying to create. 
</p> <p>I suspect the (main) problem is around the obj-c version of:</p> <pre><code>remainder = (x for x in passwords if x != candidate) </code></pre> <p>The Obj-c version returns this:</p> <pre><code>table: { COELOMS = 4; HOLLOES = 4; MYOLOGY = 5; PADLOCK = 5; PARTONS = 4; PILINGS = 6; POMPONS = 5; PRECESS = 6; PROSECT = 4; SALLOWS = 4; TOOLERS = 5; TROILUS = 6; } </code></pre> <p>And the Python version returns this:</p> <pre><code>{'PARTONS': 3, 'HOLLOES': 3, 'PADLOCK': 4, 'TOOLERS': 4, 'COELOMS': 3, 'PROSECT': 3, 'MYOLOGY': 4, 'PRECESS': 0, 'TROILUS': 5, 'SALLOWS': 3, 'PILINGS': 4, 'POMPONS': 2} </code></pre> <p>(the Python output being correct)</p>
<p>Change <code>if (x != candidate)</code> to <code>if (![x isEqualToString: candidate])</code>.</p>
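The underlying issue — comparing object identity instead of contents — can be illustrated in Python too, where <code>==</code> compares contents and <code>is</code> compares identity (the Objective-C <code>!=</code> on <code>NSString *</code> pointers behaves like <code>is not</code>):

```python
a = 'bob'
b = ''.join(['b', 'o', 'b'])  # equal contents, built as a separate object

print(a == b)  # True: compares contents, like isEqualToString:
print(a is b)  # identity, like comparing raw pointers; typically False here
```

That is why the Objective-C loop with `x != candidate` never filtered anything out: the two `NSString` pointers differed even when the strings matched.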
python|objective-c
0
1,904,461
47,045,220
How to merge two lists into dictionary without using nested for loop
<p>I have two lists:</p> <pre><code>a = [0, 0, 0, 1, 1, 1, 1, 1, .... 99999] b = [24, 53, 88, 32, 45, 24, 88, 53, ...... 1] </code></pre> <p>I want to merge those two lists into a dictionary like: </p> <pre><code>{ 0: [24, 53, 88], 1: [32, 45, 24, 88, 53], ...... 99999: [1] } </code></pre> <p>A solution might be using a <code>for</code> loop, which does not look elegant, like:</p> <pre><code>d = {} for i in range(len(list_a)): if list_a[i] in d: d[list_a[i]].append(list_b[i]) else: d[list_a[i]] = [list_b[i]] </code></pre> <p>Though this works, it's inefficient and takes too much time when the lists are extremely large. What are more elegant ways to construct such a dictionary? </p> <p>Thanks in advance!</p>
<p>You can use a <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="noreferrer">defaultdict</a>:</p> <pre><code>from collections import defaultdict d = defaultdict(list) list_a = [0, 0, 0, 1, 1, 1, 1, 1, 9999] list_b = [24, 53, 88, 32, 45, 24, 88, 53, 1] for a, b in zip(list_a, list_b): d[a].append(b) print(dict(d)) </code></pre> <p>Output:</p> <pre><code>{0: [24, 53, 88], 1: [32, 45, 24, 88, 53], 9999: [1]} </code></pre>
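The same grouping also works with a plain <code>dict</code> and <code>setdefault</code>, avoiding the import:

```python
list_a = [0, 0, 0, 1, 1, 1, 1, 1, 9999]
list_b = [24, 53, 88, 32, 45, 24, 88, 53, 1]

d = {}
for a, b in zip(list_a, list_b):
    # setdefault inserts an empty list the first time a key is seen
    d.setdefault(a, []).append(b)

print(d)  # {0: [24, 53, 88], 1: [32, 45, 24, 88, 53], 9999: [1]}
```

`defaultdict` is usually preferable on large inputs, since `setdefault` builds a throwaway empty list on every iteration.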
python|list|dictionary
34
1,904,462
46,952,187
Make_Password returning different hash value for same password
<p>I don't want to use Django's default user table, so I created a user table with a username and hashed password. I hashed the password with make_password() when posting to the database, but when I tried retrieving information from the database using the same method to hash the user's input password, I got a different hash for the same password. Below is the view code for saving the user and retrieving the details.</p> <p>view.py</p> <pre><code>class HMUser(APIView): def post(self, request): request.data['password'] = make_password(request.data['password']) print(request.data['password']) serialize = serializers.HMUserSerializer(data=request.data) if serialize.is_valid(): serialize.save() return Response(1, status=status.HTTP_201_CREATED) return Response(serialize.errors, status=status.HTTP_400_BAD_REQUEST) class Login(APIView): def get(self, request, username, password): print("pass ", password) userpassword = make_password(password) print("Hash", userpassword) user_details = models.HMUser.objects.get(username=username, password=userpassword) serialize = serializers.HMUserSerializer(user_details) return Response(serialize.data) </code></pre>
<p>A salt mismatch is your problem.</p> <p>A salt is a random value used in addition to a password to make hashing more secure. Since no salt is specified, <code>make_password()</code> generates a random one at runtime, producing a different hash each time for the same password.</p> <p>The ideal way is to use <code>check_password()</code>, which is much friendlier.</p> <p>A workaround is to specify your own salt, something like <code>make_password(salt='mySalt', password=realPassword)</code>.</p> <p>You need to ensure that the salt used when creating the password and when verifying it is the same.</p>
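To see the salt effect in isolation, here is a stdlib-only illustration (not Django's exact hasher — Django's default PBKDF2 hasher stores the salt alongside the hash, which is what <code>check_password()</code> uses to re-hash correctly):

```python
import hashlib
import os

def hash_pw(password, salt):
    # PBKDF2-HMAC, the same family as Django's default password hasher.
    return hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 10000).hex()

pw = 'secret'

# Fresh random salt each call: identical passwords, different hashes.
h1 = hash_pw(pw, os.urandom(16))
h2 = hash_pw(pw, os.urandom(16))
print(h1 == h2)  # False (different salts, with overwhelming probability)

# Same salt: hashes match -- which is why a DB lookup by hashed password
# only works if the salt is stored and reused.
salt = os.urandom(16)
print(hash_pw(pw, salt) == hash_pw(pw, salt))  # True
```

This is why `objects.get(password=make_password(password))` can never match: each `make_password` call without an explicit salt produces a fresh one.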
python|django
1
1,904,463
37,798,740
Django Celery: Model object does not exists within the celery's task (ATOMIC_REQUESTS=False)
<p>I am getting a <code>MyModel matching query does not exist.</code> error while fetching an object which I create before entering the Celery task. I am calling the task from within my <code>APIView</code>.</p> <pre><code>my_model_obj = MyModel(x=1, y=2) my_model_obj.save() my_celery_task.delay(my_model_obj.id) </code></pre> <p>Within my task function, I am doing:</p> <pre><code>@task() def my_celery_task(my_model_id): MyModel.objects.get(id=my_model_id) </code></pre> <p>I do not have the <code>ATOMIC_REQUESTS</code> param in my Django <code>DATABASE</code> configuration, so by default it should be False.</p> <p>I believe this is because Django releases control of the model object even before the data has actually been saved to the DB. This is an intermittent issue: sometimes it happens, and sometimes it works fine.</p> <p>Earlier I had a similar issue in which I was updating the values of a model object but the updated values were not reflected within the Celery task. To make that work, I added a 10-second delay. But this time I am looking for a permanent solution. Is there some way to fix this? I haven't found any configuration param in either Django's or Celery's configuration to deal with this kind of behavior.</p>
<p>The issue was because I was using <code>TransactionMiddleware</code>, and it does similar thing as <code>@transaction.commit_on_success</code> decorator. If you want to keep using <code>TransactionMiddleware</code>, you should look into using <code>@transaction.autocommit</code> decorator in your views with celery tasks, or <code>@transaction.commit_manually</code></p>
python|django|django-orm|django-celery|celery-task
0
1,904,464
37,983,188
Receiving onesignal.com notifications
<p>I would like my program to receive 3rd party notifications sent using the OneSignal (<a href="http://onesignal.com" rel="nofollow">http://onesignal.com</a>) platform.</p> <p>Is there any (Python) client for it?</p>
<p>OneSignal push notifications work on Android, iOS, Amazon FireOS, and Windows Phone. There isn't a Python client side library as this isn't normally used for mobile app development.</p>
python|push-notification|onesignal
1
1,904,465
38,002,285
Why XORing two hex strings produces an output in the following format: '\x02\x07\x01P\x00\x07'
<p>The ultimate purpose of this program is to get the words Eve and Bob back in plain text from XORed hex strings. So first I got Eve and Bob in hex, then XORed them together, hoping to then see the result in hex. The problem is that when I XOR the hex strings for Bob and Eve I get this notation, '\x02\x07\x01P\x00\x07', and as a beginner I have no idea what this means. What is that notation? </p> <p>The plan is to later work my way backwards from the XORed Eve and Bob and get each name in plain text separately. </p> <p>My simple code so far is: </p> <pre><code>a = 'bob' b = 'Eve' g = a.encode("hex") v = b.encode("hex") def strxor(a, b): if len(a) &gt; len(b): return "".join([chr(ord(x) ^ ord(y)) for (x, y) in zip(a[:len(b)], b)]) else: return "".join([chr(ord(x) ^ ord(y)) for (x, y) in zip(a, b[:len(a)])]) k = strxor(g, v) </code></pre>
<p>This is basic XOR encryption; in effect <code>name2</code> works like an encryption/decryption key:</p> <pre><code>import itertools name1 = "eve online" name2 = "bob" xor_encrypted = "".join(chr(ord(a)^ord(b)) for a,b in zip(name1,itertools.cycle(name2))) print repr(xor_encrypted) # 3 characters will look something like '\x07\x19\x07' xor_decrypted = "".join(chr(ord(a)^ord(b)) for a,b in zip(xor_encrypted,itertools.cycle(name2))) print repr(xor_decrypted) # should now print our decrypted 'eve' </code></pre>
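As for what <code>'\x02\x07\x01P\x00\x07'</code> means: it is simply the repr of an ordinary string whose characters are mostly non-printable control characters, which Python displays as <code>\xNN</code> hex escapes (printable ones, like <code>P</code>, are shown as-is). The question's code XORs the hex <em>strings</em> character by character, which reproduces that exact value:

```python
g = '626f62'  # 'bob'.encode('hex')
v = '457665'  # 'Eve'.encode('hex')

# XOR the hex digits as characters -- what the original strxor does.
out = ''.join(chr(ord(x) ^ ord(y)) for x, y in zip(g, v))
print(repr(out))  # '\x02\x07\x01P\x00\x07'
```

Because XOR is its own inverse, XORing `out` with `v` again gives back `g`, i.e. 'bob' in hex — which is how the names can be recovered later.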
python|xor
0
1,904,466
37,837,679
python execute exe file with script, enter username, password, etc
<p>I want to call an exe file and supply parameters/input data up front.</p> <pre><code>cmd = dir_path + 'file.exe do something test' p = Popen(cmd, shell=True, stdout=PIPE, stderr=STDOUT) </code></pre> <p>This is already working fine. If you run the exe file directly, you would press Enter to execute the next step, which would be entering your username, then password, and so on. Now I don't know how to implement this in my script so that these steps are all done by the script. Basically I don't know how to do the ENTER step with Python. Thanks for any help. </p> <p><strong>UPDATE</strong></p> <p>I have now tried 2 ways (all mentioned within this question or others on Stack Overflow).</p> <p><em>way 1&gt;</em> </p> <pre><code>p = subprocess.Popen(cmd, shell=False, stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE, universal_newlines=True) # cmd represents the path of the exe file p.stdin.write(username+"\n") p.stdin.write(password+"\n") p.stdin.close() </code></pre> <p><em>way 2</em></p> <pre><code>p = subprocess.Popen(cmd, shell=False, stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE, universal_newlines=True) out,err = p.communicate("\n".join([username,password,"\n"])) # last \n should terminate the exe file and communicate should wait for the process to exit </code></pre> <p>Both ways present me with the same output in one line:</p> <blockquote> <p>Username: Password:</p> </blockquote> <p>Normally I hoped it would be:</p> <blockquote> <p>Username: "username written by the process + \n" <br> Password: "password written by the process + \n"</p> </blockquote> <p>The application is running on Windows 7 (it has to be) and the exe file runs within the command line.</p> <p>So is it possible that the exe does not support <em>way 1</em> and <em>way 2</em>, so it's not possible to write to it through a subprocess, or am I making a major mistake?!</p> <p>Thanks again for any help.</p>
<p>You're almost there!</p> <p>You can do the following:</p> <pre><code>import subprocess p = subprocess.Popen(cmd, stdin=subprocess.PIPE, shell=True) p.communicate(input='\r') </code></pre> <p>You may need to use \n instead of \r if there are any problems. \r is a carriage return, and \n is a newline.</p>
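A self-contained demo of this pattern, using a tiny inline Python child process standing in for the real exe (an assumption — the real program must actually read its username/password from stdin for this to work; some tools read from the console device directly, in which case piping cannot reach them):

```python
import subprocess
import sys

# Child that prompts for a username and a password on stdin.
child = (
    "u = input('Username: ')\n"
    "p = input('Password: ')\n"
    "print('logged in as', u)\n"
)

proc = subprocess.Popen(
    [sys.executable, '-c', child],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE,
    universal_newlines=True,
)
out, _ = proc.communicate('alice\nhunter2\n')
print(out)  # Username: Password: logged in as alice
```

Note the prompts run together exactly as in the question's "Username: Password:" output — a child reading from a pipe never echoes what it receives, so the single-line output does not mean the writes failed.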
python|pipe|subprocess|popen
3
1,904,467
30,129,109
Array of class objects in Python 3.4
<p>How do I declare an array of class objects in Python 3.4? In C++ I can do it easily this way: </p> <pre><code>class Segment { public: long int left, right; Segment() { left = 0; right = 0; } void show_all() { std::cout &lt;&lt; left &lt;&lt; " " &lt;&lt; right &lt;&lt; endl; } }; int main() { const int MaxN = 10; Segment segment[MaxN]; for(int i = 0; i &lt; MaxN; i++) { std::cin &gt;&gt; segment[i].left; std::cin &gt;&gt; segment[i].right; } } </code></pre> <p>In Python I have almost the same, but cannot find a way to create a list of the class's objects and iterate through it as in C++.</p> <pre><code>class Segment: def __init__(self): self.left = 0 self.right = 0 def show_all(self): print(self.left, self.right) segment = Segment() </code></pre> <p>So how do I make such a list?</p>
<p>Just create a list.</p> <pre><code>segments = [Segment() for i in range(MaxN)] for seg in segments: seg.left = input() seg.right = input() </code></pre>
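A fuller sketch with the class from the question, using hard-coded values in place of <code>input()</code>:

```python
class Segment:
    def __init__(self):
        self.left = 0
        self.right = 0

    def show_all(self):
        print(self.left, self.right)

MaxN = 3
# One independent object per slot. Note that [Segment()] * MaxN would
# instead create MaxN references to a single shared Segment.
segments = [Segment() for _ in range(MaxN)]

for i, seg in enumerate(segments):
    seg.left, seg.right = i, i + 10

for seg in segments:
    seg.show_all()
```

The list-comprehension form is the Python analogue of `Segment segment[MaxN]` — it calls the constructor once per element.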
python|arrays|list
4
1,904,468
57,228,496
How to do action every X iterations until infinity in Python?
<p>Consider the following code:</p> <pre><code>i = 0 while True: # Do some other stuff first. # Check if this iteration is after 7. i += 1 if i % 7 == 0: print 'Factor of 7' </code></pre> <p>This works fine, but the counter <code>i</code> will become a massive number sooner or later. Is there a better way to do something every X (in the above example, every 7) iterations over the long term, so that we don't have to store huge numbers? I have thought of the following:</p> <pre><code>i = 0 while True: # Do some other stuff first. # Check if this iteration is after 7. i += 1 if i % 7 == 0: i = 0 print 'Factor of 7' </code></pre> <p>But it seems there should be a better way. Any suggestions?</p>
<p>You can just use:</p> <pre><code>while True: for i in range(7): # Do some other stuff first. print 'Factor of 7' </code></pre>
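Two notes on the original worry: Python integers never overflow, so the plain modulo counter is safe indefinitely; and if you want an index that never grows at all, <code>itertools.cycle</code> works too. A bounded demo (21 iterations standing in for <code>while True</code>):

```python
from itertools import cycle

hits = []
phase = cycle(range(7))       # yields 0,1,...,6,0,1,... forever
for iteration in range(21):   # stand-in for the infinite loop
    # Do some other stuff first.
    if next(phase) == 6:      # true on every 7th iteration
        hits.append(iteration)

print(hits)  # [6, 13, 20]
```

`cycle` keeps the "counter" bounded to 0..6 without any manual reset.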
python|python-2.7
1
1,904,469
27,641,213
Slicing -> not sure what -len does
<p>Can someone explain how the output is <code>'b'</code> for <code>a[-len(a)]</code>?</p> <pre><code>a = "blueberry" &gt;&gt;&gt; a[-len(a)] 'b' </code></pre>
<p>If a negative number is used as an index, internally, the length of the sequence is added to it, and the result is then used as the index.</p> <p>So, what happens is:</p> <pre><code>a[-len(a)] a[len(a)-len(a)] a[0] </code></pre> <p>which is <code>b</code>.</p>
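In other words, <code>a[-len(a)]</code> is always the first element of any non-empty sequence:

```python
a = "blueberry"      # len(a) == 9

print(a[-1])         # 'y' -- negative indices count from the end
print(a[-9])         # 'b' -- same as a[-len(a)], i.e. a[0]

assert a[-len(a)] == a[0] == 'b'
```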
python|slice
3
1,904,470
72,444,193
PySpark Parallelism
<p>I am new to Spark and am trying to implement reading data from a Parquet file and then, after some transformation, returning it to the web UI in a paginated way. Everything works; no issues there.</p> <p>Now I want to improve the performance of my application; after some Google and Stack Overflow searching, I found out about PySpark parallelism.</p> <p>What I know is that:</p> <ol> <li>PySpark parallelism works by default, and it creates parallel processes based on the number of cores the system has.</li> <li>Also, for this to work, the data should be partitioned.</li> </ol> <p>Please correct me if my understanding is not right.</p> <p>Questions/doubts:</p> <ol> <li><p>I am reading data from one Parquet file, so my data is not partitioned, and using the <code>.repartition()</code> method on my dataframe is expensive. So how should I use PySpark parallelism here?</p> </li> <li><p>Also, I could not find any simple implementation of PySpark parallelism that explains how to use it.</p> </li> </ol>
<p>In a Spark cluster, one core reads one partition, so if you are on a multi-node Spark cluster you need to leave some memory for existing system managers like YARN.</p> <p>https://spoddutur.github.io/spark-notes/distribution_of_executors_cores_and_memory_for_spark_application.html</p> <p>You can use repartition and specify the number of partitions:</p> <p>df.repartition(n)</p> <p>where n is the number of partitions. Repartitioning enables parallelism; it will be less expensive than processing your single file without any partitioning.</p>
python-3.x|pyspark
0
1,904,471
48,849,394
How can I decode bytes in a list in Python?
<p>I use Python 2.7.8, and I try to get the origin/root of a word using the built-in function stem(), but the list I use was read raw from the file, and when I run the program an error occurs. Here is the code: </p> <pre><code> from nltk.stem.isri import ISRIStemmer st = ISRIStemmer() f=open("Hassan.txt","rU") text=f.read() text1=text.split() for i in range(1,numOfWords): #numOfWords is var that contain the num of print st.stem(text1[i]) # words in list (text1) </code></pre> <p>And the output was: </p> <pre><code> Warning (from warnings module): File "C:\Python27\lib\site-packages\nltk\stem\isri.py", line 154 if token in self.stop_words: UnicodeWarning: Unicode equal comparison failed to convert both arguments to Unicode - interpreting them as being unequal Traceback (most recent call last): File "C:\Python27\Lib\mycorpus.py", line 81, in &lt;module&gt; print st.stem(text1[i]) File "C:\Python27\lib\site-packages\nltk\stem\isri.py", line 156, in stem token = self.pre32(token) # remove length three and length two prefixes in this order File "C:\Python27\lib\site-packages\nltk\stem\isri.py", line 198, in pre32 if word.startswith(pre3): UnicodeDecodeError: 'ascii' codec can't decode byte 0xc8 in position 0: ordinal not in range(128) </code></pre> <p>How can I solve this problem?</p>
<p>You need to decode the text that's in your file. Assuming your file is encoded as UTF-8:</p> <pre><code>text=f.read().decode('utf-8') </code></pre>
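A small illustration of what the decode does, shown with Python 3 <code>bytes</code> (in Python 2 the same call is made on the <code>str</code> read from the file, as above). The byte values here are the UTF-8 encoding of five Arabic letters — an assumed stand-in for the real file contents:

```python
raw = b'\xd9\x85\xd8\xb1\xd8\xad\xd8\xa8\xd8\xa7'  # UTF-8 bytes (Arabic)

text = raw.decode('utf-8')
print(len(raw), len(text))  # 10 bytes decode to 5 characters

# Without decoding, any implicit ASCII conversion fails -- this is
# exactly the UnicodeDecodeError the stemmer raised:
try:
    raw.decode('ascii')
except UnicodeDecodeError as exc:
    print(exc.reason)  # ordinal not in range(128)
```

Decoding once, right after reading the file, gives the stemmer proper Unicode text to work with.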
python|nltk|decode|arabic|python-unicode
1
1,904,472
48,803,063
how to convert a list of values of a column and assign it 0 or 1
<p>I am trying to assign a one or zero depending on whether the particular item was purchased by that user. Below is the dataset used. I am getting '[' brackets in the column names when I use get_dummies. </p> <p>DF:</p> <pre><code>user items A [111,333,444] B [333, 444, 555] C [555, 111, 333] D [222,333, 333,333] E [111,333,444,555] F [222,555,111] </code></pre> <p>output : </p> <pre><code> [111 222 [333 444 [555 A 1 0 1 1 0 B 0 0 1 1 1 C 1 0 1 0 1 D 0 1 1 0 0 E 1 0 1 1 1 F 1 1 0 0 1 </code></pre> <p>CODE: </p> <pre><code>(df.set_index('user')['items'].str.get_dummies(',')) </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.get_dummies.html" rel="nofollow noreferrer"><code>get_dummies</code></a> for indicators and last reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a>:</p> <pre><code>df = (df.set_index('user')['items'] .str.get_dummies(',') .stack() .reset_index(name='Y/N') .rename(columns={'level_1':'item'})) </code></pre> <hr> <pre><code>print (df) user item Y/N 0 A 111 1 1 A 222 0 2 A 333 1 3 A 444 1 4 A 555 0 5 B 111 0 6 B 222 0 7 B 333 1 8 B 444 1 9 B 555 1 10 C 111 1 11 C 222 0 12 C 333 1 13 C 444 0 14 C 555 1 15 D 111 0 16 D 222 1 17 D 333 1 18 D 444 0 19 D 555 0 20 E 111 1 21 E 222 0 22 E 333 1 23 E 444 1 24 E 555 1 25 F 111 1 26 F 222 1 27 F 333 0 28 F 444 0 29 F 555 1 </code></pre> <p><strong>Detail</strong>:</p> <pre><code>print (df.set_index('user')['items'].str.get_dummies(',')) 111 222 333 444 555 user A 1 0 1 1 0 B 0 0 1 1 1 C 1 0 1 0 1 D 0 1 1 0 0 E 1 0 1 1 1 F 1 1 0 0 1 </code></pre>
python|python-3.x|pandas
1
1,904,473
20,167,145
pip install psycopg2 venv freezing
<p>So I'm trying to install psycopg2 in a virtualenv so that I can deploy a Django app to Heroku. I ran into a pg_config executable not found error, so I reinstalled PostgreSQL, but this time via homebrew (I just used the OSX graphical install before), hoping this would fix the error. I'm not getting that error anymore, but now when I sudo pip install psycopg2, I just get this:</p> <pre><code>(venv)xxxx-xxxx:project_name Brandon$ sudo pip install psycopg2 Password: Downloading/unpacking psycopg2 Downloading psycopg2-2.5.1.tar.gz (684kB): 684kB downloaded Running setup.py egg_info for package psycopg2 Installing collected packages: psycopg2 Running setup.py install for psycopg2 building 'psycopg2._psycopg' extension cc -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe - fno-common -fno-strict-aliasing -fwrapv -mno-fused-madd -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wstrict-prototypes -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE -arch x86_64 -arch i386 -pipe -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x090301 -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -I. -I/usr/local/Cellar/postgresql/9.3.1/include -I/usr/local/Cellar/postgresql/9.3.1/include/server -c psycopg/psycopgmodule.c -o build/temp.macosx-10.9-intel-2.7/psycopg/psycopgmodule.o clang: warning: argument unused during compilation: '-mno-fused-madd' </code></pre> <p>and it just freezes there. I let it go for half an hour before canceling it. It doesn't even give me an error message, just that warning, which I assume isn't a big deal. Does anyone know how to fix this?</p>
<p>From a linux command line, I get this:</p> <pre><code>% sudo apt-get install postgresql libpq-dev ... % pip install psycopg2 Downloading/unpacking psycopg2 Downloading psycopg2-2.5.1.tar.gz (684kB): 684kB downloaded Running setup.py egg_info for package psycopg2 Installing collected packages: psycopg2 Running setup.py install for psycopg2 building 'psycopg2._psycopg' extension x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/psycopgmodule.c -o build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/green.c -o build/temp.linux-x86_64-2.7/psycopg/green.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/pqpath.c -o build/temp.linux-x86_64-2.7/psycopg/pqpath.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/utils.c -o build/temp.linux-x86_64-2.7/psycopg/utils.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/bytes_format.c -o build/temp.linux-x86_64-2.7/psycopg/bytes_format.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/connection_int.c -o build/temp.linux-x86_64-2.7/psycopg/connection_int.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/connection_type.c -o build/temp.linux-x86_64-2.7/psycopg/connection_type.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/cursor_int.c -o build/temp.linux-x86_64-2.7/psycopg/cursor_int.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/cursor_type.c -o build/temp.linux-x86_64-2.7/psycopg/cursor_type.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/diagnostics_type.c -o build/temp.linux-x86_64-2.7/psycopg/diagnostics_type.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/error_type.c -o build/temp.linux-x86_64-2.7/psycopg/error_type.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/lobject_int.c -o build/temp.linux-x86_64-2.7/psycopg/lobject_int.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/lobject_type.c -o build/temp.linux-x86_64-2.7/psycopg/lobject_type.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/notify_type.c -o build/temp.linux-x86_64-2.7/psycopg/notify_type.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/xid_type.c -o build/temp.linux-x86_64-2.7/psycopg/xid_type.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_asis.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_asis.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_binary.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_binary.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_datetime.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_datetime.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_list.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_list.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pboolean.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_pboolean.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pdecimal.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_pdecimal.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pint.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_pint.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_pfloat.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_pfloat.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/adapter_qstring.c -o build/temp.linux-x86_64-2.7/psycopg/adapter_qstring.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/microprotocols.c -o build/temp.linux-x86_64-2.7/psycopg/microprotocols.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. -I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/microprotocols_proto.c -o build/temp.linux-x86_64-2.7/psycopg/microprotocols_proto.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DPSYCOPG_DEFAULT_PYDATETIME=1 -DPSYCOPG_VERSION="2.5.1 (dt dec pq3 ext)" -DPG_VERSION_HEX=0x09010A -DPSYCOPG_EXTENSIONS=1 -DPSYCOPG_NEW_BOOLEAN=1 -DHAVE_PQFREEMEM=1 -I/usr/include/python2.7 -I. 
-I/usr/include/postgresql -I/usr/include/postgresql/9.1/server -c psycopg/typecast.c -o build/temp.linux-x86_64-2.7/psycopg/typecast.o -Wdeclaration-after-statement x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -D_FORTIFY_SOURCE=2 -g -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/psycopg/psycopgmodule.o build/temp.linux-x86_64-2.7/psycopg/green.o build/temp.linux-x86_64-2.7/psycopg/pqpath.o build/temp.linux-x86_64-2.7/psycopg/utils.o build/temp.linux-x86_64-2.7/psycopg/bytes_format.o build/temp.linux-x86_64-2.7/psycopg/connection_int.o build/temp.linux-x86_64-2.7/psycopg/connection_type.o build/temp.linux-x86_64-2.7/psycopg/cursor_int.o build/temp.linux-x86_64-2.7/psycopg/cursor_type.o build/temp.linux-x86_64-2.7/psycopg/diagnostics_type.o build/temp.linux-x86_64-2.7/psycopg/error_type.o build/temp.linux-x86_64-2.7/psycopg/lobject_int.o build/temp.linux-x86_64-2.7/psycopg/lobject_type.o build/temp.linux-x86_64-2.7/psycopg/notify_type.o build/temp.linux-x86_64-2.7/psycopg/xid_type.o build/temp.linux-x86_64-2.7/psycopg/adapter_asis.o build/temp.linux-x86_64-2.7/psycopg/adapter_binary.o build/temp.linux-x86_64-2.7/psycopg/adapter_datetime.o build/temp.linux-x86_64-2.7/psycopg/adapter_list.o build/temp.linux-x86_64-2.7/psycopg/adapter_pboolean.o build/temp.linux-x86_64-2.7/psycopg/adapter_pdecimal.o build/temp.linux-x86_64-2.7/psycopg/adapter_pint.o build/temp.linux-x86_64-2.7/psycopg/adapter_pfloat.o build/temp.linux-x86_64-2.7/psycopg/adapter_qstring.o build/temp.linux-x86_64-2.7/psycopg/microprotocols.o build/temp.linux-x86_64-2.7/psycopg/microprotocols_proto.o build/temp.linux-x86_64-2.7/psycopg/typecast.o -lpq -o build/lib.linux-x86_64-2.7/psycopg2/_psycopg.so Successfully installed psycopg2 Cleaning up... 
</code></pre> <p>Maybe use a different compiler as your default C/C++ compiler?</p>
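As a first step toward that suggestion, it can help to see which compiler the Python build machinery invokes by default; a minimal sketch (setting the `CC` environment variable before running `pip install` is the usual way to override it — that is general distutils/setuptools behavior, not anything psycopg2-specific):

```python
import sysconfig

# The compiler command distutils/setuptools uses when building C extensions.
# Exporting CC=<other compiler> in the shell before `pip install psycopg2`
# changes which compiler that build step runs.
print(sysconfig.get_config_var("CC"))
```

If the printed command is the one that hangs, running the install again with `CC` pointed at a different compiler is a cheap experiment.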
python|django|heroku|heroku-postgres|psycopg
1
1,904,474
66,969,455
normal distribution curve doesn't fit well over histogram in subplots using matplotlib
<p>I am using &quot;plt.subplots(2, 2, sharex=True, sharey=True)&quot; to draw 2*2 subplots. Each subplot has two Y axes and contains a normal distribution curve over a histogram. Note that I particularly set &quot;sharex=True, sharey=True&quot; here in order to make all subplots share the same X axis and Y axis.</p> <p>After running my code, everything is fine except the second, third, and fourth subplots, where the normal distribution curve doesn't fit the histogram very well (please see the figure here):</p> <p><a href="https://i.stack.imgur.com/Do578.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Do578.png" alt="enter image description here" /></a></p> <p>I googled but failed to get this issue solved. However, if I set &quot;sharex=True, sharey=False&quot; in my code, then the figure looks correct, but all subplots use their own Y axis, which isn't what I want. Please see the figure here:</p> <p><a href="https://i.stack.imgur.com/Y1NRk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y1NRk.png" alt="enter image description here" /></a></p> <p>Hope this issue can be fixed by experts on StackOverflow. 
Many thanks in advance!</p> <p>Below is my code:</p> <pre><code>import matplotlib.pyplot as plt from scipy.stats import norm def align_yaxis(ax1, v1, ax2, v2): #adjust ax2 ylimit so that v2 in ax2 is aligned to v1 in ax1 _, y1 = ax1.transData.transform((0, v1)) _, y2 = ax2.transData.transform((0, v2)) inv = ax2.transData.inverted() _, dy = inv.transform((0, 0)) - inv.transform((0, y1-y2)) miny, maxy = ax2.get_ylim() ax2.set_ylim(miny+dy, maxy+dy) def drawSingle(myax, mydf , title, offset): num_bins = 200 xs = mydf[&quot;gap&quot;] x = np.linspace(-1,1,1000) mu =np.mean(x) sigma =np.std(xs) n, bins, patche = myax.hist(xs, num_bins, alpha=0.8, facecolor='blue', density=False) myax.set_ylabel('frequency',color=&quot;black&quot;,fontsize=12, weight = &quot;bold&quot;) myax.set_xlabel('X', fontsize=12, weight = &quot;bold&quot;,horizontalalignment='center') ax_twin = myax.twinx() y_normcurve = norm.pdf(bins, mu, sigma) ax_twin.plot(bins, y_normcurve, 'r--') align_yaxis(myax,0,ax_twin,0) peakpoint = norm.pdf(mu,loc=mu,scale=sigma) plt.vlines(mu, 0, peakpoint, 'y', '--', label='example') ax_twin.set_ylabel(&quot;probablility dense&quot;,color=&quot;black&quot;,fontsize=12, weight = &quot;bold&quot;) def drawSubplots(mydf1,mydf2,mydf3,mydf4, pos1,pos2,pos3,pos4, title, filename): plt.rcParams['figure.figsize'] = (18,15 ) my_x_ticks = np.arange(-0.8, 0.8,0.1) rows, cols = 2, 2 fig, ax = plt.subplots(2, 2, sharex=True, sharey=True) drawSingle(ax[0][0], mydf1, &quot;Subplot1&quot;, pos1) drawSingle(ax[0][1], mydf2, &quot;Subplot2&quot;, pos2) drawSingle(ax[1][0], mydf3, &quot;Subplot3&quot;, pos3) drawSingle(ax[1][1], mydf4, &quot;Subplot4&quot;, pos4) plt.text(-1, -1, title, horizontalalignment='center', fontsize=18) plt.show() drawSubplots(df1, df2,df3,df4,3.2,3.1,2.7,2.85,&quot;test9&quot;, &quot;test9&quot;) </code></pre>
<p>Many thanks JohanC, you are amazing.</p> <p>Based on your code, I just added a few lines of code within the drawSubplots function in order to make 95% of the Gaussian curve area shaded between the lower bound and the upper bound for each subplot. The following is my try. It seems that ax_twin.fill_between doesn't work normally here. As you can see from the figure, the shaded area is outside the Gaussian curve <a href="https://i.stack.imgur.com/xpObh.png" rel="nofollow noreferrer">enter image description here</a>. What I want is only to shade the area under the Gaussian curve between the lower bound and upper bound. If you don't mind, would you please check out my mistake? Thank you very much!</p> <pre><code>import matplotlib.pyplot as plt import math from scipy.stats import norm def align_yaxis(ax1, v1, ax2, v2): #adjust ax2 ylimit so that v2 in ax2 is aligned to v1 in ax1 _, y1 = ax1.transData.transform((0, v1)) _, y2 = ax2.transData.transform((0, v2)) inv = ax2.transData.inverted() _, dy = inv.transform((0, 0)) - inv.transform((0, y1-y2)) miny, maxy = ax2.get_ylim() ax2.set_ylim(miny+dy, maxy+dy) def drawSingle(myax, mydf , title): num_bins = 200 xs = mydf[&quot;gap&quot;] x = np.linspace(-1,1,1000) mu =np.mean(xs) sigma =np.std(xs) n, bins, patches = myax.hist(xs, num_bins, alpha=0.8, facecolor='blue', density=False) myax.set_ylabel('Frequency', color=&quot;black&quot;, fontsize=12, weight=&quot;bold&quot;) myax.set_xlabel(title, fontsize=12, weight=&quot;bold&quot;, horizontalalignment='center') normalization_factor = len(xs) * (bins[1] - bins[0]) y_normcurve = norm.pdf(x, mu, sigma) * normalization_factor myax.plot(x, y_normcurve, 'r--') myax.vlines(mu, 0, y_normcurve.max(), 'y', '--', color='lime', label='example') plt.xlim(-0.8,0.8) my_x_ticks = np.arange(-0.8, 0.8,0.1) plt.xticks(my_x_ticks) return normalization_factor, mu, sigma def drawSubplots(mydf1,mydf2,mydf3,mydf4, title): plt.rcParams['figure.figsize'] = (18,15 ) norm_factors = [] mus = [] sigmas 
= [] my_x_ticks = np.arange(-0.8, 0.8,0.1) rows, cols = 2, 2 fig, ax = plt.subplots(nrows=rows, ncols=cols, sharex=True, sharey=True) dfs = [mydf1, mydf2, mydf3, mydf4] #norm_factors = [drawSingle(ax_i, df, title) #for ax_i, df, title in zip(ax.ravel(), dfs, [&quot;Subplot1&quot;, &quot;Subplot2&quot;, &quot;Subplot3&quot;, &quot;Subplot4&quot;])] for ax_i, df, title in zip(ax.ravel(), dfs, [&quot;Subplot1&quot;, &quot;Subplot2&quot;, &quot;Subplot3&quot;, &quot;Subplot4&quot;]): norm_factor, mu, sigma = drawSingle(ax_i, df, title) norm_factors.append(norm_factor) mus.append(mu) sigmas.append(sigma) for ax_i, norm_factor, mu, sigma in zip(ax.ravel(), norm_factors, mus, sigmas ): ax_twin = ax_i.twinx() xmax = ax_i.get_xlim()[1] ax_twin.set_ylim(0, xmax / norm_factor) ax_twin.set_ylabel(&quot;probablility dense&quot;,color=&quot;black&quot;,fontsize=12, weight = &quot;bold&quot;) CI_95_lower = mu - (1.96*sigma) CI_95_upper = mu + (1.96*sigma) px_shaded = np.arange(CI_95_lower,CI_95_upper,0.1) ax_twin.fill_between(px_shaded,norm.pdf(px_shaded,loc=mu,scale=sigma) * norm_factor,alpha=0.75, color='pink') area_shaded_95_CI = norm.cdf(x=CI_95_upper, loc=mu, scale=sigma)-norm.cdf(x=CI_95_lower, loc=mu, scale=sigma) ax_twin.text(-0.06,0.01,str(round(area_shaded_95_CI*100,1))+&quot;%&quot;, fontsize=20) ax_twin.annotate(s=f'lower bound= {CI_95_lower:.3f}',xy=(CI_95_lower,norm.pdf(CI_95_lower,loc=mu,scale=sigma)),xytext=(-0.75,0.01),weight='bold',color='blue',\ arrowprops=dict(arrowstyle='-|&gt;',connectionstyle='arc3',color='green'),\ fontsize=12 ) ax_twin.annotate(s=f'upper bound= {CI_95_upper:.3f}',xy=(CI_95_upper,norm.pdf(CI_95_upper,loc=mu,scale=sigma)),xytext=(0.28,0.01),weight='bold',color='blue',\ arrowprops=dict(arrowstyle='-|&gt;',connectionstyle='arc3',color='green'),\ fontsize=12 ) ax_twin.text(0.05, 0.03, r&quot;$\mu=&quot; + f'{mu:.6f}' + &quot;, \sigma=&quot; + f'{sigma:.6f}' + &quot;$&quot; + &quot;, confidence interval=95%&quot; , horizontalalignment='center', 
fontsize=15) plt.suptitle(title, fontsize=18) plt.tight_layout() plt.show() df1, df2, df3, df4 = [pd.DataFrame({&quot;gap&quot;: np.random.normal(0, 0.2, n)}) for n in [6000, 4000, 1800, 1200]] drawSubplots(df1, df2, df3, df4, &quot;Title&quot;) </code></pre>
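For reference, the fill-under-the-curve part can be isolated from the twin-axis machinery. A minimal sketch (my own reduction, not the poster's code) that shades 95% of a Gaussian between the 1.96-sigma bounds; it uses a dense `linspace` grid between the bounds, since a coarse `np.arange` step like the `0.1` above is one way the fill ends up poking past the curve:

```python
import numpy as np
from scipy.stats import norm
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

mu, sigma = 0.0, 0.2
lo, hi = mu - 1.96 * sigma, mu + 1.96 * sigma

fig, ax = plt.subplots()
x = np.linspace(mu - 4 * sigma, mu + 4 * sigma, 400)
ax.plot(x, norm.pdf(x, mu, sigma), "r--")

# Dense grid strictly between the bounds, so the filled polygon hugs the
# curve instead of overshooting at the endpoints.
xs = np.linspace(lo, hi, 200)
ax.fill_between(xs, norm.pdf(xs, mu, sigma), alpha=0.75, color="pink")

area = norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)
print(round(area, 3))  # 0.95
```

In the twin-axis version, the same idea applies: evaluate the pdf on a fine grid and multiply by the same normalization factor used for the plotted curve before passing it to `fill_between`.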
python|matplotlib|histogram|subplot|normal-distribution
0
1,904,475
4,179,062
Python: Problems adding items to list in a class
<p>I've got a class defined with a method to add items to it:</p> <pre><code>class ProdReg: def __init__(self): self.__PListe=[] def addProdukt(self,pItem): self.__Pliste.append(pItem) </code></pre> <p>When I instantiate a ProdReg object and try to add an object to it with the following code:</p> <pre><code>pr.addProdukt(b) </code></pre> <p>I get the following error: AttributeError: <code>'ProdReg' object has no attribute '_ProdReg__Pliste'</code></p> <p>What's wrong? I'm not able to figure this one out.</p> <p>/Andy.l</p>
<p>Because in the <code>__init__</code> you wrote <code>__PListe</code>, and in the <code>addProdukt</code> method you wrote <code>__Pliste</code>. Python is case sensitive.</p>
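For context, the mangled name in the error message (`_ProdReg__Pliste`) comes from Python's name mangling: inside a class body, any `__name` attribute is rewritten to `_ClassName__name`, so the two spellings become two different attributes. A corrected version of the class with consistent spelling (my own sketch) works as expected:

```python
class ProdReg:
    def __init__(self):
        self.__PListe = []            # stored as _ProdReg__PListe via name mangling

    def addProdukt(self, p_item):
        self.__PListe.append(p_item)  # same spelling, so the lookup succeeds

pr = ProdReg()
pr.addProdukt("b")
# The mangled name is what actually lives in the instance dict:
print("_ProdReg__PListe" in vars(pr))  # True
print(vars(pr)["_ProdReg__PListe"])    # ['b']
```

This is also why the traceback shows the class-name prefix even though the source never spells it out.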
python
6
1,904,476
4,770,870
A table with a composite Primary key, one of the fields with autoincrement
<p>I'm using a table constraint to create a composite primary key. I would like the <code>id</code> field to <em>autoincrement</em>. Is this possible, or what are the alternatives?</p> <pre><code> CREATE TABLE IF NOT EXISTS atable( id INTEGER NOT NULL, --I want to autoincrement this one name TEXT NOT NULL, anotherCol TEXT, PRIMARY KEY(id, name)); </code></pre>
<p>No, there's only one primary key: that's the composite of id and name.</p> <p>If you mean that you want id to be the primary key, and name to be an indexed alternate key, I'd say that you should give name a unique constraint and make id the primary key.</p>
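A sketch of that alternative in SQLite: a lone `INTEGER PRIMARY KEY` column is an alias for the rowid and is auto-assigned on insert, while a separate `UNIQUE` constraint covers `name` as the indexed alternate key (shown here through Python's built-in `sqlite3` module):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE atable(
        id         INTEGER PRIMARY KEY,   -- rowid alias: auto-assigned on INSERT
        name       TEXT NOT NULL UNIQUE,  -- indexed alternate key
        anotherCol TEXT)
""")
conn.execute("INSERT INTO atable(name) VALUES ('first')")
conn.execute("INSERT INTO atable(name) VALUES ('second')")
print(conn.execute("SELECT id, name FROM atable ORDER BY id").fetchall())
# [(1, 'first'), (2, 'second')]
```

Omitting `id` from the INSERT is what triggers the automatic assignment; the `UNIQUE` constraint still rejects duplicate names, which covers what the composite key was enforcing.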
python|sqlite
5
1,904,477
48,051,219
Cannot import name ContextualZipFile
<p>Here's what I get when I try to install sqlalchemy:</p> <pre><code>Traceback (most recent call last): File "/usr/bin/easy_install", line 11, in &lt;module&gt; load_entry_point('setuptools==33.1.1', 'console_scripts', 'easy_install')() File "/home/j/.local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 572, in load_entry_point return get_distribution(dist).load_entry_point(group, name) File "/home/j/.local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2755, in load_entry_point return ep.load() File "/home/j/.local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2408, in load return self.resolve() File "/home/j/.local/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2414, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 51, in &lt;module&gt; from setuptools.archive_util import unpack_archive File "/usr/lib/python2.7/dist-packages/setuptools/archive_util.py", line 12, in &lt;module&gt; from pkg_resources import ContextualZipFile ImportError: cannot import name ContextualZipFile </code></pre> <p>I have tried installing and uninstalling setuptools.</p> <p>I have also tried changing "from pkg_resources import ensure_directory, ContextualZipFile" to "from pkg_resources import ensure_directory" in my usr/local/lib/python2.7/dist-packages/setuptools/archive_util.py. 
However I get the error </p> <pre><code>Best match: MySQL-python 1.2.5 Processing MySQL-python-1.2.5.zip Traceback (most recent call last): File "/usr/bin/easy_install", line 11, in &lt;module&gt; load_entry_point('setuptools==33.1.1', 'console_scripts', 'easy_install')() File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 2325, in main **kw File "/usr/lib/python2.7/distutils/core.py", line 151, in setup dist.run_commands() File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands self.run_command(cmd) File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command cmd_obj.run() File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 436, in run self.easy_install(spec, not self.no_deps) File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 699, in easy_install return self.install_item(spec, dist.location, tmpdir, deps) File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 725, in install_item dists = self.install_eggs(spec, download, tmpdir) File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 876, in install_eggs unpack_archive(dist_filename, tmpdir, self.unpack_progress) File "/usr/lib/python2.7/dist-packages/setuptools/archive_util.py", line 53, in unpack_archive driver(filename, extract_dir, progress_filter) File "/usr/lib/python2.7/dist-packages/setuptools/archive_util.py", line 102, in unpack_zipfile with ContextualZipFile(filename) as z: NameError: global name 'ContextualZipFile' is not defined </code></pre> <p>I have also tried directly importing pkg_resources but it still can't find ContextualZipFile. I get the error:</p> <pre><code>NameError: global name 'ContextualZipFile' is not defined </code></pre> <p>I use python version = 2.7.13 and pip version = 9.0.1</p> <p>I am also doing this inside a virtualenvironment</p>
<p>This is a pretty nasty hack but I just commented out the import and <code>easy_install</code> seems to be working fine. Change the corresponding line to:</p> <pre><code>from pkg_resources import ensure_directory#, ContextualZipFile </code></pre> <p>NOTE: It's a system file so you're gonna have to use sudo privilege to change it. You can do that using: <code>sudo vim filename</code>.</p> <p>It's not a great fix, but if you need to get it working in a hurry, this should do the trick. Hope this helps!</p>
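The same edit can also be scripted rather than done by hand in an editor. A sketch of the substitution in Python, applied here to a scratch string rather than the real `archive_util.py` (patching the system file itself would need sudo, as noted above):

```python
# Scratch copy of the offending import line from archive_util.py:
line = "from pkg_resources import ensure_directory, ContextualZipFile"

# Drop only the broken name, keeping the import that still resolves:
patched = line.replace(", ContextualZipFile", "")
print(patched)  # from pkg_resources import ensure_directory
```

Reading the real file, applying the same `replace`, and writing it back (with elevated privileges) reproduces the manual fix exactly.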
python
2
1,904,478
48,235,947
tuples of RGB colors to surfarray in Pygame
<p>I have an array of complex numbers z, which I want to convert to RGB values with the function ncol. Then I want to use it to create a Pygame surfarray. Here is an example:</p> <pre><code>import numpy as np import cmath z = np.array([[(complex(-(x/2),-(y/2))) for x in range(2)]for y in range(2)]) def ncol(z): if cmath.phase(z)&gt;180: w = (255,255,255) else: w = (125,125,0) return w fz = np.frompyfunc(ncol,1,1) w = fz(z) print(w) </code></pre> <p>How could I translate it to a Pygame surfarray? I tried </p> <pre><code>pygame.surfarray.blit_array(surf,w) </code></pre> <p>But it gives </p> <pre><code>ValueError: Unsupported array element type </code></pre> <p>As I understand it, z has shape (2,2) and the right shape for a surfarray has to be (2,2,3)</p> <p><strong>Answer given by skrx</strong></p> <pre><code>w = np.array([list(arr) for arr in w]) </code></pre>
<p>The shape of your array has to match the size of your surface. This means that if you look at the output of <code>numpy.array(z).shape</code>, it has to be <code>(width, height, 3)</code>, matching the size of your surface. You can check the width and height of your surface using <code>surf.get_width()</code> and <code>surf.get_height()</code>.</p> <p>Furthermore, the third element of the shape tuple has to be <code>3</code> because the surface uses a 3-tuple to represent the RGB color values.</p>
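A self-contained check of that shape requirement, using only NumPy (no Pygame needed): `np.frompyfunc` returns an *object* array of shape `(2, 2)` whose elements are tuples — which is what `blit_array` rejects — and rebuilding it element-wise yields the numeric `(2, 2, 3)` array it expects. The color rule below is a toy stand-in for the question's phase test:

```python
import numpy as np

def ncol(z):
    # toy color rule standing in for the question's cmath.phase test
    return (255, 255, 255) if z.real < 0 else (125, 125, 0)

fz = np.frompyfunc(ncol, 1, 1)
z = np.array([[complex(-x / 2, -y / 2) for x in range(2)] for y in range(2)])

w = fz(z)                    # object array, shape (2, 2), elements are tuples
print(w.shape, w.dtype)      # (2, 2) object

# Expand each tuple into its own axis to get a numeric (2, 2, 3) array:
rgb = np.array([[list(px) for px in row] for row in w], dtype=np.uint8)
print(rgb.shape, rgb.dtype)  # (2, 2, 3) uint8
```

An array like `rgb` can then be passed to `pygame.surfarray.blit_array` on a surface of matching width and height.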
python|pygame
3
1,904,479
64,546,546
Trying to Make a Virtualenv on Windows 8 for Python but 'pip' is Not Recognized
<p>I am new to programming and am trying to install <code>virtualenv</code> to set up my Python packages properly in one. I can see <code>Python39\scripts</code> is in my path, but cmd still says <code>pip is not recognized</code>.</p> <p>As far as I can see I do not have any copies of Python installed, and have only moved the main file from <code>appdata</code> into a document folder titled Programming Languages for convenience. Any help or tips on this subject would be appreciated!</p>
<p>You have to reconfigure the PATH because you moved Python out of appdata; when you type pip, the system is still looking for Python39\scripts in appdata. You have to paste the full new path into PATH in order for it to work.</p>
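To see exactly which directory has to be on PATH for a bare `pip` command to resolve, Python can report its own scripts directory (on Windows this path ends in `Scripts`, on other platforms in `bin`):

```python
import sysconfig

# The folder where pip and other console-script executables are installed;
# this is the path that must appear in PATH for bare `pip` to be recognized.
print(sysconfig.get_path("scripts"))
```

As a stopgap while PATH is being fixed, `python -m pip install virtualenv` invokes pip through the interpreter itself and works even when that directory is missing from PATH.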
python|windows|path|pip|virtualenv
0
1,904,480
69,766,461
How to replicate a Spark window function with collect_list in Pandas dataframes?
<p>I have an initial dataframe</p> <pre><code>df1 = +---+---+---+ | A| B| C| +---+---+---+ | 1| 1| 10| | 1| 2| 11| | 1| 2| 12| | 3| 1| 13| | 2| 1| 14| | 2| 1| 15| | 2| 1| 16| | 4| 1| 17| | 4| 2| 18| | 4| 3| 19| | 4| 4| 19| | 4| 5| 20| | 4| 5| 20| +---+---+---+ </code></pre> <p>Using PySpark, I computed a new column on the dataframe with a window function and <code>collect_list</code>, partitioning by the grouping column <code>'A'</code> and ordering by column <code>'B'</code>, to create a column of cumulative lists</p> <pre class="lang-py prettyprint-override"><code>spec = Window.partitionBy('A').orderBy('B') df1 = df1.withColumn('D',collect_list('C').over(spec)) df1.orderBy('A','B').show() +---+---+---+------------------------+ |A |B |C |D | +---+---+---+------------------------+ |1 |1 |10 |[10] | |1 |2 |11 |[10, 11, 12] | |1 |2 |12 |[10, 11, 12] | |2 |1 |14 |[14, 15, 16] | |2 |1 |15 |[14, 15, 16] | |2 |1 |16 |[14, 15, 16] | |3 |1 |13 |[13] | |4 |1 |17 |[17] | |4 |2 |18 |[17, 18] | |4 |3 |19 |[17, 18, 19] | |4 |4 |19 |[17, 18, 19, 19] | |4 |5 |20 |[17, 18, 19, 19, 20, 20]| |4 |5 |20 |[17, 18, 19, 19, 20, 20]| +---+---+---+------------------------+ </code></pre> <p>Is it possible to do the same calculation using a Pandas dataframe?</p> <p>I tried using some &quot;normal&quot; Python code, but there is probably a way to do it more directly.</p>
<p>One way to approach this problem in pandas is to use two groupby's i.e. first group the dataframe on column <code>A</code> then for each group apply the custom defined function <code>collect_list</code> which in turns groups input by column <code>B</code> and cumulatively aggregates the column <code>C</code> using <code>list</code></p> <pre><code>def collect_list(g): return g.groupby('B')['C'].agg(list).cumsum() df.sort_values(['A', 'B']).merge( df.groupby('A').apply(collect_list).reset_index(name='D')) </code></pre> <hr /> <pre><code> A B C D 0 1 1 10 [10] 1 1 2 11 [10, 11, 12] 2 1 2 12 [10, 11, 12] 4 2 1 14 [14, 15, 16] 5 2 1 15 [14, 15, 16] 6 2 1 16 [14, 15, 16] 3 3 1 13 [13] 7 4 1 17 [17] 8 4 2 18 [17, 18] 9 4 3 19 [17, 18, 19] 10 4 4 19 [17, 18, 19, 19] 11 4 5 20 [17, 18, 19, 19, 20, 20] 12 4 5 20 [17, 18, 19, 19, 20, 20] </code></pre>
pandas|apache-spark|pyspark|window
1
1,904,481
66,659,545
How to add a timeout for a websocket connect
<p>I have some code as follows:</p> <pre><code>async with websockets.connect(uri) as websocket: # Do my websocket thing.. </code></pre> <p>I'd like to add a connection timeout so if the server is not available it doesn't hang. How can I do this?</p> <p>I think it's adding an await to websockets.connect(), but everything I've tried isn't working..</p>
<p>I ended up doing what Andrei suggested and used the try / except code, then followed it up with similar code as before, but added the close.</p> <pre><code> try: # make connection attempt websocket = await asyncio.wait_for(websockets.connect(uri), timeout) print(&quot;\t\t SS&gt; Connected&quot;) except: # handle error print('SS&gt; Error connecting.') rtn = [None,None] return rtn else: print(&quot;\t SS&gt; Connected &quot; + uri) try: await websocket.send(json_start) print(&quot;\t SS&gt; {json_start}&quot;) except: print(&quot;\t SS&gt; ERROR: Unable to send {json_start}&quot;) # and yet more code.. await websocket.close() return rtn </code></pre>
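The key piece, `asyncio.wait_for`, works on any awaitable, so the pattern can be shown self-contained with a slow coroutine standing in for `websockets.connect(uri)`:

```python
import asyncio

async def slow_connect():
    await asyncio.sleep(10)  # stands in for websockets.connect(uri) to a dead server
    return "connected"

async def connect_with_timeout(timeout):
    try:
        return await asyncio.wait_for(slow_connect(), timeout)
    except asyncio.TimeoutError:
        return None  # server unreachable within the timeout

result = asyncio.run(connect_with_timeout(0.1))
print(result)  # None
```

Swap `slow_connect()` for the real `websockets.connect(uri)` call and the timeout behaves the same way.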
python-3.x
0
1,904,482
65,048,338
Why do I get relative import error in python?
<p>I'm having this issue in python with relative imports, when I want to import a file from another folder I get an error.</p> <p>File structure:</p> <pre><code>python/ helpers/ index.py posts/ post.py </code></pre> <p>Then, I do in the <code>posts/post.py</code> file:</p> <pre class="lang-py prettyprint-override"><code>from ..helpers.index import conn </code></pre> <p>But when I run the python file, I get this error:</p> <blockquote> <p>ImportError: attempted relative import with no known parent package</p> </blockquote> <p>I don't really understand what I'm doing wrong. Especially that <code>vscode</code> understands that I'm trying to import this file</p> <p>Thanks in advance</p>
<p>I think this answer might be helpful: <a href="https://stackoverflow.com/questions/14132789/relative-imports-for-the-billionth-time/14132912#14132912">Relative imports for the billionth time</a></p> <p>In the section titled &quot;Relative imports&quot;.</p>
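In short, relative imports only work when the file runs as part of a package (e.g. `python -m posts.post` from the project root). One alternative is an absolute import with the project root on `sys.path`; the sketch below recreates the question's layout in a temporary directory so it is runnable as-is (the `conn` value is made up):

```python
import os
import sys
import tempfile

# Recreate the question's layout (helpers/index.py defining conn) in a
# throwaway directory so the example is self-contained.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "helpers"))
open(os.path.join(root, "helpers", "__init__.py"), "w").close()
with open(os.path.join(root, "helpers", "index.py"), "w") as f:
    f.write("conn = 'fake-connection'\n")

sys.path.insert(0, root)        # what running from the project root gives you
from helpers.index import conn  # absolute import instead of ..helpers.index

print(conn)  # fake-connection
```

With the real layout, running `python -m posts.post` from inside the `python/` directory avoids the `sys.path` manipulation entirely.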
python|import|python-import|importerror
0
1,904,483
53,351,541
fillna pandas doesn't affect original dataframe
<p>I'm trying to fill missing values for a specific column, but the original data frame doesn't change even though I'm using <code>inplace=True</code>. I tried this:</p> <pre><code>all_data.loc[all_data['GarageType'] == 'Detchd', 'GarageCond'].fillna('TA', inplace=True) </code></pre> <p>and this:</p> <pre><code>all_data.fillna({x:'TA' for x in ['GarageCond'] if ['GarageType'] == 'Detchd'}, inplace=True) </code></pre> <p>Edit: this worked</p> <pre><code>all_data.fillna({x:'TA' for x in ['GarageCond'] if (all_data['GarageType']=='Detchd').any()}, inplace=True) </code></pre>
<p>What are your version and environment?</p> <p>Try assigning the result back instead. Note that with <code>inplace=True</code>, <code>fillna</code> returns <code>None</code>, so don't combine it with an assignment:</p> <pre><code>all_data = all_data.fillna({x:'TA' for x in ['GarageCond'] if (all_data['GarageType'] == 'Detchd').any()}) </code></pre>
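The underlying pitfall is chained indexing: `all_data.loc[...]` returns a copy here, so `fillna(inplace=True)` on it never reaches `all_data`. A sketch of a mask-based assignment that does modify the original (column names from the question, toy data assumed):

```python
import numpy as np
import pandas as pd

all_data = pd.DataFrame({
    "GarageType": ["Detchd", "Attchd", "Detchd"],
    "GarageCond": [np.nan, "Fa", np.nan],
})

# Assign through .loc with a boolean mask instead of filling a copy.
mask = (all_data["GarageType"] == "Detchd") & all_data["GarageCond"].isna()
all_data.loc[mask, "GarageCond"] = "TA"
print(all_data["GarageCond"].tolist())  # ['TA', 'Fa', 'TA']
```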
python|pandas|fillna
0
1,904,484
68,830,111
Creating time series df with category and date and percentage change
<p>I have a dataframe like this:</p> <pre><code>category: number: date: dog 100 2020-01-01 cat 50 2020-01-01 dog 150 2020-01-02 mouse 200 2020-01-01 mouse 150 2020-01-02 cat 100 2020-01-02 </code></pre> <p>I am trying to create a dataframe that gets the percentage change for each individual category across each date, similar to this:</p> <pre><code>category: number: date: percentage_change: dog 100 2020-01-01 - dog 150 2020-01-02 50% cat 50 2020-01-01 - cat 100 2020-01-02 100% mouse 200 2020-01-01 - mouse 150 2020-01-02 25% </code></pre> <p>I have tried this:</p> <pre><code>df['number'].pct_change() </code></pre> <p>But this doesn't get pct_change for each category.</p> <p>Any help greatly appreciated.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>DataFrame.sort_values</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.pct_change.html" rel="nofollow noreferrer"><code>GroupBy.pct_change</code></a>:</p> <pre><code>df = df.sort_values(['category','date']) df['percentage_change'] = df.groupby('category')['number'].pct_change() print (df) category number date percentage_change 1 cat 50 2020-01-01 NaN 5 cat 100 2020-01-02 1.00 0 dog 100 2020-01-01 NaN 2 dog 150 2020-01-02 0.50 3 mouse 200 2020-01-01 NaN 4 mouse 150 2020-01-02 -0.25 </code></pre> <p>For percentages:</p> <pre><code>s = df['percentage_change'].mul(100).round().fillna(0,downcast='infer').astype(str) + '%' df['percentage_change'] = np.where(df['percentage_change'].isna(), '-', s) print (df) category number date percentage_change 1 cat 50 2020-01-01 - 5 cat 100 2020-01-02 100% 0 dog 100 2020-01-01 - 2 dog 150 2020-01-02 50% 3 mouse 200 2020-01-01 - 4 mouse 150 2020-01-02 -25% </code></pre>
python|pandas|dataframe|time-series
1
1,904,485
68,839,045
How to change Default Camera of ChromeDriver when web scraping with Selenium (Python)?
<p>I am using Selenium (with Python on Mac) to web scrape a site that requires my camera.</p> <p>However, I do not want to use my Computer Camera, I would like to use a Virtual Camera (of the OBS Application).</p> <p>At the beginning, my first problem was allowing the ChromeDriver to use the camera, as the pop-up of permission was appearing. I solved this problem with this code:</p> <pre><code>options = webdriver.ChromeOptions() options.add_experimental_option(&quot;prefs&quot;, { \ &quot;profile.default_content_setting_values.media_stream_mic&quot;: 1, # 1:allow, 2:block &quot;profile.default_content_setting_values.media_stream_camera&quot;: 1, # 1:allow, 2:block &quot;profile.default_content_setting_values.geolocation&quot;: 2, # 1:allow, 2:block &quot;profile.default_content_setting_values.notifications&quot;: 2 # 1:allow, 2:block }) </code></pre> <p>In this way, it is using my default camera. I would like to add something like <code>&quot;profile.default_content_setting_values.media_stream_camera.option&quot;: &quot;OBS&quot;,</code> in the code above, but that is not quite right. I read that the Preferences.json file located at /Users/myusername/Library/Application Support/Google/Chrome/Default/Preferences.json can show what can be changed in the above prefs dictionary, but I do not understand it very well.</p> <p>As I am unable tho proceed, any help would be of great value.</p>
<p>You could go to the Chrome camera settings when starting the script and change the default camera:</p> <pre><code>config_camera_url = &quot;chrome://settings/content/camera&quot; driver.get(config_camera_url) sleep(3) # Wait until selector appears selector = driver.execute_script( &quot;return document.querySelector('settings-ui')&quot; &quot;.shadowRoot.querySelector('settings-main')&quot; &quot;.shadowRoot.querySelector('settings-basic-page')&quot; &quot;.shadowRoot.querySelector('settings-section &gt; settings-privacy-page')&quot; &quot;.shadowRoot.querySelector('settings-animated-pages &gt; settings-subpage &gt; media-picker')&quot; &quot;.shadowRoot.querySelector('#picker &gt; #mediaPicker')&quot; &quot;.value = 'OBS-Camera4'&quot; # Change for your default camera ) </code></pre> <p>Because Selenium doesn't provide direct support for interacting with shadow DOM elements, you need to walk through each shadow root yourself.</p>
python|selenium|selenium-webdriver|web-scraping|selenium-chromedriver
2
1,904,486
71,729,828
How to create sympy equality with matrices
<p>I have two matrices I create like so:</p> <pre><code>A = sp.zeros(N**2, N**2) f = sp.zeros(N**2, 1) </code></pre> <p>And I would like to see the equation A=f (with all the matrices' elements &quot;spelled out&quot;). Using:</p> <pre><code>display(Eq(A, f)) </code></pre> <p>just results in an evaluation to <code>False</code></p> <p>How can I show the equality without evaluating it?</p> <p>Thanks in advance</p>
<p>Just needed to pass <code>evaluate=False</code> to <code>Eq</code> *facepalm*</p>
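A minimal sketch of the fix (smaller matrices than the question's, same idea; assumes SymPy is installed):

```python
import sympy as sp

A = sp.zeros(2, 2)
f = sp.zeros(2, 2)

print(sp.Eq(A, f))                  # the equality evaluates away to True
eq = sp.Eq(A, f, evaluate=False)    # kept as an unevaluated Equality
print(isinstance(eq, sp.Equality))  # True
```

`display(eq)` in a notebook then renders the matrices element by element instead of a bare boolean.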
python|matrix|sympy
0
1,904,487
67,471,330
How to filter and group pandas DataFrame to get count for combination of two columns
<p>I am sorry I could not fit the whole problem in the Title in a concise way. Pardon my English. I will explain my problem with an example.</p> <p>Say I have this dataset:</p> <pre><code>dff = pd.DataFrame(np.array([[&quot;2020-11-13&quot;, 0, 3,4], [&quot;2020-10-11&quot;, 1, 3,4], [&quot;2020-11-13&quot;, 2, 1,4], [&quot;2020-11-14&quot;, 0, 3,4], [&quot;2020-11-13&quot;, 1, 5,4], [&quot;2020-11-14&quot;, 2, 2,4],[&quot;2020-11-12&quot;, 1, 1,4],[&quot;2020-11-14&quot;, 1, 2,4],[&quot;2020-11-15&quot;, 2, 5,4], [&quot;2020-11-11&quot;, 0, 1,2],[&quot;2020-11-15&quot;, 1, 1,2], [&quot;2020-11-18&quot;, 1, 2,4],[&quot;2020-11-17&quot;, 0, 1,2],[&quot;2020-11-20&quot;, 0, 3,4]]), columns=['Timestamp', 'ID', 'Name', &quot;slot&quot;]) </code></pre> <p>I want to have a count for each <code>Name</code> and <code>slot</code> combination but disregard multiple timeseries value for same ID. For example, if I simply group by <code>Name</code> and <code>slot</code> I get:</p> <pre><code>dff.groupby(['Name', &quot;slot&quot;]).Timestamp.count().reset_index(name=&quot;count&quot;) Name slot count 1 2 3 1 4 2 2 4 3 3 4 4 5 4 2 </code></pre> <p>However, for <code>ID == 0</code>, there are two combinations for <code>name == 1</code> and <code>slot == 2</code>, so instead of <code>3</code> I want the count to be <code>2</code>.</p> <p>This is the table I would ideally want.</p> <pre><code> Name slot count 1 2 2 1 4 2 2 4 2 3 4 2 5 4 2 </code></pre> <p>I tried:</p> <pre><code>filter_one = dff.groupby(['ID']).Timestamp.transform(min) dff1 = dff.loc[dff.Timestamp == filter_one] dff1.groupby(['Name', &quot;slot&quot;]).Timestamp.count().reset_index(name=&quot;count&quot;) </code></pre> <p>But this gives me:</p> <pre><code> Name slot count 1 2 1 1 4 1 3 4 1 </code></pre> <p>It also does not work if I drop duplicates for <code>ID</code>.</p>
<pre class="lang-py prettyprint-override"><code>x = dff.groupby([&quot;Name&quot;, &quot;slot&quot;]).ID.nunique().reset_index(name=&quot;count&quot;) print(x) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> Name slot count 0 1 2 2 1 1 4 2 2 2 4 2 3 3 4 2 4 5 4 2 </code></pre>
python|pandas
3
1,904,488
60,544,908
remove indexes added by transpose using pandas
<p>I am reading CSV file using <code>pandas</code></p> <pre><code>data = pd.read_csv(csv_file) print(data) </code></pre> <p>Output:</p> <pre><code> header CGW added date Reference date Project Activity 0 is_mandatory no no yes no 1 datatype NaN NaN date NaN 2 format NaN NaN dd-mmm-yyyy NaN </code></pre> <p><strong>After transpose:</strong></p> <p>I have to transpose the data. I am transposing using <code>transpose()</code> method</p> <pre><code>data = data.transpose() print(data) </code></pre> <p>Output: </p> <pre><code> 0 1 2 header is_mandatory datatype format CGW added date no NaN NaN Reference date no NaN NaN Project yes date dd-mmm-yyyy Activity no NaN NaN </code></pre> <p>But here, <code>header, CGW added date, Reference date, Project, Activity</code> became indexes. I want it as data not as index. How can I do the same? Do I have to add any property while reading CSV itself?</p> <p><strong>Note: column names in CSV are not fix.</strong> </p>
<p>Try this out:</p> <p><code>data.reset_index(inplace=True)</code></p> <p>The former index labels become a regular column named <code>index</code>, which you can rename afterwards.</p>
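A short sketch with a toy frame standing in for the CSV:

```python
import pandas as pd

# Two columns standing in for the CSV's header columns
data = pd.DataFrame(
    {"header": ["is_mandatory", "datatype"], "Project": ["yes", "date"]}
).transpose()

data.reset_index(inplace=True)  # index labels become an ordinary column
data.rename(columns={"index": "column_name"}, inplace=True)
print(data["column_name"].tolist())  # ['header', 'Project']
```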
python|pandas|csv
2
1,904,489
17,822,135
QLineEdit change border color without changing border style
<p>I can change the background color of a QLineEdit widget in PyQt4 by doing this:</p> <pre><code>myEditField.setStyleSheet("QLineEdit { background-color : green;}") </code></pre> <p>Changing the border color requires me to do something like this:</p> <pre><code>myEditField.setStyleSheet("QLineEdit { border : 2px solid green;}") </code></pre> <p>This however is undesirable because it also changes the default shape and size of the border, I've tried using border-color instead, but it apparently only works if you have already specified border. Is there an easy way to do this?</p>
<p>You can set the stylesheet with the following values:</p> <pre><code> border-style: outset; border-width: 2px; border-color: green; </code></pre>
python|qt|pyqt
6
1,904,490
66,267,119
Puzzle with staticmethod in a class variable I can't figure out
<p>I'm trying to make a class static method, and then use it in a class variable as a callback. I'm running into a problem with the static method not being callable that I can't figure out:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from typing import Callable @dataclass class Option: txt: str callback: Callable class Window: @staticmethod def _doit(a=1): print('do it with a', a) cvar = Option(txt='doit', callback=_doit) w1 = Window() w1._doit() # works fine Window._doit() # works fine w1.cvar.callback() # this fails </code></pre> <p>The last call above fails with <code>&quot;TypeError: 'staticmethod' object is not callable&quot;</code>.</p> <p>I'm not sure why here, <em>but even more confusing to me is if I remove the <code>@staticmethod</code> line, then things work 'correctly'</em> - with all 3 calls in the code working.</p> <p>I know I must being doing something stupid wrong, but I can't figure out what's wrong with the above ...any help?</p> <p>thanks!</p>
<p>EDIT: Oops, @Richard correctly pointed out that my original example was broken - I did not see the error since I had already defined the <code>Window</code> class running the original code. A bit tricky, but here is a version that should work along with a more detailed <a href="https://stackoverflow.com/questions/12718187/calling-class-staticmethod-within-the-class-body">explanation</a>.</p> <p>EDIT #2: Sigh, fixed another typo. :^(</p> <p>Example:</p> <pre><code>cvar = Option(txt='doit', callback=_doit.__func__) </code></pre>
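A runnable distillation of the fix (return values instead of prints so the behaviour is easy to check; note that from Python 3.10 onward staticmethod objects are themselves callable, so the unwrapping only matters on older versions):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    txt: str
    callback: Callable

class Window:
    @staticmethod
    def _doit(a=1):
        return ("do it with a", a)

    # __func__ unwraps the staticmethod descriptor to the plain function,
    # which is what we want to store as a callback.
    cvar = Option(txt="doit", callback=_doit.__func__)

w1 = Window()
print(w1.cvar.callback())  # ('do it with a', 1)
```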
python-3.x
1
1,904,491
68,119,794
How to update Tkinter window every time image is processed
<p>I'm trying to create a GUI that will display an image that is taken every 10 seconds and it's processed counter part. I'm having trouble getting the GUI window to update the two images when they get overridden in the file system.</p> <p>Here's the code I have so far</p> <pre><code>import cv2 import threading from imageai.Detection import ObjectDetection import os from tkinter import * import os from PIL import ImageTk,Image root = Tk() canvas = Canvas(root, width = 2000, height = 2000) canvas.pack() execution_path = os.getcwd() inimg = ImageTk.PhotoImage(Image.open(os.path.join(execution_path , &quot;test.jpg&quot;))) outimg = ImageTk.PhotoImage(Image.open(os.path.join(execution_path , &quot;testnew.jpg&quot;))) def capture(): print(&quot;Capturing...&quot;) videoCaptureObject = cv2.VideoCapture(0) ret,frame = videoCaptureObject.read() cv2.imwrite(&quot;test.jpg&quot;,frame) print(&quot;Done capturing.&quot;) print(&quot;Finding&quot;) totalPersons = 0 detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path , &quot;test.jpg&quot;), output_image_path=os.path.join(execution_path , &quot;testnew.jpg&quot;)) for eachObject in detections: if(eachObject[&quot;name&quot;] == &quot;person&quot;): totalPersons += 1 print(&quot;Total persons &quot; + str(totalPersons)) detector = ObjectDetection() detector.setModelTypeAsRetinaNet() execution_path = os.getcwd() detector.setModelPath(os.path.join(execution_path , &quot;resnet50_coco_best_v2.0.1.h5&quot;)) detector.loadModel(&quot;fastest&quot;) def run(): threading.Timer(10.0, run).start() canvas.create_image(20, 20, anchor=NW, image=inimg) canvas.create_image(750, 500, anchor=SW, image=outimg) root.mainloop() capture() run() </code></pre>
<p>Try calling <code>root.update_idletasks()</code> to refresh the Tkinter window after each capture.</p>
python|tkinter
0
1,904,492
59,110,540
How to create and populate new column in 2nd Pandas dataframe based on condition in 1st df?
<p>I have a pandas dataframe which looks like the following.</p> <pre><code>df1 = {'City' :['London', 'Basel', 'Kairo'], 'Country': ['UK','Switzerland','Egypt']} City Country 0 London UK 1 Basel Switzerland 2 Kairo Egypt </code></pre> <p>Now I want to assign a new column called 'Continent' that should be filled based on the values.</p> <p>So df2 should look like:</p> <pre><code> City Country Continent 0 London UK Europe 1 Basel Switzerland Europe 2 Kairo Egypt Africa </code></pre> <p>So df1 inidicates the values for the new column in df2. How do I do this in the easiest way?</p>
<p>With a dictionary mapping country to continent, you can pass it to the <code>Series.replace()</code> method, like this:</p> <pre><code>df['Continent'] = df['Country'].replace(country_dict) </code></pre>
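A runnable sketch with the question's data (the mapping dictionary is an assumption; extend it with whatever countries you need):

```python
import pandas as pd

df = pd.DataFrame({
    "City": ["London", "Basel", "Kairo"],
    "Country": ["UK", "Switzerland", "Egypt"],
})

# hypothetical country -> continent mapping
country_dict = {"UK": "Europe", "Switzerland": "Europe", "Egypt": "Africa"}

df["Continent"] = df["Country"].replace(country_dict)
print(df["Continent"].tolist())  # ['Europe', 'Europe', 'Africa']
```

`df['Country'].map(country_dict)` works too; the difference is that `map` leaves `NaN` for countries missing from the dictionary, while `replace` keeps the original value.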
python-3.x|pandas|numpy|dataframe
0
1,904,493
58,941,935
How to scale numpy matrix in Python?
<p>I have this numpy matrix: <code>x = np.random.randn(700,2)</code>. What I want to do is scale the values of the first column to the range <code>1.5 to 11</code> and the values of the second column to the range <code>-0.5 to 5.0</code>. Does anyone have an idea how I could achieve this? Thanks in advance</p>
<p>We use a new array <code>y</code> to hold the result:</p> <pre class="lang-py prettyprint-override"><code>def scale(col, min, max): range = col.max() - col.min() a = (col - col.min()) / range return a * (max - min) + min x = np.random.randn(700,2) y = np.empty(x.shape) y[:, 0] = scale(x[:, 0], 1.5, 11) y[:, 1] = scale(x[:, 1], -0.5, 5) print(y.min(axis=0), y.max(axis=0)) </code></pre> <pre><code>[ 1.5 -0.5] [11. 5.] </code></pre>
python|numpy|machine-learning|scaling|numpy-ndarray
3
1,904,494
48,961,536
Syntax error while using subprocess and Awk
<p>Receiving this error when attempting to run a python script that is using Awk and Subprocess:</p> <pre><code>awk: cmd. line:1: {if ($6 == +) print $1, $2-1000, $2+1000, $4, $5, $6; else print $1, $3-1000, $3+1000, $4, $5, $6} awk: cmd. line:1: ^ syntax error awk: cmd. line:1: {if ($6 == +) print $1, $2-1000, $2+1000, $4, $5, $6; else print $1, $3-1000, $3+1000, $4, $5, $6} awk: cmd. line:1: </code></pre> <p>Here is the code:</p> <pre><code>import subprocess # sort files using bedops and then look for ERa peaks that are not within 1kb of the TSS of a gene with open("refGenes.sorted.bed", "wb") as genes_out: subprocess.Popen("awk -v OFS='\t' '{print $3, $5, $6, $13, 0, $4}' /gpfs/data01/heinzlab/home/cag104/reference_data/Homo_sapiens/UCSC/hg38/Annotation/Genes/refGene.txt | sort-bed -", stdout=genes_out, shell=True) with open("refGenes.promoter1k.sorted.bed", "wb") as promoters_out: subprocess.Popen("awk -v OFS='\t' '{if ($6 == '+') print $1, $2-1000, $2+1000, $4, $5, $6; else print $1, $3-1000, $3+1000, $4, $5, $6}' /gpfs/data01/heinzlab/home/cag104/projects/repeats.rosenfeld/results/04_megatrans_enhancers/refGenes.sorted.bed | sort-bed -", stdout=promoters_out, shell=True) with open("era_peaks.filtered.bed", "wb") as era_peaks_out: subprocess.Popen("sort-bed /gpfs/data01/heinzlab/home/cag104/projects/repeats.rosenfeld/results/03_peak_calls/mcf7_chip_era_e2_01_peaks.narrowPeak | bedops -n 1 - /gpfs/data01/heinzlab/home/cag104/projects/repeats.rosenfeld/results/04_megatrans_enhancers/refGenes.promoter1k.sorted.bed", stdout=era_peaks_out, shell=True) </code></pre> <p>Any ideas? I have no idea what to try at this point.</p>
<p>Your 2nd <code>awk</code> script was broken because of quotes inconsistency:</p> <pre><code>"awk -v OFS='\t' '{if ($6 == '+') print $1, $2-1000, $2+1000, $4, $5, $6; else print $1, $3-1000, $3+1000, $4, $5, $6}' /gpfs/data01/heinzlab/home/cag104/projects/repeats.rosenfeld/results/04_megatrans_enhancers/refGenes.sorted.bed | sort-bed -" ^ ^ | | (start of the command) ('breaking' point) </code></pre> <p>Pass the crucial value as a variable to <code>awk</code> script:</p> <pre><code>"awk -v OFS='\t' -v p='+' '{if ($6 == p) print $1, $2-1000, $2+1000, $4, $5, $6; else print $1, $3-1000, $3+1000, $4, $5, $6}' /gpfs/data01/heinzlab/home/cag104/projects/repeats.rosenfeld/results/04_megatrans_enhancers/refGenes.sorted.bed | sort-bed -" </code></pre>
python-3.x|awk|subprocess
0
1,904,495
70,881,360
Create new binary column conditioned on matching text
<p>I have data frame <code>dftxt_house</code> with one column <code>url_text</code> and 388 rows. I want to make a new column <code>blocked</code> conditional on the text in <code>url_text</code>. Rows in the new column <code>blocked</code> should take value <code>1</code> if the corresponding text in <code>url_text</code> contain either <code>blocked you</code> or <code>You're blocked</code> and if not value <code>0</code>. The code below works for either <code>blocked you</code> or <code>You're blocked</code> but when adding both to the code using the <code>or</code> statement it results in value <code>1</code> on all rows in the column <code>blocked</code>.</p> <p>Am I misunderstanding the <code>or</code> statement?</p> <p>A look at data frame <code>dftxt_house</code></p> <pre><code>dftxt_house.head() url_text 0 Text SIGN LIKUOL to 50409 to send this to your officials\nen)\n1 signer. Let's get to 10!\n\nAN OPEN LETTER to THE U.S. CONGRESS STARTED by JOANNE\n\nMail in Ballot being blocked by Trump\n\nHello... 1 \n\nf;\n\nMark DeSaulnier @\n\n@RepDeSaulnier\n\n@RepDeSaulnier blocked\nyou\nb ColUir- |= 0) (eel, ¢-10 macelaamiell(e\uiiate|\n\n@RepDeSaulnier and viewing\n@RepDeSaulnier’s Tweets.\n\n \n 2 JACKIE SPEIER COMMITTEE OW ARMED SERVICES\ntate Distarcr, CAuiroania SUBCOMMITTEES\n\n2465 Ravaurn House Orrice BuLOmG CHAIRWOMAN, MILiTafy PEAS INHEL\n\nWasumiaton, DC 20515-0514 , StRateaic Fonc... 3 PANGRESSMAN ERIC SWALWELL\n\nPRC &lt;€— VING CALIFORNIA'S 15TH CONGRESSIONAL DISTRICT\n\n \n\nFollow\n\nRep. Eric Swalwell |\n@RepSwalwell\n\n@RepSwalwell\nblocked you\nYou are blocked from followin... </code></pre> <p>Code that should create new binray column</p> <pre><code># match all rows with the string &quot;blocked you&quot; or &quot;You're blocked&quot;. # a user who have been blocked by another user i.e. a politician. 
# Create new column with 0 = no block and 1 = block
dftxt_house['blocked'] = dftxt_house.apply( lambda row: 1 if 'blocked you' or &quot;You're blocked&quot; in row['url_text'] else 0, axis=1 ) </code></pre> <p>Current &quot;wrong&quot; result</p> <pre><code>dftxt_house['blocked'].value_counts() 1 388 </code></pre>
<p>You are indeed misunderstanding the <code>or</code> statement: in <code>'blocked you' or &quot;You're blocked&quot; in row['url_text']</code>, the non-empty string <code>'blocked you'</code> is always truthy, so the condition is true for every row. Each side of the <code>or</code> needs its own <code>in</code> test:</p> <pre><code>dftxt_house['blocked'] = dftxt_house.apply( lambda row: 1 if 'blocked you' in row['url_text'] or &quot;You're blocked&quot; in row['url_text'] else 0, axis=1 ) </code></pre>
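An equivalent vectorised sketch with `str.contains` (three toy rows standing in for the 388 real ones):

```python
import pandas as pd

dftxt_house = pd.DataFrame({
    "url_text": [
        "@RepDeSaulnier blocked you",
        "You're blocked from following",
        "an ordinary tweet",
    ],
})

# regex alternation: matches rows containing either phrase
pattern = "blocked you|You're blocked"
dftxt_house["blocked"] = dftxt_house["url_text"].str.contains(pattern).astype(int)
print(dftxt_house["blocked"].tolist())  # [1, 1, 0]
```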
python|pandas|if-statement|lambda
0
1,904,496
67,171,052
Discord schedule function at specific time every day can't pass ctx
<p>I want to run one of the functions every day at 0:00 but I keep getting errors if I use ctx param in the function I get this error:</p> <pre><code>test() missing 1 required positional argument: 'ctx' </code></pre> <p>--- Function ---</p> <pre><code>async def test(ctx): #identical lines skipped category = discord.utils.get(ctx.guild.categories, name = &quot;test&quot;) #identical lines skipped embed.set_thumbnail(url=f'{ctx.author.avatar_url}?size=128') #other lines schedule.every().day.at(&quot;00:00&quot;).do(test) async def task(): while True: schedule.run_pending() await asyncio.sleep(1) @client.event async def on_ready(): client.loop.create_task(task()) </code></pre> <p>But if I don't use the ctx param in the function I can't fetch the &quot;test&quot; category channel names because getting this error:</p> <pre><code>Missing &quot;ctx&quot; parameter </code></pre> <p><strong>UPDATE:</strong></p> <pre><code>async def test(): chan = client.get_channel(833833396979236896) #identical lines skipped def not_pinned(msg): return not msg.pinned await chan.purge(limit=100, check=not_pinned) category = client.get_channel(833732406140993536) #other lines schedule.every().day.at(&quot;16:31&quot;).do(test) loop = asyncio.get_event_loop() while True: loop.run_until_complete(schedule.run_pending()) time.sleep(0.1) </code></pre> <p>Now it runs the schedule for example at 16:31 but I get this error for channel purge (<strong>this is really strange for me because if I don't use schedule then the test function works like a charm but I need a schedule which runs every day at the specific time</strong>):</p> <pre><code>'NoneType' object has no attribute 'purge' </code></pre>
<p>The error is pretty self-explanatory: you are using <code>ctx</code>, but don't have one. If you really want to use <code>ctx</code>, I suggest starting the task from a command that receives it; otherwise you have to change all references of <code>ctx</code> to hard-coded values.<br /> Switching to hard-coded channel</p> <pre class="lang-py prettyprint-override"><code>async def test(): #removed ctx #identical lines skipped category = discord.utils.get(chan.guild.categories, name = &quot;test&quot;) #identical lines skipped embed.set_thumbnail(url=client.user.avatar_url_as(size=1024)) #other lines async def on_ready(): asyncio.create_task(test()) </code></pre> <p>We are simply removing the ctx references and replacing them with constants, or hard-coded references.</p> <h2>References:</h2> <ul> <li><a href="https://discordpy.readthedocs.io/en/stable/api.html?highlight=client%20user#discord.Client.user" rel="nofollow noreferrer"><code>client.user</code></a></li> <li><a href="https://discordpy.readthedocs.io/en/stable/api.html?highlight=client%20user#discord.ClientUser.avatar_url_as" rel="nofollow noreferrer"><code>ClientUser.avatar_url_as</code></a> I noticed that you manually set the size, which is bad practice.</li> <li><a href="https://discordpy.readthedocs.io/en/stable/api.html?highlight=channel#discord.TextChannel.guild" rel="nofollow noreferrer">channel.guild</a></li> </ul>
python|discord
0
1,904,497
42,887,955
how to download files using pyqt5 webenginewidgets
<p>I am learning PyQt5 and built a simple web browser with the PyQt5 WebEngine. Now, out of curiosity, I want to know whether there is a way to download files using this minimalistic browser. Thanks in advance.</p> <pre><code>import sys from PyQt5 import QtWidgets,QtGui,QtCore from PyQt5.QtWebEngineWidgets import * app=QtWidgets.QApplication(sys.argv) w=QWebEngineView() w.page().fullScreenRequested.connect(QWebEngineFullScreenRequest.accept) w.load(QtCore.QUrl('https://google.com')) w.showMaximized() app.exec_() </code></pre>
<p>The simplest possible way to download would be something like this...</p> <pre><code>def _downloadRequested(item): # QWebEngineDownloadItem print('downloading to', item.path()) item.accept() w.page().profile().downloadRequested.connect(_downloadRequested) </code></pre>
python|python-3.x|pyqt|pyqt5|python-webbrowser
3
1,904,498
50,671,679
Set/Modify the variable used in a function from imported module
<p>Consider the following code:</p> <pre><code>##testing.py namespace = "original" def print_namespace(): print ("Namespace is", namespace) def get_namespace_length(_str = namespace): print(len(_str)) ##Main import testing testing.namespace = "test" testing.print_namespace() testing.get_namespace_length() </code></pre> <p>print_namespace() prints 'test' as expected, but get_namespace_length() still prints 8, which is the length of 'original'. How can I make get_namespace_length() pick up the modified variable?</p> <p>The use case is that several functions in the imported module use the same variable; if I can modify/set that variable once, I can avoid explicitly passing the new value to each function. Can someone advise?</p> <p>Also, it doesn't have to be implemented in the way shown above, as long as it works. (a global variable, etc.)</p>
<p>The default argument of <code>get_namespace_length</code> is the culprit.</p> <p>In short, default argument values are evaluated once, when the <code>def</code> statement runs. Importing a module executes its entire body (try putting a <code>print()</code> statement at the end of testing.py to see), so <code>_str = namespace</code> captured <code>&quot;original&quot;</code> at import time; rebinding <code>testing.namespace = &quot;test&quot;</code> afterwards never touches that stored default.</p> <p>So what you really want to do to obtain your length of 4 is change testing.py to:</p> <pre><code>namespace = "original" def print_namespace(): print ("Namespace is", namespace) def get_namespace_length(): _str = namespace print(len(_str)) </code></pre> <p>Or just <code>print(len(namespace))</code>. Hope that helps!</p>
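A runnable distillation of the difference (the second function is a hypothetical workaround, not part of the question; return values replace prints so the results are easy to check):

```python
namespace = "original"

def get_namespace_length(_str=namespace):
    # the default was bound to "original" when this def ran
    return len(_str)

def get_namespace_length_live(_str=None):
    if _str is None:
        _str = namespace  # looked up at call time instead
    return len(_str)

namespace = "test"
print(get_namespace_length())       # 8, the stale default
print(get_namespace_length_live())  # 4, sees the rebound name
```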
python-3.x|module|global-variables
1
1,904,499
26,888,549
Adjust widths of QHBoxLayout
<p>Here is the code:</p> <pre><code>#!/usr/bin/env python3 import sys, time from PySide import QtCore, QtGui import base64 # Usage: Toast('Message') class Toast(QtGui.QDialog): def __init__(self, title, msg, duration=2): QtGui.QDialog.__init__(self) self.duration = duration self.title_label = QtGui.QLabel(title) self.title_label.setAlignment(QtCore.Qt.AlignLeft) self.msg_label = QtGui.QLabel(msg) self.icon_button = QLabelButton() img_b64 = "iVBORw0KGgoAAAANSUhEUgAAABgAAAAYCAYAAADgdz34AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAAITgAACE4BjDEA7AAAABl0RVh0U29mdHdhcmUAd3d3Lmlua3NjYXBlLm9yZ5vuPBoAAABJdEVYdENvcHlyaWdodABQdWJsaWMgRG9tYWluIGh0dHA6Ly9jcmVhdGl2ZWNvbW1vbnMub3JnL2xpY2Vuc2VzL3B1YmxpY2RvbWFpbi9Zw/7KAAAB2ElEQVRIibWVPW/TUBiFz7mJTBFSGgnUqmABRgpMUYi53pCK1IWxUxd2BgYk/goDAzuq+AFILEhIZUuq/ACPrYRKGSJPdHkPQx3UOK7tJOKd7Guf57nXH++lJFRVr9e70el03pLcBnAnH/4t6SzLsvdpml5U5duVdABhGDLLsj6AjSvD9wFshWHIujzrVgBcrqLb7b6U9AoASH6aTqdf62YPAK6WDiBN0wszO52dm9lpEzhQs4LhcNhzzj13zj2TtDUXJH+Z2bGZ/ZhMJulSApL03r+WtNdoluS38Xj8USWw0kcUx/F+UzgASNqL43i/7NqCwHu/A+CgKfxKHeTZagGAPsnWsvQ8028ieLIsvCq7IJD0eFV6WXZO4L3fzFvCSkVy23u/ea2A5KNV4dcx5gRm9nBdQZFRfAcP1hUUGXMC59zagiLjn2AwGNwCsPCjrFA7OWteEATBrqRG3bWqJLkgCHZn523gsrnFcdwi+YXkrGEJAMxMs+OSonNutukwF9DMWiQpSUyS5Kmku+vOvKzM7KxtZu8A3PwfAgB/2iQ/m9m9qrtIxgBuF4bPJY1qBD8b7clJkryQ9KYg/TAajb7XZRt9NVEUHUk6BHAC4ETSYRRFR02yfwEMBLRPQVtfqgAAAABJRU5ErkJggg==" pixmap = QtGui.QPixmap() pixmap.loadFromData(base64.b64decode(img_b64)) self.icon_button.setPixmap(pixmap) self.icon_button.resize(20, 20) self.connect(self.icon_button, QtCore.SIGNAL("clicked()"), self.close) title_layout = QtGui.QVBoxLayout() title_layout.addWidget(self.title_label) title_layout.addWidget(self.msg_label) layout = QtGui.QHBoxLayout() layout.addWidget(self.icon_button) layout.addLayout(title_layout) self.setGeometry(0, 0, 200, 70) self.setLayout(layout) self.setWindowFlags(QtCore.Qt.FramelessWindowHint) # self.setStyleSheet("border: 1px solid red; border-radius: 5px;") self.toastThread = 
ToastThread(self.duration) self.toastThread.finished.connect(self.close) self.toastThread.start() class ToastThread(QtCore.QThread): def __init__(self, n_seconds): QtCore.QThread.__init__(self) self.n_seconds = n_seconds def run(self): time.sleep(self.n_seconds) class QLabelButton(QtGui.QLabel): def __init(self, parent): QLabel.__init__(self, parent) def mouseReleaseEvent(self, ev): self.emit(QtCore.SIGNAL('clicked()')) if __name__ == "__main__": app = QtGui.QApplication(sys.argv) program = Toast("hi", "there", 10) program.show() sys.exit(app.exec_()) </code></pre> <p><img src="https://i.stack.imgur.com/O5vnS.png" alt="enter image description here"></p> <p>Apparent the image label on the left is taking too much space. How can I fix this?</p>
<p>By default a horizontal layout shares space according to each widget's size hint, but you can bias the ratio with stretch factors:</p> <pre><code> layout.addWidget(self.icon_button, 1) layout.addLayout(title_layout, 3) </code></pre> <p>So that gives the title three times as much space as the icon.</p>
python|qt|user-interface|pyside
4