Dataset schema: Unnamed: 0 (int64, values 0 to 1.91M), id (int64, values 337 to 73.8M), title (string, lengths 10 to 150), question (string, lengths 21 to 64.2k), answer (string, lengths 19 to 59.4k), tags (string, lengths 5 to 112), score (int64, values -10 to 17.3k)
1,909,500
59,817,333
Spacy Phrase Matcher space sensitive issue
<pre><code>terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."] doc = nlp("German Chancellor Angela Merkel and US President Barack Obama " "converse in the Oval Office inside the White House in Washington, D.C.") </code></pre> <p>If I enter an extra space between the words "Barack Obama", the phrase matcher does not work since it is space sensitive. Is there a way to overcome this space sensitive issue?</p> <ul> <li>Operating System: Windows 8</li> <li>Python Version Used: 3.7</li> <li>spaCy Version Used: 2.2.3</li> <li>Environment Information: Conda</li> </ul>
<pre><code>import re re.sub(' +', ' ', "barack  obama") #op 'barack obama' </code></pre> <p>Referring to the docs <a href="https://spacy.io/api/phrasematcher" rel="nofollow noreferrer">https://spacy.io/api/phrasematcher</a></p> <pre><code>from spacy.matcher import PhraseMatcher import en_core_web_sm nlp = en_core_web_sm.load() matcher = PhraseMatcher(nlp.vocab) matcher.add("OBAMA", None, nlp("Barack Obama")) doc = nlp("Barack Obama urges Congress to find courage to defend his healthcare reforms") matches = matcher(doc) #op [(7732777389095836264, 0, 2)] </code></pre> <p>But when the string contains multiple spaces it will return an empty list, i.e. here there are two spaces between "Barack" and "Obama":</p> <pre><code>doc = nlp("Barack  Obama urges Congress to find courage to defend his healthcare reforms") print(matcher(doc)) #op [] </code></pre> <p>To solve this, I removed the extra spaces from the given string before passing it to the model:</p> <pre><code>string_= 'Barack  Obama urges Congress to find courage to defend his healthcare reforms' space_removed_string = re.sub(' +', ' ', string_) #now passing the string in model doc = nlp(space_removed_string) print(matcher(doc)) #op [(7732777389095836264, 0, 2)] </code></pre>
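<p>A minimal end-to-end sketch of the same idea, normalizing whitespace in both the pattern terms and the text before matching. It assumes the spaCy 2.x <code>PhraseMatcher.add(key, on_match, *docs)</code> signature used in the question; the helper name <code>normalize_spaces</code> is illustrative:</p> <pre><code>import re
from spacy.matcher import PhraseMatcher
import en_core_web_sm

def normalize_spaces(text):
    # collapse any run of whitespace into a single space
    return re.sub(r'\s+', ' ', text).strip()

nlp = en_core_web_sm.load()
matcher = PhraseMatcher(nlp.vocab)

terms = ["Barack Obama", "Angela Merkel", "Washington, D.C."]
# normalize the patterns as well, in case they contain stray spaces
matcher.add("NAMES", None, *[nlp(normalize_spaces(t)) for t in terms])

raw = "German Chancellor Angela  Merkel met US President Barack   Obama"
doc = nlp(normalize_spaces(raw))
print(matcher(doc))  # two matches expected
</code></pre>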
python|nlp|spacy
0
1,909,501
59,776,560
Textual parsing
<p>I am a newby with Python and Panda, but i would like to parse from multiple downloaded files (which have the same format). On every HTML there is an section like below where the executives are mentioned.</p> <pre><code>&lt;DIV id=article_participants class="content_part hid"&gt; &lt;P&gt;Redhill Biopharma Ltd. (NASDAQ:&lt;A title="" href="http://seekingalpha.com/symbol/rdhl" symbolSlug="RDHL"&gt;RDHL&lt;/A&gt;)&lt;/P&gt; &lt;P&gt;Q4 2014 &lt;SPAN class=transcript-search-span style="BACKGROUND-COLOR: yellow"&gt;Earnings&lt;/SPAN&gt; Conference &lt;SPAN class=transcript-search-span style="BACKGROUND-COLOR: #f38686"&gt;Call&lt;/SPAN&gt;&lt;/P&gt; &lt;P&gt;February 26, 2015 9:00 AM ET&lt;/P&gt; &lt;P&gt;&lt;STRONG&gt;Executives&lt;/STRONG&gt;&lt;/P&gt; &lt;P&gt;Dror Ben Asher - CEO&lt;/P&gt; &lt;P&gt;Ori Shilo - Deputy CEO, Finance and Operations&lt;/P&gt; &lt;P&gt;Guy Goldberg - Chief Business Officer&lt;/P&gt; </code></pre> <p>and further in the files there is a section called "DIV id=article_qanda class="content_part hid" where the executives like Ori Shilo is named followed by an answer, like: </p> <pre><code>&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class=answer&gt;Ori Shilo&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt; &lt;P&gt;Good morning, Vernon. Both safety which is obvious and fertility analysis under the charter of the data and safety monitoring board will be - will be on up.&lt;/P&gt; </code></pre> <p>Till now i only succeeded with an html parser for one individual by name to collect all their answers. I am not sure how to proceed and base the code on a variable list of executives. Does someone have a suggestion?</p> <pre><code>import textwrap import os from bs4 import BeautifulSoup directory ='C:/Research syntheses - Meta analysis/SeekingAlpha/out' for filename in os.listdir(directory): if filename.endswith('.html'): fname = os.path.join(directory,filename) with open(fname, 'r') as f: soup = BeautifulSoup(f.read(),'html.parser') print('{:&lt;30} {:&lt;70}'.format('Name', 'Answer')) print('-' * 101) for answer in soup.select('p:contains("Question-and-Answer Session") ~ strong:contains("Dror Ben Asher") + p'): txt = answer.get_text(strip=True) s = answer.find_next_sibling() while s: if s.name == 'strong' or s.find('strong'): break if s.name == 'p': txt += ' ' + s.get_text(strip=True) s = s.find_next_sibling() txt = ('\n' + ' '*31).join(textwrap.wrap(txt)) print('{:&lt;30} {:&lt;70}'.format('Dror Ben Asher - CEO', txt), file=open("output.txt", "a")) </code></pre>
<p>To give some color to my original comment, I'll use a simple example. Let's say you've got some code that is looking for the string "Hello, World!" in a file, and you want the line numbers to be aggregated into a list. Your first attempt might look like:</p> <pre class="lang-py prettyprint-override"><code># where I will aggregate my results line_numbers = [] with open('path/to/file.txt') as fh: for num, line in enumerate(fh): if 'Hello, World!' in line: line_numbers.append(num) </code></pre> <p>This code snippet works perfectly well. However, it only works to check <code>'path/to/file.txt'</code> for <code>'Hello, World!'</code>. </p> <p>Now, you want to be able to change the string you are looking for. This is analogous to saying "I want to check for different executives". You could use a function to do this. A function allows you to add flexibility into a piece of code. In this simple example, I would do:</p> <pre class="lang-py prettyprint-override"><code># Now I'm checking for a parameter string_to_search # that I can change when I call the function def match_in_file(string_to_search): line_numbers = [] with open('path/to/file.txt') as fh: for num, line in enumerate(fh): if string_to_search in line: line_numbers.append(num) return line_numbers # now I'm just calling that function here line_numbers = match_in_file("Hello, World!") </code></pre> <p>You'd still have to make a code change, but this becomes much more powerful if you wanted to search for lots of strings. I could feasibly use this function in a loop if I wanted to (though I would do things a little differently in practice), for the sake of the example, I now have the power to do:</p> <pre class="lang-py prettyprint-override"><code>list_of_strings = [ "Hello, World!", "Python", "Functions" ] for s in list_of_strings: line_numbers = match_in_file(s) print(f"Found {s} on lines ", *line_numbers) </code></pre> <p>Generalized to your specific problem, you'll want a parameter for the <code>executive</code> that you want to search for. Your function signature might look like:</p> <pre class="lang-py prettyprint-override"><code>def find_executive(soup, executive): for answer in soup.select(f'p:contains("Question-and-Answer Session") ~ strong:contains({executive}) + p'): # rest of code </code></pre> <p>You've already read in the <code>soup</code>, so you don't need to do that again. You only need to change the executive in your select statement. The reason you want a parameter for <code>soup</code> is so you aren't relying on variables in global scope.</p>
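<p>Pulling that together with the loop from the question, a sketch might look like the following (the executive list is hard-coded here for illustration; in practice it could be read from the <code>article_participants</code> block of each file):</p> <pre class="lang-py prettyprint-override"><code>import os
import textwrap
from bs4 import BeautifulSoup

def answers_for_executive(soup, executive):
    # collect every answer paragraph spoken by one executive
    results = []
    selector = ('p:contains("Question-and-Answer Session") ~ '
                'strong:contains("{}") + p'.format(executive))
    for answer in soup.select(selector):
        txt = answer.get_text(strip=True)
        s = answer.find_next_sibling()
        while s:
            if s.name == 'strong' or s.find('strong'):
                break
            if s.name == 'p':
                txt += ' ' + s.get_text(strip=True)
            s = s.find_next_sibling()
        results.append(txt)
    return results

executives = ['Dror Ben Asher', 'Ori Shilo', 'Guy Goldberg']   # illustrative
directory = 'C:/Research syntheses - Meta analysis/SeekingAlpha/out'

for filename in os.listdir(directory):
    if not filename.endswith('.html'):
        continue
    with open(os.path.join(directory, filename)) as f:
        soup = BeautifulSoup(f.read(), 'html.parser')
    for name in executives:
        for txt in answers_for_executive(soup, name):
            print('{:&lt;30} {}'.format(name, textwrap.shorten(txt, 70)))
</code></pre>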
python|html|pandas
0
1,909,502
59,739,526
why are pylint's error squiggle lines not showing in python visual studio code?
<p>i'm using vscode for python3 in Ubuntu. Error-squiggle-lines have stopped working for Python(it works for other languages). And I am using Microsoft's Python extension.<br> <code>vscode v1.41.1</code> <code>Ubuntu v18.04</code></p> <p>this is what i have tried:</p> <ul> <li>I thought maybe it's because i installed anaconda so uninstalled it but didn't fix it.</li> <li>then I re-installed vs code after deleting its config from <code>.config/code</code> but that didn't work either.</li> <li>also set python linting to true from command palette</li> </ul> <p>it's not showing error squiggle lines: <a href="https://i.stack.imgur.com/PXJQ3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PXJQ3.png" alt="my vscode looks like this:"></a></p> <p>here is the Microsoft's python extension's contributions regarding linting(sorry for poor readability):</p> <pre><code>Whether to lint Python files. true python.linting.flake8Args Arguments passed in. Each argument is a separate item in the array. python.linting.flake8CategorySeverity.E Severity of Flake8 message type 'E'. Error python.linting.flake8CategorySeverity.F Severity of Flake8 message type 'F'. Error python.linting.flake8CategorySeverity.W Severity of Flake8 message type 'W'. Warning python.linting.flake8Enabled Whether to lint Python files using flake8 false python.linting.flake8Path Path to flake8, you can use a custom version of flake8 by modifying this setting to include the full path. flake8 python.linting.ignorePatterns Patterns used to exclude files or folders from being linted. .vscode/*.py,**/site-packages/**/*.py python.linting.lintOnSave Whether to lint Python files when saved. true python.linting.maxNumberOfProblems Controls the maximum number of problems produced by the server. 100 python.linting.banditArgs Arguments passed in. Each argument is a separate item in the array. python.linting.banditEnabled Whether to lint Python files using bandit. false python.linting.banditPath Path to bandit, you can use a custom version of bandit by modifying this setting to include the full path. bandit python.linting.mypyArgs Arguments passed in. Each argument is a separate item in the array. --ignore-missing-imports,--follow-imports=silent,--show-column-numbers python.linting.mypyCategorySeverity.error Severity of Mypy message type 'Error'. Error python.linting.mypyCategorySeverity.note Severity of Mypy message type 'Note'. Information python.linting.mypyEnabled Whether to lint Python files using mypy. false python.linting.mypyPath Path to mypy, you can use a custom version of mypy by modifying this setting to include the full path. mypy python.linting.pycodestyleArgs Arguments passed in. Each argument is a separate item in the array. python.linting.pycodestyleCategorySeverity.E Severity of pycodestyle message type 'E'. Error python.linting.pycodestyleCategorySeverity.W Severity of pycodestyle message type 'W'. Warning python.linting.pycodestyleEnabled Whether to lint Python files using pycodestyle false python.linting.pycodestylePath Path to pycodestyle, you can use a custom version of pycodestyle by modifying this setting to include the full path. pycodestyle python.linting.prospectorArgs Arguments passed in. Each argument is a separate item in the array. python.linting.prospectorEnabled Whether to lint Python files using prospector. false python.linting.prospectorPath Path to Prospector, you can use a custom version of prospector by modifying this setting to include the full path. 
prospector python.linting.pydocstyleArgs Arguments passed in. Each argument is a separate item in the array. python.linting.pydocstyleEnabled Whether to lint Python files using pydocstyle false python.linting.pydocstylePath Path to pydocstyle, you can use a custom version of pydocstyle by modifying this setting to include the full path. pydocstyle python.linting.pylamaArgs Arguments passed in. Each argument is a separate item in the array. python.linting.pylamaEnabled Whether to lint Python files using pylama. false python.linting.pylamaPath Path to pylama, you can use a custom version of pylama by modifying this setting to include the full path. pylama python.linting.pylintArgs Arguments passed in. Each argument is a separate item in the array. python.linting.pylintCategorySeverity.convention Severity of Pylint message type 'Convention/C'. Information python.linting.pylintCategorySeverity.error Severity of Pylint message type 'Error/E'. Error python.linting.pylintCategorySeverity.fatal Severity of Pylint message type 'Fatal/F'. Error python.linting.pylintCategorySeverity.refactor Severity of Pylint message type 'Refactor/R'. Hint python.linting.pylintCategorySeverity.warning Severity of Pylint message type 'Warning/W'. Warning python.linting.pylintEnabled Whether to lint Python files using pylint. true python.linting.pylintPath Path to Pylint, you can use a custom version of pylint by modifying this setting to include the full path. pylint python.linting.pylintUseMinimalCheckers Whether to run Pylint with minimal set of rules. true </code></pre> <p>python.linting.pylintEnabled is: true</p> <p>python.linting.pylintPath is: pylint</p> <p>all the errors in visual studio's console of developer tools:</p> <pre><code>console.ts:137 [Extension Host] Error Python Extension: 2020-01-18 18:35:53: Failed to serialize gatherRules for DATASCIENCE.SETTINGS [TypeError: Cannot convert object to primitive value at Array.join (&lt;anonymous&gt;) at Array.toString (&lt;anonymous&gt;) at /home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:1:12901 at Array.forEach (&lt;anonymous&gt;) at Object.l [as sendTelemetryEvent] (/home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:1:12818) at C.sendSettingsTelemetry (/home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:75:707093) at C.r.value (/home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:1:87512) at Timeout._onTimeout (/home/manik/.vscode/extensions/ms-python.python-2020.1.58038/out/client/extension.js:1:86031) at listOnTimeout (internal/timers.js:531:17) at processTimers (internal/timers.js:475:7)] t.log @ console.ts:137 2console.ts:137 [Extension Host] Notification handler 'textDocument/publishDiagnostics' failed with message: Cannot read property 'connected' of undefined t.log @ console.ts:137 2console.ts:137 [Extension Host] (node:21707) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead. 
t.log @ console.ts:137 $logExtensionHostMessage @ mainThreadConsole.ts:39 _doInvokeHandler @ rpcProtocol.ts:398 _invokeHandler @ rpcProtocol.ts:383 _receiveRequest @ rpcProtocol.ts:299 _receiveOneMessage @ rpcProtocol.ts:226 (anonymous) @ rpcProtocol.ts:101 fire @ event.ts:581 fire @ ipc.net.ts:453 _receiveMessage @ ipc.net.ts:733 (anonymous) @ ipc.net.ts:592 fire @ event.ts:581 acceptChunk @ ipc.net.ts:239 (anonymous) @ ipc.net.ts:200 t @ ipc.net.ts:28 emit @ events.js:200 addChunk @ _stream_readable.js:294 readableAddChunk @ _stream_readable.js:275 Readable.push @ _stream_readable.js:210 onStreamRead @ internal/stream_base_commons.js:166 </code></pre> <p>output for <code>python</code> in <code>output</code> panel:</p> <pre><code>User belongs to experiment group 'AlwaysDisplayTestExplorer - control' User belongs to experiment group 'ShowPlayIcon - start' User belongs to experiment group 'ShowExtensionSurveyPrompt - enabled' User belongs to experiment group 'DebugAdapterFactory - experiment' User belongs to experiment group 'AA_testing - experiment' &gt; conda --version &gt; pyenv root &gt; python3.7 -c "import sys;print(sys.executable)" &gt; python3.6 -c "import sys;print(sys.executable)" &gt; python3 -c "import sys;print(sys.executable)" &gt; python2 -c "import sys;print(sys.executable)" &gt; python -c "import sys;print(sys.executable)" &gt; /usr/bin/python3.8 -c "import sys;print(sys.executable)" &gt; conda info --json &gt; conda env list Starting Microsoft Python language server. &gt; conda --version &gt; /usr/bin/python3.8 ~/.vscode/extensions/ms-python.python-2020.1.58038/pythonFiles/interpreterInfo.py &gt; /usr/bin/python3.8 ~/.vscode/extensions/ms-python.python-2020.1.58038/pythonFiles/interpreterInfo.py </code></pre> <p><strong>how to get the squiggle lines to work again?</strong></p>
<p>In your <code>settings.json</code> file (search for <code>settings.json</code> in the command palette), declare the following:</p> <p><code>&quot;python.linting.pylintEnabled&quot;: true, &quot;python.jediEnabled&quot;: false</code></p> <p>If you just want the changes in your workspace, then change the <code>settings.json</code> file in the <code>.vscode</code> folder.</p> <p>In the latest version of Visual Studio Code, the workspace is not registering settings from checkboxes, so you have to explicitly declare in <code>settings.json</code> which settings you want to enable for your workspace. Flake8 is not affected by this. Pylint and the Microsoft Python Language Server seem to be broken because of it.</p> <p>Side note: got this solution from sys-temd's reply on <a href="https://github.com/microsoft/vscode-python/issues/9657#issuecomment-575935065" rel="noreferrer">github.com/microsoft/vscode-python/issues</a></p>
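<p>For reference, a minimal <code>settings.json</code> along those lines might look like the snippet below (only the linting-related keys are shown; any other settings you already have stay as they are):</p> <pre><code>{
    "python.linting.enabled": true,
    "python.linting.pylintEnabled": true,
    "python.jediEnabled": false
}
</code></pre>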
python|ubuntu|visual-studio-code|intellisense|linter
10
1,909,503
49,331,484
Python 2.7 scoping issue with variable/method when placed inside a function
<p>I'm new to python and notice this code works when written without being put inside a function. </p> <pre><code>from selenium import webdriver driver = lambda: None def setup_browser(): # unnecessary code removed driver = webdriver.Firefox() return driver setup_browser() driver.set_window_size(1000, 700) driver.get("https://icanhazip.com/") </code></pre> <p>As shown above, I get this error:</p> <pre><code>`AttributeError: 'function' object has no attribute 'set_window_size' </code></pre> <p>My reading is that driver is not being updated before it is called. Why is this?</p>
<p>The problem is that inside of <code>setup_browser()</code> you're setting a local variable named <code>driver</code>, but you are not modifying the global variable <code>driver</code>. To do that, you need to use the <code>global</code> keyword:</p> <pre><code>def setup_browser(): global driver driver = webdriver.Firefox() return driver </code></pre> <p>However, overriding the <code>driver</code> global variable and returning it at the same time is redundant. It would be better to not define <code>driver</code> globally as a null function, but to assign it directly. E.g.,</p> <pre><code>from selenium import webdriver def setup_browser(): driver = webdriver.Firefox() return driver driver = setup_browser() driver.set_window_size(1000, 700) driver.get("https://icanhazip.com/") </code></pre>
python
2
1,909,504
24,990,607
Bulbs python Connection to a remote TitanDB + Rexster
<p>I'm using TitanGraphDB + Cassandra. I'm starting Titan as follows</p> <pre><code>cd titan-cassandra-0.3.1 bin/titan.sh config/titan-server-rexster.xml config/titan-server-cassandra.properties </code></pre> <p>I have a Rexster shell that I can use to communicate to Titan + Cassandra above.</p> <pre><code>cd rexster-console-2.3.0 bin/rexster-console.sh </code></pre> <p>I'm attempting to model a network topology using Titan Graph DB. I want to program the Titan Graph DB from my python program. I'm using <code>python bulbs</code> package for that.My code to create the graph is as follows.</p> <pre><code>from bulbs.titan import Graph self.g = Graph() </code></pre> <p>Now I have rexster-console and Titan running on machine with IP Address <code>192.168.65.93</code>.If my python application is runnnig on the same machine I use <code>self.g = Graph()</code>.</p> <p>What if I want to connect to the <code>Titan AND Rexster</code> running on machine with IP <code>192.168.65.93</code> from python application on <code>192.168.65.94</code></p> <p>How do I do that? Can I pass some parameter (e.g a config file to Graph())? Where can I find it?</p>
<p>Simply set the Titan graph URI in the Bulbs <code>Config</code> object:</p> <pre><code>&gt;&gt;&gt; from bulbs.titan import Graph, Config &gt;&gt;&gt; config = Config('http://192.168.65.93:8182/graphs/graph') &gt;&gt;&gt; g = Graph(config) </code></pre> <p>See Bulbs <code>Config</code>...</p> <ul> <li><a href="http://bulbflow.com/docs/api/bulbs/config/" rel="nofollow">http://bulbflow.com/docs/api/bulbs/config/</a></li> <li><a href="https://github.com/espeed/bulbs/blob/master/bulbs/config.py" rel="nofollow">https://github.com/espeed/bulbs/blob/master/bulbs/config.py</a></li> </ul> <p>And Bulbs <code>Graph</code> (note Titan's <code>Graph</code> class is a subclass of Rexster's <code>Graph</code> class)...</p> <ul> <li><a href="http://bulbflow.com/docs/api/bulbs/rexster/graph/" rel="nofollow">http://bulbflow.com/docs/api/bulbs/rexster/graph/</a> </li> <li><a href="https://github.com/espeed/bulbs/blob/master/bulbs/titan/graph.py" rel="nofollow">https://github.com/espeed/bulbs/blob/master/bulbs/titan/graph.py</a></li> </ul> <p>And I encourage you to read through the Bulbs Quickstart and other docs because many of these questions are answered in there...</p> <ul> <li><a href="http://bulbflow.com/docs/" rel="nofollow">http://bulbflow.com/docs/</a></li> <li><a href="http://bulbflow.com/quickstart/" rel="nofollow">http://bulbflow.com/quickstart/</a></li> </ul> <p>The Quickstart uses <code>bulbs.neo4jserver</code> as an example, but since the Bulbs API is consistent regardless of the backend server you are using, the Quickstart examples are also relevant to Titan Server and Rexster.</p> <p>To adapt the Bulbs Quickstart for Titan or Rexster, simply change the <code>Graph</code> import from...</p> <pre><code>&gt;&gt;&gt; from bulbs.neo4jserver import Graph &gt;&gt;&gt; g = Graph() </code></pre> <p>...to...</p> <pre><code>&gt;&gt;&gt; from bulbs.titan import Graph &gt;&gt;&gt; g = Graph() </code></pre> <p>...or...</p> <pre><code>&gt;&gt;&gt; from bulbs.rexster import Graph &gt;&gt;&gt; g = Graph() </code></pre>
python|cassandra|titan|bulbs|rexster
2
1,909,505
70,829,501
How to add Column to DataFrame while keeping dates correlated
<p>I am working with Pandas and Matplotlib to chart some Crypto Transactions.</p> <p>The column I am working with is <code>Amount</code>, where I am trying to chart the incoming and outgoing transactions. Incoming has <code>a +</code> in front of the number, and outgoing has <code>a -</code>.</p> <p>The goal is to use Matplotlib to create a bar chart with the incoming and outgoing transactions.</p> <p>What I think needs to be done is for the <code>Amount</code> column to be sorted by if it contains <code>a +</code> or <code>a -</code>, and then each type have their own column that is correlated with the date of the transaction.</p> <p>For example, the +20,000 Transaction on the first row would be filed under the <code>Incoming Transactions</code> column, while on the same row that it was originally in (to keep the same date).</p> <p>I have attempted to create this but based on my error code I am having trouble when it comes to creating a new column.</p> <pre><code>parse_dates = ['Time'] df = pd.read_csv('DSb5CvAXhXnzFoxmiMaWpgxjDF6CfMK7h2.csv', index_col=0, parse_dates=parse_dates) df2 = df.assign(Outgoing = df.loc[df[&quot;Amount&quot;].str.contains('\-', regex=True)]) #outgoing_transactions = df.loc[df[&quot;Amount&quot;].str.contains('\-', regex=True)] #incoming_transactions = df.loc[df[&quot;Amount&quot;].str.contains('\+', regex=True)] df2 </code></pre> <p>This is the error code I receive:</p> <blockquote> <p>ValueError: Wrong number of items passed 5, placement implies 1</p> </blockquote> <p><a href="https://i.stack.imgur.com/dBF5w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dBF5w.png" alt="enter image description here" /></a></p>
<p>You could use a regular expression to <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>extract</code></a> from the <code>Amount</code> column only the value relative to the Dogecoin. Then, create a variable to indicate the transaction's direction and use it with accessor <code>.dt.date</code> to create the groups.</p> <p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.agg.html" rel="nofollow noreferrer"><code>agg</code></a> <code>sum</code> to add values within the same day and transaction type, follow by <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> to pivot the transaction type into two different columns. Use the columns created to plot the data using two different <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.bar.html" rel="nofollow noreferrer"><code>plt.bar</code></a> commands, this will give the effect of two bars on the same day, one for each transaction type.</p> <p><strong>df</strong> used as input</p> <pre class="lang-py prettyprint-override"><code> Time Amount 0 2022-01-01 00:00:00.000000000 +7,965,429.87 DOGE (18,343.48 USD) 1 2022-01-01 07:30:54.545454545 -5,986,584.84 DOGE (15,601.86 USD) 2 2022-01-01 15:01:49.090909090 +999,749.16 DOGE (45,924.89 USD) 3 2022-01-01 22:32:43.636363636 +6,011,150.12 DOGE (70,807.26 USD) 4 2022-01-02 06:03:38.181818181 -564,115.79 DOGE (72,199.88 USD) .. ... ... 95 2022-01-30 17:56:21.818181818 -6,454,722.96 DOGE (17,711.07 USD) 96 2022-01-31 01:27:16.363636363 -4,699,445.14 DOGE (27,956.03 USD) 97 2022-01-31 08:58:10.909090909 -3,701,587.0 DOGE (1,545.66 USD) 98 2022-01-31 16:29:05.454545454 -3,307,503.05 DOGE (55,276.5 USD) 99 2022-02-01 00:00:00.000000000 +9,636,199.77 DOGE (85,300.95 USD) [100 rows x 2 columns] </code></pre> <pre class="lang-py prettyprint-override"><code>df['DOGE'] = df['Amount'] \ .str.extract(r'([+-](?:\d+,?)+?(?:.\d+)?)\s') \ .replace(&quot;,&quot;,&quot;&quot;, regex=True).astype(float) flow = df['DOGE'].apply(lambda x: &quot;outcome&quot; if x&lt;0 else &quot;income&quot;) grouped = df.groupby([df['Time'].dt.date, flow]) action = grouped.agg(amount=('DOGE', sum)).unstack() if ('amount','income') in action: plt.bar(action.index, action[('amount', 'income')], color='g', label='income') if ('amount', 'outcome') in action: plt.bar(action.index, action[('amount', 'outcome')], color='r', label='outcome') plt.xticks(rotation=45) plt.legend() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/QjliV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QjliV.png" alt="income_outcome_barplot" /></a></p>
python|pandas
0
1,909,506
60,255,455
Get Prior Row of Dataframe in For Loop Python
<p>This is what I believe to be a simple logic problem, but I have been working at this for a while and haven't figured it out, so hopefully someone can find the easy solution that I have been missing. I would like to be able to get the prior part of a dataframe using the following code, and have settle for the <code>(row-1)</code> solve in the fourth row, but that obviously did not work.</p> <pre><code>for row in players_at_start_of_period.iterrows(): if(row[1]['PERIOD']): continue elif(row[1]['PERIOD'] - 2) &gt; (row-1)[1]['PERIOD']: sub_map.update = {row[1]['TEAM_ID_1']: split_row(row[1]['TEAM_1_PLAYERS']), row[1]['TEAM_ID_2']: split_row(row[1]['TEAM_2_PLAYERS'])} else: continue </code></pre> <p>What would I be able to do to access the value that exists one iteration prior to the current value of 'row'? Thanks!</p>
<p>I am not sure what your data looks like, but <code>iterrows()</code> already returns the index, so you could do something like this:</p> <pre><code>import pandas as pd import random # read the data from the downloaded CSV file. df = pd.read_csv('https://s3-eu-west-1.amazonaws.com/shanebucket/downloads/uk-500.csv') # set a numeric id for use as an index for examples. df['index'] = [random.randint(0,1000) for x in range(df.shape[0])] for index, row in df.iterrows(): previous_name = '' if index &gt; 0: previous_name = df.loc[index - 1]['first_name'] print(previous_name, df.loc[index]['first_name']) </code></pre>
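<p>Since the question only needs the previous row's <code>PERIOD</code> for a comparison, a vectorized <code>shift()</code> is often simpler than reaching back inside an <code>iterrows()</code> loop. A sketch (the column name is taken from the question, the sample data is made up):</p> <pre><code>import pandas as pd

# illustrative frame with the PERIOD column from the question
df = pd.DataFrame({'PERIOD': [1, 1, 2, 2, 3, 6]})

# shift(1) aligns each row with the value from the previous row,
# so no explicit loop over rows is needed
df['PREV_PERIOD'] = df['PERIOD'].shift(1)

# e.g. flag rows whose period jumped by more than 2 versus the prior row
mask = (df['PERIOD'] - 2) &gt; df['PREV_PERIOD']
print(df[mask])
</code></pre>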
python|pandas|dataframe|for-loop|iteration
1
1,909,507
2,403,578
fastest calculation of largest prime factor of 512 bit number in python
<p>i am simulating my crypto scheme in python, i am a new user to it.</p> <p>p = 512 bit number and i need to calculate largest prime factor for it, i am looking for two things:</p> <ol> <li>Fastest code to process this large prime factorization</li> <li>Code that can take 512 bit of number as input and can handle it.</li> </ol> <p>I have seen different implementations in other languages, my whole code is in python and this is last point where i am stuck. So let me know if there is any implementation in python.</p> <p>Kindly explain in simple as i am new user to python</p> <p>sorry for bad english.</p> <p><strong>edit (taken from OP's answer below):</strong></p> <pre><code>#!/usr/bin/env python def highest_prime_factor(n): if isprime(n): return n for x in xrange(2,n ** 0.5 + 1): if not n % x: return highest_prime_factor(n/x) def isprime(n): for x in xrange(2,n ** 0.5 + 1): if not n % x: return False return True if __name__ == "__main__": import time start = time.time() print highest_prime_factor(1238162376372637826) print time.time() - start </code></pre> <p>The code above works (with a bit of delay) for "1238162376372637826" but extending it to </p> <blockquote> <p>10902610991329142436630551158108608965062811746392 57767545600484549911304430471090261099132914243663 05511581086089650628117463925776754560048454991130443047</p> </blockquote> <p>makes python go crazy. Is there any way so that just like above, i can have it calculated it in no time?</p>
<p>For a Python-based solution, you might want to look at <a href="http://sourceforge.net/projects/pyecm/" rel="nofollow noreferrer">pyecm</a>. On a system with gmpy also installed, pyecm found the following factors:</p> <p>101, 521, 3121, 9901, 36479, 300623, 53397071018461, 1900381976777332243781</p> <p>There is still a 98-digit unfactored composite:</p> <p>60252507174568243758911151187828438446814447653986842279796823262165159406500174226172705680274911</p> <p>Factoring this remaining composite using ECM may not be practical.</p> <p>Edit: After a few hours, the remaining factors are</p> <p>6060517860310398033985611921721</p> <p>and</p> <p>9941808367425935774306988776021629111399536914790551022447994642391</p>
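<p>If installing a library is an option, SymPy's <code>factorint</code> returns the full factorization as a dict of primes to exponents, so the largest prime factor is simply the maximum key. A quick sketch (note that a genuinely hard 512-bit semiprime is out of reach for any general-purpose factoring tool; this only helps when the number has reasonably small factors):</p> <pre><code>from sympy import factorint

n = 1238162376372637826
factors = factorint(n)   # {prime: exponent, ...}
print(max(factors))      # the largest prime factor
</code></pre>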
python|primes|factorization
3
1,909,508
30,741,756
User input filename when reading in netCDF files in Python
<p>I have a set of soil moisture data files from 1953 to 2014. All of them are of the form cpc_soil_YYYY.nc (where YYYY is one of those years). Is there a way for me to ask for user input of which year the user would like to view, and have my program open the corresponding function? I currently have it where I must manually change the year within gedit, and wrote functions to grab each variable (soil moisture as a function of time, lat, lon):</p> <pre><code> import netCDF4 as nc import numpy as np import numpy.ma as ma import csv as csv fid=nc.MFDataset('/data/reu_data/soil_moisture/cpc_soil_1957.nc','r') fid.close() ncf='/data/reu_data/soil_moisture/cpc_soil_1957.nc' def read_var(ncfile, varname): fid=nc.Dataset(ncfile, 'r') out=fid.variables[varname][:] fid.close() return out time=read_var(ncf, 'time') lat=read_var(ncf, 'lat') lon=read_var(ncf, 'lon') soil=read_var(ncf, 'soilw') </code></pre>
<p>You can use <code>input()</code> to ask the user to enter the year. Then you can use that to generate the file path.</p> <pre><code>... year = input("Enter year: ") filename = '/data/reu_data/soil_moisture/cpc_soil_%s.nc' % (year,) fid=nc.MFDataset(filename,'r') fid.close() ... </code></pre> <p>You should do error checking to make sure the value the user entered is actually a year and falls within the range of your data.</p> <p>You can read more on input/output in Python <a href="http://anh.cs.luc.edu/python/hands-on/3.1/handsonHtml/io.html" rel="nofollow">here</a>.</p>
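<p>A sketch of that error checking, reusing the <code>read_var</code> helper from the question and assuming the files cover 1953 to 2014 as stated there:</p> <pre><code>while True:
    year = input("Enter year (1953-2014): ")
    if year.isdigit() and 1953 &lt;= int(year) &lt;= 2014:
        break
    print("Please enter a year between 1953 and 2014.")

ncf = '/data/reu_data/soil_moisture/cpc_soil_%s.nc' % (year,)
time = read_var(ncf, 'time')
lat = read_var(ncf, 'lat')
lon = read_var(ncf, 'lon')
soil = read_var(ncf, 'soilw')
</code></pre>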
python|netcdf|raw-input
2
1,909,509
30,709,139
divide list and generate series of new lists. one from each list and rest into other
<p>I have three lists and want to sort and generate two new list. Can any one please tell how it can be done?</p> <pre><code>list1=[12,25,45], list2=[14,69], list3=[54,98,68,78,48] </code></pre> <p>I want to print the output like</p> <pre><code>chosen1=[12,14,54], rest1=[25,45,69,98,68,78,48] chosen2=[12,14,98], rest2=[25,45,69,54,68,78,48] </code></pre> <p>and so on (every possible combination for chosen list)</p> <p>I have tried to write this but I don't know </p> <pre><code>list1=[12,25,45] list2=[14,69] list3=[54,98,68,78,48] for i in xrange (list1[0],list1[2]): for y in xrange(list2[0], list2[1]): for z in xrange(list[0],list[4]) for a in xrange(chosen[0],[2]) chosed1.append() for a in xrange(chosen[0],[7]) rest1.append() Print rest1 Print chosen1 </code></pre>
<p><code>itertools.product</code> generates all permutations of selecting one thing each out of different sets of things:</p> <pre><code>import itertools list1 = [12,25,45] list2 = [14,69] list3 = [54,98,68,78,48] for i,(a,b,c) in enumerate(itertools.product(list1,list2,list3),1): # Note: Computing rest this way will *not* work if there are duplicates # in any of the lists. rest1 = [n for n in list1 if n != a] rest2 = [n for n in list2 if n != b] rest3 = [n for n in list3 if n != c] rest = ','.join(str(n) for n in rest1+rest2+rest3) print('chosen{0}=[{1},{2},{3}], rest{0}=[{4}]'.format(i,a,b,c,rest)) </code></pre> <p>Output:</p> <pre><code>chosen1=[12,14,54], rest1=[25,45,69,98,68,78,48] chosen2=[12,14,98], rest2=[25,45,69,54,68,78,48] chosen3=[12,14,68], rest3=[25,45,69,54,98,78,48] chosen4=[12,14,78], rest4=[25,45,69,54,98,68,48] chosen5=[12,14,48], rest5=[25,45,69,54,98,68,78] chosen6=[12,69,54], rest6=[25,45,14,98,68,78,48] chosen7=[12,69,98], rest7=[25,45,14,54,68,78,48] chosen8=[12,69,68], rest8=[25,45,14,54,98,78,48] chosen9=[12,69,78], rest9=[25,45,14,54,98,68,48] chosen10=[12,69,48], rest10=[25,45,14,54,98,68,78] chosen11=[25,14,54], rest11=[12,45,69,98,68,78,48] chosen12=[25,14,98], rest12=[12,45,69,54,68,78,48] chosen13=[25,14,68], rest13=[12,45,69,54,98,78,48] chosen14=[25,14,78], rest14=[12,45,69,54,98,68,48] chosen15=[25,14,48], rest15=[12,45,69,54,98,68,78] chosen16=[25,69,54], rest16=[12,45,14,98,68,78,48] chosen17=[25,69,98], rest17=[12,45,14,54,68,78,48] chosen18=[25,69,68], rest18=[12,45,14,54,98,78,48] chosen19=[25,69,78], rest19=[12,45,14,54,98,68,48] chosen20=[25,69,48], rest20=[12,45,14,54,98,68,78] chosen21=[45,14,54], rest21=[12,25,69,98,68,78,48] chosen22=[45,14,98], rest22=[12,25,69,54,68,78,48] chosen23=[45,14,68], rest23=[12,25,69,54,98,78,48] chosen24=[45,14,78], rest24=[12,25,69,54,98,68,48] chosen25=[45,14,48], rest25=[12,25,69,54,98,68,78] chosen26=[45,69,54], rest26=[12,25,14,98,68,78,48] chosen27=[45,69,98], rest27=[12,25,14,54,68,78,48] chosen28=[45,69,68], rest28=[12,25,14,54,98,78,48] chosen29=[45,69,78], rest29=[12,25,14,54,98,68,48] chosen30=[45,69,48], rest30=[12,25,14,54,98,68,78] </code></pre>
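<p>The caveat in the code comment above (duplicate values would break the <code>rest</code> computation) can be avoided by choosing elements by position instead of by value. A sketch along the same lines:</p> <pre><code>import itertools

list1 = [12, 25, 45]
list2 = [14, 69]
list3 = [54, 98, 68, 78, 48]
lists = [list1, list2, list3]

# iterate over index combinations, so equal values in a list stay distinct
index_ranges = [range(len(lst)) for lst in lists]
for n, picks in enumerate(itertools.product(*index_ranges), 1):
    chosen = [lst[i] for lst, i in zip(lists, picks)]
    rest = [v for lst, i in zip(lists, picks)
            for j, v in enumerate(lst) if j != i]
    print('chosen{0}={1}, rest{0}={2}'.format(n, chosen, rest))
</code></pre>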
python|list
0
1,909,510
64,090,105
User Inputs and Conditional statements along with Code Executional delay and Loops
<p>This is my Mad Libs Project. I am a beginner so I tried to mixed all my learnings in Python which are User Inputs, Variables, Conditional Statements, and etc. Unfornately, It doesnt work and I cant identify the problem. For me its all good, I guess. I hope you could help me guys.</p> <p>Please bear with me. I am still a noob.</p> <pre><code>import time name = input('Hello! What is your name? ') print('Hi! ' + name + ' I\'m Sean. Nice to meet you!') time.sleep(2) def main(): ans = input('\'Wanna play a game? ').upper() if ans='YES': print('Great! Lets get started') time.sleep(2) print('The called Mad Libs. \nThe mechanics is simple, your going to give words according to its category \nand your answer will be added to my script I made beforehand.') def main2() ans2=input('Are you ready? ').lower() if ans2=='yes': Vegetable = input('Vegetable: ') Superhero = input('Superhero: ') Celebrity = input('Celebrity: ') Country = input('Country: ') Time_of_day = input(r'Time of day (ex. 11:11): ') Number = input('Number: ') Vegetable2 = input('Another Vegetable: ') Childhood_toy = input('Childhood Toy: ') Liquid = input(r'Liquid (ex. water,ketchup,etc.): ') Joke = input('Joke Quote: ') Emotion = input('Emotion: ') Unusual_pet = input('A unusual pet: ') Plant = input('A plant: ') Body_part = input('A body part: ') Furniture = input('Furniture: ') Number2 = input('Another number: ') Animal = input('Another animal: ') Food = input('Food: ') Catchphrase = input('A Catchphrase: ') elif ans2=='no': print('Aww! Maybe next time.') else: print('I didn\'t quite understand that, come again?').lower() main2() elif ans=='NO': print('Aww! Maybe next time.') time.sleep(3) exit() else: print('I didn\'t quite understand that, come again?').lower() main() main() </code></pre>
<pre><code>import time name = input('Hello! What is your name? ') print('Hi! ' + name + ' I\'m Sean. Nice to meet you!') time.sleep(2) def main(): ans = input('\'Wanna play a game? ').upper() if ans == 'YES': # Was missing an equal sign print('Great! Lets get started') time.sleep(2) print( 'The called Mad Libs. \nThe mechanics is simple, your going to give words according to its category \nand your answer will be added to my script I made beforehand.') def main2(): # Was missing colon ans2 = input('Are you ready? ').lower() # Everything below was not indented if ans2 == 'yes': Vegetable = input('Vegetable: ') Superhero = input('Superhero: ') Celebrity = input('Celebrity: ') Country = input('Country: ') Time_of_day = input(r'Time of day (ex. 11:11): ') Number = input('Number: ') Vegetable2 = input('Another Vegetable: ') Childhood_toy = input('Childhood Toy: ') Liquid = input(r'Liquid (ex. water,ketchup,etc.): ') Joke = input('Joke Quote: ') Emotion = input('Emotion: ') Unusual_pet = input('A unusual pet: ') Plant = input('A plant: ') Body_part = input('A body part: ') Furniture = input('Furniture: ') Number2 = input('Another number: ') Animal = input('Another animal: ') Food = input('Food: ') Catchphrase = input('A Catchphrase: ') elif ans2 == 'no': print('Aww! Maybe next time.') else: print('I didn\'t quite understand that, come again?') # There should be no .lower() on a print function main2() # Everything above was not indented main2() # You only defined main2(), you never actually used it elif ans == 'NO': print('Aww! Maybe next time.') time.sleep(3) exit() else: print('I didn\'t quite understand that, come again?') # There should be no .lower() on a print function main() main() </code></pre>
python-3.x
0
1,909,511
63,956,184
Django form - update boolean field to true
<p>I'm trying to up update a boolean field but I got this issue: save() got an unexpected keyword argument 'update_fields'.</p> <p>I got different issue: at the beginning when seller complete the form it was creating a new channel. I just want to update the current channel.</p> <p>Logic= consumer create a channel with a seller (channel is not active) -&gt; if seller wants to launch it. he has a form to make it true and launch it.</p> <p>models:</p> <pre><code>class Sugargroup(models.Model): consumer = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name=&quot;sugargroup_consumer&quot;, blank=True, null=True) seller = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name=&quot;sugargroup_seller&quot;) is_active = models.BooleanField('Make it happen', default=False) slug = models.SlugField(editable=False, unique=True) </code></pre> <p>views:</p> <pre><code>@method_decorator(login_required(login_url='/cooker/login'),name=&quot;dispatch&quot;) class CheckoutDetail(generic.DetailView, FormMixin): model = Sugargroup context_object_name = 'sugargroup' template_name = 'checkout_detail.html' form_class = CreateSugarChatForm validation_form_class = LaunchSugargroupForm def get_context_data(self, **kwargs): context = super(CheckoutDetail, self).get_context_data(**kwargs) context['form'] = self.get_form() context['validation_form'] = self.get_form(self.validation_form_class) #self.validation_form_class() return context def form_valid(self, form): if form.is_valid(): form.instance.sugargroup = self.object form.instance.user = self.request.user form.save() return super(CheckoutDetail, self).form_valid(form) else: return super(CheckoutDetail, self).form_invalid(form) def form_valide(self, validation_form): if validation_form.is_valid(): validation_form.instance.sugargroup = self.object #validation_form.instance.seller = self.request.user validation_form.save(update_fields=[&quot;is_active&quot;]) return super(CheckoutDetail, self).form_valid(validation_form) else: return super(CheckoutDetail, self).form_invalid(validation_form) def post(self,request,*args,**kwargs): self.object = self.get_object() form = self.get_form() validation_form = self.validation_form_class(request.POST) #validation_form = self.get_form(self.validation_form_class) if form.is_valid(): return self.form_valid(form) elif validation_form.is_valid(): return self.form_valide(validation_form) else: return self.form_valid(form) def get_success_url(self): return reverse('checkout:checkout_detail',kwargs={&quot;slug&quot;:self.object.slug}) </code></pre> <p>forms</p> <pre><code>class LaunchSugargroupForm(forms.ModelForm): def __init__(self,*args,**kwargs): super(LaunchSugargroupForm, self).__init__(*args,**kwargs) self.helper = FormHelper() self.helper.form_method=&quot;post&quot; self.helper.layout = Layout( Field(&quot;is_active&quot;,css_class=&quot;single-input&quot;), ) self.helper.add_input(Submit('submit','Launch the channel',css_class=&quot;btn btn-primary single-input textinput textInput form-control&quot;)) class Meta: model = Sugargroup fields = [ 'is_active' ] </code></pre>
<p>Try this:</p> <pre><code>validation_form.is_active = True validation_form.save() </code></pre>
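<p>For context, the <code>update_fields</code> argument belongs to <code>Model.save()</code>, not to <code>ModelForm.save()</code>, which is where the original error comes from. A pattern that is often used in this situation (a sketch built from the names in the question, not a drop-in fix) is to bind the form to the existing object and set the flag on the model instance:</p> <pre><code># bind the form to the existing channel so saving updates it instead of creating a new row
validation_form = LaunchSugargroupForm(request.POST, instance=self.object)
if validation_form.is_valid():
    sugargroup = validation_form.save(commit=False)   # existing Sugargroup, not yet saved
    sugargroup.is_active = True
    sugargroup.save(update_fields=["is_active"])      # update_fields is valid on Model.save()
</code></pre>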
python|django
0
1,909,512
63,899,950
Python / Pandas / PuLP optimization on a column
<p>I'm trying to optimize a column of data in a Pandas dataframe. I've looked through past posts but couldn't find one that addressed the issue of optimizing values in a column in a dataframe. This is my first post and relatively new to coding so apologizes upfront. Below is the code I'm using</p> <pre><code>from pandas import DataFrame import numpy as np from pulp import * heading = [184, 153, 140, 122, 119] df = DataFrame (heading, columns=['heading']) df['speed'] = 50 df['ratio'] = df.speed/df.heading conditions = [ (df['ratio'] &lt; 0.1), (df['ratio'] &gt;= 0.1 ) &amp; (df['ratio'] &lt; 0.2), (df['ratio'] &gt;= 0.2 ) &amp; (df['ratio'] &lt; 0.3), (df['ratio'] &gt;= 0.3 ) &amp; (df['ratio'] &lt; 0.4), (df['ratio'] &gt; 0.4 )] choices = [3, 1, 8, 5, 2] df['choice'] = np.select(conditions, choices) df['final_column'] = df.choice * df.heading print(np.sum(df.final_column)) </code></pre> <p>I use np.select to search through 'conditions' and return the appropriate 'choices'. This is functioning like a vlookup I use in excel.</p> <p>I'm trying to get PuLP or any other appropriate optimization tool or maybe even just a loop to find the optimal values for df.speed (which I start with temporary value of 50) to maximize the sum of values in the 'final_column.' Below is the code I've tried but its not working.</p> <pre><code>prob = LpProblem(&quot;Optimal Values&quot;,LpMaximize) speed_vars = LpVariable(&quot;Variable&quot;,df.speed,lowBound=0,cat='Integer') prob += lpSum(df.new_column_final) prob.solve() </code></pre> <p>Below is the error I'm getting:</p> <p>speed_vars = LpVariable(&quot;Variable&quot;,df.speed,lowBound=0,cat='Integer') TypeError: <strong>init</strong>() got multiple values for argument 'lowBound'</p> <p>Thanks so much for your help. Any help would be appreciated!</p>
<p>First of all the specific error message you are getting: <code>TypeError: __init__() got multiple values for argument 'lowBound'</code></p> <p>In python when calling a function you can pass arguments either by 'position' - which means the order in which you pass the arguments tells the function what each of them is - or by naming them. If you look up the <a href="https://www.coin-or.org/PuLP/pulp.html#pulp.LpVariable" rel="nofollow noreferrer">documentation</a> for the pulp.LpVariable method you'll see the second position argument is <code>'lowbound'</code> which you then also pass as a named argument - hence the error message.</p> <p>I think you might also be slighly misunderstanding how a dataframe works. It is not like excel where you set a 'formula' in a column and it stays updated to that formula as other elements on that row change. You can assign values to columns but if the input data change - the cell would only be updated if that bit of code was run again.</p> <p>In terms of solving your problem - I'm not convinced I've understood what you're trying to do but I've understood the following.</p> <ul> <li>We want to select values of <code>df['speed']</code> to maximise the sum-product of <code>heading</code> and <code>choices</code> columns</li> <li>The value of the choices column depends on the <code>ratio</code> of <code>speed</code> to <code>heading</code> (as per the given 5 ranges)</li> <li><code>Heading</code> column is fixed</li> </ul> <p>By inspection the optimum will be achieved by setting all of the speeds so that the ratios are in the [0.2 - 0.3] range, and where they fall in that range doesn't matter. Code to do this in PuLP within pandas dataframes below. It relised on using binary variables to keep track of which range the ratios fall in.</p> <p>The syntax is a little awkward though - I'd recommend doing the optimisation completely outside of dataframes and just loading results in at the end - using the <code>LpVariable.dicts</code> method to create arrays of variables instead.</p> <pre><code>from pandas import DataFrame import numpy as np from pulp import * headings = [184.0, 153.0, 140.0, 122.0, 119.0] df = DataFrame (headings, columns=['heading']) df['speed'] = 50 max_speed = 500.0 max_ratio = max_speed / np.min(headings) df['ratio'] = df.speed/df.heading conditions_lb = [0, 0.1, 0.2, 0.3, 0.4] conditions_ub = [0.1, 0.2, 0.3, 0.4, max_speed / np.min(headings)] choices = [3, 1, 8, 5, 2] n_range = len(choices) n_rows = len(df) # Create primary ratio variables - one for each variable: df['speed_vars'] = [LpVariable(&quot;speed_&quot;+str(j)) for j in range(n_rows)] # Create auxilary variables - binaries to control # which bit of range each speed is in df['aux_vars'] = [[LpVariable(&quot;aux_&quot;+str(i)+&quot;_&quot;+str(j), cat='Binary') for i in range(n_range)] for j in range(n_rows)] # Declare problem prob = LpProblem(&quot;max_pd_column&quot;,LpMaximize) # Define objective function prob += lpSum([df['aux_vars'][j][i]*choices[i]*headings[j] for i in range(n_range) for j in range(n_rows)]) # Constrain only one range to be selected for each row for j in range(n_rows): prob += lpSum([df['aux_vars'][j][i] for i in range(n_range)]) == 1 # Constrain the value of the speed by the ratio range selected for j in range(n_rows): for i in range(n_range): prob += df['speed_vars'][j]*(1.0/df['heading'][j]) &lt;= \ conditions_ub[i] + (1-df['aux_vars'][j][i])*max_ratio prob += df['speed_vars'][j]*(1.0/df['heading'][j]) &gt;= \ conditions_lb[i]*df['aux_vars'][j][i] # Solve 
problem and print results prob.solve() # Dislay the optimums of each var in problem for v in prob.variables (): print (v.name, &quot;=&quot;, v.varValue) # Set values in dataframe and print: df['speed_opt'] = [df['speed_vars'][j].varValue for j in range(n_rows)] df['ratio_opt'] = df.speed_opt/df.heading print(df) </code></pre> <p>The last bit of which prints out:</p> <pre><code> heading speed_vars b spd_opt rat_opt 0 184.0 speed_0 [b_0_0, b_1_0, b_2_0, b_3_0, b_4_0] 36.8 0.2 1 153.0 speed_1 [b_0_1, b_1_1, b_2_1, b_3_1, b_4_1] 30.6 0.2 2 140.0 speed_2 [b_0_2, b_1_2, b_2_2, b_3_2, b_4_2] 28.0 0.2 3 122.0 speed_3 [b_0_3, b_1_3, b_2_3, b_3_3, b_4_3] 24.4 0.2 4 119.0 speed_4 [b_0_4, b_1_4, b_2_4, b_3_4, b_4_4] 23.8 0.2 </code></pre>
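<p>As a footnote to the recommendation at the end of the answer, <code>LpVariable.dicts</code> builds those arrays of variables directly, without storing <code>LpVariable</code> objects inside dataframe columns. A minimal sketch (the dimensions are illustrative):</p> <pre><code>from pulp import LpVariable, LpProblem, LpMaximize, lpSum

n_rows, n_range = 5, 5

# one continuous speed variable per row
speed_vars = LpVariable.dicts("speed", range(n_rows), lowBound=0)

# one binary selector per (row, range) pair
aux_vars = LpVariable.dicts("aux", (range(n_rows), range(n_range)), cat="Binary")

prob = LpProblem("max_pd_column", LpMaximize)
for j in range(n_rows):
    # exactly one ratio range selected per row, as in the answer above
    prob += lpSum(aux_vars[j][i] for i in range(n_range)) == 1
</code></pre>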
python|pandas|pulp
1
1,909,513
72,418,227
Access "upload_to" of a Model's FileFIeld in Django?
<p>I have a Model with a FileField like that:</p> <pre><code>class Video(MediaFile): &quot;&quot;&quot; Model to store Videos &quot;&quot;&quot; file = FileField(upload_to=&quot;videos/&quot;) [...] </code></pre> <p>I'm populating the DB using a cron script.</p> <p>Is it possible to somehow access the &quot;upload_to&quot; value of the model? I could use a constant, but that seems messy. Is there any way to access it directly?</p>
<p>You can access this with:</p> <pre><code>Video.file<strong>.field.upload_to</strong> # 'videos/'</code></pre> <p>or through the <code>_meta</code> object:</p> <pre><code>Video<strong>._meta.get_field('file').upload_to # 'videos/'</strong></code></pre> <p>The <a href="https://docs.djangoproject.com/en/dev/ref/models/fields/#django.db.models.FileField.upload_to" rel="nofollow noreferrer"><strong><code>upload_to=…</code></strong> parameter <sup>[Django-doc]</sup></a> can however also be given a function that takes two parameters, and thus in that case it will not return a string, but a reference to that function.</p>
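<p>For example, in the cron script that value can be combined with <code>MEDIA_ROOT</code> to get the directory on disk (a sketch; how the cron job then uses the directory is up to you):</p> <pre><code>import os
from django.conf import settings

upload_dir = Video._meta.get_field('file').upload_to          # 'videos/'
videos_path = os.path.join(settings.MEDIA_ROOT, upload_dir)

# e.g. list the files the cron job should pick up
for name in os.listdir(videos_path):
    print(name)
</code></pre>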
python|django|django-models
1
1,909,514
65,889,465
Which is the maximum number of variables that Gekko library support?
<p>I am trying to solve a problem that has more than one million variables with the Gekko library for Python. Does anyone know how many variables that library can manage?</p>
<p>Gekko is not limited by a certain number of variables. Each mode (<code>IMODE</code>) takes a base model and then applies it to each time point (for <code>IMODE&gt;4</code>) or for every data set (<code>IMODE=2</code>). The base model does have a limit of 10,000,000 but that is mostly just as a large upper bound. A problem with 10M simultaneous differential equations x 100 time points would be 1,000,000,000 (1B) variables and this is allowed in Gekko. The developers can increase the 10M limit if a user ever runs into that. It is there as a check just in case someone has an error in their model and didn't intend to spawn a very large problem. Here is a case study that shows the <a href="https://youtu.be/8kx6vC9gTLo" rel="nofollow noreferrer">scale-up comparison</a> with number of differential equations for simulation with MATLAB (ode15s), SciPy (ODEINT), and APMonitor (engine for Gekko).</p> <p><a href="https://i.stack.imgur.com/AhtEY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AhtEY.png" alt="Scale-up solution time" /></a></p> <p>The results show that APMonitor / Gekko isn't as fast for small problems but has good scale-up potential for larger scale problems. The plot only shows up to 3000 simultaneous differential equations. Gekko's current arbitrary limit is set to 10M.</p>
python|gekko
1
1,909,515
3,372,391
py2exe + pywin32 MemoryLoadLibrary import fail when bundle_files=1
<p>I have created a simple program which uses pywin32. I want to deploy it as an executable, so I py2exe'd it. I also didn't want a huge amount of files, so I set <code>bundle_files</code> to 1 (meaning bundle everything together). However, when I attempt running it, I get:</p> <pre><code>Traceback (most recent call last): File "pshelper.py", line 4, in &lt;module&gt; File "zipextimporter.pyc", line 82, in load_module File "win32.pyc", line 8, in &lt;module&gt; File "zipextimporter.pyc", line 98, in load_module ImportError: MemoryLoadLibrary failed loading win32ui.pyd </code></pre> <p>In my setup script, I tried doing <code>packages=["win32ui"]</code> and <code>includes=["win32ui"]</code> as options, but that didn't help. How can I get py2exe to include win32ui.pyd?</p> <p>I don't have this problem if I don't ask it to bundle the files, so I can do that, for now, but I'd like to know how to get it to work properly.</p>
<p>The work-around that has worked best so far is to simply re-implement the pywin32 functions using ctypes. That doesn't require another .pyd or .dll file so the issue is obviated. </p>
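<p>As a tiny illustration of that approach (<code>MessageBoxW</code> stands in here for whatever pywin32 call the program actually used; no extra .pyd or .dll is involved):</p> <pre><code>import ctypes

# roughly equivalent to win32api.MessageBox(...), but with no pywin32 dependency
ctypes.windll.user32.MessageBoxW(0, "Hello from ctypes", "No pywin32 needed", 0)
</code></pre>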
python|py2exe|pywin32
1
1,909,516
50,449,353
Pandas behaviour on stack
<p>Lets Suppose I have </p> <pre><code>ID A1 B1 A2 B2 1 3 4 5 6 2 7 8 9 10 </code></pre> <p>I want to use pandas stack and wants to achieve something like this</p> <pre><code>ID A B 1 3 4 1 5 6 2 7 8 2 9 10 </code></pre> <p>but what I got is </p> <pre><code>ID A B 1 3 4 2 7 8 1 5 6 2 9 10 </code></pre> <p>this is what i am using</p> <pre><code>df.stack().reset_index(). </code></pre> <p>Is it possible to achieve something like this using Stack? <code>append()</code> method in pandas does this, but if possible I want to achieve using <code>pandas</code> <code>stack()</code> Any idea ?</p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.wide_to_long.html#pandas-wide-to-long" rel="nofollow noreferrer"><code>pd.wide_to_long</code></a>:</p> <pre><code>pd.wide_to_long(df, ['A','B'], 'ID', 'value', sep='', suffix='.+')\ .reset_index()\ .sort_values('ID')\ .drop('value', axis=1) </code></pre> <p>Output:</p> <pre><code> ID A B 0 1 3 4 2 1 5 6 1 2 7 8 3 2 9 10 </code></pre>
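<p>Since the question asks specifically about <code>stack()</code>: the same reshape works by turning the columns into a two-level MultiIndex (letter, number) and stacking the number level. A sketch using the <code>df</code> from the question:</p> <pre><code>df2 = df.set_index('ID')
# split 'A1', 'B1', 'A2', 'B2' into a (letter, number) MultiIndex
df2.columns = pd.MultiIndex.from_tuples([(c[0], c[1:]) for c in df2.columns])

out = (df2.stack(level=1)
          .reset_index(level=1, drop=True)
          .reset_index())
# rows come out as (1, 3, 4), (1, 5, 6), (2, 7, 8), (2, 9, 10)
print(out)
</code></pre>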
python|pandas|dataframe
5
1,909,517
35,333,367
How to loop through a percentage investment?
<p>I'm working on this simple task where a financial advisor suggests to invest in a stock fund that is guaranteed to increase by 3 percent over the next five years. </p> <p>Here's my code:</p> <pre><code>while True: investment = float(input('Enter your initial investment: ')) if 1000 &lt;= investment &lt;= 100000: break else: print("Investment must be between $1,000 and $100,000") #Annual interest rate apr = 3 / 100 amount = investment for yr in range(5): amount = (amount) * (1. + apr) print('After {:&gt;2d} year{} you have: $ {:&gt;10.2f}'.format(yr, 's,' if yr &gt; 1 else ', ', amount)) </code></pre>
<p>You got it. The only problem is that <code>apr</code> is computed with integer math (under Python 2, <code>3 / 100</code> evaluates to <code>0</code>). Use floating point numbers instead, so <code>apr</code> does not round to zero:</p> <pre><code>apr = 3.0 / 100.0 </code></pre> <p>By changing that line your program should work.</p> <p>Here is the complete code with that change (as requested in the comments):</p> <pre><code>while True: investment = float(input('Enter your initial investment: ')) if 1000 &lt;= investment &lt;= 100000: break else: print("Investment must be between $1,000 and $100,000") #Annual interest rate apr = 3.0 / 100.0 amount = investment for yr in range(5): amount = (amount) * (1. + apr) print('After {:&gt;2d} year{} you have: $ {:&gt;10.2f}'.format(yr, 's,' if yr &gt; 1 else ', ', amount)) </code></pre> <p>The output I get is:</p> <pre> Enter your initial investment: 1002 After 0 year, you have: $ 1032.06 After 1 year, you have: $ 1063.02 After 2 years, you have: $ 1094.91 After 3 years, you have: $ 1127.76 After 4 years, you have: $ 1161.59 </pre>
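<p>As a cross-check, the balance after each year also follows directly from the compound-interest formula <code>amount = investment * (1 + apr) ** years</code>, which reproduces the same balances as the loop (and numbers the years starting from 1):</p> <pre><code>investment = 1002.0
apr = 0.03
for yr in range(1, 6):
    amount = investment * (1 + apr) ** yr
    print('After {:d} year{}, you have: $ {:.2f}'.format(yr, 's' if yr &gt; 1 else '', amount))
</code></pre>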
python|python-3.x
3
1,909,518
45,041,589
Python variable memory management
<p>I just wrote this primitive script: </p> <pre><code>from sys import getsizeof as g x = 0 s = '' while s != 'q': x = (x &lt;&lt; 8) + 0xff print(str(x) + " [" + str(g(x)) + "]") s = input("Enter to proceed, 'q' to quit ") </code></pre> <p>The output is as follows - and quite surprising, as I perceive it: </p> <pre><code>255 [28] 65535 [28] 16777215 [28] 4294967295 [32] 1099511627775 [32] 281474976710655 [32] 72057594037927935 [32] 18446744073709551615 [36] </code></pre> <p>And so on. My point is: it seems that the variable x has some sort of 'overhead' with a size of 25 bytes. Where does this come from? Thanks in advance for any attempt to help me. </p>
<p>A python <code>int</code> is an object, so it's not surprising that it has a small overhead. If this overhead starts to become meaningful for you then this implies you're manipulating substantial collections of ints, which suggests to me that the <a href="http://www.numpy.org/" rel="nofollow noreferrer">numpy</a> library is probably something you should consider. </p>
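<p>A quick way to see the difference being pointed at here (the exact numbers are CPython and NumPy implementation details and vary by version and platform):</p> <pre><code>import sys
import numpy as np

xs = list(range(1000))
arr = np.arange(1000)

# per-element object overhead for the Python ints...
print(sum(sys.getsizeof(x) for x in xs))   # roughly 28 bytes each, tens of kB in total
# ...versus a fixed 8 bytes per element for a contiguous int64 array
print(arr.nbytes)                          # 8000 with a 64-bit integer dtype
</code></pre>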
python
1
1,909,519
60,615,718
Django, RestAPI, Microsoft Azure, website, virtual machine, ubuntu
<p>I have developed a website and REST api using Django and Django REST Framework. On local machine they are working perfectly so my next step is trying to publish it on remote server. I chose Microsoft Azure.</p> <p>I created a virtual machine with Ubuntu server 18.04 and installed everything to run my project there. While I run it locally on virtual machine it's working perfectly, at localhost:8000; my website and rest-api are showing.</p> <p>Now I want it to publish to the world so it can be accessed under the IP of my virtual machine or some different address so everybody can access it. I was looking through azure tutorials on Microsoft website and google, but i cannot find anything working.</p> <p>I don't want to use their Web App solution or Windows Server. It needs to be working with Ubuntu Virtual machine from Azure. Is it possible to do and if yes then how?</p>
<ol> <li><p>(Optional) Set your web application to listen publicly on port 80 for HTTP or port 443 for HTTPS. You may refer to: <a href="https://stackoverflow.com/questions/1621457/about-ip-0-0-0-0-in-django">About IP 0.0.0.0 in Django</a></p></li> <li><p>In Ubuntu, if the firewall is enabled, you need to open ports 80 and 443 so that others can access your server.</p></li> <li><p>In the Azure portal, if an NSG is enabled, you need to add inbound rules for 80 and 443.</p></li> <li><p>(Optional) Buy a domain and add an <code>A</code> record pointing to your VM's IP. That way, people will be able to access your website via a friendly URL.</p></li> </ol>
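<p>For steps 1 and 2, assuming the development server and <code>ufw</code> as the firewall (for production you would normally put gunicorn or uWSGI behind nginx instead of using <code>runserver</code>), the commands look roughly like this; also remember to add the VM's public IP or domain to <code>ALLOWED_HOSTS</code> in <code>settings.py</code>:</p> <pre><code># step 1: make Django listen on all interfaces
python manage.py runserver 0.0.0.0:8000

# step 2: open the ports in ufw, if ufw is the firewall in use
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status
</code></pre>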
python|django|azure|ubuntu
0
1,909,520
57,943,987
python: How to pass parameter into SQL query
<p>I have a function with a parameter. This parameter must be substituted into a SQL query, which is then executed with <code>pandasql</code>. Here is my function:</p> <pre><code>def getPolypsOfPaitentBasedOnSize(self, size):
    smallPolypQuery = """ select * from polyp where polyp.`Size of Sessile in Words` == """ + size
    smallPolyps = ps.sqldf(smallPolypQuery)
</code></pre> <p>When I run the code, I get the below error:</p> <pre><code> raise PandaSQLException(ex)
pandasql.sqldf.PandaSQLException: (sqlite3.OperationalError) no such column: Small
[SQL: select * from polyp where polyp.`Size of Sessile in Words` == Small]
</code></pre> <p>It seems that I have to somehow make it look like</p> <pre><code> where polyp.`Size of Sessile in Words` == 'Small'
</code></pre> <p>but I don't know how to do it.</p> <p><strong><em>Update:</em></strong></p> <p>I have tried the solution below; there is no error, but the query does not return anything:</p> <pre><code>""" select * from polyp where polyp.`Size of Sessile in Words` == " """ +size+ """ " """
</code></pre> <p>I am sure (if <code>size="Small"</code>) a statement like the one below will work for me:</p> <pre><code>where polyp.`Size of Sessile in Words` == "Small"
</code></pre>
<p><code>format</code> can be used.</p> <pre><code>size = 'Small'
smallPolypQuery = """ select * from polyp where polyp.`Size of Sessile in Words` == {0}""".format(size)
print(smallPolypQuery)
</code></pre> <p>The result is:</p> <pre class="lang-py prettyprint-override"><code>select * from polyp where polyp.`Size of Sessile in Words` == Small
</code></pre> <p>If you need quotes, put them into <code>smallPolypQuery</code> like this:</p> <pre><code>smallPolypQuery = """ select * from polyp where polyp.`Size of Sessile in Words` == "{0}" """.format(size)
</code></pre>
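<p>Putting that back into the original method, a minimal sketch (keeping the question's function and column names) might look like this:</p> <pre><code>def getPolypsOfPaitentBasedOnSize(self, size):
    # Quote the value so SQLite treats it as a string literal, not a column name
    smallPolypQuery = """select * from polyp
                         where polyp.`Size of Sessile in Words` == "{0}" """.format(size)
    smallPolyps = ps.sqldf(smallPolypQuery)
    return smallPolyps
</code></pre> <p>Note that plain string formatting is fine for trusted values, but if <code>size</code> can ever come from user input it is safer to validate it first, since interpolated SQL strings are vulnerable to injection.</p>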
python-3.x|pandasql
0
1,909,521
57,837,033
Completely Unable to Run pip on Windows 10
<p>I have installed Python 3.7.4 on Windows 10. The Scripts folder is empty. I have added all the relevant paths to the PATH environment variable. Python itself runs scripts fine, but pip is not recognized, and even running <code>python get-pip.py</code> does not work.</p> <p>I have read all the fixes I could find online, but none of them help.</p> <p>Can anyone assist? Any help will be much appreciated.</p> <p>C:\Program Files\Python37&gt; python get-pip.py</p>
<p>Add the following directory to your PATH:</p> <pre><code>C:\Program Files\Python37\Scripts
</code></pre> <p>Then try to install pip again. If this does not work, download <code>get-pip.py</code> manually and run it from CMD as an administrator.</p> <p>Here is the download link: <a href="https://bootstrap.pypa.io/get-pip.py" rel="nofollow noreferrer">https://bootstrap.pypa.io/get-pip.py</a></p>
pip|python-3.7
0
1,909,522
56,164,741
How to prevent 2 threads from overwriting value?
<p>I was trying to run an operation with a varied wait time in parallel threads. In the operation I set a value, wait for the operation to finish, and then call another function. But the thread that started after waiting overwrites the value for all the other threads.</p> <p>I tried using the <code>threading.local</code> approach but it is not working.</p> <pre class="lang-py prettyprint-override"><code>import threading

class temp:
    def __init__(self):
        self.temp = {}
    def set_data(self, data):
        self.temp['data'] = data
    def get_data(self):
        return self.temp['data']

def process(t):
    # print(t)
    # mydata = threading.local()
    print('before sleep', threading.current_thread(), t.get_data())
    # sleep(random.randint(0,1)*10)
    print('after sleep', threading.current_thread(), t.get_data())

if __name__ == '__main__':
    threads = []
    test = []
    for i in range(0, 4):
        t = temp()
        t.set_data(i)
        threads.append(threading.Thread(target=process, args=(t,)))
        threads[-1].start()
    for t in threads:
        t.join()
</code></pre> <p>I expect the value that I sent to each thread to remain the same after the wait time, but the threads are interfering with each other and giving random output.</p>
<p>Make <code>temp</code> an instance variable of class <code>temp</code>. Put it in <code>__init__</code> as <code>self.temp = {}</code>.</p>
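<p>To illustrate the difference this answer is pointing at, here is a minimal sketch (not taken from the question) contrasting a class-level dictionary, which is shared by every instance and therefore by every thread, with an instance-level dictionary created in <code>__init__</code>:</p> <pre><code>class Shared:
    data = {}           # class attribute: one dict shared by all instances

class PerInstance:
    def __init__(self):
        self.data = {}   # instance attribute: each object gets its own dict

a, b = Shared(), Shared()
a.data['x'] = 1
print(b.data)            # {'x': 1} -- the change leaks into b

c, d = PerInstance(), PerInstance()
c.data['x'] = 1
print(d.data)            # {} -- d is unaffected
</code></pre> <p>The code as posted in the question already uses the <code>__init__</code> form, so if it still misbehaves, the shared state is likely elsewhere (for example a module-level variable or a mutable default).</p>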
python|class|object|parallel-processing
1
1,909,523
56,274,861
Save frames of live video with timestamps
<p>I want to capture video frames with timestamps in real time using a Raspberry Pi. The video is recorded from a USB webcam via an <code>ffmpeg</code> call in Python code. How do I save the frames of the video that is currently being recorded by the USB webcam on the Raspberry Pi?</p> <p>I tried using three OpenCV functions: <code>cv2.VideoCapture</code> to open the video, <code>video.read()</code> to capture a frame, and <code>cv2.imwrite()</code> to save the frame. Here is the code; the imports are omitted for conciseness.</p> <pre><code>os.system('ffmpeg -f v4l2 -r 25 -s 640x480 -i /dev/video0 out.avi')
video = cv2.VideoCapture('out.avi')
ret, frame = video.read()
cv2.imwrite('image' + str(i) + '.jpg', frame)
i += 1
</code></pre> <p>This code saves frames from a video that was previously recorded by the webcam. It does not save frames from the video that is currently being recorded.</p>
<p>As you can read <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#capture-video-from-camera" rel="nofollow noreferrer">here</a>, you can access the camera with <code>camera=cv2.VideoCapture(0)</code>. 0 is the index of the connected camera. You may have to try a different index, but 0 usually works.<br> Similar to a video file, you can use <code>ret, frame = camera.read()</code> to grab a frame. Always check the <code>ret</code> value before continuing to process a frame.<br> Next you can add text to the frame as described <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_drawing_functions/py_drawing_functions.html#adding-text-to-images" rel="nofollow noreferrer">here</a>. You can use <a href="https://stackoverflow.com/a/1557584/10699171">time</a> or <a href="https://docs.python.org/3.6/library/datetime.html#datetime-objects" rel="nofollow noreferrer">datetime</a> to obtain a timestamp. Finally, save the frame.</p> <p>Note: if you use <code>imwrite</code> you will quickly get a LOT of images. Depending on your project you could also consider saving the frames as a video file, as explained <a href="https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#saving-a-video" rel="nofollow noreferrer">here</a>.</p> <p>Edit after comment:</p> <p>This is how you can use <code>time.time()</code>. First import the time module at the top of your code. <code>time.time()</code> returns the number of seconds since <code>January 1, 1970, 00:00:00</code>. So to get a timestamp, you have to store the start time - when the program/video starts running.<br> Then, on every frame, you call time.time() and subtract the start time. The result is the time your program/video has been running. You can use that value for a timestamp.</p> <pre><code>import time

starttime = time.time()

# grab a frame here, e.g. ret, frame = camera.read()
timestamp = time.time() - starttime
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(frame, '{:.2f}'.format(timestamp), (10, 500), font, 4, (255, 255, 255), 2, cv2.LINE_AA)
</code></pre>
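<p>Putting the pieces of this answer together, a minimal end-to-end sketch (the camera index, file names and the 10-second duration are illustrative assumptions, not values from the question) could look like this:</p> <pre><code>import time
import cv2

camera = cv2.VideoCapture(0)          # 0 = first connected camera
font = cv2.FONT_HERSHEY_SIMPLEX
starttime = time.time()
i = 0

while time.time() - starttime &lt; 10:   # capture for ~10 seconds as a demo
    ret, frame = camera.read()
    if not ret:                        # always check ret before using the frame
        break
    elapsed = time.time() - starttime
    cv2.putText(frame, '{:.2f}s'.format(elapsed), (10, 30), font, 1, (255, 255, 255), 2, cv2.LINE_AA)
    cv2.imwrite('frame_{:04d}.jpg'.format(i), frame)
    i += 1

camera.release()
</code></pre>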
python|opencv|raspberry-pi
-2
1,909,524
56,358,994
How to fix a datepicker in python with Selenium
<p>I'm trying to make an auto-registration bot with Python and Selenium. I've got most things working, as they aren't that hard, but at the moment I'm stuck at a datepicker. The code is able to open the date box, but it doesn't select a date. Another problem is that you can't type anything into the date box; you HAVE to select a date from it.</p> <p>I tried various methods I found on Stack Overflow, but nothing works for this site.</p> <p>Site: <a href="https://mobilepanel2.nielsen.com/enrol/home?l=de_de&amp;pid=9" rel="nofollow noreferrer">https://mobilepanel2.nielsen.com/enrol/home?l=de_de&amp;pid=9</a></p> <pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait

b = webdriver.Chrome(r'''C:\Users\Florian\PycharmProjects\Auto_Reg\chromedriver''')
b.get('https://mobilepanel2.nielsen.com/enrol/home?l=de_de&amp;pid=9')

b.find_element_by_xpath("//select[@id='platform']/option[contains(text(),'Android')]").click()
b.find_element_by_xpath("//select[@id='deviceType']/option[contains(text(),'Smartphone')]").click()
b.find_element_by_xpath("//label[contains(text(),'Männlich')]").click()

## until here, everything works fine

select = Select(b.find_element_by_name('birthDate'))
select.select_by_visible_text("13")
</code></pre>
<p>Here you go:</p> <pre><code># click calendar to appear browser.find_element_by_id('birthDateCalendar').click() # get calendar elements calendar = browser.find_elements_by_xpath('//*[@id="ui-datepicker-div"]/table/tbody/tr/td') # click selected day selection = '15' for item in calendar: day = item.get_attribute("innerText") if day == selection: item.click() </code></pre>
python|selenium|selenium-webdriver|datepicker|selenium-chromedriver
0
1,909,525
69,403,345
Python (Numpy Array) - Flipping an image pixel-by-pixel
<p>I have written a code to flip an image vertically pixel-by-pixel. However, the code makes the image being mirrored along the line x = height/2.</p> <p>I have tried to correct the code by setting the range of &quot;i&quot; from (0, h) to (0, h//2) but the result is still the same.</p> <p><a href="https://i.stack.imgur.com/Y6wW0.png" rel="nofollow noreferrer">Original Photo</a> <a href="https://i.stack.imgur.com/bYGoI.png" rel="nofollow noreferrer">Resulted Photo</a></p> <pre><code>#import libraries import numpy as np import matplotlib.pyplot as plt from PIL import Image #read image (set image as m) m = Image.open('lena.bmp') #change image to array (set array as np_array) np_array = np.array(m) #define the width(w) and height(h) of the image h, w = np_array.shape #make the image upside down for i in range(0,h): for j in range(0,w): np_array[i,j] = np_array[h-1-i,j] #change array back to image (set processed image as pil_image) pil_image = Image.fromarray(np_array) #open the processed image pil_image.show() #save the processed image pil_image.save('upsidedown.bmp') </code></pre>
<p>The above given code is replacing the image pixels inplace, that is why the result is a mirrored image. If you want to flip the image pixel by pixel, just create a new array with same shape and then replace pixels in this new array. For example:</p> <pre><code>#import libraries import numpy as np import matplotlib.pyplot as plt from PIL import Image #read image (set image as m) m = Image.open('A-Input-image_Q320.jpg') #change image to array (set array as np_array) np_array = np.array(m) new_np_array = np.copy(np_array) #define the width(w) and height(h) of the image h, w = np_array.shape #make the image upside down for i in range(0,h): for j in range(0,w): new_np_array[i,j] = np_array[h-1-i,j] #change array back to image (set processed image as pil_image) pil_image = Image.fromarray(new_np_array) #open the processed image pil_image.show() #save the processed image pil_image.save('upsidedown.bmp') </code></pre>
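<p>As a side note (not part of the original answer), once the image is a NumPy array the same vertical flip can be done without an explicit double loop, which is both shorter and much faster; a quick sketch:</p> <pre><code>import numpy as np
from PIL import Image

m = Image.open('lena.bmp')
np_array = np.array(m)

flipped = np_array[::-1]          # reverse the row order (same as np.flipud)
Image.fromarray(flipped).save('upsidedown.bmp')
</code></pre>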
python|numpy-ndarray
0
1,909,526
69,470,277
Pandas and Dictionary: How to get all unique values for each key?
<p>I want to build a dictionary such that the value in the key-value pair is every unique value for that key.</p> <p>Consider this example:</p> <pre><code>df = pd.DataFrame({'id': [1, 2, 3, 1, 2, 3], 'vals': ['a1', 'a2', 'a3', 'a2', 'a2a', 'a3a']}) # only yields last entry dict(zip(df['id'], df['vals'])) # results {1: 'a2', 2: 'a2a', 3: 'a3a'} # expected value {1: ['a1', 'a2'], 2: ['a2', 'a2a'], 3: ['a3', 'a3a']} </code></pre>
<p>Use:</p> <pre><code>result = df.groupby(&quot;id&quot;)[&quot;vals&quot;].agg(list).to_dict() print(result) </code></pre> <p><strong>Output</strong></p> <pre><code>{1: ['a1', 'a2'], 2: ['a2', 'a2a'], 3: ['a3', 'a3a']} </code></pre>
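<p>One caveat worth noting: <code>agg(list)</code> keeps duplicates within each group. If the same value can repeat for a key and you really need only the unique ones, a small variant along these lines should work:</p> <pre><code>result = df.groupby("id")["vals"].agg(lambda s: s.unique().tolist()).to_dict()
</code></pre>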
python|pandas
3
1,909,527
55,181,332
Filter multiple columns based on row values in pandas dataframe
<p>I have a pandas dataframe structured as follows:</p> <pre><code>In[1]: df = pd.DataFrame({"A":[10, 15, 13, 18, 0.6], "B":[20, 12, 16, 24, 0.5], "C":[23, 22, 26, 24, 0.4], "D":[9, 12, 17, 24, 0.8 ]})
Out[1]: df
      A     B     C     D
0  10.0  20.0  23.0   9.0
1  15.0  12.0  22.0  12.0
2  13.0  16.0  26.0  17.0
3  18.0  24.0  24.0  24.0
4   0.6   0.5   0.4   0.8
</code></pre> <p>From here my goal is to filter multiple columns based on the values in the last row (index 4). More specifically, I need to keep those columns that have a value &lt; 0.6 in the last row. The output should be a dataframe structured as follows:</p> <pre><code>      B     C
0  20.0  23.0
1  12.0  22.0
2  16.0  26.0
3  24.0  24.0
4   0.5   0.4
</code></pre> <p>I'm trying this:</p> <pre><code>In[2]: df[(df[["A", "B", "C", "D"]] &lt; 0.6)]
</code></pre> <p>but I get the following:</p> <pre><code>Out[2]:
    A    B    C    D
0 NaN  NaN  NaN  NaN
1 NaN  NaN  NaN  NaN
2 NaN  NaN  NaN  NaN
3 NaN  NaN  NaN  NaN
4 NaN  0.5  0.4  NaN
</code></pre> <p>I even tried:</p> <pre><code>df[(df[["A", "B", "C", "D"]] &lt; 0.6).all(axis=0)]
</code></pre> <p>but it gives me an error; it doesn't work.</p> <p>Can anybody help me?</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> with <code>:</code> for return all rows by condition - compare last row by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>DataFrame.iloc</code></a>:</p> <pre><code>df1 = df.loc[:, df.iloc[-1] &lt; 0.6] print (df1) B C 0 20.0 23.0 1 12.0 22.0 2 16.0 26.0 3 24.0 24.0 4 0.5 0.4 </code></pre>
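<p>An equivalent way to write the same selection, in case it reads more naturally, is to build the boolean mask on the last row first (a small variation, not part of the original answer):</p> <pre><code>mask = df.iloc[-1] &lt; 0.6      # boolean Series indexed by the column names
df1 = df.loc[:, mask]
</code></pre>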
python-3.x|pandas|dataframe
3
1,909,528
55,507,235
Django sessions expiring despite calling set_expiry(0)
<p>I'm trying to implement a "remember me" checkbox into django's builtin LoginView, as suggested on <a href="https://stackoverflow.com/questions/15100400/django-remember-me-with-built-in-login-view-and-authentication-form">this question</a>, but even though I call set_expiry(0), the sessions still expire after <code>SESSION_COOKIE_AGE</code>, regardless of the cookie expire date (which is correctly set to 1969).</p> <p>I'm using django 2.1.7 with python 3.7.2, and the only session-related settings on my <code>settings.py</code> is <code>SESSION_COOKIE_AGE</code>, which is set to 5 seconds for resting purposes.</p> <p>Django seems to use a database backend as default. I'm using sqlite for development.</p> <p>This is my view class:</p> <pre class="lang-py prettyprint-override"><code>class UserLoginView(LoginView): form_class = registration.UserLoginForm def form_valid(self, form): remember = form.data.get('remember_me', False) if remember: self.request.session.set_expiry(0) return super(UserLoginView, self).form_valid(form) </code></pre> <p>And this is the <strong>original</strong> LoginView <code>form_valid</code> method (being overriden above)</p> <pre class="lang-py prettyprint-override"><code>class LoginView(SuccessURLAllowedHostsMixin, FormView): ... def form_valid(self, form): """Security check complete. Log the user in.""" auth_login(self.request, form.get_user()) return HttpResponseRedirect(self.get_success_url()) ... </code></pre> <p>As you noticed, I'm using a custom form_class. A very simple override of the default form:</p> <pre><code>class UserLoginForm(AuthenticationForm): remember_me = BooleanField(required=False) </code></pre> <p>If I use a debugger right after the set_expiry call, I can see that the sesion expiry age is still the default 5 seconds:</p> <pre><code>&gt; /project/app/views/accounts.py(64)form_valid() -&gt; return super(UserLoginView, self).form_valid(form) (Pdb) self.request.session.get_expiry_age() 5 </code></pre> <p>I get similar results if I let the request complete and redirect, reach the next view and finally render a template where I have:</p> <pre><code>... {{ request.session.get_expiry_age }} ... </code></pre> <p>The rendered result is also 5 (the current default).</p> <p>Sure enough, after 5 seconds, if you refresh the page, django will take you back to the login screen.</p> <p>What am I doing wrong here? It would be nice if someone could clarify what does "Web browser is closed" means here? <a href="https://docs.djangoproject.com/en/2.2/topics/http/sessions/#django.contrib.sessions.backends.base.SessionBase.set_expiry" rel="nofollow noreferrer">https://docs.djangoproject.com/en/2.2/topics/http/sessions/#django.contrib.sessions.backends.base.SessionBase.set_expiry</a></p>
<p><strong>TL; DR;</strong> Seems like Django does not offer support for infinite or truly undefined expiry session times. Set it to 30 days or greater if you need to extend its validity.</p> <p>From <a href="https://docs.djangoproject.com/en/2.2/topics/http/sessions/#django.contrib.sessions.backends.base.SessionBase.set_expiry" rel="nofollow noreferrer">Django documentation</a>:</p> <blockquote> <p>If value is 0, the user’s session cookie will expire when the user’s Web browser is closed.</p> </blockquote> <p>Although it's not clear here, seems like setting the expiry time to <code>0</code> has a similar behavior when setting it to <code>None</code>: it will fallback to the default session expiry policy. The difference here is that when setting it to <code>0</code>, we're also inferring that the <a href="https://docs.djangoproject.com/en/2.2/topics/http/sessions/#browser-length-vs-persistent-sessions" rel="nofollow noreferrer">session should be expired right after the user closes the browser</a>. In both cases, <code>SESSION_COOKIE_AGE</code> works like a session max-age value.</p> <p>I believe you could set a greater number to turn around this problem, for example, something equivalent to 100 years or more. My personal suggestion is to specify an expiry time of 30 days when the user checks the "remember me" field. When you specify a positive integer greater than zero, Django won't fallback to the <code>SESSION_COOKIE_AGE</code> setting.</p> <p>If you're curious about why you're getting 5 seconds even after specifying an expiry of 0 seconds, here's the source code extracted from the <code>get_expiry_age</code> function:</p> <pre><code>if not expiry: # Checks both None and 0 cases return settings.SESSION_COOKIE_AGE if not isinstance(expiry, datetime): return expiry </code></pre> <p>Final considerations:</p> <ul> <li>there's some room for improvements in the Django documentation</li> <li>seems like refreshing a tab could also invalidate the session</li> </ul>
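<p>For completeness, a "remember me" expiry of 30 days in the view from the question might look like the following sketch (the 30-day figure is just the suggestion above, not a Django default):</p> <pre><code>def form_valid(self, form):
    remember = form.data.get('remember_me', False)
    if remember:
        # Keep the session for 30 days instead of relying on SESSION_COOKIE_AGE
        self.request.session.set_expiry(60 * 60 * 24 * 30)
    return super(UserLoginView, self).form_valid(form)
</code></pre>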
python|django|session
3
1,909,529
42,190,866
ipyparallel strange overhead behavior
<p>Im trying to understand how to do distributed processing with ipyparallel and jupyter notebook, so i did some test and got odd results.</p> <pre><code>from ipyparallel import Client %px import numpy as np rc = Client() dview = rc[:] bview = rc.load_balanced_view() print(len(dview)) print(len(bview)) data = [np.random.rand(10000)] * 4 %time np.sin(data) %%time #45.7ms results = dview.map(np.sin, data) results.get() %%time #110ms dview.push({'data': data}) %px results = np.sin(data) results %%time #4.9ms results = np.sin(data) results %%time #93ms results = bview.map(np.sin, data) results.get() </code></pre> <p>What is the matter with the overhead? Is the task i/o bound in this case and just 1 core can do it better? I tried larger arrays and still got better times with no parallel processing.</p> <p>Thanks for the advice!</p>
<p>The problem seems to be the I/O. <code>push</code> sends the whole set of data to every node. I am not sure about the map function, but most likely it splits the data into chunks that are sent to the nodes; so smaller chunks mean faster processing. The load balancer most likely sends the data and the task twice to the same node, which significantly hurts performance.</p> <p>And how did you manage to send the data in 40 ms? I am used to the HTTP protocol, where the handshake alone takes on the order of a second. For me, 40 ms over the network is lightning fast.</p> <p>EDIT - About the long times (40 ms):</p> <p>In local networks a ping time of 1-10 ms is considered normal. Taking into account that you first need to make a handshake (minimum 2 signals), only then send the data (minimum 1 signal) and then wait for the response (another signal), you are already talking about 20 ms just for connecting two computers. Of course you can try to minimize the ping time to 1 ms and then use a faster MPI protocol, but as I understand it that does not improve the situation significantly - only by about one order of magnitude.</p> <p>Therefore the general recommendation is to use larger jobs. For example, the pretty fast dask.distributed framework (faster than Celery based on benchmarks) recommends task times of more than 100 ms. Otherwise the overhead of the framework starts to outweigh the execution time and the parallelization benefits disappear. <a href="http://distributed.readthedocs.io/en/latest/efficiency.html" rel="nofollow noreferrer">Efficiency on Dask Distributed</a></p>
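<p>As a rough illustration of the "send smaller chunks" idea (a sketch based on the standard ipyparallel API, not code from the question), <code>scatter</code>/<code>gather</code> split the data across engines instead of shipping the whole list to every one:</p> <pre><code>dview.block = True
dview.scatter('chunk', data)              # each engine receives only its slice
dview.execute('result = np.sin(chunk)')   # compute on the local slice
results = dview.gather('result')          # collect the pieces back
</code></pre>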
python|parallel-processing|cluster-computing|overhead
1
1,909,530
42,158,107
Countdown Timer doesn't work
<p>The goal is: if there is motion, the recording starts and the counter (x) begins to decrement every second, but if in the meantime there is more motion, the counter resets to x (for example: 5 seconds).</p> <p>Currently this doesn't work; more specifically, the counter doesn't reset if there is motion during the recording, so every video is 5 seconds long.</p> <pre><code>from picamera import PiCamera
from time import sleep
from gpiozero import MotionSensor
from signal import pause

camera = PiCamera()
sensor = MotionSensor(7)
camera.hflip = True
name = "video.h264"
x = 5 #seconds of record

def countdown(count):
    while (count &gt;= 0):
        print (count)
        count -= 1
        sleep(1)
        if sensor.when_motion == True:
            count = x

def registra_video():
    print ("recording started")
    #camera.start_preview()
    camera.start_recording(name)
    countdown(x)

def stop_video():
    camera.stop_recording()
    #camera.stop_preview()
    print ("recording stopped")

print("Waiting...")
sensor.when_motion = registra_video
sensor.when_no_motion = stop_video
pause()
</code></pre> <p>P.S. I know that I have to write a function that names every video differently, but I will do that later.</p>
<h3>INTRO</h3> <p>To begin with, I am pretty sure that this problem is best solved with a multi-threaded approach for two reasons. First of all, event handlers in general are intended to be small snippets of code that run very quickly in a single thread. Secondly, your specific code is blocking itself in the manner I will describe below.</p> <h3>Current Behavior</h3> <p>Before presenting a solution, let's take a look at your code to see why it does not work.</p> <p>You have a motion sensor that is outputting events when it detects the start and end of a motion. These events happen regardless of anything your code is doing. As you correctly indicated, a <a href="https://gpiozero.readthedocs.io/en/v1.3.1.post1/api_input.html#motion-sensor-d-sun-pir" rel="nofollow noreferrer"><code>MotionSensor</code></a> object will call <a href="https://gpiozero.readthedocs.io/en/v1.3.1.post1/api_input.html#gpiozero.MotionSensor.when_motion" rel="nofollow noreferrer"><code>when_motion</code></a> every time it goes into active state (i.e., when a new motion is detected). Similarly, it will call <a href="https://gpiozero.readthedocs.io/en/v1.3.1.post1/api_input.html#gpiozero.MotionSensor.when_no_motion" rel="nofollow noreferrer"><code>when_no_motion</code></a> whenever the motion stops. The way that these methods are called is that events are added to a queue and processed one-by-one in a dedicated thread. Events that can not be queued (because the queue is full) are dropped and never processed. By default, the queue length is one, meaning that any events that occur while another event is waiting to be processed are dropped.</p> <p>Given all that, let's see what happens when you get a new motion event. First, the event will be queued. It will then cause <code>registra_video</code> to be called almost immediately. <code>registra_video</code> will block for five seconds no matter what other events occurred. Once it is done, another event will be popped off the queue and processed. If the next event is a stop-motion event that occurred during the five second wait, the camera will be turned off by <code>stop_video</code>. The only way <code>stop_video</code> will not be called is if the sensor continuously detects motion for more than five seconds. If you had a queue length of greater than one, another event could occur during the blocking time and still get processed. Let's say this is another start-motion event that occurred during the five second block. It will restart the camera and create another five second video, but increasing the queue length will not alter the fact that the first video will be exactly five seconds long.</p> <p>Hopefully by now you get the idea of why it is not a good idea to wait for the entire duration of the video within your event handler. It prevents you from reacting to the following events on time. In your particular case, you have no way to restart the timer when it is still running since you do not allow any other code to run while the timer is blocking your event processing thread.</p> <h3>Design</h3> <p>So here is a possible solution:</p> <ol> <li>When a new motion is detected (<code>when_motion</code> gets called), start the camera if it is not already running.</li> <li>When a stop-motion is detected (<code>when_no_motion</code> gets called), you have two options: <ol> <li>If a countdown is not running, start it. 
I would not recommend starting a countdown in <code>when_motion</code>, since the motion will be in progress until <code>when_no_motion</code> is called.</li> <li>If the countdown is already running, restart it.</li> </ol></li> </ol> <p>The timer will run in a background thread, which will not interfere with the event processing thread. The "timer" thread can just set the start time, sleep for five seconds and check the start time again. If it is more than five seconds past the start time when it wakes up, it turns off the camera. If the start time was reset by another <code>when_motion</code> call, the thread will go back to sleep for <code>new_start_time + five seconds - current_time</code>. If the timer expires before another <code>when_motion</code> is called, turn off the camera.</p> <h3>Some Threading Concepts</h3> <p>Let's go over some of the building blocks you will need to get the designed solution working.</p> <p>First of all, you will be changing values and reading them from at least two different <a href="https://docs.python.org/3/library/threading.html#thread-objects" rel="nofollow noreferrer">threads</a>. The values I am referring to is the state of the camera (on or off), which will tell you when the timer has expired and needs to be restarted on motion, and the start time of your countdown.</p> <p>You do not want to run into a situation when you have set the "camera is off" flag, but are not finished turning off the camera in your timer thread, while the event processing thread gets a new call to <code>when_motion</code> and decides to restart the camera as you are turning it off. To avoid this, you use <a href="https://docs.python.org/3/library/threading.html#lock-objects" rel="nofollow noreferrer">locks</a>.</p> <p>A lock is an object that will make a thread wait until it can obtain it. So you can lock the entire camera-off operation as a unit until it completes before allowing the event processing thread to check the value of the flag.</p> <p>I will avoid using anything besides basic threads and locks in the code.</p> <h3>Code</h3> <p>Here is an example of how you can modify your code to work with the concepts I have been ranting about ad nauseum. I have kept the general structure as much as I could, but keep in mind that global variables are generally not a good idea. I am using them to avoid going down the rabbit hole of having to explain classes. In fact, I have stripped away as much as I could to present just the general idea, which will take you long enough to process as it is if threading is new to you:</p> <pre><code>from picamera import PiCamera from time import sleep from datetime import datetime from threading import Thread, RLock camera = PiCamera() sensor = MotionSensor(7) camera.hflip = True video_prefix = "video" video_ext = ".h264" record_time = 5 # This is the time from which we measure 5 seconds. start_time = None # This tells you if the camera is on. The camera can be on # even when start_time is None if there is movement in progress. camera_on = False # This is the lock that will be used to access start_time and camera_on. # Again, bad idea to use globals for this, but it should work fine # regardless. 
thread_lock = RLock() def registra_video(): global camera_on, start_time with thread_lock: if not camera_on: print ("recording started") camera.start_recording('{}.{:%Y%m%d_%H%M%S}.{}'.format(video_prefix, datetime.now(), video_ext)) camera_on = True # Clear the start_time because it needs to be reset to # x seconds after the movement stops start_time = None def stop_video(): global camera_on with thread_lock: if camera_on: camera.stop_recording() camera_on = False print ("recording stopped") def motion_stopped(): global start_time with thread_lock: # Ignore this function if it gets called before the camera is on somehow if camera_on: now = datetime.now() if start_time is None: print('Starting {} second count-down'.format(record_time)) Thread(target=timer).start() else: print('Recording to be extended by {:.1f} seconds'.format((now - start_time).total_seconds())) start_time = now def timer(): duration = record_time while True: # Notice that the sleep happens outside the lock. This allows # other threads to modify the locked data as much as they need to. sleep(duration) with thread_lock: if start_time is None: print('Timer expired during motion.') break else: elapsed = datetime.now() - start_time if elapsed.total_seconds() &gt;= record_time: print('Timer expired. Stopping video.') stop_video() # This here is why I am using RLock instead of plain Lock. I will leave it up to the reader to figure out the details. break else: # Compute how much longer to wait to make it five seconds duration = record_time - elapsed print('Timer expired, but sleeping for another {}'.format(duration)) print("Waiting...") sensor.when_motion = registra_video sensor.when_no_motion = motion_stopped pause() </code></pre> <p>As an extra bonus, I threw in a snippet that will append a date-time to your video names. You can read all you need about string formatting <a href="https://docs.python.org/3/library/string.html#format-string-syntax" rel="nofollow noreferrer">here</a> and <a href="https://pyformat.info/#datetime" rel="nofollow noreferrer">here</a>. The second link is a great quick reference.</p>
python-3.x|raspbian|raspberry-pi3
1
1,909,531
59,215,127
LSTM time series - strange val_accuarcy, which normalizing method to use and what to do in production after model is fited
<p>I am making LSTM time series prediction. My data looks like this</p> <p><a href="https://i.stack.imgur.com/QKK9I.png" rel="noreferrer"><img src="https://i.stack.imgur.com/QKK9I.png" alt="enter image description here"></a> So basically what I have is</p> <p><strong>IDTime</strong>: Int for each day</p> <p><strong>TimePart</strong>: 0 = NightTime, 1 = Morning, 2 = Afternoon</p> <p>And 4 columns for values I am trying to predict</p> <p>I have 2686 values, 3x values per day, so around 900 values in total + added new missing values</p> <p>I read and did something like <a href="https://www.tensorflow.org/tutorials/structured_data/time_series" rel="noreferrer">https://www.tensorflow.org/tutorials/structured_data/time_series</a> </p> <ol> <li>ReplacedMissingData - Added missing IDTimes 0-Max, each containing TimePart 0-3 with 0 values (if missing). And replaced all NULL values with 0. I also removed Date parameter, because I have IDTime</li> <li>Set Data (Pandas DataFrame) index as IDTime and TimePart</li> <li>Copied features that I want</li> </ol> <pre><code>features_considered = ['TimePart', 'NmbrServices', 'LoggedInTimeMinutes','NmbrPersons', 'NmbrOfEmployees'] features = data[features_considered] features.index = data.index </code></pre> <ol start="4"> <li>Used Mean/STD on Trained data. I am creating 4 different models for each feature I am trying to predict. I this current one I have set <code>currentFeatureIndex</code> = 1, which is NmbServices</li> </ol> <pre><code> currentFeatureIndex = 1 TRAIN_SPLIT = int(dataset[:,currentFeatureIndex].size * 80 / 100) tf.random.set_seed(13) dataset = features.values data_mean = dataset[:TRAIN_SPLIT].mean(axis=0) data_std = dataset[:TRAIN_SPLIT].std(axis=0) </code></pre> <ol start="5"> <li>I then created dataset. Previous X values with next 3 Future values I want to predict. 
I am using multivariate_data from tensorflow example, with removed steps</li> </ol> <pre><code> x_train_multi, y_train_multi = multivariate_data(dataset, dataset[:,currentFeatureIndex], 0,TRAIN_SPLIT, past_history,future_target) x_val_multi, y_val_multi = multivariate_data(dataset, dataset[:,currentFeatureIndex],TRAIN_SPLIT, None, past_history,future_target) print ('History shape : {}'.format(x_train_multi[0].shape)) print ('\n Target shape: {}'.format(y_train_multi[0].shape)) BATCH_SIZE = 1024 BUFFER_SIZE = 8096 train_data_multi = tf.data.Dataset.from_tensor_slices((x_train_multi, y_train_multi)) train_data_multi =train_data_multi.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat() val_data_multi = tf.data.Dataset.from_tensor_slices((x_val_multi, y_val_multi)) val_data_multi = val_data_multi.batch(BATCH_SIZE).repeat() multi_step_model = tf.keras.models.Sequential() multi_step_model.add(tf.keras.layers.LSTM(32, activation='relu')) multi_step_model.add(tf.keras.layers.Dropout(0.1)) multi_step_model.add(tf.keras.layers.Dense(future_target)) multi_step_model.compile(optimizer=tf.keras.optimizers.RMSprop(clipvalue=1.0), loss='mae', metrics=['accuracy']) EVALUATION_INTERVAL = 200 EPOCHS = 25 currentName = 'test' csv_logger = tf.keras.callbacks.CSVLogger(currentName + '.log', separator=',', append=False) multi_step_history = multi_step_model.fit(train_data_multi, epochs=EPOCHS, steps_per_epoch=EVALUATION_INTERVAL, validation_data=val_data_multi, validation_steps=50, callbacks = [csv_logger]) </code></pre> <p>In this example I also removed first 800 values with data[600:], because data is not as it should be, after replacing missing values.</p> <p>And I get this final value after 25 ecphoes</p> <pre><code> 200/200 [==============================] - 12s 61ms/step - loss: 0.1540 - accuracy: 0.9505 - val_loss: 0.1599 - val_accuracy: 1.0000 </code></pre> <p><strong>Questions</strong>:</p> <ol> <li><p>Why is it that the val_accuracy is always 1.0? This happens for most of the features</p></li> <li><p>I also tried normalizing values from 0-1 with:</p> <p><em>features.loc[:,'NmbrServices'] / features.loc[:,'NmbrServices'].max()</em> and I get:</p> <p>200/200 [==============================] - 12s 60ms/step - loss: 0.0461 - accuracy: 0.9538 - val_loss: 0.0434 - val_accuracy: 1.0000</p> <p>For this feature, I use here, it looks better using feature/featureMax, but for other features I can get: Using mean/std:</p> <ul> <li>loss: 0.1461 - accuracy: 0.9338 - val_loss: 0.1634 - val_accuracy: 1.0000</li> </ul> <p>And when using feature / featureMax, I get:</p> <ul> <li>loss: 0.0323 - accuracy: 0.8523 - val_loss: 0.0463 - val_accuracy: 1.0000</li> </ul> <p>In this case, which one is better? The one with higher accuracy or the one with lower losses?</p></li> <li><p>If I get some good Val_loss and Train_loss at around 8 epochs and then it goes up, can I then just train model until 8 epochs an save it?</p></li> <li><p>In the end I save model in H5 format and load it, because I want to predict new values for the next day, using last 45 values for prediction. How can I then fit this new data to the model. Do you just call model.fit(newDataX, newDataY)? 
Or do you need to compile it again on new data?</p> <p>4.1 How many times should you rerun this model then if you ran it on Year 2016-2018 and u are currently in year 2020, should you for example recompile it once per year with data from 2017-2019?</p></li> <li><p>Is it possible to predict multiple features for next day or is it better to use multiple models?</p></li> </ol>
<p>I would suggest you use <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization" rel="nofollow noreferrer">batch normalization</a>, and it is entirely up to you whether you use a <strong>Vanilla LSTM</strong> or a <strong>Stacked LSTM</strong>.</p> <p>I would also recommend you go through <a href="https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/" rel="nofollow noreferrer">this</a>.</p>
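<p>As a rough sketch of what adding batch normalization to the question's model could look like (this only illustrates the layer placement, it is not a tuned architecture):</p> <pre><code>multi_step_model = tf.keras.models.Sequential()
multi_step_model.add(tf.keras.layers.LSTM(32, activation='relu'))
multi_step_model.add(tf.keras.layers.BatchNormalization())   # normalize the LSTM outputs
multi_step_model.add(tf.keras.layers.Dropout(0.1))
multi_step_model.add(tf.keras.layers.Dense(future_target))
</code></pre>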
python|tensorflow|machine-learning|keras|lstm
1
1,909,532
53,999,962
How to alter return function/change variable using a function?
<p>I was wondering how I can have a changing variable from a function.</p> <p>I attempted:</p> <pre><code>class Text(): File=open("SomeFile.txt", "r") MyText=(File.read()+MoreText) def AddMoreText(): MoreText=("This is some more text") </code></pre> <p>before realising that I needed to run the <code>MyText</code> variable again which I'm not sure how to do.</p> <p>I intend to call this text by running something along the lines of <code>print(Text.MyText)</code> which doesn't update after running <code>Text.AddMoreText()</code></p> <p>I then tried:</p> <pre><code>class Text(): global MoreText File=open("SomeFile.txt", "r") def ChangeTheText(): return(File.read()+MoreText) MyText=ChangeTheText() def AddMoreText(): MoreText=("This is some more text") </code></pre> <p>What I didn't know was that the return function preserves its value so when I ran <code>print(Text.MyText)</code> <code>Text.AddMoreText()</code> <code>print(Text.MyText)</code> it displayed the same text twice.</p>
<p>I think you want something like:</p> <pre><code>class Text:
    def __init__(self):
        self.parts = []
        with open('SomeFile.txt', 'r') as contents:
            self.parts.append(contents.read())
        self.parts.append('More text')

    def add_more_text(self, text):
        self.parts.append(text)

    @property
    def my_text(self):
        return ''.join(self.parts)
</code></pre> <p>This makes <code>.my_text</code> a dynamic <em>property</em> that will be re-computed each time <code>.my_text</code> is retrieved. </p>
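<p>A short usage sketch of the class above (the file name is the one from the question; the printed output depends on the file's contents):</p> <pre><code>t = Text()
print(t.my_text)            # contents of SomeFile.txt plus 'More text'

t.add_more_text('This is some more text')
print(t.my_text)            # recomputed: now includes the extra sentence
</code></pre>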
python|function|class|variables|return
0
1,909,533
58,191,094
Logging into EventBrite with Scrapy
<p>I'm looking to learn more about how Scrapy can be used to login to websites. I looked at some documentations and tutorials and ended up at <a href="https://docs.scrapy.org/en/latest/topics/request-response.html#using-formrequest-from-response-to-simulate-a-user-login" rel="nofollow noreferrer">Using FormRequest.from_response() to simulate a user login</a>. Using Chrome dev tools, I look at the "login" response after logging in from the page <a href="https://eventbrite.ca/signin/login" rel="nofollow noreferrer">https://eventbrite.ca/signin/login</a>. </p> <p>Some things that may be important to note is that when attempting to login in browser, the web page will direct you to <a href="https://eventbrite.ca/signin" rel="nofollow noreferrer">https://eventbrite.ca/signin</a>, where you enter your email and submit the form. </p> <p>This sends a POST request to <a href="https://www.eventbrite.ca/api/v3/users/lookup/" rel="nofollow noreferrer">https://www.eventbrite.ca/api/v3/users/lookup/</a> with just the email provided, and if all is dandy, the webpage will use JS to "redirect" you to <a href="https://www.eventbrite.ca/ajax/login/" rel="nofollow noreferrer">https://eventbrite.ca/signin/login</a> and generate the "password" input element. </p> <p>Once you fill your password and hit the form button, if successful, it will then redirect+generate the login response as a result of POST sent to <a href="https://www.eventbrite.ca/ajax/login/" rel="nofollow noreferrer">https://www.eventbrite.ca/ajax/login/</a> with email, pw, and some other info (which can be found in my code snippet). </p> <p>First I tried doing it step by step: going from .ca/signup, sending a POST with my email to the lookup endpoint, but I get a 401 error. Next I tried directly going to .ca/signup/login, and submitting all the info found in the login response, but receive 403.</p> <p>I'm sure I must be missing something, though it seems I am POSTing to the correct URLs and finding the correct form, but can't figure out what's left. Also after trying this for a while, wondering if Selenium would provide a better alternative for logging in and doing some automation on a web page that has loads of JS. Any help appreciated.</p> <pre><code>def login(self, response): yield FormRequest.from_response( response, formxpath="//form[(@novalidate)]", url='https://www.eventbrite.ca/ajax/login/', formdata={ 'email': 'email@email.com', 'password': 'password', 'forward':'', 'referrer': '/', 'pckg': '', 'stld': '' }, callback=self.begin_event_parse ) </code></pre> <p>.ca/signup/login attempt (403):</p> <pre><code> [scrapy.core.engine] DEBUG: Crawled (403) &lt;POST https://www.eventbrite.ca/ajax/login/&gt; (referer: https://www.eventbrite.ca/signin/login) </code></pre> <p>.ca/signup attempt (401):</p> <pre><code>[scrapy.core.engine] DEBUG: Crawled (401) &lt;POST https://www.eventbrite.ca/api/v3/users/lookup/&gt; (referer: https://www.eventbrite.ca/signin/login) </code></pre>
<p>It looks like you are missing the <code>X-CSRFToken</code> in your headers. This token is used to protect the resource from Cross-site Request Forgery.</p> <p>In this case, it is provided in the cookies, and you need to store it and pass it along.</p> <p>A simple implementation that works for me:</p> <pre><code>import re import scrapy class DarazspidySpider(scrapy.Spider): name = 'darazspidy' def start_requests(self): yield scrapy.Request('https://www.eventbrite.ca/signin/?referrer=%2F%3Finternal_ref%3Dlogin%26internal_ref%3Dlogin%26internal_ref%3Dlogin', callback=self.lookup) def lookup(self, response): yield scrapy.FormRequest( 'https://www.eventbrite.ca/api/v3/users/lookup/', formdata={"email":"email@mail-v.net"}, headers={'X-CSRFToken': self._get_xcsrf_token(response),}, callback=self.login, ) def _get_xcsrf_token(self, response): cookies = response.headers.getlist('Set-Cookie') cookie, = [c for c in cookies if 'csrftoken' in str(c)] self.token = re.search(r'csrftoken=(\w+)', str(cookie)).groups()[0] return self.token def login(self, response): yield scrapy.FormRequest( url='https://www.eventbrite.ca/ajax/login/', formdata={ 'email': 'email@mail-v.net', 'password': 'pwd', 'forward':'', 'referrer': '/?internal_ref=login&amp;internal_ref=login', 'pckg': '', 'stld': '' }, callback=self.parse, headers={'X-CSRFToken': self.token} ) def parse(self, response): self.logger.info('Logged in!') </code></pre> <p>Ideally, you'd want to create a middleware to do that for you.</p> <p>Generally, when you face this kind of behavior, you want to try to mimic what the browser is sending as close as possible, so look at the headers closely and try to replicate them.</p>
python|authentication|scrapy|eventbrite
0
1,909,534
58,473,998
Find area with content and get its bounding rect
<p>I'm using OpenCV 4 - Python 3 - to find a specific area in a black &amp; white image.</p> <p>This area is not a 100% filled shape. It may have some gaps between the white lines.</p> <p>This is the base image from which I start processing: <img src="https://i.imgur.com/SU1RjTG.png" alt="base"></p> <p>This is the rectangle I expect - made with Photoshop: <img src="https://i.imgur.com/wQn7Jqw.png" alt="expected"></p> <p>Results I got with Hough transform lines - not accurate: <img src="https://i.imgur.com/Ly2gO4p.png" alt="wrong result"></p> <p>So basically, I start from the first image and I expect to find what you see in the second one.</p> <p>Any idea how to get the rectangle of the second image?</p>
<p>I'd like to present an approach which might be computationally less expensive than the solution in <a href="https://stackoverflow.com/a/58475504/11089932">fmw42's answer</a> only using NumPy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.nonzero.html" rel="nofollow noreferrer"><code>nonzero</code></a> function. Basically, all non-zero indices for both axes are found, and then the minima and maxima are obtained. Since we have binary images here, this approach works pretty well.</p> <p>Let's have a look at the following code:</p> <pre class="lang-py prettyprint-override"><code>import cv2 import numpy as np # Read image as grayscale; threshold to get rid of artifacts _, img = cv2.threshold(cv2.imread('images/LXSsV.png', cv2.IMREAD_GRAYSCALE), 0, 255, cv2.THRESH_BINARY) # Get indices of all non-zero elements nz = np.nonzero(img) # Find minimum and maximum x and y indices y_min = np.min(nz[0]) y_max = np.max(nz[0]) x_min = np.min(nz[1]) x_max = np.max(nz[1]) # Create some output output = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) cv2.rectangle(output, (x_min, y_min), (x_max, y_max), (0, 0, 255), 2) # Show results cv2.imshow('img', img) cv2.imshow('output', output) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p>I borrowed the cropped image from fmw42's answer as input, and my output should be the same (or most similar):</p> <p><a href="https://i.stack.imgur.com/pmtih.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pmtih.png" alt="Output"></a></p> <p>Hope that (also) helps!</p>
python|opencv|image-processing
4
1,909,535
22,476,166
coverage of django application deployed on production server
<p>Can anyone please tell me how to measure the coverage of a Django application deployed on Apache? I want to hook coverage.py into the deployed Django application.</p>
<p>I think you are referring to Ned Batchelder's excellent coverage.py.</p> <p><a href="http://nedbatchelder.com/code/coverage/" rel="nofollow">http://nedbatchelder.com/code/coverage/</a></p> <p>Why don't you make use of <a href="https://pypi.python.org/pypi/django-coverage" rel="nofollow">https://pypi.python.org/pypi/django-coverage</a>?</p>
django|apache|python-2.7|code-coverage|coverage.py
0
1,909,536
45,716,212
Keep Getting ZeroDivisionError Whenever Using Modulo
<p>So I am working on a problem which needs me to get the factors of a certain number. As always, I am using the modulo operator <code>%</code> to see if a number is divisible by another number, i.e. whether the remainder is equal to zero. But whenever I try to do this I keep getting a <code>ZeroDivisionError</code>. I tried changing the loop like this so that Python does not start counting from zero but instead starts counting from one: <code>for potenial in range(number + 1):</code>. But this does not seem to work. Below is the rest of my code; any help will be appreciated.</p> <pre><code>def Factors(number): factors = [] for potenial in range(number + 1): if number % potenial == 0: factors.append(potenial) return factors </code></pre>
<p>In your for loop you are iterating from 0 (range() assumes the starting number to be 0 if only one argument is given) up to "number". There is a ZeroDivisionError because you are trying to calculate number modulo 0 (number % 0) at the start of the for loop. When calculating the modulo, Python tries to divide number by 0, causing the ZeroDivisionError. Here is the corrected code (with the indentation fixed):</p> <pre><code>def get_factors(number):
    factors = []
    for potential in range(1, number + 1):
        if number % potential == 0:
            factors.append(potential)
    return factors
</code></pre> <p>However, there are better ways of calculating factors. For example, you can iterate only up to sqrt(n), where n is the number, and then calculate "factor pairs": e.g. if 3 is a factor of 15, then 15/3, which is 5, is also a factor of 15. I encourage you to try to implement a more efficient algorithm (a sketch is included below).</p> <p>Stylistic note: according to PEP 8, function names should be lowercase with words separated by underscores. Uppercase names generally indicate class definitions.</p>
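<p>For reference, a minimal sketch of the sqrt(n) approach mentioned above (my own illustration, not part of the original answer):</p> <pre><code>def get_factors_fast(number):
    factors = set()
    i = 1
    while i * i &lt;= number:
        if number % i == 0:
            factors.add(i)
            factors.add(number // i)   # the paired factor
        i += 1
    return sorted(factors)

print(get_factors_fast(15))   # [1, 3, 5, 15]
</code></pre>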
python-3.x|module
1
1,909,537
14,709,717
Determining whether a number is prime or not
<p>I know it's been discussed many times; I've read it, but somehow I can't get it. I want to write a program that determines if the entered number is prime or not.</p> <p>One of the implementations I found somewhere on the Internet:</p> <pre><code>from math import * def main(): n = abs(input("Enter a number: ")) i = 2 msg = 'is a prime number.' while i &lt;= sqrt(n): if n % i == 0: msg = 'is not a prime number.' i = i + 1 print n, msg main() </code></pre> <p>A couple of questions here:</p> <ul> <li>In the above, what is <code>i</code>, and why does it have a starting value of <code>2</code>?</li> <li>What does <code>i = i + 1</code> do in this program?</li> <li>How does the interpreter know when to print <code>'is a prime number.'</code> even though it is out of the body loop? </li> </ul>
<p>A prime number is a number that's only divisible by 1 and itself. The method it's using is to try dividing your candidate number <code>n</code> by every other number from 2 up to itself; however if any number <code>i</code> is a divisor of your number <code>n</code> then so is <code>n / i</code> and at least one of them is less than or equal to <code>sqrt(n)</code> therefore we need only test up to <code>sqrt(n)</code> inclusive. In practice we need only test the divisors that are actually prime themselves but since we don't have a list of primes to hand we'll test every one.</p> <blockquote> <p>what in the above <code>i</code> is? and why it got a 2 starting value?</p> </blockquote> <p><code>i</code> is the potential factor of <code>n</code> we're testing. It starts with 2 because we don't care if 1 divides <code>n</code> (and trivially it will) because the prime definition allows / expects that.</p> <blockquote> <p>what is the i = i + 1 statement, in this concrete example for? Can't see its use in the program.</p> </blockquote> <p>It's incrementing the <code>i</code> value at the end of the loop defined by the <code>while i &lt;= sqrt(n)</code>; it means we advance <code>i</code> to test the next candidate divisor of <code>n</code>.</p> <blockquote> <p>and finally, how python knows when to print 'is a prime number.' although it is out of the body loop?</p> </blockquote> <p>We initialise <code>msg</code> to "is a prime number" and if we find any divisor then we change it to "is not a prime number" inside the loop. If the loop doesn't find a divisor, or if the loop never runs, we'll use the initial value we set which is "is a prime number". Incidentally you could <code>break</code> out of the loop when you find a divisor; there's no point carrying on the test after that.</p> <p>As another aside you probably want to compute <code>sqrt(n)</code> outside the while and store than in a variable to use in the <code>while</code> - you may be recalculating the square root for every iteration, which is relatively expensive.</p>
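<p>Following the two suggestions at the end of this answer (break out early and compute the square root once), a lightly reworked version of the question's loop might look like this sketch (kept in the question's Python 2 style):</p> <pre><code>from math import sqrt

def check_prime(n):
    msg = 'is a prime number.'
    limit = sqrt(n)          # computed once instead of on every iteration
    i = 2
    while i &lt;= limit:
        if n % i == 0:
            msg = 'is not a prime number.'
            break            # a divisor was found, no need to keep looking
        i = i + 1
    print n, msg
</code></pre>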
python|python-2.7|nested-loops|primality-test
4
1,909,538
25,456,969
How to access Django Test database to debug?
<p>Django tests are very helpful. However, when it's time to debug it's more complicated.</p> <p>I would like to:</p> <ul> <li>The test database does not disapear at the end of the tests suite to analyse it</li> <li>Be able to read in this database, using my graphical DB Manager (Navicat, pgAdmin, etc.) (which is more friendly than command line)</li> </ul> <p>How to do this? Thanks!</p>
<p>The <a href="https://github.com/ericholscher/django-test-utils" rel="nofollow">django-test-utils</a> app includes a <a href="https://django-test-utils.readthedocs.org/en/latest/keep_database_runner.html" rel="nofollow">Persistent Database Test Runner</a> to achieve this. I haven't tested the app myself though.</p>
python|django|unit-testing
0
1,909,539
44,468,676
MemoryError while counting edges in graph using Networkx
<p>My initial goal was to do some structural property analysis (diameter, clustering coefficient etc.) using Networkx. However, I stumbled already by simply trying to count how many edges there are present in the given graph. This graph, which can be downloaded <a href="https://snap.stanford.edu/data/soc-pokec-relationships.txt.gz" rel="nofollow noreferrer">from over here (beware: 126 MB zip file)</a> consists of 1,632,803 nodes and 30,622,564 edges. <em>Please note, if you want to download this file, make sure to remove the comments from it (including the #) which are placed on top of the file</em></p> <p>I have 8 GB of memory in my machine. Are my plans (diameter/clustering coefficient) too ambitious for a graph of this size? I hope not, because I like networkx due to its simplicity and it just seems complete.. If it is ambitious however, could you please advice another library that I can use for this job? </p> <pre><code>import networkx as nx graph = nx.Graph() graph.to_directed() def create_undirected_graph_from_file(path, graph): for line in open(path): edges = line.rstrip().split() graph.add_edge(edges[0], edges[1]) print(create_undirected_graph_from_file("C:\\Users\\USER\\Desktop\\soc-pokec-relationships.txt", graph).g.number_of_edges()) </code></pre> <p>Error:</p> <pre><code>Traceback (most recent call last): File "C:/Users/USER/PycharmProjects/untitled/main.py", line 12, in &lt;module&gt; print(create_undirected_graph_from_file("C:\\Users\\USER\\Desktop\\soc-pokec-relationships.txt", graph).g.number_of_edges()) File "C:/Users/User/PycharmProjects/untitled/main.py", line 8, in create_undirected_graph_from_file edges = line.rstrip().split() MemoryError </code></pre>
<p>One potential problem is that strings have a large memory footprint. Since all of your edges are integers, you can benefit by converting them to ints before creating the edges. You'll benefit from faster tracking internally and also have a lower memory footprint! Specifically:</p> <pre><code>def create_undirected_graph_from_file(path, graph):
    for line in open(path):
        a, b = line.rstrip().split()
        graph.add_edge(int(a), int(b))
    return graph
</code></pre> <p>I'd also recommend changing your <code>open</code> to use a context manager, which ensures the file gets closed:</p> <pre><code>def create_undirected_graph_from_file(path, graph):
    with open(path) as f:
        for line in f:
            a, b = line.rstrip().split()
            graph.add_edge(int(a), int(b))
    return graph
</code></pre> <p>Or the magic one-liner:</p> <pre><code>def create_undirected_graph_from_file(path, graph):
    with open(path) as f:
        [graph.add_edge(*(int(point) for point in line.rstrip().split())) for line in f]
    return graph
</code></pre> <p>One more thing to keep in mind: <code>Graph.to_directed</code> returns a new graph, so be sure to assign the result to <code>graph</code> instead of throwing it away.</p>
python|graph|networkx
2
1,909,540
24,021,477
Matplotlib tripcolor bug?
<p>I want to use tripcolor from matplotlib.pyplot to view the colored contours of some of my data.</p> <p>The data is extracted from an XY plane at z=cst using Paraview. I directly export the data in csv from Paraview which triangulates the plane for me.</p> <p>The problem is that depending on the plane position (ie the mesh) tripcolor gives me sometimes good or bad results. </p> <p>Here is a simple example code and results to illustrate it:</p> <p><strong>Code</strong></p> <pre><code>import matplotlib.pyplot as plt import numpy as np p,u,v,w,x,y,z = np.loadtxt('./bad.csv',delimiter=',',skiprows=1,usecols=(0,1,2,3,4,5,6),unpack=True) NbLevels = 256 plt.figure() plt.gca().set_aspect('equal') plt.tripcolor(x,y,w,NbLevels,cmap=plt.cm.hot_r,edgecolor='black') cbar = plt.colorbar() cbar.set_label('Velocity magnitude',labelpad=10) plt.show() </code></pre> <p><strong>Results with tripcolor</strong></p> <p><img src="https://i.stack.imgur.com/wVVHi.png" alt="enter image description here"></p> <p>Here is the <a href="http://dl.free.fr/ggxUYIc7M" rel="nofollow noreferrer">file</a> that causes the problem. </p> <p>I've heard that matplotlib's tripcolor is sometimes buggy, so is it a bug or not ?</p>
<p>As highlighted by @Hooked this is the normal behaviour for a Delaunay triangulation. To remove unwanted triangles you should provide your own <code>Triangulation</code> by passing explicitly the triangles.</p> <p>This is quite easy in your case as your data is almost structured: I suggest performing a Delaunay triangulation in the plane (r, theta) then passing these triangles to the initial (x, y) arrays. You can make use of the the built-in <code>TriAnalyzer</code> class to remove very <em>flat</em> triangles from the (r, theta) triangulation (they might exists due to round-off errors).</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import matplotlib.tri as mtri p,u,v,w,x,y,z = np.loadtxt('./bad.csv',delimiter=',',skiprows=1,usecols=(0,1,2,3,4,5,6),unpack=True) r = np.sqrt(y**2 + x**2) tan = (y / x) aux_tri = mtri.Triangulation(r/np.max(r), tan/np.max(tan)) triang = mtri.Triangulation(x, y, aux_tri.triangles) triang.set_mask(mtri.TriAnalyzer(aux_tri).get_flat_tri_mask()) NbLevels = 256 plt.figure() plt.gca().set_aspect('equal') plt.tripcolor(triang, w, NbLevels, cmap=plt.cm.jet, edgecolor='black') cbar = plt.colorbar() cbar.set_label('Velocity magnitude',labelpad=10) plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/uso8n.png" alt="enter image description here"></p>
python|matplotlib|contour|triangulation
7
1,909,541
20,484,394
How to make my python integration faster?
<p>Hi, I want to integrate a function from 0 to several different upper limits (around 1000). I have written a piece of code to do this using a for loop and appending each value to an empty array. However I realise I could make the code faster by doing smaller integrals and then adding the previous integral result to the one just calculated. So I would be doing the same number of integrals, but over a smaller interval, then just adding the previous integral to get the integral from 0 to that upper limit. Here's my code at the moment:</p> <pre><code>import numpy as np #importing all relevant modules and functions from scipy.integrate import quad import pylab as plt import datetime t0=datetime.datetime.now() #initial time num=np.linspace(0,10,num=1000) #setting up array of values for t Lv=np.array([]) #empty array that values for L(t) are appended to def L(t): #defining function for L return np.cos(2*np.pi*t) for g in num: #setting up for loop to do integrals for L at the different values for t Lval,x=quad(L,0,g) #using the quad function to get the values for L. quad takes the function, where to start the integral from, where to end the integration Lv=np.append(Lv,[Lval]) #appending the different values for L at different values for t </code></pre> <p>What changes do I need to make to do the optimisation technique I've suggested?</p>
<p>Basically, we need to keep track of the previous values of <code>Lval</code> and <code>g</code>. 0 is a good initial value for both, since we want to start by adding 0 to the first integral, and 0 is the start of the interval. You can replace your for loop with this:</p> <pre><code>last, lastG = 0, 0 for g in num: Lval,x = quad(L, lastG, g) last, lastG = last + Lval, g Lv=np.append(Lv,[last]) </code></pre> <p>In my testing, this was noticeably faster.</p> <p>As @askewchan points out in the comments, this is even faster:</p> <pre><code>Lv = [] last, lastG = 0, 0 for g in num: Lval,x = quad(L, lastG, g) last, lastG = last + Lval, g Lv.append(last) Lv = np.array(Lv) </code></pre>
python|optimization|for-loop|scipy|physics
5
1,909,542
36,204,575
Shared memory cache for non-serialized data
<p>I have a (Django) web app that needs to construct large (numpy) arrays, let's say 1MB per vector. It works on several processes (spawned by Apache/mod_wsgi).</p> <p>For the moment I am using an <strong>in-memory cache</strong>, whose simplest version is a global variable. Retrieving the data from cache is instantaneous - all I need. However, each process needs to replicate the cache in its own memory, and it is <a href="https://stackoverflow.com/questions/36200096/cache-randomly-removing-items/36200479?noredirect=1#comment60038283_36200479">unpredictable</a> which process has the data loaded and which hasn't (I want to load it once and for all at startup).</p> <p>I tried <strong>Memcached</strong> and <strong>Redis</strong> to have a shared cache among processes. Both need the data to be serialized first: strings and ints only. Now, de-serializing when I want to read a vector takes about 10s, a bit long for a user waiting after clicking a button. </p> <p>Isn't there any solution that can at the same time store some arbitrary data in RAM without serializing to string, and have it shared among different processes? (I am not interested in persistence after restart.)</p>
<p>Redis supports many <a href="http://redis.io/topics/data-types" rel="nofollow">data types</a>, including raw bytes</p> <blockquote> <p>Strings are the most basic kind of Redis value. <strong>Redis Strings are binary safe, this means that a Redis string can contain any kind of data</strong>, for instance a JPEG image or a serialized Ruby object.</p> </blockquote> <p>Redis is proven to be fast, so maybe your focus should be on an efficient serialization format that deserializes quickly, e.g.</p> <ul> <li><a href="https://github.com/lebedov/msgpack-numpy" rel="nofollow">https://github.com/lebedov/msgpack-numpy</a></li> <li><a href="https://developers.google.com/protocol-buffers/docs/pythontutorial#why-use-protocol-buffers" rel="nofollow">https://developers.google.com/protocol-buffers/docs/pythontutorial#why-use-protocol-buffers</a></li> <li><a href="http://slides.zetatech.org/haenel-bloscpack-talk-2014-PyDataBerlin.pdf" rel="nofollow">http://slides.zetatech.org/haenel-bloscpack-talk-2014-PyDataBerlin.pdf</a></li> </ul>
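<p>For example, since the vectors are numpy arrays, you can skip pickling entirely and push the raw bytes into Redis. This is only a minimal sketch (it assumes the redis-py client, a Redis server on localhost, and placeholder key names), but the round trip is essentially a memcpy:</p> <pre><code>import numpy as np
import redis

r = redis.Redis(host='localhost', port=6379)  # assumes a local Redis instance

arr = np.random.rand(128, 1024)  # stand-in for one of your ~1MB vectors

# store: raw bytes plus just enough metadata to rebuild the array
r.set('my_vector', arr.tobytes())
r.set('my_vector:dtype', str(arr.dtype))
r.set('my_vector:shape', ','.join(map(str, arr.shape)))

# load: reconstruct the array without any expensive pickle/JSON step
raw = r.get('my_vector')
dtype = np.dtype(r.get('my_vector:dtype').decode())
shape = tuple(int(n) for n in r.get('my_vector:shape').decode().split(','))
restored = np.frombuffer(raw, dtype=dtype).reshape(shape)
</code></pre>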
python|django|caching|redis|memcached
2
1,909,543
49,755,144
Reading CSV with comma at last line
<p>i'm using Python to read in a series of CSVs that were obtained via a web scraper (there's thousands so editing by hand is a no go). The data looks like this:</p> <pre><code>"Client: Secret Client" "G/L Account: (#-#-#) Secret Type of Account" "Process Date: MM/DD/YYYY" "Export Date: MM/DD/YYYY" "Unit Name ","Description","Pay. Type ","Amount","Tran. Date " "last, first","some note (dates with commas like 17 Aug, 2018 could be here)","Credit Card ","$AMNT.CHANGE","Date and Timestamp" "Total","","","$AMNT.CHANGE"," </code></pre> <p>If you count carefully you'll see a final comma followed by a rogue ". The code I'm trying to use is here:</p> <pre><code>import os import pandas as pd import csv def read_temp(file): tmp = pd.read_csv(file, header=None, error_bad_lines=False, quotechar='"', skiprows=5, quoting=csv.QUOTE_ALL,skipinitialspace=True, skipfooter=1) gl = pd.read_csv(file, header=None, error_bad_lines=False, quotechar='"', skiprows=1, nrows=1, quoting=csv.QUOTE_ALL,skipinitialspace=True) proc_date = pd.read_csv(file, header=None, error_bad_lines=False, quotechar='"', skiprows=2, nrows=1, quoting=csv.QUOTE_ALL,skipinitialspace=True) cols = ['NAME', 'DESCRIPTION', 'PAY_TYP', 'AMOUNT', 'TRAN_DATE'] tmp.columns = cols # print(tmp.columns) # print(file) tmp['G/L_ACCOUNT'] = gl[0][0].split(':')[1] tmp['PROCESS_DATE'] = proc_date[0][0].split(':')[1] for col in tmp.columns: tmp[col] = tmp[col].str.strip('"') return tmp master = "C:\\path\\to\\master\\" want=[] flag = 0 for direc in os.listdir(master): for file in os.listdir(master+direc): temp = read_temp(master+direc+'\\'+file) want.append(temp) df = pd.concat(want) </code></pre> <p>the error is: </p> <pre><code>',' expected after '"' </code></pre> <p>I think if I could use a CSV Reader and regular expressions (which I have zero experience with) to read each line before hand and find everything that's surrounded by " " then I could change it somehow or posisbly delete that ending comma and double quote. Any ideas would be appreciated!</p>
<p>A quick test with the <a href="https://docs.python.org/3.6/library/csv.html" rel="nofollow noreferrer"><code>csv</code></a> module does not fail</p> <pre><code>import csv data = """"Client: Secret Client" "G/L Account: (#-#-#) Secret Type of Account" "Process Date: MM/DD/YYYY" "Export Date: MM/DD/YYYY" "Unit Name ","Description","Pay. Type ","Amount","Tran. Date " "last, first","some note (dates with commas like 17 Aug, 2018 could be here)","Credit Card ","$AMNT.CHANGE","Date and Timestamp" "Total","","","$AMNT.CHANGE"," """ reader = csv.reader(data.split("\n"), delimiter=',', quotechar='"') for row in reader: print(', '.join(row)) </code></pre> <p>but also get "confused" by the last, incomplete element:</p> <pre><code>Client: Secret Client G/L Account: (#-#-#) Secret Type of Account Process Date: MM/DD/YYYY Export Date: MM/DD/YYYY Unit Name , Description, Pay. Type , Amount, Tran. Date last, first, some note (dates with commas like 17 Aug, 2018 could be here), Credit Card , $AMNT.CHANGE, Date and Timestamp Total, , , $AMNT.CHANGE, </code></pre> <p>But you could just remove the offending characters from your data, e.g. with <a href="https://docs.python.org/3.6/library/stdtypes.html#str.rfind" rel="nofollow noreferrer"><code>rfind</code></a> and "<a href="https://docs.python.org/3.6/tutorial/introduction.html" rel="nofollow noreferrer">slicing</a>":</p> <pre><code>pos = data.rfind(',"', -5) if pos != -1: data = data.strip()[:pos] print( data[-15:] ) </code></pre> <p>should print <code>,"$AMNT.CHANGE"</code>. It searches for <code>,"</code> on the last 5 characters of the string. If it is found, the position is returned, which is used to remove the respective characters (or rather, return a string without them).</p> <p>The <code>strip()</code> is just to remove any newline (introduced by embedding your data with a string literal """).</p> <p>Alternatively, if the problem is <em>always</em> those two extra characters, you could <em>slice</em> them off by providing a negative slice index, e.g. <code>data[:-2]</code></p> <p>No real need for a <a href="https://docs.python.org/3.6/library/re.html" rel="nofollow noreferrer">regular expression</a>, however</p> <pre><code>import re data = re.sub(",\"?$", "", data, 1) </code></pre> <p>would do the trick, and it also works in case there is just a trailing <code>,</code>. You can <a href="https://regex101.com/r/iNECJY/1" rel="nofollow noreferrer">play with this on regex101.com</a> which also explains what the expression does.</p> <p>Now pandas should not have any trouble parsing the data.</p>
python|pandas|csv
1
1,909,544
49,525,317
how to strip the beginning of a file with python library re.sub?
<p>I'm happy to ask my first Python question! I would like to strip the beginning (the part before the first occurrence of the article) of the sample file below. To do this I use the re.sub function from the re library.</p> <p>Below is my file sample.txt:</p> <pre><code>fdasfdadfa adfadfasdf afdafdsfas adfadfadf adfadsf afdaf article: name of the first article aaaaaaa aaaaaaa aaaaaaa article: name of the first article bbbbbbb bbbbbbb bbbbbbb article: name of the first article ccccccc ccccccc ccccccc </code></pre> <p>And my Python code to parse this file:</p> <pre><code>test = '' for line in open('sample.txt'): test = test + line result = re.sub(r'.*article:', 'article', test, 1, flags=re.S) print result </code></pre> <p>Sadly this code only displays the last article. The output of the code:</p> <pre><code>article: name of the first article ccccccc ccccccc ccccccc </code></pre> <p>Does someone know how to strip only the beginning of the file and display the 3 articles?</p>
<p>You can use <a href="https://docs.python.org/3/library/itertools.html#itertools.dropwhile" rel="nofollow noreferrer"><code>itertools.dropwhile</code></a> to get this effect</p> <pre><code>from itertools import dropwhile with open('filename.txt') as f: articles = ''.join(dropwhile(lambda line: not line.startswith('article'), f)) print(articles) </code></pre> <p>prints</p> <pre><code>article: name of the first article aaaaaaa aaaaaaa aaaaaaa article: name of the first article bbbbbbb bbbbbbb bbbbbbb article: name of the first article ccccccc ccccccc ccccccc </code></pre>
python|regex|substitution
3
1,909,545
70,238,451
Why do I have empty rows when I create a CSV file?
<p>I'm trying to create a new CSV file that evaluates data about a construction site operation from an ASCII table in CSV format. I have figured out how to create a CSV file, but I always get a blank line between the lines. Why is that?</p> <pre><code>import csv header = ['name', 'area', 'country_code2', 'country_code3'] data = ['Afghanistan', 652090, 'AF', 'AFG'] file_object = open(&quot;new_file.csv&quot;, &quot;w&quot;) writer = csv.writer(file_object, delimiter=&quot;;&quot;) writer.writerow(header) writer.writerow(data) file_object.close() </code></pre> <p>This is how my CSV file looks:</p> <pre><code>name area country_code2 country_code3 Afghanistan 652090 AF AFG </code></pre>
<p>Specify <code>newline=''</code> to eliminate the extra new line.</p> <p>If newline='' is not specified, then on platforms that use \r\n line endings an extra \r will be added on write. It should always be safe to specify newline='', since the csv module does its own (<a href="https://docs.python.org/3/glossary.html#term-universal-newlines" rel="nofollow noreferrer">universal</a>) newline handling. <a href="https://docs.python.org/3/library/csv.html#id3" rel="nofollow noreferrer">[1]</a></p> <pre class="lang-python prettyprint-override"><code>with open('new_file.csv', 'w', newline='') as file_object: writer = csv.writer(file_object, delimiter=&quot;;&quot;) writer.writerow(header) writer.writerow(data) </code></pre>
python|csv
2
1,909,546
53,504,338
How to filter choices in fields(forms) in Django admin?
<p>I have a model Tech, with name (CharField) and firm (ForeignKey to model Firm), because one Tech (for example, smartphone) can have many firms (for example Samsung, Apple, etc.).</p> <p>How can I create a filter in the admin panel so that, when I create a model instance and choose 'smartphone' in the tech field, the firm field shows me only smartphone firms? Because if I have more than one value in the firm field (for example Apple, Samsung, IBM), it shows me all of them. But IBM should show up only if I choose 'computer' in the tech field. How can I achieve this?</p>
<p>In your admin class, set <code>list_filter</code>:</p> <pre><code>class MyModelName(admin.ModelAdmin): list_filter = ('field1', 'field3') </code></pre> <p>Refer to: <a href="https://docs.djangoproject.com/en/2.1/ref/contrib/admin/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/2.1/ref/contrib/admin/</a></p>
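<p>As a slightly fuller (hypothetical) sketch, assuming the <code>Tech</code> model from the question lives in the same app's <code>models.py</code>, the registration could look like this; <code>list_filter</code> adds a filter sidebar to the admin changelist for those fields:</p> <pre><code>from django.contrib import admin
from .models import Tech  # assumed app layout

@admin.register(Tech)
class TechAdmin(admin.ModelAdmin):
    list_display = ('name', 'firm')  # columns shown in the changelist
    list_filter = ('name', 'firm')   # filter sidebar on these fields
</code></pre>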
django|python-3.x|filter|django-admin
2
1,909,547
54,971,517
Python3 threading combining .start() doesn't create the join attribute
<p>This works fine:</p> <pre><code>def myfunc(): print('inside myfunc') t = threading.Thread(target=myfunc) t.start() t.join() print('done') </code></pre> <p>However this, while apparently creating and executing the thread properly:</p> <pre><code>def myfunc(): print('inside myfunc') t = threading.Thread(target=myfunc).start() t.join() print('done') </code></pre> <p>Generates the following fatal error when it hits join():</p> <blockquote> <p>AttributeError: 'NoneType' object has no attribute 'join'</p> </blockquote> <p>I would have thought that these statements are equivalent. What is different?</p>
<pre><code>t = threading.Thread(target=myfunc).start() </code></pre> <p><code>threading.Thread(target=myfunc)</code> returns a Thread object; however, <code>start()</code> returns None, so <code>t</code> ends up as None. That's why there is an AttributeError.</p>
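<p>In other words, keep the reference to the Thread object and call <code>start()</code> on its own line, for example:</p> <pre><code>import threading

def myfunc():
    print('inside myfunc')

t = threading.Thread(target=myfunc)  # t is the Thread object
t.start()                            # start() itself returns None
t.join()
print('done')
</code></pre>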
python|python-3.x|python-multithreading
9
1,909,548
54,952,803
regex storing matches in wrong capture group
<p>I am trying to build a Python regex with optional capture groups. My regex works for most cases but fails to put the matches in the right groups in one of the test cases.</p> <p>I want to match and capture the following cases:</p> <ol> <li><p>namespace::tool_name::1.0.1</p></li> <li><p>namespace::tool_name</p></li> <li><p>tool_name::1.0.1</p></li> <li><p>tool_name</p></li> </ol> <p>Here is the regex I have so far:</p> <pre><code>(?:(?P&lt;namespace&gt;^[^:]+)::)?(?P&lt;name&gt;[^:]*)(?:::(?P&lt;version&gt;[0-9\.]+))? </code></pre> <p>This regex works fine for all my 4 test cases, but the problem I have is that in case 3, the tool_name is captured in the namespace group and the 1.0.1 is captured in the name group. I would like them to be captured in the right groups, name and version respectively.</p> <p>Thanks</p>
<p>You may make tool_name regex part obligatory by replacing <code>*</code> with <code>+</code> (it looks like it always is present) and restrict this pattern from matching three dot-separated digit chunks with a negative lookahead:</p> <pre><code>^(?:(?P&lt;namespace&gt;[^:]+)::)?(?!\d+(?:\.\d+){2})(?P&lt;name&gt;[^:]+)(?:::(?P&lt;version&gt;\d+(?:\.\d+){2}))? </code></pre> <p>See the <a href="https://regex101.com/r/fopVtC/1/" rel="nofollow noreferrer">regex demo</a></p> <p><strong>Details</strong></p> <ul> <li><code>^</code> - start of string</li> <li><code>(?:(?P&lt;namespace&gt;[^:]+)::)?</code> - an optional non-capturing group matching any 1+ chars other than <code>:</code> into Group "namespace" and then just matches <code>::</code></li> <li><code>(?!\d+(?:\.\d+){2})</code> - a negative lookahead that does not allow <code>digits.digits.digits</code> pattern to appear right after the current position</li> <li><code>(?P&lt;name&gt;[^:]+)</code> - Group "name": any 1 or more chars other than <code>:</code></li> <li><code>(?:::(?P&lt;version&gt;\d+(?:\.\d+){2}))?</code> - an optional non-capturing group matching <code>::</code> and then Group "version" captures 1+ digits and 2 repetitions of <code>.</code> and 1+ digits.</li> </ul>
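<p>As a quick sanity check, here is the pattern applied with Python's <code>re</code> module to the four inputs from the question (named groups are read back with <code>groupdict()</code>):</p> <pre><code>import re

pattern = re.compile(
    r'^(?:(?P&lt;namespace&gt;[^:]+)::)?(?!\d+(?:\.\d+){2})'
    r'(?P&lt;name&gt;[^:]+)(?:::(?P&lt;version&gt;\d+(?:\.\d+){2}))?'
)

tests = [
    'namespace::tool_name::1.0.1',
    'namespace::tool_name',
    'tool_name::1.0.1',
    'tool_name',
]

for t in tests:
    print(t, pattern.match(t).groupdict())
</code></pre>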
python|regex
3
1,909,549
33,501,554
Extracting html using beautifulsoup
<p>I am trying to extract data from the html of the following site:</p> <blockquote> <p><a href="http://www.irishrugby.ie/guinnesspro12/results_and_fixtures_pro_12_section.php" rel="nofollow">http://www.irishrugby.ie/guinnesspro12/results_and_fixtures_pro_12_section.php</a> </p> </blockquote> <p>I want to be able to extract the team names and the score for example the first fixture is <code>Connacht vs Newport Gwent Dragons</code>. </p> <p>I want my python program too print the result, i.e <code>Connacht Rugby 29 - 23 Newport Gwent Dragons</code>.</p> <p>Here is the html I want too extract it from:</p> <pre><code>&lt;!-- 207974 sfms --&gt; &lt;tr class="odd match-result group_celtic_league" id="fixturerow0" onclick="if( c lickpriority == 0 ) { redirect('/guinnesspro12/35435.php') }" onmouseout="classN ame='odd match-result group_celtic_league';" onmouseover="clickpriority=0; class Name='odd match-result group_celtic_league rollover';" style=""&gt; &lt;td class="field_DateShort" style=""&gt; Fri 4 Sep &lt;/td&gt; &lt;td class="field_TimeLong" style=""&gt; 19:30 &lt;/td&gt; &lt;td class="field_CompStageAbbrev" style=""&gt; PRO12 &lt;/td&gt; &lt;td class="field_LogoTeamA" style=""&gt; &lt;img alt="Connacht Rugby" height="50" src="http://cdn.soticservers.net/tools/i mages/teams/logos/50x50/16.png" width="50"/&gt; &lt;/td&gt; &lt;td class="field_HomeDisplay" style=""&gt; Connacht Rugby &lt;/td&gt; &lt;td class="field_Score" style=""&gt; 29 - 23 &lt;/td&gt; &lt;td class="field_AwayDisplay" style=""&gt; Newport Gwent Dragons &lt;/td&gt; &lt;td class="field_LogoTeamB" style=""&gt; &lt;img alt="Newport Gwent Dragons" height="50" src="http://cdn.soticservers.net/ tools/images/teams/logos/50x50/19.png" width="50"/&gt; &lt;/td&gt; &lt;td class="field_HA" style=""&gt; H &lt;/td&gt; &lt;td class="field_OppositionDisplay" style=""&gt; &lt;br/&gt; &lt;/td&gt; &lt;td class="field_ResScore" style=""&gt; W 29-23 &lt;/td&gt; &lt;td class="field_VenName" style=""&gt; Sportsground &lt;/td&gt; &lt;td class="field_BroadcastAttend" style=""&gt; 3,624 &lt;/td&gt; &lt;td class="field_Links" style=""&gt; &lt;a href="/guinnesspro12/35435.php" onclick="clickpriority=1"&gt; Report &lt;/a&gt; &lt;/td&gt; &lt;/tr&gt; </code></pre> <p>This is my program so far:</p> <pre><code>from httplib2 import Http from bs4 import BeautifulSoup # create a "web object" h = Http() # Request the specified web page response, content = h.request('http://www.irishrugby.ie/guinnesspro12/results_and_fixtures_pro_12_section.php') # display the response status print(response.status) # display the text of the web page print(content.decode()) soup = BeautifulSoup(content) # check the response if response.status == 200: #print(soup.get_text()) rows = soup.find_all('tr')[1:-2] for row in rows: data = row.find_all('td') #print(data) else: print('Unable to connect:', response.status) print(soup.get_text()) </code></pre>
<p>Instead of finding all the <code>&lt;td&gt;</code> tags you should be more specific. I would convert this:</p> <pre><code>for row in rows: data = row.find_all('td') </code></pre> <p>to this:</p> <pre><code>for row in rows: home = row.find("td",attrs={"class":"field_HomeDisplay"}) score = row.find("td",attrs={"class":"field_Score"}) away = row.find("td",attrs={"class":"field_AwayDisplay"}) print(home.get_text() + " " + score.get_text() + " " + away.get_text()) </code></pre>
python|html|web-scraping|beautifulsoup
1
1,909,550
33,410,450
segment em with known data opencv
<p>I use the OpenCV <a href="http://docs.opencv.org/modules/ml/doc/expectation_maximization.html" rel="nofollow">EM</a> algorithm to segment an image into 2 shapes. One shape is always inside. I use <a href="http://docs.opencv.org/modules/ml/doc/expectation_maximization.html" rel="nofollow">EM segment</a>.</p> <p>I want to use a known model of RGB colors: I have an input table of 30*3 containing common colors for the background. How do I input this to EM? Should I calculate means and std and pass them to the constructor?</p> <pre><code>Python: cv2.EM.trainE(samples, means0[, covs0[, weights0[, logLikelihoods[, labels[, probs]]]]]) → retval, logLikelihoods, labels, probs Python: cv2.EM.trainM(samples, probs0[, logLikelihoods[, labels[, probs]]]) </code></pre> <p>Thanks!</p>
<p>You may use the <code>cv2.EM.trainE</code> interface and provide the algorithm with your initial <code>30x3</code> values as the <code>means0</code> input argument.</p>
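<p>A rough sketch of what that could look like, assuming the OpenCV 2.4-era <code>cv2.EM</code> binding from the question (the file names, the 2-component choice and the second initial mean are placeholders, not a definitive recipe):</p> <pre><code>import cv2
import numpy as np

img = cv2.imread('input.png')                              # hypothetical input image
bg_colors = np.loadtxt('background_colors.txt')            # your known 30x3 table

samples = img.reshape(-1, 3).astype(np.float32)            # every pixel as one RGB sample

# seed component 0 with the mean of the known background colors,
# component 1 with the global pixel mean (just one possible choice)
means0 = np.vstack([bg_colors.mean(axis=0),
                    samples.mean(axis=0)]).astype(np.float32)

em = cv2.EM(2)                                             # 2 mixture components
retval, log_likelihoods, labels, probs = em.trainE(samples, means0)

# labels index the components; component 0 was seeded from the background table
mask = labels.reshape(img.shape[:2])
</code></pre>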
python|opencv|image-processing|computer-vision|image-segmentation
0
1,909,551
33,158,219
Why isn't a class's __new__ method in its __dict__?
<p>Brief context: I'm attempting to edit a class's default arguments to its <code>__new__</code> method. I need access to the method, and I was attempting to get access in the same way I accessed its other methods - through its <code>__dict__</code>. </p> <p>But here, we can see that its <code>__new__</code> method isn't in its <code>__dict__</code>.</p> <p>Is this related to <code>__new__</code> being a Static Method? If so, why aren't those in a class's <code>__dict__</code>? Where are they stored in the object model?</p> <pre><code>class A(object): def __new__(cls, a): print(a) return object.__new__(cls) def f(a): print(a) ....: In [12]: A.__dict__['f'] Out[12]: &lt;function __main__.A.f&gt; In [13]: A.__dict__['__new__'] Out[13]: &lt;staticmethod at 0x103a6a128&gt; In [14]: A.__new__ Out[14]: &lt;function __main__.A.__new__&gt; In [16]: A.__dict__['__new__'] == A.__new__ Out[16]: False In [17]: A.__dict__['f'] == A.f Out[17]: True </code></pre>
<p><code>A.__dict__['__new__']</code> is the staticmethod descriptor, whereas <code>A.__new__</code> is the actual underlying function.</p> <p><a href="https://docs.python.org/2/howto/descriptor.html#static-methods-and-class-methods" rel="nofollow">https://docs.python.org/2/howto/descriptor.html#static-methods-and-class-methods</a></p> <p>If you need to call the function, or get it by using a string (at runtime), use <code>getattr(A, '__new__')</code>.</p> <pre><code>&gt;&gt;&gt; A.__new__ &lt;function A.__new__ at 0x02E69618&gt; &gt;&gt;&gt; getattr(A, '__new__') &lt;function A.__new__ at 0x02E69618&gt; </code></pre> <p>Python 3.5.1</p> <pre class="lang-py prettyprint-override"><code>class A(object): def __new__(cls, a): print(a) return object.__new__(cls) def f(a): print(a) &gt;&gt;&gt; A.__dict__['__new__'] &lt;staticmethod object at 0x02E66B70&gt; &gt;&gt;&gt; A.__new__ &lt;function A.__new__ at 0x02E69618&gt; &gt;&gt;&gt; object.__new__ &lt;built-in method __new__ of type object at 0x64EC98E8&gt; &gt;&gt;&gt; A.__new__(A, 'hello') hello &lt;__main__.A object at 0x02E73BF0&gt; &gt;&gt;&gt; A.__dict__['__new__'](A, 'hello') Traceback (most recent call last): File "&lt;pyshell#7&gt;", line 1, in &lt;module&gt; TypeError: 'staticmethod' object is not callable &gt;&gt;&gt; getattr(A, '__new__') &lt;function A.__new__ at 0x02E69618&gt; </code></pre>
python
2
1,909,552
73,637,654
Issue running advertools crawler
<p>I'm relative newbie to python. I've been using advertools however I've run into the following error</p> <pre><code>import advertools as adv adv.crawl('https://sandpipercomms.com', 'my_output_file.jl', follow_links=True) import pandas as pd crawl_df = pd.read_json('my_output_file.jl', lines=True) Traceback (most recent call last): File &quot;c:\users\tom\mu_code\vampire.py&quot;, line 2, in &lt;module&gt; adv.crawl('https://sandpipercomms.com', 'my_output_file.jl', follow_links=True) File &quot;C:\Users\Tom\AppData\Local\python\mu\mu_venv-38-20220808-225806\lib\site-packages\advertools\spider.py&quot;, line 971, in crawl subprocess.run(command) File &quot;C:\Users\Tom\AppData\Local\Programs\Mu Editor\Python\lib\subprocess.py&quot;, line 493, in run with Popen(*popenargs, **kwargs) as process: File &quot;C:\Users\Tom\AppData\Local\Programs\Mu Editor\Python\lib\subprocess.py&quot;, line 858, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File &quot;C:\Users\Tom\AppData\Local\Programs\Mu Editor\Python\lib\subprocess.py&quot;, line 1311, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified </code></pre> <p>I'm currently running windows 10, python 3 and recently installed julia. Any suggestions on what the issue might be would be appreciated.</p> <p>Cheers,</p>
<p>The code is correct; the problem is likely with the setup of your machine.</p> <p>As a quick solution, you can run the same code from this notebook, and you can do all the following work there:</p> <p><a href="https://colab.research.google.com/drive/1fXLx9dIBVBB5Due6VjDV947bsc7hii5x" rel="nofollow noreferrer">https://colab.research.google.com/drive/1fXLx9dIBVBB5Due6VjDV947bsc7hii5x</a></p> <p>Do you know how to set up a <a href="https://docs.python.org/3/library/venv.html" rel="nofollow noreferrer">virtual environment</a>? This might help isolate the issue and provide some insight into the problem, so you can come up with a solution.</p> <p>Hope this helps.</p>
python
0
1,909,553
64,600,536
Python - can't extract data from statsmodel STL plot
<p>I produce the following plot using <a href="https://www.statsmodels.org/stable/examples/notebooks/generated/stl_decomposition.html#" rel="nofollow noreferrer">statsmodels STL</a>:</p> <p><a href="https://i.stack.imgur.com/Qc2JLl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qc2JLl.png" alt="enter image description here" /></a></p> <p>The output is displayed using <code>matplotlib.pyplot</code>.</p> <p>I would like to get the data from the lines but can't figure out how to extract them, even after trying the recommended solutions <a href="https://stackoverflow.com/questions/8938449/how-to-extract-data-from-matplotlib-plot">here</a>.</p> <p>How can I 'extract' the underlying data for each of the 4 lines?</p> <p>I need to do this to actually use the output.</p> <p>Code:</p> <pre><code> import pandas as pd import seaborn as sns import matplotlib.pyplot as plt from pandas.plotting import register_matplotlib_converters from statsmodels.tsa.seasonal import STL register_matplotlib_converters() sns.set_style('darkgrid') plt.rc('figure',figsize=(8,8)) plt.rc('font',size=13) raw = [ 315.58, 316.39, 316.79, 317.82, 318.39, 318.22, 316.68, 315.01, 314.02, 313.55, 315.02, 315.75, 316.52, 317.10, 317.79, 319.22, 320.08, 319.70, 318.27, 315.99, 314.24, 314.05, 315.05, 316.23, 316.92, 317.76, 318.54, 319.49, 320.64, 319.85, 318.70, 316.96, 315.17, 315.47, 316.19, 317.17, 318.12, 318.72, 319.79, 320.68, 321.28, 320.89, 319.79, 317.56, 316.46, 315.59, 316.85, 317.87, 318.87, 319.25, 320.13, 321.49, 322.34, 321.62, 319.85, 317.87, 316.36, 316.24, 317.13, 318.46, 319.57, 320.23, 320.89, 321.54, 322.20, 321.90, 320.42, 318.60, 316.73, 317.15, 317.94, 318.91, 319.73, 320.78, 321.23, 322.49, 322.59, 322.35, 321.61, 319.24, 318.23, 317.76, 319.36, 319.50, 320.35, 321.40, 322.22, 323.45, 323.80, 323.50, 322.16, 320.09, 318.26, 317.66, 319.47, 320.70, 322.06, 322.23, 322.78, 324.10, 324.63, 323.79, 322.34, 320.73, 319.00, 318.99, 320.41, 321.68, 322.30, 322.89, 323.59, 324.65, 325.30, 325.15, 323.88, 321.80, 319.99, 319.86, 320.88, 322.36, 323.59, 324.23, 325.34, 326.33, 327.03, 326.24, 325.39, 323.16, 321.87, 321.31, 322.34, 323.74, 324.61, 325.58, 326.55, 327.81, 327.82, 327.53, 326.29, 324.66, 323.12, 323.09, 324.01, 325.10, 326.12, 326.62, 327.16, 327.94, 329.15, 328.79, 327.53, 325.65, 323.60, 323.78, 325.13, 326.26, 326.93, 327.84, 327.96, 329.93, 330.25, 329.24, 328.13, 326.42, 324.97, 325.29, 326.56, 327.73, 328.73, 329.70, 330.46, 331.70, 332.66, 332.22, 331.02, 329.39, 327.58, 327.27, 328.30, 328.81, 329.44, 330.89, 331.62, 332.85, 333.29, 332.44, 331.35, 329.58, 327.58, 327.55, 328.56, 329.73, 330.45, 330.98, 331.63, 332.88, 333.63, 333.53, 331.90, 330.08, 328.59, 328.31, 329.44, 330.64, 331.62, 332.45, 333.36, 334.46, 334.84, 334.29, 333.04, 330.88, 329.23, 328.83, 330.18, 331.50, 332.80, 333.22, 334.54, 335.82, 336.45, 335.97, 334.65, 332.40, 331.28, 330.73, 332.05, 333.54, 334.65, 335.06, 336.32, 337.39, 337.66, 337.56, 336.24, 334.39, 332.43, 332.22, 333.61, 334.78, 335.88, 336.43, 337.61, 338.53, 339.06, 338.92, 337.39, 335.72, 333.64, 333.65, 335.07, 336.53, 337.82, 338.19, 339.89, 340.56, 341.22, 340.92, 339.26, 337.27, 335.66, 335.54, 336.71, 337.79, 338.79, 340.06, 340.93, 342.02, 342.65, 341.80, 340.01, 337.94, 336.17, 336.28, 337.76, 339.05, 340.18, 341.04, 342.16, 343.01, 343.64, 342.91, 341.72, 339.52, 337.75, 337.68, 339.14, 340.37, 341.32, 342.45, 343.05, 344.91, 345.77, 345.30, 343.98, 342.41, 339.89, 340.03, 341.19, 342.87, 343.74, 344.55, 345.28, 347.00, 347.37, 
346.74, 345.36, 343.19, 340.97, 341.20, 342.76, 343.96, 344.82, 345.82, 347.24, 348.09, 348.66, 347.90, 346.27, 344.21, 342.88, 342.58, 343.99, 345.31, 345.98, 346.72, 347.63, 349.24, 349.83, 349.10, 347.52, 345.43, 344.48, 343.89, 345.29, 346.54, 347.66, 348.07, 349.12, 350.55, 351.34, 350.80, 349.10, 347.54, 346.20, 346.20, 347.44, 348.67 ] co2 = pd.Series(raw, index=pd.date_range('1-1-1959', periods=len(raw), freq='M'), name = 'co2') co2 = co2.interpolate(method='spline', order=3) stl = STL(co2, period = 12, seasonal=13) stl.fit().plot() plt.show() </code></pre>
<p>I don't have much experience with this, but I looked it up and found the following <a href="https://stackoverflow.com/questions/34457281/decomposing-trend-seasonal-and-residual-time-series-elements">SO answers</a> You can get it with the following code.</p> <pre><code>import statsmodels.api as sm res = sm.tsa.seasonal_decompose(co2, freq=12) trend = res.trend 1959-01-31 NaN 1959-02-28 NaN 1959-03-31 NaN 1959-04-30 317.124286 1959-05-31 317.042857 ... 1987-08-31 348.374286 1987-09-30 347.992857 1987-10-31 NaN 1987-11-30 NaN 1987-12-31 NaN Freq: M, Name: trend, Length: 348, dtype: float64 seasonal = res.seasonal 1959-01-31 -0.108146 1959-02-28 0.534131 1959-03-31 1.314622 1959-04-30 2.408149 1959-05-31 2.920247 ... 1987-08-31 -1.174813 1987-09-30 -2.912923 1987-10-31 -3.174024 1987-11-30 -2.027476 1987-12-31 -0.964634 Freq: M, Name: seasonal, Length: 348, dtype: float64 residual = res.resid 1959-01-31 NaN 1959-02-28 NaN 1959-03-31 NaN 1959-04-30 NaN 1959-05-31 NaN .. 1987-08-31 NaN 1987-09-30 NaN 1987-10-31 NaN 1987-11-30 NaN 1987-12-31 NaN Freq: M, Name: resid, Length: 348, dtype: float64 observed = res.observed 1959-01-31 315.58 1959-02-28 316.39 1959-03-31 316.79 1959-04-30 317.82 1959-05-31 318.39 ... 1987-08-31 347.54 1987-09-30 346.20 1987-10-31 346.20 1987-11-30 347.44 1987-12-31 348.67 Freq: M, Name: co2, Length: 348, dtype: float64 </code></pre>
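<p>For completeness, the <code>STL</code> result from the question exposes the same components directly, so you can also pull the series out of the fitted object (a short sketch reusing the <code>co2</code> series defined in the question):</p> <pre><code>from statsmodels.tsa.seasonal import STL

stl = STL(co2, period=12, seasonal=13)
res = stl.fit()

trend = res.trend        # pandas Series aligned with co2's index
seasonal = res.seasonal
residual = res.resid
observed = res.observed
</code></pre>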
python|matplotlib|statsmodels
1
1,909,554
64,580,619
Python and pylint in VSCode
<p>In my VSCode editor I run a venv on conda.<br /> The Python version in the venv is 3.8.<br /> Importing the OpenCV package with <code>import cv2</code> spits out a pylint error like<br /> <code>Module 'cv2' has no 'xyz' member</code><br /> But importing the package using <code>from cv2 import cv2</code> runs perfectly well. Why is that, and how can I correct it permanently in VSCode on my Ubuntu machine?</p>
<p>According to the information you provided, I installed the module &quot;<code>opencv</code>&quot; on my computer, and VSCode did not display the pylint error when using it:</p> <p><a href="https://i.stack.imgur.com/pyCqT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pyCqT.png" alt="enter image description here" /></a></p> <p>The way I installed the module &quot;<code>opencv</code>&quot;: <code>pip install opencv-python</code></p> <p>My <code>settings.json</code>:</p> <blockquote> <pre><code>{ &quot;terminal.integrated.shell.windows&quot;: &quot;C:\\windows\\System32\\cmd.exe&quot;, &quot;workbench.iconTheme&quot;: &quot;vscode-icons&quot;, &quot;files.autoSave&quot;: &quot;afterDelay&quot;, &quot;files.autoSaveDelay&quot;: 1000, &quot;python.linting.enabled&quot;: true, &quot;python.linting.pylintEnabled&quot;: true, &quot;python.languageServer&quot;: &quot;Pylance&quot;, } </code></pre> </blockquote> <p>Reference: <a href="https://pypi.org/project/opencv-python/" rel="nofollow noreferrer">Opencv-python</a>.</p>
python|opencv|visual-studio-code|package|pylint
0
1,909,555
69,835,430
PermissionError: [Errno 13] Permission denied on mac
<p>When I try to run the code below, my Mac refuses the connection.</p> <pre><code>from http.server import BaseHTTPRequestHandler, HTTPServer class RequestHandler(BaseHTTPRequestHandler): def do_GET(self): message = &quot;Welcome to COE 550!&quot; self.protocol_version = &quot;HTTP/1.1&quot; self.send_response(200) self.send_header(&quot;Content-Length&quot;,len(message)) self.end_headers() self.wfile.write(bytes(message, &quot;utf8&quot;)) return server = ('localhost', 80) httpd = HTTPServer(server, RequestHandler) httpd.serve_forever() </code></pre> <p>The output message is</p> <blockquote> <p>PermissionError: [Errno 13] Permission denied</p> </blockquote>
<p>Port <code>80</code> is considered a privileged port (<a href="https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html" rel="nofollow noreferrer">TCP/IP port numbers below 1024</a>), so processes using them must be owned by root. When you run a server as a test from a non-privileged account, you have to test it on other ports, such as <a href="https://www.w3.org/Daemon/User/Installation/PrivilegedPorts.html" rel="nofollow noreferrer">2784, 5000, 8001 or 8080</a>.</p> <p>To fix this issue, you can either run the <em>Python process as root</em> or use a <em>non-privileged port</em>.</p> <pre> server = ('localhost', <b>8001</b>) httpd = HTTPServer(server, RequestHandler) httpd.serve_forever() </pre>
python
0
1,909,556
69,861,402
Import python from sibling folder without -m or syspath hacks
<p>So I've spent the last three days trying to figure out a workable solution to this problem with imports.</p> <p>I have a subfolder in my project where I have scripts for database control, and sibling folders need to call into it. I have tried many online solutions but couldn't find anything that works properly. It seems some changes in Python 3.3/3.4 invalidated a lot of the older solutions.</p> <p>So I made a very simple test case.</p> <pre><code>IMPORTS/ ├─ folder1/ │ ├─ script1.py │ ├─ __init__.py ├─ folder2/ │ ├─ script2.py │ ├─ __init__.py ├─ __init__.py </code></pre> <p>How do I, from script1.py, call a function inside script2.py?</p>
<p>I generally prefer to install my module as a dependency so I can import from the project root. This seems to be the correct approach, though I've rarely seen it talked about online.</p> <p>E.g. from IMPORTS you would run <code>pip install -e .</code> (install the package in this folder in editable mode). This will require that you have a setup.py:</p> <pre><code>from setuptools import setup, find_packages setup( name='IMPORTS', version='x.x.x', description='What the package does.', author='Your Name', author_email='x@x.com', install_requires=[], packages=find_packages() ) </code></pre> <p><a href="https://github.com/ShaynAli/Parallel-Web-Crawler/blob/master/setup.py" rel="nofollow noreferrer">Here</a> is an example from one of my personal packages.</p> <p>Then you can import from the root folder (where setup.py is). Following your example:</p> <pre><code>from folder1 import script1 </code></pre> <p>Or vice versa.</p> <p>In summary:</p> <ol> <li>Write a setup.py.</li> <li>Install your package in editable mode with <code>pip install -e .</code></li> <li>Write import statements from the package root.</li> </ol>
python|import|architecture|python-import
0
1,909,557
64,093,063
Problem using itertools and zip in combination to create dictionary from two lists of different lengths
<p>I want keys to repeat the same way in each dictionary. I.e. start from A and go till E. But it seems itertools.cycle is skipping one every time it cycles over. I also want the values to follow the order in the list (i.e. start from 1 in the first dictionary and end with 15 in the last dictionary). Please see code below:</p> <pre><code>import itertools allKeys=['A','B','C','D','E'] a=[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] g=itertools.cycle(allKeys) b=[] for i in range(3): dishDict=dict(zip(g,a)) b.append(dishDict) b </code></pre> <p>Generates:</p> <pre><code>[{'A': 11, 'B': 12, 'C': 13, 'D': 14, 'E': 15}, {'B': 11, 'C': 12, 'D': 13, 'E': 14, 'A': 15}, {'C': 11, 'D': 12, 'E': 13, 'A': 14, 'B': 15}] </code></pre> <p>As you see, keys in the second dictionary start from B (instead of A, as I would like). Also the values are the same in all three dictionaries in the list.</p> <p>This is what I want the output to look like:</p> <pre><code>[{'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5}, {'A': 6, 'B': 7, 'C': 8, 'D': 9, 'E': 10}, {'A': 11, 'B': 12, 'C': 13, 'D': 14, 'E': 15}] </code></pre> <p>I'd really appreciate it if someone could shed some light on what's happening and what I should do to fix it. I have already spent quite a bit of time to solve it myself and also checked the documentation on itertools.cycle. But haven't been able to figure it out yet.</p>
<p>For the required output, you don't need <code>cycle()</code>:</p> <pre><code>allKeys=['A','B','C','D','E'] a=[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15] it = iter(a) b=[] for i in range(3): dishDict=dict(zip(allKeys,it)) b.append(dishDict) print(b) </code></pre> <p>Prints:</p> <pre><code>[{'A': 1, 'B': 2, 'C': 3, 'D': 4, 'E': 5}, {'A': 6, 'B': 7, 'C': 8, 'D': 9, 'E': 10}, {'A': 11, 'B': 12, 'C': 13, 'D': 14, 'E': 15}] </code></pre>
python-3.x|loops|dictionary|itertools|cycle
2
1,909,558
63,995,215
Multiple bitrates HLS with ffmpeg-python
<p>I am currently using <code>ffmpeg-python</code> library to convert a <code>.mp4</code> video into HLS format with output looking like this:</p> <pre class="lang-py prettyprint-override"><code>ffmpeg.output( mp4_input, m3u8_name, format='hls', start_number=0, hls_time=5, hls_list_size=0, ), </code></pre> <p>How do I make <code>ffmpeg-python</code> output HLS with in multiple bitrates and create a master playlist for them?</p>
<p>Actually you can achieve the same without <code>ffmpeg-python</code>. I'm the creator of <a href="https://abhitronix.github.io/vidgear/latest" rel="nofollow noreferrer">VidGear</a> Video Processing Python Project that contains <a href="https://abhitronix.github.io/vidgear/latest/gears/streamgear/introduction/" rel="nofollow noreferrer">StreamGear</a> API for this very purpose. The example code is as follows:</p> <pre><code># import required libraries from vidgear.gears import StreamGear # activate Single-Source Mode and also define various streams stream_params = { &quot;-video_source&quot;: &quot;foo.mp4&quot;, &quot;-streams&quot;: [ {&quot;-resolution&quot;: &quot;1920x1080&quot;, &quot;-video_bitrate&quot;: &quot;4000k&quot;}, # Stream1: 1920x1080 at 4000kbs bitrate {&quot;-resolution&quot;: &quot;1280x720&quot;, &quot;-framerate&quot;: 30.0}, # Stream2: 1280x720 at 30fps framerate {&quot;-resolution&quot;: &quot;640x360&quot;, &quot;-framerate&quot;: 60.0}, # Stream3: 640x360 at 60fps framerate {&quot;-resolution&quot;: &quot;320x240&quot;, &quot;-video_bitrate&quot;: &quot;500k&quot;}, # Stream3: 320x240 at 500kbs bitrate ], } # describe a suitable master playlist location/name and assign params streamer = StreamGear(output=&quot;hls_out.m3u8&quot;, format = &quot;hls&quot;, **stream_params) # trancode source streamer.transcode_source() # terminate streamer.terminate() </code></pre> <p>and that's it. Goodluck!</p>
python|ffmpeg|mp4|http-live-streaming|bitrate
0
1,909,559
62,877,425
Validation loss curve is flat and training loss curve is higher than validation error curve
<p>I'm building an LSTM model for a prediction scenario. My dataset has around 248000 pieces of data and I use 24000 (around 10%) as the validation set; the rest is the training set. My model learning curve is the following: <a href="https://i.stack.imgur.com/98F8B.png" rel="nofollow noreferrer">learning curve</a></p> <p>The validation error is always 0.00002 from scratch, and the training error decreased to 0.013533 at epoch 20.</p> <p>I've read this carefully: <a href="https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/" rel="nofollow noreferrer">https://machinelearningmastery.com/learning-curves-for-diagnosing-machine-learning-model-performance/</a></p> <p>Is my validation set unrepresentative? Is the solution to use a larger validation set?</p>
<p>It might be that, first, your underlying concept is very simple which leads to extremely low validation error early on. Second, your data augmentation makes it harder to learn, which yields higher training error.</p> <p>Yet, I would still run a couple of experiments in your case. First: divide data as 10/90 instead of 90/10 and see how does your validation error changes then - hopefully, you would see some sort of a curve between (now shorter and harder) epochs. Second, I would run validation before training (or after an epoch of 1 batch) to produce a random result.</p>
tensorflow|model
0
1,909,560
62,041,410
PyOWM installed but not recognized?
<p>Disclaimer - I am quite new to Python.</p> <p>I wanted to use the OWM API to make a simple Python weather program. I found some guides to using this key on the web, and they said to use the PyOWM library. I DuckDuckGoed how to install it and I downloaded Pip. I put it in C:/pip and tried to run 'python get-pip.py' (yes, I was in the directory in CMD). </p> <p>It didn't work, and it sent me to the Microsoft Store page for Python. I installed it (even though I had the normal version installed) and tried again. Pip installed. </p> <p>I ran pip install pyowm and it installed. Everything seemed fine. When I went back into PyCharm, it wouldn't work. This is the code from the tutorial I am watching:</p> <pre><code>import pyowm owm = pyowm.OWM('&lt;api_key&gt;') # TODO: Replace &lt;api_key&gt; with your API key la = owm.three_hours_forecast('Los Angeles, US') print(la.will_have_clouds()) </code></pre> <p>Any ideas?</p>
<p>In PyCharm, you have to install your library in the project interpreter.</p> <p>In PyCharm, go to <code>File -&gt; settings -&gt; Project: test (in my case, test is my project name) -&gt; select project interpreter -&gt; click add button</code></p> <p><a href="https://i.imgur.com/Evwxgpb.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/Evwxgpb.png" alt="enter image description here"></a></p> <p>After clicking the add button, <strong>search</strong> for <code>pyowm</code> and then <strong>install</strong> it.</p>
python|pip|weather|openweathermap
1
1,909,561
61,948,507
SSL Error : Python Multiprocessing, PostgresSQL and Psycopg2
<h1>How can I call this with more than one process?</h1> <p>The code below works fine for processes = 1.</p> <p>Definition:</p> <pre><code>def origin_and_url_from_url(url): ori_url = url.strip() cursor = connection.cursor() cursor.execute(&quot;SELECT DISTINCT url, id FROM origin where url = %s&quot;,[ori_url]) rows = cursor.fetchall() cursor.close() for a_row in rows: with open('TopListwith100CardsWithID.csv','a') as file: file.write(str(a_row[1])+ &quot;, &quot;) file.write(str(a_row[0])) file.write('\n') </code></pre> <p>Call:</p> <pre><code>with open('SEMethodologies/TopListwith100Cards.csv', 'r') as f: reader = csv.reader(f) top_list = list(reader) p = multiprocessing.Pool(1, initializer, ()) logger.info(&quot;Pool Started for ids&quot;) results = p.starmap(origin_and_url_from_url, top_list) print(results) p.close() </code></pre> <p>But if I change this line to <code>p = multiprocessing.Pool(2, initializer, ())</code> for two processes, it shows this error: <strong>psycopg2.OperationalError: SSL error: decryption failed or bad record Mac</strong></p>
<p>I had a very pesky bug that sounds similar - my service would restart with this error</p> <pre><code>Corruption detected. Cipher functions:OPENSSL_internal:BAD_DECRYPT routines:OPENSSL_internal:DECRYPTION_FAILED_OR_BAD_RECORD_MAC Decryption error: TSI_DATA_CORRUPTED </code></pre> <p>Running a Gunicorn service in Google Cloud with a Postgres DB. Ended up debugging a lot of multiprocessing configurations, Postgres settings, etc.</p> <p>The thing that fixed it for me was freezing the <code>grpcio</code> python package at <code>1.29.0</code>, based on <a href="https://github.com/grpc/grpc/issues/11011#issuecomment-301845722" rel="nofollow noreferrer">this</a> answer which says that the decryption failure error happens after GRPC v1.3.x.</p>
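<p>In case it helps, pinning it in a requirements file is just one line (version taken from the linked issue; adjust if your stack needs a different one):</p> <pre><code># requirements.txt
grpcio==1.29.0
</code></pre> <p>or directly: <code>pip install grpcio==1.29.0</code></p>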
python|postgresql|multiprocessing|psycopg2
0
1,909,562
67,568,651
Using asyncio in python to run two infinitely running functions
<p>I am trying to run two infinitely looping functions concurrently and will later implement this into a socket chatroom application for each client that is connected to my server. The problem is, whenever the function that I am trying to gather is run in an infinite while loop, my program will only run the first function that is gathered.</p> <p>Here is my code:</p> <pre><code>import asyncio money = 0 async def increment(): global money while True: money += 1 async def displayMoney(): global money while True: input(money) async def main(): global money await asyncio.gather(increment(), displayMoney()) asyncio.run(main()) </code></pre> <p>I am new to asynchronous programming, apologies.</p>
<p>If you add <code>await asyncio.sleep(0)</code> at the end of the loop, it allows the loops to give each other time to run. However, this means you cannot run anything that stops the main event loop, such as <code>time.sleep(1)</code> or <code>input()</code>, like I was trying to do. This is fine though as I do not need to use any of these in my main program as it utilises tkinter gui.</p>
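<p>Roughly, applying that to the code from the question could look like the sketch below. Note that <code>input()</code> is replaced by <code>print()</code> here, since a blocking call would stall the whole event loop:</p> <pre><code>import asyncio

money = 0

async def increment():
    global money
    while True:
        money += 1
        await asyncio.sleep(0)   # yield control back to the event loop

async def display_money():
    global money
    while True:
        print(money)             # print instead of input(), which would block
        await asyncio.sleep(1)   # any await lets the other task run

async def main():
    await asyncio.gather(increment(), display_money())

asyncio.run(main())
</code></pre>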
python-3.x|asynchronous|python-asyncio
0
1,909,563
60,532,791
Timeout expired pgadmin Unable to connect to server
<p>I am following the step by step instructions from this link <a href="https://www.postgresqltutorial.com/connect-to-postgresql-database/" rel="noreferrer">https://www.postgresqltutorial.com/connect-to-postgresql-database/</a> here to create a simple server on pgadmin. Please check the picture <a href="https://i.stack.imgur.com/luRvq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/luRvq.png" alt="enter image description here"></a></p> <p>What am I doing wrong, I installed pgadmin on my macOS but I don't see why I am getting this error. Please help</p>
<p>It's an issue with AWS inbound rules not pgAdmin. Follow <a href="https://serverfault.com/a/1011181">this guide</a> to solve it. It works.</p>
python|mysql|sql|database|pgadmin
6
1,909,564
70,694,196
Python "ModuleNotFoundError:", but module does appear to be installed per Command Prompt
<p>I am very new to Python/programming, having recently installed Python 3.10. I have already installed the Openpyxl module, i.e. when I check on CMD I get this:</p> <pre><code>C:\Users\hadam&gt;pip install openpyxl Requirement already satisfied: openpyxl in c:\users\hadam\appdata\local\programs\python\python310\lib\site-packages (3.0.9) Requirement already satisfied: et-xmlfile in c:\users\hadam\appdata\roaming\python\python310\site-packages (from openpyxl) (1.1.0) </code></pre> <p>I am trying to run some code which I have just copied from here (i.e. I have just edited the file path names): <a href="https://www.geeksforgeeks.org/python-how-to-copy-data-from-one-excel-sheet-to-another/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-how-to-copy-data-from-one-excel-sheet-to-another/</a></p> <p>However, when I try to run this script (via the Mu editor), I get the following error message:</p> <pre><code>Traceback (most recent call last): File &quot;c:\users\hadam\appdata\local\programs\python\python310\scripts\test1.py&quot;, line 2, in &lt;module&gt; import openpyxl as xl; ModuleNotFoundError: No module named 'openpyxl' &gt;&gt;&gt; </code></pre> <p>Can anyone tell me why the Mu editor cannot find Openpyxl, or what I can do to execute this programme?</p> <p>Thanks</p>
<p>Try to open python from the command line, e.g.</p> <pre><code> C:\users\you&gt; python </code></pre> <p>or</p> <pre><code>C:\users\you&gt; python3 </code></pre> <p>or</p> <pre><code>C:\users\you&gt; path\to\python </code></pre> <p>then when python is open</p> <pre><code>&gt;&gt;&gt; import openpyxl as xl </code></pre> <p>If the problem is not present anymore, your Mu editor might be using a different python interpreter/environment: check for its configurations and change it to the one you opened from the terminal.</p>
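<p>A quick way to see which interpreter is actually in use is to print its path both from the command-line Python and from inside Mu; if the two paths differ, openpyxl was installed into the other interpreter:</p> <pre><code>import sys
print(sys.executable)   # path of the interpreter running this script
print(sys.path)         # folders searched for modules such as openpyxl
</code></pre>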
python|openpyxl|modulenotfounderror|mu
0
1,909,565
70,415,297
Saving images in a loop faster than multithreading / multiprocessing
<p>Here's a timed example of multiple image arrays of different sizes being saved in a loop as well as concurrently using threads / processes:</p> <pre><code>import tempfile from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed from pathlib import Path from time import perf_counter import numpy as np from cv2 import cv2 def save_img(idx, image, dst): cv2.imwrite((Path(dst) / f'{idx}.jpg').as_posix(), image) if __name__ == '__main__': l1 = np.random.randint(0, 255, (100, 50, 50, 1)) l2 = np.random.randint(0, 255, (1000, 50, 50, 1)) l3 = np.random.randint(0, 255, (10000, 50, 50, 1)) temp_dir = tempfile.mkdtemp() workers = 4 t1 = perf_counter() for ll in l1, l2, l3: t = perf_counter() for i, img in enumerate(ll): save_img(i, img, temp_dir) print(f'Time for {len(ll)}: {perf_counter() - t} seconds') for executor in ThreadPoolExecutor, ProcessPoolExecutor: with executor(workers) as ex: futures = [ ex.submit(save_img, i, img, temp_dir) for (i, img) in enumerate(ll) ] for f in as_completed(futures): f.result() print( f'Time for {len(ll)} ({executor.__name__}): {perf_counter() - t} seconds' ) </code></pre> <p>And I get these durations on my i5 mbp:</p> <pre><code>Time for 100: 0.09495482999999982 seconds Time for 100 (ThreadPoolExecutor): 0.14151873999999998 seconds Time for 100 (ProcessPoolExecutor): 1.5136184309999998 seconds Time for 1000: 0.36972280300000016 seconds Time for 1000 (ThreadPoolExecutor): 0.619205703 seconds Time for 1000 (ProcessPoolExecutor): 2.016624468 seconds Time for 10000: 4.232915643999999 seconds Time for 10000 (ThreadPoolExecutor): 7.251599262 seconds Time for 10000 (ProcessPoolExecutor): 13.963426469999998 seconds </code></pre> <p>Aren't threads / processes expected to require less time to achieve the same thing? and why not in this case?</p>
<p>The timings in the code are wrong because the timer <code>t</code> is not reset before testing the Pools. Nevertheless, the relative order of the timings are correct. A possible code with a timer reset is:</p> <pre class="lang-py prettyprint-override"><code>import tempfile from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed from pathlib import Path from time import perf_counter import numpy as np from cv2 import cv2 def save_img(idx, image, dst): cv2.imwrite((Path(dst) / f'{idx}.jpg').as_posix(), image) if __name__ == '__main__': l1 = np.random.randint(0, 255, (100, 50, 50, 1)) l2 = np.random.randint(0, 255, (1000, 50, 50, 1)) l3 = np.random.randint(0, 255, (10000, 50, 50, 1)) temp_dir = tempfile.mkdtemp() workers = 4 for ll in l1, l2, l3: t = perf_counter() for i, img in enumerate(ll): save_img(i, img, temp_dir) print(f'Time for {len(ll)}: {perf_counter() - t} seconds') for executor in ThreadPoolExecutor, ProcessPoolExecutor: t = perf_counter() with executor(workers) as ex: futures = [ ex.submit(save_img, i, img, temp_dir) for (i, img) in enumerate(ll) ] for f in as_completed(futures): f.result() print( f'Time for {len(ll)} ({executor.__name__}): {perf_counter() - t} seconds' ) </code></pre> <p>Multithreading is faster specially for I/O bound processes. In this case, compressing the images is cpu-intensive, so depending on the implementation of OpenCV and of the python wrapper, multithreading can be much slower. In many cases the culprit is CPython's GIL, but I am not sure if this is the case (I do not know if the GIL is released during the <code>imwrite</code> call). In my setup (i7 8th gen), Threading is as fast as the loop for 100 images and barely faster for 1000 and 10000 images. If <code>ThreadPoolExecutor</code> reuses threads, there is an overhead involved in assigning a new task to an existing thread. If it does not reuses threads, there is an overhead involved in launching a new thread.</p> <p>Multiprocessing circumvents the GIL issue, but has some other problems. First, pickling the data to pass between processes takes some time, and in the case of images it can be <em>very</em> expensive. Second, in the case of windows, spawning a new process takes a lot of time. 
A simple test to see the overhead (both for processes and threads) is to change the <code>save_image</code> function by one that does nothing, but still need pickling, etc:</p> <pre class="lang-py prettyprint-override"><code>def save_img(idx, image, dst): if idx != idx: print(&quot;impossible!&quot;) </code></pre> <p>and by a similar one without parameters to see the overhead of spawning the processes, etc.</p> <p>The timings in my setup show that 2.3 seconds are needed just to spawn the 10000 processes and 0.6 extra seconds for pickling, which is much more than the time needed for processing.</p> <p>A way to improve the throughput and keep the overhead to a minimum is to break the work on chunks, and submit each chunk to the worker:</p> <pre class="lang-py prettyprint-override"><code>import tempfile from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor, as_completed from pathlib import Path from time import perf_counter import numpy as np from cv2 import cv2 def save_img(idx, image, dst): cv2.imwrite((Path(dst) / f'{idx}.jpg').as_posix(), image) def multi_save_img(idx_start, images, dst): for idx, image in zip(range(idx_start, idx_start + len(images)), images): cv2.imwrite((Path(dst) / f'{idx}.jpg').as_posix(), image) if __name__ == '__main__': l1 = np.random.randint(0, 255, (100, 50, 50, 1)) l2 = np.random.randint(0, 255, (1000, 50, 50, 1)) l3 = np.random.randint(0, 255, (10000, 50, 50, 1)) temp_dir = tempfile.mkdtemp() workers = 4 for ll in l1, l2, l3: t = perf_counter() for i, img in enumerate(ll): save_img(i, img, temp_dir) print(f'Time for {len(ll)}: {perf_counter() - t} seconds') chunk_size = len(ll)//workers ends = [chunk_size * (_+1) for _ in range(workers)] ends[-1] += len(ll) % workers starts = [chunk_size * _ for _ in range(workers)] for executor in ThreadPoolExecutor, ProcessPoolExecutor: t = perf_counter() with executor(workers) as ex: futures = [ ex.submit(multi_save_img, start, ll[start:end], temp_dir) for (start, end) in zip(starts, ends) ] for f in as_completed(futures): f.result() print( f'Time for {len(ll)} ({executor.__name__}): {perf_counter() - t} seconds' ) </code></pre> <p>This should give you a significant boost over a simple for, both for a multiprocessing and multithreading approach.</p>
python|multithreading|image|multiprocessing
0
1,909,566
63,670,330
How to convert to log base 2?
<p>How can I convert the following code to log base 2?</p> <pre><code>df[&quot;col1&quot;] = df[&quot;Target&quot;].map(lambda i: np.log(i) if i &gt; 0 else 0) </code></pre>
<p>I think you just want to use <a href="https://numpy.org/doc/stable/reference/generated/numpy.log2.html#numpy.log2" rel="nofollow noreferrer"><code>np.log2</code></a> instead of <a href="https://numpy.org/doc/stable/reference/generated/numpy.log.html#numpy.log" rel="nofollow noreferrer"><code>np.log</code></a>.</p>
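<p>Applied to the line from the question, that would be something like:</p> <pre><code>df[&quot;col1&quot;] = df[&quot;Target&quot;].map(lambda i: np.log2(i) if i &gt; 0 else 0)
</code></pre>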
python|numpy|math|logarithm
2
1,909,567
55,668,238
Get cumulative sum Pandas conditional on other column
<p>I want to create a column that shows the cumulative count (rolling sum) of previous purchases (per customer) that took place in department 99</p> <p>My data frame looks like this ; where each row is a separate transaction. </p> <pre><code> id chain dept category company brand date productsize productmeasure purchasequantity purchaseamount sale 0 86246 205 7 707 1078778070 12564 2012-03-02 12.00 OZ 1 7.59 268.90 1 86246 205 63 6319 107654575 17876 2012-03-02 64.00 OZ 1 1.59 268.90 2 86246 205 97 9753 1022027929 0 2012-03-02 1.00 CT 1 5.99 268.90 3 86246 205 25 2509 107996777 31373 2012-03-02 16.00 OZ 1 1.99 268.90 4 86246 205 55 5555 107684070 32094 2012-03-02 16.00 OZ 2 10.38 268.90 5 86246 205 97 9753 1021015020 0 2012-03-02 1.00 CT 1 7.80 268.90 6 86246 205 99 9909 104538848 15343 2012-03-02 16.00 OZ 1 2.49 268.90 7 86246 205 59 5907 102900020 2012 2012-03-02 16.00 OZ 1 1.39 268.90 8 86246 205 9 921 101128414 9209 2012-03-02 4.00 OZ 2 1.50 268.90 </code></pre> <p>I did this : </p> <pre><code> shopdata6['transactions_99'] = 0 shopdata6['transactions_99'] = shopdata6[shopdata6['dept'] == 99].groupby(['id', 'dept'])['transaction_99'].cumsum() </code></pre> <p>Update : </p> <pre><code>id dept date purchase purchase_count_dept99(desired) id1 199 date1 $10 0 id1 99 date1 $10 1 id1 100 date1 $50 1 id1 99 date2 $30 2 id2 100 date1 $10 0 id2 99 date1 $10 1 id3 99 date3 $10 1 </code></pre> <p>Applied this :</p> <pre><code>shopdata6['transaction_99'] = np.where(shopdata6['dept']==99, 1, 0) shopdata6['transaction_99'] = shopdata6.groupby(['id'])['transaction_99'].transform('cumsum') </code></pre> <p>The result does look okay, but is it correct ? </p>
<p>Your code can be simplified:</p> <pre><code>s = (shopdata6['dept']==99).astype(int) shopdata6['transaction_99'] = s.groupby(shopdata6['id']).cumsum() print (shopdata6) id dept date purchase purchase_count_dept99(desired) transaction_99 0 id1 199 date1 $10 0 0 1 id1 99 date1 $10 1 1 2 id1 100 date1 $50 1 1 3 id1 99 date2 $30 2 2 4 id2 100 date1 $10 0 0 5 id2 99 date1 $10 1 1 6 id3 99 date3 $10 1 1 </code></pre>
python|pandas|pandas-groupby
0
1,909,568
56,814,435
Random error message popping up, I am very confused as to why this is happening
<pre><code>numbers = range(1,10) for number in numbers: if number == 1: print(number + "st") elif number == 2: print(number + "nd") elif number == 3: print(number + "rd") elif number: print(number + "th") </code></pre> <p>There is an unexpected error that keeps on popping up. It keeps on saying "unsupported operand type(s) for +: 'int' and 'str'". I tried changing some things but nothing seems to work! If you can possibly help me, please give me an answer. :)</p>
<p>In Python, strings can only be concatenated with other strings. You can't add a string and an integer. Instead, you would convert the integer to a string and then perform concatenation.</p> <p>Like so:</p> <pre><code>print(str(number) + "st") </code></pre>
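<p>As a rough sketch, the loop from the question with that conversion applied (and the final branch written as a plain <code>else</code>) would look like this:</p> <pre><code>numbers = range(1, 10)
for number in numbers:
    if number == 1:
        print(str(number) + "st")
    elif number == 2:
        print(str(number) + "nd")
    elif number == 3:
        print(str(number) + "rd")
    else:
        print(str(number) + "th")
</code></pre> <p>An f-string such as <code>print(f"{number}th")</code> would work equally well, since the conversion happens inside the string formatting.</p>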
python|python-3.x
0
1,909,569
56,713,874
How to add custom HTML elements and images to a blog in Django?
<p>I am trying to create a blog in Django. Most of the tutorials and examples available show just retrieving some content from the database and displaying it dynamically in a predefined HTML structure.</p> <p>After looking at some solutions I found something called flatpages in Django, which provides the facility to write HTML. But it's recommended to use it for About Us and Contact Us kind of pages. Should I use this?</p> <p>I want to be able to write my own HTML for each blog post and add some images, so that the HTML structure is not the same for every post. </p> <p>For example, in the case of WordPress, it allows the user to completely write each part of the blog except the heading part, and the HTML structure is not always constant. </p> <p>I want such functionality. Please help.</p>
<p>What you are looking for is to upload images and embed them as HTML in your content field. This can be done using a WYSIWYG editor such as CKEditor. In CKEditor you can write your text, format it and upload files. You could use django-ckeditor to do the heavy lifting for you: <a href="https://github.com/django-ckeditor/django-ckeditor" rel="nofollow noreferrer">https://github.com/django-ckeditor/django-ckeditor</a></p> <p>In your template you then have to render your content with the safe filter so that the content will be rendered as HTML:</p> <pre><code> {{ post.content |safe }} </code></pre>
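<p>As a minimal sketch of the model side (assuming django-ckeditor is installed and configured for uploads; the <code>Post</code> model name here is just an example), the content field could look like:</p> <pre><code>from django.db import models
from ckeditor_uploader.fields import RichTextUploadingField

class Post(models.Model):
    title = models.CharField(max_length=200)
    # rich text editor field with image upload support
    content = RichTextUploadingField()
</code></pre>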
python|django
3
1,909,570
69,803,370
splitting words into syllables python
<p>I have a function called syllable_split(word_input) that receives a word, counts the number of syllables and returns only a list containing the syllables of the given word.<br /> e.g.<br /> pandemonium ----&gt; ['pan', 'de', 'mo', 'ni', 'um']</p> <p>self-righteously ---&gt; ['self', 'right','eous', 'ly']</p> <p>hello ---&gt; ['hel','lo']</p> <p>diet ----&gt; ['di','et']</p> <p>seven ---&gt; ['sev','en']</p> <p>My function counts the syllables correctly but I'm having trouble splitting the word into its corresponding syllables. I only managed to split the word into its first corresponding syllable, but it tends not to work for some words. For example, I enter 'seven' and I only get 'se' instead of 'sev'. I was thinking of following the syllable division patterns (vc/cv, c/cv, vc/v, v/v) but I'm having trouble implementing that in my function.</p> <pre><code>def syllable_split(word_input): count = 0 word = word_input.lower() vowels = set(&quot;aeiou&quot;) syll = list() temp = 0 for letter in word: if letter in vowels: count += 1 if count == 1: return word for index in range(count, len(word)): if word[index] in vowels and word[index - 1] not in vowels: w = word[temp: index - 1] if len(w) != 0: syll.append(w) temp = index - 1 return syll user_input = input() print(syllable_split(user_input)) </code></pre>
<p>While I agree with the comments that your approach will have many failings, if that's okay, then based on your implementation you could write a function that splits the words exactly how you describe:</p> <pre><code>vowels = 'AEIOU' consts = 'BCDFGHJKLMNPQRSTVWXYZ' consts = consts + consts.lower() vowels = vowels + vowels.lower() def is_vowel(letter): return letter in vowels def is_const(letter): return letter in consts # get the syllables for vc/cv def vc_cv(word): segment_length = 4 # because this pattern needs four letters to check pattern = [is_vowel, is_const, is_const, is_vowel] # functions above split_points = [] # find where the pattern occurs (+ 1 keeps the final window in range) for i in range(len(word) - segment_length + 1): segment = word[i:i+segment_length] # this will check that the four letters each match the vc/cv pattern based on their position # if this is new to you I made a small note about it below if all([fi(letter) for letter, fi in zip(segment, pattern)]): split_points.append(i + segment_length//2) # integer division so the result can be used as a slice index # use the index to find the syllables - add 0 and len(word) to make it work split_points.insert(0, 0) split_points.append(len(word)) syllables = [] for i in range(len(split_points) - 1): start = split_points[i] end = split_points[i+1] syllables.append(word[start:end]) return syllables word = 'vortex' print(vc_cv(word)) # ['vor', 'tex'] </code></pre> <p>You can do something similar for the other patterns, for example, c/cv will be <code>pattern = [is_const, is_const, is_vowel]</code> with a segment length of 3</p> <ul> <li>Note: you can put functions in a list:</li> </ul> <pre><code>def linear(x): return x def squared(x): return x * x def cubed(x): return x * x * x funcs = [linear, squared, cubed] numbers = [2, 2, 2] transforms = [fi(ni) for ni, fi in zip(numbers, funcs)] # results -&gt; [2, 4, 8] </code></pre>
python
1
1,909,571
69,990,401
Print only the numbers in the string in python
<p>I need to print only the numbers in the string and I don't know how to do it. For example, for mystring=&quot;ab543&quot;, how do I get 543 as an int?</p> <p>I tried something like this:</p> <pre><code>my_string=&quot;ab543&quot; numlst=[&quot;0&quot;,&quot;1&quot;,&quot;2&quot;,&quot;3&quot;,&quot;4&quot;,&quot;5&quot;,&quot;6&quot;,&quot;7&quot;,&quot;8&quot;,&quot;9&quot;] countfinish=0 whichnum=&quot;&quot; for charr in my_string: for num in numlst: if num==charr: whichnum=whichnum+str(num) break countfinish=countfinish+int(whichnum) print(countfinish) </code></pre>
<p>You can try:</p> <pre><code>&gt;&gt;&gt; my_string=&quot;ab543&quot; &gt;&gt;&gt; &quot;&quot;.join([str(s) for s in my_string if s.isdigit()]) '543' &gt;&gt;&gt; int(&quot;&quot;.join([str(s) for s in my_string if s.isdigit()])) 543 </code></pre> <p>You can also use <code>filter</code>:</p> <pre><code>&gt;&gt;&gt; my_string=&quot;ab543&quot; &gt;&gt;&gt; int(''.join(filter(str.isdigit, my_string))) 543 </code></pre>
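<p>Another option, using the <code>re</code> module, is to strip out every non-digit character (note this assumes the string contains at least one digit, otherwise <code>int('')</code> would fail):</p> <pre><code>&gt;&gt;&gt; import re
&gt;&gt;&gt; my_string = 'ab543'
&gt;&gt;&gt; int(re.sub(r'\D', '', my_string))
543
</code></pre>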
python|string|integer
0
1,909,572
69,891,674
How can I open the Zoom window directly using Python?
<p>I used the subprocess library but it didn't work:</p> <pre><code>import subprocess subprocess.Popen(&quot;C:\Users\STUDENT\AppData\Roaming\Zoom\bin\Zoom.exe&quot;) </code></pre> <p>It shows this error message:</p> <pre><code>^SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3 truncated \UXXXXXXXX escape </code></pre>
<p>I would recommend first confirming that you can run Zoom from the command line from the specified path: <code>C:\Users\STUDENT\AppData\Roaming\Zoom\bin\Zoom.exe</code></p> <p>My installation of Zoom on Windows 10 uses this path: <code>C:\Program Files (x86)\zoom\bin\Zoom.exe</code></p> <p>If you can open Zoom from the command line (such as PowerShell or the cmd prompt), you should be able to open it with subprocess from Python.</p> <p>The error message is likely caused by the path string, specifically <code>PEP 8: W605 invalid escape sequence '\'.</code> If you don't escape the backslashes, Python incorrectly parses the string.</p> <p>Try:</p> <pre class="lang-py prettyprint-override"><code>import subprocess def main(): subprocess.Popen(&quot;C:\\Program Files (x86)\\zoom\\bin\\Zoom.exe&quot;) if __name__ == '__main__': main() </code></pre>
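<p>An alternative sketch, using a raw string literal so the backslashes do not need to be doubled:</p> <pre class="lang-py prettyprint-override"><code>import subprocess

subprocess.Popen(r&quot;C:\Users\STUDENT\AppData\Roaming\Zoom\bin\Zoom.exe&quot;)
</code></pre>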
python
0
1,909,573
17,834,650
Scrapy: ValueError: need more than 0 values to unpack
<p>I am using scrapy to extract some data. Last time I had a problem in the regex line. The error message is like this one: </p> <p>**File "ProjetVinNicolas3\spiders\nicolas_spider3.py", line 70, in parse_wine_page</p> <pre><code>classement, appelation, couleur = res.select('.//div[@class="pro_col_right"]/div[@class="pro_blk_trans"] div[@class="pro_blk_trans_titre"]/text()').re(r'^(\d\w+\s*Vin)\S\s+(\w+-\w+|\w+)\S\s+(\w+)\s*$') exceptions.ValueError: need more than 0 values to unpack** </code></pre> <p><a href="https://gist.github.com/XeroxGH/295c723c565d889fb2dc" rel="nofollow">link program</a> </p>
<p>The call to <code>.re</code> is returning a zero-length tuple. You cannot perform a sequence assignment to n variables using a sequence which is not of exactly length n. </p>
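<p>A hedged sketch of how you could guard against that before unpacking (here <code>xpath_expr</code> and <code>pattern</code> stand for the selector and regex already shown in the question):</p> <pre><code>matches = res.select(xpath_expr).re(pattern)
if len(matches) == 3:
    classement, appelation, couleur = matches
else:
    # the regex matched nothing on this page; handle or skip it instead of unpacking
    classement = appelation = couleur = None
</code></pre>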
python|regex|scrapy
1
1,909,574
17,926,699
python sqlite3 insert command
<p>any idea what I'm doing wrong?</p> <p>I'm creating a table called General:</p> <pre><code> conn = sqlite3.connect(self.dbLocation) c = conn.cursor() sql = "create table if not exists General (id integer NOT NULL,current char[20] NOT NULL,PRIMARY KEY (id))" c.execute(sql) c.close() conn.close() </code></pre> <p>I'm then using max(id) to see if the table is empty. If it is, I create a table called Current1 and insert a row in General (id, 'Current1'). id is autoincrementing integer:</p> <pre><code> self.currentDB = "Current1" self.currentDBID = "1" #create the table sql = "create table %s (id integer NOT NULL,key char[90] NOT NULL,value float NOT NULL,PRIMARY KEY (id))" % (str(self.currentDB)) c.execute(sql) c.close() conn.close() conn = sqlite3.connect(self.dbLocation) c = conn.cursor() sql = "insert into General(current) values('%s')" % (str(self.currentDB)) print "sql = %s" % (str(sql)) ---&gt; *sql = insert into General(current) values('Current1')* c.execute(sql) print "executed insert Current" c.execute ("select max(id) from General") temp = c.next()[0] print "temp = %s" % (str(temp)) ---&gt; *temp = 1* c.close() conn.close() </code></pre> <p>The problem is that if I open the database, I do not find any rows in the General table. Current1 table is being created, but the insert statement into General does not seem to be doing anything. What am I doing wrong? Thanks.</p>
<p>You have to commit the changes before closing the connection:</p> <pre><code>conn.commit() </code></pre> <p>Check the example in the docs: <a href="http://docs.python.org/2/library/sqlite3.html" rel="nofollow">http://docs.python.org/2/library/sqlite3.html</a></p>
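<p>A rough sketch of where the commit fits in your insert code (also using a parameterized query, which is generally safer than building the SQL with string formatting):</p> <pre><code>conn = sqlite3.connect(self.dbLocation)
c = conn.cursor()
c.execute(&quot;insert into General(current) values(?)&quot;, (self.currentDB,))
conn.commit()  # persist the insert before closing
c.close()
conn.close()
</code></pre>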
python|insert|sqlite
1
1,909,575
61,089,528
tensorflow dataset from_generator() out of range error
<p>I'm trying to use <code>tf.data.Dataset.from_generator()</code> to generate training and validation data.</p> <p>I have my own data generator which does feature preparation on the fly:</p> <pre><code>def data_iterator(self, input_file_list, ...): for f in input_file_list: X, y = get_feature(f) yield X, y </code></pre> <p>Initially I was feeding this directly to tensorflow keras model but I encounter data out of range error after the first batch. Then I decided to wrap this within tensorflow data generator:</p> <pre><code>train_gen = lambda: data_iterator(train_files, ...) valid_gen = lambda: data_iterator(valid_files, ...) output_types = (tf.float32, tf.float32) output_shapes = (tf.TensorShape([499, 13]), tf.TensorShape([2])) train_dat = tf.data.Dataset.from_generator(train_gen, output_types=output_types, output_shapes=output_shapes) valid_dat = tf.data.Dataset.from_generator(valid_gen, output_types=output_types, output_shapes=output_shapes) train_dat = train_dat.repeat().batch(batch_size=128) valid_dat = valid_dat.repeat().batch(batch_size=128) </code></pre> <p>Then fit:</p> <pre><code>model.fit(x=train_dat, validation_data=valid_dat, steps_per_epoch=train_steps, validation_steps=valid_steps, epochs=100, callbacks=callbacks) </code></pre> <p>However, I'm still getting the error despite having <code>.repeat()</code> in the generator:</p> <blockquote> <p>BaseCollectiveExecutor::StartAbort Out of range: End of sequence</p> </blockquote> <p>My question is:</p> <ul> <li>why is <code>.repeat()</code> not working here?</li> <li>should I add a <code>while True</code> in my own iterator to avoid this? I feel like this can fix it but it doesn't look like the proper way of doing it.</li> </ul>
<p>I added a <code>while True</code> in my own generator so that it never runs out, and I'm not getting the error any more:</p> <pre><code>def data_iterator(self, input_file_list, ...): while True: for f in input_file_list: X, y = get_feature(f) yield X, y </code></pre> <p>However, I don't know why <code>.repeat()</code> is not working for <code>.from_generator()</code></p>
python|tensorflow|keras|generator|tensorflow-datasets
1
1,909,576
61,077,864
Maximum length of consecutive ones in binary representation
<p>Trying to find maximum length of ones in a binary representation including negative numbers. In the following code <code>input_file</code> is a text file where:</p> <ul> <li>first line is a number of lines with sample integers</li> <li>every line staring from the second line has just one sample integer</li> </ul> <p>An example file:</p> <p>4 - number of samples</p> <p>3 - sample</p> <p>0 - ...</p> <p>1 - ...</p> <p>2 - ...</p> <p>Result: 2</p> <p>Task: print the maximum number of ones found among all sample integers in input file. Find solution that takes O(n) time and makes just one pass through all samples.</p> <p>How to modify solution to work with negative integers of arbitrary (or at least for <code>n ≤ 10000</code>) size?</p> <p><strong>Update:</strong> </p> <p>As I understand binary representation of negative numbers is based on Two's complement (<a href="https://en.wikipedia.org/wiki/Two" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Two</a>'s_complement). So, for example:</p> <p>+3 -> 011</p> <p>-3 -> 101</p> <p>How to convert integer to binary string representation taking its sign into account in general case?</p> <pre><code>def maxConsecutive(input): return max(map(len,input.split('0'))) def max_len(input_file): max_len = 0 with open(input_file) as file: first_line = file.readline() if not first_line: return 0 k = int(first_line.strip()) # number of tests for i in range(k): line = file.readline().strip() n = int(line) xs = "{0:b}".format(n) n = maxConsecutive(xs) if n &gt; max_len: max_len = n return max_len print(max_len('input.txt')) </code></pre> <p><strong>Update 2:</strong> This is a second task <strong>B</strong> from Yandex contest training page: <a href="https://contest.yandex.ru/contest/8458/enter/?lang=en" rel="nofollow noreferrer">https://contest.yandex.ru/contest/8458/enter/?lang=en</a></p> <p>You need to register there to test your solution.</p> <p><em>So far All solutions given here fail at test 9.</em></p> <p><strong>Update 3: Solution in Haskell that pass all Yandex tests</strong></p> <pre><code>import Control.Monad (replicateM) onesCount :: [Char] -&gt; Int onesCount xs = onesCount' xs 0 0 where onesCount' "" max curr | max &gt; curr = max | otherwise = curr onesCount' (x:xs) max curr | x == '1' = onesCount' xs max $ curr + 1 | curr &gt; max = onesCount' xs curr 0 | otherwise = onesCount' xs max 0 getUserInputs :: IO [Char] getUserInputs = do n &lt;- read &lt;$&gt; getLine :: IO Int replicateM n $ head &lt;$&gt; getLine main :: IO () main = do xs &lt;- getUserInputs print $ onesCount xs </code></pre>
<p>For negative numbers, you will either have to decide on a word length (32 bits, 64 bits, ...) or process them as absolute values (i.e. ignoring the sign) or use the minimum number of bits for each value.</p> <p>An easy way to control the word length is to use format strings. you can obtain the negative bits by adding the value to the power 2 corresponding to the selected word size. This will give you the appropriate bits for positive and for negative numbers. </p> <p>For example:</p> <pre><code>n = 123 f"{(1&lt;&lt;32)+n:032b}"[-32:] --&gt; '00000000000000000000000001111011' n = -123 f"{(1&lt;&lt;32)+n:032b}"[-32:] --&gt; '11111111111111111111111110000101' </code></pre> <p>Processing that to count the longest series of consecutive 1s is just a matter of string manipulation:</p> <p>If you choose to represent negative numbers using a varying word size you can use one bit more than the minimal representation of the positive number. For example -3 is represented as two bits ('11') when positive so it will need a minimum of 3 bits to be represented as a negative number: '101'</p> <pre><code>n = -123 wordSize = len(f"{abs(n):b}")+1 bits = f"{(1&lt;&lt;wordSize)+n:0{wordSize}b}"[-wordSize:] maxOnes = max(map(len,bits.split("0"))) print(maxOnes) # 1 ('10000101') </code></pre>
python|algorithm|binary
1
1,909,577
69,176,768
Django: How can I filter a foreign key of a class in models from users.forms
<p>I created a Patient model in the patient app:</p> <pre><code>from django.contrib.auth.models import User # Create your models here. class Patient(models.Model): doctor = models.ForeignKey(User, on_delete=models.CASCADE) first_name = models.CharField(max_length=100) last_name = models.CharField(max_length=100) sex = models.CharField(max_length=20) phone = models.IntegerField() birth_date = models.DateField() </code></pre> <p>I want to filter the doctor field, which is a foreign key to User, to just the users whose groups choice is 'Docteur', so that when I add a patient I only see users from the 'Docteur' group and not accounts from the other groups.</p> <p>This is the forms.py in the users app:</p> <pre><code>from django import forms from django.contrib.auth.forms import UserCreationForm import datetime class RegisterForm(UserCreationForm): BIRTH_YEAR_CHOICES = [] for years in range(1900,2021): BIRTH_YEAR_CHOICES.append(str(years)) sex_choice = [('1', 'Men'), ('2', 'Women')] groups_choice = [('1','Docteur'), ('2','Docteur remplaçant'), ('3','Secrétaire')] first_name = forms.CharField(max_length=200) last_name = forms.CharField(max_length=200) sex = forms.ChoiceField(widget=forms.Select, choices=sex_choice) date_of_birth = forms.DateField(widget=forms.SelectDateWidget(years=BIRTH_YEAR_CHOICES)) email = forms.EmailField() phone = forms.IntegerField() cin = forms.IntegerField() groups = forms.ChoiceField(widget=forms.Select, choices=groups_choice) password1 = forms.CharField(widget=forms.PasswordInput(), label='Password') password2 = forms.CharField(widget=forms.PasswordInput(), label='Repeat Password') class Meta(UserCreationForm.Meta): fields = UserCreationForm.Meta.fields + ('username','first_name','last_name','sex','date_of_birth','email','phone','cin','groups') </code></pre> <p>So what am I supposed to do to add this condition?</p>
<p>If I understand correctly, you want a form for creating <code>Patient</code>s in which you can select a <code>User</code> for the <code>doctor</code> foreign key, but restrict the choices to users that have selected the <code>('1', 'Docteur')</code> choice as <code>groups</code> field.</p> <p>In that case you can use a <a href="https://docs.djangoproject.com/en/dev/ref/forms/fields/#modelchoicefield" rel="nofollow noreferrer"><code>ModelChoiceField</code></a> and provide a filtered <code>queryset</code>:</p> <pre class="lang-py prettyprint-override"><code>from django.contrib.auth.models import User from django import forms from .models import Patient class AddPatientForm(forms.ModelForm): doctor = forms.ModelChoiceField(queryset=User.objects.filter(groups='1')) class Meta: model = Patient fields = ['first_name', 'last_name', ...] </code></pre>
python|django
0
1,909,578
68,953,476
conditions inside conditions pandas
<p>below is my DF in which I want to create a column based on other columns</p> <pre><code>test = pd.DataFrame({&quot;Year_2017&quot; : [np.nan, np.nan, np.nan, 4], &quot;Year_2018&quot; : [np.nan, np.nan, 3, np.nan], &quot;Year_2019&quot; : [np.nan, 2, np.nan, np.nan], &quot;Year_2020&quot; : [1, np.nan, np.nan, np.nan]}) Year_2017 Year_2018 Year_2019 Year_2020 0 NaN NaN NaN 1 1 NaN NaN 2 NaN 2 NaN 3 NaN NaN 3 4 NaN NaN NaN </code></pre> <p>The aim will be to create a new column and take value of the columns which is notna()</p> <p>Below is what I tried without success..</p> <pre><code>test['Final'] = np.where(test.Year_2017.isna(), test.Year_2018, np.where(test.Year_2018.isna(), test.Year_2019, np.where(test.Year_2019.isna(), test.Year_2020, test.Year_2019))) Year_2017 Year_2018 Year_2019 Year_2020 Final 0 NaN NaN NaN 1 NaN 1 NaN NaN 2 NaN NaN 2 NaN 3 NaN NaN 3 3 4 NaN NaN NaN NaN </code></pre> <p>The expected output:</p> <pre><code> Year_2017 Year_2018 Year_2019 Year_2020 Final 0 NaN NaN NaN 1 1 1 NaN NaN 2 NaN 2 2 NaN 3 NaN NaN 3 3 4 NaN NaN NaN 4 </code></pre>
<p>You can forward- or back-fill the missing values and then select the last or first column:</p> <pre><code>test['Final'] = test.ffill(axis=1).iloc[:, -1] </code></pre> <hr /> <pre><code>test['Final'] = test.bfill(axis=1).iloc[:, 0] </code></pre> <p>If there is only one non-missing value per row and the values are numeric, use:</p> <pre><code>test['Final'] = test.min(1) test['Final'] = test.max(1) test['Final'] = test.mean(1) test['Final'] = test.sum(1, min_count=1) </code></pre>
python|pandas|numpy
2
1,909,579
59,148,867
Call a class with append that targets class variable
<p>For example:</p> <pre class="lang-py prettyprint-override"><code>class Foo: def __init__(self): self.bar = ["baz", "qux", "quux", "quuz", "corge", "grault", "garply", "waldo", "fred", "plugh", "xyzzy", "thud"] </code></pre> <p>How can I call <code>Foo().append()</code> that appends to <code>Foo().bar</code>?</p> <p>Ex:</p> <pre class="lang-py prettyprint-override"><code>x = Foo() x.append("asd") # What I want to happen: # self.bar now is [..., "asd"] # What actually happens: # AttributeError: 'Foo' object has no attribute 'append' </code></pre> <p>Is this possible?</p>
<p>I added an <code>append</code> function myself:</p> <pre class="lang-py prettyprint-override"><code># ... in the Foo() class def append(self, value): return self.bar.append(value) </code></pre> <p>Edit: A simpler method that would also work</p> <pre class="lang-py prettyprint-override"><code># ... in Foo().__init__(self) self.append = self.bar.append </code></pre> <p>(Thank you @RaySteam)</p>
python|python-3.x
1
1,909,580
63,316,411
How to open another window and take user input in PyQt5 with Python
<p>I am trying to create a GUI using pyqt5. I have one main window with pushbutton. When i click on pushbutton it should open another window which is having input form to take first name and last name. Below is my code. I am able to open another window but when i am submitting the details on opened window and clicking on Submit Details button, nothing is happening.</p> <p>Please note, if i directly call Child_ui in Main_Ui then then I am able to see the output form PrintInput function but same is not happening when I converted ui files in class.</p> <p>Main_ui.py:</p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(&quot;MainWindow&quot;) MainWindow.resize(299, 148) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName(&quot;centralwidget&quot;) self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(90, 70, 75, 23)) self.pushButton.setObjectName(&quot;pushButton&quot;) MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate(&quot;MainWindow&quot;, &quot;MainWindow&quot;)) self.pushButton.setText(_translate(&quot;MainWindow&quot;, &quot;Register user&quot;)) if __name__ == &quot;__main__&quot;: import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) </code></pre> <p>I have converted this Qt designed file to class file:</p> <p>Main.py:</p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets from Main_ui import * from Child import * class Main(QtWidgets.QMainWindow, Ui_MainWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.openChild) def openChild(self): self.child = QtWidgets.QMainWindow() self.ui = userRegistation() self.ui.setupUi(self.child) self.child.show() if __name__ == &quot;__main__&quot;: import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Main() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) </code></pre> <p>Below is my Child_ui.py qt designer script:</p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_ChildWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(&quot;MainWindow&quot;) MainWindow.resize(284, 141) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName(&quot;centralwidget&quot;) self.label = QtWidgets.QLabel(self.centralwidget) self.label.setGeometry(QtCore.QRect(20, 30, 71, 16)) self.label.setObjectName(&quot;label&quot;) self.label_2 = QtWidgets.QLabel(self.centralwidget) self.label_2.setGeometry(QtCore.QRect(20, 60, 71, 16)) self.label_2.setObjectName(&quot;label_2&quot;) self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(20, 100, 251, 23)) self.pushButton.setObjectName(&quot;pushButton&quot;) self.lineEdit = QtWidgets.QLineEdit(self.centralwidget) self.lineEdit.setGeometry(QtCore.QRect(100, 30, 171, 20)) self.lineEdit.setObjectName(&quot;lineEdit&quot;) self.lineEdit_2 = QtWidgets.QLineEdit(self.centralwidget) self.lineEdit_2.setGeometry(QtCore.QRect(100, 60, 171, 20)) self.lineEdit_2.setObjectName(&quot;lineEdit_2&quot;) 
MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate(&quot;MainWindow&quot;, &quot;MainWindow&quot;)) self.label.setText(_translate(&quot;MainWindow&quot;, &quot;First Name&quot;)) self.label_2.setText(_translate(&quot;MainWindow&quot;, &quot;Last Name&quot;)) self.pushButton.setText(_translate(&quot;MainWindow&quot;, &quot;Submit&quot;)) if __name__ == &quot;__main__&quot;: import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec_()) </code></pre> <p>Child.py : Class file of Child_ui.py</p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets from Child_ui import * class userRegistation(QtWidgets.QMainWindow, Ui_ChildWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.PrintInput) def PrintInput(self): print (self.lineEdit.text()) print (self.lineEdit_2.text()) </code></pre>
<p>Try it:</p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_ChildWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(&quot;MainWindow&quot;) MainWindow.resize(284, 141) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName(&quot;centralwidget&quot;) self.label = QtWidgets.QLabel(self.centralwidget) self.label.setGeometry(QtCore.QRect(20, 30, 71, 16)) self.label.setObjectName(&quot;label&quot;) self.label_2 = QtWidgets.QLabel(self.centralwidget) self.label_2.setGeometry(QtCore.QRect(20, 60, 71, 16)) self.label_2.setObjectName(&quot;label_2&quot;) self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(20, 100, 251, 23)) self.pushButton.setObjectName(&quot;pushButton&quot;) self.lineEdit = QtWidgets.QLineEdit(self.centralwidget) self.lineEdit.setGeometry(QtCore.QRect(100, 30, 171, 20)) self.lineEdit.setObjectName(&quot;lineEdit&quot;) self.lineEdit_2 = QtWidgets.QLineEdit(self.centralwidget) self.lineEdit_2.setGeometry(QtCore.QRect(100, 60, 171, 20)) self.lineEdit_2.setObjectName(&quot;lineEdit_2&quot;) MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate(&quot;MainWindow&quot;, &quot;MainWindow&quot;)) self.label.setText(_translate(&quot;MainWindow&quot;, &quot;First Name&quot;)) self.label_2.setText(_translate(&quot;MainWindow&quot;, &quot;Last Name&quot;)) self.pushButton.setText(_translate(&quot;MainWindow&quot;, &quot;Submit&quot;)) #from Main_ui import Ui_MainWindow class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(&quot;MainWindow&quot;) MainWindow.resize(299, 148) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName(&quot;centralwidget&quot;) self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(90, 70, 75, 23)) self.pushButton.setObjectName(&quot;pushButton&quot;) MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate(&quot;MainWindow&quot;, &quot;MainWindow&quot;)) self.pushButton.setText(_translate(&quot;MainWindow&quot;, &quot;Register user&quot;)) #from Child import * class UserRegistation(QtWidgets.QMainWindow, Ui_ChildWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.PrintInput) def PrintInput(self): print (self.lineEdit.text()) print (self.lineEdit_2.text()) class Main(QtWidgets.QMainWindow, Ui_MainWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.openChild) def openChild(self): # self.child = QtWidgets.QMainWindow() self.ui = UserRegistation() # &lt;--- # self.ui.setupUi(self.ui) # (self.child) # self.child.show() self.ui.show() # &lt;--- if __name__ == &quot;__main__&quot;: import sys app = QtWidgets.QApplication(sys.argv) # MainWindow = QtWidgets.QMainWindow() ui = Main() # &lt;--- # ui.setupUi(MainWindow) ui.show() # &lt;--- sys.exit(app.exec_()) </code></pre> <p><a href="https://i.stack.imgur.com/vm2MZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vm2MZ.png" alt="enter image description here" 
/></a></p> <hr /> <blockquote> <p>.. yes it is working and i was also able to do it this way. I want to do it using two different file. Also I don't want to write logic in Qt designed file because if i do any change in Qt designer then whole script needs to change</p> </blockquote> <p><strong>Update</strong></p> <p><strong>Main.py</strong></p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets from Main_ui import Ui_MainWindow from Child import UserRegistation class Main(QtWidgets.QMainWindow, Ui_MainWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.openChild) def openChild(self): # self.child = QtWidgets.QMainWindow() self.ui = UserRegistation() # &lt;--- # self.ui.setupUi(self.ui) # (self.child) # self.child.show() self.ui.show() # &lt;--- if __name__ == &quot;__main__&quot;: import sys app = QtWidgets.QApplication(sys.argv) # MainWindow = QtWidgets.QMainWindow() ui = Main() # &lt;--- # ui.setupUi(MainWindow) ui.show() # &lt;--- sys.exit(app.exec_()) </code></pre> <p><strong>Main_ui.py</strong></p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(&quot;MainWindow&quot;) MainWindow.resize(299, 148) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName(&quot;centralwidget&quot;) self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(90, 70, 75, 23)) self.pushButton.setObjectName(&quot;pushButton&quot;) MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate(&quot;MainWindow&quot;, &quot;MainWindow&quot;)) self.pushButton.setText(_translate(&quot;MainWindow&quot;, &quot;Register user&quot;)) </code></pre> <p><strong>Child.py</strong></p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets from Child_ui import Ui_ChildWindow class UserRegistation(QtWidgets.QMainWindow, Ui_ChildWindow): def __init__(self, parent=None): super().__init__(parent) self.setupUi(self) self.pushButton.clicked.connect(self.PrintInput) def PrintInput(self): print (self.lineEdit.text()) print (self.lineEdit_2.text()) </code></pre> <p><strong>Child_ui.py</strong></p> <pre><code>from PyQt5 import QtCore, QtGui, QtWidgets class Ui_ChildWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName(&quot;MainWindow&quot;) MainWindow.resize(284, 141) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName(&quot;centralwidget&quot;) self.label = QtWidgets.QLabel(self.centralwidget) self.label.setGeometry(QtCore.QRect(20, 30, 71, 16)) self.label.setObjectName(&quot;label&quot;) self.label_2 = QtWidgets.QLabel(self.centralwidget) self.label_2.setGeometry(QtCore.QRect(20, 60, 71, 16)) self.label_2.setObjectName(&quot;label_2&quot;) self.pushButton = QtWidgets.QPushButton(self.centralwidget) self.pushButton.setGeometry(QtCore.QRect(20, 100, 251, 23)) self.pushButton.setObjectName(&quot;pushButton&quot;) self.lineEdit = QtWidgets.QLineEdit(self.centralwidget) self.lineEdit.setGeometry(QtCore.QRect(100, 30, 171, 20)) self.lineEdit.setObjectName(&quot;lineEdit&quot;) self.lineEdit_2 = QtWidgets.QLineEdit(self.centralwidget) self.lineEdit_2.setGeometry(QtCore.QRect(100, 60, 171, 20)) self.lineEdit_2.setObjectName(&quot;lineEdit_2&quot;) 
MainWindow.setCentralWidget(self.centralwidget) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate(&quot;MainWindow&quot;, &quot;MainWindow&quot;)) self.label.setText(_translate(&quot;MainWindow&quot;, &quot;First Name&quot;)) self.label_2.setText(_translate(&quot;MainWindow&quot;, &quot;Last Name&quot;)) self.pushButton.setText(_translate(&quot;MainWindow&quot;, &quot;Submit&quot;)) </code></pre>
python|pyqt5
0
1,909,581
62,228,008
Shuffling rows in pandas but orderly
<p>Let's say that I have a data frame of three columns: age, gender, and country. </p> <p>I want to randomly shuffle this data <strong>but in an ordered fashion</strong> according to gender. There are n males and m females, where n could be less than, greater than, or equal to m. The shuffling should happen in such a way that we get the following results for a size of 8 people:</p> <p>male, female, male, female, male, female, female, female,.... (if there are more females: m > n) male, female, male, female, male, male, male, male (if there are more males: n > m) male, female, male, female, male, female, male, female, male, female (if equal males and females: n = m)</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'Age': [10, 20, 30, 40, 50, 60, 70, 80], 'Gender': ["Male", "Male", "Male", "Female", "Female", "Male", "Female", "Female"], 'Country': ["US", "UK", "China", "Canada", "US", "UK", "China", "Brazil"]}) </code></pre>
<p>First add the sequence numbers within each group:</p> <pre><code>df['Order'] = df.groupby('Gender').cumcount() </code></pre> <p>Then sort:</p> <pre><code>df.sort_values('Order') </code></pre> <p>It gives you:</p> <pre><code> Age Gender Country Order 0 10 Male US 0 3 40 Female Canada 0 1 20 Male UK 1 4 50 Female US 1 2 30 Male China 2 6 70 Female China 2 5 60 Male UK 3 7 80 Female Brazil 3 </code></pre> <p>If you want to shuffle, do that at the very beginning, e.g. <code>df = df.sample(frac=1)</code>, see: <a href="https://stackoverflow.com/questions/29576430/shuffle-dataframe-rows">Shuffle DataFrame rows</a></p>
python|python-3.x|pandas
2
1,909,582
62,099,645
Why is my Binary Search slower than Linear Search?
<p>I was trying to code a Binary Search and Linear Search and I was shocked by seeing that binary search is slower than Linear Search by sometimes even by 2 times. Please help me. Here is my code.</p> <p>Binary Search Code: </p> <pre><code>def binary_search(array, target, n=0): l = len(array)-1 i = l//2 try: ai = array[i] except: return False if ai == target: n += i return (True, n) elif target &gt;= ai: array = array[i+1:l+1] n += i + 1 return binary_search(array, target, n) elif target &lt;= ai: array = array[0: i] return binary_search(array, target, n) </code></pre> <p>Linear Search Code</p> <pre><code>def linear_search(array, target): for i, num in enumerate(array): if num == target: return True, i return False </code></pre> <p>Test Case Code:</p> <pre><code>import random import time n = 10000000 num = sorted([random.randint(0, n) for x in range(n)]) start = time.time() print(linear_search(num, 1000000)) print(f'Linear Search: {time.time() - start}') start_new = time.time() print(binary_search(num, 1000000)) print(f'Binary Search: {time.time() - start_new}') </code></pre>
<p>As @khelwood said, your code will be much faster with no slicing.</p> <pre><code>def binary_search_no_slice(array, target, low, high): if low &gt; high: return False mid = (low + high) // 2 if array[mid] == target: return True elif array[mid] &gt; target: return binary_search_no_slice(array, target, low, mid - 1) else: return binary_search_no_slice(array, target, mid + 1, high) </code></pre> <p>Added below to your test code.</p> <pre><code>start_new2 = time.time() print(binary_search_no_slice(num, 1000000, 0, len(num) - 1)) print(f'Binary Search no slice: {time.time() - start_new2}') </code></pre> <p>Here is the result on my machine(macOS Catalina, 2.8GHz Corei7, 8GB RAM)</p> <pre><code>False Linear Search: 2.172485113143921 False Binary Search: 0.56640625 False Binary Search no slice: 2.8133392333984375e-05 </code></pre>
python|python-3.x|big-o|binary-search|linear-search
1
1,909,583
62,111,276
My text in my clock python is not aligning properly
<p>My text in my turtle module is not aligning properly, it is aligned up and to the left. I want it to align exactly where the turtle is. Can anyone help? I tried setting the xcor and ycor of the turtle up and to the left by 5 units and that did not work. Any help would be greatly appreciated.</p> <p>Code:</p> <pre><code>import time from datetime import datetime,date import turtle t = turtle.Pen() while True: turtle.tracer(0, 0) hour_hand = float(datetime.today().hour) minute_hand = float(datetime.today().minute) second_hand = float(datetime.today().second) # Draw circle t.hideturtle() t.circle(150) t.left(90) t.up() t.forward(150) t.down() # Draw hands t.right(float(float(minute_hand) * 6)) t.forward(100) t.backward(100) t.left(float(float(minute_hand) * 6)) t.right(int(float(hour_hand) * 30 + float(minute_hand) / 60 * 30)) t.forward(50) t.backward(50) t.left(int(float(hour_hand) * 30 + float(minute_hand) / 60 * 30)) t.right(second_hand * 6) t.forward(125) t.backward(125) t.left(second_hand * 6) # Draw ticks for x in range(0, 12): t.up() t.forward(130) t.down() t.forward(20) t.backward(20) t.up() t.backward(130) t.down() t.right(30) for y in range(0, 60): t.up() t.forward(140) t.down() t.forward(10) t.backward(10) t.up() t.backward(140) t.down() t.right(6) t.up() # Draw numbers t.right(32.5) for z in range(1, 12): t.forward(130) t.sety(t.ycor() - 5) t.setx(t.xcor() - 5) t.write(z, align = 'center', font = ('Times New Roman', 16)) t.sety(t.ycor() + 5) t.setx(t.xcor() + 5) t.backward(130) t.right(30) t.forward(130) t.write(12, align = 'center', font = ('Times New Roman', 16)) turtle.update() t.hideturtle() time.sleep(0.85) t.reset() </code></pre> <p>I don't really want to use tkinter, it is too complicated.</p>
<p>A simpler, though potentially less accurate, way to do this completely within turtle:</p> <pre><code>FONT_SIZE = 16 FONT = ('Times New Roman', FONT_SIZE) t.color('red') t.dot(2) # show target of where we want to center text, for debugging t.color('black') t.sety(t.ycor() - FONT_SIZE/2) t.write(12, align='center', font=FONT) </code></pre> <p><a href="https://i.stack.imgur.com/XVhve.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XVhve.png" alt="enter image description here"></a></p> <p>Now let's address your program as a whole. The primary issues I see is that it flickers and is more complicated than necessary. The first thing to do is to switch turtle into <em>Logo</em> mode, which makes positive angles clockwise and makes 0 degrees at the top (not unlike a clock!).</p> <p>Then we split the dial drawing onto it's own turtle to be drawn <em>once</em> an we put the hands on their own turtle to be erased and redraw over and over. We all toss the <code>while True:</code> and <code>sleep()</code>, which have no place in an event-driven world like turtle, and use a turtle timer event instead:</p> <pre><code>from datetime import datetime from turtle import Screen, Turtle OUTER_RADIUS = 150 LARGE_TICK = 20 SMALL_TICK = 10 FONT_SIZE = 16 FONT = ('Times New Roman', FONT_SIZE) def draw_dial(): dial = Turtle() dial.hideturtle() dial.dot() dial.up() dial.forward(OUTER_RADIUS) dial.right(90) dial.down() dial.circle(-OUTER_RADIUS) dial.up() dial.left(90) dial.backward(OUTER_RADIUS) for mark in range(60): distance = LARGE_TICK if mark % 5 == 0 else SMALL_TICK dial.forward(OUTER_RADIUS) dial.down() dial.backward(distance) dial.up() dial.backward(OUTER_RADIUS - distance) dial.right(6) dial.sety(-FONT_SIZE/2) dial.setheading(30) # starting at 1 o'clock for z in range(1, 13): dial.forward(OUTER_RADIUS - (LARGE_TICK + FONT_SIZE/2)) dial.write(z, align='center', font=FONT) dial.backward(OUTER_RADIUS - (LARGE_TICK + FONT_SIZE/2)) dial.right(30) def tick(): hour_hand = datetime.today().hour minute_hand = datetime.today().minute second_hand = datetime.today().second hands.reset() hands.hideturtle() # redo as undone by reset() hands.right(hour_hand * 30 + minute_hand / 60 * 30) hands.forward(1/3 * OUTER_RADIUS) hands.backward(1/3 * OUTER_RADIUS) hands.left(hour_hand * 30 + minute_hand / 60 * 30) hands.right(minute_hand * 6) hands.forward(2/3 * OUTER_RADIUS) hands.backward(2/3 * OUTER_RADIUS) hands.left(minute_hand * 6) hands.right(second_hand * 6) hands.forward(OUTER_RADIUS - (LARGE_TICK + FONT_SIZE)) hands.backward(OUTER_RADIUS - (LARGE_TICK + FONT_SIZE)) hands.left(second_hand * 6) screen.update() screen.ontimer(tick, 1000) screen = Screen() screen.mode('logo') # make 0 degrees straight up, positive angles clockwise (like a clock!) screen.tracer(False) draw_dial() hands = Turtle() tick() screen.mainloop() </code></pre> <p><a href="https://i.stack.imgur.com/Gnvwm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gnvwm.png" alt="enter image description here"></a></p>
python|text|graphics|turtle-graphics|python-turtle
3
1,909,584
35,713,357
Applying functions to DataFrame columns in plots
<p>I'd like to apply functions to columns of a DataFrame when plotting them. </p> <p>I understand that the standard way to plot when using Pandas is the .plot method. </p> <p>How can I do math operations within this method, say for example multiply two columns in the plot? </p> <p>Thanks!</p>
<p>Series actually have a plot method as well, so it should work to apply</p> <pre><code>(df['col1'] * df['col2']).plot() </code></pre> <p>Otherwise, if you need to do this more than once it would be the usual thing to make a new column in your dataframe:</p> <pre><code>df['newcol'] = df['col1'] * df['col2'] </code></pre>
pandas
1
1,909,585
15,972,941
Remove PIL from raspberry Pi
<p>Hi, I am getting an error "IOError: decoder jpeg not available" when trying to use some functions from PIL. What I would like to do is remove PIL, install the jpeg decoder and then re-install PIL, but I'm lost as to how to uninstall PIL. Any help would be greatly appreciated.</p>
<p>You can do this to re-install PIL</p> <p>pip install -I PIL</p>
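<p>If you specifically want to remove the existing installation first, something along these lines should work; the exact package names can vary between Raspbian releases, so treat this as a sketch:</p> <pre><code>pip uninstall PIL                    # if PIL was installed with pip
sudo apt-get remove python-imaging   # if it came from the Raspbian repositories
sudo apt-get install libjpeg-dev     # jpeg decoder, then re-install PIL/Pillow
</code></pre>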
jpeg|python-imaging-library|uninstallation|raspberry-pi
0
1,909,586
59,726,138
Can method operating on array of class object use array methods?
<p>I'm new here, and new to Python. I had some C/C++ in college. I'm doing a course from Udemy and I'm wondering if there is a better approach to the issue of finding an element of an array of class objects based on one value. The course task was to find "the oldest cat". The solution there just avoids lists/arrays, but I want to know how to operate on arrays of objects and whether there is a better option than my static method getoldest, because to me it seems like I'm trying to "cheat" Python.</p> <pre><code> class Cat: def getoldest(Cat=[]): age_table=[] for one in Cat: age_table.append(one.age) return Cat[age_table.index(max(age_table))] def __init__(self, name, age): self.name = name self.age = age # 1 Instantiate the Cat object with few cats kotki3=[] kotki3.append(Cat("zimka", 5)) kotki3.append(Cat("korek", 9)) kotki3.append(Cat("oczko", 10)) kotki3.append(Cat("kotek", 1)) kotki3.append(Cat("edward", 4)) # 2 Create a function that finds the oldest cat oldest = Cat.getoldest(kotki3) # 3 Print out: "The oldest cat is x years old.". x will be the oldest cat age by using the function in #2 print(f'The oldest cat is {oldest.name} and it\'s {oldest.age} years old') </code></pre> <p>Thanks a lot. </p>
<p>I think this example could help you see a better way of doing that </p> <pre class="lang-py prettyprint-override"><code>class Cat: def __init__(self, name, age): self.name = name self.age = age def get_details(self): return self.name, self.age cats = [Cat("zimka", 5), Cat("oczko", 10), Cat("kotek", 1), Cat("edward", 4) ] results = [] for cat in cats: (name, age) = cat.get_details() results.append((name,age)) print(sorted(results, key = lambda x: -x[1])) </code></pre>
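<p>For the specific task of finding the oldest cat, the built-in <code>max</code> with a <code>key</code> function is arguably the most direct option, reusing the <code>kotki3</code> list from the question:</p> <pre><code>oldest = max(kotki3, key=lambda cat: cat.age)
print(f'The oldest cat is {oldest.name} and it\'s {oldest.age} years old')
</code></pre>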
python|arrays|class|methods
0
1,909,587
49,216,840
Can someone answer why this Tkinter doesn't work?
<h1>I'M SO CONFUSED</h1> <p><strong>Keep in mind that I'm a beginner at programming/python so if my code is unorganized or badly worded, ignore it, I'm getting better lol</strong></p> <p>I'm just playing with tkinter and I'm trying to get a login screen that has a checkbox that toggles the visibility of the password. I just don't understand anymore. The &quot;show&quot; argument won't change based on the variable it was assigned and I don't know why.</p> <pre><code>showPassword = IntVar() show = None def apply(): print(showPassword.get()) sspass = showPassword.get() print(type(sspass)) if sspass == 1: show = None elif sspass == 0: show = &quot;*&quot; spB = Checkbutton(root, text=&quot;Toggle Show Password&quot;, variable=showPassword).grid(row=10, column=1) applyButton = Button(root, text=&quot;Apply&quot;, command=apply).grid(column=1, row=5) Password = entry(root, show=show) </code></pre>
<p>I have put together a snippet of code based on yours (not perfect, but...) that at least works, so you can make progress. Your code is incomplete, I suppose, and it also has some errors. You have to configure the show parameter in the widget. Changing your show var won't do anything to the widget. You'll have to use the form <code>widget['show'] = somevalue</code>. Or the <code>.configure</code> widget method. For both you'll need a widget reference. If you grid a widget in the same line you create it, <code>grid</code> will return nothing so you lose it. Break that into two steps and keep the reference for the widget at creation (first step). <code>entry</code> is actually called <code>Entry</code>. These were the most prominent errors I saw.</p> <pre><code>from tkinter import Button, Checkbutton, Entry, Tk, IntVar root = Tk() showPassword = IntVar() show = None def apply(): print(showPassword.get()) sspass = showPassword.get() print(type(sspass)) if sspass == 1: Password['show'] = "" elif sspass == 0: Password['show'] = "*" Password.update() spB = Checkbutton(root, text="Show Password", variable=showPassword).grid(row=10, column=1) applyButton = Button(root, text="Apply", command=apply).grid(column=1, row=5) Password = Entry(root, show=show) Password.grid(row=3, column=1) root.mainloop() </code></pre>
python|checkbox|tkinter|passwords|python-3.6
0
1,909,588
25,007,042
Generating nested lists from XML doc
<p>Working in python, my goal is to parse through an XML doc I made and create a nested list of lists in order to access them later and parse the feeds. The XML doc resembles the following snippet:</p> <pre><code>&lt;?xml version="1.0'&gt; &lt;sources&gt; &lt;!--Source List by Institution--&gt; &lt;sourceList source="cbc"&gt; &lt;f&gt;http://rss.cbc.ca/lineup/topstories.xml&lt;/f&gt; &lt;/sourceList&gt; &lt;sourceList source="bbc"&gt; &lt;f&gt;http://feeds.bbci.co.uk/news/rss.xml&lt;/f&gt; &lt;f&gt;http://feeds.bbci.co.uk/news/world/rss.xml&lt;/f&gt; &lt;f&gt;http://feeds.bbci.co.uk/news/uk/rss.xml&lt;/f&gt; &lt;/sourceList&gt; &lt;sourceList source="reuters"&gt; &lt;f&gt;http://feeds.reuters.com/reuters/topNews&lt;/f&gt; &lt;f&gt;http://feeds.reuters.com/news/artsculture&lt;/f&gt; &lt;/sourceList&gt; &lt;/sources&gt; </code></pre> <p>I would like to have something like nested lists where the inner most list would be the content between the <code>&lt;f&gt;&lt;/f&gt;</code> tags and the list above that one would be created with the names of the sources ex. <code>source="reuters"</code> would be reuters. Retrieving the info from the XML doc isn't a problem and I'm doing it with <code>elementtree</code> with loops retrieving with <code>node.get('source')</code> etc. The problem is I'm having trouble generating the lists with the desired names and different lengths required from the different sources. I have tried appending but am unsure how to append to list with the names retrieved. Would a dictionary be better? What would be the best practice in this situation? And how might I make this work? If any more info is required just post a comment and I'll be sure to add it.</p>
<p>From your description, a dictionary with keys according to the source name and values according to the feed lists might do the trick.</p> <p>Here is one way to construct such a beast:</p> <pre><code>from lxml import etree from pprint import pprint news_sources = { source.attrib['source'] : [feed.text for feed in source.xpath('./f')] for source in etree.parse('x.xml').xpath('/sources/sourceList')} pprint(news_sources) </code></pre> <p>Another sample, without <code>lxml</code> or <code>xpath</code>:</p> <pre><code>import xml.etree.ElementTree as ET from pprint import pprint news_sources = { source.attrib['source'] : [feed.text for feed in source] for source in ET.parse('x.xml').getroot()} pprint(news_sources) </code></pre> <p>Finally, if you are allergic to list comprehensions:</p> <pre><code>import xml.etree.ElementTree as ET from pprint import pprint xml = ET.parse('x.xml') root = xml.getroot() news_sources = {} for sourceList in root: sourceListName = sourceList.attrib['source'] news_sources[sourceListName] = [] for feed in sourceList: feedName = feed.text news_sources[sourceListName].append(feedName) pprint(news_sources) </code></pre>
python|xml|list|nested
0
1,909,589
60,129,530
How to use minAreaRect() function without getting error?
<p>I have a problem that has ruined my project:</p> <pre><code>def extract_candidate_rectangles(image, contours): rectangles = [] for i, cnt in enumerate(contours): min_rect = cv.minAreaRect(cnt) if validate_contour(min_rect): x, y, w, h = cv.boundingRect(cnt) plate_img = image[y:y+h, x:x+w] if is_max_white(plate_img): copy = image.copy() cv.rectangle(copy, (x, y), (x + w, y + h), (0, 255, 0), 2) rectangles.append(plate_img) cv.imshow("candidates", copy) cv.waitKey(0) return rectangles </code></pre> <p>and the error is:</p> <pre><code>Using TensorFlow backend. Traceback (most recent call last): File "/home/muhammad/Coding/Python/PlateDetectionCodes/PlateDetection/main.py", line 43, in &lt;module&gt; plates = extract_candidate_rectangles(resized.copy(), contours) File "/home/muhammad/Coding/Python/PlateDetectionCodes/PlateDetection/extractor.py", line 65, in extract_candidate_rectangles min_rect = cv.minAreaRect(cnt) cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/convhull.cpp:137: error: (-215:Assertion failed) total &gt;= 0 &amp;&amp; (depth == CV_32F || depth == CV_32S) in function 'convexHull' </code></pre> <p>I'll be glad if anyone can help!</p>
<p>The stack trace shows you the line in which the error occurred:</p> <pre><code>min_rect = cv.minAreaRect(cnt) </code></pre> <p>Now, you want to take a look at this line of the error:</p> <pre><code>cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/convhull.cpp:137: error: (-215:Assertion failed) total &gt;= 0 &amp;&amp; (depth == CV_32F || depth == CV_32S) in function 'convexHull' </code></pre> <p>especially this part:</p> <pre><code>Assertion failed) total &gt;= 0 &amp;&amp; (depth == CV_32F || depth == CV_32S) in function 'convexHull' </code></pre> <p>I assume that <em>cv.minAreaRect</em> internally calls <em>convexHull</em>. OpenCV uses the <em>Assert</em> function to make sure that the parameters passed into a function are in the correct format. Here, either the <em>cnt</em> is empty (total &gt;= 0 is not satisfied) or the format of the points inside the contour array is neither CV_32F (32 bit float) nor CV_32S (32 bit signed integer).</p>
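<p>A hedged sketch of how the call could be guarded inside the loop from the question, converting the contour to a supported point type and skipping empty ones:</p> <pre><code>import numpy as np

cnt = np.asarray(cnt, dtype=np.int32)  # minAreaRect/convexHull expect CV_32S or CV_32F points
if cnt.size == 0:
    continue  # nothing to fit a rectangle around
min_rect = cv.minAreaRect(cnt)
</code></pre>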
python|opencv
2
1,909,590
3,141,534
Another Python Scope Question - losing information going into if statement
<p>Not sure if I'm missing something obvious, but here's what is happening: I have a python 2.4.3 script that contains several RegEx objects. Below one of the regex objects is searching for all matches in a string (tMatchList). Even if tMatchList is not null, it is printing an empty set after the 'if p:' step. This behavior occurs even if it prints correctly before the 'if p:' step. I thought it may have been a scope issue, but everything is declared &amp; contained within one function. I'm not quite seeing how the 'if p:' step is not able to see tMatchList. I am able to print tMatchList after the if statement as well.</p> <pre><code>tMatchList = [] for lines in r: linecount += 1 tMatchList = self._testReplacePDFTag.findall(lines) p = self._pdfPathRegex.search(lines) print tMatchList #tMatchList is printing just fine here if it has any elements if p: print tMatchList #now it's empty, #even if it printed elements in prior statement lines = ..... else: &lt;something else gets done&gt; print tMatchList #now it prints again </code></pre> <p>Including entire function definition for those who would like to see it....</p> <pre><code>def FindFilesAndModifyPDFTag(self, inRootDirArg, inRollBackBool): for root, dirs, files in os.walk(inRootDirArg): for d in dirs: if d.startswith('.'):#excludes directories that start with '.' continue for file in files: if os.path.splitext(file)[1] == self._fileExt: #Backup original. just do it shutil.copy2(os.path.join(root, file), os.path.join(root, file)+"~") r = open(os.path.join(root, file)+"~", "r") f = open(os.path.join(root, file), "w") linecount = 0 tMatchList = [] for lines in r: linecount += 1 tMatchList = self._testReplacePDFTag.findall(lines) t = self._testReplacePDFTag.search(lines) #find pdf path(s) in line pMatchList = self._pdfPathRegex.findall(lines) p = self._pdfPathRegex.search(lines) #fix the pdf tracking code print id(tMatchList), "BEFORE" if p: print id(tMatchList), "INSIDE" lines = self.processPDFTagLine(pMatchList, lines, linecount, file, tMatchList) else: lines = self.processCheckMetaTag(lines, linecount, file) #print id(tMatchList), "INSIDE ELSE" print id(tMatchList), "AFTER" f.writelines(lines) f.close() r.close() os.remove(os.path.join(root, file)+"~") enter code here </code></pre>
<p>The <code>findall</code> may not create a list object. If it is some kind of generator function, then it has a value which is "consumed" by traversing the results once.</p> <p>After consuming the results yielded by this function, there are no more results.</p> <pre><code>tMatchList = self._testReplacePDFTag.findall(lines) p = self._pdfPathRegex.search(lines) print tMatchList #tMatchList is printing just fine here if it has any elements if p: print tMatchList #now it's empty, </code></pre> <p>Try this.</p> <pre><code>tMatchList = list( self._testReplacePDFTag.findall(lines) ) </code></pre>
python|scope
0
1,909,591
2,598,493
Python + GPG (edit-key change password)
<p>I'm looking for a gpg Python library that lets me change the password for my key. I looked at python-gnupg, but it doesn't seem to have that function :( Can anyone help me, please? If possible, I would also like some examples from the docs.</p>
<p>The Python gnupg module already has a method (<code>GPG._handle_io</code>) to invoke the gpg command, pass input to it and parse the output. It may solve the portability issue.</p> <pre><code>gpg = gnupg.GPG() result = gnupg.Verify(gpg) gpg._handle_io(['--command-fd', '0', '--edit-key', keyname], StringIO(u'\n'.join(commands)), result) </code></pre> <p><code>commands</code> is your command sequence to execute in edit-key mode. Note that some commands behave a little differently when issued in <code>--no-tty</code> mode, e.g. the <code>save</code> command asks for a <code>y</code> confirmation. <code>result</code> is an arbitrary gnupg result class, needed only to capture the output. The machine-readable output ends up in <code>result.stderr</code>.</p>
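<p>As a rough, untested sketch of how this could be used to change a passphrase (the key name and command list are placeholders, and depending on your gpg version the actual passphrase prompts may still be handled by pinentry/gpg-agent rather than the command fd):</p> <pre><code>import gnupg
from StringIO import StringIO  # use io.StringIO on Python 3

gpg = gnupg.GPG()
result = gnupg.Verify(gpg)  # any result object, only used to capture output

# 'passwd' starts the passphrase change, 'save' writes the key and leaves edit mode
commands = [u'passwd', u'save']

gpg._handle_io(
    ['--command-fd', '0', '--edit-key', 'someone@example.com'],
    StringIO(u'\n'.join(commands) + u'\n'),
    result)

print(result.stderr)  # status output from gpg
</code></pre>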
python|gnupg
0
1,909,592
2,626,636
Pickling a class definition
<p>Is there a way to pickle a class definition?</p> <p>What I'd like to do is pickle the definition (which may created dynamically), and then send it over a TCP connection so that an instance can be created on the other end.</p> <p>I understand that there may be dependencies, like modules and global variables that the class relies on. I'd like to bundle these in the pickling process as well, but I'm not concerned about automatically detecting the dependencies because it's okay if the onus is on the user to specify them.</p>
<p>If you use <code>dill</code>, it enables you to treat <code>__main__</code> as if it were a python module (for the most part). Hence, you can serialize interactively defined classes, and the like. <code>dill</code> also (by default) can transport the class definition as part of the pickle.</p> <pre><code>&gt;&gt;&gt; class MyTest(object): ... def foo(self, x): ... return self.x * x ... x = 4 ... &gt;&gt;&gt; f = MyTest() &gt;&gt;&gt; import dill &gt;&gt;&gt; &gt;&gt;&gt; with open('test.pkl', 'wb') as s: ... dill.dump(f, s) ... &gt;&gt;&gt; </code></pre> <p>Then shut down the interpreter, and send the file <code>test.pkl</code> over TCP. On your remote machine, now you can get the class instance.</p> <pre><code>Python 2.7.9 (default, Dec 11 2014, 01:21:43) [GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import dill &gt;&gt;&gt; with open('test.pkl', 'rb') as s: ... f = dill.load(s) ... &gt;&gt;&gt; f &lt;__main__.MyTest object at 0x1069348d0&gt; &gt;&gt;&gt; f.x 4 &gt;&gt;&gt; f.foo(2) 8 &gt;&gt;&gt; </code></pre> <p>But how to get the class definition? So this is not exactly what you wanted. The following is, however.</p> <pre><code>&gt;&gt;&gt; class MyTest2(object): ... def bar(self, x): ... return x*x + self.x ... x = 1 ... &gt;&gt;&gt; import dill &gt;&gt;&gt; with open('test2.pkl', 'wb') as s: ... dill.dump(MyTest2, s) ... &gt;&gt;&gt; </code></pre> <p>Then after sending the file… you can get the class definition.</p> <pre><code>Python 2.7.9 (default, Dec 11 2014, 01:21:43) [GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import dill &gt;&gt;&gt; with open('test2.pkl', 'rb') as s: ... MyTest2 = dill.load(s) ... &gt;&gt;&gt; print dill.source.getsource(MyTest2) class MyTest2(object): def bar(self, x): return x*x + self.x x = 1 &gt;&gt;&gt; f = MyTest2() &gt;&gt;&gt; f.x 1 &gt;&gt;&gt; f.bar(4) 17 </code></pre> <p>So, within <code>dill</code>, there's <code>dill.source</code>, and that has methods that can detect dependencies of functions and classes, and take them along with the pickle (for the most part).</p> <pre><code>&gt;&gt;&gt; def foo(x): ... return x*x ... &gt;&gt;&gt; class Bar(object): ... def zap(self, x): ... return foo(x) * self.x ... x = 3 ... &gt;&gt;&gt; print dill.source.importable(Bar.zap, source=True) def foo(x): return x*x def zap(self, x): return foo(x) * self.x </code></pre> <p>So that's not "perfect" (or maybe not what's expected)… but it does serialize the code for a dynamically built method and it's dependencies. You just don't get the rest of the class -- but the rest of the class is not needed in this case.</p> <p>If you wanted to get everything, you could just pickle the entire session.</p> <pre><code>&gt;&gt;&gt; import dill &gt;&gt;&gt; def foo(x): ... return x*x ... &gt;&gt;&gt; class Blah(object): ... def bar(self, x): ... self.x = (lambda x:foo(x)+self.x)(x) ... x = 2 ... &gt;&gt;&gt; b = Blah() &gt;&gt;&gt; b.x 2 &gt;&gt;&gt; b.bar(3) &gt;&gt;&gt; b.x 11 &gt;&gt;&gt; dill.dump_session('foo.pkl') &gt;&gt;&gt; </code></pre> <p>Then on the remote machine...</p> <pre><code>Python 2.7.9 (default, Dec 11 2014, 01:21:43) [GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin Type "help", "copyright", "credits" or "license" for more information. 
&gt;&gt;&gt; import dill &gt;&gt;&gt; dill.load_session('foo.pkl') &gt;&gt;&gt; b.x 11 &gt;&gt;&gt; b.bar(2) &gt;&gt;&gt; b.x 15 &gt;&gt;&gt; foo(3) 9 </code></pre> <p>Lastly, if you want the transport to be "done" for you transparently, you could use <code>pathos.pp</code> or <code>ppft</code>, which provide the ability to ship objects to a second python server (on a remote machine) or python process. They use <code>dill</code> under the hood, and just pass the code across the wire.</p> <pre><code>&gt;&gt;&gt; class More(object): ... def squared(self, x): ... return x*x ... &gt;&gt;&gt; import pathos &gt;&gt;&gt; &gt;&gt;&gt; p = pathos.pp.ParallelPythonPool(servers=('localhost,1234',)) &gt;&gt;&gt; &gt;&gt;&gt; m = More() &gt;&gt;&gt; p.map(m.squared, range(5)) [0, 1, 4, 9, 16] </code></pre> <p>The <code>servers</code> argument is optional, and here is just connecting to the local machine on port <code>1234</code>… but if you use the remote machine name and port instead (or as well), you'll fire off to the remote machine -- "effortlessly".</p> <p>Get <code>dill</code>, <code>pathos</code>, and <code>ppft</code> here: <a href="https://github.com/uqfoundation" rel="noreferrer">https://github.com/uqfoundation</a></p>
python|pickle
8
1,909,593
5,724,407
Tail file into message queue
<p>I launch a process on a linux machine via python's subprocess (specifically on AWS EC2) which generates a number of files. I need to "tail -f" these files and send each of the resulting jsonified outputs to their respective AWS SQS queues. How would I go about such a task?</p> <p><strong>Edit</strong></p> <p>As suggested by this answer, <a href="https://stackoverflow.com/questions/636561/how-can-i-run-an-external-command-asynchronously-from-python/636719#636719" title="asyncproc">asyncproc</a>, and <a href="http://www.python.org/dev/peps/pep-3145/" rel="nofollow noreferrer">PEP3145</a>, I can do this with the following:</p> <pre><code>from asyncproc import Process import Queue import os import time # Substitute AWS SQS for Queue sta_queue = Queue.Queue() msg_queue = Queue.Queue() running_procs = {'status':(Process(['/usr/bin/tail', '--retry', '-f','test.sta']),sta_queue),'message':(Process(['/usr/bin/tail', '--retry', '-f', 'test.msg' ]),msg_queue)} def handle_proc(p,q): latest = p.read() if latest: # If nothing new, latest will be an empty string q.put(latest) retcode = p.wait(flags=os.WNOHANG) return retcode while len(running_procs): proc_names = running_procs.keys() for proc_name in proc_names: proc, q = running_procs[proc_name] retcode = handle_proc(proc, q) if retcode is not None: # Process finished. del running_procs[proc_name] time.sleep(1.0) print("Status queue") while not sta_queue.empty(): print(sta_queue.get()) print("Message queue") while not msg_queue.empty(): print(msg_queue.get()) </code></pre> <p>This should be sufficient, I think, unless others can provide a better answer.</p> <p><strong>More Edits</strong></p> <p>I'm overthinking the problem. Although the above works nicely, I think the simplest solution is: -check for the existence of the files -if the files exist, copy them to a bucket on AWS S3 and send a message through AWS SQS that files have been copied. Repeat every 60 seconds -consumer app polls SQS and eventually receives message that files have been copied -consumer app downloads files from S3 and replaces the previous contents with the latest contents. Repeat until job completes</p> <p>Although the whole issue of asynchronous IO in subprocess is still an issue. </p>
<p>You can use the <a href="http://docs.python.org/library/subprocess.html#subprocess.Popen" rel="nofollow noreferrer">subprocess.Popen</a> class to run <em>tail</em> and read its output.</p> <pre><code>import subprocess try: process = subprocess.Popen(['tail', '-f', filename], stdout=subprocess.PIPE) except (OSError, ValueError): pass # TODO: handle errors output = process.stdout.read() </code></pre> <p>The <a href="http://docs.python.org/library/subprocess.html#convenience-functions" rel="nofollow noreferrer">subprocess.check_output</a> function provides this functionality in a one-liner (it captures stdout itself, so no <code>stdout</code> argument is passed). It is new in Python version 2.7.</p> <pre><code>try: output = subprocess.check_output(['tail', '-f', filename]) except subprocess.CalledProcessError: pass # TODO: handle errors </code></pre> <p>For non-blocking I/O, see <a href="https://stackoverflow.com/questions/375427/non-blocking-read-on-a-stream-in-python">this question</a>.</p>
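<p>Since <code>tail -f</code> never terminates on its own, a single <code>read()</code> or <code>check_output()</code> call will block until the process ends. A rough sketch that reads the output incrementally and pushes each line onto a queue (a plain <code>Queue.Queue</code> stands in for the AWS SQS queue here) might look like this:</p> <pre><code>import subprocess
import Queue  # 'queue' on Python 3

filename = 'test.msg'  # placeholder
q = Queue.Queue()      # stand-in for the AWS SQS queue

process = subprocess.Popen(['tail', '--retry', '-f', filename],
                           stdout=subprocess.PIPE)

for line in iter(process.stdout.readline, ''):
    # here you would jsonify the line and send it to SQS instead
    q.put(line.rstrip('\n'))
</code></pre>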
python|asynchronous|subprocess
0
1,909,594
5,945,427
Algorithm in Python To Solve This Problem
<p>I have a list of lists such as: <code>[[foo,1],[baz,1],[foo,0],[bar,3],[foo,1],[bar,2],[baz,2]]</code>. I want to get all the distinct first items of the inner lists and sum up the numbers associated with each of them, i.e. the result should be like: <code>[[foo,2],[bar,5],[baz,3]]</code>. How can I do this?</p> <p>Thanks in advance.</p>
<p>Create a dictionary (here <code>data</code> is your list of lists; avoid calling it <code>list</code>, which shadows the built-in):</p> <pre><code>D = {} for item in data: left, right = item D[left] = D.get(left, 0) + right </code></pre> <p>There may be faster ways to do this though.</p> <p>As suggested in the comments by Joce, Gnibbler and Blair you could do this to get a list again.</p> <pre><code># To get a list of lists pairs = map(list, D.items()) # To get a list of tuples pairs = D.items() </code></pre>
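<p>For example, the same aggregation with <code>collections.defaultdict</code> avoids the <code>get</code> call entirely:</p> <pre><code>from collections import defaultdict

data = [['foo', 1], ['baz', 1], ['foo', 0], ['bar', 3],
        ['foo', 1], ['bar', 2], ['baz', 2]]

totals = defaultdict(int)
for key, value in data:
    totals[key] += value

result = [[k, v] for k, v in totals.items()]
print(result)  # e.g. [['foo', 2], ['baz', 3], ['bar', 5]] (order may vary)
</code></pre>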
python|algorithm
6
1,909,595
5,987,011
Should I return an empty dict instead of None?
<p>I have a method that currently returns <code>None</code> or a <code>dict</code>.</p> <pre><code>result,error = o.apply('grammar') </code></pre> <p>The caller currently has to check for the existence of two keys to decide what kind of object was returned.</p> <pre><code>if 'imperial' in result: # yay elif 'west' in result: # yahoo else: # something wrong? </code></pre> <p>Because result can be <code>None</code>, I'm thinking of returning an empty dict instead, so the caller does not need to check for that. What do you think ?</p> <p>For comparison, in the <code>re</code> module, the result of calling <code>match</code> can result in <code>None</code>.</p> <pre><code>p = re.compile('\w+') m = p.match( 'whatever' ) </code></pre> <p>But in this case, <code>m</code> is an object instance. In my case, I am returning a dict which should either be empty or have some entries.</p>
<p>Yes, I think returning an empty dict (or, where applicable, an empty list) is preferable to returning None, as this avoids an additional check in the client code.</p> <p>EDIT: Adding some code sample to elaborate:</p> <pre><code>def result_none(choice): mydict = {} if choice == 'a': mydict['x'] = 100 mydict['y'] = 1000 return mydict else: return None def result_dict(choice): mydict = {} if choice == 'a': mydict['x'] = 100 mydict['y'] = 1000 return mydict test_dict = result_dict('b') if test_dict.get('x'): print 'Got x' else: print 'No x' test_none = result_none('b') if test_none.get('x'): print 'Got x' else: print 'No x' </code></pre> <p>In the above code the check <code>test_none.get('x')</code> throws an <strong>AttributeError</strong> because <code>result_none</code> can return None. To avoid that I would have to rewrite that line as <code>if test_none is not None and test_none.get('x')</code>, which is not needed at all if the method returns an empty dict. As the example shows, the check <code>test_dict.get('x')</code> works fine because <code>result_dict</code> returns an empty dict.</p>
python
21
1,909,596
68,005,013
Slight difference in objective function of linear programming makes program extremely slow
<p>I am using Google's OR-Tools SCIP (Solving Constraint Integer Programs) solver to solve a mixed integer programming problem using Python. The problem is a variant of the standard scheduling problem, where there are constraints requiring that each worker works at most once per day and that every shift is covered by only one worker. The problem is modeled as follows:</p> <p><a href="https://i.stack.imgur.com/yiWsT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yiWsT.png" alt="Mixed Integer Programming Model" /></a></p> <p>Where <em>n</em> represents the worker, <em>d</em> the day and <em>i</em> the specific shift in a given day. The problem comes when I change the objective function that I want to minimize from</p> <p><a href="https://i.stack.imgur.com/YcMiQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YcMiQ.png" alt="Fast Objective Function" /></a></p> <p>To:</p> <p><a href="https://i.stack.imgur.com/9yZrr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9yZrr.png" alt="Slow Objective Function" /></a></p> <p>In the first case an optimal solution is found within 5 seconds. In the second case, after 20 minutes of running, the optimal solution was still not reached. Any ideas as to why this happens? How can I change the objective function without impacting performance this much?</p> <p>Here is a sample of the values taken by the variables <em>tier</em> and <em>acceptance</em> used in the objective function. <a href="https://i.stack.imgur.com/6VaMF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6VaMF.png" alt="Sample data for tier and acceptance of the objective function" /></a></p>
<p>You should ask the SCIP team.</p> <p>Have you tried using the SAT backend with 8 threads?</p>
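<p>If you want to try that suggestion, a minimal sketch of moving such a model to the CP-SAT solver with 8 parallel workers (the variables and constraints below are placeholders, not the model from the question; note that CP-SAT only accepts integer coefficients) would be:</p> <pre><code>from ortools.sat.python import cp_model

model = cp_model.CpModel()

# placeholder binary decision variables, e.g. x[n, d, i] in the original model
x = model.NewBoolVar('x')
y = model.NewBoolVar('y')
model.Add(x + y &lt;= 1)            # stand-in constraint
model.Minimize(3 * x + 5 * y)    # integer objective coefficients only

solver = cp_model.CpSolver()
solver.parameters.num_search_workers = 8  # the "8 threads" suggestion
status = solver.Solve(model)

if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print(solver.ObjectiveValue())
</code></pre>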
python|optimization|or-tools|mixed-integer-programming|scip
1
1,909,597
30,412,956
Tkinter: Changing value of a Textbox after calculation to avoid duplicates
<pre><code>from tkinter import * class HHRG: def __init__(self, root): self.root = root self.RnReg = 50 self.RnResump = 80 self.RnCert = 80 self.RnDC = 70 self.RnSOC = 90 self.LvnReg = 40 self.LvnOut = 35 self.Hha = 25 self.Pt = 75 self.Ot = 75 self.St = 75 self.HHRGValue = IntVar() self.RnRegValue = IntVar() self.RnResumpValue = IntVar() self.RnCertValue = IntVar() self.RnDCValue = IntVar() self.RnSOCValue = IntVar() self.LvnRegValue = IntVar() self.LvnOutValue = IntVar() self.HhaValue = IntVar() self.PtValue = IntVar() self.OtValue = IntVar() self.StValue = IntVar() ###LABELS### self.HHRGLabel = Label(self.root, text="HHRG") self.RnRegLabel = Label(self.root, text="Regular Rn Visits") self.RnResumpLabel = Label(self.root, text="Rn Resumption Visits") self.RnCertLabel = Label(self.root, text="Rn recertification Visits") self.RnDCLabel = Label(self.root, text="Rn D/C Visits") self.RnSOCLabel = Label(self.root, text="Rn SOC Visits") self.LvnRegLabel = Label(self.root, text="Regular Lvn Visits") self.LvnOutLabel = Label(self.root, text="Lvn Outlier Visits") self.HhaLabel = Label(self.root, text="HHA visits") self.PtLabel = Label(self.root, text="Pt Visits") self.OtLabel = Label(self.root, text="Ot Visits") self.StLabel = Label(self.root, text="St Visits") self.TotalLabel = Label(self.root, text="Net Total") ###ENTRY BOXES### self.HHRGEntry = Entry(self.root, textvariable=self.HHRGValue) self.RnRegEntry = Entry(self.root, textvariable=self.RnRegValue) self.RnResumpEntry = Entry(self.root, textvariable=self.RnResumpValue) self.RnCertEntry = Entry(self.root, textvariable=self.RnCertValue) self.RnDCEntry = Entry(self.root, textvariable=self.RnDCValue) self.RnSOCEntry = Entry(self.root, textvariable=self.RnSOCValue) self.LvnRegEntry = Entry(self.root, textvariable=self.LvnRegValue) self.LvnOutEntry = Entry(self.root, textvariable=self.LvnOutValue) self.HhaEntry = Entry(self.root, textvariable=self.HhaValue) self.PtEntry = Entry(self.root, textvariable=self.PtValue) self.OtEntry = Entry(self.root, textvariable=self.OtValue) self.StEntry = Entry(self.root, textvariable=self.StValue) self.TotalEntry = Text(root, height=2, width=10) self.clearButton = Button(root, text="Clear") self.clearButton.bind("&lt;Button-1&gt;", self.clear) self.calculatebutton = Button(root, text="Calculate", width=10) self.calculatebutton.bind("&lt;Button-1&gt;", self.clear) self.calculatebutton.bind("&lt;Button-1&gt;", self.calculate) ####LABEL GRIDS### self.HHRGLabel.grid(row=0, column=0) self.RnRegLabel.grid(row=1, column=0) self.RnResumpLabel.grid(row=2, column=0) self.RnCertLabel.grid(row=3, column=0) self.RnDCLabel.grid(row=4, column=0) self.RnSOCLabel.grid(row=5, column=0) self.LvnRegLabel.grid(row=6, column=0) self.LvnOutLabel.grid(row=7, column=0) self.HhaLabel.grid(row=8, column=0) self.PtLabel.grid(row=9, column=0) self.OtLabel.grid(row=10, column=0) self.StLabel.grid(row=11, column=0) self.TotalLabel.grid(row=12, column=0) ###ENTRY GRIDS### self.HHRGEntry.grid(row=0, column=1) self.RnRegEntry.grid(row=1, column=1) self.RnResumpEntry.grid(row=2, column=1) self.RnCertEntry.grid(row=3, column=1) self.RnDCEntry.grid(row=4, column=1) self.RnSOCEntry.grid(row=5, column=1) self.LvnRegEntry.grid(row=6, column=1) self.LvnOutEntry.grid(row=7, column=1) self.HhaEntry.grid(row=8, column=1) self.PtEntry.grid(row=9, column=1) self.OtEntry.grid(row=10, column=1) self.StEntry.grid(row=11, column=1) self.TotalEntry.grid(row=12, column=1) self.calculatebutton.grid(columnspan=2, pady=10) self.clearButton.grid(row=13, column=1) def 
calculate(self, event): values = [(self.RnRegValue.get() * self.RnReg), (self.RnResumpValue.get() * self.RnResump), (self.RnCertValue.get() * self.RnCert), (self.RnDCValue.get() * self.RnDC), (self.RnSOCValue.get() * self.RnSOC), (self.LvnRegValue.get() * self.LvnReg), (self.LvnOutValue.get() * self.LvnOut), (self.HhaValue.get() * self.Hha), (self.PtValue.get() * self.Pt), (self.OtValue.get() * self.Ot), (self.StValue.get() * self.St)] self.total = 0 for i in values: self.total += i result = self.HHRGValue.get() - self.total self.TotalEntry.insert(END, result) def clear(self, event): self.TotalEntry.delete("1.0", END) root = Tk() a = HHRG(root) root.mainloop() </code></pre> <p>So i've got this modified calculator of mine and the problem with it is everytime you calculate. it returns outputs as desired but if you click it twice it'll duplicate</p> <p><img src="https://i.stack.imgur.com/MCDyM.png" alt="Duplicated Result"></p> <p>I tried binding the <code>self.calculatebutton</code> to my <code>clear()</code> method but it wouldn't prevent the duplication of the results</p> <p>my question is. How can we make it calculate the desired output but wipe the previous output at the same time to prevent duplicates? so if someone presses the calculate button multiple times it'll only output one total not multiple ones like the picture above</p>
<p>This code is where the problem lies: </p> <pre><code>self.calculatebutton = Button(root,text="Calculate",width=10) self.calculatebutton.bind("&lt;Button-1&gt;",self.clear) self.calculatebutton.bind("&lt;Button-1&gt;",self.calculate) </code></pre> <p>When you call <code>bind</code>, it will <em>replace</em> any previous binding of the same event to the same widget. So, the binding to <code>self.clear</code> goes away when you add the binding to <code>self.calculate</code>. While there are ways to bind multiple functions to an event, usually that is completely unnecessary and leads to difficult-to-maintain code. </p> <p>The simple solution is for your calculate function to call the clear function before adding a new result:</p> <pre><code>def calculate(self,event): ... result = self.HHRGValue.get() - self.total self.clear(event=None) self.TotalEntry.insert(END,result) </code></pre> <p>Note: if this is the only time you'll call clear, you can remove the <code>event</code> parameter from the function definition, and remove it from the call. </p> <p>On a related note: generally speaking you should <em>not</em> use <code>bind</code> on buttons. The button has built-in bindings that normally work better than your custom binding (they handle keyboard traversal and button highlighting, for example).</p> <p>The button widget has a <code>command</code> attribute which you normally use instead of a binding. In your case it would look like this:</p> <pre><code>self.calculatebutton = Button(..., command=self.calculate) </code></pre> <p>When you do that, your <code>calculate</code> method no longer needs the <code>event</code> parameter, so you'll need to remove it. If you want to use the <code>calculate</code> function both from a <code>command</code> and from a binding, you can make the event optional:</p> <pre><code>def calculate(self, event=None) </code></pre>
python|python-3.x|tkinter
2
1,909,598
63,801,848
Numpy: Conserving sum in average over two arrays of integers
<p>I have two arrays of positive integers A and B that each sum to 10:</p> <ul> <li>A = [1,4,5]</li> <li>B = [5,5,0]</li> </ul> <p>I want to write a code (<em>that will work for a general size of the array and the sum</em>) to calculate the array C who is <strong>also a array of positive integers</strong> that <strong>also sums to 10</strong> that is the <strong>closest to the element-wise average as possible</strong>:</p> <ul> <li>Pure average <code>C = (A + B) / 2</code>: C=[3,4.5,2.5]</li> <li>Round <code>C = np.ceil((A + B) / 2).astype(int)</code>: C=[3,5,3], (sum=11, incorrect!)</li> <li>Fix the sum <code>C = SOME CODE</code>: c=[3,4,3], (sum=10, correct!)</li> </ul> <p>Any value can be adjusted to make the sum correct, as long as all elements remain positive integers.</p> <p>What should <code>C = SOME CODE</code> be?</p> <p>Minimum reproducible example:</p> <pre><code>A = np.array([1,4,5]) B = np.array([5,5,0]) C = np.ceil((A + B) / 2).astype(int) print(np.sum(C)) 11 </code></pre> <p>This should give 10.</p>
<p>You can ceil/floor every other non-int element. This works for any shape/size and any sum value (in fact you do not need to know the sum at all; it is enough if <code>A</code> and <code>B</code> have the same sum):</p> <pre><code>C = (A + B) / 2 C_c = np.ceil(C) C_c[np.flatnonzero(C != C.astype(int))[::2]] -= 1 print(C_c.sum()) #10.0 print(C_c.astype(int)) #[3 4 3] </code></pre>
python|arrays|numpy
2
1,909,599
50,899,831
Camera Behavior In Pyglet
<p>I would like to know from you how I can make sure that the camera in pyglet (2D) always follows the player keeping it always in the middle of the screen. Also, I would like to know how I can make a linear zoom, with the mouse wheel, always holding the player in the middle of the screen. To be clear, if anyone knows Factorio, I would like the camera to behave the same way. Around I found only examples on how to do it by moving the mouse etc. Unfortunately, I have not found anything that interests me.</p> <p>This is the script I'm currently using:</p> <p>Main class (I do not report all the script, but the parts related to the camera):</p> <pre><code>def on_resize(self, width, height): self.camera.init_gl(width, height) def on_mouse_scroll(self, x, y, dx, dy): self.camera.scroll(dy) def _world(self): self.camera = camera(self) self.player = player(self, 0, 0) self.push_handlers(self.player.keyboard) </code></pre> <p>Camera script:</p> <pre><code>class camera(object): zoom_in_factor = 1.2 zoom_out_factor = 1 / zoom_in_factor def __init__(self, game): self.game = game self.left = 0 self.right = self.game.width self.bottom = 0 self.top = self.game.height self.zoom_level = 1 self.zoomed_width = self.game.width self.zoomed_height = self.game.height def init_gl(self, width, height): self.width = width self.height = height glViewport(0, 0, self.width, self.height) def draw(self): glPushMatrix() glOrtho(self.left, self.right, self.bottom, self.top, 1, -1) glTranslatef(-self.game.player.sprite.x + self.width / 2, -self.game.player.sprite.y + self.height / 2, 0) self.game.clear() if self.game.runGame: for sprite in self.game.mapDraw_3: self.game.mapDraw_3[sprite].draw() glPopMatrix() print(self.game.player.sprite.x, self.game.player.sprite.y) def scroll(self, dy): f = self.zoom_in_factor if dy &gt; 0 else self.zoom_out_factor if dy &lt; 0 else 1 if .1 &lt; self.zoom_level * f &lt; 2: self.zoom_level *= f vx = self.game.player.sprite.x / self.width vy = self.game.player.sprite.y / self.height vx_in_world = self.left + vx * self.zoomed_width vy_in_world = self.bottom + vy * self.zoomed_height self.zoomed_width *= f self.zoomed_height *= f self.left = vx_in_world - vx * self.zoomed_width self.right = vx_in_world + (1 - vx) * self.zoomed_width self.bottom = vy_in_world - vy * self.zoomed_height self.top = vy_in_world + (1 - vy) * self.zoomed_height </code></pre> <p>This is what I get: <a href="https://i.stack.imgur.com/vm8Tz.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vm8Tz.gif" alt="enter image description here"></a></p> <p>This is what I would like to get (use Factorio as an example):</p> <p><a href="https://i.imgur.com/FteOKOV.gif" rel="nofollow noreferrer"><img src="https://i.imgur.com/FteOKOV.gif" alt="enter image description here"></a></p> <p>The script that I use at the moment I took it from here and modified for my need:</p> <p><a href="https://stackoverflow.com/questions/19428258/how-to-pan-and-zoom-properly-in-2d">How to pan and zoom properly in 2D?</a></p> <p>However, the script I am using, as you see, is based on something that has been created by someone else and I hate using something this way, because it does not belong to me. So I'm using it just to experiment and create my own camera class. 
That's why I asked for advice.</p> <p>Other examples I looked at:</p> <p><a href="https://www.programcreek.com/python/example/91285/pyglet.gl.glOrtho" rel="nofollow noreferrer">https://www.programcreek.com/python/example/91285/pyglet.gl.glOrtho</a></p> <p><a href="https://groups.google.com/forum/#!topic/pyglet-users/g4dfSGPNCOk" rel="nofollow noreferrer">https://groups.google.com/forum/#!topic/pyglet-users/g4dfSGPNCOk</a></p> <p><a href="https://www.tartley.com/2d-graphics-with-pyglet-and-opengl" rel="nofollow noreferrer">https://www.tartley.com/2d-graphics-with-pyglet-and-opengl</a></p> <p>There are other places I've looked, but I do not remember the links.</p> <p>To avoid repetition: yes, I looked at pyglet's guide, but unless I am just missing it (which I do not exclude), I did not find anything that would help me understand how to do it.</p>
<p>Well, I'm unsure of your first problem, but I can help with the zoom.</p> <pre><code>def on_mouse_scroll(self, x, y, scroll_x, scroll_y): zoom = 1.00 if scroll_y &gt; 0: zoom = 1.03 elif scroll_y &lt; 0: zoom = 0.97 glOrtho(-zoom, zoom, -zoom, zoom, -1, 1) </code></pre>
python-3.x|pyglet
1