Columns: Unnamed: 0 (int64, 0 to 1.91M); id (int64, 337 to 73.8M); title (string, lengths 10 to 150); question (string, lengths 21 to 64.2k); answer (string, lengths 19 to 59.4k); tags (string, lengths 5 to 112); score (int64, -10 to 17.3k)
1,905,100
51,630,948
Handling multiple Django forms in a template
<p>I am passing two forms to a template; only one of these forms is compulsory while the other one is optional. Everything is fine if the user chooses to fill out both forms. The problem comes when the user only fills out the compulsory form and leaves the optional one: when the user submits the form, Django will prompt the user to fill in the fields of the optional form even though the user may not be interested in it.</p> <p>"bankingDetailsForm" is the optional form below while "companyProfileForm" is the compulsory one.</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>#userRegForm = CustomUserForm() companyProfileForm = CompanyProfileForm() bankingDetailsForm = BankingDetailsForm() args = {#'userRegForm': userRegForm, 'package': packageOption, 'billing_cycle': b_cycle, 'companyProfileForm': companyProfileForm, 'bankingDetailsForm': bankingDetailsForm } args.update(csrf(request)) return render(request, 'user_account/subscribe.html', args)</code></pre> </div> </div> </p> <p>How can I force the "bankingDetailsForm" form to be optional on submit?</p>
<p>Quick and dirty solution: make all your BankingDetailsForm's fields optional (<code>required=False</code>), and override the form's <code>clean()</code> method to only trigger full validation if one of the fields has been filled.</p>
django|python-3.x|django-forms|django-templates|django-views
1
1,905,101
47,763,139
if error, do this and return, else continue execution in one line
<p>There are lots of repetitions in my code like this block:</p> <pre><code>if err: print('Could not parse text!') print('Error code={}'.format(err.code)) print('Error message={}'.format(err.message)) return err.code </code></pre> <p>I want to make it look nicer, maybe in just one line of code.</p> <p>So I want to express this in one line:</p> <p><strong>if there is an error, print the necessary information and return the error code, otherwise continue execution.</strong></p> <p>Something like this:</p> <pre><code>def error_output(err, text): print(text) print('Error code={}'.format(err.code)) print('Error message={}'.format(err.message)) return err.code return_if(err, error_output, 'Parse error') </code></pre> <p>I tried this:</p> <pre><code>return error_output(err,'parse error') if err else continue </code></pre> <p>But of course it's not possible to use <code>continue</code> like this.</p>
<p>How about:</p> <pre><code>if err: return error_output(err, 'parse error') # more code here </code></pre>
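A runnable sketch of this guard-clause pattern, using a hypothetical `Err` stand-in for the asker's error object:

```python
from collections import namedtuple

# Hypothetical stand-in for the asker's error object.
Err = namedtuple("Err", ["code", "message"])


def error_output(err, text):
    # Print the diagnostics once, in one place, and hand back the code.
    print(text)
    print('Error code={}'.format(err.code))
    print('Error message={}'.format(err.message))
    return err.code


def parse_text(err):
    if err:  # guard clause: bail out early on failure
        return error_output(err, 'parse error')
    # normal execution continues here
    return 0


print(parse_text(None))           # no error: falls through to normal path
print(parse_text(Err(2, "bad")))  # error: prints diagnostics, returns the code
```

The early `return` is what makes the explicit `continue` unnecessary: everything after the `if` block *is* the "else" branch.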
python|python-3.x
3
1,905,102
39,436,773
code for greatest common divisor doesn't output anything
<p>I've written code for finding the greatest common divisor:</p> <pre><code>def gcd(a, b): if b == 0: return a else: return gcd(b, a%b) </code></pre> <p>However, the grader doesn't accept it, saying that it doesn't output anything. How can I fix it?</p>
<blockquote> <p>How can I fix it?</p> </blockquote> <p>Most of these online judges expect output on stdout so you will actually need to output something...</p> <pre><code>print(gcd(2, 6)) </code></pre>
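Putting the two pieces together, a complete runnable version looks like this (a grader would usually supply `a` and `b` on stdin; fixed values are used here so the sketch runs anywhere):

```python
def gcd(a, b):
    # Euclid's algorithm, exactly as in the question.
    if b == 0:
        return a
    return gcd(b, a % b)


# a, b = map(int, input().split())  # uncomment if the grader feeds stdin
print(gcd(2, 6))  # prints 2
```

The function itself was always correct; the only missing step was printing its result to stdout.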
python-2.7|python-3.x|greatest-common-divisor
0
1,905,103
51,469,009
Suddenly .py files can't import modules but still work with CMD
<p>I have a weird issue.</p> <p><strong>Main issue:</strong> My .py files that used to work fine like 3 hours ago now can't import any external modules. I can still run them from Spyder (similar to a PyCharm editor) and from CMD with <code>python run.py</code>. However, when clicked on I get the error <code>ModuleNotFoundError: No module named ModuleName</code>. The module is found when running through everything else, the module is there in the Anaconda libs, the folder doesn't have any permission restrictions, and it's not just one file, it's any .py file that imports an external module.</p> <p>At first I thought this might be a pip issue as I had just updated to pip 18, but even after reverting to pip 10.0.1 the issue remains.</p> <p>[EDIT]: I've tried making a PyInstaller .exe and that still works as intended; however, the app still doesn't work with cx_Freeze even though it used to a few hours ago.</p> <p><strong>Backstory:</strong> I was playing around with PyInstaller and cx_Freeze to turn my app into an executable.</p> <p>I have my working .py file that I edit and test inside of Anaconda's Spyder app.</p> <p>And so I'm testing the executables, and they work fine, just like my Python code. The PyInstaller standalone and the cx_Freeze app work as intended.</p> <p>So I change a few things in the main .py file (nothing crazy, just removed a print('')), re-run cx_Freeze, and then at some point I start working on a setup wizard for my cx_Freeze'd app.</p> <p>It's all good except that when running the app, the cmd prompt just closes. I think 'huh, weird', I test the .py file in Spyder and it works fine, so I screenshot what's written on the cmd: <code>ModuleNotFoundError: No module named ModuleName</code>. I think it's an issue with the wizard installer, so I try the original .exe file, same error. So I try the .py file and, to my dismay, same error. I double-check the modules, reinstall them successfully, the error persists.</p> <p>And so I try to run a backup I know for sure worked and in which I haven't edited anything, and now same error.</p> <p>This is really annoying as I want to make a .exe of the app, managed to, and now nothing works anymore.</p>
<p>Here are some things you can try. Add this code to get a print out of the system path.</p> <pre><code>import sys from pprint import pprint pprint(sys.path) </code></pre> <p>That should tell you all the paths where modules can be loaded from. If your file is not in one of the paths it won't be loaded.</p> <p>For a bit more info you can run python with the <code>-v</code> flag and it will verbosely let you know what is going on as python starts as well as when you attempt to load modules. You may be able to glean information about what is going wrong that way.</p>
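It can also help to print *which* interpreter is actually running, since double-clicking a .py file on Windows may launch a different Python (one without Anaconda's site-packages) than Spyder or `python run.py` does:

```python
import sys
from pprint import pprint

print(sys.executable)  # the interpreter that is running this script
print(sys.version)
pprint(sys.path)       # where this interpreter searches for modules
```

If the executable path differs between the double-click run and the CMD run, the file association (not the code) is what changed.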
python|python-3.x|module|pip
3
1,905,104
69,945,567
Remove duplicates from dataframe but keep the values of other dataframe columns
<p>Given the following dataframe</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame([[1001, 120,np.nan], [1001,np.nan ,30], [1004, 160,np.nan],[1005, 160,np.nan], [1006,np.nan ,8], [1010, 160,np.nan],[1010,np.nan ,4]], columns=['CustomerNr','Period1','Period2']) </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>CustomerNr</th> <th>Period1</th> <th>Period2</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1001</td> <td>120.0</td> <td>NaN</td> </tr> <tr> <td>1</td> <td>1001</td> <td>NaN</td> <td>30.0</td> </tr> <tr> <td>2</td> <td>1004</td> <td>160.0</td> <td>NaN</td> </tr> <tr> <td>3</td> <td>1005</td> <td>160.0</td> <td>NaN</td> </tr> <tr> <td>4</td> <td>1006</td> <td>NaN</td> <td>8.0</td> </tr> <tr> <td>5</td> <td>1010</td> <td>NaN</td> <td>4.0</td> </tr> <tr> <td>6</td> <td>1010</td> <td>160.0</td> <td>NaN</td> </tr> </tbody> </table> </div> <p>I need to generate the following, where duplicated CustomerNr rows are eliminated but the values of Period1 and Period2 are kept:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>CustomerNr</th> <th>Period1</th> <th>Period2</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1001</td> <td>120.0</td> <td>30.0</td> </tr> <tr> <td>1</td> <td>1004</td> <td>160.0</td> <td>NaN</td> </tr> <tr> <td>2</td> <td>1005</td> <td>160.0</td> <td>NaN</td> </tr> <tr> <td>3</td> <td>1006</td> <td>NaN</td> <td>8.0</td> </tr> <tr> <td>4</td> <td>1010</td> <td>160.0</td> <td>4</td> </tr> </tbody> </table> </div>
<pre class="lang-py prettyprint-override"><code>df.groupby('CustomerNr').agg('min') </code></pre>
python|pandas|dataframe
1
1,905,105
55,660,847
How to stitch two pdf pages into one in python
<p>I am using Python, and I want to combine two PDF pages into a single page. My goal is to combine the two pages onto one page, not to merge them into a two-page PDF. Is there any way to combine them without overlapping?</p>
<p>If I understood you correctly, you want to stitch two pages this way:</p> <pre><code>--------- | | | | 1 | 2 | | | | --------- </code></pre> <p>The pyPDF3 module allows you to do this.</p> <pre class="lang-py prettyprint-override"><code>from PyPDF3 import PdfFileWriter, PdfFileReader from PyPDF3.pdf import PageObject pdf_filenames = [&quot;out_mitry.pdf&quot;, &quot;out_cdg.pdf&quot;] input1 = PdfFileReader(open(pdf_filenames[0], &quot;rb&quot;), strict=False) input2 = PdfFileReader(open(pdf_filenames[1], &quot;rb&quot;), strict=False) page1 = input1.getPage(0) page2 = input2.getPage(0) total_width = page1.mediaBox.upperRight[0] + page2.mediaBox.upperRight[0] total_height = max([page1.mediaBox.upperRight[1], page2.mediaBox.upperRight[1]]) new_page = PageObject.createBlankPage(None, total_width, total_height) # Add first page at the 0,0 position new_page.mergePage(page1) # Add second page with moving along the axis x new_page.mergeTranslatedPage(page2, page1.mediaBox.upperRight[0], 0) output = PdfFileWriter() output.addPage(new_page) output.write(open(&quot;result.pdf&quot;, &quot;wb&quot;)) </code></pre>
python|pdf|pypdf
3
1,905,106
55,950,030
What is the fastest way to sort and unpack a large bytearray?
<p>I have a large binary file that needs to be converted into the hdf5 file format.</p> <p>I am using Python 3.6. My idea is to read in the file, sort the relevant information, unpack it and store it away. My information is stored so that an 8-byte time is followed by 2 bytes of energy and then 2 bytes of extra information, then again time, ... My current way of doing it is the following (my information is read in as a bytearray, with the name byte_array):</p> <pre><code>for i in range(0, len(byte_array)+1, 12): if i == 0: timestamp_bytes = byte_array[i:i+8] energy_bytes = byte_array[i+8:i+10] extras_bytes = byte_array[i+10:i+12] else: timestamp_bytes += byte_array[i:i+8] energy_bytes += byte_array[i+8:i+10] extras_bytes += byte_array[i+10:i+12] timestamp_array = np.ndarray((len(timestamp_bytes)//8,), '&lt;Q',timestamp_bytes) energy_array = np.ndarray((len(energy_bytes) // 2,), '&lt;h', energy_bytes) extras_array = np.ndarray((len(timestamp_bytes) // 8,), '&lt;H', extras_bytes) </code></pre> <p>I assume there is a much faster way of doing this, maybe by avoiding the loop over the whole thing. My files are up to 15GB in size, so every bit of improvement would help a lot.</p>
<p>You should be able to just tell NumPy to interpret the data as a structured array and extract fields:</p> <pre><code>as_structured = numpy.ndarray(shape=(len(byte_array)//12,), dtype='&lt;Q, &lt;h, &lt;H', buffer=byte_array) timestamps = as_structured['f0'] energies = as_structured['f1'] extras = as_structured['f2'] </code></pre> <p>This will produce three arrays backed by the input bytearray. Creating these arrays should be effectively instant, but I can't guarantee that working with them will be fast - I think NumPy may need to do some implicit copying to handle alignment issues with these arrays. It's possible (I don't know) that explicitly copying them yourself with <code>.copy()</code> first might speed things up.</p>
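A quick self-contained check of that approach on two fabricated 12-byte records (the values are made up for the demo; the layout matches the question: 8-byte timestamp, 2-byte signed energy, 2-byte unsigned extras):

```python
import struct
import numpy as np

# Build a little-endian byte stream of two (timestamp, energy, extras) records.
byte_array = bytearray()
byte_array += struct.pack('<QhH', 1000, -5, 7)
byte_array += struct.pack('<QhH', 2000, 42, 9)

# Interpret the whole buffer as a structured array, no per-record loop needed.
as_structured = np.ndarray(shape=(len(byte_array) // 12,),
                           dtype='<Q, <h, <H', buffer=byte_array)
timestamps = as_structured['f0']
energies = as_structured['f1']
extras = as_structured['f2']
print(timestamps, energies, extras)
```

The field extraction is a view, not a copy, so it is independent of file size; only downstream operations pay for the 15 GB.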
python|python-3.x|numpy
1
1,905,107
73,505,005
Why is OpenCV VideoWriter producing videos in which one image flashes out of sequence repeatedly?
<p>I am creating a video from images using OpenCV VideoWriter, but the output has one image flashing repeatedly out of sequence.</p> <p>I don't understand why this is happening.</p> <p>Here is my code:</p> <pre><code>import cv2 import os import numpy as np image_folder = 'C:\\Users\\OneDrive\\Desktop\\new\\folder' video_name = 'video.avi' images = [img for img in os.listdir(image_folder) if img.endswith(&quot;.jpg&quot;)] frame = cv2.imread(os.path.join(image_folder, images[0])) height, width, layers = frame.shape fourcc = cv2.VideoWriter_fourcc(*'MP4V') video = cv2.VideoWriter(video_name, fourcc,30,(width,height)) for image in images: video.write(np.uint8(cv2.imread(os.path.join(image_folder, image)))) cv2.destroyAllWindows() video.release() </code></pre> <p>I have also tried varying the fps and codec, but the results do not change.</p> <p>Here is the original video: <a href="https://vimeo.com/743523133" rel="nofollow noreferrer">https://vimeo.com/743523133</a> and here is the video created by OpenCV: <a href="https://vimeo.com/743525698" rel="nofollow noreferrer">https://vimeo.com/743525698</a></p> <p>The images in the folder are in the correct order, and the folder does not contain that flashing image repeatedly.</p> <p>Please help me. Thanks :)</p>
<p>You haven't <strong>sorted</strong> the images.</p> <p><strong><code>os.listdir</code> makes no guarantees about order.</strong> You are getting them in some nonspecific order.</p> <p>You should sort them. Make sure your images contain a counter with <em>leading zeros</em> (<code>001, 002, 003, ...</code>), so that lexical sorting gives the correct result. If you don't, you'll get an order like <code>1, 10, 11, ..., 19, 2, 20, 21, ...</code>.</p> <p>If <code>os.listdir</code> <em>did</em> happen to sort the list before returning it, then the names you gave those images are unsuitable for a simple lexical sort. Either fix your file names or go to the trouble of parsing the numbers out of the file names and sorting numerically.</p> <p>In any case, not an OpenCV problem. Please review <a href="https://stackoverflow.com/help/minimal-reproducible-example">minimal reproducible example</a> to learn how <em>you need to debug</em> your code before asking.</p>
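Concretely, this is the difference between lexical and numeric ordering (file names are hypothetical):

```python
import re

# Unpadded frame numbers, as os.listdir might hand them back.
images = ['frame10.jpg', 'frame2.jpg', 'frame1.jpg']

def numeric_key(name):
    # Pull the first run of digits out of the file name (assumed naming scheme).
    match = re.search(r'\d+', name)
    return int(match.group()) if match else -1

print(sorted(images))                   # lexical: frame1, frame10, frame2
print(sorted(images, key=numeric_key))  # numeric: frame1, frame2, frame10
```

In the asker's script the fix is one line before the write loop: `images = sorted(images, key=numeric_key)` (or a plain `sorted(images)` if the names are zero-padded).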
python|video
1
1,905,108
50,016,048
Django channels 2 with selenium test failed
<p>I am trying to follow the Django Channels tutorial. I was able to implement the chat functionality as described <a href="https://channels.readthedocs.io/en/latest/tutorial/part_3.html" rel="nofollow noreferrer">here</a>. But the unit tests, completely copy-pasted from this <a href="https://channels.readthedocs.io/en/latest/tutorial/part_4.html" rel="nofollow noreferrer">page</a>, failed with the following error: <code>AttributeError: Can't pickle local object 'DaphneProcess.__init__.&lt;locals&gt;.&lt;lambda&gt;'</code>.</p> <p>Full traceback:</p> <pre><code>Traceback (most recent call last): File "C:\Users\user\PycharmProjects\django_channels_test\venv35\lib\site-packages\django\test\testcases.py", line 202, in __call__ self._pre_setup() File "C:\Users\user\PycharmProjects\django_channels_test\venv35\lib\site-packages\channels\testing\live.py", line 42, in _pre_setup self._server_process.start() File "C:\Users\user\AppData\Local\Programs\Python\Python35\Lib\multiprocessing\process.py", line 105, in start self._popen = self._Popen(self) File "C:\Users\user\AppData\Local\Programs\Python\Python35\Lib\multiprocessing\context.py", line 212, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "C:\Users\user\AppData\Local\Programs\Python\Python35\Lib\multiprocessing\context.py", line 313, in _Popen return Popen(process_obj) File "C:\Users\user\AppData\Local\Programs\Python\Python35\Lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__ reduction.dump(process_obj, to_child) File "C:\Users\user\AppData\Local\Programs\Python\Python35\Lib\multiprocessing\reduction.py", line 59, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'DaphneProcess.__init__.&lt;locals&gt;.&lt;lambda&gt;' </code></pre> <p>My consumer class:</p> <pre><code>class ChatConsumer(AsyncWebsocketConsumer): async def connect(self): self.room_name = self.scope['url_route']['kwargs']['room_name'] self.room_group_name = 'chat_%s' % self.room_name # 
Join room group await self.channel_layer.group_add( self.room_group_name, self.channel_name ) await self.accept() async def disconnect(self, close_code): # Leave room group await self.channel_layer.group_discard( self.room_group_name, self.channel_name ) # Receive message from WebSocket async def receive(self, text_data): text_data_json = json.loads(text_data) message = text_data_json['message'] # Send message to room group await self.channel_layer.group_send( self.room_group_name, { 'type': 'chat_message', 'message': message } ) # Receive message from room group async def chat_message(self, event): message = event['message'] # Send message to WebSocket await self.send(text_data=json.dumps({ 'message': message })) </code></pre> <p>My tests module:</p> <pre><code>from channels.testing import ChannelsLiveServerTestCase from selenium import webdriver from selenium.webdriver.common.action_chains import ActionChains from selenium.webdriver.support.wait import WebDriverWait class ChatTests(ChannelsLiveServerTestCase): serve_static = True # emulate StaticLiveServerTestCase @classmethod def setUpClass(cls): super().setUpClass() try: # NOTE: Requires "chromedriver" binary to be installed in $PATH cls.driver = webdriver.Chrome() except: super().tearDownClass() raise @classmethod def tearDownClass(cls): cls.driver.quit() super().tearDownClass() def test_when_chat_message_posted_then_seen_by_everyone_in_same_room(self): try: self._enter_chat_room('room_1') self._open_new_window() self._enter_chat_room('room_1') self._switch_to_window(0) self._post_message('hello') WebDriverWait(self.driver, 2).until(lambda _: 'hello' in self._chat_log_value, 'Message was not received by window 1 from window 1') self._switch_to_window(1) WebDriverWait(self.driver, 2).until(lambda _: 'hello' in self._chat_log_value, 'Message was not received by window 2 from window 1') finally: self._close_all_new_windows() def test_when_chat_message_posted_then_not_seen_by_anyone_in_different_room(self): try: 
self._enter_chat_room('room_1') self._open_new_window() self._enter_chat_room('room_2') self._switch_to_window(0) self._post_message('hello') WebDriverWait(self.driver, 2).until(lambda _: 'hello' in self._chat_log_value, 'Message was not received by window 1 from window 1') self._switch_to_window(1) self._post_message('world') WebDriverWait(self.driver, 2).until(lambda _: 'world' in self._chat_log_value, 'Message was not received by window 2 from window 2') self.assertTrue('hello' not in self._chat_log_value, 'Message was improperly received by window 2 from window 1') finally: self._close_all_new_windows() # === Utility === def _enter_chat_room(self, room_name): self.driver.get(self.live_server_url + '/chat/') ActionChains(self.driver).send_keys(room_name + '\n').perform() WebDriverWait(self.driver, 2).until(lambda _: room_name in self.driver.current_url) def _open_new_window(self): self.driver.execute_script('window.open("about:blank", "_blank");') self.driver.switch_to.window(self.driver.window_handles[-1]) def _close_all_new_windows(self): while len(self.driver.window_handles) &gt; 1: self.driver.switch_to.window(self.driver.window_handles[-1]) self.driver.execute_script('window.close();') if len(self.driver.window_handles) == 1: self.driver.switch_to.window(self.driver.window_handles[0]) def _switch_to_window(self, window_index): self.driver.switch_to.window(self.driver.window_handles[window_index]) def _post_message(self, message): ActionChains(self.driver).send_keys(message + '\n').perform() @property def _chat_log_value(self): return self.driver.find_element_by_css_selector('#chat-log').get_property('value') </code></pre> <p>I'm using Python 3.5 and Django 2.0. </p>
<p><code>reduction.py</code> is failing to serialize objects containing lambdas. After a little research it seems that this is related to an issue with multiprocessing in a Windows environment (and is not limited to this example.) </p> <p>One way to work around the issue is in <code>reduction.py</code></p> <p>replace: <code>import pickle</code> with <code>import dill as pickle</code></p> <p>The dill package can serialize these objects where pickle fails. However, I would not suggest this for a production environment without digging in to make sure this change doesn't break anything else.</p>
python|django|selenium|django-channels
4
1,905,109
66,583,962
How can I drop duplicates within a dataframe that has a column that's a numpy array?
<p>I'm trying to drop duplicates. It works with normal pandas columns, but I'm getting an error when I try to do it on a column that's a numpy array:</p> <pre><code>new_df = new_df.drop_duplicates(subset=['ticker', 'year', 'embedding']) </code></pre> <p>I get this error:</p> <pre><code>4 frames /usr/local/lib/python3.7/dist-packages/pandas/core/algorithms.py in _factorize_array(values, na_sentinel, size_hint, na_value, mask) 509 table = hash_klass(size_hint or len(values)) 510 uniques, codes = table.factorize( --&gt; 511 values, na_sentinel=na_sentinel, na_value=na_value, mask=mask 512 ) 513 pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable.factorize() pandas/_libs/hashtable_class_helper.pxi in pandas._libs.hashtable.PyObjectHashTable._unique() TypeError: unhashable type: 'numpy.ndarray' </code></pre> <p>Also, if it helps, here's how my data looks:</p> <pre><code>ticker year embedding 0 a.us 2020.0 [0.0, 0.0, 0.0, 0.62235785, 0.0, 0.27049118, 0... 1 a.us 2020.0 [0.0, 0.0, 0.0, 0.62235785, 0.0, 0.27049118, 0.. </code></pre> <p>I thought about casting to string, but I need the arrays in the pandas column to stay as numpy, so I'm not sure how to remove duplicates cleanly here.</p>
<p>Here what I will do:</p> <pre><code>&gt;&gt;&gt; df ticker year embedding 0 a.us 2020 [0.0, 0.0, 0.0, 0.62235785, 0.0, 0.27049118] 1 a.us 2020 [0.0, 0.0, 0.0, 0.62235785, 0.0, 0.27049118] &gt;&gt;&gt; cond1 = df.drop(columns=&quot;embedding&quot;).duplicated() &gt;&gt;&gt; cond1 0 False 1 True dtype: bool &gt;&gt;&gt; cond2 = pd.DataFrame(df[&quot;embedding&quot;].to_list()).duplicated() &gt;&gt;&gt; cond2 0 False 1 True dtype: bool </code></pre> <p>To remove duplicate values:</p> <pre><code>&gt;&gt;&gt; df[~(cond1 &amp; cond2)] ticker year embedding 0 a.us 2020 [0.0, 0.0, 0.0, 0.62235785, 0.0, 0.27049118] </code></pre>
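Another option, if a single `duplicated()` call is preferred: make the array column hashable just for the check by mapping each embedding to a tuple, then index the original frame, so the kept rows still hold the original arrays (the data below is a trimmed stand-in for the asker's):

```python
import pandas as pd

df = pd.DataFrame({
    'ticker': ['a.us', 'a.us'],
    'year': [2020, 2020],
    'embedding': [[0.0, 0.62235785], [0.0, 0.62235785]],
})

# Tuples are hashable, so duplicated() can compare them;
# assign() only affects this temporary copy, not df itself.
mask = (
    df.assign(embedding=df['embedding'].map(tuple))
      .duplicated(subset=['ticker', 'year', 'embedding'])
)
deduped = df[~mask]
print(deduped)
```

`tuple` works the same way on numpy arrays, so nothing about the real column needs to change.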
python|pandas
1
1,905,110
63,988,316
How to convert a textfile into a list after every word
<p><a href="https://i.stack.imgur.com/Zadiw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zadiw.png" alt="enter image description here" /></a></p> <p>I have a texfile looking like this:</p> <pre><code>K Alex Music 15 B Anna Soccer 19 U Franco Rugby 29 A Carmen Tennis 27 </code></pre> <hr /> <p>How do I convert the textfile into a list where each word is one element so it looks like this [K,Alex,Music,15 ...etc]. The problem I find is that the texfile doesn't contain any commas (,) that I could have split the words after. One of the reasons why I need the words in a list is that I need to find a way to organize the order of the 4 word sentences depending on which variable the person that uses the code wants. So for example I have to be able to organize after the age or the order of the names etc.</p>
<p>Try this:</p> <pre><code>with open(&quot;File.txt&quot;) as file: content = file.read() words = content.split() </code></pre> <p>This reads the whole file and splits it on whitespace, turning the contents into a list of words. (The variable is named <code>words</code> rather than <code>list</code> so it doesn't shadow the built-in <code>list</code> type.)</p>
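Since the asker also needs to reorder the data by a chosen field, the flat word list can be grouped into four-field records and sorted by any of them (the field layout is assumed from the sample data: code, name, sport, age):

```python
# Stands in for the split() result of reading the file.
words = "K Alex Music 15 B Anna Soccer 19 U Franco Rugby 29 A Carmen Tennis 27".split()

# Group the flat list into records of four consecutive fields.
records = [words[i:i + 4] for i in range(0, len(words), 4)]

# Sort by age (convert to int so '9' doesn't sort after '19') or by name.
by_age = sorted(records, key=lambda rec: int(rec[3]))
by_name = sorted(records, key=lambda rec: rec[1])
print(by_age)
print(by_name)
```

Keeping the data as records rather than one flat list is what makes "sort by any variable" a one-line `sorted()` call.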
python|list
1
1,905,111
64,007,019
JSON to list of objects with marshmallow in Python
<p>I have JSON with data about my objects. I created a schema to serialize it into a list of objects, but it isn't working.</p> <p>Schema:</p> <pre><code>from marshmallow import fields, Schema class ContactSchema(Schema): first_name = fields.String(attribute=&quot;ftNm&quot;) last_name = fields.String(attribute=&quot;ltNm&quot;) phone = fields.Integer(attribute=&quot;pn&quot;) </code></pre> <p>Model:</p> <pre><code>class Contact: id: int first_name: str last_name: str phone: str </code></pre> <p>And I have a function to convert it (but it is not working):</p> <pre><code>def json_to_list(): json = [{'ftNm': 'Name1', 'ltNm': 'Surname1', 'pn': 343434}, {'ftNm': 'Name2', 'ltNm': 'Surname2', 'pn': 141414}, {'ftNm': 'Name3', 'ltNm': 'Surname3', 'pn': 656565}] schema = ContactSchema() result = schema.dump(json) </code></pre> <p>I would appreciate it if someone could help me with a function to convert the JSON into a list of objects.</p>
<p>I'm not exactly sure what your intentions are. However, serialization and deserialization are possible in the following ways.</p> <p>Serialization and deserialization with renaming of the attributes specified by the variable names.</p> <pre><code>from marshmallow import Schema, fields class ContactSchema(Schema): first_name = fields.Str(attribute=&quot;ftNm&quot;, data_key=&quot;ftNm&quot;) last_name = fields.Str(attribute=&quot;ltNm&quot;, data_key=&quot;ltNm&quot;) phone = fields.Integer(attribute=&quot;pn&quot;, data_key=&quot;pn&quot;) # serialization to json def from_list(): data = [ {'ftNm': 'Name1', 'ltNm': 'Surname1', 'pn': 343434}, {'ftNm': 'Name2', 'ltNm': 'Surname2', 'pn': 141414}, {'ftNm': 'Name3', 'ltNm': 'Surname3', 'pn': 656565} ] schema = ContactSchema(many=True) return schema.dump(data) # deserialization from json def to_list(): json = [ {'ftNm': 'Name1', 'ltNm': 'Surname1', 'pn': 343434}, {'ftNm': 'Name2', 'ltNm': 'Surname2', 'pn': 141414}, {'ftNm': 'Name3', 'ltNm': 'Surname3', 'pn': 656565} ] schema = ContactSchema(many=True) return schema.load(json) </code></pre> <p>Deserialization without renaming of the attributes specified by the variable names.</p> <pre><code>class ContactSchema(Schema): first_name = fields.Str(attribute=&quot;ftNm&quot;) last_name = fields.Str(attribute=&quot;ltNm&quot;) phone = fields.Integer(attribute=&quot;pn&quot;) # deserialization from json def to_list(): json = [ {'first_name': 'Name1', 'last_name': 'Surname1', 'phone': 343434}, {'first_name': 'Name2', 'last_name': 'Surname2', 'phone': 141414}, {'first_name': 'Name3', 'last_name': 'Surname3', 'phone': 656565} ] schema = ContactSchema(many=True) return schema.load(json) </code></pre> <p>The direction of the conversion may not be indicated correctly.</p> <pre><code>from marshmallow import Schema, fields, post_load from dataclasses import dataclass @dataclass class Contact: # id: int first_name: str last_name: str phone: str class ContactSchema(Schema): first_name = 
fields.Str(data_key=&quot;ftNm&quot;) last_name = fields.Str(data_key=&quot;ltNm&quot;) phone = fields.Integer(data_key=&quot;pn&quot;) @post_load def make_user(self, data, **kwargs): return Contact(**data) # deserialization from json def to_list(): json = [ {'ftNm': 'Name1', 'ltNm': 'Surname1', 'pn': 343434}, {'ftNm': 'Name2', 'ltNm': 'Surname2', 'pn': 141414}, {'ftNm': 'Name3', 'ltNm': 'Surname3', 'pn': 656565} ] schema = ContactSchema(many=True) return schema.load(json) </code></pre> <p>To serialize and deserialize a database model, I recommend <a href="https://flask-marshmallow.readthedocs.io/en/latest/" rel="nofollow noreferrer">flask-marshmallow</a> and <a href="https://marshmallow-sqlalchemy.readthedocs.io/en/latest/" rel="nofollow noreferrer">marshmallow-sqlalchemy</a>.</p>
python|json|flask|serialization|marshmallow
2
1,905,112
53,257,727
How to access SERVER variable in Flask?
<p>Like in PHP, where we can print the $_SERVER array like this:</p> <pre><code>&lt;?php echo '&lt;pre&gt;'; print_r($_SERVER); echo '&lt;/pre&gt;'; ?&gt; </code></pre> <p>How can we access the equivalent of PHP's $_SERVER variable in Flask?</p>
<p>Using <strong>mod_wsgi</strong> (rather than mod_python), your application is passed an <code>environ</code> dictionary:</p> <pre><code>def application(environ, start_response): ... </code></pre> <p>And that environment contains the typical elements of $_SERVER in PHP:</p> <pre><code>... environ['REQUEST_URI'] ... </code></pre>
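Because a WSGI application is just a callable, the `environ` can be inspected without running a server, by invoking the app with a hand-built dictionary. (In Flask specifically, the same mapping is available as `request.environ` inside a request context.)

```python
def application(environ, start_response):
    # environ plays the role of PHP's $_SERVER.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    body = 'You requested {}'.format(environ.get('PATH_INFO', '/'))
    return [body.encode()]

# A minimal fake environ, standing in for what the server would supply.
fake_environ = {'PATH_INFO': '/hello', 'REQUEST_METHOD': 'GET'}
collected = {}

def start_response(status, headers):
    collected['status'] = status

print(application(fake_environ, start_response))
```

A real server would populate `environ` with the full CGI-style set (`REQUEST_METHOD`, `QUERY_STRING`, `HTTP_*` headers, and so on).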
php|python|flask
2
1,905,113
65,482,427
How do I read a DOUBLE from the memory of a process
<p>I am making an AI bot for the game Bloons TD6, but for it to work I need to get the money value so it knows when it can buy something. For that I decided to find a pointer to the in-game money, but I don't know how to read memory with Python. I managed to do it in C++, but for the bot to work I need it in Python. I already managed to get the PID; now I just need to read an address from memory.</p> <p>Also important to mention: the value that I want to read is a double.</p> <pre class="lang-py prettyprint-override"><code>PROCESS_ALL_ACCESS = 0x1F0FFF HWND = win32ui.FindWindow(None,&quot;BloonsTD6&quot;).GetSafeHwnd() PID = win32process.GetWindowThreadProcessId(HWND)[1] </code></pre>
<p>You could try <strong>Pymem</strong>; here you can find a quickstart showing how you can read/write integer values from/to process memory: <a href="https://pymem.readthedocs.io/en/latest/quickstart.html" rel="nofollow noreferrer">https://pymem.readthedocs.io/en/latest/quickstart.html</a>.</p> <p>You'll find this simple example (there's actually a typo in it, it's <em>pm.process_id</em>, not <em>process_id</em>):</p> <pre><code>from pymem import Pymem pm = Pymem('notepad.exe') print('Process id: %s' % pm.process_id) address = pm.allocate(10) print('Allocated address: %s' % address) pm.write_int(address, 1337) value = pm.read_int(address) print('Allocated value: %s' % value) pm.free(address) </code></pre> <p>In the same way it is possible to read/write a double by using the <em><strong>read_double()</strong></em> and <em><strong>write_double()</strong></em> functions. You can find some docs in here: <a href="https://pymem.readthedocs.io/en/documentation/api.html" rel="nofollow noreferrer">https://pymem.readthedocs.io/en/documentation/api.html</a></p> <p>Also check this out: <a href="https://stackoverflow.com/questions/52521963/reading-data-from-process-memory-with-python">reading data from process&#39; memory with Python</a></p>
python|ctypes
0
1,905,114
65,190,544
pandas grouping by multiple categories for duplicates
<p>Given this sample dataset, I am attempting to alert various companies that they have duplicates in our database so that they can all communicate with each other and determine which company the person belongs to:</p> <pre><code>Name SSN Company Smith, John 1234 A Smith, John 1234 B Jones, Mary 4567 C Jones, Mary 4567 D Williams, Joe 1212 A Williams, Joe 1212 C </code></pre> <p>The ideal output is a data frame provided to each company alerting them to duplicates in the data and the identity of the other company claiming the same person as assigned to them. Something like this:</p> <p>Company A dataframe</p> <pre><code>Name SSN Company Smith, John 1234 A Smith, John 1234 B Williams, Joe 1212 A Williams, Joe 1212 C </code></pre> <p>Company C dataframe</p> <pre><code>Name SSN Company Jones, Mary 4567 C Jones, Mary 4567 D Williams, Joe 1212 A Williams, Joe 1212 C </code></pre> <p>So I tried groupby(['Company']), but, of course, that only groups all of one company's results into a single group; it omits the other company with the duplicate person and SSN. Some version of groupby seems like it should work, but grouping by multiple columns doesn't quite do it. The output would be grouped by company but would contain the duplicate values associated with all the values in that company's group. An enigma, hence my post.</p> <p>Perhaps groupby Company and then concatenate each Company group with each other group on the Name column?</p>
<p>First we pivot on <code>Company</code> to see employees who are in multiple companies easily:</p> <pre><code>df2 = pd.pivot_table(df.assign(count = 1), index = ['Name','SSN'], columns='Company', values='count', aggfunc = 'count') </code></pre> <p>produces</p> <pre><code> Company A B C D Name SSN Jones,Mary 4567 NaN NaN 1.0 1.0 Smith,John 1234 1.0 1.0 NaN NaN Williams,Joe 1212 1.0 NaN 1.0 NaN </code></pre> <p>where the values are the count of an employee in that company and NaN means they are not in it.</p> <p>Now we can manipulate this to extract useful views for different companies. For A we can say 'pull everyone who is in company A <em>and</em> in any of the other companies':</p> <pre><code>dfA = df2[(~df2['A'].isna()) &amp; (~df2[['B','C','D']].isna()).any(axis=1) ].dropna(how = 'all', axis=1) dfA </code></pre> <p>this produces</p> <pre><code> Company A B C Name SSN Smith,John 1234 1.0 1.0 NaN Williams,Joe 1212 1.0 NaN 1.0 </code></pre> <p>Note we dropped companies that are irrelevant here via <code>dropna(...)</code>, in this case D, as there were no overlaps between A and D, and column D had all NaNs.</p> <p>We can easily write a function to produce a report for any company:</p> <pre><code>def report_for(company_name): companies = df2.columns other_companies = [c for c in companies if c != company_name] return (df2[(~df2[company_name].isna()) &amp; (~df2[other_companies].isna()).any(axis=1) ] .loc[:,[company_name] + other_companies] .dropna(how = 'all', axis=1) ) </code></pre> <p>Note we also re-order columns so the table for company 'B' has column 'B' first:</p> <pre><code>report_for('B') </code></pre> <p>generates</p> <pre><code> Company B A Name SSN Smith,John 1234 1.0 1.0 </code></pre>
python|pandas|pandas-groupby
1
1,905,115
68,665,200
Rename a file in a for-loop with python and linux
<p>I have the following &quot;problem&quot;:</p> <p>I have a file with a lot of inodes, each of which is on a single line. Now I would like to go through these lines with Python and insert each individual line into a Linux command. To do this, I iterate the file with a for loop. With the icat command I can extract a single inode to a destination with one command. However, I have to specify the target name of the file every time. My problem now is that I enter the icat command with subprocess.run in the for loop and want to assign a variable name to the file for each run. Unfortunately, I don't know how.</p> <p>File and file_dir are variables.</p> <pre><code>with open(file, &quot;r&quot;) as f: for i in f.readlines(): subprocess.run([&quot;icat /dev/loop1/&quot; + i + &quot;&gt; &quot; + file_dir], shell=True) print(&quot;finish&quot;) </code></pre> <p>How can I use a variable or a Linux command to name the file to be extracted differently for each pass?</p>
<p>Append a counter to the target file name on every loop iteration. Note that <code>counter</code> must be converted to a string before concatenation, and the trailing newline should be stripped from each line read from the file:</p> <pre><code>with open(file, &quot;r&quot;) as f: counter = 0 for i in f.readlines(): inode = i.strip() subprocess.run(&quot;icat /dev/loop1/&quot; + inode + &quot; &gt; &quot; + file_dir + &quot;img&quot; + str(counter), shell=True) counter += 1 print(&quot;finish&quot;) </code></pre>
python|linux
0
1,905,116
10,538,717
Passing expression as argument: keyword can't be an expression
<p>Here is my actions:</p> <pre><code>&gt;&gt;&gt; def show(d): print d ... &gt;&gt;&gt; test = {"result": True} &gt;&gt;&gt; show(test) {'result': True} &gt;&gt;&gt; show(test["info"]="Some info") File "&lt;console&gt;", line 1 SyntaxError: keyword can't be an expression </code></pre> <p>Why can I not pass expression as argument to a function?</p>
<p>The <code>=</code> sign indicates to Python that this is a keyword parameter, not a positional one. Since the part to the left of the <code>=</code> is an expression <code>test["info"]</code> you get the error.</p>
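A minimal sketch of the working pattern (shown with Python 3 print syntax): perform the item assignment as its own statement, then pass the dict itself:

```python
def show(d):
    print(d)

test = {"result": True}

# test["info"] = "Some info" is a statement, not an expression,
# so it cannot appear in an argument position; run it first.
test["info"] = "Some info"
show(test)  # {'result': True, 'info': 'Some info'}
```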
python|python-2.7
8
1,905,117
5,159,674
Filling out paragraph text via urllib?
<p>Say I have a paragraph in a site I am managing and I want a python program to change the contents of that paragraph. Is this plausible with urllib?</p>
<p>If you have access to any server-side scripting language, it's easy; urllib can only send HTTP requests, so the server itself has to accept and apply the new paragraph text.</p>
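To make the point concrete: urllib on its own can only fetch the page, and editing the returned string changes nothing on the site. A minimal sketch (Python 3 naming; the URL would be the page you manage):

```python
from urllib.request import urlopen

def fetch(url):
    # Downloads the page HTML. Editing this string does NOT change the
    # live site; you need a server-side endpoint that accepts the new
    # paragraph text and rewrites the stored page.
    with urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")
```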
python
0
1,905,118
5,050,615
How to get application root path in GAE
<p>I am using Jinja2 templates for my GAE Python application. Actually there are a couple of small applications inside one project. They are, for example, blog and site. So, the first one is for blog and the second one is for site =). I have this folders structure:</p> <pre><code>/ /apps /blog /site /templates /blog /site </code></pre> <p>I also have a code for accessing templates folder for each application. It looks like this:</p> <pre><code>template_dirs = [] template_dirs.append(os.path.join(os.path.dirname(__file__), 'templates/project')) </code></pre> <p>Of course, it doesn't work as it's wrong. It returns a string like this: base/data/home/apps/myapplication/1.348460209502075158/apps/project/templates/project</p> <p>And I need it to return a string like this: base/data/home/apps/myapplication/1.348460209502075158/apps/templates/project How can I do that using absolute paths, not relative? I suppose I need to get the root of the my whole GAE project some way. Thanks!</p>
<p>The easiest way to get the root path of your app is to put a module in the root of your app, which stores the result of <code>os.path.dirname(__file__)</code>, then import that where needed. Alternately, call <code>os.path.dirname(module.__file__)</code> on a module that's in the root of your app.</p>
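A minimal sketch of that idea (module and directory names here are illustrative, not taken from the question):

```python
# app_root.py, placed in the root of the app
import os

# Computed once at import time; the directory containing this module
# is the application root.
ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
```

Then elsewhere: `from app_root import ROOT_DIR` and `os.path.join(ROOT_DIR, 'apps', 'templates', 'project')`.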
python|google-app-engine|path
15
1,905,119
62,642,130
How to send HTTP request from file
<p>I have a .txt file with a http request (Header + Body). How can I send the HTTP request present in the file using Python or a tool such as curl or netcat (from command line)?</p> <p>I would like to &quot;repeat&quot; the request included in the file using python/command-line. How can I do it?</p>
<p>If you just want to send requests to a certain URL, there's a library called requests. If I got this wrong and you meant sending a custom packet, there's a useful library called scapy.</p> <pre><code>import requests class spammer(): def __init__(self,nameoffile): self.urls = {} self.nameoffile = nameoffile self.hamspam(-1) self.results() print(' call hamspam with a variable indicating the amount of spam') def hamspam(self, times_spammed): if times_spammed &lt; 0: with open(f'{self.nameoffile}.txt','r') as urls: lines = urls.readlines() for element in lines: resp = requests.get(element.strip()) self.urls[element] = {&quot;resp&quot;:resp.status_code,&quot;reason&quot;:resp.reason} else: for key in self.urls.keys(): _ = requests.get(key) print(f'spamming url {key}') def results(self): for k,v in self.urls.items(): print(f&quot;for url: {k} the response was {self.urls[k]['resp']}, for reason {self.urls[k]['reason']}&quot;) Spammer = spammer('nya') </code></pre> <p>This is a small script I've made to do this with Python; I've renamed variables to be self-explanatory, hope it helps :)</p>
python|curl|request|python-requests|netcat
0
1,905,120
62,654,951
How to get feature importance of FB prophet?
<p>I am using FB Prophet to do time-series forecast. I added two features--discount and promotion, and add holiday effect. The model fits well. But I want to get the feature importance to check how much contribution of 2 features. It seems FB Prophet does not have the feature importance function like other machine learning models &quot;model.feature_importances_&quot;.</p> <p>In FB Prophet, I can get the &quot;forecast&quot; dataframe, which contains : <em>trend<br /> yhat_lower<br /> yhat_upper<br /> trend_lower trend_upper discount_x<br /> discount_lower<br /> discount_upper<br /> extra_regressors_multiplicative extra_regressors_multiplicative_lower<br /> extra_regressors_multiplicative_upper<br /> holidays holidays_lower<br /> holidays_upper<br /> multiplicative_terms multiplicative_terms_lower<br /> multiplicative_terms_upper<br /> promotion_x promotion_lower promotion_upper promotion_Day<br /> promotion_Day_lower promotion_Day_upper weekly<br /> weekly_lower<br /> weekly_upper<br /> additive_terms<br /> additive_terms_lower<br /> additive_terms_upper<br /> yhat<br /> y</em></p> <p>In that case, how can I analyze the feature importance?</p> <p>THANK YOU!</p>
<p>You can retrieve the regressor coefficients with the following lines:</p> <pre class="lang-py prettyprint-override"><code>from prophet.utilities import regressor_coefficients regressor_coef = regressor_coefficients(model) regressor_coef[['regressor', 'regressor_mode', 'coef']].sort_values('coef') </code></pre>
python|machine-learning|time-series|facebook-prophet
1
1,905,121
61,865,511
Keep a Key Pressed in Python
<p>I Like to keep a key pressed until a certain time for ex. I want to Keep key<code>c</code> pressed for 5 seconds and release it after that. How can i do so with a python script? I tried using the keyboard module but can't seem to find a way to do that.</p>
<p>Maybe with pyautogui and time? Pyautogui allows you to easily control your keyboard and mouse. Note that <code>press()</code> only taps a key repeatedly, while <code>keyDown()</code>/<code>keyUp()</code> actually hold it down:</p> <pre class="lang-py prettyprint-override"><code>import time, pyautogui def hold_c(hold_time): pyautogui.keyDown('c') time.sleep(hold_time) pyautogui.keyUp('c') hold_c(5) </code></pre>
python|input|keyboard
0
1,905,122
67,588,093
Exported dashboard with Kibana API cannot be imported manually in Kibana UI
<p>I can get my exported dashboard using this code. The API is from the Kibana documentation : <a href="https://www.elastic.co/guide/en/kibana/master/dashboard-api-export.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/kibana/master/dashboard-api-export.html</a></p> <pre><code>tmpdir = '/tmp/kibana/dashboards/' if not os.path.exists(tmpdir): os.makedirs(tmpdir) dashboard = requests.get('http://localhost:5601/api/kibana/dashboards/export?dashboard=d83837a0-7c21-11eb-9dad-4b1b4ebf9d55') json_dashboard = dashboard.json() dashboards_exported = [] dashboards_exported.append(json_dashboard) with open(tmpdir+'Dash'+'.json', 'w') as outfile: json.dump(dashboards_exported, outfile, indent=2, sort_keys=True) </code></pre> <p>The exported dashboard json file is the following : <a href="https://pastebin.com/YZTKJFn3" rel="nofollow noreferrer">https://pastebin.com/YZTKJFn3</a></p> <p>However, when I want to import it manually to Kibana UI, it says &quot;No objects imported&quot;.</p> <p>When I export the dashboard manually from Kibana UI, I get the following NDJSON file : <a href="https://pastebin.com/nuRFKjPx" rel="nofollow noreferrer">https://pastebin.com/nuRFKjPx</a></p> <p>You can notice that the two files are slightly different and don't have the same format (JSON through API and NDJSON manually exported). Therefore, I am not able to import manually the API generated JSON file. Do you have any idea of why Kibana does not find any object when I import the first JSON file ?</p>
<p>A little late, but I think you should import it as NDJSON as well, through the saved objects import API. The curl way:</p> <pre><code>curl -X POST &quot;{{ host_ip }}:{{ kibana_port }}/api/saved_objects/_import&quot; -H &quot;kbn-xsrf: true&quot; --form file=@/tmp/kibana_stored_objects.ndjson </code></pre>
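Since the question drives Kibana from Python, the same import can be done with requests (a sketch; it assumes Kibana at localhost:5601 and the file path from the curl example):

```python
import requests

def import_ndjson(path, kibana_url="http://localhost:5601"):
    # POST the NDJSON export to the saved objects import API.
    # Kibana requires the kbn-xsrf header on API writes.
    with open(path, "rb") as f:
        return requests.post(
            kibana_url + "/api/saved_objects/_import",
            headers={"kbn-xsrf": "true"},
            files={"file": ("export.ndjson", f, "application/ndjson")},
        )

# resp = import_ndjson("/tmp/kibana_stored_objects.ndjson")
# print(resp.status_code, resp.json())
```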
python|import|export|kibana|dashboard
0
1,905,123
67,336,669
Cython class numpy fixed size array declaration
<p>Im trying to initiate a fixed sized array within a cython class so multiple methods can use it. How can that be done?</p> <pre><code>cdef class My Class: cdef public np.ndarray[np.float, ndim=1] myarr def __cinit__(self,int history_length): self.myarr = np.empty(history_length, dtype=np.float) </code></pre> <p>I am getting an error saying:</p> <p><code>buffer types only allowed as function local variables</code></p> <p>Is there a way to declare this and access this?</p> <p>Thanks</p>
<p>I believe the typed memoryview syntax <code>type[::1]</code> is preferred in Cython, i.e.</p> <pre><code>import numpy as np cimport numpy as np cdef class MyClass: cdef double[::1] myarr def __cinit__(self, int history_length): self.myarr = np.empty(history_length, dtype=np.float) </code></pre> <p>Edit: the above code assumes that you define an array contiguous in memory, which is the default C style for numpy arrays (i.e. row-contiguous). Note the memoryview type must match the array's dtype: <code>np.float</code> is a 64-bit double, so the view is declared <code>double[::1]</code>. Declaring it <code>double[:]</code> would state that you expect a double buffer that is not necessarily contiguous.</p>
python|cython
3
1,905,124
60,710,363
how to get filename from TkInter filedialog.askopenfile from a button command
<p>I'm quite new to python, and even more new to Tkinter. Sorry in advance for any obvious mistake I might be doing here ...</p> <pre><code>class application(): def __init__(self): self.root = Tk() frameCSV = LabelFrame(self.root) Button(frameCSV, text="browse csv", command= self.browseCSV) Label(frameCSV,text=csvFilename ,bg='white').grid(row =1,column=1) def browseCSV(self): global csvFilename csvFilename = filedialog.askopenfilename( initialdir="/Volumes/", title="select the file", filetypes=[("CSV files", ".csv"),("all files", "*.*")] ) </code></pre> <p>The <code>frameCSV</code> is a frame within my root window. I'd like to add inside this frame a <code>Label</code> with the returned path of the selected file.</p> <p>But it doesn't work!</p> <pre><code>Traceback (most recent call last): File "/Users/guillaume/Downloads/uploader_v1_0_200312.py", line 106, in &lt;module&gt; f=application() File "/Users/guillaume/Downloads/uploader_v1_0_200312.py", line 70, in __init__ Label(frameCSV,text=csvFilename ,bg='white').grid(row =1,column=1) NameError: name 'csvFilename' is not defined </code></pre> <p>What am I doing wrong? I don't get why the function doesn't pass the PATH to the <code>Label</code> to display it.</p> <p>Thanks a lot for your help.</p>
<p>However, I have this message when doing the build in Sublime Text: objc[3025]: Class FIFinderSyncExtensionHost is implemented in both /System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit (0x7fff85a04cd0) and /System/Library/PrivateFrameworks/FileProvider.framework/OverrideBundles/FinderSyncCollaborationFileProviderOverride.bundle/Contents/MacOS/FinderSyncCollaborationFileProviderOverride (0x10c6e1cd8). One of the two will be used. Which one is undefined</p> <p>Does this mean anything I should worry about?</p>
python|function|button|tkinter|command
0
1,905,125
60,514,048
How compare elements in a list with elements in another list whose elements are dicts?
<p>I have a list of strings, for example:</p> <pre><code>list1 = ['apple', 'orange', 'pear', 'peach'] </code></pre> <p>and another list whose elements are dictionaries, like so:</p> <pre><code>list2 = [{'fruit': 'pear', 'size': 'big', 'rating': 7}, {'fruit': 'apple', 'size': 'small', 'rating': 6},{'fruit': 'peach', 'size': 'medium', 'rating': 7}, {'fruit': 'banana', 'size': 'big', 'rating': 9}] </code></pre> <p>For each element in list1, I need to determine if it appears as a value for any of the 'fruit' keys in list2's dictionaries. In this case, apple, pear and peach are all values of at least one 'fruit' key in list2, while orange is not. For each element in list1, how can I get a boolean true/false of whether it appears as a value for any 'fruit' key in list2?</p>
<p>You can use a <code>for</code> loop or a list comprehension, using <code>any</code> to check whether the value is present in any element of the second <code>list</code>, for example:</p> <pre class="lang-py prettyprint-override"><code>list1 = ['apple', 'orange', 'pear', 'peach'] list2 = [{'fruit': 'pear', 'size': 'big', 'rating': 7}, {'fruit': 'apple', 'size': 'small', 'rating': 6},{'fruit': 'peach', 'size': 'medium', 'rating': 7}, {'fruit': 'banana', 'size': 'big', 'rating': 9}] booleans = [ any(fruit == f_dict['fruit'] for f_dict in list2) for fruit in list1 ] print(booleans) &gt;&gt;&gt; [True, False, True, True] </code></pre>
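Since each `any(...)` call rescans list2, building a set of the 'fruit' values first avoids the nested scan and gives the same result:

```python
list1 = ['apple', 'orange', 'pear', 'peach']
list2 = [{'fruit': 'pear', 'size': 'big', 'rating': 7},
         {'fruit': 'apple', 'size': 'small', 'rating': 6},
         {'fruit': 'peach', 'size': 'medium', 'rating': 7},
         {'fruit': 'banana', 'size': 'big', 'rating': 9}]

fruits_in_list2 = {d['fruit'] for d in list2}  # one pass over list2
booleans = [fruit in fruits_in_list2 for fruit in list1]
print(booleans)  # [True, False, True, True]
```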
python|list|dictionary
0
1,905,126
11,172,993
Analyzing Python Code: Modulus Operator
<p>I was looking at some code in Python (I know nothing about Python) and I came across this portion:</p> <pre><code>def do_req(body): global host, req data = "" s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((host, 80)) s.sendall(req % (len(body), body)) tmpdata = s.recv(8192) while len(tmpdata) &gt; 0: data += tmpdata tmpdata = s.recv(8192) s.close() return data </code></pre> <p>This is then called later on with body of huge size, as in over 500,000 bytes. This is sent to an Apache server that has the max request size on the default 8190 bytes. </p> <p>My question is what is happening at the "<code>s.sendall()</code>" part? Obviously the entire body cannot be sent at once and I'm guessing it is reduced by way of the modulus operator. I don't know how it works in Python, though. Can anyone explain? Thanks.</p>
<p>It is not really the modulus operator (technically it is since strings simply implement <a href="http://docs.python.org/reference/datamodel.html#object.__mod__" rel="nofollow"><code>__mod__</code></a>) but the <a href="http://docs.python.org/library/stdtypes.html#string-formatting-operations" rel="nofollow">python2-style string formatting operator</a>.</p> <blockquote> <p>Given <code>format % values</code> (where <em>format</em> is a string or Unicode object), <code>%</code> conversion specifications in <em>format</em> are replaced with zero or more elements of <em>values</em>. The effect is similar to the using <code>sprintf()</code> in the C language. </p> </blockquote> <blockquote> <p>Obviously the entire body cannot be sent at once</p> </blockquote> <p>While it indeed doesn't fit into a single packet that's a low-level thing which is handled internally (most likely not even by python but by the underlying syscall that writes to a socket)</p>
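Concretely, `req % (len(body), body)` is plain %-formatting; with an illustrative template (not the asker's actual `req`):

```python
# A hypothetical request template with %d and %s conversion specifiers.
req = "POST /parse HTTP/1.1\r\nContent-Length: %d\r\n\r\n%s"
body = "some text"

message = req % (len(body), body)  # sprintf-style substitution
print(message)
```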
python|networking
7
1,905,127
11,145,531
Download JPG from website with no extension
<p>Is there any way I can download an image from a website that has no extension associated with it? I'm currently using Python to do this but when I try to use the command:</p> <pre><code>url = 'http://dcdbs.ssec.wisc.edu/inventory/image.php?sat=GOES-13&amp;date=2012-06-12&amp;time=03:32&amp;type=Imager&amp;band=2' urllib.urlretrieve(url) </code></pre> <p>the image is downloaded, however the file is empty. This command works with images that have extensions. </p> <p>I'm not all that familiar with urllib. I'd like to do this preferably with Python, but Java would be ok too. This seems like a fairly simple thing to do, but I've spent quite a bit of time on it with no luck. </p> <p>Thanks! </p>
<p>You're using the wrong <code>urllib</code> function.</p> <pre><code>url = 'http://dcdbs.ssec.wisc.edu/inventory/image.php?sat=GOES-13&amp;date=2012-06-12&amp;time=03:32&amp;type=Imager&amp;band=2' resp = urllib.urlopen(url) image_data = resp.read() # Open output file in binary mode, write, and close. f = open('aaa.jpg','wb') f.write(image_data) f.close() </code></pre>
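The snippet above is Python 2; in Python 3 the equivalent lives in `urllib.request` (a sketch, with the URL left as a parameter):

```python
from urllib.request import urlopen

def save_image(url, dest):
    # Fetch the raw response bytes and write them in binary mode;
    # the URL needs no file extension, only the destination name does.
    with urlopen(url) as resp, open(dest, "wb") as f:
        f.write(resp.read())
```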
python
5
1,905,128
63,393,073
Assigning output of for loop as variables in dynamic list
<p>Is there a way to assign the output from a for loop as variables for the output from a method? Both outputs will be lists of the same length.</p> <p>Below, I am performing the .pages method in pdfplumber on each page of a pdf. I want to assign each page to a variable p0, p1, p2... etc, then extract the text using the .extracttext method. The total number of pages will be <em>dynamic</em> so I can't simply unpack the list as (p1, p2, p3) = .....</p> <p>I am printing just to provide an output for visual aid.</p> <pre><code>import pdfplumber with pdfplumber.open(file) as pdf: print(pdf.pages) for pages in total_pages_range: print(&quot;p&quot; + str(pages)) </code></pre> <p>The outputs are:</p> <pre><code>[&lt;pdfplumber.page.Page object at 0x7ff6b75e9c50&gt;, &lt;pdfplumber.page.Page object at 0x7ff6b761a4d0&gt;] p0 p1 </code></pre> <p>I need p0 = &lt;pdfplumber.page.Page object at 0x7ff6b75e9c50&gt; and p1 = &lt;pdfplumber.page.Page object at 0x7ff6b761a4d0&gt;. But with the capability for p2 = ....., p3 = ...... etc. Could a dictionary be used here?</p> <p>Many thanks, G</p>
<p>If I understand your request correctly, use dict comprehension:</p> <pre><code>pages_map = {f'p{i}': page for i, page in enumerate(pdf.pages)} </code></pre>
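With plain strings standing in for the page objects, the comprehension and the later lookups work like this:

```python
pages = ["page-a", "page-b", "page-c"]  # stand-ins for pdfplumber pages
pages_map = {f"p{i}": page for i, page in enumerate(pages)}

print(pages_map)        # {'p0': 'page-a', 'p1': 'page-b', 'p2': 'page-c'}
print(pages_map["p1"])  # page-b
```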
python|for-loop|list-comprehension|dynamic-list
5
1,905,129
56,757,040
Keras Preprocessing Rotates Image While Importing It With load_img()
<p>I just started learning the Keras API and I am experimenting with the MNIST dataset. I got it working correctly but I have a problem with the function <code>load_img()</code> <code>from the keras.preprocessing.image</code> library, when I try to test a picture that I took. It imports a portrait oriented image as a landscape one. I took the photo with my smartphone in portrait mode and Windows correctly shows width 3024 and height 4032 pixels.</p> <p>When I load that image and print the width and height it shows 4032x3024. Also when I do <code>img.show()</code>, it seems to have been rotated 90 degrees counterclockwise. All that is happening right after loading it, without any processing. I tried looking into the API for the <code>load_img()</code> and couldn't find any arguments that make it rotate while loading.</p> <p>This is a dummy example to show you the problem:</p> <pre class="lang-py prettyprint-override"><code>from keras.preprocessing.image import load_img img = load_img('filepath/test.jpg') # Load portrait mode image Windows says 3024x4032 width, height = img.size print(width, height) # Prints 4032 3024 img.show() # Shows it rotated by 90 degrees counterclockwise </code></pre> <p>I want it to be imported in portrait mode. Why does it get rotated? The problem is that a picture taken in landscape mode is also imported as 4032 x 3024, so I can't differentiate between the 2 orientations. I want to be able to rotate the image if it's in portrait mode but not rotate it if it's in landscape mode.</p> <p>EDIT: I just tried to load the image with Pillow and the results are exactly the same</p>
<p>Use:</p> <pre><code>jhead -v YourImage.jpg </code></pre> <p>to check the EXIF parameter called <code>Orientation</code> - phone cameras set it so that images can be rotated. Try it for one image that works and another image that is <em>"unhappy"</em>.</p> <p>You can correct it with <strong>ImageMagick</strong>:</p> <pre><code>convert unhappy.jpg -auto-orient happy.jpg </code></pre> <p>Or maybe more easily with <code>exiftool</code>. Discussion and example <a href="https://leancrew.com/all-this/2009/04/derotating-jpegs-with-exiftool/" rel="nofollow noreferrer">here</a>.</p>
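On the Python side, Pillow (6.0+) can apply the Orientation tag directly, so a portrait phone photo comes out with its intended width/height (the function name is illustrative):

```python
from PIL import Image, ImageOps

def load_upright(path):
    # exif_transpose rotates/flips the pixels according to the EXIF
    # Orientation tag (if any), so the size then matches what the
    # operating system reports.
    img = Image.open(path)
    return ImageOps.exif_transpose(img)
```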
python|python-3.x|tensorflow|image-processing|keras
2
1,905,130
69,925,529
k8s cronjob not running updated codes, but manual create this job works
<p>I have a cronjob running in k8s and inside a specific container. I have a python script to run in this cronjob, however it didn't run the latest codes, but I've checked the images it pulled was the latest.</p> <p>When I <strong>manually</strong> run <em><strong>kubectl create job --from=....</strong></em>, it did run the latest python codes.</p> <p>Am I missing something?</p> <p>I've already tried to delete the existed cronjob and apply it again, it still not running the latest codes. It runs the latest codes only when I manually create job.</p> <p>Quite strange behavior between auto and manually run the same job....</p> <p>Describe job - cronjob auto run</p> <pre><code>Name: severity-1637733600 Namespace: security Selector: controller-uid=167b250b-831c-4725-a1f8-bb46553e2948 Labels: controller-uid=167b250b-831c-4725-a1f8-bb46553e2948 job-name=severity-1637733600 Annotations: &lt;none&gt; Controlled By: CronJob/severity Parallelism: 1 Completions: 1 Start Time: Wed, 24 Nov 2021 14:00:00 +0800 Completed At: Wed, 24 Nov 2021 14:00:51 +0800 Duration: 51s Pods Statuses: 0 Running / 1 Succeeded / 0 Failed Pod Template: Labels: controller-uid=167b250b-831c-4725-a1f8-bb46553e2948 job-name=severity-1637733600 Containers: fetch-y-info: Image: security/portal:3c62acai Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /bin/sh Args: -c python scripts/severity.py -vv Environment: DB_DRIVER: &lt;set to the key 'driver' in secret 'security-secret'&gt; Optional: false Mounts: &lt;none&gt; Volumes: &lt;none&gt; Events: &lt;none&gt; </code></pre> <p>Describe job - manual run</p> <pre><code>Name: severity-manual Namespace: security Selector: controller-uid=97952b85-24a5-4bbc-8e49-247e8bf2dcb1 Labels: controller-uid=97952b85-24a5-4bbc-8e49-247e8bf2dcb1 job-name=severity-manual Annotations: cronjob.kubernetes.io/instantiate: manual Parallelism: 1 Completions: 1 Start Time: Wed, 24 Nov 2021 15:34:56 +0800 Completed At: Wed, 24 Nov 2021 15:35:18 +0800 Duration: 22s Pods Statuses: 0 
Running / 1 Succeeded / 0 Failed Pod Template: Labels: controller-uid=97952b85-24a5-4bbc-8e49-247e8bf2dcb1 job-name=severity-manual Containers: fetch-y-info: Image: security/portal:3c62acai Port: &lt;none&gt; Host Port: &lt;none&gt; Command: /bin/sh Args: -c python scripts/severity.py -vv Environment: DB_DRIVER: &lt;set to the key 'driver' in secret 'security-secret'&gt; Optional: false Mounts: &lt;none&gt; Volumes: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 2m7s job-controller Created pod: severity-manual-hbtzd Normal Completed 105s job-controller Job completed </code></pre>
<p>There could be a couple of reasons. Check if both jobs (the ones manually created and the ones created by the cronjob) are using the same image ID: (Assuming there is only one container in your pod)</p> <p><code>kubectl get job &lt;job-name&gt; -o=jsonpath='{.spec.template.spec.containers[0].image}'</code></p> <p>If they both match, it could be two different images with the same tag, which are already present on different nodes in your cluster. This relates to the <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">image pull policy</a> specified on the cronjob. You can check if this is the case by changing the image ID in your cronjob to an image digest.</p> <blockquote> <p>To make sure the Pod always uses the same version of a container image, you can specify the image's digest; replace : with @ (for example, image@sha256:45b23dee08af5e43a7fea6c4cf9c25ccf269ee113168c19722f87876677c5cb2).</p> </blockquote>
python|kubernetes|cron|yaml|containers
1
1,905,131
61,081,241
clickable parameter in html go to python function using flask
<p>i need help in python flask environment i have some html page that in that page iam getting list of ip addresses from SQL database and the ip addresses in the list are clickable. what i need is the ability to click on some IP and be able to use that ip in another function in FLASK.</p> <p>part of code my code example: HTML</p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; {% extends "base.html" %} {% block content %} &lt;body&gt; &lt;script src="http://code.jquery.com/jquery-3.3.1.min.js" integrity="sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=" crossorigin="anonymous"&gt;&lt;/script&gt; &lt;script&gt; function goPython(){ $.ajax({ url: "/clicked", context: document.body }).done(function() { alert('finished python script');; }); } &lt;/script&gt; {% for Devices in ip %} &lt;form action = "http://192.168.1.1:8081/clicked" method = "post"&gt; &lt;ul id="Devicesid"&gt; &lt;li class="label even-row"&gt; &lt;a onclick="goPython()" value="btnSend"&gt;&lt;button type="button" name="btnSend"&gt;{{ Devices.ip }}&lt;/button&gt;&lt;/&lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/form&gt; &lt;/body&gt; {% endfor %} &lt;/html&gt; {% endblock %} </code></pre> <p>and part from the python main.py code:</p> <pre><code> @main.route('/clicked',methods = ['POST', 'GET']) def clicked(): while True: IP = request.form['btnSend'] URL = 'https://' + IP credentials = {'username': 'apiuser', 'secretkey': 'f00D$'} session = requests.session() ###and so on...... </code></pre> <p>as you can see in the part of the HTML iam using FOR loop and getting the ip addresses from my database, now iam trying to have the ability for clicking on the IP address and use it in another function of python FLASK for connect to that practicular device.</p> <p>how can i do it simple and correct ? as iam understand that for making it work i need to use AJAX or JQuery...</p> <p>please help</p>
<p>Try this below in your JS/HTML code (the Jinja value must be rendered into the onclick handler as a quoted string):</p> <pre><code> &lt;!DOCTYPE html&gt; &lt;html&gt; {% extends &quot;base.html&quot; %} {% block content %} &lt;body&gt; &lt;script src=&quot;http://code.jquery.com/jquery-3.3.1.min.js&quot; integrity=&quot;sha256-FgpCb/KJQlLNfOu91ta32o/NMZxltwRo8QtmkMRdAu8=&quot; crossorigin=&quot;anonymous&quot;&gt;&lt;/script&gt; &lt;script&gt; function goPython(currentIp){ $.ajax({ type: &quot;POST&quot;, url: &quot;http://192.168.1.1:8081/clicked&quot;, data: {'current_ip': currentIp} }); } &lt;/script&gt; &lt;form&gt; &lt;ul id=&quot;Devicesid&quot;&gt; {% for Devices in ip %} &lt;li class=&quot;label even-row&quot;&gt; &lt;a value=&quot;btnSend&quot;&gt;&lt;button onclick=&quot;goPython('{{ Devices.ip }}')&quot; type=&quot;button&quot; name=&quot;btnSend&quot;&gt;{{ Devices.ip }}&lt;/button&gt;&lt;/a&gt; &lt;/li&gt; {% endfor %} &lt;/ul&gt; &lt;/form&gt; &lt;/body&gt; {% endblock %} &lt;/html&gt; </code></pre> <p>And in your Flask code, read the value from the posted form data:</p> <pre><code>@main.route('/clicked', methods = ['POST', 'GET']) def clicked(): IP = request.form.get('current_ip', '') URL = 'https://' + IP credentials = {'username': 'apiuser', 'secretkey': 'f00D$'} session = requests.session() ###and so on...... </code></pre>
python|ajax|flask|request.form
0
1,905,132
61,007,511
Simulated Annealing for string matching with Python
<p>I have a problem of implementing a string matching algorithm with SA. After all the iterations are done, I am not getting even closer to the string I want! I tried to decrease the temperature change but nothing has changed.</p> <p>For me, I think that the problem is because <code>p</code> is not decreasing steadily. The reason I think is that <code>de</code> is changing &quot;randomly&quot;. Am I right? If so, how to fix it?</p> <p>The goal is that the score should reach 0 at the end. The score sums up all the distances between the random letters and the actual ones. <code>change_cur_solution</code> changes only one random letter each time.</p> <pre><code>def eval_current_sol(target,cur_sol): dist = 0 for i in range(len(target)): c = cur_sol[i] t = target[i] dist += abs(ord(c) - ord(t)) return dist t = 10000 # loop until match the target it = 0 while True: if t == 0: break print('Current best score ', bestScore, 'Solution', &quot;&quot;.join(bestSol)) if bestScore == 0: break newSol = list(bestSol) change_cur_solution(newSol) score = eval_current_sol(newSol,targetSol) de = score - bestScore if de &lt; 0: ## score &lt; bestScore i.e. 
(score of new solution &lt; score of previous solution) ===&gt; #better bestSol = newSol bestScore = score else: r = random.random() try: p = math.exp(-(de / t)) except: p = 0 print(&quot;p is %f de is %d t is %d&quot; %(p, de,t)) if p &gt; r: bestSol = newSol bestScore = score it += 1 t -= 0.5 print('Found after, ',it, 'Iterations' ) </code></pre> <p>Here is a sample of the code running when t is about 700</p> <p><a href="https://i.stack.imgur.com/lIohk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lIohk.png" alt="Here is a sample of the code running when t is about 700" /></a></p> <p>Here is another sample run at the end:</p> <p><a href="https://i.stack.imgur.com/DaxH3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DaxH3.png" alt="Here is another sample run at the end" /></a></p> <p>Note: a similar code was done with hill climbing and worked fine.</p>
<pre><code>t -= 0.5 </code></pre> <p>is a linear cooling schedule, which is generally not the best choice. Have you tried geometric cooling?</p> <pre><code>t = t * 0.95 </code></pre> <p>Of course, 0.95 is a guess, and you will want to explore different start/stop temperature combinations and cooling factors.</p>
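A quick arithmetic comparison of the two schedules (the stopping thresholds are illustrative):

```python
# Linear cooling: t -= 0.5 starting from 10000 takes 20000 iterations
# to reach zero temperature.
linear_steps = int(10000 / 0.5)

# Geometric cooling: t *= 0.95 drops below 1e-3 in a few hundred steps.
t, geo_steps = 10000.0, 0
while t > 1e-3:
    t *= 0.95
    geo_steps += 1

print(linear_steps, geo_steps)  # 20000 vs roughly 315
```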
python|artificial-intelligence|simulated-annealing
0
1,905,133
65,967,096
Multi object tracking - Expected Ptr<cv::legacy::Tracker> for argument 'newTracker'
<p>I have been playing with opencv2 using Python for tracking multiple objects -</p> <p><code>cv2.__version__</code> = 4.5.1</p> <p>code -</p> <pre><code>import imutils import time import cv2 import numpy as np trackers = cv2.legacy_MultiTracker.create() vs = cv2.VideoCapture('4.mp4') while True: frame = vs.read() if frame is None: break frame = frame[1] (success, boxes) = trackers.update(frame) for box in boxes: (x, y, w, h) = [int(v) for v in box] cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2) cv2.imshow(&quot;Frame&quot;, frame) key = cv2.waitKey(1) &amp; 0xFF if key == ord(&quot;s&quot;): box = cv2.selectROI(&quot;Frame&quot;, frame, fromCenter=False, showCrosshair=True) print(box) tracker = cv2.TrackerKCF() trackers.add(tracker, frame, box) elif key == ord(&quot;q&quot;): break vs.release() cv2.destroyAllWindows() </code></pre> <p>I got an error -</p> <pre><code>---&gt;trackers.add(tracker, frame, box) TypeError: Expected Ptr&lt;cv::legacy::Tracker&gt; for argument 'newTracker' </code></pre> <p>I like to know about this error, but cant find any blogs. Add, I think, cv2.MultiTracker_create() function is replaced with cv2.legacy_MultiTracker.create()</p> <p>Help me, Thanks</p>
<p>cv2.legacy.TrackerXXX_create() + cv2.legacy_MultiTracker.create() work for me (note that in 4.5.1 MOSSE also lives in the legacy module). This is the new code:</p> <pre><code>from __future__ import print_function import sys import cv2 from random import randint trackerTypes = ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN', 'MOSSE', 'CSRT'] def createTrackerByName(trackerType): # Create a tracker based on tracker name if trackerType == trackerTypes[0]: tracker = cv2.legacy.TrackerBoosting_create() elif trackerType == trackerTypes[1]: tracker = cv2.legacy.TrackerMIL_create() elif trackerType == trackerTypes[2]: tracker = cv2.legacy.TrackerKCF_create() elif trackerType == trackerTypes[3]: tracker = cv2.legacy.TrackerTLD_create() elif trackerType == trackerTypes[4]: tracker = cv2.legacy.TrackerMedianFlow_create() elif trackerType == trackerTypes[5]: tracker = cv2.legacy.TrackerGOTURN_create() elif trackerType == trackerTypes[6]: tracker = cv2.legacy.TrackerMOSSE_create() elif trackerType == trackerTypes[7]: tracker = cv2.legacy.TrackerCSRT_create() else: tracker = None print('Incorrect tracker name') print('Available trackers are:') for t in trackerTypes: print(t) return tracker # Set video to load videoPath = &quot;bikefit.mov&quot; # Create a video capture object to read videos cap = cv2.VideoCapture(videoPath) # Read first frame success, frame = cap.read() # quit if unable to read the video file if not success: print('Failed to read video') sys.exit(1) ## Select boxes bboxes = [] colors = [] # OpenCV's selectROI function doesn't work for selecting multiple objects in Python # So we will call this function in a loop till we are done selecting all objects while True: # draw bounding boxes over objects # selectROI's default behaviour is to draw box starting from the center # when fromCenter is set to false, you can draw box starting from top left corner bbox = cv2.selectROI('MultiTracker', frame) bboxes.append(bbox) colors.append((randint(0, 255), randint(0, 255), randint(0, 255))) print(&quot;Press q to quit selecting boxes and start tracking&quot;) print(&quot;Press any other key to select next object&quot;) k = cv2.waitKey(0) &amp; 0xFF print(k) if (k == 113): # q is pressed break print('Selected bounding boxes {}'.format(bboxes)) # Specify the tracker type trackerType = &quot;CSRT&quot; # Create MultiTracker object multiTracker = cv2.legacy.MultiTracker_create() # Initialize MultiTracker for bbox in bboxes: multiTracker.add(createTrackerByName(trackerType), frame, bbox) # Process video and track objects while cap.isOpened(): success, frame = cap.read() if not success: break # get updated location of objects in subsequent frames success, boxes = multiTracker.update(frame) # draw tracked objects for i, newbox in enumerate(boxes): p1 = (int(newbox[0]), int(newbox[1])) p2 = (int(newbox[0] + newbox[2]), int(newbox[1] + newbox[3])) cv2.rectangle(frame, p1, p2, colors[i], 2, 1) # show frame cv2.imshow('MultiTracker', frame) # quit on ESC button if cv2.waitKey(1) &amp; 0xFF == 27: # Esc pressed break </code></pre>
python|opencv|opencv3.0|cv2
7
1,905,134
65,985,093
Python update method
<p>I have been looking how to achieve that, but Im really missing something there as I can't find any solution. Im trying to group all get_campaign_result['data'] keys with empty values, grouped by get_campaign_result['campaignId']</p> <p>Any idea would help, thank you so much</p> <pre><code>get_campaign_result = [{'id': '549972d5c469885e548b4577', 'campaignId': '5499612ec4698839368b4573', 'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36', 'location': 'Amsterdam, Netherlands', 'date': '2014-01-15T19:48:06.003Z', 'customData': {'form_name': 'form1'}, 'data': {'text': 'test'}, 'time': 5000, 'url': 'https://usabilla.com'}, {'id': '549972d5c469885e548b4570', 'campaignId': '5499612ec4698839368b4573', 'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36', 'location': 'Amsterdam, Netherlands', 'date': '2014-01-15T19:48:06.003Z', 'customData': {'form_name': 'form1'}, 'data': {'text': 'test'}, 'time': 5000, 'url': 'https://usabilla.com'}, {'id': '549972d5c469885e548b4575', 'campaignId': '5499612ec4698839368b4573', 'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36', 'location': 'Amsterdam, Netherlands', 'date': '2014-01-15T19:48:06.003Z', 'customData': {'form_name': 'form1'}, 'data': {'uuuu': 'test'}, 'time': 5000, 'url': 'https://usabilla.com'}, {'id': '549972d5c469885e548b4522', 'campaignId': '5499612ec4698839368b4578', 'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36', 'location': 'Amsterdam, Netherlands', 'date': '2014-01-15T19:48:06.003Z', 'customData': {'form_name': 'form1'}, 'data': {'text2': 'test'}, 'time': 5000, 'url': 'https://usabilla.com'}, {'id': '549972d5c469885e548b4533', 'campaignId': '5499612ec4698839368b4578', 'userAgent': 
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36', 'location': 'Amsterdam, Netherlands', 'date': '2014-01-15T19:48:06.003Z', 'customData': {'form_name': 'form1'}, 'data': {'text__': 'test'}, 'time': 5000, 'url': 'https://usabilla.com'}]
</code></pre> <p>MY CODE</p> <pre><code>structure = {}
for campaignresult in get_campaign_result:
    custom_element = campaignresult['data']
    campaign_id = campaignresult['campaignId']
    structure[campaign_id] = {}
    for element in custom_element:
        d = {element: ''}
        print(structure)
        structure[campaign_id].update(d)
</code></pre> <p>Result I get:</p> <pre><code>{'5499612ec4698839368b4573': {'uuuu': ''}, '5499612ec4698839368b4578': {'text__': ''}}
</code></pre> <p>Result I'm expecting:</p> <pre><code>{'5499612ec4698839368b4573': {'text': '', 'uuuu': ''}, '5499612ec4698839368b4578': {'text2': '', 'text__': ''}}
</code></pre>
<p>I am not sure about the answer in terms of code, but I think the result you are expecting is not possible. You have a dictionary with two keys: <code>5499612ec4698839368b4573</code> and <code>5499612ec4698839368b4578</code>. These two are possible.</p> <p>However, these two (<code>{'uuuu': ''}</code> and <code>{'text__': ''}</code>) are not. Every item in a dictionary must have a key. In this case, both of these items have no keys on the parent dict. You might be mixing the list and dictionary syntaxes.</p> <p>Hope this helps you understand the problem better!</p>
python
1
1,905,135
72,658,962
Double hourly notification Python, Ubuntu, Telegram
<p>While developing a Telegram bot in Python, I ran into a problem with notifications triggering on their own on an Ubuntu system. Let's start from the beginning. For the daily notification I use a library called &quot;Schedule&quot;. I won't fully describe the bot in the code, but it looks something like this:</p> <pre><code>import time
import schedule
from multiprocessing import *

def start_process():
    Process(target=P_schedule.start_schedule, args=()).start()

class P_schedule():
    def start_schedule():
        schedule.every().day.at(&quot;19:00&quot;).do(P_schedule.send_message)
        while True:
            schedule.run_pending()
            time.sleep(1)

    def send_message():
        bot.send_message(user_ID, 'Message Text')
</code></pre> <p>There don't seem to be any errors here, and it runs correctly. Then I uploaded all of this to the Ubuntu system and set up &quot;systemd&quot; for autostart with these commands:</p> <pre><code>vim /etc/systemd/system/bot.service

[Unit]
Description=Awesome Bot
After=syslog.target
After=network.target

[Service]
Type=simple
User=bot
WorkingDirectory=/home/bot/tgbot
ExecStart=/usr/bin/python3 /home/bot/tgbot/bot.py
Restart=always

[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable bot
systemctl start bot
</code></pre> <p>After making edits to the code, I restart &quot;systemd&quot; with the command:</p> <pre><code>systemctl restart bot
</code></pre> <p>The problem is the following: when I change the time of the notification, it starts arriving both at the time I just specified and at the time that was set before. As I understand it, &quot;systemd&quot; stores the old time value in a cache somewhere. How can I get &quot;systemd&quot; to clear this cache?</p>
<p>It helped to reboot the system with the command:</p> <pre><code>sudo systemctl reboot </code></pre>
python|ubuntu|telegram|telegram-bot|systemd
0
1,905,136
68,446,731
Getting the coordinates of elements in clusters without a loop in numpy
<p>I have a 2D array, where I label clusters using the <code>ndimage.label()</code> function like this:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.ndimage import label

input_array = np.array([[0, 1, 1, 0],
                        [1, 1, 0, 0],
                        [0, 0, 0, 1],
                        [0, 0, 0, 1]])
labeled_array, _ = label(input_array)
# Result:
# labeled_array == [[0, 1, 1, 0],
#                   [1, 1, 0, 0],
#                   [0, 0, 0, 2],
#                   [0, 0, 0, 2]]
</code></pre> <p>I can get the element counts, the centroids or the bounding box of the labeled clusters. But I would like to also get the coordinates of each element in clusters. Something like this (the data structure doesn't have to be like this, any data structure is okay):</p> <pre class="lang-py prettyprint-override"><code>{
    1: [(0, 1), (0, 2), (1, 0), (1, 1)],  # Coordinates of the elements that have the label &quot;1&quot;
    2: [(2, 3), (3, 3)]                   # Coordinates of the elements that have the label &quot;2&quot;
}
</code></pre> <p>I can loop over the label list and call <code>np.where()</code> for each one of them, but I wonder if there is a way to do this without a loop, so that it would be faster?</p>
<p>You can make a map of the coordinates, sort it, and split it:</p> <pre class="lang-py prettyprint-override"><code># Get the indexes (coordinates) of the labeled (non-zero) elements
ind = np.argwhere(labeled_array)

# Get the labels corresponding to those indexes above
labels = labeled_array[tuple(ind.T)]

# Sort both arrays so that lower label numbers appear before higher label numbers.
# This is not for cosmetic reasons: we will use the sorted nature of these
# label indexes when we use the &quot;diff&quot; method in the next step.
sort = labels.argsort()
ind = ind[sort]
labels = labels[sort]

# Find the split points where a new label number starts in the ordered label numbers
splits = np.flatnonzero(np.diff(labels)) + 1

# Create a data structure out of the label numbers and indexes (coordinates).
# The first argument to the zip is the 0th label number plus the label numbers
# at the split points; the second argument is the indexes (coordinates), split
# at the split points, so both arguments to the zip function have the same length.
result = {k: v for k, v in zip(labels[np.r_[0, splits]], np.split(ind, splits))}
</code></pre>
4
1,905,137
59,251,630
Pandas style background gradient not showing in jupyter notebook
<p>I am trying to print a pandas dataframe with a background gradient for better readability. I tried to apply what I found in the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/style.html" rel="noreferrer">docs</a> to a simple use case, but I can't get jupyter notebook to actually print the table with the colors - I keep getting the plain dataframe. Small example:</p> <pre><code>import seaborn as sns import pandas as pd cm = sns.light_palette('green', as_cmap=True) df_res = pd.DataFrame(index =['foo','bar'],columns = ['Value 1','Value 2','Value 3']) df_res.loc['foo'] = [-.5*100, .3,.2] df_res.loc['bar'] = [.3*100, .6,.9] df_res.style.background_gradient(cmap=cm) </code></pre> <p>which just prints</p> <p><a href="https://i.stack.imgur.com/qYo3o.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qYo3o.png" alt="this simple dataframe"></a>.</p> <p>I tried different printing techniques, i.e.</p> <pre><code>pretty = df_res.style.background_gradient(cmap=cm) display(pretty) </code></pre> <p>or</p> <pre><code>print(pretty) </code></pre> <p>or a different colormap</p> <pre><code>df_res.style.background_gradient(cmap='viridis') </code></pre> <p>but none of them work. I also tried if the styler works at all, but at least the applymap function does what it's supposed to:</p> <pre><code>def color_negative_red(val): """ Takes a scalar and returns a string with the css property `'color: red'` for negative strings, black otherwise. """ color = 'red' if val &lt; 0 else 'black' return 'color: %s' % color df_res.style.applymap(color_negative_red) </code></pre> <p>which prints</p> <p><a href="https://i.stack.imgur.com/jhT7F.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jhT7F.png" alt=""></a></p> <p>So not sure why the background_gradient doesn't seem to have any effect.</p> <p><strong>EDIT</strong>: Just found the reason. It's a simple fix but in case someone else struggles with the same problem I'm keeping this up. 
Apparently pandas initialized the dataframe with the elements being objects instead of floats. So simple changing initialization to</p> <pre><code>df_res = pd.DataFrame(index =['foo','bar'],columns = ['Value 1','Value 2','Value 3']).astype('float') </code></pre> <p>solved the issue.</p>
<p>The dtypes of your dataframe are 'object', not numeric.</p> <p>First, change the dtypes in your dataframe to numeric:</p> <pre><code>df_res.apply(pd.to_numeric).style.background_gradient(cmap=cm)
</code></pre> <p>Output: <a href="https://i.stack.imgur.com/kjuiR.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/kjuiR.jpg" alt="enter image description here"></a></p> <hr> <p>Note the dtypes:</p> <pre><code>import seaborn as sns
import pandas as pd

cm = sns.light_palette('green', as_cmap=True)

df_res = pd.DataFrame(index=['foo', 'bar'], columns=['Value 1', 'Value 2', 'Value 3'])
df_res.loc['foo'] = [-.5 * 100, .3, .2]
df_res.loc['bar'] = [.3 * 100, .6, .9]
df_res.info()
</code></pre> <p>Output:</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt;
Index: 2 entries, foo to bar
Data columns (total 3 columns):
Value 1    2 non-null object
Value 2    2 non-null object
Value 3    2 non-null object
dtypes: object(3)
memory usage: 144.0+ bytes
</code></pre>
python|pandas|jupyter-notebook|pandas-styles
8
1,905,138
59,242,582
How to destroy the icon tray when the program ends
<p>I have a class that is called when my .py program starts and it creates an Icon tray in the windows task bar. In it, there is the option <code>quit</code>, that is mapped to the function <code>kill_icon_tray</code> from my class, that should terminate the Icon and then finish my program. </p> <p>This is the class (some methods were ommited as they are not needed):</p> <pre class="lang-py prettyprint-override"><code>from infi.systray import SysTrayIcon class Tray_icon_controller: def __init__(self): self.menu_options = (("Open Chat Monitor", None, self.open_chat),) self.systray = SysTrayIcon("chat.ico", "Engineer Reminder", self.menu_options, on_quit=self.kill_icon_tray); def init_icon_tray(self): self.systray.start(); def kill_icon_tray(self, systray): self.systray.shutdown() </code></pre> <p>But this is returning me the following exception whenever I click on <code>quit</code> in the Icon tray:</p> <pre><code>$ py engineer_reminder.py Traceback (most recent call last): File "_ctypes/callbacks.c", line 237, in 'calling callback function' File "C:\Users\i866336\AppData\Local\Programs\Python\Python38-32\lib\site-packages\infi\systray\traybar.py", line 79, in WndProc self._message_dict[msg](hwnd, msg, wparam.value, lparam.value) File "C:\Users\i866336\AppData\Local\Programs\Python\Python38-32\lib\site-packages\infi\systray\traybar.py", line 195, in _destroy self._on_quit(self) File "C:\Users\i866336\Documents\GitHub\chat_reminder\cl_tray_icon_controller.py", line 17, in kill_icon_tray self.systray.shutdown() File "C:\Users\i866336\AppData\Local\Programs\Python\Python38-32\lib\site-packages\infi\systray\traybar.py", line 123, in shutdown self._message_loop_thread.join() File "C:\Users\i866336\AppData\Local\Programs\Python\Python38-32\lib\threading.py", line 1008, in join raise RuntimeError("cannot join current thread") RuntimeError: cannot join current thread </code></pre> <p>I tried to modify the method <code>kill_icon_tray</code> to this instead, but it threw the 
same exception:</p> <pre class="lang-py prettyprint-override"><code> def kill_icon_tray(self, systray): self.systray.shutdown() </code></pre> <p>As per <code>infi.systray</code> <a href="https://github.com/Infinidat/infi.systray" rel="nofollow noreferrer">documentation</a>, I am doing it correctly:</p> <blockquote> <p>To destroy the icon when the program ends, call <code>systray.shutdown()</code></p> </blockquote> <p>So I'm not sure what I'm missing here... could anyone assist? Thanks!</p>
<p>I experienced the same problem as you. You may have found the solution already, but this is for anyone else who runs into the same problem.</p> <p>What fixed it for me was changing <code>systray.shutdown()</code> to <code>SysTrayIcon.shutdown</code>:</p> <pre><code>def kill_icon_tray(systray):
    SysTrayIcon.shutdown
</code></pre> <p>Hope this helps.</p>
python|systray
0
1,905,139
72,938,818
Python Can't Write Results To File
<p>I wrote the code below. It works, but it can't write the results to the file. It's about testing the Collatz conjecture. Please help, it's important.</p> <pre><code>with open(&quot;MyFile.txt&quot;, &quot;w&quot;) as file1:
    def test(x):
        if x==1:
            print(x)
            file1.write(str(x))
            print(&quot;=================================================================================&quot;)
            file1.write(&quot;================================================================================= \n&quot;)
        elif x%2==0:
            print(x)
            file1.write(str(x))
            test(x/2)
        else:
            print(x)
            file1.write(str(x))
            test(3*x+1)

    for x in range (2**100,9**100):
        print (&quot;testing for &quot;,x)
        file1.write(str(x))
        test(x)

file1.close()
</code></pre>
<p>Running your program creates an extremely large file really fast (more or less 10MB/s on my machine). The first line of generated file looks like this:</p> <pre><code>126765060022822940149670320537612676506002282294014967032053766.338253001141147e+293.1691265005705735e+291.5845632502852868e+297.922816251426434e+283.961408125713217e+281.9807040628566084e+289.903520314283042e+274.951760157141521e+272.4758800785707605e+271.2379400392853803e+276.189700196426902e+263.094850098213451e+261.5474250491067253e+267.737125245533627e+253.8685626227668134e+251.9342813113834067e+259.671406556917033e+244.835703278458517e+242.4178516392292583e+241.2089258196146292e+246.044629098073146e+233.022314549036573e+231.5111572745182865e+237.555786372591432e+223.777893186295716e+221.888946593147858e+229.44473296573929e+214.722366482869645e+212.3611832414348226e+211.1805916207174113e+215.902958103587057e+202.9514790517935283e+201.4757395258967641e+207.378697629483821e+193.6893488147419103e+191.8446744073709552e+199.223372036854776e+184.611686018427388e+182.305843009213694e+181.152921504606847e+185.764607523034235e+172.8823037615171174e+171.4411518807585587e+177.205759403792794e+163.602879701896397e+161.8014398509481984e+169007199254740992.04503599627370496.02251799813685248.01125899906842624.0562949953421312.0281474976710656.0140737488355328.070368744177664.035184372088832.017592186044416.08796093022208.04398046511104.02199023255552.01099511627776.0549755813888.0274877906944.0137438953472.068719476736.034359738368.017179869184.08589934592.04294967296.02147483648.01073741824.0536870912.0268435456.0134217728.067108864.033554432.016777216.08388608.04194304.02097152.01048576.0524288.0262144.0131072.065536.032768.016384.08192.04096.02048.01024.0512.0256.0128.064.032.016.08.04.02.01.0================================================================================= </code></pre> <p>What you might want to do, is to put a new line after each number, so that the numbers don't mix (some are integers, 
some are floats in scientific notation):</p> <pre class="lang-py prettyprint-override"><code>file1.write(str(x) + '\n') # or file1.write(f'{x}\n') </code></pre> <p>Now the file looks like this:</p> <pre><code>1267650600228229401496703205376 1267650600228229401496703205376 6.338253001141147e+29 3.1691265005705735e+29 1.5845632502852868e+29 7.922816251426434e+28 3.961408125713217e+28 1.9807040628566084e+28 9.903520314283042e+27 4.951760157141521e+27 ... </code></pre> <h3>File location</h3> <p>Please also remember that the file will be created in your current working directory (you can check it with <a href="https://docs.python.org/3/library/os.html#os.getcwd" rel="nofollow noreferrer"><code>os.getcwd</code></a>). If you want it to reside in a fixed place, use <a href="https://docs.python.org/3/library/os.html#os.chdir" rel="nofollow noreferrer"><code>os.chdir</code></a> (as suggested by <a href="https://stackoverflow.com/users/3589122/gordonaitchjay">GordonAitchJay</a> in the comments)</p>
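As a side note (a sketch, not part of the accepted fix): the recursion in `test()` will hit Python's default recursion limit for long chains, and `x/2` is what turns the values into floats. An iterative version with integer division avoids both, and the `with` block already closes the file, so the trailing `file1.close()` is unnecessary:

```python
def collatz_lines(x):
    """Return the Collatz sequence starting at x, one value per line."""
    lines = []
    while x != 1:
        lines.append(str(x))
        x = x // 2 if x % 2 == 0 else 3 * x + 1  # // keeps the values as ints
    lines.append('1')
    return '\n'.join(lines) + '\n'

with open('MyFile.txt', 'w') as file1:  # 'with' closes the file automatically
    for x in range(2, 10):              # a small, finite range for the sketch
        file1.write(collatz_lines(x))

print(collatz_lines(6))  # 6, 3, 10, 5, 16, 8, 4, 2, 1 -- one per line
```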
python|file
0
1,905,140
62,453,151
pandas.read_csv turns strings into 'numbers' in scientific notation (which I don't want)
<p>I have a dataset where some of the sample identifiers (found in the index column) can be interpreted as numbers. Examples: 20010104123140E5 and 2001010412314529. I try to specify that the index column has type string, but pandas.read_csv insists on turning identifiers into floats. See example below.</p> <p>Does anyone know how I can get around this? Or am I doing something wrong here?</p> <pre><code>import pandas as pd

with open('test.data', mode = 'w') as infile:
    infile.write('id\tval\n20010104123140E5\t1\n2001010412314529\t2')

df = pd.read_csv('test.data', dtype = {'id':'str', 'val':'float'}, sep='\t', index_col='id')
print(df)
</code></pre>
<p>Use <code>df.index = df.index.astype(str)</code></p>
python|pandas|scientific-notation
1
1,905,141
62,224,113
Find all elements by partially matched tag in Python ElementTree using XPath
<p>I'm trying to find all heading elements in an XHTML ElementTree, and I was wondering if there is any way to do this with XPath.</p> <pre class="lang-html prettyprint-override"><code>&lt;body&gt; &lt;h1&gt;title&lt;/h1&gt; &lt;h2&gt;heading 1&lt;/h2&gt; &lt;p&gt;text&lt;/p&gt; &lt;h3&gt;heading 2&lt;/h3&gt; &lt;p&gt;text&lt;/p&gt; &lt;h2&gt;heading 3&lt;/h2&gt; &lt;p&gt;text&lt;/p&gt; &lt;/body&gt; </code></pre> <p>My aim is to get all the heading elements in order, and the naive solution doesn't work:</p> <pre class="lang-py prettyprint-override"><code>for element in tree.iterfind("h*"): foo(element) </code></pre> <p>Because they should be ordered, I cannot iterate through each heading element individually</p> <pre class="lang-py prettyprint-override"><code>headings = {f"h{n}" for n in range(1, 6+1)} for heading in headings: for element in tree.iterfind(heading): foo(element) </code></pre> <p>(but <code>for element in filter(lambda el: el.tag in headings, tree.iterfind())</code> works)</p> <p>and I can't use regex because it breaks on comments (which doesn't use string tags)</p> <pre class="lang-py prettyprint-override"><code>import re pattern = re.compile("^h[1-6]$") is_heading = lambda el: pattern.match(el.tag) for element in filter(is_heading, tree.iterfind()): foo(element) </code></pre> <p>(but <code>is_heading = lambda el: isinstance(el.tag, str) and pattern.match(el.tag)</code> works)</p> <p>None of the solutions are particularly elegant, so I was wondering if there was a better way of finding all heading elements in order using xpath?</p>
<p>Like this:</p> <pre class="lang-sh prettyprint-override"><code>//*[self::h1 or self::h2 or self::h3] </code></pre>
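One caveat worth adding: the expression above needs a full XPath 1.0 engine such as lxml — the stdlib <code>xml.etree.ElementTree</code> implements only a small XPath subset without the <code>self::</code> axis. With the stdlib alone, document order can still be kept in a single <code>iter()</code> pass (comment nodes, whose <code>tag</code> is not a string, simply fail the set-membership test, so no special-casing is needed):

```python
import xml.etree.ElementTree as ET

body = ET.fromstring("""<body>
    <h1>title</h1>
    <h2>heading 1</h2><p>text</p>
    <h3>heading 2</h3><p>text</p>
    <h2>heading 3</h2><p>text</p>
</body>""")

HEADINGS = {f'h{n}' for n in range(1, 7)}  # {'h1', ..., 'h6'}

# .iter() walks the whole tree in document order
ordered = [el for el in body.iter() if el.tag in HEADINGS]
print([(el.tag, el.text) for el in ordered])
# [('h1', 'title'), ('h2', 'heading 1'), ('h3', 'heading 2'), ('h2', 'heading 3')]
```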
python-3.x|xml|xpath|elementtree
2
1,905,142
59,003,364
Why do some events not execute when called?
<p>Python 3.</p> <p>Hello. I made a game which starts off with a main menu and when 'd' is pressed, it will cut to the game screen.</p> <p>Before I made this main menu, when I would hold space bar, the shapes would rumble. Now when I press 'd' to start the game, the objects are displayed, but holding space bar doesn't do anything, and neither does pressing escape or closing the game. It seems like the keyboard events / game events are not being called anymore once the 'd' is pressed. Code:</p> <pre><code>import pygame import random import time BLACK = (0, 0, 0) WHITE = (255, 255, 255) GREEN = (0, 255, 0) RED = (255, 0, 0) # Edit the intensity of the shake (Must be one number apart) # Ex: a = -100, b = 101. A is negative, B is positive a = -4 b = 5 up = 10 intensity = (a, b) startGame = True # Image Loading pygame.init() size = (700, 500) screen = pygame.display.set_mode(size) pygame.display.set_caption("My Game") done = False clock = pygame.time.Clock() class Rectangle(): def __init__(self): self.x = random.randrange(0, 700) self.y = random.randrange(0, 500) self.height = random.randrange(20, 70) self.width = random.randrange(20, 70) self.x_change = random.randrange(-3, 3) self.y_change = random.randrange(-3, 3) self.color = random.sample(range(250), 4) def draw(self): pygame.draw.rect(screen, self.color, [self.x, self.y, self.width, self.height]) def move(self): self.x += self.x_change self.y += self.y_change class Ellipse(Rectangle): pass def draw(self): pygame.draw.ellipse(screen, self.color, [self.x, self.y, self.width, self.height]) def move(self): self.x += self.x_change self.y += self.y_change def text_objects(text, font): textSurface = font.render(text, True, BLACK) return textSurface, textSurface.get_rect() def game_intro(): global event intro = True keys = pygame.key.get_pressed() while intro: for event in pygame.event.get(): print(event) if event.type == pygame.QUIT: pygame.quit() quit() screen.fill(WHITE) largeText = pygame.font.Font('freesansbold.ttf', 
45) smallText = pygame.font.Font('freesansbold.ttf', 30) TextSurf, TextRect = text_objects("Welcome to Crazy Rumble.", largeText) TextRect.center = ((700 / 2), (100 / 2)) TextSurff, TextRectt = text_objects("Press enter to start", smallText) TextRectt.center = ((700 / 2), (900 / 2)) TextStart, TextRecttt = text_objects("Hold space to make the shapes shake!", smallText) TextRecttt.center = ((700 / 2), (225 / 2)) screen.blit(TextSurf, TextRect) screen.blit(TextSurff, TextRectt) screen.blit(TextStart, TextRecttt) pygame.display.update() if event.type == pygame.KEYUP: intro = False startGame = True global intro my_list = [] for number in range(600): my_object = Rectangle() my_list.append(my_object) for number in range(600): my_object = Ellipse() my_list.append(my_object) # -------- Main Program Loop ----------- while not done: game_intro() game_intro = True if event.type == pygame.KEYUP: game_intro = False keys = pygame.key.get_pressed() # --- Main event loop while game_intro == False: for event in pygame.event.get(): if event.type == pygame.QUIT: done = True screen.fill(BLACK) for rect in my_list: rect.draw() rect.move() for rectElli in my_list: rectElli.draw() if keys[pygame.K_SPACE]: rectElli.y_change = random.randrange(a, b) rectElli.x_change = random.randrange(a, b) rectElli.move() if keys[pygame.K_UP]: print(up) print(intensity) up += 1 if up % 10 == 0: a -= 1 b -= -1 else: a, b = -4, 5 pygame.display.flip() clock.tick(60) </code></pre>
<p>You're just setting <code>keys</code> once with</p> <pre><code>keys = pygame.key.get_pressed() </code></pre> <p>You need to put that call inside the loop, so it gets updated after every event.</p>
python|pygame
2
1,905,143
59,034,806
cosine similarity plot is jumbled up with names running together
<p>I have a small list of docs for which I am plotting cosine similarity. The doc names are pretty long, and I can't figure out how to keep them from running together on the plot. Here is what the file names look like:</p> <pre><code>['0-W909MY17R0016', '10 ID04160056 TOR 3.17.17', 'ENVG', 'FA5270-14-R-0027', 'GSS',
 'H9240819R0001_1Oct19', 'LCLSC16R0005', 'LTLMII RFPFINALRELEASED', 'N00019-15-R-2004',
 'N0010418RK032_for_PR_N0010418NB058', 'N00164-16-R-JQ94_RFP', 'N0025319R0001',
 'N6134019R0007_RFP', 'N66604-18-R-0881_Conformed_Through_Amendment_0006',
 'NGLD_M_Draft_RFP_Final (3)', 'SOL-615-16-000001_-PLSO_SOL', 'SPRDL115R0414_0000',
 'W15QKN-18-R-0065_-_MMO', 'W58RGZ-17-R-0211', 'W912P618B0009_FB_FAC_SUPPORT_SVCS-_FBO',
 'W91CRB17R0004_STORM_II', 'Full_Project_Announcement_RIK-OTA-F16EW_03_Jan_2019',
 'MQ-25 Final RFP N00019-17-R-0087', 'Solicitation N00421-18-R-0091 - Enhanced Visual Acuity (EVA)']
</code></pre> <p>I did a basic cosine distance between docs:</p> <pre><code>from sklearn.metrics.pairwise import cosine_distances
from sklearn.manifold import MDS

cos_distances = cosine_distances(dtm)
mds_map = MDS(dissimilarity='precomputed')
pos = mds_map.fit_transform(cos_distances)
</code></pre> <p>And a basic matplotlib scatterplot:</p> <pre><code># pos contains the x and y coordinates of each of the documents
x = pos[:,0]
y = pos[:,1]

# we will need matplotlib to generate a scatter plot
import matplotlib.pyplot as plt

for i, j, name in zip(x,y,files):
    plt.scatter(i,j)
    plt.text(i,j,name)

plt.show()
</code></pre> <p>I'm having trouble finding documentation that deals with this specifically.</p>
<p>You can plot every point with a different color and/or marker, and create a legend to put outside the plot where you can show the filenames:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # Random 2D points to make scatter plot x = [np.random.random() for i in range(len(names))] y = [np.random.random() for i in range(len(names))] fig = plt.figure(figsize=(20, 8)) ax = plt.subplot(111) </code></pre> <p>If you don't want to manually assign a color to each filename, you can map a pyplot colormap to a list of colors and use that in the scatter plot:</p> <pre><code>colors = plt.cm.rainbow(np.linspace(0, 1, len(names))) for i, j, name in zip(x, y, names): ax.scatter(i, j, label=name, c=colors[names.index(name)]) fig.subplots_adjust(right=0.6) # This is needed so that the legend is not cut out of the figure ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize=12) plt.show() </code></pre> <p>Result: <a href="https://i.stack.imgur.com/ih4YM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ih4YM.png" alt="enter image description here"></a></p> <p>You can use the <code>bbox_to_anchor</code> parameter to move the legend around.</p> <p>If you want to assign individual colors or markers, the only way I can think of doing it is by creating a dictionary. 
For example:</p> <pre><code>colors = plt.cm.rainbow(np.linspace(0, 1, len(names))) plot_names = {'0-W909MY17R0016': [colors[0], 'o'], '10 ID04160056 TOR 3.17.17': [colors[1], 'x'], 'ENVG': [colors[2], '*'], 'FA5270-14-R-0027': [colors[3], '^']} for i, j, name in zip(x, y, names): ax.scatter(i, j, label=name, c=plot_names[name][0], marker=plot_names[name][1]) fig.subplots_adjust(right=0.6) ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), fontsize=12) plt.show() </code></pre> <p>Result: <a href="https://i.stack.imgur.com/kghE1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kghE1.png" alt="enter image description here"></a></p> <p>You can see all the available markers <a href="https://matplotlib.org/3.1.1/api/markers_api.html" rel="nofollow noreferrer">here</a>. You can also change their sizes, borders, etc.</p>
python-3.x|matplotlib
4
1,905,144
31,462,415
Only one button in a panel with multiple togglebuttons changes color - wxPython
<p>I want to set the color of a toggle button of my choice in the panel that I have created. The problem is that in the numerous toggle buttons that I have displayed on my panel when I want to change the color of each one only the color of the last button changes. Here's my code:</p> <pre><code>import wx class Frame(wx.Frame): def __init__(self): wx.Frame.__init__(self,None) self.panel = wx.Panel(self,wx.ID_ANY) self.sizer = wx.BoxSizer(wx.VERTICAL) self.flags_panel = wx.Panel(self, wx.ID_ANY, style = wx.SUNKEN_BORDER) self.sizer.Add(self.flags_panel) self.SetSizer(self.sizer,wx.EXPAND | wx.ALL) self.flags = Flags(self.flags_panel, [8,12]) self.flags.Show() class Flags (wx.Panel): def __init__(self,panel, num_flags = []):#,rows = 0,columns = 0,radius = 0, hspace = 0, vspace = 0,x_start = 0, y_start = 0 wx.Panel.__init__(self,panel,-1, size = (350,700)) num_rows = num_flags[0] num_columns = num_flags[1] x_pos_start = 10 y_pos_start = 10 i = x_pos_start j = y_pos_start buttons = [] for i in range (num_columns): buttons.append('toggle button') self.ButtonValue = False for button in buttons: index = 0 while index != 15: self.Button = wx.ToggleButton(self,-1,size = (10,10), pos = (i,j)) self.Bind(wx.EVT_TOGGLEBUTTON,self.OnFlagCreation, self.Button) self.Button.Show() i += 15 index += 1 j += 15 i = 10 self.Show() def OnFlagCreation(self,event): if not self.ButtonValue: self.Button.SetBackgroundColour('#fe1919') self.ButtonValue = True else: self.Button.SetBackgroundColour('#14e807') self.ButtonValue = False if __name__ == '__main__': app = wx.App(False) frame = Frame() frame.Show() app.MainLoop() </code></pre>
<p>Your problem is quite simple. The last button is always changed because it's the last button defined:</p> <pre><code>self.Button = wx.ToggleButton(self,-1,size = (10,10), pos = (i,j))
</code></pre> <p>Each time through the <code>for</code> loop, you reassign the <code>self.Button</code> attribute to a different button. What you want to do is extract the button from your event object and change its background color. So change your function to look like this:</p> <pre><code>def OnFlagCreation(self, event):
    btn = event.GetEventObject()
    if not self.ButtonValue:
        btn.SetBackgroundColour('#fe1919')
        self.ButtonValue = True
    else:
        btn.SetBackgroundColour('#14e807')
        self.ButtonValue = False
</code></pre> <p>See also:</p> <ul> <li><a href="http://www.blog.pythonlibrary.org/2011/09/20/wxpython-binding-multiple-widgets-to-the-same-handler/" rel="nofollow">http://www.blog.pythonlibrary.org/2011/09/20/wxpython-binding-multiple-widgets-to-the-same-handler/</a></li> </ul>
button|wxpython|togglebutton
0
1,905,145
31,599,972
Understanding python decorator. why does this not work?
<p>I am new to python and am fiddling with things. I really do not understand why this code does not work. Can you please help me understand what is happening here ? </p> <pre><code>from functools import wraps class A: def __init__(self): self.methodName = 'temp1' def temp(self, i): print(self.__class__.__name__) print("hi" + str(i)) def temp2(self): print("hey hey hey") class B: pass class C: def __call__(self, Func): @wraps(Func) def newFunc(*args, **kwargs): return Func(*args, **kwargs) return newFunc if __name__ == '__main__': a = A() setattr(B, a.methodName, a.temp) setattr(B, 'temp1', C().__call__(a.temp)) b = B() b.temp1(5) </code></pre>
<p>Try this:</p> <pre><code>from functools import wraps class A : def __init__(self): self.methodName = 'temp1' def temp(self, i) : print (self.__class__.__name__) print("hi" +str(i)) def temp2(self): print("hey hey hey") class B : pass class C : def __call__(self,Func) : @wraps(Func) def newFunc(self, *args, **kwargs) : return Func(*args, **kwargs) return newFunc if __name__ == '__main__' : a = A() setattr(B, a.methodName, a.temp) setattr(B, 'temp1', C().__call__(a.temp)) b = B() b.temp1(5) </code></pre> <p>Note that <code>newFunc</code> now takes <code>self</code> as its first argument.</p> <p>The reason this works is that bound instance methods, like <code>b.temp1</code>, always receive their bound instance as the first argument (in this case <code>b</code>). Originally, you were passing all arguments via <code>*args</code> to <code>a.temp</code>. This meant that <code>temp</code> was being invoked with the arguments <code>(a, b, 5)</code>. Adding <code>self</code> to <code>newFunc</code>'s parameter list ensures that <code>b</code> is not mistakenly passed to <code>temp</code>. </p>
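To see why the extra `self` slot matters, here is a stripped-down, runnable sketch of the same pattern (the class and method names are hypothetical, not from the original post): a function attached to a class receives the instance as its first positional argument, and the wrapper's `self` parameter absorbs it so the already-bound method doesn't get an extra argument.

```python
from functools import wraps

class C:
    """Decorator-like class from the pattern above, with hypothetical names."""
    def __call__(self, func):
        @wraps(func)
        def new_func(self, *args, **kwargs):
            # 'self' absorbs the Target instance the method is called on;
            # 'func' is already bound to its own instance, so it isn't forwarded.
            return func(*args, **kwargs)
        return new_func

class Source:
    def greet(self, name):
        return "hi " + name

class Target:
    pass

src = Source()
# attach the *bound* method src.greet to Target under the same name
Target.greet = C()(src.greet)

t = Target()
print(t.greet("bob"))  # -> hi bob
```

Without the `self` parameter on `new_func`, the call `t.greet("bob")` would forward `t` itself into `src.greet`, producing a `TypeError` about too many positional arguments.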
python
1
1,905,146
31,415,844
Using the `==` operator on circularly defined dictionaries
<p>Python allows dictionaries to be compared with <code>==</code></p> <pre><code>import copy child = {'name': 'child'} parent_1 = {'name': 'parent', 'child': child} parent_2 = copy.deepcopy(parent_1) print(parent_1 == parent_2) </code></pre> <p>Prints <code>True</code>, as you would expect it to.</p> <p>Python also allows dictionaries to reference each other circularly.</p> <pre><code>child = {'name': 'child'} parent_1 = {'name': 'parent', 'child': child} child['parent'] = parent_1 # Create the circular reference </code></pre> <p>However, trying to use the <code>==</code> operator on dictionaries with circular references raises an error.</p> <pre><code>parent_2 = copy.deepcopy(parent_1) print(parent_1 == parent_2) </code></pre> <p>Returns</p> <pre><code>C:\Python34\python.exe -i C:/Users/anon/.PyCharm40/config/scratches/scratch_5 Traceback (most recent call last): File "C:/Users/anon/.PyCharm40/config/scratches/scratch_5", line 11, in &lt;module&gt; print(parent_1 == parent_2) RuntimeError: maximum recursion depth exceeded in comparison </code></pre> <p>How can I check two dictionaries with circular references for equality?</p>
<p>You need to define what you mean by equal. Normally "equal" for dictionaries means 'all the key/value pairs are "equal"'. If a dictionary has a reference to itself, this definition of equal may lead to a recursive definition, i.e. <code>a == b</code> iff <code>a == b</code>.</p> <p>Take this simple example:</p> <pre><code>a = {}; a['item'] = a b = {}; b['item'] = b </code></pre> <p>Are <code>a</code> and <code>b</code> equal? In order to know that, you need to first know if <code>a</code> and <code>b</code> are equal ... </p> <p>You could create a special <code>equal</code> function that looks something like this:</p> <pre><code>def equal(a, b, special=[]): if not isinstance(a, dict) or not isinstance(b, dict): return a == b special = special + [a, b] set_keys = set(a.keys()) if set_keys != set(b.keys()): return False for key in set_keys: if any(a[key] is i for i in special): continue elif any(b[key] is i for i in special): continue elif not equal(a[key], b[key], special): return False return True </code></pre>
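A quick sketch of how such a comparison behaves on the circular dictionaries from the question (the helper is repeated here, slightly condensed, so the snippet runs on its own):

```python
import copy

def equal(a, b, special=[]):
    # Dicts we've already started comparing are treated as equal,
    # which breaks the infinite recursion on circular references.
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a == b
    special = special + [a, b]  # rebound, so the default [] is never mutated
    if set(a.keys()) != set(b.keys()):
        return False
    for key in a:
        if any(a[key] is i for i in special):
            continue
        if any(b[key] is i for i in special):
            continue
        if not equal(a[key], b[key], special):
            return False
    return True

child = {'name': 'child'}
parent_1 = {'name': 'parent', 'child': child}
child['parent'] = parent_1            # create the circular reference
parent_2 = copy.deepcopy(parent_1)

print(equal(parent_1, parent_2))      # -> True
parent_2['child']['name'] = 'other'
print(equal(parent_1, parent_2))      # -> False
```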
python|python-3.x|dictionary|circular-reference
3
1,905,147
15,705,546
How to search through a string to see if I can spell a word
<p>For instance I have </p> <pre><code>x = "dsjcosnag" y = "dog" print(checkYinX(y,x)) &gt;&gt;true </code></pre> <p>So I think I would need to use a while loop as a counter for each of the letters in y, and then I can use itertools to cycle through each character of x; on each cycle it would check whether the characters match, and if so it would remove the matched character and check the next letter in y.</p> <p>Is there a simpler way to do this?</p>
<p>Use <a href="http://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow noreferrer"><code>collections.Counter()</code></a> to convert <code>x</code> and <code>y</code> to multi-sets, then subtract to see if all of <code>y</code>'s letters can be found in <code>x</code>:</p> <pre><code>from collections import Counter def checkYinX(y, x): return not (Counter(y) - Counter(x)) </code></pre> <p>Subtracting multi-sets <em>removes</em> characters when their count falls to 0. If this results in an empty multi-set, it becomes <code>False</code> in a boolean context, like all 'empty' python types. <code>not</code> turns that into <code>True</code> if that is the case.</p> <p>Demo:</p> <pre><code>&gt;&gt;&gt; x = "dsjcosnag" &gt;&gt;&gt; y = "dog" &gt;&gt;&gt; print(checkYinX(y,x)) True &gt;&gt;&gt; print(checkYinX('cat',x)) False </code></pre>
python|string|itertools
8
1,905,148
70,951,829
Flask/Python/Gunicorn/Nginx WebApp dropping https from target web page when submitting Flask WTForm
<p>this is my first ever time submitting on StackOverflow, I hope I provide enough details.</p> <p>I have been building a Python/Flask web app, based loosely on the framework used in the <a href="https://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xvii-deployment-on-linux" rel="nofollow noreferrer">blog by Miguel Grinberg</a></p> <p>It works fine in my PyCharm IDE, the issue is when I deployed it to an Ubuntu server on Oracle VirtualBox using Gunicorn and NginX on top of the Flask server.</p> <p>The webapp consists of 10 pages/templates and when deployed and running on VirtualBox, I can navigate around the app fine on my host browser UNTIL I submit a FlaskForm which then should return a RESULTS page with the required data.</p> <p>It is supposed to return <a href="https://127.0.0.1:3000/results" rel="nofollow noreferrer">https://127.0.0.1:3000/results</a> but instead returns only http: <a href="http://127.0.0.1:3000/results" rel="nofollow noreferrer">http://127.0.0.1:3000/results</a></p> <p>(https dropped to http)</p> <p>and the resulting</p> <p><strong>400 Bad Request</strong></p> <p><em>The plain HTTP request was sent to HTTPS port</em></p> <p><em>nginx/1.18.0 (Ubuntu)</em></p> <p>The error is self-explanatory and I know it may be somewhere in the nginx config; I just don't have the knowledge (been trying to nut this out for a week or so)</p> <pre><code> /etc/nginx/sites-enabled/highceesdev: server { # listen on port 80 (http) listen 80; server_name _; location / { # redirect any requests to the same URL but on https return 301 https://$host$request_uri; } </code></pre> <p>}</p> <pre><code>server { # listen on port 443 (https) listen 443 ssl; server_name _; # location of the self-signed SSL certificate ssl_certificate /home/ubuntu2/HighCees/HighCeesDev/certs/cert.pem; ssl_certificate_key /home/ubuntu2/HighCees/HighCeesDev/certs/key.pem; # write access and error logs to /var/log access_log /var/log/highceesdev_access.log; error_log 
/var/log/highceesdev_error.log; location / { # forward application requests to the gunicorn server proxy_pass http://localhost:8000; proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } location /static { # handle static files directly, without forwarding to the application alias /home/ubuntu2/HighCees/HighCeesDev/app/static; expires 30d; } </code></pre> <p>}</p> <p>If relevant, further code as follows.</p> <p>The definition of the FlaskForm being submitted is HighCeesDev/app/forms.py:</p> <pre><code>.... class CreateSong2(FlaskForm): startdate = DateField('Start Date', default=(datetime.today()-timedelta(90)),format='%Y-%m-%d',validators=(validators.data_required(),)) enddate = DateField('End Date', default=datetime.today(),format='%Y-%m-%d', validators=(validators.data_required(),)) city = RadioField(choices=[('Wellington', 'Wellington'), ('Auckland', 'Auckland'),('Christchurch', 'Christchurch'), ('Dunedin', 'Dunedin')]) submit = SubmitField('Submit') </code></pre> <p>...</p> <p>The definition of the routes for CreateSong2(FlaskForm) and results in HighCeesDev/app/routes.py:</p> <pre><code>@app.route('/create_song2',methods=['GET','POST']) def create_song2(): latedate = Song.query.order_by(Song.date.desc()).first() startdate = Song.query.order_by(Song.date.asc()).first() convertedlatedate1=latedate.date convertedlatedate2 = convertedlatedate1.strftime(&quot;%d/%m/%Y&quot;) convertedstartdate1=startdate.date convertedstartdate2 = convertedstartdate1.strftime(&quot;%d/%m/%Y&quot;) form = CreateSong2() if form.validate_on_submit(): session['startdate'] = form.startdate.data session['enddate'] = form.enddate.data session['shcity'] = form.city.data shcity = form.city.data purge_files() return redirect(url_for('results')) return render_template('create_song2.html',form=form, latedate=latedate, startdate=startdate,convertedlatedate2=convertedlatedate2, 
convertedstartdate2=convertedstartdate2) </code></pre> <p>and</p> <pre><code>@app.route('/results',methods=['GET','POST']) def results(): startdate = session['startdate'] endate= session['enddate'] shcity = session['shcity'] form=Results() startedate = extract_date_sequel(startdate) finishedate = extract_date_sequel(endate) song = Song.query.filter(Song.date &lt;= finishedate,Song.date &gt;= startedate,Song.city==shcity).order_by(Song.date.asc()) song2 = Song.query.with_entities(Song.high_temp).filter(Song.date &lt;= finishedate,Song.date &gt;= startedate,Song.city==shcity).order_by(Song.date.asc()) if not check_song_query(song2): print(&quot;check_song_query says its False :-(!!&quot;) flash(&quot;No data in that query :-(&quot;) return redirect(url_for('create_song2')) else: print(&quot;check_song_query says its True!!&quot;) print(&quot;song is: &quot;, type(song)) print(&quot;song2 is: &quot;,type(song2)) a = [] for i in song2: stringy = '' stringy = stringy.join(i._data) a.append(stringy) print(type(a)) print(a) songfile = SongFileEvent.query.all() dtf = date_time_file_wav() record = SongFileEvent(song_file_name=dtf) db.session.add(record) db.session.commit() testy2(a) daft = SongFileEvent.query.order_by(SongFileEvent.id.desc()).first() songfile = SongFileEvent.query.all() now = time.time() purge_files_2(now) print(&quot;purge files was called&quot;) #convert_numbers_to_notes(a) return render_template('results.html',title='Home', song=song, songfile=songfile, daft=daft) </code></pre> <p>Keen to offer more details if that helps.</p> <p>Thanks a lot</p> <p>Pat</p>
<p>In your deployment your view is running within the context of a http request - <code>http://localhost:8000</code> and <code>url_for</code> will return URLs with the 'http' protocol.</p> <p>You need to set up the Flask instance with specific <a href="https://flask.palletsprojects.com/en/2.0.x/config/" rel="nofollow noreferrer">configuration settings</a>. I find the following two settings fix these kind of issues:</p> <p><a href="https://flask.palletsprojects.com/en/2.0.x/config/#PREFERRED_URL_SCHEME" rel="nofollow noreferrer">PREFERRED_URL_SCHEME</a></p> <p><a href="https://flask.palletsprojects.com/en/2.0.x/config/#SERVER_NAME" rel="nofollow noreferrer">SERVER_NAME</a></p> <p>A simple example.</p> <pre><code># app/config.py class Config(object): # common configurations SECRET_KEY = 'XXXXXXX' MAX_CONTENT_LENGTH = 32 * 1024 * 1024 ### more settings class DevelopmentConfig(Config): # specific configurations SERVER_NAME = &quot;example.local:5072&quot; PREFERRED_URL_SCHEME = 'http' ### more settings class ProductionConfig(Config): # specific configurations SERVER_NAME = &quot;example.com&quot; PREFERRED_URL_SCHEME = 'https' ### more settings # app/__init__.py def create_app(): app = App(__name__) # APP_SETTINGS is an environment variable either set to: # export APP_SETTINGS=app.config.DevelopmentConfig # or # export APP_SETTINGS=app.config.ProductionConfig app.config.from_object(os.environ['APP_SETTINGS']) ### more setup return app </code></pre> <p>Nginx Proxy Settings</p> <pre><code>location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Real-IP $remote_addr; proxy_redirect off; proxy_pass http://localhost:8000; proxy_set_header X-Sendfile-Type X-Accel-Redirect; } </code></pre>
python|https|virtualbox|gunicorn|flask-wtforms
0
1,905,149
60,273,045
Python regular expressions ignoring commas in a long number
<p>I am trying to figure out how to ignore commas in a long number. Some of the numbers run from 0 up to 10 million, so I need something that can capture numbers from 0 to 10,000,000 while ignoring commas. I am not sure how to go about this. Thanks </p> <pre><code>#Here is the pattern that contains the information I am looking for Median Sales Price\n$1,417,000 # here is my pattern median_sales_price = re.findall(r'\bMedian Sales Price\n\$(\d*\,\d*\,\d*)',data) </code></pre>
<p>You can't. <em>One</em> capture captures <em>one</em> continuous substring. Capture with commas, then filter the commas out later.</p> <pre><code>median_sales_price = [re.sub(',', '', price) for price in re.findall(r'\bMedian Sales Price\n\$([\d,]+)', data)] </code></pre>
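A self-contained sketch of that capture-then-strip approach, with sample text made up to mimic the question's format:

```python
import re

data = ("Median Sales Price\n$1,417,000\n"
        "Median Sales Price\n$950,000\n")

# capture the digits-and-commas run, then strip the commas afterwards
prices = [re.sub(',', '', p)
          for p in re.findall(r'\bMedian Sales Price\n\$([\d,]+)', data)]
print(prices)  # -> ['1417000', '950000']
```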
python|regex
1
1,905,150
67,672,474
Train a model with 2 stacked models in it (Keras)
<p>I have the following models that I want to train (see image below):</p> <p><a href="https://i.stack.imgur.com/J0E3I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J0E3I.png" alt="enter image description here" /></a></p> <p>The model has an input of 20. The model A has an input of 10 (the first 10 elements of the initial input), the model B has an input of 10 (the last 10 elements of the initial input), and finally the input of the model C is the concatenation of the outputs of the models A and B.</p> <p>How can I train these 3 models at the same time in Keras? Can I merge them into one big model? (I only have data to train the big model)</p>
<p>Let's assume that you have your three models defined and named model_A, model_B and model_C. You can now define your complete model somewhat like this (I did not check the exact code):</p> <pre><code>import tensorflow as tf from tensorflow.keras import layers, losses, Model def complete_model(model_A, model_B, model_C): input_1 = layers.Input(shape=(10,)) input_2 = layers.Input(shape=(10,)) model_A_output = model_A(input_1) model_B_output = model_B(input_2) concatenated = tf.concat([model_A_output, model_B_output], axis=-1) model_C_output = model_C(concatenated) model = Model(inputs=[input_1, input_2], outputs=model_C_output) model.compile(loss=losses.MSE) model.summary() return model </code></pre> <p>This requires you to give two-dimensional inputs, so you have to do some numpy slicing to preprocess your inputs.</p> <p>If you still want your one-dimensional inputs, you can just define a single input layer with shape (20,) and then use the tf.split function to split it in half and feed it into the next networks.</p>
python|tensorflow|keras|model
0
1,905,151
68,017,410
FALSE changed to "false" while using pd.read_csv
<p>In Python, I'm reading a text file using <code>pd.read_csv</code>. There are columns that have &quot;FALSE&quot; and &quot;TRUE&quot; as cell values. When I read the files, &quot;FALSE&quot; turns into &quot;False&quot; and &quot;TRUE&quot; is changed to &quot;True&quot;. The script is given below.</p> <pre><code>input_file_1 = pd.read_csv(input_file,delimiter=&quot;\t&quot;) </code></pre> <p>I want all those values in upper case. I don't want to force-fit uppercase for those specific columns, as I'm trying to generalize the script for any file.</p> <p>Appreciate your help!</p>
<p><code>TRUE</code> and <code>FALSE</code> are interpreted as booleans; to save them as upper-case strings you can specify the type of those columns:</p> <pre><code>pd.read_csv(input_file, delimiter=&quot;\t&quot;, dtype={'column1': str, 'column2': str}) </code></pre> <p>or treat all the columns as strings:</p> <pre><code>pd.read_csv(input_file, delimiter=&quot;\t&quot;, dtype=str) </code></pre>
python|pandas|dataframe
1
1,905,152
30,744,195
How to sum a list of numbers stored as strings
<p>Having a list of numbers stored as strings how do I find their sum?</p> <p>This is what I'm trying right now:</p> <pre><code>numbers = ['1', '3', '7'] result = sum(int(numbers)) </code></pre> <p>but this gives me an error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;main.py&quot;, line 2, in &lt;module&gt; result = sum(int(numbers)) TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' </code></pre> <p>I understand that I cannot force the list to be a number, but I can't think of a fix.</p>
<p><code>int(numbers)</code> is trying to convert the list to an integer, which obviously won't work. And if you had somehow been able to convert the list to an integer, <code>sum(int(numbers))</code> would then try to get the sum of that integer, which doesn't make sense either; you sum a collection of numbers, not a single one.</p> <p>Instead, use the function <a href="https://docs.python.org/3/library/functions.html#map" rel="nofollow noreferrer"><code>map</code></a>:</p> <pre><code>result = sum(map(int, numbers)) </code></pre> <p>That'll take each item in the list, convert it to an integer and sum the results.</p>
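If `map` feels opaque, an equivalent spelling uses a generator expression; both convert each string before summing:

```python
numbers = ['1', '3', '7']

result_map = sum(map(int, numbers))        # map applies int to each item
result_gen = sum(int(n) for n in numbers)  # same conversion, spelled out

print(result_map, result_gen)  # -> 11 11
```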
python
7
1,905,153
66,884,772
What are the fastest ways to do object detection (context in the question)?
<p>So recently I tried to make a bot to fish in Minecraft as a challenge. (Not that I use it in any community or modify the game's code, so I guess it's OK with the TOS.) My approach was, and stays so far, to track the movements of the bob.</p> <p>My first bot relied on color space segmentation and fine-tuning the image with morphological transformations from OpenCV-python (as part of my learning experience I aimed to make the bot purely computer vision based). That bot only worked in a specific location where I set illumination and environment color with in-game methods. Also, it worked at the expense of turning the game's graphics to the lowest settings to disable particles.</p> <p>My second bot used HAAR-like classifiers, since I had already made a few models for real-life objects which were fairly good. Sadly, this time (I assume due to the game's unique graphic style, where essentially everything is a cube with textures mapped on it) it was fairly inconsistent and caused a lot of false positives.</p> <p>My third bot used a HOG-features-based SVM, but it was fairly slow for all models, ranging from more than 4000 original samples with tight bounding boxes down to about 200; due to that lack of speed the fish was off the hook by the time detection occurred.</p> <p>My last attempt used TensorFlow Lite and failed miserably due to even worse detection speed.</p> <p>I also looked into the possibility of doing motion detection by comparing consecutive frames, the speed benefits of Java vs Python, and different preprocessing options like increasing contrast, reducing the color palette, etc.</p> <p>At this point I don't know if wandering 'blind' will give me any clues on what the go-to approach would be, and hence I decided to ask here.</p> <p>Thanks in advance.</p> <p>P.S. For exact specifics - I think the time to reel is approximately 0.7 seconds, but I can be slightly off.</p>
<p>For a fast and straightforward object detection technique, I would suggest you use a pretrained RetinaNet. You can find all the explanation you need at this link: <a href="https://github.com/fizyr/keras-retinanet" rel="nofollow noreferrer">https://github.com/fizyr/keras-retinanet</a></p> <p>And follow this Colab for fast training and a straightforward implementation: <a href="https://colab.research.google.com/drive/1v3nzYh32q2rm7aqOaUDvqZVUmShicAsT" rel="nofollow noreferrer">https://colab.research.google.com/drive/1v3nzYh32q2rm7aqOaUDvqZVUmShicAsT</a></p> <p>I would suggest that you use resnet50 as the backbone, and use the pretrained weights to start your training.</p>
python|object-detection
0
1,905,154
64,048,701
Why is pickle.dump not writing a new file (the code executes without an error)?
<pre><code>filename1 = 'random_forest.pkl' filename2 = 'naive_bayes.pkl' pickle.dump(r_clf,open(filename1,&quot;wb&quot;)) pickle.dump(clf,open(filename2,&quot;wb&quot;)) </code></pre> <p>This has worked previously. After I made some changes, it just runs and does nothing. I'm new to pickle, please help me out!</p>
<p>Try this:</p> <pre class="lang-py prettyprint-override"><code>import pickle r_clf = None # placeholders for your trained models clf = None filename1 = 'random_forest.pkl' filename2 = 'naive_bayes.pkl' with open(filename1,&quot;wb&quot;) as f1: pickle.dump(r_clf, f1) with open(filename2,&quot;wb&quot;) as f2: pickle.dump(clf, f2) </code></pre>
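One way to confirm the dump is really writing a file is to round-trip it. This sketch uses a plain dict as a stand-in for the real classifier (which isn't shown in the question) and writes to a temp directory:

```python
import os
import pickle
import tempfile

clf = {"model": "naive_bayes"}  # placeholder object for the real classifier

path = os.path.join(tempfile.gettempdir(), "naive_bayes.pkl")
with open(path, "wb") as f:
    pickle.dump(clf, f)

# the file now exists on disk and its contents round-trip
with open(path, "rb") as f:
    restored = pickle.load(f)

print(os.path.getsize(path) > 0, restored == clf)  # -> True True
```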
python|pickle
0
1,905,155
42,781,653
PuLP: casting LpVariable or LpAffineExpression to integer
<p>In my optimization problem, I have a conditional that the amount of items (LpInteger) in a particular group may not exceed a percentage of the total amount of items. To do that, I wrote the following code:</p> <pre><code>total = lpSum([num[i].varValue for i in ind]) for d in length: # get list of items that satisfy the conditional items_length_d = list(compress(items,[work[i]==work_group[d] for i in items])) # use that list to calculate the amount of items in the group (an item can occur multiple times) amount[d] = lpSum([num[dl] for dl in items_length_d]) max_d[d] = total*perc_max[d] + 1 min_d[d] = total*perc_min[d] - 1 prob += max_d[d] &gt;= amount[d] prob += min_d[d] &lt;= amount[d] </code></pre> <p>The problem with this approach is that my maximum and minimum become floats (LpContinuous). This in turn makes the solution <code>infeasible</code>. </p> <p><strong>How can I make sure that each max_d and min_d values are integers?</strong> Preferably, I would also like to round up max_d, while truncating min_d. </p> <h2>Edit</h2> <p>I solved the problem of an <code>infeasible</code> solution by changing <code>total = lpSum([num[i].varValue for i in ind])</code> to <code>total = lpSum([num[i] for i in ind])</code>. However, the minimum and maximum values are still floats. If someone knows how to convert these to ints, an answer would still be very appreciated. </p>
<p>You appear to misunderstand how constructing and solving a linear programming problem works.</p> <p>The entire problem should be set up, then solved and the solution values extracted.</p> <p>You can't get the LpVariable.varValue for a variable while setting up the problem.</p> <p>So for a fractional constraint, if we define the group as i \in G and the total as i \in T,</p> <p>we get, where f is the required fraction,</p> <p><img src="https://latex.codecogs.com/gif.latex?f&space;%5Cleq&space;%5Cfrac%7B%5Csum_%7Bi&space;%5Cin&space;G%7D&space;x_i%7D%7B%5Csum_%7Bj&space;%5Cin&space;T%7D&space;x_j%7D" title="f \leq \frac{\sum_{i \in G} x_i}{\sum_{j \in T} x_j}" /></p> <p>If we rearrange this equation,</p> <p><img src="https://latex.codecogs.com/gif.latex?f%5Csum_%7Bj&space;%5Cin&space;T%7D&space;x_j&space;%5Cleq&space;%5Csum_%7Bi&space;%5Cin&space;G%7D&space;x_i" title="f\sum_{j \in T} x_j \leq \sum_{i \in G} x_i" /></p> <p>so in your code:</p> <pre><code>prob += perc_max[d] * lpSum([num[i] for i in ind]) &lt;= lpSum([num[dl] for dl in items_length_d]) </code></pre>
python|casting|linear-programming|pulp|coin-or-cbc
3
1,905,156
43,023,918
Python: Creating a dictionary from a file
<p>I want to write a function that opens a file containing two lines, and creates a dictionary. The first line is the string giving the keys and the second line is the string giving the values.</p> <p>How would I go about doing this?</p>
<p>The technique is to use <a href="https://docs.python.org/2.7/library/stdtypes.html#file.readline" rel="nofollow noreferrer"><em>file.readline()</em></a> to extract a line at a time. Use <a href="https://docs.python.org/2.7/library/stdtypes.html#str.split" rel="nofollow noreferrer"><em>str.split()</em></a> to break it into keys (whether you need an explicit delimiter or not depends on your data). Once the <em>keys</em> and <em>values</em> are obtained, <a href="https://docs.python.org/2.7/library/functions.html#zip" rel="nofollow noreferrer"><em>zip()</em></a> them together and call <a href="https://docs.python.org/2.7/library/functions.html#func-dict" rel="nofollow noreferrer"><em>dict()</em></a> to make the final dictionary:</p> <pre><code>with open('somefile.txt') as f: keys = f.readline().split() values = f.readline().split() d = dict(zip(keys, values)) </code></pre> <p>For example, given "somefile.txt" like this:</p> <pre><code>python ruby go c rust swift snake gem verb letter oxide race </code></pre> <p>The resulting dict <em>d</em> will be:</p> <pre><code>{'python': 'snake', 'ruby': 'gem', 'go': 'verb', 'c': 'letter', 'rust': 'oxide', 'swift': 'race'} </code></pre>
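The same technique can be exercised without touching the disk by letting `io.StringIO` stand in for the two-line file:

```python
import io

# StringIO behaves like an open text file with two lines
fake_file = io.StringIO("python ruby go\nsnake gem verb\n")

keys = fake_file.readline().split()
values = fake_file.readline().split()
d = dict(zip(keys, values))

print(d)  # -> {'python': 'snake', 'ruby': 'gem', 'go': 'verb'}
```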
python
2
1,905,157
42,673,561
Iterate through multiple lists of characters in order
<p>I want to find an alphabetical string that is anywhere between 1-4 characters long.</p> <p>I start off by iterating through the list of 52 letters:</p> <pre><code>letters = string.ascii_letters </code></pre> <p>I then need to iterate through the same list for the next 3 characters of the string until I find the string I am looking for.</p> <p>If each _ represents the list of 52 letters, I need to basically do this, while checking for a match at each iteration:</p> <pre><code>_ _ _ _ _ _ _ _ _ _ </code></pre> <p>How would I best structure a series of loops to do this?</p> <hr> <p>If the premise of the question seems confusing, this is for a problem set on brute force cracking. I have simply extracted the part of the question that I am struggling with.</p> <hr> <p>Edit: This is where I've got to so far.</p> <pre><code>#we know the salt is the 2-digit '50' #we know the key is limited to 4 alphabetical letters #cycle through all possibilities of the key till we match the hash letters = string.ascii_letters lcounter = 0 i = 0 j = 0 k = 0 l = 0 tryhash = "a" word = [letters[i]] while(tryhash != hash): for c in letters: word = [letters[i]] #this does not work as the additional letters need to be appended to word after the first lcounter loop tryword = ''.join(word) tryhash = crypt.crypt(tryword, "50") if (tryhash == hash): print(word) break i += 1 if (lcounter &gt; 0) and (i == 52): i = 0 if (lcounter == 1) and (j == 0): word.insert(lcounter, letters[j]) j += 1 if (lcounter &gt; 1) and (k == 52): j = 0 if (lcounter == 2) and (k == 0): word.insert(lcounter, letters[k]) k += 1 if (lcounter &gt; 2) and (k == 52): k = 0 if (lcounter == 3) and (l == 0): word.insert(lcounter, letters[l]) l += 1 lcounter += 1 </code></pre>
<p>You can do something like this:</p> <pre><code>import string import itertools data = string.ascii_lowercase for i in itertools.permutations(data, 4): if ''.join(i) == 'your_string': # do something pass else: pass </code></pre> <p>Note that <code>permutations</code> yields tuples of characters, so they need to be joined back into a string before comparing.</p>
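One caveat: `permutations` never repeats a character and only yields strings of one fixed length, so a word like 'noon' (or any 1-3 letter word) would never be generated. For a true brute force over every 1-4 character string, `itertools.product` over each length is the usual tool; a runnable sketch (the target word is just an example):

```python
import itertools
import string

def find_word(target, max_len=4, alphabet=string.ascii_lowercase):
    """Brute-force every string of length 1..max_len until target is found.

    product(alphabet, repeat=n) allows repeated letters, unlike
    permutations, which skips strings such as 'noon'.
    """
    for n in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=n):
            candidate = ''.join(combo)
            if candidate == target:
                return candidate
    return None

print(find_word('dog'))  # -> dog
```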
python
2
1,905,158
65,714,912
How to create a new dataframe with only rows that have been changed in another dataframe?
<p>I want to create a new csv file with only the rows that have been changed.</p> <p><a href="https://i.stack.imgur.com/xLWOb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xLWOb.png" alt="enter image description here" /></a></p> <p>The conditions are:</p> <pre><code>import pandas as pd df = pd.read_csv(&quot;sample.csv&quot;, delimiter='') df['Part Number'] = df['Part Number'].astype(str).str.replace('+','-PLUS') df['Part Number'] = df['Part Number'].astype(str).str.replace('/','-SLASH-') df['Part Number'] = df['Part Number'].astype(str).str.replace('\\','-SLASH-') df['Part Number'] = df['Part Number'].astype(str).str.replace(' ','-') df['Part Number'] = df['Part Number'].astype(str).str.replace('_','-') df['Part Number'] = df['Part Number'].astype(str).str.replace('.','-') df['Part Number'] = df['Part Number'].astype(str).str.replace('&quot;','') df['Part Number'] = df['Part Number'].astype(str).str.replace('(','') df['Part Number'] = df['Part Number'].astype(str).str.replace(')','') df['Part Number'] = df['Part Number'].astype(str).str.replace('%','-') # There can be more; these are examples. </code></pre> <p>Now I want to create a new dataframe like this, listing only the rows where the part number has been replaced:</p> <p><a href="https://i.stack.imgur.com/Le3n0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Le3n0.png" alt="enter image description here" /></a></p>
<p>Try storing the original values and comparing them later:</p> <pre><code>original = df['Part Number'].copy() #...Changes happen here... new_df = df[df['Part Number'] != original].join(original, rsuffix = &quot; Changed&quot;) </code></pre>
python|python-3.x|pandas|dataframe|numpy
3
1,905,159
65,509,204
Python - permission denied installing pymongo
<p>I'm trying to install <code>pymongo</code> and getting permission denied when I do:</p> <pre><code>pip install pymongo Collecting pymongo Downloading https://files.pythonhosted.org/packages/0f/84/b329b5debc71693111780b389222897949f084a833dd996b4e7a36c839fc/pymongo-3.11.2-cp36-cp36m-manylinux1_x86_64.whl (492kB) 100% |████████████████████████████████| 501kB 2.3MB/s Installing collected packages: pymongo Exception: Traceback (most recent call last): File &quot;/usr/lib/python3.6/site-packages/pip/basecommand.py&quot;, line 215, in main status = self.run(options, args) File &quot;/usr/lib/python3.6/site-packages/pip/commands/install.py&quot;, line 365, in run strip_file_prefix=options.strip_file_prefix, File &quot;/usr/lib/python3.6/site-packages/pip/req/req_set.py&quot;, line 789, in install **kwargs File &quot;/usr/lib/python3.6/site-packages/pip/req/req_install.py&quot;, line 854, in install strip_file_prefix=strip_file_prefix File &quot;/usr/lib/python3.6/site-packages/pip/req/req_install.py&quot;, line 1069, in move_wheel_files strip_file_prefix=strip_file_prefix, File &quot;/usr/lib/python3.6/site-packages/pip/wheel.py&quot;, line 345, in move_wheel_files clobber(source, lib_dir, True) File &quot;/usr/lib/python3.6/site-packages/pip/wheel.py&quot;, line 287, in clobber ensure_dir(dest) # common for the 'include' path File &quot;/usr/lib/python3.6/site-packages/pip/utils/__init__.py&quot;, line 83, in ensure_dir os.makedirs(path) File &quot;/usr/lib64/python3.6/os.py&quot;, line 210, in makedirs makedirs(head, mode, exist_ok) File &quot;/usr/lib64/python3.6/os.py&quot;, line 220, in makedirs mkdir(name, mode) PermissionError: [Errno 13] Permission denied: '/usr/local/lib64/python3.6' </code></pre> <p>If I try to specify the <code>--user</code> flag it claims that there's no module by that name available:</p> <pre><code>python aws_ec2_list_instances.py --user Traceback (most recent call last): File &quot;aws_ec2_list_instances.py&quot;, line 25, in 
&lt;module&gt; from ec2_mongo import insert_doc,set_db,mongo_export_to_file File &quot;/home/tdun0002/stash/cloud_scripts/aws_scripts/python/aws_tools/ec2_mongo.py&quot;, line 7, in &lt;module&gt; import pymongo ModuleNotFoundError: No module named 'pymongo' </code></pre> <p>How can I get this done?</p>
<p>You should use the <code>--user</code> flag to install modules for a single user. You said that you tried <code>--user</code>, but you used it while running the Python file; the flag belongs on the install command, and the install must happen before you run the script.</p> <p>So you can use <code>pip install --user pymongo</code>.</p>
python|pip
1
1,905,160
50,920,805
Using a nested list Comprehension to check & change all columns of a data frame
<p>All, I have successfully written a list comprehension that tests for non-ASCII characters in a column of a dataframe.</p> <p>I am now trying to write a nested list comprehension to check all of the columns in the data frame.</p> <p>I have researched this by searching for nested list comprehensions on dataframes and several other variations, and while they are close, I can't get them to fit my problem.</p> <p>Here is my code:</p> <pre><code>import pandas as pd import numpy as np data = {'X1': ['A', 'B', 'C', 'D', 'E'], 'X2': ['meow', 'bark', 'moo', 'squeak', '120°']} data2 = {'X1': ['A', 'B', 'F', 'D', 'E'], 'X3': ['cat', 'dog', 'frog', 'mouse®', 'chick']} df = pd.DataFrame(data) df2 = pd.DataFrame(data2) dfAsc = pd.merge(df, df2, how ='inner', on = 'X1') dfAsc['X2']=[row.encode('ascii', 'ignore').decode('ascii') for row in dfAsc['X2'] if type(row) is str] dfAsc </code></pre> <p>which correctly returns:</p> <pre><code>X1 X2 X3 0 A meow cat 1 B bark dog 2 D squeak mouse® 3 E 120 chick </code></pre> <p>I have tried to create a nested comprehension to check all of the columns instead of just X2. The attempt below is meant to create a new df that contains the answer. If this continues to be a source of confusion, I'll delete it, as it is only one of my attempts to obtain the answer, so please don't get hung up on it.</p> <pre><code>df3 = pd.DataFrame([dfAsc.loc[idx] for idx in dfAsc.index [row.encode('ascii', 'ignore').decode('ascii') for row in dfAsc[idx] if type(row) is str] df3 </code></pre> <p>which doesn't work. I know I'm close, but I'm still having trouble getting my head around comprehensions.</p>
<p>You don't need to use list comprehension, you can directly use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html" rel="nofollow noreferrer">df.applymap</a> This will be lot faster than using comprehensions.</p> <pre><code>data = {'X1': ['A', 'B', 'C', 'D', 'E'], 'X2': ['meow', 'bark', 'moo', 'squeak', '120°']} data2 = {'X1': ['A', 'B', 'F', 'D', 'E'], 'X3': ['cat', 'dog', 'frog', 'mouse®', 'chick']} df1 = pd.DataFrame(data, index=data['X1'], columns=['X2']) df2 = pd.DataFrame(data2, index=data2['X1'], columns=['X3']) dfAsc = pd.merge(df1, df2, how ='inner', left_index=True, right_index=True) dfAsc = dfAsc.applymap(lambda x: x.encode('ascii', 'ignore').decode('ascii') if isinstance(x, str) else x) &gt;&gt;&gt; dfAsc X2 X3 A meow cat B bark dog D squeak mouse E 120 chick </code></pre>
python|dataframe|list-comprehension
1
1,905,161
3,884,247
Mysterious and weird lines after using uni-form
<p>I'm using django-uni-form to display forms. I've included all the CSS and JavaScript (notably jQuery) in the page. But now I get some weird-looking lines. The image below shows how it looks:</p> <p><a href="http://i243.photobucket.com/albums/ff176/cwalkrox/uni-form1.jpg" rel="nofollow">http://i243.photobucket.com/albums/ff176/cwalkrox/uni-form1.jpg</a></p> <p>You can notice that for username and email address, the lines are aligned with the upper side of the text inputs, while for the two passwords, the lines are below the password inputs. On uni-form's official website, I can't see any lines in the 3 examples. Even if it gives me some lines, shouldn't they be consistent?</p> <p>And the strange story doesn't stop here. The jQuery can highlight the selected inputs, but the ways it highlights username, email and password are also inconsistent. The following images prove it:</p> <p>i243.photobucket.com/albums/ff176/cwalkrox/uni-form2.jpg</p> <p>i243.photobucket.com/albums/ff176/cwalkrox/uni-form3.jpg</p> <p>So all the problems seem to stem from the mysterious lines. How does this happen?</p> <p>BTW, the page I show you is rendered with the form of django-registration. The rendering snippet is:</p> <pre><code>&lt;form action="" method="post" class="uniForm"&gt; &lt;fieldset&gt; {{ form|as_uni_form }} &lt;/fieldset&gt; &lt;/form&gt; </code></pre>
<p>Those lines are due to the css files included in django-uniform: uni-form.css, uni-form-generic.css and uni-form.jquery.css. </p> <p>It seems weird but at least in my case (a pinax project) the forms look better without the provided css.</p> <p>my 2 cents</p>
python|css|django|uniform
0
1,905,162
3,275,004
How to write a twisted server that is also a client?
<p>How do I create a twisted server that's also a client? I want the reactor to listen while at the same time it can also be use to connect to the same server instance which can also connect and listen.</p>
<p>Call <code>reactor.listenTCP</code> and <code>reactor.connectTCP</code>. You can have as many different kinds of connections - servers or clients - as you want.</p> <p>For example:</p> <pre><code>from twisted.internet import protocol, reactor from twisted.protocols import basic class SomeServerProtocol(basic.LineReceiver): def lineReceived(self, line): host, port = line.split() port = int(port) factory = protocol.ClientFactory() factory.protocol = SomeClientProtocol reactor.connectTCP(host, port, factory) class SomeClientProtocol(basic.LineReceiver): def connectionMade(self): self.sendLine("Hello!") self.transport.loseConnection() def main(): import sys from twisted.python import log log.startLogging(sys.stdout) factory = protocol.ServerFactory() factory.protocol = SomeServerProtocol reactor.listenTCP(12345, factory) reactor.run() if __name__ == '__main__': main() </code></pre>
python|twisted
15
1,905,163
35,051,060
OpenCV 3.1.0: Finding the 3D position of a flat plane with known dimensions
<p>I have a flat quadrilateral plane that I know the dimensions of, and I can find the contours of it, along with the 4 corners. <strong>I need assistance in figuring out the method of determining its 3D position.</strong> I have managed to get a 3x3 perspective transform of it, which looks something like this:</p> <pre><code>[[ 3.91873630e-02 1.20990983e+00 -2.81213415e+02] [ 1.21202027e+00 -1.85962357e-15 -3.52697898e+02] [ 3.83991908e-04 2.52680041e-05 1.00000000e+00]] </code></pre> <p><em>(this is OpenCV 3.1.0, in python, and that matrix is just one frame, so it might not be representative of all potential orientations)</em></p> <p>Can I determine the angle away from the camera and distance from the camera from this information, or do I need to perform more calculations?</p> <p>I'm not sure what you guys need to know, so I'll be happy to give you more information if you need it?</p>
<p>You can use the solvePnP function to determine the quadrilateral's position and orientation with respect to the camera.</p> <p>All you need is:</p> <ul> <li>the 3D coordinates of the quadrilateral's corners in the world frame</li> <li>the corresponding pixel coordinates</li> <li>the camera's intrinsic parameters</li> </ul> <p>You can directly use OpenCV's <a href="http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#bool%20solvePnP(InputArray%20objectPoints,%20InputArray%20imagePoints,%20InputArray%20cameraMatrix,%20InputArray%20distCoeffs,%20OutputArray%20rvec,%20OutputArray%20tvec,%20bool%20useExtrinsicGuess,%20int%20flags)" rel="nofollow">solvePnP()</a> function.</p> <p>As output you get the pose of the world coordinate system in the camera coordinate system. If you take the quadrilateral itself as the world, you get the pose (rotation and translation) of the quadrilateral in the camera coordinate system.</p> <hr> <pre><code>C++: bool solvePnP(InputArray objectPoints, InputArray imagePoints, InputArray cameraMatrix, InputArray distCoeffs, OutputArray rvec, OutputArray tvec, bool useExtrinsicGuess=false, int flags=ITERATIVE ); </code></pre> <p>Use the CV_P3P argument for flags in case you have exactly 4 points.</p>
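As a sketch of what the inputs to solvePnP look like in Python — all numeric values below are made-up placeholders, and the cv2 call itself is left in comments so the snippet runs without OpenCV installed:

```python
import numpy as np

# 3D corners of the known quadrilateral in its own (world) frame, z = 0.
# A 10 x 5 unit plane is assumed here -- replace with your real dimensions.
object_points = np.array(
    [[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 0]], dtype=np.float32)

# Corresponding pixel coordinates of the 4 detected corners (hypothetical).
image_points = np.array(
    [[320, 240], [420, 235], [425, 300], [318, 305]], dtype=np.float32)

# Camera intrinsics fx, fy, cx, cy from calibration (hypothetical values).
camera_matrix = np.array(
    [[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist_coeffs = np.zeros(5)  # assume an undistorted camera

# ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
#                               camera_matrix, dist_coeffs)
# tvec is the plane's translation in the camera frame (its distance),
# and cv2.Rodrigues(rvec)[0] is its rotation matrix (its angle away
# from the camera).
```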
python|opencv|math|3d
1
1,905,164
35,075,416
Last elements in 2D list rows are not changing
<p>I have a two-dimensional list and I'm iterating over it to change the elements with whatever a user inputs. </p> <p>The length of each row is determined by the key the user inputs and the amount of rows is determined by the the length of the message the user inputs (+ 1, because the first row is filled with ASCII values which needs to be there for other reasons)</p> <p>For example, if I input "frank" as the key, and "how are you" as the message, I want to get the output of:</p> <pre><code>[[(ASCII values], ['h', 'o', 'w', 'a', 'r'], ['e', 'y', 'o', 'u', 0] </code></pre> <p>But instead I get:</p> <pre><code>[[(ASCII values], ['h', 'o', 'w', 'a', '0'], ['r', 'e', 'y', 'o', 0] </code></pre> <p>Here is the code:</p> <pre><code>def main(): keyword = get_keyword() key_length = get_keyword_length(keyword) message = get_message() ascii_list = ascii_conversion(keyword, key_length) box = encryption_box(ascii_list, message, key_length) print(box) fill_letters(box, message, key_length) print(box) # Gets the keyword to encrypt with. def get_keyword(): keyword = input("Enter the word you'd like to use for encryption (no duplicate letters): ").lower() return keyword # Gets length of keyword def get_keyword_length(keyword): key_length = len(keyword) return key_length # Gets the message to encrypt and removes punctuation and spaces. def get_message(): message = input('Enter the message you want to encrypt: ').lower() message = message.replace("'", "").replace(",", "").replace(".", "").replace("!", "").replace("?", "")\ .replace(" ", "") return message # Converts keyword to ASCII def ascii_conversion(keyword, key_length): ascii_list = [0] * key_length index = 0 for character in keyword: ascii_list[index] = ord(character) index += 1 return ascii_list # Creates 2D list with correct dimensions and fills first row with the ascii numbers. 
def encryption_box(ascii_list, message, key_length): if len(message) % len(ascii_list) != 0: box = [[0] * len(ascii_list) for x in range(len(message)//(len(ascii_list))+2)] else: box = [[0] * len(ascii_list) for x in range(len(message)//(len(ascii_list))+1)] index = 0 for number in ascii_list: box[0][index] = number index += 1 return box # Fill in the message in the remaining encryption box spaces. def fill_letters(box, message, key_length): len_box = len(box) message = list(message) index = 0 for r in range(1, len_box): for c in range(key_length - 1): box[r][c] = message[index] index += 1 main() </code></pre> <p>Looking at this:</p> <pre><code>for r in range(1, len_box): for c in range(key_length - 1): box[r][c] = message[index] index += 1 </code></pre> <p>I feel like eventually box[r][c] will be box[1][4] and correspond to that last element, yet it remains 0. Any help will be greatly appreciated. Thank you.</p>
<p><code>range</code> has an exclusive upper bound, so the -1 can't be there. After that you'll get an index-out-of-range for trying to access positions of the message that aren't there. Gotta stop the loop early when the end of the message is reached.</p> <pre><code>for r in range(1, len_box): for c in range(key_length): if index == len(message): break box[r][c] = message[index] index += 1 </code></pre>
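Run standalone, the corrected bounds produce exactly the layout asked for — simplified here to just the letter-filling step, without the ASCII first row:

```python
message = list("howareyou")
key_length = 5
rows = -(-len(message) // key_length)  # ceiling division

box = [[0] * key_length for _ in range(rows)]
index = 0
for r in range(rows):
    for c in range(key_length):
        if index == len(message):
            break
        box[r][c] = message[index]
        index += 1

print(box)  # -> [['h', 'o', 'w', 'a', 'r'], ['e', 'y', 'o', 'u', 0]]
```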
python|python-3.x|multidimensional-array
1
1,905,165
35,129,908
Django 1.9 + sorl-thumbnail + memcached
<p>I'm configuring <code>sorl-thumbnail</code> and when memcached is running locally I get this error:</p> <pre><code>OperationalError at /groups/1/ no such table: thumbnail_kvstore </code></pre> <p>When memcached isn't running (obviously doesn't work):</p> <pre><code>TypeError at /groups/1/ a bytes-like object is required, not 'str' </code></pre> <p>What's wrong with my configuration? Why is it saying there's no <code>thumbnail_kvstore</code> table? Here are my settings variables. I tried setting the <code>THUMBNAIL_KVSTORE</code> setting but it didn't change anything:</p> <pre><code>CACHES = { 'default': { 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', 'LOCATION': '127.0.0.1:11211', } } THUMBNAIL_DEBUG = True THUMBNAIL_FORMAT = 'PNG' </code></pre>
<p>If just</p> <pre><code>manage.py makemigrations </code></pre> <p>doesn't create any migrations, try</p> <pre><code>manage.py makemigrations thumbnail manage.py migrate </code></pre> <p>This will create migrations for thumbnail and then migrate them. It works for me. I am using Django 1.9 and sorl.thumbnail 12.3.</p>
python|django|sorl-thumbnail
16
1,905,166
26,653,310
Index a matrix by symbol in sympy
<p>I try to index a matrix in a summation like this</p> <pre><code>from sympy import * vx1,vx2,vx3,vx4,vx5, vy1,vy2,vy3,vy4,vy5, = symbols('vx1 vx2 vx3 vx4 vx5 vy1 vy2 vy3 vy4 vy5') vx=Matrix([vx1,vx2,vx3,vx4,vx5]) vy=Matrix([vy1,vy2,vy3,vy4,vy5]) p, n = symbols('p n', integer=True) vx[0] vx[1] vx[2] vx[3] summation(p, (p, 0, 4)) summation(vx[p], (p, 0, 4)) </code></pre> <p>But it seems like sympy cannot do this:</p> <pre><code>NameError: IndexError: Invalid index a[p] </code></pre> <p>Is there a way?</p>
<p>If you want a symbolic index into a Matrix, use MatrixSymbol:</p> <pre><code>In [15]: vx = MatrixSymbol('vx', 1, 4) In [16]: summation(vx[(0, p)], (p, 0, 4)).doit() Out[16]: vx₀₀ + vx₀₁ + vx₀₂ + vx₀₃ + vx₀₄ </code></pre>
python|sympy
5
1,905,167
26,587,566
How can i make sure \n does not register? when it is in a string
<p>I want to get rid of <code>\n</code> from a string, so I am using <code>string.replace("\n", "")</code>, but when I do that it still registers the "<code>\n</code>" as a new line,</p> <p>I'm sure there is a simple solution, but at the moment I am stuck. Thanks in advance.</p> <p>SIDE NOTE: I cannot use <code>strip()</code> because it appears a few times in the middle.</p> <p>Here is the example:</p> <pre><code>stringExample = ["a", "\n", "b", "\n", "G"] x = (str(stringExample)) y = x.replace("\n", "") print(y) </code></pre> <p><code>--&gt; ["a", "\n", "b", "\n", "G"]</code></p>
<p>When you are trying to concatenate the list into a string, you should not cast it to <code>str</code>, as this essentially just wraps it in quotes. You need to join it together like so:</p> <pre><code>x = ''.join(stringExample) </code></pre> <hr> <p>Example:</p> <pre><code>stringExample = ["a", "\n", "b", "\n", "G"] x = ''.join(stringExample) y = x.replace("\n", "") print(y) # abG </code></pre> <p>If you want it in a list like you had at the beginning, just cast <code>y</code> into a <code>list</code>.</p>
python
2
1,905,168
56,573,945
Adding a string outside of JSON in Python json.dump
<p>I'm trying to output a string outside of a JSON object using json.dump in Python. I'm able to successfully output a JSON with the following code:</p> <pre><code>events = [] item = {} allEvents = [] for event in events: #Do a bunch of stuff case = {'Artist': item['Artist'], 'Date': item['Date'], 'EventDate': item['eventDate'], 'Time': item['Time'], 'Venue': item['Venue'], 'Address': item['Address'], 'Coordinates': coordinates, 'ArtistImage': item['artistImage'], 'Genre': item['genre'], 'OtherInfo': item['otherInfo'], 'ArtistBio': item['artistBio']} item[event] = case allEvents.append(case) with open(&quot;events.json&quot;, &quot;w&quot;) as writeJSON: json.dump(item, writeJSON, sort_keys=True) </code></pre> <p>My output is as expected (a JSON):</p> <pre><code>[{&quot;Address&quot;: &quot;581 5th Street, Oakland, California 94607&quot;, &quot;Artist&quot;: &quot;Triangle Man&quot;, &quot;ArtistBio&quot;: &quot;No artist bio available&quot;, &quot;ArtistImage&quot;: &quot;https://assets.bandsintown.com/images/fallbackImage.png&quot;, &quot;Coordinates&quot;: [-122.278385, 37.799161], &quot;Date&quot;: &quot;Wednesday, June 12th, 2019&quot;, &quot;EventDate&quot;: &quot;2019-06-12&quot;, &quot;Genre&quot;: &quot;No genre available&quot;, &quot;OtherInfo&quot;: &quot;No other event info available&quot;, &quot;Time&quot;: &quot;10:00 PM&quot;, &quot;Venue&quot;: &quot;Brix 581&quot;}, {&quot;Address&quot;:.........}] </code></pre> <p>However, I want the output to look like below: <code>&quot;var events= &quot;</code> as a string before the JSON.</p> <p><a href="https://i.stack.imgur.com/nRs6G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nRs6G.png" alt="enter image description here" /></a></p> <p>I've tried:</p> <pre><code>eventsVariable = &quot;var events = &quot; with open(&quot;events.json&quot;, &quot;w&quot;) as writeJSON: json.dump(eventsVariable, item, writeJSON, sort_keys=True) </code></pre> <p>But it gives me an error when I try to 
concatenate a string with a list. Can I do this using json.dump?</p>
<p>Your original approach <code>json.dump(eventsVariable, item, writeJSON, sort_keys=True)</code> is incorrect since as per the <a href="https://docs.python.org/3/library/json.html#json.dump" rel="nofollow noreferrer">json.dump docs</a>, the first element has to be a valid json object, which <code>"var events = "</code> is not</p> <p>You can append your <code>"var events = "</code> string to the json string returned by <code>json.dumps()</code>, and then save that string to your file.</p> <p>You can use <code>string.format</code> or <code>f-strings</code> based on your python version, I have included both examples below</p> <pre><code>#Use f-strings for python &gt;= 3.6 #s = f'var events = {json.dumps(data)}' s = 'var events = {}'.format(json.dumps(data)) with open("events.txt", "w") as fp: fp.write(s) </code></pre>
python|json|list|object
2
1,905,169
45,117,871
Organizing strings in recursive functions - python
<p>I'm trying to split and organize a string in a single function; my goal is to separate lowercase and uppercase characters and then return a new string essentially like so:</p> <pre><code> "lowercasestring" + " " + "uppercasestring". </code></pre> <p>Importantly, all characters must return in the order they were received, but split up. My problem is that I have to do this recursively in a single function (for educational purposes) and I struggle to understand how this is doable without an external function calling the recursive one and then modifying the string.</p> <pre><code> def split_rec(string): if string == '': return "-" #used to separate later elif str.islower(string[0]) or string[0] == "_" or string[0] == ".": #case1 return string[0] + split_rec(string[1:]) elif str.isupper(string[0]) or string[0] == " " or string[0] == "|": #case2 return split_rec(string[1:]) + string[0] else: #discard other return split_rec(string[1:]) def call_split_rec(string): ## Essentially I want to integrate the functionality of this whole function into the recursion mystring = split_rec(string) left, right = mystring.split("-") switch_right = right[::-1] print(left + " " + switch_right) </code></pre> <p>The recursion alone would return:</p> <pre><code> "lowerUPPERcaseCASE" -&gt; "lowercase" + "ESACREPPU" </code></pre> <p>My best attempt at solving this in a single function was to make case2:</p> <pre><code> elif str.isupper(string[-1]) or string[-1] == " " or string[-1] == "|": #case2 return split_rec(string[:-1]) + string[-1] </code></pre> <p>So that the uppercase letters would be added with the last letter first, in order to correctly print the string. The issue here is that I obviously just get stuck when the first character is uppercase and the last one is lowercase.</p> <p>I've spent a lot of time trying to figure out a good solution to this, but I'm unable and there's no help for me to be found. I hope the question is not too stupid - if so feel free to remove it. Thanks!</p>
<p>The easiest way would be to use <a href="https://wiki.python.org/moin/HowTo/Sorting" rel="nofollow noreferrer"><code>sorted</code></a> with a custom key:</p> <pre><code>&gt;&gt;&gt; ''.join(sorted("lowerUPPERcaseCASE" + " ", key=str.isupper)) 'lowercase UPPERCASE' </code></pre> <p>There's really no reason to use any recursive function here. If it's for educational purpose, you could try to find a problem for which it's actually a good idea to write a recursive function (fibonacci, tree parsing, merge sort, ...).</p> <p>As mentioned by @PM2Ring in the comments, this sort works fine here because Python <code>sorted</code> is <a href="https://en.wikipedia.org/wiki/Category:Stable_sorts" rel="nofollow noreferrer">stable</a>: when sorting by case, letters with the same case stay at the same place relative to one another.</p>
python|string|recursion
1
1,905,170
64,734,717
python root finding in integers
<p>Are there tools for optimization in Python where I can choose a target value for a function and get the best parameters, which must be integers?</p> <p>For example, my function is:</p> <pre><code>f(x) = 4*A + B </code></pre> <p>So if I choose 5 as the target value, it will return A=1 and B=1.</p>
<p>Maybe a constraint solver:</p> <pre><code># https://pypi.org/project/python-constraint/ from constraint import * problem = Problem() problem.addVariables([&quot;a&quot;,&quot;b&quot;],range(1,100000)) problem.addConstraint(ExactSumConstraint(5,[4,1])) problem.getSolutions() </code></pre> <p>This gives:</p> <pre><code>[{'a': 1, 'b': 1}] </code></pre>
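If installing python-constraint is not an option, a plain brute-force search over a small integer range gives the same result with no dependencies:

```python
# Find positive integers a, b with 4*a + b == target, no external library.
target = 5
solutions = [(a, b)
             for a in range(1, 100)
             for b in range(1, 100)
             if 4 * a + b == target]
print(solutions)  # -> [(1, 1)]
```

For larger ranges or more variables this scales poorly, which is where a real constraint solver earns its keep.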
python|optimization|convergence
0
1,905,171
64,619,387
How to call the LinkedIn API using Python?
<p>I tried so many methods, but none seem to work. Help me make a connection with LinkedIn using Python. The issue is in generating the access token: I received a CODE, but it doesn't work. I have Python 3.9. Please post a sample of basic code that establishes a connection and gets an access token. Also, which redirectUri do I have to use? Can I use any website link as the redirectUri?</p> <p>I tried to check the API through curl and Postman but didn't get a solution; it says Unauthorized Access. <a href="https://github.com/ozgur/python-linkedin" rel="nofollow noreferrer">https://github.com/ozgur/python-linkedin</a> &lt;--- This is where I got some idea of how to use the API to receive an access token.</p>
<p><strong>First solution</strong> valid for any (including free) applications, it useses so-called <code>3-Legged OAuth 2.0 Authentication</code>:</p> <ol> <li>Login to your account in the browser.</li> <li>Create new application by <a href="https://www.linkedin.com/developer/apps/new" rel="noreferrer">this link</a>.</li> <li>If you already have application you may use it by selecting it <a href="https://www.linkedin.com/developers/apps" rel="noreferrer">here</a> and changing its options if needed.</li> <li>In application credentials copy Client ID and Client Secret, you'll need them later.</li> <li>On your application's server side create Authorization request URL by next code and send/redirect it to client. If your Python code runs locally you may just open this URL in your browser with <code>import webbrowser; webbrowser.open(url)</code> code. Fill in all fields with your values too. There is <code>redirect_uri</code> in the code, this is URL where authorization response is sent back, for locally running script you have to run Python HTTP web server to retrieve result.</li> </ol> <pre><code># Needs: python -m pip install requests import requests, secrets url = requests.Request( 'GET', 'https://www.linkedin.com/oauth/v2/authorization', params = { 'response_type': 'code', # Always should equal to fixed string &quot;code&quot; # ClientID of your created application 'client_id': 'REPLACE_WITH_YOUR_CLIENT_ID', # The URI your users are sent back to after authorization. # This value must match one of the OAuth 2.0 Authorized Redirect # URLs defined in your application configuration. # This is basically URL of your server that processes authorized requests like: # https://your.server.com/linkedin_authorized_callback 'redirect_uri': 'REPLACE_WITH_REDIRECT_URL', # Replace this with your value # state, any unique non-secret randomly generated string like DCEeFWf45A53sdfKef424 # that identifies current authorization request on server side. 
# One way of generating such state is by using standard &quot;secrets&quot; module like below. # Store generated state string on your server for further identifying this authorization session. 'state': secrets.token_hex(8).upper(), # Requested permissions, below is just example, change them to what you need. # List of possible permissions is here: # https://docs.microsoft.com/en-us/linkedin/shared/references/migrations/default-scopes-migration#scope-to-consent-message-mapping 'scope': '%20'.join(['r_liteprofile', 'r_emailaddress', 'w_member_social']), }, ).prepare().url # You may now send this url from server to user # Or if code runs locally just open browser like below import webbrowser webbrowser.open(url) </code></pre> <ol start="6"> <li><p>After user authorized your app by previous URL his browser will be redirected to <code>redirect_uri</code> and two fields <code>code</code> and <code>state</code> will be attached to this URL, <code>code</code> is unique authorization code that you should store on server, <code>code</code> expires after <code>30 minutes</code> if not used, <code>state</code> is a copy of state from previous code above, this state is like unique id of your current authorization session, use same state string only once and generate it randomly each time, also state is not a secret thing because you send it to user inside authorization URL, but should be unique and quite long. 
Example of full redirected URL is <code>https://your.server.com/linkedin_authorized_callback?code=987ab12uiu98onvokm56&amp;state=D5B1C1348F110D7C</code>.</p> </li> <li><p>Next you have to exchange <code>code</code> obtained previously to <code>access_token</code> by next code, next code should be run on your server or where your application is running, because it uses <code>client_secret</code> of your application and this is a secret value, you shouldn't show it to public, never share <code>ClientSecret</code> with anyone except maybe some trusted people, because such people will have ability to pretend (fake) to be your application while they are not.</p> </li> </ol> <pre><code># Needs: python -m pip install requests import requests access_token = requests.post( 'https://www.linkedin.com/oauth/v2/accessToken', params = { 'grant_type': 'authorization_code', # This is code obtained on previous step by Python script. 'code': 'REPLACE_WITH_CODE', # This should be same as 'redirect_uri' field value of previous Python script. 'redirect_uri': 'REPLACE_WITH_REDIRECT_URL', # Client ID of your created application 'client_id': 'REPLACE_WITH_YOUR_CLIENT_ID', # Client Secret of your created application 'client_secret': 'REPLACE_WITH_YOUR_CLIENT_SECRET', }, ).json()['access_token'] print(access_token) </code></pre> <ol start="8"> <li><p><code>access_token</code> obtained by previous script is valid for <code>60 days</code>! So quite long period. If you're planning to use your application for yourself only or your friends then you can just pre-generate manually once in two months by hands several tokens for several people without need for servers.</p> </li> <li><p>Next use <code>access_token</code> for any API calls on behalf of just authorized above user of LinkedIn. Include <code>Authorization: Bearer ACCESS_TOKEN</code> HTTP header in all calls. 
Example of one such API code below:</p> </li> </ol> <pre><code>import requests print(requests.get( 'https://api.linkedin.com/v2/jobs', params = { # Any API params go here }, headers = { 'Authorization': 'Bearer ' + access_token, # Any other needed HTTP headers go here }, ).json()) </code></pre> <ol start="10"> <li>More details <a href="https://docs.microsoft.com/en-us/linkedin/shared/authentication/authorization-code-flow?context=linkedin/context" rel="noreferrer">can be read here</a>. Regarding how your application is organized, there are 3 options: <ul> <li>Your application is running fully on remote server, meaning both authentication and running application (API calls) are done on some dedicated remote server. Then there is no problem with security, server doesn't share any secrets like <code>client_secret</code>, <code>code</code>, <code>access_token</code>.</li> <li>Your application is running locally on user's machine while authentication is runned once in a while by your server, also some other things like storing necessary data in DataBase can be done by server. Then your server doesn't need to share <code>client_secret</code>, <code>code</code>, but shares <code>access_token</code> which is sent back to application to user's machine. It is also OK, then your server can keep track of what users are using your application, also will be able to revoke some or all of <code>access_token</code>s if needed to block user.</li> <li>Your application is fully run on local user's machine, no dedicated server is used at all. In this case all of <code>client_secret</code>, <code>code</code>, <code>access_token</code> are stored on user's machine. In this case you can't revoke access to your application of some specific users, you can only revoke all of them by regenerating <code>client_secret</code> in your app settings. Also you can't track any work of your app users (although maybe there is some usage statistics in your app settings/info pages). 
In this case any user can look into your app code and copy <code>client_secret</code>, unless you compile Python to some <code>.exe</code>/<code>.dll</code>/<code>.so</code> and encrypt you client secret there. If anyone got <code>client_secret</code> he can pretend (fake) to be your application meaning that if you app contacts other users somehow then he can try to authorize other people by showing your app interface while having some other fraudulent code underneath, basically your app is not that secure or trusted anymore. Also local code can be easily modified so you shouldn't trust your application to do exactly your code. Also in order to authorize users like was done in previous steps <code>5)-7)</code> in case of local app you have to start Python HTTP Server to be able to retrieve redirected results of step <code>5)</code>.</li> </ul> </li> </ol> <hr /> <p>Below is a <strong>second solution</strong> valid only if your application is a part of <code>LinkedIn Developer Enterprise Products</code> paid subscription, also then you need to <code>Enable Client Credentials Flow</code> in your application settings, next steps uses so-called <code>2-Legged OAuth 2.0 Authentication</code>:</p> <ol> <li>Login to your account in the browser.</li> <li>Create new application by <a href="https://www.linkedin.com/developer/apps/new" rel="noreferrer">this link</a>.</li> <li>If you already have application you may use it by selecting it <a href="https://www.linkedin.com/developers/apps" rel="noreferrer">here</a> and changing its options if needed.</li> <li>In application credentials copy ClientID and ClientSecret, you'll need them later.</li> <li>Create AccessToken by next Python code (put correct client id and client secret), you should run next code only on your server side or on computers of only trusted people, because code uses ClientSecret of your application which is a secret thing and shouldn't be showed to public:</li> </ol> <pre><code># Needs: python -m pip install 
requests import requests access_token = requests.post( 'https://www.linkedin.com/oauth/v2/accessToken', params = { 'grant_type': 'client_credentials', 'client_id': 'REPLACE_WITH_YOUR_CLIENT_ID', 'client_secret': 'REPLACE_WITH_YOUR_CLIENT_SECRET', }, ).json()['access_token'] print(access_token) </code></pre> <ol start="6"> <li>Copy <code>access_token</code> from previous response, it expires after 30 minutes after issue so you need to use previous script often to gain new access token.</li> <li>Now you can do any API requests that you need using this token, like in code below (<code>access_token</code> is taken from previous steps):</li> </ol> <pre><code>import requests print(requests.get( 'https://api.linkedin.com/v2/jobs', params = { # Any API params go here }, headers = { 'Authorization': 'Bearer ' + access_token, # Any other needed HTTP headers go here }, ).json()) </code></pre> <ol start="8"> <li>More details can be read <a href="https://docs.microsoft.com/en-us/linkedin/shared/authentication/client-credentials-flow" rel="noreferrer">here</a> or <a href="https://www.linkedin.com/developers/" rel="noreferrer">here</a>.</li> </ol>
python|api|authorization|linkedin|linkedin-api
11
1,905,172
61,224,070
How to use preceding sibling for XML with xPath in Python?
<p>I have an XML structured like this:</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;pages&gt; &lt;page id="1" bbox="0.000,0.000,462.047,680.315" rotate="0"&gt; &lt;textbox id="0" bbox="179.739,592.028,261.007,604.510"&gt; &lt;textline bbox="179.739,592.028,261.007,604.510"&gt; &lt;text font="NUMPTY+ImprintMTnum" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.482"&gt;C&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum-it" bbox="192.745,592.218,199.339,603.578" ncolour="0" size="12.333"&gt;A&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum-it" bbox="193.745,592.218,199.339,603.578" ncolour="0" size="12.333"&gt;P&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum-it" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.333"&gt;I&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.482"&gt;T&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.482"&gt;O&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.482"&gt;L&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.482"&gt;O&lt;/text&gt; &lt;text&gt;&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.482"&gt;I&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.482"&gt;I&lt;/text&gt; &lt;text font="NUMPTY+ImprintMTnum" bbox="191.745,592.218,199.339,603.578" ncolour="0" size="12.482"&gt;I&lt;/text&gt; &lt;text&gt;&lt;/text&gt; &lt;/textline&gt; &lt;/textbox&gt; &lt;/page&gt; &lt;/pages&gt; </code></pre> <p>Attribute bbox in text tag has four values, and I need to have the difference of the first bbox value of an element and its preceding one. In other words, the distance between the first two bboxes is 1. 
In the following loop, I need to find the preceding sibling of the bbox attribute value I take in order to calculate the distance between the two.</p> <pre><code>def wrap(line, idxList):
    if len(idxList) == 0:
        return                       # No elements to wrap
    # Take the first element from the original location
    idx = idxList.pop(0)             # Index of the first element
    elem = removeByIdx(line, idx)    # The indicated element
    # Create "newline" element with "elem" inside
    nElem = E.newline(elem)
    line.insert(idx, nElem)          # Put it in place of "elem"
    while len(idxList) &gt; 0:          # Process the rest of index list
        # Value not used, but must be removed
        idxList.pop(0)
        # Remove the current element from the original location
        currElem = removeByIdx(line, idx + 1)
        nElem.append(currElem)       # Append it to "newline"

for line in root.iter('textline'):
    idxList = []
    for elem in line:
        bbox = elem.attrib.get('bbox')
        if bbox is not None:
            tbl = bbox.split(',')
            distance = float(tbl[2]) - float(tbl[0])
        else:
            distance = 100           # "Too big" value
        if distance &gt; 10:
            par = elem.getparent()
            idx = par.index(elem)
            idxList.append(idx)
        else:
            # "Wrong" element, wrap elements "gathered" so far
            wrap(line, idxList)
            idxList = []
    # Process "good" elements without any "bad" after them, if any
    wrap(line, idxList)

#print(etree.tostring(root, encoding='unicode', pretty_print=True))
</code></pre> <p>I tried with XPath like this:</p> <pre><code>for x in tree.xpath("//text[@bbox&lt;preceding::text[1]/@bbox+11]"):
    print(x)
</code></pre> <p>But it returns nothing. Is my path wrong, and how can I insert it in the loop?</p>
<p>Python uses the very old XPath 1.0 standard. In XPath 1.0, the "&lt;" operator always converts its operands to numbers. So when you write</p> <pre><code>//text[@bbox &lt; preceding::text[1]/@bbox + 11]
</code></pre> <p>you are performing a numeric comparison and a numeric addition on <code>@bbox</code> values.</p> <p>But <code>@bbox</code> is not a number, it is a comma-separated list of four numbers:</p> <pre><code>179.739,592.028,261.007,604.510
</code></pre> <p>Converting that to a number produces NaN (not-a-number), and <code>NaN &lt; NaN</code> returns false.</p> <p>To do anything useful with a structured attribute value like this, you really need XPath 2.0 or later.</p>
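Since XPath 1.0 cannot take apart a structured attribute value, one workaround (a sketch using the stdlib ElementTree with made-up sample data; lxml would work the same way) is to select the nodes with a path expression and do the numeric work in Python:

```python
import xml.etree.ElementTree as ET

xml = """<textline>
  <text bbox="179.739,592.028,261.007,604.510">C</text>
  <text bbox="191.745,592.218,199.339,603.578">A</text>
  <text bbox="193.745,592.218,199.339,603.578">P</text>
</textline>"""

root = ET.fromstring(xml)
# First bbox coordinate of every <text> element, parsed as a float in Python
xs = [float(t.get("bbox").split(",")[0]) for t in root.findall(".//text")]
# Distance between each element and its preceding sibling
gaps = [b - a for a, b in zip(xs, xs[1:])]
```

The comparison against a threshold (e.g. `gap < 11`) can then be done on plain floats instead of inside the XPath predicate.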
python|xml|xpath|tags|lxml
1
1,905,173
60,540,542
How can I apply the same exceptions to multiple functions without copying the code?
<p>I have two different functions that catch the same exceptions, for example:</p> <pre><code>def func1():
    try:
        # ...do something
    except FileNotFoundError as e:
        print(e)
    except NotADirectoryError as e:
        print(e)

def func2():
    try:
        # ...do something else
    except FileNotFoundError as e:
        print(e)
    except NotADirectoryError as e:
        print(e)
</code></pre> <p>How can I avoid writing these identical exception handlers for each function?</p> <p>My ideal scenario would be to have it like this:</p> <pre><code>def func1():
    # ... do something while catching those exceptions without explicitly stating them here.

def func2():
    # ... do something while catching those exceptions without explicitly stating them here.
</code></pre>
<p>You could write a decorator.</p> <pre><code>from functools import wraps

def just_report_file_errors(fn):
    @wraps(fn)
    def decorator(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except FileNotFoundError as e:
            print(e)
        except NotADirectoryError as e:
            print(e)
    return decorator

@just_report_file_errors
def func1():
    pass  # do thing...

@just_report_file_errors
def func2():
    pass  # do thing...
</code></pre>
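A quick sanity check of the decorator approach (same idea, with the two except clauses collapsed into one tuple for brevity, and a hypothetical failing function):

```python
from functools import wraps

def just_report_file_errors(fn):
    @wraps(fn)
    def decorator(*args, **kwargs):
        try:
            return fn(*args, **kwargs)
        except (FileNotFoundError, NotADirectoryError) as e:
            print(e)  # report the error instead of propagating it
    return decorator

@just_report_file_errors
def func1():
    raise FileNotFoundError("missing.txt")

result = func1()  # prints "missing.txt" instead of raising
```

Thanks to `functools.wraps`, the decorated function keeps its original `__name__` and docstring.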
python
2
1,905,174
56,401,685
Object parameter in python class declaration
<h3>Concepts of objects in Python classes</h3> <p>While reading about old style and new style classes in Python, the term object occurs many times. What exactly is an object? Is it a base class, simply an object, or a parameter?</p> <p>For example, the new style for creating a class in Python:</p> <pre><code>class Class_name(object):
    pass
</code></pre> <p>If object is just another class which is the base class for Class_name (inheritance), then what will be termed as an object in Python?</p>
<p>From <a href="https://docs.python.org/2/library/functions.html#object" rel="nofollow noreferrer">[Python 2.Docs]: Built-in Functions - class object</a> (<strong>emphasis</strong> is mine):</p> <blockquote> <p>Return a new featureless object. <strong><a href="https://docs.python.org/2/library/functions.html#object" rel="nofollow noreferrer">object</a> is a base for all new style classes</strong>. It has the methods that are common to all instances of new style classes.</p> </blockquote> <p>You could also check <a href="https://www.python.org/doc/newstyle" rel="nofollow noreferrer">[Python]: New-style Classes</a> (and referenced <em>URL</em>s) for more details.</p> <blockquote> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import sys
&gt;&gt;&gt; sys.version
'2.7.10 (default, Mar 8 2016, 15:02:46) [MSC v.1600 64 bit (AMD64)]'
&gt;&gt;&gt;
&gt;&gt;&gt; class OldStyle():
...     pass
...
&gt;&gt;&gt;
&gt;&gt;&gt; class NewStyle(object):
...     pass
...
&gt;&gt;&gt;
&gt;&gt;&gt; dir(OldStyle)
['__doc__', '__module__']
&gt;&gt;&gt;
&gt;&gt;&gt; dir(NewStyle)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__', '__getattribute__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__']
&gt;&gt;&gt;
&gt;&gt;&gt; old_style = OldStyle()
&gt;&gt;&gt; new_style = NewStyle()
&gt;&gt;&gt;
&gt;&gt;&gt; type(old_style)
&lt;type 'instance'&gt;
&gt;&gt;&gt;
&gt;&gt;&gt; type(new_style)
&lt;class '__main__.NewStyle'&gt;
</code></pre> </blockquote> <p>In the above example, <em>old_style</em> and <em>new_style</em> are instances (or may be referred to as <em><strong>object</strong></em>s), so I guess the answer to your question is: it depends on the context.</p>
python|class
2
1,905,175
69,430,624
"reshape" numpy array of (N, 2) shape into (N, 2, 2) where each column (size 2) become a diag (2,2) block?
<p>Is there an efficient way to do this? For example I have</p> <pre><code>[[1, 2, 3],
 [4, 5, 6]]
</code></pre> <p>I would like to get:</p> <pre><code>[[[1, 0],
  [0, 4]],

 [[2, 0],
  [0, 5]],

 [[3, 0],
  [0, 6]]]
</code></pre>
<p>For large arrays I recommend <code>np.einsum</code> as follows:</p> <pre><code>&gt;&gt;&gt; data
array([[1, 2, 3],
       [4, 5, 6]])
&gt;&gt;&gt; out = np.zeros((*reversed(data.shape), 2), data.dtype)
&gt;&gt;&gt; np.einsum(&quot;...ii-&gt;...i&quot;, out)[...] = data.T
&gt;&gt;&gt; out
array([[[1, 0],
        [0, 4]],

       [[2, 0],
        [0, 5]],

       [[3, 0],
        [0, 6]]])
</code></pre> <p><code>einsum</code> creates a writable strided view of the memory locations holding the diagonal elements. This is about as efficient as it gets in numpy.</p>
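The einsum trick can be checked end-to-end as a plain script (assuming numpy is installed):

```python
import numpy as np

data = np.array([[1, 2, 3], [4, 5, 6]])
# Target shape: one 2x2 block per column of data
out = np.zeros((*reversed(data.shape), 2), data.dtype)  # shape (3, 2, 2)
# "...ii->...i" yields a writable view of each 2x2 block's diagonal
np.einsum("...ii->...i", out)[...] = data.T
```

Writing through the view fills only the diagonals, leaving the off-diagonal zeros untouched.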
python|arrays|numpy|reshape|diagonal
2
1,905,176
69,376,478
return each string representation in a list of user-defined objects in a new line
<p>I have a list where each element is an instance of a user-defined class, <code>class_object_list</code>. I wish to return each string representation of that list on a new line, and the code needs to be wrapped inside an f-string (or work inside a return statement).</p> <p>Or basically, put the equivalent of the following code inside a return statement</p> <pre><code>for i in class_object_list:
    print(i)
</code></pre> <p>I've tried <code>''.join([str(i) for i in class_object_list])</code> but it doesn't print each string representation on a new line.</p> <p>Also tried <code>nl = '\n'</code>, <code>f&quot;{nl.join([*class_object_list])}&quot;</code> but it gave a <code>TypeError: sequence item 0: expected str instance</code> error.</p> <p>And tried <code>print(*class_object_list, sep='\n')</code> but it only works with a print statement</p>
<p>Use:</p> <pre><code>'\n'.join([str(i) for i in class_object_list]) </code></pre> <p>Or:</p> <pre><code>'\n'.join(map(str, class_object_list)) </code></pre>
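For instance, with a hypothetical user-defined class that provides a <code>__str__</code> representation:

```python
class Item:
    """Hypothetical user-defined class with a __str__ representation."""
    def __init__(self, name):
        self.name = name

    def __str__(self):
        return f"Item({self.name})"

class_object_list = [Item("a"), Item("b"), Item("c")]
# str() is applied to each object, then the results are joined with newlines
joined = "\n".join(map(str, class_object_list))
```

`joined` can now be returned directly or embedded in an f-string, since it is an ordinary string.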
python|oop
3
1,905,177
69,394,263
How to view kivy application logs in pc
<p>I'm a beginner in Python and also in kivy (and kivymd). I've finally managed to create an app. It works fine on PC. Then I converted it to an APK with <code>BUILDOZER</code> in Google Colab, and I also filled out the <code>buildozer.spec</code> file. But when I install and run it on my Android phone (Vivo Y93), it shows the default kivy loading screen and crashes within 2 seconds. So I badly need a kivy log viewer. I searched on Google about it, but there are only Mac and Linux programs. Is there any log viewer software you know well, or any built-in kivy function to save the log file to a separate folder? I can't give you a screenshot at this time, but you can get <code>main.py</code> <a href="https://pastebin.com/dl/CaMsq3wJ" rel="nofollow noreferrer">here</a> if you need it.</p>
<p>Try to delete the &quot;.buildozer&quot; folder in your project folder, then edit your &quot;buildozer.spec&quot; file and add the requirements:</p> <pre><code>requirements = kivy==2.0.0,kivymd==0.104.1,python3,pyjnius,plyer,requests,urllib3,chardet,idna,pip,Image,PIL,watchdog </code></pre> <p>And then compile your app once again. That's what I use most of the time for my apps. The issue may also be the &quot;MDNavigationLayout&quot; in the kv lang section, but you will have to review both the requirements and the MDNavigationLayout.</p>
python|python-3.x|kivy|kivymd
1
1,905,178
55,443,028
AttributeError: module 'graph_tool.draw' has no attribute 'draw_hierarchy' is returned when running my code, which is not true
<p>I'm trying to run a script that uses graph-tool, and the code returns:</p> <pre><code>/usr/lib/python3/dist-packages/graph_tool/all.py:40: RuntimeWarning: Error importing draw module, proceeding nevertheless: No module named 'cairo._cairo'
  warnings.warn(msg, RuntimeWarning)
Nuclear_Overhauser_effect
['the', 'nuclear', 'overhauser', 'effect', 'noe', 'is', 'the', 'transfer', 'of', 'nuclear']
Traceback (most recent call last):
  File "/home/qhama/Desktop/hSBM_Topicmodel/graphtools_tut.py", line 39, in &lt;module&gt;
    model.plot(filename='tmp.png', nedges=1000)
  File "/home/qhama/Desktop/hSBM_Topicmodel/sbmtm.py", line 183, in plot
    subsample_edges=nedges, hshortcuts=1, hide=0)
  File "/usr/lib/python3/dist-packages/graph_tool/inference/nested_blockmodel.py", line 934, in draw
    return graph_tool.draw.draw_hierarchy(self, **kwargs)
AttributeError: module 'graph_tool.draw' has no attribute 'draw_hierarchy'
</code></pre> <p>I tried reinstalling cairo and every dependency.</p> <pre><code># Creating an instance of the sbtm-class
model = sbmtm()

# We have to create the word document network from the corpus
model.make_graph(texts, documents=titles)

gt.seed_rng(32)
model.fit()

# Plot the result
model.plot(filename='tmp.png', nedges=1000)
model.topics(l=1, n=20)
</code></pre>
<p>Try importing all graph_tool submodules before running your code and it might work. It worked for me.</p> <pre><code>import graph_tool.all as gt </code></pre>
python|graph-tool
1
1,905,179
55,174,600
What's the most efficient way to parse this XML sitemap with Python?
<p>I have the following sitemap that I am trying to parse:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"&gt;
  &lt;url&gt;
    &lt;loc&gt;https://www.example.com/examplea&lt;/loc&gt;
    &lt;priority&gt;0.5&lt;/priority&gt;
    &lt;lastmod&gt;2019-03-14&lt;/lastmod&gt;
    &lt;changefreq&gt;daily&lt;/changefreq&gt;
  &lt;/url&gt;
  &lt;url&gt;
    &lt;loc&gt;https://www.example.com/exampleb&lt;/loc&gt;
    &lt;priority&gt;0.5&lt;/priority&gt;
    &lt;lastmod&gt;2019-03-14&lt;/lastmod&gt;
    &lt;changefreq&gt;daily&lt;/changefreq&gt;
  &lt;/url&gt;
&lt;/urlset&gt;
</code></pre> <p>What's the fastest way to obtain the URL links within the loc tags using Python?</p> <p>I tried using ElementTree, but I think it didn't work because of namespaces.</p> <p>I need to get "<a href="https://www.example.com/examplea" rel="nofollow noreferrer">https://www.example.com/examplea</a>" and "<a href="https://www.example.com/exampleb" rel="nofollow noreferrer">https://www.example.com/exampleb</a>"</p>
<pre class="lang-py prettyprint-override"><code>import re

# Avoid shadowing the built-in str; keep the sitemap in a plain string
text = """
&lt;?xml version="1.0" encoding="UTF-8"?&gt;
&lt;urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"&gt;
  &lt;url&gt;
    &lt;loc&gt;https://www.example.com/examplea&lt;/loc&gt;
    &lt;priority&gt;0.5&lt;/priority&gt;
    &lt;lastmod&gt;2019-03-14&lt;/lastmod&gt;
    &lt;changefreq&gt;daily&lt;/changefreq&gt;
  &lt;/url&gt;
  &lt;url&gt;
    &lt;loc&gt;https://www.example.com/exampleb&lt;/loc&gt;
    &lt;priority&gt;0.5&lt;/priority&gt;
    &lt;lastmod&gt;2019-03-14&lt;/lastmod&gt;
    &lt;changefreq&gt;daily&lt;/changefreq&gt;
  &lt;/url&gt;
&lt;/urlset&gt;
"""

# Non-greedy group pulls out the body of every &lt;loc&gt; element
url = re.findall("&lt;loc&gt;(.*?)&lt;/loc&gt;", text)
</code></pre>
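For completeness, the namespace issue mentioned in the question can also be handled with the stdlib parser by qualifying the tag with the sitemap namespace (a sketch with the same sample data):

```python
import xml.etree.ElementTree as ET

sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/examplea</loc></url>
  <url><loc>https://www.example.com/exampleb</loc></url>
</urlset>"""

# Map a prefix of our choosing to the default namespace of the document
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap)
urls = [loc.text for loc in root.findall(".//sm:loc", ns)]
```

Without the namespace mapping, `findall(".//loc")` finds nothing, which is the usual reason ElementTree "doesn't work" on sitemaps.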
python|xml|sitemap
1
1,905,180
55,351,647
Why do I get this error ('int' object is not iterable) when I initialize two variables at once?
<p>I'm a beginner in Python, following a book to practice. In the book, the author uses this code</p> <pre><code>s, k = 0 </code></pre> <p>but I get the error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
TypeError: 'int' object is not iterable
</code></pre> <p>I want to know what happened here.</p>
<p>You are asking to initialize two variables <code>s</code> and <code>k</code> using a single int object <code>0</code>, which of course is not iterable.</p> <p>The correct syntax is:</p> <pre><code>s, k = 0, 0 </code></pre> <p><strong>Where</strong></p> <pre><code>s, k = 0, 1 </code></pre> <p>would assign <code>s = 0</code> and <code>k = 1</code>.</p> <blockquote> <p>Notice how each <code>int</code> object on the right is assigned to the corresponding <code>var</code> on the left.</p> </blockquote> <p><strong>OR</strong></p> <pre><code>s, k = [0 for _ in range(2)]

print(s)  # 0
print(k)  # 0
</code></pre>
python-2.7
1
1,905,181
57,364,954
How to use multiple conditions in different columns to update the new rows values in python?
<p>This is the current dataframe:</p> <pre><code> id = ['793601486525702000','793601486525702000','793601710614802000','793601355214561000','793601355214561000','793601355214561000','793601355214561000','788130215436230000','788130215436230000','788130215436230000','788130215436230000','788130215436230000'] time = ['11/1/2016 16:53','11/1/2016 16:53','11/1/2016 16:52','11/1/2016 16:55','11/1/2016 16:53','11/1/2016 16:53','11/1/2016 16:51','11/1/2016 3:09','11/1/2016 3:04','11/1/2016 2:36','11/1/2016 2:08','11/1/2016 0:28'] rank = ['2','1','1','4','3','2','1','5','4','3','2','1'] flag =['c_reply','c_start','u_start','u_reply','c_reply','c_reply','u_start','c_reply','c_reply','u_reply','u_reply','u_start'] df = pd.DataFrame({"id": id, "time": time, "rank": rank, "flag": flag}) id time rank flag . . 793601486525702000 11/1/2016 16:53 2 c_reply 793601486525702000 11/1/2016 16:53 1 c_start 793601710614802000 11/1/2016 16:52 1 u_start 793601355214561000 11/1/2016 16:55 4 u_reply 793601355214561000 11/1/2016 16:53 3 c_reply 793601355214561000 11/1/2016 16:53 2 c_reply 793601355214561000 11/1/2016 16:51 1 u_start 788130215436230000 11/1/2016 3:09 5 c_reply 788130215436230000 11/1/2016 3:04 4 c_reply 788130215436230000 11/1/2016 2:36 3 u_reply 788130215436230000 11/1/2016 2:08 2 u_reply 788130215436230000 11/1/2016 0:28 1 u_start . . </code></pre> <p>My dataset has thousands of rows.<br> The column 'id': One id might have multiple rows/records. The rows have the same id means they are in the same group.<br> The column 'rank' is arranged by the chronological order of the same group of id. 
</p> <p>I would like to use a loop or function to create two new columns 'reply' and 'reply_time' based on multiple columns: 'id', 'rank', 'time', and 'flag' in my dataframe.<br> Step 1: Select rows in the same id group (group by id column)<br> Step 2: Update 'reply' column value:The conditions I would like to set are as follows: </p> <p>value '0' : rank = '1' and flag = 'u_start' and no 'c_reply' in flag column<br> value '1' : rank = '1' and flag = 'u_start' and has 'c_reply' in flag column<br> value '2' : the first/earliest c_reply in flag column. (if there's multiple c_reply, list the earliest c_reply (the smaller value in rank column))<br> value '3' : If the above conditions aren't met, the rows should be assigned to this category, including (1)rank = '1' and flag = 'c_start' OR (2)rank >= '2' and flag = 'u_reply' OR (3)rank >= '2' and flag = 'c_reply' and not the first c_reply in flag column OR (4) rank >= '2' and flag = 'c_reply' and no 'u_start' in flag column </p> <p>Step 3: Update 'reply_time' column value:The conditions I would like to set are as follows:<br> value 'time': rank = '1' and flag = 'u_start' and has 'c_reply' in flag column, list the first/earliest 'c_reply' time.<br> value 'na': If the above conditions aren't met, the rows should be assigned to 'na'. 
</p> <p>The target output would look something like this:</p> <pre><code> id time rank flag reply reply_time 793601486525702000 11/1/2016 16:53 2 c_reply 3 na 793601486525702000 11/1/2016 16:53 1 c_start 3 na 793601710614802000 11/1/2016 16:52 1 u_start 0 na 793601355214561000 11/1/2016 16:55 4 u_reply 3 na 793601355214561000 11/1/2016 16:53 3 c_reply 3 na 793601355214561000 11/1/2016 16:53 2 c_reply 2 na 793601355214561000 11/1/2016 16:51 1 u_start 1 11/1/2016 16:53 788130215436230000 11/1/2016 3:09 5 c_reply 3 na 788130215436230000 11/1/2016 3:04 4 c_reply 2 na 788130215436230000 11/1/2016 2:36 3 u_reply 3 na 788130215436230000 11/1/2016 2:08 2 u_reply 3 na 788130215436230000 11/1/2016 0:28 1 u_start 1 11/1/2016 3:04 </code></pre> <p>It seems like a simple question however I couldn't find it anywhere.<br> I used excel to do the manual coding now but I think there should be a faster way to solve this by using python.<br> Any help is much appreciated. Thanks a lot!</p>
<p>Took a bit longer than expected. I don't have enough time for your second question (you should ask only one question when asking in SO, anyways), so I'll help you until step 2:</p> <pre><code>import pandas as pd import numpy as np id = ['793601486525702000','793601486525702000','793601710614802000','793601355214561000','793601355214561000','793601355214561000','793601355214561000','788130215436230000','788130215436230000','788130215436230000','788130215436230000','788130215436230000'] time = ['11/1/2016 16:53','11/1/2016 16:53','11/1/2016 16:52','11/1/2016 16:55','11/1/2016 16:53','11/1/2016 16:53','11/1/2016 16:51','11/1/2016 3:09','11/1/2016 3:04','11/1/2016 2:36','11/1/2016 2:08','11/1/2016 0:28'] rank = ['2','1','1','4','3','2','1','5','4','3','2','1'] flag =['c_reply','c_start','u_start','u_reply','c_reply','c_reply','u_start','c_reply','c_reply','u_reply','u_reply','u_start'] df = pd.DataFrame({"id": id, "time": time, "rank": rank, "flag": flag}) </code></pre> <p>Let's start with the hardest condition:</p> <pre><code>ids_c3 = pd.DataFrame(df[df.flag=='c_reply'].groupby('id')['rank'].min()) ids_c3['reply'] = 2 df= df.merge(ids_c3, on=['id','rank'], how='left') </code></pre> <p>First, we found id's that have <code>c_reply</code> and obtained the minimum <code>rank</code> of those id's. Then turned into a dataFrame, and marked with 2. Then I merged it with the original dataframe to create the <code>reply</code> column. Now we're missing number 0, 1 and 3.</p> <p>For numbers 1 and 0: </p> <pre><code>df['is_c_reply'] = df.groupby('id').flag.transform(lambda x: x.eq('c_reply').any()) c1= (df['rank']=='1') &amp; (df.flag=='u_start') &amp; (df.is_c_reply==0) c2= (df['rank']=='1') &amp; (df.flag=='u_start') &amp; (df.is_c_reply==1) df['reply'] = np.select([c1,c2],[0,1], default=df.reply) </code></pre> <p>We wrote the conditions you specified: <code>c1</code> for <code>0</code> and <code>c2</code> for <code>1</code>. 
Then we used <code>np.select()</code> to fill the reply column.</p> <p>Now we're only missing <code>3</code>. As stated, everything else is a 3, so you just <code>fillna()</code>:</p> <pre><code>df.reply = df.reply.fillna(3) </code></pre> <p>We're done!</p> <p>There are possibly faster ways to do this, though.</p>
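The `np.select()` step, in isolation (a minimal sketch with made-up data, assuming pandas and numpy are available): each condition is paired with a value, and rows matching none of them get the default.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"rank": ["1", "2", "1"],
                   "flag": ["u_start", "c_reply", "c_start"]})
c1 = (df["rank"] == "1") & (df["flag"] == "u_start")
c2 = (df["rank"] == "1") & (df["flag"] == "c_start")
# Rows matching c1 get 0, rows matching c2 get 3, everything else the default
df["reply"] = np.select([c1, c2], [0, 3], default=-1)
```

Conditions are evaluated in order, so the first matching condition wins for each row.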
python|python-3.x|pandas|dataframe
1
1,905,182
57,318,680
Calculate age at 100 in Python, only goes to 99 for odd numbers?
<p>I have an assignment to use a for loop to display the user's age and the year, increasing by 2 until the user turns 100 or greater. Since the assignment says greater it is OK as it is, only worth 4 pts so no big deal. But I would like to learn how to stop it on 100 if the user's age is an odd number such as mine, 57, which goes to 99 or 101.</p> <p>I've tried several if statements with no luck but I'm a beginner and old!</p> <pre><code>name = input("May I have your name?"" ")
age = int(input("Can I ask how old you are?"" "))
year = 2019

for age in range(age, 101, 2):
    print("In {0} you will be {1} years old, {2}!".format(year, age, name))
    year += 2
</code></pre> <p>If there is an easy way to stop it at 100 I would like to understand how; I'm not asking anyone to give me the answer, just instructions.</p>
<h3>Boundary check</h3> <p>As mentioned by other answers, the <code>range()</code> function with <code>2</code> as the third argument will only ever increment the <code>age</code> in multiples of <code>2</code>. It is also exclusive of the last value. If I want to output <code>100</code> but the <code>range()</code> function only gives me <code>101</code>, the most straightforward way is to add a conditional to check for <code>101</code> and change it to <code>100</code>.</p> <p>Code example below, stop reading if you want to try it for yourself.</p> <p>.<br> ..<br> ...</p> <h3>Conditional:</h3> <pre class="lang-py prettyprint-override"><code>if age == 101:
    age = 100
</code></pre> <h3>Full code:</h3> <pre class="lang-py prettyprint-override"><code>name = input("May I have your name? ")
age = int(input("Can I ask how old you are? "))
year = 2019

for age in range(age, 102, 2):
    if age == 101:
        age = 100
        year -= 1
    print("In {0} you will be {1} years old, {2}!".format(year, age, name))
    year += 2
</code></pre>
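The boundary behaviour can be verified without `input()` (a sketch with a fixed, made-up name and age, collecting the lines instead of printing them):

```python
name, age, year = "Pat", 57, 2019
lines = []
for age in range(age, 102, 2):
    if age == 101:
        # Clamp the overshoot: 101 becomes 100, one year earlier
        age = 100
        year -= 1
    lines.append("In {0} you will be {1} years old, {2}!".format(year, age, name))
    year += 2
```

The last collected line shows the loop stopping exactly at 100 rather than 99 or 101.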
python|python-3.x
3
1,905,183
58,476,857
How to match & replace multiple strings with regex in Python
<p>I am trying to replace some text in Python with regex.</p> <p>My text looks like this:</p> <pre><code>WORKGROUP 1. John Doe ID123, Jane Smith ID456, Ohe Keedoke ID7890
Situation paragraph 1

WORKGROUP 2. John Smith ID321, Jane Doe ID654
Situation paragraph 2
</code></pre> <p>What I am trying to do is put the names in double square brackets and remove the IDs so that it will end up looking like this.</p> <pre><code>WORKGROUP 1. [[John Doe]], [[Jane Smith]], [[Ohe Keedoke]]
Situation paragraph 1

WORKGROUP 2. [[John Smith]], [[Jane Doe]]
Situation paragraph 2
</code></pre> <p>So far I have this.</p> <pre><code>re.sub(r"(WORKGROUP\s\d\.\s)",r"\1[[")
re.sub(r"(WORKGROUP\s\d\..+?)(?:\s\b\w+\b),(?:\s)(.+\n)",r"\1]], [[\2")
re.sub(r"(WORKGROUP\s\d\..+?)(?:\s\b\w+\b)(\n)",r"\1]]\2")
</code></pre> <p>This works for groups with two people (WORKGROUP 2) but leaves all the IDs except the first and last persons' if there are more than two. So WORKGROUP 1 ends up looking like this.</p> <pre><code>WORKGROUP 1. [[John Doe]], [[Jane Smith ID456, Ohe Keedoke]]
Situation paragraph 1
</code></pre> <p>Unfortunately, I can't do something like</p> <pre><code>re.sub(r"((\s\b\w+\b),(\s))+",r"\1]], [[\2")
</code></pre> <p>because it will match inside the situation paragraphs.</p> <p>My question is: is it possible to do multiple match/replacements in a string segment without doing it universally?</p>
<p>If you have the <code>regex</code> module installed:</p> <pre><code>(?&lt;=\bWORKGROUP\s+\d+\.\s|,)\s*(.+?)\s*ID\d+\s*(?=,|$)
</code></pre> <p>might work OK.</p> <p>If not, you can simply install it from your terminal by running:</p> <pre><code>$ pip install regex
</code></pre> <p>or</p> <pre><code>$ pip3 install regex
</code></pre> <p>Here, we're assuming that you might have other <code>ID\d+</code> tokens present in your text; otherwise, if you don't, your problem would be much simpler.</p> <h3>Test</h3> <pre><code>import regex as re

regex = r"(?&lt;=\bWORKGROUP\s+\d+\.\s|,)\s*(.+?)\s*ID\d+\s*(?=,|$)"

test_str = '''
WORKGROUP 1. John Doe ID123, Jane Smith ID456, Ohe Keedoke ID7890
Situation paragraph 1

WORKGROUP 2. John Smith ID321, Jane Doe ID654
Situation paragraph 2

WORKGROUP 11. Bob Doe ID123, Alice Doe ID123, John Doe ID123, Jane Smith ID456, Ohe Keedoke ID7890
Situation paragraph 1

WORKGROUP 21. John Smith ID321, Jane Doe ID654
Situation paragraph 2
'''

subst = "[[\\1]]"

print(re.sub(regex, subst, test_str, 0, re.MULTILINE))
</code></pre> <h3>Output</h3> <pre><code>WORKGROUP 1. [[John Doe]],[[Jane Smith]],[[Ohe Keedoke]]
Situation paragraph 1

WORKGROUP 2. [[John Smith]],[[Jane Doe]]
Situation paragraph 2

WORKGROUP 11. [[Bob Doe]],[[Alice Doe]],[[John Doe]],[[Jane Smith]],[[Ohe Keedoke]]
Situation paragraph 1

WORKGROUP 21. [[John Smith]],[[Jane Doe]]
Situation paragraph 2
</code></pre> <hr> <p>If you wish to simplify/modify/explore the expression, it's been explained on the top right panel of <a href="https://regex101.com/r/iGkaoE/1/" rel="nofollow noreferrer">regex101.com</a>. If you'd like, you can also watch in <a href="https://regex101.com/r/iGkaoE/1/debugger" rel="nofollow noreferrer">this link</a>, how it would match against some sample inputs.</p> <hr>
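If installing the third-party `regex` module is not an option (the stdlib `re` does not support the variable-length lookbehind used above), a rough stdlib-only sketch is possible under the assumption that every name is a run of capitalized words immediately followed by an `IDnnn` token:

```python
import re

text = "WORKGROUP 1. John Doe ID123, Jane Smith ID456, Ohe Keedoke ID7890"
# Capture the capitalized name in front of each IDnnn token, wrap it in
# [[...]] and drop the ID itself; backtracking keeps the ID out of the name.
out = re.sub(r"([A-Z]\w+(?:\s+[A-Z]\w+)*)\s+ID\d+", r"[[\1]]", text)
```

This is only a sketch: it relies on the capitalization assumption and would need adjusting if the situation paragraphs could contain similar `Name IDnnn` patterns.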
python|regex|python-3.x
0
1,905,184
22,806,639
How to access to the Raspberry pi files using a python gui running on windows?
<p>I am developing an application in which I have to establish an Ethernet connection between a Raspberry Pi and a Windows PC. On my PC I want to develop a Python program (GUI) that can not only import files from the Raspberry Pi, but also read and modify those files. I don't want to use any already-existing software. So what is the best solution: sockets, or SSH? Or is there another choice?</p>
<p>Samba, FTP/SFTP, or also (if doable on windows) SSHFS. If you want your own implementation then for example you could use a REST API (web app) running on PI and allowing file operations in some folders (create, modify, delete, get, list...). You could also think about using Git and git pulling/pushing between each other :)</p>
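A minimal sketch of the "own implementation" route (stdlib only, hypothetical file name): the Pi exposes a folder over HTTP, and the Windows GUI fetches files from it. Here both sides run in one process just to show the round trip.

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# --- "Pi" side: serve a directory over HTTP ---
root = tempfile.mkdtemp()
with open(os.path.join(root, "config.txt"), "w") as f:
    f.write("hello from the pi")

def handler(*args, **kwargs):
    # Serve files from `root` instead of the current working directory
    return http.server.SimpleHTTPRequestHandler(*args, directory=root, **kwargs)

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# --- "GUI" side: fetch a file from the Pi ---
port = server.server_address[1]
data = urllib.request.urlopen(f"http://127.0.0.1:{port}/config.txt").read().decode()
server.shutdown()
```

A real application would add authentication and write/modify endpoints (e.g. via a small web framework), which is where the REST API suggestion comes in.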
python|sockets|ssh|raspberry-pi|ethernet
1
1,905,185
22,575,489
Saving BeautifulSoup output to mongo and loading it back again
<p>I have a crawler, that obtains certain web pages for my app. I want to separate concerns, the crawler should be 'dumb', just fetch the page, take the BeautifulSoup JSON, and save it into MongoDB.</p> <p>Other workers should then read the MongoDB documents and extract the relevant information into a relational model.</p> <p>The question is how to safely convert a BeautifulSoup object to JSON (MongoDB document) and back to it self, safely and without errors.</p> <p>Edit: <strong>Illustration</strong></p> <pre><code> import urllib2 import json from bs4 import BeautifulSoup req = urllib2.Request('http://www.google.com') res = urllib2.urlopen(req) soup = BeautifulSoup(res.read()) content = soup.findAll(text=True) soup_json = json.dumps(content) soup_json </code></pre> <p>Outputs:</p> <pre><code>'["doctype html", "Google", "(function(){\\nwindow.google={kEI:\\"LGktU9bfHqHk4wT1poGoAg\\",getEI:function(a){for(var b;a&amp;&amp;(!a.getAttribute||!(b=a.getAttribute(\\"eid\\")));)a=a.parentNode;return 
b||google.kEI},https:function(){return\\"https:\\"==window.location.protocol},kEXPI:\\"4006,17259,4000116,4007661,4007830,4008067,4008133,4008142,4009033,4009352,4009565,4009641,4010297,4010806,4010858,4010899,4011228,4011258,4011679,4011959,4012373,4012504,4012507,4013338,4013374,4013414,4013416,4013591,4013723,4013747,4013787,4013823,4013967,4013979,4014016,4014431,4014515,4014636,4014649,4014671,4014792,4014804,4014813,4014991,4015119,4015155,4015195,4015234,4015260,4015320,4015444,4015497,4015514,4015582,4015589,4015637,4015638,4015640,4015690,4015772,4015853,4015904,4015991,4015995,4016007,4016047,4016062,4016139,4016167,4016193,4016304,4016311,4016407,8300007,8300015,8300018,8500149,8500157,10200002,10200012,10200029,10200030,10200040,10200045,10200048,10200053,10200055,10200066,10200083,10200103,10200120,10200134,10200157\\",kCSI:{e:\\"4006,17259,4000116,4007661,4007830,4008067,4008133,4008142,4009033,4009352,4009565,4009641,4010297,4010806,4010858,4010899,4011228,4011258,4011679,4011959,4012373,4012504,4012507,4013338,4013374,4013414,4013416,4013591,4013723,4013747,4013787,4013823,4013967,4013979,4014016,4014431,4014515,4014636,4014649,4014671,4014792,4014804,4014813,4014991,4015119,4015155,4015195,4015234,4015260,4015320,4015444,4015497,4015514,4015582,4015589,4015637,4015638,4015640,4015690,4015772,4015853,4015904,4015991,4015995,4016007,4016047,4016062,4016139,4016167,4016193,4016304,4016311,4016407,8300007,8300015,8300018,8500149,8500157,10200002,10200012,10200029,10200030,10200040,10200045,10200048,10200053,10200055,10200066,10200083,10200103,10200120,10200134,10200157\\",ei:\\"LGktU9bfHqHk4wT1poGoAg\\"},authuser:0,ml:function(){},kHL:\\"iw\\",time:function(){return(new Date).getTime()},log:function(a,b,c,h,k){var d=\\nnew Image,f=google.lc,e=google.li,g=\\"\\";d.onerror=d.onload=d.onabort=function(){delete 
f[e]};f[e]=d;c||-1!=b.search(\\"&amp;ei=\\")||(g=\\"&amp;ei=\\"+google.getEI(h));c=c||\\"/\\"+(k||\\"gen_204\\")+\\"?atyp=i&amp;ct=\\"+a+\\"&amp;cad=\\"+b+g+\\"&amp;zx=\\"+google.time();a=/^http:/i;a.test(c)&amp;&amp;google.https()?(google.ml(Error(\\"GLMM\\"),!1,{src:c}),delete f[e]):(d.src=c,google.li=e+1)},lc:[],li:0,y:{},x:function(a,b){google.y[a.id]=[a,b];return!1},load:function(a,b,c){google.x({id:a+l++},function(){google.load(a,b,c)})}};var l=0;})();\\n(function(){google.sn=\\"webhp\\";google.timers={};google.startTick=function(a,b){google.timers[a]={t:{start:google.time()},bfr:!!b}};google.tick=function(a,b,g){google.timers[a]||google.startTick(a);google.timers[a].t[b]=g||google.time()};google.startTick(\\"load\\",!0);\\ntry{}catch(d){}})();\\nvar _gjwl=location;function _gjuc(){var a=_gjwl.href.indexOf(\\"#\\");if(0&lt;=a&amp;&amp;(a=_gjwl.href.substring(a),0&lt;a.indexOf(\\"&amp;q=\\")||0&lt;=a.indexOf(\\"#q=\\"))&amp;&amp;(a=a.substring(1),-1==a.indexOf(\\"#\\"))){for(var d=0;d&lt;a.length;){var b=d;\\"&amp;\\"==a.charAt(b)&amp;&amp;++b;var c=a.indexOf(\\"&amp;\\",b);-1==c&amp;&amp;(c=a.length);b=a.substring(b,c);if(0==b.indexOf(\\"fp=\\"))a=a.substring(0,d)+a.substring(c,a.length),c=d;else if(\\"cad=h\\"==b)return 0;d=c}_gjwl.href=\\"/search?\\"+a+\\"&amp;cad=h\\";return 1}return 0}\\nfunction _gjh(){!_gjuc()&amp;&amp;window.google&amp;&amp;google.x&amp;&amp;google.x({id:\\"GJH\\"},function(){google.nav&amp;&amp;google.nav.gjh&amp;&amp;google.nav.gjh()})};\\nwindow._gjh&amp;&amp;_gjh();", "#gbar,#guser{font-size:13px;padding-top:1px !important;}#gbar{height:22px}#guser{padding-bottom:7px !important;text-align:left}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh{height:0;position:absolute;top:24px;width:100%}@media all{.gb1{height:22px;margin-left:.5em;vertical-align:top}#gbar{float:right}}a.gb1,a.gb4{text-decoration:underline !important}a.gb1,a.gb4{color:#00c !important}.gbi .gb4{color:#dd8e27 !important}.gbf .gb4{color:#900 !important}", 
"body,td,a,p,.h{font-family:arial,sans-serif}body{margin:0;overflow-y:scroll}#gog{padding:3px 8px 0}td{line-height:.8em}.gac_m td{line-height:17px}form{margin-bottom:20px}.h{color:#36c}.q{color:#00c}.ts td{padding:0}.ts{border-collapse:collapse}em{font-weight:bold;font-style:normal}.lst{height:25px;width:496px}.gsfi,.lst{font:18px arial,sans-serif}.gsfs{font:17px arial,sans-serif}.ds{display:inline-box;display:inline-block;margin:3px 0 4px;margin-right:4px}input{font-family:inherit}a.gb1,a.gb2,a.gb3,a.gb4{color:#11c !important}body{background:#fff;color:black}a{color:#11c;text-decoration:none}a:hover,a:active{text-decoration:underline}.fl a{color:#36c}a:visited{color:#551a8b}a.gb1,a.gb4{text-decoration:underline}a.gb3:hover{text-decoration:none}#ghead a.gb2:hover{color:#fff !important}.sblc{padding-top:5px}.sblc a{display:block;margin:2px 0;margin-right:13px;font-size:11px}.lsbb{background:#eee;border:solid 1px;border-color:#ccc #ccc #999 #999;height:30px}.lsbb{display:block}.ftl,#fll a{display:inline-block;margin:0 12px}.lsb{background:url(/images/srpr/nav_logo80.png) 0 -258px repeat-x;border:none;color:#000;cursor:pointer;height:30px;margin:0;outline:0;font:15px arial,sans-serif;vertical-align:top}.lsb:active{background:#ccc}.lst:focus{outline:none}#addlang a{padding:0 3px}.tiah{width:458px}", "(function(){var src=\'/images/nav_logo176.png\';var iesg=false;document.body.onload = function(){window.n &amp;&amp; window.n();if (document.images){new Image().src=src;}\\nif (!iesg){document.f&amp;&amp;document.f.q.focus();document.gbqf&amp;&amp;document.gbqf.q.focus();}\\n}\\n})();", " ", "\\u00e7\\u00e9\\u00f4\\u00e5\\u00f9", " ", "\\u00fa\\u00ee\\u00e5\\u00f0\\u00e5\\u00fa", " ", "\\u00ee\\u00f4\\u00e5\\u00fa", " ", "YouTube", " ", "\\u00e7\\u00e3\\u00f9\\u00e5\\u00fa", " ", "Gmail", " ", "Drive", " ", "\\u00e9\\u00e5\\u00ee\\u00ef", " ", "\\u00f2\\u00e5\\u00e3", " \\u00bb", "\\u00e4\\u00e9\\u00f1\\u00e8\\u00e5\\u00f8\\u00e9\\u00e9\\u00fa 
\\u00e0\\u00fa\\u00f8\\u00e9\\u00ed", " | ", "\\u00e4\\u00e2\\u00e3\\u00f8\\u00e5\\u00fa", " | ", "\\u00e4\\u00e9\\u00eb\\u00f0\\u00f1", " ", "\\u00e9\\u00f9\\u00f8\\u00e0\\u00ec", "\\u00a0", "\\u00e7\\u00e9\\u00f4\\u00e5\\u00f9 \\u00ee\\u00fa\\u00f7\\u00e3\\u00ed", "\\u00eb\\u00ec\\u00e9 \\u00f9\\u00f4\\u00e4", "Google.co.il \\u00e2\\u00ed \\u00e1: ", "\\u0627\\u0644\\u0639\\u0631\\u0628\\u064a\\u0629", " ", "English", " \\u00f4\\u00f8\\u00f1\\u00e5\\u00ed \\u00e1-Google", "\\u00f4\\u00fa\\u00f8\\u00e5\\u00f0\\u00e5\\u00fa \\u00f2\\u00f1\\u00f7\\u00e9\\u00e9\\u00ed", "\\u00e4\\u00eb\\u00ec \\u00e0\\u00e5\\u00e3\\u00e5\\u00fa Google", "Google.com", "\\u00a9 2013 - ", "\\u00f4\\u00f8\\u00e8\\u00e9\\u00e5\\u00fa \\u00e5\\u00fa\\u00f0\\u00e0\\u00e9\\u00ed", "if(google.y)google.y.first=[];(function(){function b(a){window.setTimeout(function(){var c=document.createElement(\\"script\\");c.src=a;document.getElementById(\\"xjsd\\").appendChild(c)},0)}google.dljp=function(a){google.xjsu=a;b(a)};google.dlj=b;})();\\nif(!google.xjs){window._=window._||{};window._._DumpException=function(e){throw e};if(google.timers&amp;&amp;google.timers.load.t){google.timers.load.t.xjsls=new Date().getTime();}google.dljp(\'/xjs/_/js/k\\\\x3dxjs.hp.en_US.X67G-1Nbjpc.O/m\\\\x3dsb_he,pcc/rt\\\\x3dj/d\\\\x3d1/sv\\\\x3d1/rs\\\\x3dAItRSTO_vkVhEK6twEUdYclvmSrFcRL-Zw\');google.xjs=1;}google.pmc={\\"sb_he\\":{\\"agen\\":true,\\"cgen\\":true,\\"client\\":\\"heirloom-hp\\",\\"dh\\":true,\\"ds\\":\\"\\",\\"eqch\\":true,\\"fl\\":true,\\"host\\":\\"google.co.il\\",\\"jsonp\\":true,\\"msgs\\":{\\"dym\\":\\"\\u00e4\\u00e0\\u00ed \\u00e4\\u00fa\\u00eb\\u00e5\\u00e5\\u00f0\\u00fa \\u00ec:\\",\\"lcky\\":\\"\\u00e9\\u00e5\\u00fa\\u00f8 \\u00ee\\u00e6\\u00ec \\u00ee\\u00f9\\u00eb\\u00ec\\",\\"lml\\":\\"\\u00ec\\u00ee\\u00e9\\u00e3\\u00f2 \\u00f0\\u00e5\\u00f1\\u00f3\\",\\"oskt\\":\\"\\u00eb\\u00ec\\u00e9 \\u00e4\\u00e6\\u00f0\\u00e4\\",\\"psrc\\":\\"\\u00e7\\u00e9\\u00f4\\u00e5\\u00f9 \\u00e6\\u00e4 
\\u00e4\\u00e5\\u00f1\\u00f8 \\u00ee\\\\u003Ca href=\\\\\\"/history\\\\\\"\\\\u003E\\u00e4\\u00e9\\u00f1\\u00e8\\u00e5\\u00f8\\u00e9\\u00e9\\u00fa \\u00e4\\u00e0\\u00e9\\u00f0\\u00e8\\u00f8\\u00f0\\u00e8\\\\u003C/a\\\\u003E \\u00f9\\u00ec\\u00ea\\",\\"psrl\\":\\"\\u00e4\\u00f1\\u00f8\\",\\"sbit\\":\\"\\u00e7\\u00f4\\u00f9 \\u00ec\\u00f4\\u00e9 \\u00fa\\u00ee\\u00e5\\u00f0\\u00e4\\",\\"srch\\":\\"\\u00e7\\u00e9\\u00f4\\u00e5\\u00f9 \\u00e1-Google\\"},\\"ovr\\":{},\\"pq\\":\\"\\",\\"qcpw\\":false,\\"scd\\":10,\\"sce\\":5,\\"stok\\":\\"AVgtYJUWkObPx6V5QqvD7hitdNE\\"},\\"pcc\\":{}};google.y.first.push(function(){if(google.med){google.med(\'init\');google.initHistory();google.med(\'history\');}});if(google.j&amp;&amp;google.j.en&amp;&amp;google.j.xi){window.setTimeout(google.j.xi,0);}", "(function(){if(google.timers&amp;&amp;google.timers.load.t){var b,c,d,e,g=function(a,f){a.removeEventListener?(a.removeEventListener(\\"load\\",f,!1),a.removeEventListener(\\"error\\",f,!1)):(a.detachEvent(\\"onload\\",f),a.detachEvent(\\"onerror\\",f))},h=function(a){e=(new Date).getTime();++c;a=a||window.event;a=a.target||a.srcElement;g(a,h)},k=document.getElementsByTagName(\\"img\\");b=k.length;for(var l=c=0,m;l&lt;b;++l)m=k[l],m.complete||\\"string\\"!=typeof m.src||!m.src?++c:m.addEventListener?(m.addEventListener(\\"load\\",h,!1),m.addEventListener(\\"error\\",\\nh,!1)):(m.attachEvent(\\"onload\\",h),m.attachEvent(\\"onerror\\",h));d=b-c;var n=function(){if(google.timers.load.t){google.timers.load.t.ol=(new Date).getTime();google.timers.load.t.iml=e;google.kCSI.imc=c;google.kCSI.imn=b;google.kCSI.imp=d;void 0!==google.stt&amp;&amp;(google.kCSI.stt=google.stt);google.csiReport&amp;&amp;google.csiReport()}};window.addEventListener?window.addEventListener(\\"load\\",n,!1):window.attachEvent&amp;&amp;\\nwindow.attachEvent(\\"onload\\",n);google.timers.load.t.prt=e=(new Date).getTime()};})();\\n"]' </code></pre> <p>This JSON should be saved in MongoDB, in a manner that will allow me to 
restore a BeautifulSoup object from it later.</p>
<p>FYI, you really don't have to build the soup before storing it into mongo (or any database). </p> <p>Here are my reasons:</p> <p>(1) After you turn it into a soup (a 'bs4.BeautifulSoup' object) and store it into mongo, it will be stored as text, in JSON format or whatever. The next time you pull it from the database, you need to call the BeautifulSoup function again to rebuild the soup from the string/JSON, which means you have effectively run BeautifulSoup twice.</p> <p>(2) A soup is basically an XML tree built from the HTML page. BeautifulSoup will parse the tree, sometimes fix broken/missing tags, and do some "smart stuff" that you might not actually want, or modify the HTML page slightly. For example, you can get different results depending on the parser you are using, "lxml"/"html5"... So using BeautifulSoup before storing the data might trip you up. </p> <p>In conclusion: I would recommend you store the raw HTML content without doing any work on it. The simplest way to store it is to build a document following this format:</p> <pre><code>{"url":"www.xxx.com/..", "html":"&lt;DOCTYPE!&gt;...."} </code></pre> <p>In this case, you have basically mirrored/indexed the website on your local machine and won't miss any information. </p> <p>Here is some code to help you store/retrieve HTML using mongo:</p> <pre><code>&gt;&gt;&gt; from pymongo import MongoClient &gt;&gt;&gt; client = MongoClient('localhost', 27017) &gt;&gt;&gt; db = client.oleg &gt;&gt;&gt; &gt;&gt;&gt; # get the raw html ... url = "http://www.crummy.com/software/BeautifulSoup/bs4/doc/#" &gt;&gt;&gt; import urllib2 &gt;&gt;&gt; html = urllib2.urlopen(url).read() &gt;&gt;&gt; html[:100] '&lt;!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"\n "http://www.w3.org/TR/xhtml1/DTD/xh' &gt;&gt;&gt; &gt;&gt;&gt; &gt;&gt;&gt; # store the &lt;key:value&gt; -&gt; &lt;url:html&gt; into mongo for later use ... db.tikhonov.insert({"url":url, "html":html}) ObjectId('532e6904866cd3431a90c618') &gt;&gt;&gt; &gt;&gt;&gt; # retrieve the stored html by searching the url ... record = db.tikhonov.find_one({"url":url}) &gt;&gt;&gt; record['url'] u'http://www.crummy.com/software/BeautifulSoup/bs4/doc/#' &gt;&gt;&gt; &gt;&gt;&gt; # turn the html text into a soup and start parsing ... from bs4 import BeautifulSoup &gt;&gt;&gt; soup = BeautifulSoup(record['html']) &gt;&gt;&gt; soup.find("h1").text u'Beautiful Soup Documentation\xb6' </code></pre> <p>PS: It is a WONDERFUL idea to separate the "extracting html" step from the "parsing" step. You can start collecting HTML pages without any parsing, because it is always the HTTP request that takes the most time, and meanwhile write &amp; test your parser.</p> <p><em><strong>DEFINITELY CHECK OUT THE TERMS OF SERVICE BEFORE SCRAPING OR STORING INTELLECTUAL PROPERTY LOCALLY.</strong></em></p>
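The two-stage flow this answer recommends (fetch and store raw HTML now, parse later) can be sketched without mongo at all. Below, a plain dict stands in for the mongo collection (an assumption for illustration; any key-value store plays the same role), and canned HTML is used so the demo needs no network access:

```python
import urllib.request

store = {}  # stand-in for the mongo collection: url -> raw html


def fetch_and_store(url, html=None):
    """Stage 1: grab the raw page and keep it untouched (no parsing)."""
    if html is None:
        html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    store[url] = html
    return html


def retrieve(url):
    """Stage 2 (possibly much later): pull the raw html back out for parsing."""
    return store[url]


# Demo with canned html so no network access is needed:
fetch_and_store("http://example.com/", html="<h1>Hello</h1>")
print(retrieve("http://example.com/"))  # <h1>Hello</h1>
```

Only in stage 2 would you hand the retrieved string to BeautifulSoup, which keeps the scraping and parsing concerns fully separated.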
python|mongodb|beautifulsoup
3
1,905,186
45,490,841
Keras/TF error: Incompatible shapes
<p>I've got an error:</p> <blockquote> <p>InvalidArgumentError (see above for traceback): Incompatible shapes: [12192768] vs. [4064256] [[Node: mul = Mul[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape, Reshape_1)]]</p> </blockquote> <p>Here is my code:</p> <pre><code>import numpy as np import os from skimage.io import imread, imsave from keras.models import load_model, Model from keras.layers import Conv2D, MaxPooling2D, Input, concatenate, Conv2DTranspose from keras.optimizers import Adam from keras.callbacks import TensorBoard from keras import backend as K K.set_image_dim_ordering('tf') tbCallBack = TensorBoard(log_dir='./logs', histogram_freq=1, write_graph=True, write_grads=True, write_images=True) def dice_coef(y_true, y_pred): y_true_f = K.flatten(y_true) y_pred_f = K.flatten(y_pred) intersection = K.sum(y_true_f * y_pred_f) return (2. * intersection + 1.0) / (K.sum(y_true_f) + K.sum(y_pred_f) + 1.0) def dice_coef_loss(y_true, y_pred): return -dice_coef(y_true, y_pred) def build(): inputs = Input(shape=(1008, 1008, 3)) conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs) conv1 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv1) pool1 = MaxPooling2D(pool_size=(2, 2))(conv1) conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(pool1) conv2 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv2) pool2 = MaxPooling2D(pool_size=(2, 2))(conv2) conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(pool2) conv3 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv3) pool3 = MaxPooling2D(pool_size=(2, 2))(conv3) conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(pool3) conv4 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv4) pool4 = MaxPooling2D(pool_size=(2, 2))(conv4) conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(pool4) conv5 = Conv2D(512, (3, 3), activation='relu', padding='same')(conv5) up6 = concatenate([Conv2DTranspose(256, (2, 2), 
strides=(2, 2), padding='same')(conv5), conv4], axis=3) conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(up6) conv6 = Conv2D(256, (3, 3), activation='relu', padding='same')(conv6) up7 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(conv6), conv3], axis=3) conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(up7) conv7 = Conv2D(128, (3, 3), activation='relu', padding='same')(conv7) up8 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(conv7), conv2], axis=3) conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(up8) conv8 = Conv2D(64, (3, 3), activation='relu', padding='same')(conv8) up9 = concatenate([Conv2DTranspose(32, (2, 2), strides=(2, 2), padding='same')(conv8), conv1], axis=3) conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(up9) conv9 = Conv2D(32, (3, 3), activation='relu', padding='same')(conv9) conv10 = Conv2D(1, (1, 1), activation='sigmoid')(conv9) model = Model(inputs=[inputs], outputs=[conv10]) model.compile(optimizer=Adam(lr=1e-5), loss=dice_coef_loss, metrics=[dice_coef]) return model def prepare_train(): files = os.listdir('./raws/') x_files_names = filter(lambda x: x.endswith('_raw.jpg'), files) total = len(x_files_names) x_train = np.ndarray((total, 1008, 1008, 3), dtype=np.uint8) i = 0 for x_file_name in x_files_names: img = imread(os.path.join('./raws/' + x_file_name)) x_train[i] = np.array([img]) i += 1 np.save('x_train.npy', x_train) files = os.listdir('./masks/') y_files_names = filter(lambda x: x.endswith('_mask.jpg'), files) total = len(y_files_names) y_train = np.ndarray((total, 1008, 1008, 3), dtype=np.uint8) i = 0 for y_file_name in y_files_names: img = imread(os.path.join('./masks/' + y_file_name)) y_train[i] = np.array([img]) i += 1 np.save('y_train.npy', y_train) def train(): x_train = np.load('x_train.npy') x_train = x_train.astype('float32') x_train /= 255 y_train = np.load('y_train.npy') y_train = y_train.astype('float32') y_train /= 
255. model.fit(x_train, y_train, batch_size=4, epochs=25, callbacks=[tbCallBack]) model.save('model.h5') def prepare_predict(): files = os.listdir('./predict_raws/') x_files_names = filter(lambda x: x.endswith('_raw.jpg'), files) total = len(x_files_names) x_train = np.ndarray((total, 1008, 1008, 3), dtype=np.uint8) i = 0 for x_file_name in x_files_names: img = imread(os.path.join('./predict_raws/' + x_file_name)) x_train[i] = np.array([img]) i += 1 np.save('x_predict.npy', x_train) def predict(): x_predict = np.load('x_predict.npy') x_predict = x_predict.astype('float32') x_predict /= 255 predictions = model.predict_on_batch(x_predict) np.save('predictions.npy', predictions) if not os.path.exists('logs'): os.makedirs('logs') if not os.path.exists('raws'): os.makedirs('raws') if not os.path.exists('masks'): os.makedirs('masks') if not os.path.exists('predict_raws'): os.makedirs('predict_raws') if not os.path.exists('predict_masks'): os.makedirs('predict_masks') zero_choice = raw_input('Prepare training data? (y or n): ') if zero_choice == 'y': prepare_train() frst_choice = raw_input('Please, enter needed action (load or train): ') if frst_choice == 'load': model = load_model('model.h5') elif frst_choice == 'train': model = build() train() scnd_choice = raw_input('Prepare test data? (y or n): ') if scnd_choice == 'y': prepare_predict() thrd_choice = raw_input('Model is ready! Start prediction? 
(y or n): ') if thrd_choice == 'y': predict() elif thrd_choice == 'n': exit() </code></pre> <p>Here is full text of error:</p> <pre><code>Epoch 1/25 Traceback (most recent call last): File "segmenting_network.py", line 162, in &lt;module&gt; train() File "segmenting_network.py", line 111, in train callbacks=[tbCallBack]) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1430, in fit initial_epoch=initial_epoch) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1079, in _fit_loop outs = f(ins_batch) File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2268, in __call__ **self.session_kwargs) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 789, in run run_metadata_ptr) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 997, in _run feed_dict_string, options, run_metadata) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1132, in _do_run target_list, options, run_metadata) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1152, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [12192768] vs. 
[4064256] [[Node: gradients/mul_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _class=["loc:@mul"], _device="/job:localhost/replica:0/task:0/cpu:0"](gradients/mul_grad/Shape, gradients/mul_grad/Shape_1)]] Caused by op u'gradients/mul_grad/BroadcastGradientArgs', defined at: File "segmenting_network.py", line 162, in &lt;module&gt; train() File "segmenting_network.py", line 111, in train callbacks=[tbCallBack]) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1413, in fit self._make_train_function() File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 937, in _make_train_function self.total_loss) File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 404, in get_updates grads = self.get_gradients(loss, params) File "/usr/local/lib/python2.7/dist-packages/keras/optimizers.py", line 71, in get_gradients grads = K.gradients(loss, params) File "/usr/local/lib/python2.7/dist-packages/keras/backend/tensorflow_backend.py", line 2305, in gradients return tf.gradients(loss, variables, colocate_gradients_with_ops=True) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 540, in gradients grad_scope, op, func_call, lambda: grad_fn(op, *out_grads)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 346, in _MaybeCompile return grad_fn() # Exit early File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gradients_impl.py", line 540, in &lt;lambda&gt; grad_scope, op, func_call, lambda: grad_fn(op, *out_grads)) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_grad.py", line 663, in _MulGrad rx, ry = gen_array_ops._broadcast_gradient_args(sx, sy) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 395, in _broadcast_gradient_args name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in 
apply_op op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2506, in create_op original_op=self._default_original_op, op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1269, in __init__ self._traceback = _extract_stack() ...which was originally created as op u'mul', defined at: File "segmenting_network.py", line 161, in &lt;module&gt; model = build() File "segmenting_network.py", line 68, in build model.compile(optimizer=Adam(lr=1e-5), loss=dice_coef_loss, metrics=[dice_coef]) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 840, in compile sample_weight, mask) File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 446, in weighted score_array = fn(y_true, y_pred) File "segmenting_network.py", line 29, in dice_coef_loss return -dice_coef(y_true, y_pred) File "segmenting_network.py", line 24, in dice_coef intersection = K.sum(y_true_f * y_pred_f) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 838, in binary_op_wrapper return func(x, y, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/math_ops.py", line 1061, in _mul_dispatch return gen_math_ops._mul(x, y, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_math_ops.py", line 1377, in _mul result = _op_def_lib.apply_op("Mul", x=x, y=y, name=name) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2506, in create_op original_op=self._default_original_op, op_def=op_def) File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1269, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): Incompatible shapes: [12192768] vs. 
[4064256] [[Node: gradients/mul_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _class=["loc:@mul"], _device="/job:localhost/replica:0/task:0/cpu:0"](gradients/mul_grad/Shape, gradients/mul_grad/Shape_1)]] </code></pre> <p>Versions:</p> <p>Keras 2.0.6</p> <p>TF 1.2.1</p> <p>NP 1.13.1</p> <p>The only idea I had was to decrease the batch size, but it does not help. Does anybody have any ideas?</p> <p>For training I'm using 11 images of size 1008*1008 with 3 color channels.</p>
<p>The last layer has the wrong number of channels: the masks (<code>y_train</code>) are loaded with 3 color channels, so the model's output must have 3 channels as well, not 1.</p> <p>It should be</p> <pre><code>conv10 = Conv2D(3, (1, 1), activation='sigmoid')(conv9) </code></pre>
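The shape numbers in the error message are consistent with this diagnosis; a quick arithmetic check (using the batch size of 4 and the 1008x1008 images from the question):

```python
# Reproduce the element counts from the error message:
# y_true is flattened from shape (batch, 1008, 1008, 3),
# y_pred from shape (batch, 1008, 1008, 1).
batch_size = 4
h = w = 1008

y_true_elems = batch_size * h * w * 3  # targets loaded as 3-channel masks
y_pred_elems = batch_size * h * w * 1  # model output has only 1 channel

print(y_true_elems)                  # 12192768 -- first shape in the error
print(y_pred_elems)                  # 4064256  -- second shape in the error
print(y_true_elems // y_pred_elems)  # 3 -- exactly the missing channel factor
```

The factor of 3 between the two incompatible shapes is exactly the channel mismatch, which is why changing the final layer to 3 filters (or, alternatively, converting the masks to single-channel) resolves it.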
tensorflow|keras|conv-neural-network|image-segmentation
7
1,905,187
45,498,424
Make pie chart with percentage readable in grayscale
<p>I have the following source code to generate a pie chart:</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.pyplot import savefig import numpy as np import matplotlib.gridspec as gridspec plt.clf() plt.cla() plt.close() labels_b = ["Negative", "Positive"] dev_sentences_b = [428, 444] test_sentences_b = [912, 909] train_sentences_b = [3310, 3610] gs = gridspec.GridSpec(2, 2) ax1= plt.subplot(gs[0, 0]) ax1.pie(train_sentences_b, autopct='%1.1f%%', shadow=True, startangle=90) ax1.axis('equal') ax1.set_title("Train") ax2= plt.subplot(gs[0, 1]) ax2.pie(dev_sentences_b, autopct='%1.1f%%', shadow=True, startangle=90) ax2.axis('equal') ax2.set_title("Dev") ax3 = plt.subplot(gs[1, 1]) ax3.pie(test_sentences_b, autopct='%1.1f%%', shadow=True, startangle=90) ax3.axis('equal') ax3.set_title("Test") ax3.legend(labels=labels_b, bbox_to_anchor=(-1,1), loc="upper left") plt.savefig('sstbinary', format='pdf') </code></pre> <p>Result <br> Color picture <br> <a href="https://i.stack.imgur.com/juelZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/juelZ.jpg" alt="color-pie-chart"></a> <br> and grayscale <br> <a href="https://i.stack.imgur.com/BWpxt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BWpxt.png" alt="grayscale"></a></p> <p>The grayscale version is a bit difficult to read. Is there any suggestion to make a grayscale pie chart readable in black-and-white printing?</p>
<p>It's not clear from the question whether you would like to create your chart in black and white already or produce it in color and later convert it. The strategy in both cases might be the same though: <strong>You can create a new color cycle using colors from a colormap.</strong> A reference for possible colormaps is given <a href="http://matplotlib.org/examples/color/colormaps_reference.html" rel="nofollow noreferrer">here</a>. Of course you could also use your own list of colors.</p> <p>E.g. creating 5 colors from the <code>gray</code> colormap between <code>0.2</code> (dark gray) and <code>0.8</code> (light gray):</p> <pre><code>from cycler import cycler colors = plt.cm.gray(np.linspace(0.2,0.8,5)) plt.rcParams['axes.prop_cycle'] = cycler(color=colors) </code></pre> <p><a href="https://i.stack.imgur.com/wMvsI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wMvsI.png" alt="enter image description here"></a></p> <p>Similarly, you may use a colorful map (e.g. <code>magma</code>) which would still look good when converted to grayscale afterwards.</p> <pre><code>from cycler import cycler colors = plt.cm.magma(np.linspace(0.2,0.8,5)) plt.rcParams['axes.prop_cycle'] = cycler(color=colors) </code></pre> <p><a href="https://i.stack.imgur.com/g8iIE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g8iIE.png" alt="enter image description here"></a></p> <p>Changing the range of colors, e.g. to between <code>0.4</code> and <code>0.95</code>, gives a lighter color range,</p> <pre><code>from cycler import cycler colors = plt.cm.magma(np.linspace(0.4,0.95,5)) plt.rcParams['axes.prop_cycle'] = cycler(color=colors) </code></pre> <p><a href="https://i.stack.imgur.com/apJZh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/apJZh.png" alt="enter image description here"></a></p> <p>Note that you may, instead of defining a color cycle, also apply the colors directly to each pie chart,</p> <pre><code>ax.pie(..., colors=colors, ...) 
</code></pre> <p>Finally, to distinguish shapes in grayscale images, an often-applied technique is to use hatching. See e.g. <a href="https://matplotlib.org/devdocs/gallery/api/filled_step.html" rel="nofollow noreferrer">this example</a>. </p> <pre><code>pie = ax.pie(..., autopct='%1.1f%%', pctdistance=1.3, colors=colors, ...) for patch, hatch in zip(pie[0],hatches): patch.set_hatch(hatch) </code></pre> <p><a href="https://i.stack.imgur.com/yoMY8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yoMY8.png" alt="enter image description here"></a></p>
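If you pick shades by hand instead of sampling a colormap, a quick way to sanity-check that they will remain distinguishable after grayscale conversion is to compare their approximate luminance (a sketch using the common Rec. 601 weights; this helper is an illustration, not part of the original answer):

```python
def luminance(rgb):
    """Approximate perceived brightness of an RGB triple in [0, 1],
    using the Rec. 601 luma weights (0.299 R + 0.587 G + 0.114 B)."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b


light = (0.8, 0.8, 0.8)
dark = (0.2, 0.2, 0.2)

# A difference well above ~0.3 keeps adjacent wedges readable in print.
print(round(luminance(light) - luminance(dark), 2))  # 0.6
```

Two wedge colors whose luminance values are nearly equal will merge into the same gray when printed, no matter how different their hues look on screen.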
python|matplotlib|pie-chart|grayscale
4
1,905,188
45,386,603
Simple Way to Compare Two Images in Python
<p>I need to copy images from the 'Asset' folder in Windows 10, which holds automatically downloaded background images. Some of these images will never be displayed and are at some point deleted. To make sure I have seen all the new images before they are deleted, I have created a Python script that copies these images into a different folder. To be efficient, I need a way to compare two images so that only the new ones are copied. All I need is a function that takes two images and compares them with a simple approach, telling me whether the two images are visually identical. A simple test would be to take an image file, copy it, and compare the copy with the original, in which case the function should be able to tell that they are the same image. How can I compare two images in Python? I need a simple and efficient way to do it. Several answers I have read are a bit complicated.</p>
<p>I encountered a similar problem before. I used PIL.Image.tobytes() to convert each image to a bytes object, then called hash() on the bytes object and compared the hash values.</p>
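A stdlib-only variant of the same idea is sketched below (hypothetical file paths; it hashes the raw file bytes rather than decoded pixels, so it detects byte-identical copies, which matches the "copy a file and compare" test in the question, but not re-encoded duplicates):

```python
import hashlib
import os
import shutil
import tempfile


def file_digest(path):
    """Return a SHA-256 digest of the file's raw bytes."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def same_image(path_a, path_b):
    """True if both files contain exactly the same bytes."""
    return file_digest(path_a) == file_digest(path_b)


if __name__ == "__main__":
    # Demo with a fake "image": a copy hashes equal, a modified file does not.
    d = tempfile.mkdtemp()
    a = os.path.join(d, "a.jpg")
    b = os.path.join(d, "b.jpg")
    with open(a, "wb") as f:
        f.write(b"\xff\xd8\xff\xe0 fake jpeg payload")
    shutil.copy(a, b)
    print(same_image(a, b))  # True
    with open(b, "ab") as f:
        f.write(b"!")
    print(same_image(a, b))  # False
```

To skip already-copied images, you could keep a set of digests of everything copied so far and only copy files whose digest is new.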
python|image
5
1,905,189
57,040,988
Is there a Python function to create points patterns?
<p>I have the following series data. It has 600 data points, shown below, and I want to generate an array of 100000 elements following the same pattern as the data using Python, and I don't know how to do it. Could someone help me do that?</p> <p>I tried to plot the histogram and fit the data with some distribution, but it didn't work so well, because I think my data doesn't follow any common distribution.</p> <pre class="lang-py prettyprint-override"><code>tup = levy.fit_levy(data) array = tup[0].get('0') random = levy_stable.rvs(array[0], array[1],array[2],array[3],size=100000) </code></pre> <p>Here is the plot and the histogram of the data I'm trying to fit:</p> <p><a href="https://i.stack.imgur.com/yzEG2.png" rel="nofollow noreferrer">Plot</a></p> <p><a href="https://i.stack.imgur.com/GIeIP.png" rel="nofollow noreferrer">Histogram</a></p>
<p>The first question here is whether you have (1) time series or (2) independent samples. If I understand your question correctly, I believe you are in the second scenario, in which case you need to: </p> <ol> <li>Look for a model that fits your frequency distribution:</li> </ol> <p>I would suggest you start trying different common distributions (looking at your data, exponential might be a good starting point). Here you can find a number of continuous distributions in SciPy:</p> <p><a href="https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/stats.html#continuous-distributions</a></p> <p>If you want to automatically find the distribution with the least SSE for your data, look at the answer from tmthydvnprt here:</p> <p><a href="https://stackoverflow.com/questions/6620471/fitting-empirical-distribution-to-theoretical-ones-with-scipy-python">Fitting empirical distribution to theoretical ones with Scipy (Python)?</a></p> <ol start="2"> <li>Sample the model 100000 times </li> </ol> <p>However, if it's time-series data, you need to fit a time-series model instead of a frequency distribution. In this case, I suggest you start by fitting an ARIMA model and move to more complex ones if this doesn't work: <a href="https://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_model.ARIMA.html" rel="nofollow noreferrer">https://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_model.ARIMA.html</a></p>
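As a concrete, stdlib-only sketch of the two steps for the exponential case only (an assumption for illustration; synthetic data stands in for the 600 real points, and heavy-tailed data like the levy fit in the question may need a different model): the maximum-likelihood rate for an exponential is 1 / mean, and random.expovariate then draws the 100000 samples.

```python
import random
import statistics

random.seed(0)

# Stand-in for the 600 observed points (here: synthetic exponential data
# with true rate 2.0; replace with your real series).
observed = [random.expovariate(2.0) for _ in range(600)]

# Step 1: fit an exponential model by maximum likelihood (rate = 1 / mean).
rate = 1.0 / statistics.mean(observed)

# Step 2: sample 100000 new points from the fitted model.
samples = [random.expovariate(rate) for _ in range(100_000)]

print(round(rate, 2))                      # close to the true rate of 2.0
print(round(statistics.mean(samples), 2))  # close to 1 / rate
```

The same two-step pattern (fit parameters on the 600 points, then sample 100000 draws) carries over directly to `scipy.stats` distributions, which add goodness-of-fit checks on top.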
python|r|pattern-matching
0
1,905,190
44,741,928
BeautifulSoup: scraping titles from www.themoviedb.org
<p>I know this is specific, but I'm looking to find a way to scrape the following website:</p> <p><a href="https://www.themoviedb.org/discover/movie?page=1" rel="nofollow noreferrer">https://www.themoviedb.org/discover/movie?page=1</a></p> <p>and return a list of the titles of the movies.</p> <p>I've tried BeautifulSoup:</p> <pre><code>from bs4 import BeautifulSoup import requests r = requests.get('https://www.themoviedb.org/discover/movie?page=1') soup = BeautifulSoup(r.text) soup </code></pre> <p>However I can't find any of the titles in the output. I'm new to this, but I was wondering if anyone could provide an example of how you would do this?</p>
<p>Looking at the HTML, it seems info about movies is located inside <code>&lt;div&gt;</code>s with the class <code>info</code>.</p> <pre><code>from bs4 import BeautifulSoup import requests r = requests.get('https://www.themoviedb.org/discover/movie?page=1') soup = BeautifulSoup(r.text, "html5lib") items = soup.find_all('div', {'class' : 'info'}) for item in items: print(item.p.a['title']) </code></pre> <p>Output:</p> <pre><code>Split Miss Peregrine's Home for Peculiar Children Deadpool Captain America: Civil War X-Men: Apocalypse Fantastic Beasts and Where to Find Them Arrival Tomorrow Everything Starts Doctor Strange La La Land Sing The Great Wall Rogue One: A Star Wars Story Batman v Superman: Dawn of Justice Hacksaw Ridge Zootopia Inferno Star Trek Beyond Now You See Me 2 Passengers </code></pre>
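If you want to see the selection logic from this answer in isolation, here is a stdlib-only sketch run against a canned HTML snippet modeled on that structure (hypothetical markup; the live page's HTML may differ or change over time):

```python
from html.parser import HTMLParser

# Canned stand-in for the page: titles live on <a title="..."> tags
# inside <div class="info"> blocks, as the answer observes.
HTML = """
<div class="info"><p><a title="Split">Split</a></p></div>
<div class="info"><p><a title="Deadpool">Deadpool</a></p></div>
<div class="other"><p><a title="Not a movie">x</a></p></div>
"""


class TitleParser(HTMLParser):
    """Collects a['title'] values found inside <div class="info"> blocks."""

    def __init__(self):
        super().__init__()
        self.depth_in_info = 0  # how deep we are inside an "info" div
        self.titles = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div":
            # Enter an info block, or track nesting while inside one.
            if self.depth_in_info or attrs.get("class") == "info":
                self.depth_in_info += 1
        elif tag == "a" and self.depth_in_info and "title" in attrs:
            self.titles.append(attrs["title"])

    def handle_endtag(self, tag):
        if tag == "div" and self.depth_in_info:
            self.depth_in_info -= 1


parser = TitleParser()
parser.feed(HTML)
print(parser.titles)  # ['Split', 'Deadpool']
```

BeautifulSoup does the same traversal with far less code, which is why the answer above is the practical choice; this version just makes the filtering rule explicit.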
python|web-scraping|beautifulsoup
1
1,905,191
44,461,960
Is it possible to terminate a Flask worker process after processing N requests?
<p>If I am using the <code>processes</code> parameter (<code>application.run(processes=10)</code>) in Flask, is it possible to somehow specify that a process be terminated after it handles N tasks?</p> <p>Basically I would like to reuse a resource for N requests, then recreate it after N calls by killing the current process and forcing Flask to replace it with a new one. The functionality would be similar to using <code>multiprocessing.Pool</code> with a <code>maxtasksperchild</code> parameter equal to N.</p>
<p>You shouldn't be using the Flask dev server in any situation where killing processes after N requests would be relevant. <a href="https://stackoverflow.com/questions/33086555/why-shouldnt-flask-be-deployed-with-the-built-in-server/33088612#33088612">Use a real WSGI server in production</a>, they all have options for this.</p> <p>For example, with <a href="http://gunicorn.org/" rel="nofollow noreferrer">gunicorn</a>:</p> <pre><code>gunicorn --max-requests=N --workers=10 myapp:app </code></pre>
python|flask|python-multiprocessing
1
1,905,192
23,457,658
Flask global variables
<p>I am trying to find out how to work with global variables in Flask:</p> <pre><code>gl = {'name': 'Default'} @app.route('/store/&lt;name&gt;') def store_var(name=None): gl['name'] = name return "Storing " + gl['name'] @app.route("/retrieve") def retrieve_var(): n = gl['name'] return "Retrieved: " + n </code></pre> <p>Storing the name and retrieving it from another client works fine. However, this doesn't feel right: with a simple global dictionary that any session can throw complex objects into, pretty much simultaneously, does that really work without any dire consequences?</p>
<p>No, it doesn't work, not outside the simple Flask development server.</p> <p>WSGI servers scale in two ways: by using threads or by forking the process. A global dictionary is not thread-safe storage, and when using multi-processing, changes to globals are not shared between processes. If you run this on a PAAS provider like Google App Engine, the processes aren't even forked; they run on entirely <em>separate machines</em>.</p> <p>Use some kind of backend storage instead; a memcached server, a database server, <em>something</em> to control concurrent access and share the data across processes.</p>
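As one concrete illustration of "a database server" for this exact use case (a sketch only, not the answer's prescribed solution and not tied to Flask; the stdlib sqlite3 module works here because SQLite serializes access to the database file, so concurrent worker processes see a single shared store):

```python
import sqlite3

DB = "shared_state.db"  # hypothetical path; must be reachable by every worker


def init():
    """Create the key-value table once at startup."""
    with sqlite3.connect(DB) as con:
        con.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")


def store_name(name):
    """Persist the value so every worker process sees it."""
    with sqlite3.connect(DB) as con:
        con.execute("INSERT OR REPLACE INTO kv (k, v) VALUES ('name', ?)", (name,))


def retrieve_name():
    """Read the shared value back, falling back to a default."""
    with sqlite3.connect(DB) as con:
        row = con.execute("SELECT v FROM kv WHERE k = 'name'").fetchone()
    return row[0] if row else "Default"
```

The route handlers from the question would then call store_name(name) and retrieve_name() instead of touching a global dict; the file name is arbitrary as long as all workers share the same path, and memcached or Redis would fill the same role with better performance under load.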
python|flask
17
1,905,193
36,110,834
What's difference between tf.sub and just minus operation in tensorflow?
<p>I am trying to use TensorFlow. Here is some very simple code:</p> <pre><code>train = tf.placeholder(tf.float32, [1], name="train") W1 = tf.Variable(tf.truncated_normal([1], stddev=0.1), name="W1") loss = tf.pow(tf.sub(train, W1), 2) step = tf.train.GradientDescentOptimizer(0.1).minimize(loss) </code></pre> <p>Just ignore the optimization part (4th line). It takes a floating-point number and trains W1 so as to minimise the squared difference.</p> <p>My question is simple. If I use just the minus sign instead of "tf.sub", as below, what is the difference? Will it cause a wrong result?</p> <pre><code>loss = tf.pow(train-W1, 2) </code></pre> <p>When I replace it, the result looks the same. If they are the same, why do we need the "tf.add/tf.sub" things?</p> <p>Can the built-in back-propagation calculation be done only with the "tf.*" things?</p>
<p>Yes, <code>-</code> and <code>+</code> resolve to <code>tf.sub</code> and <code>tf.add</code>. If you look at the TensorFlow source you will see that these operators on <code>tf.Tensor</code> and <code>tf.Variable</code> are overloaded with the corresponding <code>tf.*</code> methods, so both forms build the same graph op and back-propagation works identically.</p> <p>As to why both exist, I assume the <code>tf.*</code> functions exist for consistency, so that <code>sub</code> and, say, a <code>matmul</code> operation can be used in the same way, while the operator overloading is there for convenience.</p>
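You don't need TensorFlow to see the mechanism at work; it is ordinary Python operator overloading. A toy sketch of the idea (illustrative only, not TensorFlow's actual implementation):

```python
class Tensor:
    """Toy stand-in for a graph tensor, recording the op that produced it."""
    def __init__(self, value, op='const'):
        self.value = value
        self.op = op

    def sub(self, other):
        # The "library function" form, analogous to tf.sub(a, b).
        return Tensor(self.value - other.value, op='sub')

    # The operator form delegates to the same function, which is
    # how "-" on tensors can map onto the library's sub operation.
    __sub__ = sub

a, b = Tensor(5.0), Tensor(3.0)
print((a - b).value, (a - b).op)   # 2.0 sub
print(a.sub(b).value)              # 2.0
```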
python|tensorflow
20
1,905,194
46,317,926
Scraping content with python and selenium
<p>I would like to extract all the league names (e.g. England Premier League, Scotland Premiership, etc.) from this website <a href="https://mobile.bet365.com/#type=Splash;key=1;ip=0;lng=1" rel="nofollow noreferrer">https://mobile.bet365.com/#type=Splash;key=1;ip=0;lng=1</a></p> <p>Using the inspector tools in Chrome/Firefox I can see that they are located here:</p> <pre><code>&lt;span&gt;England Premier League&lt;/span&gt; </code></pre> <p>So I tried this:</p> <pre><code>from lxml import html from selenium import webdriver session = webdriver.Firefox() url = 'https://mobile.bet365.com/#type=Splash;key=1;ip=0;lng=1' session.get(url) tree = html.fromstring(session.page_source) leagues = tree.xpath('//span/text()') print(leagues) </code></pre> <p>Unfortunately this doesn't return the desired results :-(</p> <p>To me it looks like the website has different frames and I'm extracting the content from the wrong frame.</p> <p>Could anyone please help me out here or point me in the right direction? As an alternative, if someone knows how to extract the information through their API, that would obviously be the superior solution.</p> <p>Any help is much appreciated. Thank you!</p>
<p>Hope you are looking for something like this:</p> <pre><code>from selenium import webdriver import bs4, time driver = webdriver.Chrome() url = 'https://mobile.bet365.com/#type=Splash;key=1;ip=0;lng=1' driver.get(url) driver.maximize_window() # sleep so that the JS has time to populate the data time.sleep(10) pSource = driver.page_source soup = bs4.BeautifulSoup(pSource, "html.parser") for data in soup.findAll('div',{'class':'eventWrapper'}): for res in data.find_all('span'): print(res.text) </code></pre> <p>It will print the data below:</p> <pre><code>Wednesday's Matches International List Elite Euro List UK List Australia List Club Friendly List England Premier League England EFL Cup England Championship England League 1 England League 2 England National League England National League North England National League South Scotland Premiership Scotland League Cup Scotland Championship Scotland League One Scotland League Two Northern Ireland Reserve League Scotland Development League East Wales Premier League Wales Cymru Alliance Asia - World Cup Qualifying UEFA Champions League UEFA Europa League Wednesday's Matches International List Elite Euro List UK List Australia List Club Friendly List England Premier League England EFL Cup England Championship England League 1 England League 2 England National League England National League North England National League South Scotland Premiership Scotland League Cup Scotland Championship Scotland League One Scotland League Two Northern Ireland Reserve League Scotland Development League East Wales Premier League Wales Cymru Alliance Asia - World Cup Qualifying UEFA Champions League UEFA Europa League </code></pre> <p>The only problem is that it prints the result set twice.</p>
python|api|selenium|xpath
2
1,905,195
21,346,817
Overriding --errors-only=yes specified in rcfile
<p>I use paver to run pylint as a task. In my rcfile(pylintrc) I have configured pylint to report only errors by setting <code>errors-only=yes</code>.</p> <p>But I like to run <code>paver pylint</code> task with a verbose option to get it to report non-errors as well. How can I run pylint overriding the <code>errors-only=yes</code> setting? </p> <p>Running with <code>--errors-only=no</code> gives an exception indicating that the --errors-only cannot be given a value. <code>--enable=all</code> also does not work.</p>
<p>This is an unexpected restriction that deserves an issue on pylint's tracker (<a href="https://bitbucket.org/logilab/pylint/issues" rel="nofollow">https://bitbucket.org/logilab/pylint/issues</a>).</p> <p>To get it to work properly in your case, though, I would use a custom rc file for the task that isn't used in daily work, e.g. <code>pylint --rcfile=task.pylintrc ...</code></p>
python|pylint|paver
1
1,905,196
20,995,196
Pandas counting and summing specific conditions
<p>Are there single functions in pandas to perform the equivalents of Excel's <a href="http://office.microsoft.com/en-us/excel-help/sumifs-function-HA010047504.aspx" rel="noreferrer">SUMIF</a>, which sums over a specific condition, and <a href="http://office.microsoft.com/en-us/excel-help/countifs-function-HA010047494.aspx" rel="noreferrer">COUNTIF</a>, which counts values meeting a specific condition?</p> <p>I know that there are multi-step approaches that can be used:</p> <p>for example, for <code>sumif</code> I can use <code>df.map(lambda x: condition)</code> or <code>df.size()</code> and then <code>.sum()</code>,</p> <p>and for <code>countif</code> I can use <code>groupby</code> functions and look for my answer, or use a filter and <code>.count()</code>.</p> <p>Is there a simple one-step process to do these, where you enter the condition and the data frame and get the sum or count?</p>
<p>You can first make a conditional selection, and sum up the results of the selection using the <code>sum</code> function.</p> <pre><code>&gt;&gt; df = pd.DataFrame({'a': [1, 2, 3]}) &gt;&gt; df[df.a &gt; 1].sum() a 5 dtype: int64 </code></pre> <p>Having more than one condition:</p> <pre><code>&gt;&gt; df[(df.a &gt; 1) &amp; (df.a &lt; 3)].sum() a 2 dtype: int64 </code></pre> <p>If you want to do <code>COUNTIF</code>, just replace <code>sum()</code> with <code>count()</code></p>
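Putting the two Excel functions side by side on a small illustrative frame:

```python
import pandas as pd

df = pd.DataFrame({'product': ['a', 'b', 'a', 'a'],
                   'sales':   [10, 20, 30, 40]})

# SUMIF: sum 'sales' where 'product' equals 'a'
sumif = df.loc[df['product'] == 'a', 'sales'].sum()

# COUNTIF: count rows where 'product' equals 'a'
# (summing a boolean Series counts its True values)
countif = (df['product'] == 'a').sum()

print(sumif, countif)   # 80 3
```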
python|pandas|sum
125
1,905,197
70,329,065
How do I query SQL Server database with a user's input?
<pre><code>import pyodbc def read(conn): print(&quot;Read&quot;) cursor = conn.cursor() userInputOpening = input(&quot;Enter a opening that you would like to know: &quot;) print(userInputOpening) cursor.execute(&quot;select * from openings where name like '%{}%'&quot;.format(userInputOpening)) for row in cursor: print(f&quot;{row}&quot;) conn = pyodbc.connect('Driver={SQL Server};' 'Server=SAM-PC\SQLEXPRESS;' 'Database=ChessOpenings;' 'Trusted_Connection=yes;' ) read(conn) </code></pre> <p>I get an error with the user input when I use an ' within the name.</p> <p>For example:</p> <p>If the userInputOpening = london</p> <p>It works and gives a list of all the openings that have something like &quot;london&quot; in them.</p> <p>But...</p> <p>If the userInputOpening = King's</p> <p>It throws an error:</p> <blockquote> <p>cursor.execute(&quot;select * from openings where name like '%{}%'&quot;.format(userInputOpening)) pyodbc.ProgrammingError: ('42000', &quot;[42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Incorrect syntax near 's'. (102) (SQLExecDirectW); [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]Unclosed quotation mark after the character string ''. (105)&quot;)</p> </blockquote> <p>What do I need to put in the <code>cursor.execute()</code> so that it accepts whatever the user entered?</p> <p>I want to treat it like a search engine and have it display all the results of the user's search.</p> <p>I also want to make it so that the user doesn't have to be perfect with their title, which is why I've added the LIKE in my SQL statement.</p> <p>Thanks!</p>
<p>The problem is the quotation mark in the input string (King's). Your SQL statement then becomes</p> <pre><code>where name like '%King's%' </code></pre> <p>(hence the "unclosed quotation mark" error). Change the single quote to two single quotes and it should work fine. Note that Python strings are immutable, so <code>replace</code> returns a new string that you have to assign back:</p> <pre><code>userInputOpening = userInputOpening.replace(&quot;'&quot;, &quot;''&quot;) </code></pre> <p>Not a Python guy, but I believe that is the proper replace syntax.</p>
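A pattern that sidesteps the quoting problem entirely (and SQL injection with it) is to pass the user input as a query parameter instead of formatting it into the SQL string; pyodbc uses the same `?` placeholder style shown here. A sketch using the standard-library `sqlite3` module so it is self-contained (the table and values are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE openings (name TEXT)")
cur.executemany("INSERT INTO openings VALUES (?)",
                [("King's Gambit",), ("London System",)])

user_input = "King's"  # the quote needs no escaping with parameters
# The driver handles quoting; the LIKE wildcards are attached to
# the parameter value itself, not written into the SQL text.
cur.execute("SELECT name FROM openings WHERE name LIKE ?",
            ("%" + user_input + "%",))
print(cur.fetchall())   # [("King's Gambit",)]
```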
python|sql|sql-server
-1
1,905,198
45,916,726
how to convert hex values containing .txt file to decimal values
<p>Here is my output.txt file:</p> <pre><code>4f337d5000000001 4f337d5000000001 0082004600010000 0082004600010000 334f464600010000 334f464600010000 [... many values omitted ...] 334f464600010000 334f464600010000 4f33464601000100 4f33464601000100 </code></pre> <p>How can I change these values into decimal with the help of Python and save them into a new .txt file?</p>
<p>Since the values are 16 hex digits long I assume these are 64-bit integers you want to play with. If the file is reasonably small then you can use <code>read</code> to bring in the whole string and <code>split</code> to break it into individual values:</p> <pre><code>with open("newfile.txt", 'w') as out_file, open("output.txt") as in_file: for hex_value in in_file.read().split(): print(int(hex_value, 16), file=out_file) </code></pre> <p>should do this for you (note that the input file is the <code>output.txt</code> you posted, and the loop variable is renamed so it doesn't shadow the built-in <code>hex</code>).</p>
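A quick sanity check of the `int(value, 16)` conversion, using an in-memory stand-in for the file (the sample values are illustrative):

```python
import io

sample = io.StringIO("ff\n10\n4f337d5000000001\n")
decimals = [int(line, 16) for line in sample.read().split()]
print(decimals[:2])   # [255, 16]
```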
python|python-2.7|hex|decimal
2
1,905,199
55,115,687
How to set platform when using pip download command
<p>I want to download some packages (tensorflow, keras, imbalanced, xgboost, lightgbm, catboost) for CentOS 7.4 and Python 3.7 on a Mac.</p> <p>How should I set the platform name and the other settings?</p> <p>I used the command line below:</p> <pre><code>pip download --only-binary=:all: --python-version 3 --abi cp3m --platform manylinux1_x86_64 tensorflow </code></pre>
<p>Set <code>--python-version</code> to <code>37</code> and the ABI to <code>cp37m</code>:</p> <pre><code>pip download --only-binary=:all: --python-version 37 --abi cp37m --platform manylinux1_x86_64 tensorflow </code></pre>
python|linux|pip
0