Columns: Unnamed: 0 (int64, min 0, max 1.91M); id (int64, min 337, max 73.8M); title (string, length 10 to 150); question (string, length 21 to 64.2k); answer (string, length 19 to 59.4k); tags (string, length 5 to 112); score (int64, min -10, max 17.3k)
1,905,500
67,904,725
How to print console in Python to a file
<p>I am using Spyder for Python, and for some problems the output in the IPython console is quite large (more than 10000 lines), which makes it inefficient to analyse the output directly in the IPython console. This is why I would like to know if there is a way to just print the output to a .txt file. I tried what was suggested here: <a href="https://stackoverflow.com/questions/47699023/how-to-write-console-output-on-text-file/47699306">How to write console output on text file</a>. So I use at the beginning of my code:</p> <pre><code>import sys
sys.stdout = open('C:/Users/Python_Code/log.txt', 'w')
</code></pre> <p>and at the end <code>sys.stdout.close()</code>. However, I get an error message stating &quot;ValueError: I/O operation on closed file&quot; and the log.txt file is not created. If I create the log.txt file manually in the corresponding folder, the file remains empty. Not only does the file remain empty, but Spyder also does not react at all after executing this code, so I have to shut it down and restart.</p> <p>Do you have any idea how I can redirect the console output to a txt file? I'd appreciate every comment and would be thankful for your help.</p>
<p>Put <code>sys.stdout = sys.__stdout__</code> after you close the file.</p> <p>Something is still trying to write to the console, so you have to point <code>sys.stdout</code> back at the system stdout to prevent errors after closing. If you absolutely must capture these logs, you can use <code>python file.py &gt; output.txt</code> on the command line instead.</p>
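<p>A minimal sketch of that pattern (the file name is just an example):</p> <pre><code>import sys

log = open('log.txt', 'w')
sys.stdout = log                 # print() output now goes to log.txt
print('lots of output...')
log.close()
sys.stdout = sys.__stdout__      # hand the console back so later writes don't fail
</code></pre>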
python|logging|console|spyder
1
1,905,501
67,936,141
How to identify Project_Code, File_Name and File_Format, and create a new folder structure based on this
<p>I'm not a good Python programmer but a good MAXScript scripter. I have been trying to automate a process of cleaning up max files which are corrupted. There are a total of 88000 files which need to be cleaned up.</p> <p>The files I get to clean up are in .zip format, with naming conventions like this:</p> <p><strong>&quot;Project_Name_File_Name_File_Format.zip&quot;</strong></p> <p>The automation process of loading the .max file and cleaning up the corruption is done via MAXScript.</p> <p>What I have been trying to do is create a folder structure like so:</p> <p>Project Name --&gt; File Name --&gt; File Format</p> <p>I have been at it for the last two weeks, still no good progress on this front.</p> <p>Here is the basic code I have been trying to get working, at least identifying the file names and file formats. I tried a dictionary method for project names and file formats. Still no luck; I converted a list to a string and then landed in a loop of converting lists to strings and strings to lists.</p> <pre><code>import os

files = os.listdir('path\\') # Set location where all the .zip files are present.
files_zip = [i for i in files if i.endswith('.zip')]

for file_name in files_zip:
    print(file_name)
    token = os.path.splitext(file_name)[0].split(&quot;_&quot;)
    #print(token)
    new_token = token[1:-1]
    print(new_token)
    new_file_name = &quot;_&quot;.join(new_token)
    print(new_file_name)
</code></pre> <p><strong>Another piece of code I tried is here</strong></p> <pre><code>import os

path = 'path\\'
files = os.listdir(path) # Set location where all the .zip files are present.

# Dictionary of project keys and values
project_dic = {'ABC': 'Apple Bucket Cake', 'XYZ': 'Xerox Yacht Zoo'}

# Dictionary for file formats
file_formats = {'FBX': 'FBX', 'OBJ': 'OBJ', '3ds Max': '3ds Max'}

# Looking for the files which end with .zip
files_txt = [i for i in files if i.endswith('.zip')]
# print(files_txt)

prj_lstToStr = ' '.join([str(elem) for elem in files_txt])
name_set = prj_lstToStr.split('prj_lstToStr')
# print(name_set)
#print(&quot;Project_List : &quot; + str(name_set))

res = [ele if len(ele) &gt; 0 else () for ele in [[key for key in project_dic if key in sub] for sub in name_set]]
#print(&quot;Project_Matching_Keys : &quot; + str(res))
string_key = ''.join(str(res))

format_list = [ele if len(ele) &gt; 0 else () for ele in [[key for key in file_formats if key in sub] for sub in name_set]]
#print(&quot;Format_Matching_Keys : &quot; + str(format_list))
format_key = ''.join(str(format_list))

token = files_txt
</code></pre>
<p>I don't know what exactly you're trying to do, so I will delete this answer if it is not correct. From what I understand:</p> <pre><code>import os

input = [&quot;BC_Warren_Vidic_Head_OBJ.zip&quot;,
         &quot;ALS_Sand_Mound_pr_ann_0479_c_OBJ.zip&quot;,
         &quot;GRT_ENV-SPE-GRP-SK-ExplorationChestPMC-A_3dsMax.zip&quot;,
         &quot;KLS_alpha-GEN_PRO_HedgePotPlanter_Group_01A_2021-03-31_FBX.zip&quot;,
         &quot;MISC_gho_caucasian-mattE_(wise)_OBJ.zip&quot;,
         &quot;MISC_W_ATT_SalvoXL_FBX.zip&quot;,
         &quot;MISC_XA-20_Razorback_JetFighter_3dsMax.zip&quot;,
         &quot;WLD_ENV-GLO-PRO-Bivouac-TacticalSmartphone-A_3dsMax.zip&quot;,
         &quot;XYZ_WPN_ATT_MAG_MagpulPMAGMOE_FBX.zip&quot;]

for inp in input:
    splitted = inp.split('_')
    project_name = splitted[0]
    file_name = '_'.join(splitted[1:-1])
    file_format = splitted[-1].split('.')[0]
    path = f'./{project_name}/{file_name}/{file_format}'
    # create the whole Project/File/Format chain; os.path.dirname(path)
    # would drop the innermost (file format) folder
    os.makedirs(path, exist_ok=True)
</code></pre>
python
0
1,905,502
67,692,465
Convert several json files into a single dask dataframe and save this dataframe in a database
<p>I have a large number of json files in a folder. I would like to read them and save them in a single database. I had thought of pandas dataframes to do this but due to the large number of files, this operation is very slow. Someone suggested dask, which also has dataframes and seems to be very fast, so I installed it and did some tests. It seems to be confirmed. but unlike pandas, the dataframe just contains the content of each json as text. Can someone tell me how to do this so that I get a single dataframe with all the json files read and with the column names of the keys of each object in each json as well as the values?</p> <h2>pandas code</h2> <pre class="lang-py prettyprint-override"><code>import pandas as pd data = [] folder = '20-05-2019' json_files = get_json_files(folder) for json_file in json_files: df_temp = pd.read_json(json_file, encoding='utf-8') data.append(df_temp) df = pd.concat(data) df.head(10) </code></pre> <h2>result</h2> <p><img src="https://i.imgur.com/NMLH5dj.png" alt="https://i.imgur.com/NMLH5dj.png" /></p> <h2>dask dataframe code</h2> <p><img src="https://wtf.roflcopter.fr/pics/I8wWAFah/UCtfQymu" alt="code+result" /></p> <h2>sample folder with 7000 json files in each folder (3000 folder in total)</h2> <p><img src="https://i.imgur.com/JxzYO6X.png" alt="https://i.imgur.com/JxzYO6X.png" /></p> <h2>sample json</h2> <pre class="lang-json prettyprint-override"><code>[ { &quot;sector&quot;:&quot;AAA&quot;, &quot;code&quot;:&quot;0009&quot;, &quot;id&quot;:&quot;00000001&quot;, &quot;fname&quot;:&quot;FirstName&quot;, &quot;lname&quot;:&quot;LastName&quot;, &quot;height&quot;:&quot;158&quot;, &quot;dob&quot;:&quot;01/03/2006&quot;, &quot;din&quot;:&quot;19/05/2019 13:23&quot;, &quot;dout&quot;:&quot;19/05/2019 17:46&quot;, &quot;type&quot;:&quot;some text&quot;, &quot;group&quot;:&quot;2&quot;, &quot;dod&quot;:&quot;19/05/2019 13:48&quot;, &quot;desc&quot;:&quot;some text&quot;, &quot;details&quot;:&quot;some text&quot;, &quot;feval&quot;:&quot;Triage 1&quot;, &quot;localisation&quot;:&quot;some place&quot;, &quot;infop&quot;:&quot;not yet implemented&quot;, &quot;is_in&quot;:true, &quot;is_op&quot;:false, &quot;list_c&quot;:[], &quot;list_e&quot;:[], &quot;list_a&quot;:[], &quot;list_r&quot;:[] }, { &quot;sector&quot;:&quot;AAA&quot;, &quot;code&quot;:&quot;0009&quot;, &quot;id&quot;:&quot;00000001&quot;, &quot;fname&quot;:&quot;FirstName&quot;, &quot;lname&quot;:&quot;LastName&quot;, &quot;height&quot;:&quot;158&quot;, &quot;dob&quot;:&quot;01/03/2006&quot;, &quot;din&quot;:&quot;19/05/2019 13:23&quot;, &quot;dout&quot;:&quot;19/05/2019 17:46&quot;, &quot;type&quot;:&quot;some text&quot;, &quot;group&quot;:&quot;2&quot;, &quot;dod&quot;:&quot;19/05/2019 13:48&quot;, &quot;desc&quot;:&quot;some text&quot;, &quot;details&quot;:&quot;some text&quot;, &quot;feval&quot;:&quot;Triage 1&quot;, &quot;localisation&quot;:&quot;some place&quot;, &quot;infop&quot;:&quot;not yet implemented&quot;, &quot;is_in&quot;:true, &quot;is_op&quot;:false, &quot;list_c&quot;:[], &quot;list_e&quot;:[], &quot;list_a&quot;:[], &quot;list_r&quot;:[] }, { &quot;sector&quot;:&quot;AAA&quot;, &quot;code&quot;:&quot;0009&quot;, &quot;id&quot;:&quot;00000001&quot;, &quot;fname&quot;:&quot;FirstName&quot;, &quot;lname&quot;:&quot;LastName&quot;, &quot;height&quot;:&quot;158&quot;, &quot;dob&quot;:&quot;01/03/2006&quot;, &quot;din&quot;:&quot;19/05/2019 13:23&quot;, &quot;dout&quot;:&quot;19/05/2019 17:46&quot;, &quot;type&quot;:&quot;some text&quot;, &quot;group&quot;:&quot;2&quot;, 
&quot;dod&quot;:&quot;19/05/2019 13:48&quot;, &quot;desc&quot;:&quot;some text&quot;, &quot;details&quot;:&quot;some text&quot;, &quot;feval&quot;:&quot;Triage 1&quot;, &quot;localisation&quot;:&quot;some place&quot;, &quot;infop&quot;:&quot;not yet implemented&quot;, &quot;is_in&quot;:true, &quot;is_op&quot;:false, &quot;list_c&quot;:[], &quot;list_e&quot;:[], &quot;list_a&quot;:[], &quot;list_r&quot;:[] }, { &quot;sector&quot;:&quot;AAA&quot;, &quot;code&quot;:&quot;0009&quot;, &quot;id&quot;:&quot;00000001&quot;, &quot;fname&quot;:&quot;FirstName&quot;, &quot;lname&quot;:&quot;LastName&quot;, &quot;height&quot;:&quot;158&quot;, &quot;dob&quot;:&quot;01/03/2006&quot;, &quot;din&quot;:&quot;19/05/2019 13:23&quot;, &quot;dout&quot;:&quot;19/05/2019 17:46&quot;, &quot;type&quot;:&quot;some text&quot;, &quot;group&quot;:&quot;2&quot;, &quot;dod&quot;:&quot;19/05/2019 13:48&quot;, &quot;desc&quot;:&quot;some text&quot;, &quot;details&quot;:&quot;some text&quot;, &quot;feval&quot;:&quot;Triage 1&quot;, &quot;localisation&quot;:&quot;some place&quot;, &quot;infop&quot;:&quot;not yet implemented&quot;, &quot;is_in&quot;:true, &quot;is_op&quot;:false, &quot;list_c&quot;:[], &quot;list_e&quot;:[], &quot;list_a&quot;:[], &quot;list_r&quot;:[] }, { &quot;sector&quot;:&quot;AAA&quot;, &quot;code&quot;:&quot;0009&quot;, &quot;id&quot;:&quot;00000001&quot;, &quot;fname&quot;:&quot;FirstName&quot;, &quot;lname&quot;:&quot;LastName&quot;, &quot;height&quot;:&quot;158&quot;, &quot;dob&quot;:&quot;01/03/2006&quot;, &quot;din&quot;:&quot;19/05/2019 13:23&quot;, &quot;dout&quot;:&quot;19/05/2019 17:46&quot;, &quot;type&quot;:&quot;some text&quot;, &quot;group&quot;:&quot;2&quot;, &quot;dod&quot;:&quot;19/05/2019 13:48&quot;, &quot;desc&quot;:&quot;some text&quot;, &quot;details&quot;:&quot;some text&quot;, &quot;feval&quot;:&quot;Triage 1&quot;, &quot;localisation&quot;:&quot;some place&quot;, &quot;infop&quot;:&quot;not yet implemented&quot;, &quot;is_in&quot;:true, &quot;is_op&quot;:false, &quot;list_c&quot;:[], &quot;list_e&quot;:[], &quot;list_a&quot;:[], &quot;list_r&quot;:[] }, { &quot;sector&quot;:&quot;AAA&quot;, &quot;code&quot;:&quot;0009&quot;, &quot;id&quot;:&quot;00000001&quot;, &quot;fname&quot;:&quot;FirstName&quot;, &quot;lname&quot;:&quot;LastName&quot;, &quot;height&quot;:&quot;158&quot;, &quot;dob&quot;:&quot;01/03/2006&quot;, &quot;din&quot;:&quot;19/05/2019 13:23&quot;, &quot;dout&quot;:&quot;19/05/2019 17:46&quot;, &quot;type&quot;:&quot;some text&quot;, &quot;group&quot;:&quot;2&quot;, &quot;dod&quot;:&quot;19/05/2019 13:48&quot;, &quot;desc&quot;:&quot;some text&quot;, &quot;details&quot;:&quot;some text&quot;, &quot;feval&quot;:&quot;Triage 1&quot;, &quot;localisation&quot;:&quot;some place&quot;, &quot;infop&quot;:&quot;not yet implemented&quot;, &quot;is_in&quot;:true, &quot;is_op&quot;:false, &quot;list_c&quot;:[], &quot;list_e&quot;:[], &quot;list_a&quot;:[], &quot;list_r&quot;:[] }, ] </code></pre> <h3>EDIT for further information</h3> <blockquote> <p>Maybe I didn't explain well enough why I put everything in a dataframe first. In fact, in each json a certain amount of information is stored about people and this data is updated at regular intervals. At the end of the day, the backup operation in the database takes place. So it is sometimes and often a question of duplicate data between files that must be processed before saving everything in the database.</p> <p>This means that the files cannot be processed individually. 
You have to group them all together in a single dataframe (this is the best idea I have at the moment).</p> </blockquote>
<p>As often happens, the answer is actually simple - you do not need <em>any</em> of dask's dataframe API for this; you will be acting on your files one-by-one. This requires no interaction between the tasks, and no output, so you can just use the <code>delayed</code> function.</p> <pre><code>import dask
import pandas as pd

@dask.delayed
def process(file_name):
    df = pd.read_json(file_name, encoding='utf-8')
    df.to_sql(...)  # write this file's rows to the database

tasks = [process(fn) for fn in json_files]
dask.compute(*tasks)
</code></pre> <p>This probably makes heavy use of the Python interpreter, so you will find the above cannot use all the cores when in a single process. I would recommend you use <code>dask.distributed</code> to create a local cluster with as many processes as you have cores, and one thread per process.</p>
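<p>A minimal sketch of that local-cluster recommendation (the worker count of 8 is an assumption; set it to your actual number of cores):</p> <pre><code>from dask.distributed import Client, LocalCluster

# one single-threaded worker process per core sidesteps the GIL for
# interpreter-heavy work such as JSON parsing
cluster = LocalCluster(n_workers=8, threads_per_worker=1)
client = Client(cluster)
</code></pre>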
python|json|dataframe|dask-dataframe
0
1,905,503
30,001,856
How to add or increment single item of the Python Counter class
<p>A <code>set</code> uses <code>.update</code> to add multiple items, and <code>.add</code> to add a single one.</p> <p>Why doesn't <a href="https://docs.python.org/3/library/collections.html#collections.Counter" rel="noreferrer"><code>collections.Counter</code></a> work the same way?</p> <p>To increment a single <code>Counter</code> item using <code>Counter.update</code>, it seems like you have to add it to a list:</p> <pre><code>from collections import Counter c = Counter() for item in something: for property in properties_of_interest: if item.has_some_property: # simplified: more complex logic here c.update([item.property]) elif item.has_some_other_property: c.update([item.other_property]) # elif... etc </code></pre> <p>Can I get <code>Counter</code> to act like <code>set</code> (i.e. eliminate having to put the property in a list)?</p> <p>Use case: <code>Counter</code> is very nice because of its <code>defaultdict</code>-like behavior of providing a default zero for missing keys when checking later:</p> <pre><code>&gt;&gt;&gt; c = Counter() &gt;&gt;&gt; c['i'] 0 </code></pre> <p>I find myself doing this a lot as I'm working out the logic for various <code>has_some_property</code> checks (especially in a notebook). Because of the messiness of that, a list comprehension isn't always desirable etc.</p>
<p>Well, you don't really need to use methods of <code>Counter</code> in order to count, do you? There's a <code>+=</code> operator for that, which also works in conjunction with <code>Counter</code>.</p> <pre><code>c = Counter()
for item in something:
    if item.has_some_property:
        c[item.property] += 1
    elif item.has_some_other_property:
        c[item.other_property] += 1
    elif item.has_some_third_property:
        c[item.third_property] += 1
</code></pre>
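<p>A quick demonstration of both behaviours together:</p> <pre><code>from collections import Counter

c = Counter()
c['apple'] += 1
c['apple'] += 1
print(c['apple'])   # 2
print(c['pear'])    # 0 -- missing keys still default to zero
</code></pre>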
python|counter
37
1,905,504
27,643,313
raw_input not working in function/while
<p>This part of my code is giving me problems with <code>raw_input</code>. The terminal does not detect any problems and the program runs, however it never asks the user for input; the program just prints what it has to print at the beginning and then exits for some odd reason. Everything inside the while is not executed. Thanks in advance.</p> <p>Here's the code:</p> <pre><code>options_secondscenario = ['Whats going on out there?', 'So what now?']

def second_scenario():
    print "Conversation 1"
    print "Conversation 2"
    print "Conversation 3"
    print options_secondscenario
    option = options_secondscenario[1]
    while next == option:
        choice_secondscenario = raw_input("&gt; ")
        if next == 'Whats going on out there?':
            print "Conversation 4"
        elif next == 'So what now':
            third_scenario()
        else:
            dead()

second_scenario()
</code></pre>
<p><code>next == option</code> is never true, because <code>next</code> is a built-in function and is never equal to a string (an ordering comparison such as <code>next &lt; option</code> would even raise a <code>TypeError</code> in Python 3). So your <code>while</code> loop is never entered.</p>
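<p>A minimal sketch of the corrected logic (Python 2, as in the question; <code>third_scenario</code> and <code>dead</code> are the asker's own functions):</p> <pre><code>def second_scenario():
    print "Conversation 1"
    print "Conversation 2"
    print "Conversation 3"
    print options_secondscenario
    choice = raw_input("&gt; ")                    # store the answer...
    if choice == 'Whats going on out there?':   # ...and compare the variable
        print "Conversation 4"
    elif choice == 'So what now?':
        third_scenario()
    else:
        dead()
</code></pre>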
python
3
1,905,505
27,841,771
How to use bdist_rpm like bdist_wininst?
<p>I am building a Tkinter app and distributing it to both CentOS and Win7.</p> <p>While packaging for Win7,</p> <pre><code>python setup.py bdist_wininst --install-script script_to_create_shortcut.py
</code></pre> <p>works excellently, with a launcher in both the start menu and on the desktop.</p> <p>But for CentOS,</p> <pre><code>python setup.py bdist_rpm --install-script script_to_create_shortcut_for_linux.py
</code></pre> <p>fails miserably.</p> <p>Later I realized rpm requires an sh file to execute, so I used a post-install script:</p> <pre><code>python setup.py bdist_rpm --post-install=script_to_create_shortcut_for_linux.py
</code></pre> <p>This fails too, as it is Python code stored in an sh file.</p> <p>Next, I wrote an sh file that runs <code>python -c "from module import post_install_script"</code>, but that also fails, as the post-installation script cannot find the proper function name.</p> <p><strong>setup.py</strong></p> <pre><code>setup=(..
    scripts=[os.path.join('tickets','complaints.py'),
             os.path.join('tickets','shortcut_linux.py'),
             os.path.join('tickets','tickets.svg')],
...)
</code></pre> <p><strong>shortcut creator or post-installation python script</strong></p> <pre><code>file_created(os.path.join(sys.prefix,'bin','complaints.py'))
desktop = get_special_folder_path("CSIDL_COMMON_DESKTOPDIRECTORY")
startmenu = get_special_folder_path("CSIDL_COMMON_STARTMENU")
create_shortcut(os.path.join(sys.prefix,'bin','complaints.py'),
                "Complaints Register",
                os.path.join(desktop,'complaints.desktop'),
                '','',
                os.path.join(sys.prefix,'bin','tickets.svg'))
file_created(os.path.join(desktop,'complaints.desktop'))
create_shortcut(os.path.join(sys.prefix,'bin','complaints.py'),
                "Complaints Register",
                os.path.join(startmenu,'complaints.desktop'),
                '','',
                os.path.join(sys.prefix,'bin','tickets.svg'))
</code></pre> <p>It fails with the error: global name 'file_created' is not defined...</p> <p>Why is rpm not as simple as wininst, which does everything very simply?</p> <p>I have spent too much time on this... Any help will be appreciated. Thanks.</p> <p><strong>Note</strong>: for wininst the shortcut file had different paths, e.g. it didn't have 'bin'.</p>
<p>Are you building the rpm on Windows or CentOS? If it is a CentOS system, you need to install the <code>rpm-build</code> package before building an rpm with</p> <p><code>python setup.py bdist_rpm</code></p> <p>So first, install rpm-build using the command</p> <p><code>yum install rpm-build</code></p> <p>Then run the command</p> <p><code>python setup.py bdist_rpm</code></p>
python|tkinter|centos|rpm|distutils
-1
1,905,506
65,609,778
How to replace keys to values in DataFrame?
<p>I have the dictionary and the dataframe. I would like to replace keys to values in column 'x_kod'. How can I do that?</p> <pre><code>lookup_x_kod = { 'BUBD01': 'budynki mieszkalne jednorodzinne', 'BUBD02': 'budynki o dwóch mieszkaniach', 'BUBD03': 'budynki o trzech i więcej mieszkaniach', 'BUBD04': 'budynki zbiorowego zamieszkania', 'BUBD05': 'budynki hoteli', 'BUBD06': 'budynki zakwaterowania turystycznego, pozostałe', 'BUBD07': 'budynki biurowe', 'BUBD08': 'budynki handlowo-usługowe', 'BUBD09': 'budynki łączności, dworców i terminali', 'BUBD10': 'budynki garaży', 'BUBD11': 'budynki przemysłowe', 'BUBD12': 'zbiorniki, silosy i budynki magazynowe', 'BUBD13': 'ogólnodostępne obiekty kulturalne', 'BUBD14': 'budynki muzeów i bibliotek', 'BUBD15': 'budynki szkół i instytucji badawczych', 'BUBD16': 'budynki szpitali i zakładów opieki medycznej', 'BUBD17': 'budynki kultury fizycznej', 'BUBD18': 'budynki gospodarstw rolnych', 'BUBD19': 'budynki przeznaczone do sprawowania kultu religijnego i czynności religijnych', 'BUBD20': 'obiekty budowlane wpisane do rejestru zabytków i objęte indywidualną ochroną konserwatorską oraz nieruchome, archeologiczne dobra kultury', 'BUBD21': 'pozostałe budynki niemieszkalne, gdzie indziej nie wymienione', } </code></pre> <p><a href="https://i.stack.imgur.com/4jDZ1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4jDZ1.png" alt="enter image description here" /></a></p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>series.map</code></a>:</p> <pre><code>df['x_kod'] = df['x_kod'].map(lookup_x_kod) </code></pre>
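<p>Note that <code>map</code> leaves <code>NaN</code> for any code missing from the dictionary; if unmatched codes should be kept as-is, one common option is:</p> <pre><code>df['x_kod'] = df['x_kod'].map(lookup_x_kod).fillna(df['x_kod'])
</code></pre>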
python|pandas
0
1,905,507
43,279,497
How to run a python script on images present in firebase?
<p>I have an image in my Firebase account and I want to run a Python script on this image and get a result back.</p> <p>Can someone suggest a simple way to do this?</p> <p>I tried hosting the Python file on Heroku. Fetching the image from Firebase to Heroku and running the Python script would be an overhead.</p> <p>Is there a simpler way to run the Python script in Firebase itself?</p>
<p>There are a handful of Python wrappers for Firebase but some have not been updated in awhile. Try this <a href="https://github.com/thisbejim/Pyrebase#getting-started" rel="nofollow noreferrer">Getting Started with Pyrebase</a>, then try this tutorial for <a href="https://devcenter.heroku.com/articles/getting-started-with-python#introduction" rel="nofollow noreferrer">Getting started on Heroku with Python</a>.</p>
python|firebase|heroku|firebase-realtime-database|firebase-authentication
0
1,905,508
37,139,014
Find image type in python openCV
<p>I recently had some issues finding the type of a template image I'm using for pattern matching purposes. I use the Python version of OpenCV, and the image returned by <code>imread</code> does not seem to have a &quot;type&quot; property like in the C++ OpenCV implementation.</p> <p>I would like to access the type of the template to create a &quot;dst&quot; <code>Mat</code> having the same size and type as the template.</p> <p>Here is the code:</p> <pre><code>template = cv2.imread('path-to-file', 1)
height, width = template.shape[:-1]
dst = cv2.cv.CreateMat(width, height, template.type)
# ---&gt; Error: 'numpy.ndarray' object has no attribute 'type'
</code></pre> <p>Do you have any thoughts about this issue?</p> <p>Thanks a lot for your answers.</p>
<p>While it's true that the numpy array type can be accessed using <code>template.dtype</code>, that's not the type you want to pass to <code>cv2.cv.CreateMat()</code> , e.g.</p> <pre><code>In [41]: cv2.imread('abalone.jpg', cv2.IMREAD_COLOR).dtype Out[41]: dtype('uint8') In [42]: cv2.imread('abalone.jpg', cv2.IMREAD_GRAYSCALE).dtype Out[42]: dtype('uint8') </code></pre> <p>If you pass the numpy array's <code>dtype</code> to <code>cv2.cv.CreateMat()</code> you will get this error, e.g.</p> <pre><code>cv2.cv.CreateMat(500, 500, template.dtype) </code></pre> <blockquote> <p>TypeError: an integer is required</p> </blockquote> <p>And as you can see, the dtype doesn't change for grayscale/color, however</p> <pre><code>In [43]: cv2.imread('abalone.jpg', cv2.IMREAD_GRAYSCALE).shape Out[43]: (250, 250) In [44]: cv2.imread('abalone.jpg', cv2.IMREAD_COLOR).shape Out[44]: (250, 250, 3) </code></pre> <p>Here you can see that it's the <code>img.shape</code> that is more useful to you.</p> <h2>Create a numpy matrix from the template</h2> <p>So you want to create a object from your template you can do :</p> <pre><code>import numpy as np dst = np.zeros(template.shape, dtype=template.dtype) </code></pre> <p>That should be useable as far as Python API goes.</p> <h2>cv2.cv.CreateMat</h2> <p>If you want this using the C++ way of creating matrices, you should remember the type with which you opened your template:</p> <pre><code>template = cv2.imread('path-to-file', 1) # 1 means cv2.IMREAD_COLOR height, width = template.shape[:-1] dst = cv2.cv.CreateMat(height, width, cv2.IMREAD_COLOR) </code></pre> <p><strong>If you insist on guessing the type:</strong> While it's not perfect, you can guess by reading the length of the dimensions of the matrix, an image of type <code>IMREAD_COLOR</code> has 3 dimensions, while <code>IMREAD_GRAYSCALE</code> has 2</p> <pre><code>In [58]: len(cv2.imread('abalone.jpg', cv2.IMREAD_COLOR).shape) Out[58]: 3 In [59]: len(cv2.imread('abalone.jpg', cv2.IMREAD_GRAYSCALE).shape) Out[59]: 2 </code></pre>
python|opencv
24
1,905,509
37,032,619
Scipy root-finding method
<p>I am writing a class that has a mathematical function as an attribute, say <em>f</em>.</p> <p><em>f</em> is:</p> <ul> <li>Defined on a real segment [-w;+w]</li> <li>Positive and bounded above by a real H</li> <li>Even (for all x in [-w;+w], f(x)=f(-x)) and f(w)=f(-w)=0</li> <li>Differentiable over [-w;+w], and its derivative is positive and continuous over [-w;0]</li> </ul> <p>My class looks like:</p> <pre><code>from scipy.misc import derivative
from scipy.integrate import quad
from math import cosh, sqrt

class Function(object):
    w = 1.
    PRECISION = 1e-6

    def f(self, x):
        '''This is an example, but f could be any math function
        matching the requirements above.
        '''
        return 0.5 + 1.07432*(1 - cosh(x/1.07432))

    def deriv_f(self, x):
        return derivative(self.f, x, self.PRECISION)

    def x_to_arc_length(self, x):
        def func(x):
            return sqrt(1 + self.deriv_f(x)**2)
        return quad(func, -self.w, x)[0]

    def arc_length_to_x(self, L):
        bound = [-self.w, self.w]
        while bound[1] - bound[0] &gt; self.PRECISION:
            mid = sum(bound)/2
            bound[(self.x_to_arc_length(mid) - L &gt; 0)] = mid
        return sum(bound)/2
</code></pre> <p>I use bisection to invert the arc length method, but I was looking at changing this to one of the <code>scipy.optimize</code> root-finding methods to gain speed. I am new to scipy and must admit that my math is a bit rusty... Scipy gives me the choice between <code>brentq</code>, <code>brenth</code>, <code>ridder</code>, <code>bisect</code> and <code>newton</code>.</p> <p>Could anyone point me to the best-suited method for a case like this? Or maybe there is a better library for this?</p>
<p>I'm not an expert at Python, but I know from Numerical Analysis that among the methods you listed (Brent, bisection, Ridder's method and Newton-Raphson), Brent's method is usually preferred for generic real scalar functions <em>f</em> of a single real variable <em>x</em>. As you can read <a href="https://en.wikipedia.org/wiki/Brent%27s_method" rel="nofollow">here</a>, if <em>f</em> is continuous and the method is applied to an interval [a,b] with <em>f(a)f(b)</em>&lt;0, then Brent's method enjoys guaranteed convergence to a zero, like the bisection method. For many well-behaved functions, Brent's method converges much faster than bisection , but in some unlucky cases it can require <em>N^2</em> iterations, where <em>N</em> is the number of iterations of bisection to achieve a given tolerance. </p> <p><a href="https://en.wikipedia.org/wiki/Newton%27s_method#Proof_of_quadratic_convergence_for_Newton.27s_iterative_method" rel="nofollow">Newton's method</a>, on the other hand, usually converges faster than Brent's, when it converges, but there are cases where it doesn't converge at all. For the same function, Newton's method may or may not converge, depending also on the distance between the starting point and the root. Thus, it's riskier to use in a general purpose code.</p> <p>Regarding the choice between <code>brentq</code> and <code>brenth</code>, <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.optimize.brenth.html" rel="nofollow">it looks like</a> they should be pretty similar, with the first one more heavily tested. Thus you could go for <code>brentq</code>, or, if you have some time, do a little benchmarking between them.</p>
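<p>A sketch of what that could look like for the class in the question, relying on the fact that <code>x_to_arc_length</code> is monotonically increasing on [-w, w] (the integrand is positive), so the sign-change condition for Brent's method holds for any L between 0 and the total length:</p> <pre><code>from scipy.optimize import brentq

def arc_length_to_x(self, L):
    # g changes sign on [-w, w] because g(-w) = -L &lt;= 0 &lt;= g(w)
    g = lambda x: self.x_to_arc_length(x) - L
    return brentq(g, -self.w, self.w, xtol=self.PRECISION)
</code></pre>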
python|scipy
2
1,905,510
48,639,285
a bytes-like object is required, not 'int'
<p><a href="https://i.stack.imgur.com/IoOPS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IoOPS.jpg" alt="enter image description here"></a></p> <p>As you can see, this parameter is already of kind 'bytes'; however, the type error still exists. Can anyone tell me where I am wrong?</p>
<p><code>writelines</code> expects an iterable, so it will iterate on your <code>bytes</code> object, and each item it iterates on is an <code>int</code>.</p> <p>You probably mean to <code>write</code> your <code>bytes</code> object instead.</p>
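<p>A small demonstration of the difference:</p> <pre><code>data = b'hello'

with open('out.bin', 'wb') as f:
    f.write(data)        # fine: writes the whole bytes object
    # f.writelines(data) # TypeError: iterating bytes yields ints, not bytes
</code></pre>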
python|file|nlp
2
1,905,511
67,097,567
Get Content-length minus length of headers
<p>This Python code</p> <pre><code>requests.head(&quot;http://...&quot;).headers['Content-length']
</code></pre> <p>returns a length that differs from the length of the downloaded content.</p> <p>For example:</p> <pre><code>import requests

thum = &quot;https://img.youtube.com/vi/VcseIGkyaw8/hqdefault.jpg&quot;
len1 = int(requests.head(thum).headers['Content-length'])
len2 = len(requests.get(thum).text)
print(len1, len2, len1-len2)
</code></pre> <p>Result:</p> <pre><code>34353 32516 1837
</code></pre> <p>But I want to get the size of the file (<strong>without downloading the file</strong>).</p> <p>How can I do it?</p>
<p>The problem is in your code, probably the assumption that the &quot;text&quot; size matches the length of the binary.</p> <p>You can easily check with &quot;curl&quot; that both HEAD and GET return the same Content-Length, and that that matches the size of the JPG.</p>
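<p>A quick way to verify this, assuming the server reports an accurate <code>Content-Length</code>: compare against <code>.content</code> (raw bytes) rather than <code>.text</code> (decoded characters):</p> <pre><code>import requests

thum = 'https://img.youtube.com/vi/VcseIGkyaw8/hqdefault.jpg'
len1 = int(requests.head(thum).headers['Content-length'])
len2 = len(requests.get(thum).content)  # bytes, not decoded text
print(len1, len2)  # the two should now agree
</code></pre>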
python|http|request
0
1,905,512
4,777,592
Django-ratings and Jquery integration?
<p>How do I make django-ratings work with jQuery?</p> <p>What I'm trying to do is allow the user to select however many stars they want to give the product, and have the corresponding rating processed asynchronously. I realize this is probably basic AJAX, and I apologize if this is a stupid question.</p> <p>Thank you in advance.</p>
<p>I'm not sure if I understand your question, but is your question from the JavaScript side or from the model side? From JavaScript, I used something like this:</p> <pre><code>STARS_ELEMENT.stars({
    callback: function(ui, type, value){
        $.post('URL_ADDRESS', {rate: value}, function(data){
            STARS_ELEMENT.stars("select", data);
        });
    }
});
</code></pre> <p>Then in my view, I would have a function that captures that request and does this:</p> <pre><code>p = Product.objects.get(id=product_id)
p.rating.add(score=int(request.POST['rate']),
             user=request.user,
             ip_address=request.META['REMOTE_ADDR'])
p.save()
</code></pre> <p>And use the response to send back the most up-to-date rate value. Is that what you were looking for?</p>
jquery|python|django
2
1,905,513
48,221,974
Pandas Dataframe: Can I normalize to return percentage for each column using df.apply(pd.value_counts)?
<p>I have spent the best part of the day searching for this, but cannot find a satisfactory answer.</p> <p>I want to return the percentage of a categorical dataframe (0 &amp; 1) by column and normalize it to return percentages which I would like to then present as a stacked bar graph.</p> <p>If applying <code>value_counts</code> by <code>pd.Series.value_counts()</code>, I could do it by individual columns but that will be time consuming. When I try to use <code>df.apply(pd.value_counts(normalize=True))</code>, the following error occurs: </p> <pre><code>'value_counts() missing 1 required positional argument: 'values' </code></pre> <p>Why can't I apply the same series logic to the whole dataframe using <code>df.apply</code>?</p>
<p>I think you need <code>lambda</code>:</p> <pre><code>df.apply(lambda x: pd.value_counts(x, normalize=True)) </code></pre>
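<p>Passing the keyword argument through <code>apply</code> works as well, since <code>DataFrame.apply</code> forwards extra keyword arguments to the applied function:</p> <pre><code>df.apply(pd.value_counts, normalize=True)
</code></pre>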
python|pandas
2
1,905,514
73,692,569
Discord bot with python
<p>So I'm coding a Discord bot with Python. When I type this:</p> <pre><code>client = commands.Bot(command_prefix= &quot;!&quot;)
</code></pre> <p>and run it, this comes up:</p> <pre><code>TypeError: BotBase.__init__() missing 1 required keyword-only argument: 'intents'
</code></pre>
<p>You forgot to add intents into parameters.</p> <pre class="lang-py prettyprint-override"><code>intents = discord.Intents.default() client = commands.Bot(command_prefix='!', intents=intents) </code></pre> <p>Or you can use <code>pip install discord.py==1.7.3</code> which doesn't require you to pass in intents as a parameter.</p>
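<p>A hedged addition: if the bot needs to read message text (which prefix commands do), the privileged message-content intent must also be enabled in discord.py 2.x, both in code and in the Discord developer portal:</p> <pre class="lang-py prettyprint-override"><code>intents = discord.Intents.default()
intents.message_content = True  # privileged intent; enable it in the portal too
client = commands.Bot(command_prefix='!', intents=intents)
</code></pre>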
python|encoding|discord|discord.py
0
1,905,515
17,237,198
python - creating an empty file and closing in one line
<p>I want to ensure that all resources are being cleaned up correctly. Is this a safe thing to do:</p> <pre><code>try:
    closing(open(okFilePath, "w"))
except Exception, exception:
    logger.error(exception)
    raise
</code></pre> <p><strong>EDIT:</strong></p> <p>In fact, thinking about it, do I even need the try/except? Since I am re-raising the exception anyway, I can log at a higher level. If it errors on creating the file, can one assume there is nothing to close?</p>
<p>To be sure that the file is closed in any case, you can use the <a href="http://docs.python.org/release/2.5/whatsnew/pep-343.html" rel="noreferrer">with</a> statement. For example:</p> <pre><code>try:
    with open(path_to_file, "w+") as f:
        pass  # Do whatever with f
except Exception:
    pass  # log exception
</code></pre>
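<p>For the literal &quot;create an empty file and close it in one line&quot; from the title, a minimal option (assuming the directory exists and is writable) is:</p> <pre><code>open(okFilePath, "w").close()
</code></pre>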
python|file-io
8
1,905,516
17,494,031
Thrift: Python Server, Erlang Client Errors... {thrift_socket_server,244,{child_error,function_clause,[]}}
<p>This is actually my first question on stackoverflow, but I've been having a problem that I can't really seem to solve. I'm making a Python server that calls an erlang client through thrift. The only function I've made in thrift is one called bar, which takes in an integer and prints bar (integer). Heres the Python Client, its not too complicated:</p> <pre><code>#!/usr/bin/env python import sys sys.path.append('../gen-py') from foo import Foo from foo.ttypes import * from foo.constants import * from thrift import Thrift from thrift.transport import TSocket from thrift.transport import TTransport from thrift.protocol import TBinaryProtocol try: # Make socket transport = TSocket.TSocket('localhost', 9999) # Buffering is critical. Raw sockets are very slow transport = TTransport.TBufferedTransport(transport) # Wrap in a protocol protocol = TBinaryProtocol.TBinaryProtocol(transport) # Create a client to use the protocol encoder client = Foo.Client(protocol) # Connect! transport.open() msg = client.bar(1452) print msg transport.close() except Thrift.TException, tx: print "%s" % (tx.message) </code></pre> <p>Here is my thrift client, which is listening on port 9999:</p> <pre><code>-module(foo_service). -include("foo_thrift.hrl"). -include("foo_types.hrl"). -export([start_link/0, stop/1, handle_function/2, % Thrift implementations % FILL IN HERE bar/1]). %%%%% EXTERNAL INTERFACE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% start_link() -&gt; thrift_socket_server:start ([{port, get_port()}, {name, ?MODULE}, {service, foo_thrift}, {handler, ?MODULE}, {framed, true}, {socket_opts, [{recv_timeout, 60*60*1000}]}]). stop(_Server) -&gt; thrift_socket_server:stop (?MODULE), ok. %%%%% THRIFT INTERFACE %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% handle_function(Function, Args) when is_atom(Function), is_tuple(Args) -&gt; case apply(?MODULE, Function, tuple_to_list(Args)) of ok -&gt; ok; Reply -&gt; {reply, Reply} end. %%%%% HELPER FUNCTIONS %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% get_port() -&gt; {ok, Result} = application:get_env(foo, service_port), Result. %% ADD THRIFT FUNCTIONS HERE bar(I) -&gt; io:format("bar (~p)~n", [I]). 
</code></pre> <p>So I start up the thrift client, and from the python server I call client.bar(1452), and unfortunately get a child error:</p> <pre><code>=CRASH REPORT==== 5-Jul-2013::08:34:32 === crasher: initial call: thrift_socket_server:acceptor_loop/1 pid: &lt;0.51.0&gt; registered_name: [] exception error: no function clause matching thrift_socket_transport:read({data,#Port&lt;0.1067&gt;,3600000}, -2147418111) (src/thrift_socket_transport.erl, line 53) in function thrift_transport:read/2 (src/thrift_transport.erl, line 67) in call from thrift_framed_transport:read/2 (src/thrift_framed_transport.erl, line 79) in call from thrift_transport:read/2 (src/thrift_transport.erl, line 67) in call from thrift_binary_protocol:read_data/2 (src/thrift_binary_protocol.erl, line 315) in call from thrift_binary_protocol:read/2 (src/thrift_binary_protocol.erl, line 286) in call from thrift_binary_protocol:read/2 (src/thrift_binary_protocol.erl, line 175) in call from thrift_protocol:read_specific/2 (src/thrift_protocol.erl, line 186) ancestors: [foo_service,foo_sup,&lt;0.46.0&gt;] messages: [] links: [&lt;0.48.0&gt;,#Port&lt;0.1067&gt;] dictionary: [] trap_exit: false status: running heap_size: 987 stack_size: 27 reductions: 513 neighbours: =ERROR REPORT==== 5-Jul-2013::08:34:32 === {thrift_socket_server,244,{child_error,function_clause,[]}} </code></pre> <p>Any Ideas? Thanks for any help!</p>
<p>Figured it out! I was using TBufferedTransport when I had specified framed transport in my Erlang file. I changed it to TFramedTransport, recompiled my Thrift files, and everything worked nicely.</p>
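<p>For reference, the one-line change on the Python side looks like this (matching <code>{framed, true}</code> in the Erlang server options):</p> <pre><code>transport = TTransport.TFramedTransport(transport)  # was TBufferedTransport
</code></pre>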
python|erlang|thrift
4
1,905,517
64,562,234
How to add parentheses to math operations in django template?
<p>I have a math operation that requires parentheses in Django's template. However, if I put them, I am getting the error in the attachment. When I remove them the program is running but the results are not correct. Below is my code in the template:</p> <pre><code>{% if salaireImposable &lt; 150000 %} &lt;td&gt; {{ salaireImposable|multiply:0.00|floatformat:0 }} &lt;/td&gt; {% elif salaireImposable &gt;= 150000 and salaireImposable &lt;= 500000 %} &lt;td&gt; {{ 150000|add:&quot;-0&quot;|multiply:0.00|add:salaireImposable|add:&quot;-150000&quot;|multiply:0.05|floatformat }}&lt;/td&gt; {% elif salaireImposable &gt;= 500000 and salaireImposable &lt;= 1000000 %} &lt;td&gt;{{ 500000|add:&quot;-150000&quot;|multiply:0.05|add:salaireImposable|add:&quot;-500000&quot;|multiply:0.1|floatformat }}&lt;/td&gt; {% elif salaireImposable &gt;= 1000000 and salaireImposable &lt;= 1500000 %} &lt;td&gt;{{ 500000|add:&quot;-150000&quot;|multiply:0.05|add:(1000000|add:&quot;-500000&quot;)|multiply:0.1|add:(salaireImposable|add:&quot;-1000000&quot;)|multiply:0.15|floatformat}}&lt;/td&gt; {% elif salaireImposable &gt;= 1500000 and salaireImposable &lt;= 2500000 %} &lt;td&gt;{{500000|add:&quot;-150000&quot;|multiply:0.05|add:1000000|add:&quot;-500000&quot;|multiply:0.1|add:1500000|add:&quot;-1000000&quot;|multiply:0.15|add:salaireImposable|add:&quot;-1500000&quot;|multiply:0.2|floatformat:0 }}&lt;/td&gt; {% elif salaireImposable &gt;= 2500000 and salaireImposable &lt;= 3500000 %} &lt;td&gt; {{500000|add:&quot;-150000&quot;|multiply:0.05|add:1000000|add:&quot;-500000&quot;|multiply:0.1|add:1500000|add:&quot;-1000000&quot;|multiply:0.15|add:2500000|add:&quot;-1500000&quot;|multiply:0.2|add:salaireImposable|add:&quot;-2500000&quot;|multiply:0.25|floatformat:0 }}&lt;/td&gt; {% else %} &lt;td&gt;{{ 500000|add:&quot;-150000&quot;|multiply:0.05|add:1000000|add:&quot;-500000&quot;|multiply:0.1|add:1500000|add:&quot;-1000000&quot;|multiply:0.15|add:2500000|add:&quot;-1500000&quot;|multiply:0.2|add:3500000|add:&quot;-2500000&quot;|multiply:0.25|add:salaireImposable|add:&quot;-3500000&quot;|multiply:0.30|floatformat:0 }} &lt;/td&gt; {% endif %} </code></pre> <p>Here is the way I did it in a view and the result are ok:</p> <pre><code>if float(salaireBrut) &lt; 150000: salaireImposable = float(salaireBrut)*0.00 elif 150000 &lt;= float(salaireBrut) &lt;= 500000: salaireImposable=(150000-0)*0.00+(float(salaireBrut)-150000)*0.05 elif 500000 &lt;= float(salaireBrut) &lt;= 1000000: salaireImposable = (500000-150000)*0.05+(float(salaireBrut)-500000)*0.1 elif 1000000 &lt;= float(salaireBrut) &lt;=1500000: salaireImposable = (500000-150000)*0.05+(1000000-500000)*0.1+(float(salaireBrut)-1000000)*0.15 elif 1500000 &lt;=float(salaireBrut)&lt;= 2500000: salaireImposable = (500000-150000)*0.05+(1000000-500000)*0.1+(1500000-1000000)*0.15+\ (float(salaireBrut)-1500000)*0.2 elif 2500000 &lt;= float(salaireBrut) &lt;= 3500000: salaireImposable = (500000-150000)*0.05+(1000000-500000)*0.1+(1500000-1000000)*0.15+\ (2500000-1500000)*0.2+(float(salaireBrut)-2500000)*0.25 else: salaireImposable =(500000-150000)*0.05+(1000000-500000)*0.1+(1500000-1000000)*0.15+(2500000-1500000)*0.2+(3500000-2500000)*0.25+(float(salaireBrut)-3500000)*0.3 </code></pre> <p><a href="https://i.stack.imgur.com/pwmYC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pwmYC.png" alt="enter image description here" /></a></p> <p>I would like to know how can I make parentheses like this: 
<code>{{(500000|add:&quot;-150000&quot;)|multiply:0.05|add:(salaireImposable|add:&quot;-500000&quot;)|multiply:0.1}}</code> so that the results will be correct.</p> <p>Please assist me.</p>
<p>I solved the problem by calculating the same in its models instead of in its view or template and the results are perfect</p> <p>Here are my solutions:</p> <pre><code>def get_SalaireImposable(self): salaireImposable= Decimal(float(self.salaire_de_base + (self.mutuelle_de_sante*(Decimal(0.1))) + self.caisse_de_retraite + self.prime_d_anciennete + self.prime_de_logement + \ self.heures_supplementaires + self.autre_prime)*12*0.7) return salaireImposable def get_IRPP(self): if self.get_SalaireImposable()&lt;150000: total_irpp=float(self.get_SalaireImposable())*0.00 return total_irpp elif 150000 &lt;= float(self.get_SalaireImposable()) &lt;= 500000: total_irpp = float((150000-0)*0.00+(float(self.get_SalaireImposable())-150000)*0.05) return total_irpp elif 500000 &lt;= float(self.get_SalaireImposable()) &lt;= 1000000: total_irpp=float((500000-150000)*0.05+(float(self.get_SalaireImposable())-500000)*0.1) return total_irpp elif 1000000 &lt;= float(self.get_SalaireImposable()) &lt;= 1500000: total_irpp=float((500000-150000)*0.05+(1000000-500000)*0.1+(float(self.get_SalaireImposable())-1000000)*0.15) return total_irpp elif 1500000 &lt;= float(self.get_SalaireImposable()) &lt;= 2500000: total_irpp=float((500000-150000)*0.05+(1000000-500000)*0.1+(1500000-1000000)*0.15+\ (float(self.get_SalaireImposable())-1500000)*0.2) return total_irpp elif 2500000 &lt;= float(self.get_SalaireImposable()) &lt;= 3500000: total_irpp=float((500000-150000)*0.05+(1000000-500000)*0.1+(1500000-1000000)*0.15+\ (2500000-1500000)*0.2+(float(self.get_SalaireImposable())-2500000)*0.25) return total_irpp else: total_irpp=float((500000-150000)*0.05+(1000000-500000)*0.1+(1500000-1000000)*0.15+(2500000-1500000)*0.2+\ (3500000-2500000)*0.25+(float(self.get_SalaireImposable())-3500000)*0.3) return total_irpp </code></pre>
python-3.x|django-templates
-1
1,905,518
69,772,071
Reformatting Strings from Scraped Data in order to Satisfy Keyword Argument
<p>I am working on a baseball analysis project where I web-scrape the real-time lineups for a given team on a given date.</p> <p>I am currently facing an issue with the names that I receive in the scraped dataframe -- in random cases, the player names will come in a different format and are unusable (I take the player name and pass it into a statistics function which will only work if I have the player's name formatted correctly).</p> <p>Example:</p> <pre><code>Freddie Freeman
Ozzie Albies
Ronald Acuna
Austin RileyA. A.Riley
Dansby Swanson
Adam Duvall
Joc PedersonJ. J.Pederson
</code></pre> <p>As you can see, most of the names are formatted normally; however, in a few cases the player name is displayed along with the first letter of their first name added onto their last name, followed by a period, and then their first initial and last name. If I could turn &quot;Austin RileyA. A.Riley&quot; into &quot;Austin Riley&quot;, then everything would work.</p> <p>This is a consistent theme throughout all teams and data that I pull -- sometimes there are a few players whose names are formatted in this exact way: FirstName + LastName + first letter of first name. + first initial. + LastName.</p> <p>I am trying to figure out a way to re-format the names so that they are usable, and to do so in a way that is generalized/applicable to any possible names.</p>
<p>If the theme is really consistent you could do something like this:</p> <pre><code>name_list = ['Freddie Freeman', 'Ozzie Albies', 'Ronald Acuna',
             'Austin RileyA. A.Riley ', 'Dansby Swanson', 'Adam Duvall',
             'Joc PedersonJ. J.Pederson']

new_list = []
for n in name_list:
    dot = n.find('.')
    # names without a period are already clean; find() returns -1 for them,
    # and slicing with n[:-2] would wrongly truncate those names
    new_list.append(n[:dot-1] if dot != -1 else n)
new_list
</code></pre> <p>There are several methods to achieve this (also using regex, which I would not recommend). The example I have posted is the best in my opinion (<a href="https://python-reference.readthedocs.io/en/latest/docs/str/find.html" rel="nofollow noreferrer"><code>find() documentation</code></a>).</p>
python|string|reformatting
1
1,905,519
72,900,720
How do I query a Postgres database running in a Docker container from a FastAPI endpoint or Python script in another Docker container?
<p>I'm trying to get query results by running a SQL command using a python script which is shown below (the below file is named <strong>database.py</strong> and I'm importing the <code>query()</code> function to another python file i.e., a FastApi endpoint from which I'm passing the query):</p> <pre><code>import psycopg2 import psycopg2.extras def query(query:str): try: conn = psycopg2.connect(&quot;dbname=DBNAME host=localhost user=postgres password=PASSWORD&quot;) if not conn: return &quot;Database connection failed!!&quot; cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor) cur.execute (query=query) result =cur.fetchall() response = [] for row in result: response.append(dict(row)) return response finally: if conn is not None: conn.close() </code></pre> <p>My Dockerfile is:</p> <pre><code>FROM python WORKDIR /requisition COPY ./requirements.txt . RUN pip install -r requirements.txt COPY . . EXPOSE 9000 </code></pre> <p>My Docker-compose file for is:</p> <pre><code>version: '3.8' services: db: container_name: &quot;database&quot; image: postgres:14.1-alpine environment: - POSTGRES_DB=DBNAME - POSTGRES_USER=postgres - POSTGRES_PASSWORD=PASSWORD ports: - '5432:5432' expose: - 5432 app: build: . ports: - &quot;9000:8000&quot; command: uvicorn requisitionApi:app --host 0.0.0.0 --reload depends_on: - db </code></pre> <p>My fastapi app is:</p> <pre><code>from fastapi import FastAPI import uvicorn from database import query from fastapi.middleware.cors import CORSMiddleware app = FastAPI() app.add_middleware( CORSMiddleware, allow_origins=[&quot;*&quot;], allow_credentials=True, allow_methods=[&quot;*&quot;], allow_headers=[&quot;*&quot;] ) @app.get(&quot;/results&quot;) def getRequisitionList(): sql_query = &quot;SELECT * FROM tablename&quot; result = query(sql_query) return result </code></pre>
<p>Your database is not on <code>localhost</code> to the container; it is on <code>db</code> (the service's name), <code>database</code> (the container's name), the container's IP, or the service's IP. Since you also forwarded the ports, it is also on your computer's IP.</p> <p>Something like</p> <pre><code>conn = psycopg2.connect(&quot;dbname=DBNAME host=db user=postgres password=PASSWORD&quot;)
</code></pre> <p>should work; you can also open a shell in your container and test DNS resolution.</p>
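<p>A small sketch of a connection that works both inside and outside Compose, assuming the service name <code>db</code> from the compose file above (the host is read from an environment variable, defaulting to <code>db</code>):</p> <pre><code>import os
import psycopg2

conn = psycopg2.connect(
    dbname='DBNAME',
    user='postgres',
    password='PASSWORD',
    host=os.getenv('DB_HOST', 'db'),  # 'db' is the compose service name
)
</code></pre>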
python|postgresql|docker|docker-compose|fastapi
0
1,905,520
56,004,938
How to use both the dashed-negatives default style + line colors in Matplotlib/Python?
<p>When drawing a contour plot with Python/Matplotlib, the default behaviour (for 1 color) is that negative values are dashed. This is a desired feature for me. However, if I set the color of the lines, all of them are drawn solid. I would like to combine the dashed negatives and custom colors.</p> <p><strong>How can I plot colored lines and keep the negative-dashed style?</strong></p> <p>Below I copy (modifying a bit) an example from this tutorial: <a href="https://www.oreilly.com/library/view/python-data-science/9781491912126/ch04.html" rel="nofollow noreferrer">https://www.oreilly.com/library/view/python-data-science/9781491912126/ch04.html</a></p> <pre class="lang-py prettyprint-override"><code>import matplotlib
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 40)
X, Y = np.meshgrid(x, y)

def f(x, y):
    return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)

Z = f(X, Y)

# Default: 1 color, negatives are dashed
plt.contour(X, Y, Z, colors='black')
plt.show()

# Set colormap: all lines are solid
plt.contour(X, Y, Z, cmap='RdBu')
plt.show()

# Set individual colors: all solid lines
plt.contour(X, Y, Z, colors=['b','b','b','r','r','r','r','r'])
plt.show()
</code></pre> <p>Default: negatives are dashed. <a href="https://i.stack.imgur.com/XwZN0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XwZN0.png" alt="Default"></a></p> <p>Set colors via colormap: all have become solid. <a href="https://i.stack.imgur.com/Bcd49.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bcd49.png" alt="Colormap"></a></p> <p>Set individual colors: all solid again. I would like the blue lines here to be dashed, automatically, since they are negative values. <a href="https://i.stack.imgur.com/HyPPA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HyPPA.png" alt="Individual lines with colors"></a></p>
<p>Unfortunately, the feature of different linestyles for negative values is not exposed to the user. It is bound to whether or not a single color is used for the lines. This toggles a property <code>monochrome</code>, which in turn decides whether or not to change the linestyle.</p> <p>A quick hack is hence to set the <code>monochrome</code> attribute to True and reset the linestyles.</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 40)
X, Y = np.meshgrid(x, y)

def f(x, y):
    return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)

Z = f(X, Y)

cntr = plt.contour(X, Y, Z, cmap='RdBu')
cntr.monochrome = True
for col, ls in zip(cntr.collections, cntr._process_linestyles()):
    col.set_linestyle(ls)
plt.show()
</code></pre> <p><a href="https://i.stack.imgur.com/7LS2K.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7LS2K.png" alt="enter image description here"></a></p> <p>Since this uses a private <code>._process_linestyles()</code> attribute, it would not be recommended to use it in production code; rather use <a href="https://stackoverflow.com/a/56005617/4124317">@WarrenWeckesser's answer</a> or the option below.</p> <p>Here I would like to point to the option to set the <code>linestyles</code> a priori, depending on the levels:</p> <pre><code>import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np

x = np.linspace(0, 5, 50)
y = np.linspace(0, 5, 40)
X, Y = np.meshgrid(x, y)

def f(x, y):
    return np.sin(x) ** 10 + np.cos(10 + y * x) * np.cos(x)

Z = f(X, Y)

loc = matplotlib.ticker.MaxNLocator(7)
lvls = loc.tick_values(Z.min(), Z.max())
cntr = plt.contour(X, Y, Z, levels=lvls, cmap='RdBu',
                   linestyles=np.where(lvls &gt;= 0, "-", "--"))
plt.show()
</code></pre>
python|matplotlib|visualization|contour
4
1,905,521
73,283,740
cannot install required dependencies into my Conda environment
<p>the command I typed is &quot;python3 -m pip install -r requirements.txt&quot;.</p> <p>I think the first error is that the library mkl_rt is missing but I'm not sure how to add.</p> <p>Complete log:</p> <pre><code>Defaulting to user installation because normal site-packages is not writeable Collecting Flask==1.1.1 Using cached Flask-1.1.1-py2.py3-none-any.whl (94 kB) Collecting imutils==0.5.3 Using cached imutils-0.5.3.tar.gz (17 kB) Preparing metadata (setup.py) ... done Collecting Keras==2.4.0 Using cached Keras-2.4.0-py2.py3-none-any.whl (170 kB) Collecting opencv-python==4.4.0.46 Using cached opencv-python-4.4.0.46.tar.gz (88.9 MB) Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─&gt; [4837 lines of output] Ignoring numpy: markers 'python_version == &quot;3.6&quot;' don't match your environment Ignoring numpy: markers 'python_version == &quot;3.7&quot;' don't match your environment Ignoring numpy: markers 'python_version &gt;= &quot;3.9&quot;' don't match your environment Collecting setuptools Using cached setuptools-63.4.2-py3-none-any.whl (1.2 MB) Collecting wheel Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB) Collecting scikit-build Using cached scikit_build-0.15.0-py2.py3-none-any.whl (77 kB) Collecting cmake Using cached cmake-3.24.0-py2.py3-none-macosx_10_10_universal2.macosx_10_10_x86_64.macosx_11_0_arm64.macosx_11_0_universal2.whl (77.9 MB) Collecting pip Using cached pip-22.2.2-py3-none-any.whl (2.0 MB) Collecting numpy==1.17.3 Using cached numpy-1.17.3.zip (6.4 MB) Preparing metadata (setup.py): started Preparing metadata (setup.py): finished with status 'done' Collecting distro Using cached distro-1.7.0-py3-none-any.whl (20 kB) Collecting packaging Using cached packaging-21.3-py3-none-any.whl (40 kB) Collecting pyparsing!=3.0.5,&gt;=2.0.2 Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB) Building wheels for collected packages: numpy Building wheel for numpy (setup.py): started Building wheel for numpy (setup.py): finished with status 'error' error: subprocess-exited-with-error × python setup.py bdist_wheel did not run successfully. │ exit code: 1 ╰─&gt; [4428 lines of output] Running from numpy source directory. blas_opt_info: blas_mkl_info: customize UnixCCompiler libraries mkl_rt not found in ['/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib', '/usr/lib'] NOT AVAILABLE .............. </code></pre> <p>For now I'm going to try to install the requirements with out the file.</p>
<p>This error occurs when there is a mismatch between the versions of Python and/or pip and the versions of the packages you're trying to install.</p> <p>If you know the version of Python used to generate the <code>requirements.txt</code> file that you have, please make sure to use the same version. Otherwise, try upgrading the version of Python.</p>
tensorflow|opencv|flask|keras|imutils
0
1,905,522
49,797,837
Find date name in datetime column with user input as weekday name as a 'String'
<p>I have a datetime column called 'Start Time'</p> <p><a href="https://i.stack.imgur.com/cNReS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cNReS.png" alt="enter image description here"></a></p> <p>I am trying to find entries with a specific weekday name based on user input. </p> <p>The user input is a String and can be Sunday, Monday,...Saturday. So if Sunday is input I want to find all entries that have Sunday as the day, regardless of the month or year. </p> <p>Here is my code:</p> <pre><code>user_day = input('Input the name of the day.user_day.') print(df[df['Start Time'].dt.weekday == dt.datetime.strptime(user_day, '%A')]) </code></pre> <p>The output is: Empty DataFrame Columns: [Unnamed: 0, Start Time, End Time, Trip Duration, Start Station, End Station, User Type] Index: []</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.weekday_name.html" rel="nofollow noreferrer"><code>weekday_name</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="nofollow noreferrer"><code>strftime</code></a> with same format:</p> <pre><code>print(df[df['Start Time'].dt.weekday_name == user_day]) </code></pre> <p>Or:</p> <pre><code>print(df[df['Start Time'].dt.strftime('%A') == user_day]) </code></pre> <p><strong>Verify</strong>:</p> <pre><code>df = pd.DataFrame({'Start Time':pd.date_range('2015-01-01 15:02:45', periods=10)}) print (df) Start Time 0 2015-01-01 15:02:45 1 2015-01-02 15:02:45 2 2015-01-03 15:02:45 3 2015-01-04 15:02:45 4 2015-01-05 15:02:45 5 2015-01-06 15:02:45 6 2015-01-07 15:02:45 7 2015-01-08 15:02:45 8 2015-01-09 15:02:45 9 2015-01-10 15:02:45 user_day = 'Monday' print(df[df['Start Time'].dt.weekday_name == user_day]) Start Time 4 2015-01-05 15:02:45 </code></pre> <hr> <pre><code>print (df['Start Time'].dt.weekday_name) 0 Thursday 1 Friday 2 Saturday 3 Sunday 4 Monday 5 Tuesday 6 Wednesday 7 Thursday 8 Friday 9 Saturday Name: Start Time, dtype: object print (df['Start Time'].dt.strftime('%A')) 0 Thursday 1 Friday 2 Saturday 3 Sunday 4 Monday 5 Tuesday 6 Wednesday 7 Thursday 8 Friday 9 Saturday Name: Start Time, dtype: object </code></pre>
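<p>One version caveat worth hedging on: <code>Series.dt.weekday_name</code> was deprecated in pandas 0.23 and later removed in favour of <code>Series.dt.day_name()</code>, so on newer pandas use:</p> <pre><code>print(df[df['Start Time'].dt.day_name() == user_day])
</code></pre>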
python|pandas
2
1,905,523
49,968,591
Possible to Use MXNet gluon.Trainer without a Neural Network?
<p>I am trying to use the graph structure of MXNet to speed up some calculations, and I am currently trying to mimic behavior that I have already implemented in PyTorch. However, I am confused on how to properly do this, be it with <a href="https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Trainer" rel="nofollow noreferrer"><code>gluon.Trainer</code></a> or some other method.</p> <p>To explain with an example, what I have working in PyTorch is the following (slightly modified to try to give the simplest example), and I want to translate this to MXNet.</p> <pre><code>import torch.optim def unconstrained_fit(objective, data, pdf, init_pars, tolerance): init_pars.requires_grad = True optimizer = torch.optim.Adam([init_pars]) max_delta = None n_epochs = 10000 for _ in range(n_epochs): loss = objective(init_pars, data, pdf) optimizer.zero_grad() loss.backward() init_old = init_pars.data.clone() optimizer.step() max_delta = (init_pars.data - init_old).abs().max() if max_delta &lt; tolerance: break return init_pars </code></pre> <p>As <a href="https://github.com/zackchase/mxnet-the-straight-dope/blob/master/cheatsheets/pytorch_gluon.md#pytorch-optimizer-vs-gluon-trainer" rel="nofollow noreferrer">The Straight Dope points out in the PyTorch to MXNet cheatsheet</a>, in MXNet one would usually be able to use a Trainer where one would use an <a href="http://pytorch.org/docs/master/optim.html" rel="nofollow noreferrer">optimizer in PyTorch</a>. However, I don't understand how to properly initialize the Trainer in my case, as where one would usually do something along the lines of</p> <pre><code>trainer = gluon.Trainer(net.collect_params(), 'adam') </code></pre> <p>I assume that I will need to collect the parameters myself as I don't have a neural network that I want to use, but rather <code>objective</code> that I want to minimize. I am confused on how to do this properly, as the below is obviously not correct.</p> <pre><code>import mxnet as mx from mxnet import gluon, autograd def unconstrained_fit(self, objective, data, pdf, init_pars, tolerance): ctx = mx.cpu() # How to properly do this chunck? init_pars = mx.gluon.Parameter('init_pars', shape=init_pars.shape, init=init_pars.asnumpy().all) init_pars.initialize(ctx=ctx) optimizer = mx.optimizer.Adam() trainer = gluon.Trainer([init_pars], optimizer) ### max_delta = None n_epochs = 10000 for _ in range(n_epochs): with autograd.record(): loss = objective(init_pars, data, pdf) loss.backward() init_old = init_pars.data.clone() trainer.step(data.shape[0]) max_delta = (init_pars.data - init_old).abs().max() if max_delta &lt; tolerance: break return init_pars </code></pre> <p>I am clearly misunderstanding something basic, so if anyone can point me to something clarifying that would be helpful. Even more helpful would be if someone understands what I am asking and is able to summarize why what I am doing is wrong.</p>
<p>The Trainer in gluon simply updates a set of parameters according to an optimizer. You need to pass it the parameters that you want to optimize in your objective function. A couple of points already:</p> <ul> <li>The <code>Trainer</code> takes a <code>ParameterDict</code>, not a <code>Parameter[]</code>.</li> <li>You need to use <code>.data()</code> to get the data of a Parameter, rather than <code>.data</code>.</li> </ul> <p>If you post your objective function and error logs, I might be able to help you further.</p> <p>Also, using a <code>Trainer</code> is not compulsory. Have a look at this tutorial: <a href="https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter02_supervised-learning/linear-regression-scratch.ipynb" rel="nofollow noreferrer">https://github.com/zackchase/mxnet-the-straight-dope/blob/master/chapter02_supervised-learning/linear-regression-scratch.ipynb</a> It performs linear regression optimization from scratch, using only <code>NDArray</code> and <code>autograd</code>.</p> <p>One key point is to attach a gradient to your parameters, to allocate memory so that the gradient can be stored when using <code>autograd.record()</code> (here two params, <code>w</code> and <code>b</code>):</p> <pre><code>w = nd.random_normal(shape=(num_inputs, num_outputs), ctx=model_ctx) b = nd.random_normal(shape=num_outputs, ctx=model_ctx) params = [w, b] for param in params: param.attach_grad() </code></pre> <p>Then after calling <code>loss.backward()</code> you can access the gradient of each parameter and update them using the SGD formula like this:</p> <pre><code>for param in params: param[:] = param - lr * param.grad </code></pre>
python|mxnet
1
1,905,524
66,477,848
filtering pos-tag results based on tag patterns and other tags
<p>Original sentences:</p> <p>key_list= ['techniques from nonlinear analysis and partial differential equations form the basis for these studies .', 'differential equations are cool.', 'it is not too great of an equation']</p> <pre><code> Spacy tagging: [[['techniques', 'NNS'], ['from', 'IN'], ['nonlinear', 'JJ'], ['analysis', 'NN'], ['and', 'CC'], ['partial', 'JJ'], ['differential', 'JJ'], ['equations', 'NNS'], ['form', 'VBP'], ['the', 'DT'], ['basis', 'NN'], ['for', 'IN'], ['these', 'DT'], ['studies', 'NNS'], ['.', '.']], [['differential', 'JJ'], ['equations', 'NNS'], ['are', 'VBP'], ['cool', 'JJ'], ['.', '.']], [['it', 'PRP'], ['is', 'VBZ'], ['not', 'RB'], ['too', 'RB'], ['great', 'JJ'], ['of', 'IN'], ['an', 'DT'], ['equation', 'NN']]] </code></pre> <p>I am using WordNet to make things easier, but is there a way I can get all the nouns of a sentence as well as tag patterns like [RB,RB,JJ] &amp; [JJ,NN]?</p> <pre><code>Required output: [['techniques' ,'nonlinear analysis', 'differential equations', 'basis','studies'],['differential equations'],['not too great','equation']] </code></pre>
<p>You need something like this if I understand your question correctly</p> <pre><code>import spacy from spacy.matcher import Matcher nlp = spacy.load('en_core_web_sm') matcher = Matcher(nlp.vocab) text= &quot;&quot;&quot;techniques from nonlinear analysis and partial differential equations form the basis for these studies. Differential equations are cool. It is not too great of an equation&quot;&quot;&quot; doc = nlp(text) pattern1 = [{&quot;TAG&quot;: {&quot;IN&quot;: [&quot;NN&quot;, &quot;NNS&quot;]}}] pattern2 = [{&quot;TAG&quot;: &quot;RB&quot;},{&quot;TAG&quot;: &quot;RB&quot;}, {&quot;TAG&quot;: &quot;JJ&quot;}] matcher.add(&quot;matcher&quot;, [pattern1, pattern2]) for sent in doc.sents: matches = matcher(sent) for match_id, start, end in matches: print(sent[start:end]) </code></pre>
python|spacy
1
1,905,525
66,364,139
How to preprocess tensorflow imdb_review dataset
<p>I am using the <a href="https://www.tensorflow.org/datasets/catalog/imdb_reviews" rel="nofollow noreferrer">tensorflow imdb_reviews dataset</a>, and I want to preprocess it using <strong>Tokenizer</strong> and <strong>pad_sequences</strong>.</p> <p>When I use the <strong>Tokenizer</strong> instance with the following code:</p> <pre class="lang-py prettyprint-override"><code>tokenizer=Tokenizer(num_words=100) tokenizer.fit_on_texts(df['text']) word_index = tokenizer.word_index sequences=tokenizer.texts_to_sequences(df['text']) print(word_index) print(sequences) </code></pre> <p>I am getting the error <em><strong>TypeError: a bytes-like object is required, not 'dict'</strong></em></p> <p><strong>What I've tried</strong></p> <p>Store the dataset as a dataframe, then iterate over the text column, store it in a list, and then tokenize it.</p> <pre class="lang-py prettyprint-override"><code>df = tfds.as_dataframe(ds.take(4), info) # list to store corpus corpus = [] for sentences in df['text'].iteritems(): corpus.append(sentences) tokenizer=Tokenizer(num_words=100) tokenizer.fit_on_texts(corpus) word_index=tokenizer.word_index print(word_index) </code></pre> <p>But I'm getting the error <em><strong>AttributeError: 'tuple' object has no attribute 'lower'</strong></em></p> <p>How can I use the 'text' column and preprocess it to feed it to my neural network?</p>
<p>All you need is to convert the <code>['text']</code> column into <code>numpy</code> first, followed by the necessary tokenization and padding. Below is the full working code. Enjoy.</p> <p><strong>DataSet</strong></p> <pre><code>import numpy as np import tensorflow as tf import tensorflow_datasets as tfds from tensorflow.keras.preprocessing.text import Tokenizer from tensorflow.keras.preprocessing.sequence import pad_sequences # get the data first imdb = tfds.load('imdb_reviews', as_supervised=True) </code></pre> <p><strong>Data Prepare</strong></p> <pre><code># we will only take train_data (for demonstration purpose) # do the same for test_data in your case train_data, test_data = imdb['train'], imdb['test'] training_sentences = [] training_labels = [] for sentence, label in train_data: training_sentences.append(str(sentence.numpy())) training_labels.append(str(label.numpy())) training_labels_final = np.array(training_labels).astype(float) print(training_sentences[0]) # first samples print(training_labels_final[0]) # first label # b&quot;This was an absolutely terrible movie. ....&quot; # 0.0 </code></pre> <p><strong>Preprocess - Tokenizer + Padding</strong></p> <pre><code>vocab_size = 2000 # The maximum number of words to keep, based on word frequency. embed_size = 30 # Dimension of the dense embedding. max_len = 100 # Length of input sequences, when it is constant. # https://keras.io/api/preprocessing/text/ tokenizer = Tokenizer(num_words=vocab_size, filters='!&quot;#$%&amp;()*+,-./:;&lt;=&gt;?@[\\]^_`{|}~\t\n', lower=True, split=&quot; &quot;, oov_token=&quot;&lt;OOV&gt;&quot;) tokenizer.fit_on_texts(training_sentences) print(tokenizer.word_index) # {'&lt;OOV&gt;': 1, 'the': 2, 'and': 3, 'a': 4, 'of': 5, 'to': 6, 'is': 7, ... # tokenized and padding training_sequences = tokenizer.texts_to_sequences(training_sentences) training_padded = pad_sequences(training_sequences, maxlen=max_len, truncating='post') print(training_sentences[0]) print() print(training_padded[0]) # b&quot;This was an absolutely terrible movie. ....&quot; # [ 59 12 14 35 439 400 18 174 29 1 9 33 1378 1 42 496 1 197 25 88 156 19 12 211 340 29 70 248 213 9 486 62 70 88 116 99 24 1 12 1 657 777 12 18 7 35 406 1 178 1 426 2 92 1253 140 72 149 55 2 1 1 72 229 70 1 16 1 1 1 1 1506 1 3 40 1 119 1608 17 1 14 163 19 4 1253 927 1 9 4 18 13 14 1 5 102 148 1237 11 240 692 13] </code></pre> <p><strong>Model</strong></p> <p>Sample Model.</p> <pre><code># Input for variable-length sequences of integers inputs = tf.keras.Input(shape=(None,), dtype=&quot;int32&quot;) # Embed each integer x = tf.keras.layers.Embedding(input_dim = vocab_size, output_dim = embed_size, input_length=max_len)(inputs) # Add 2 bidirectional LSTMs x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(x) x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(x) # Add a classifier outputs = tf.keras.layers.Dense(1, activation=&quot;sigmoid&quot;)(x) model = tf.keras.Model(inputs, outputs) # Compile and Run model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(training_padded, training_labels_final, epochs=10, verbose=1) Epoch 1/10 782/782 [==============================] - 25s 18ms/step - loss: 0.5548 - accuracy: 0.6915 Epoch 2/10 782/782 [==============================] - 14s 18ms/step - loss: 0.3921 - accuracy: 0.8248 ... 
782/782 [==============================] - 14s 18ms/step - loss: 0.2171 - accuracy: 0.9121 Epoch 9/10 782/782 [==============================] - 14s 17ms/step - loss: 0.1807 - accuracy: 0.9275 Epoch 10/10 782/782 [==============================] - 14s 18ms/step - loss: 0.1486 - accuracy: 0.9428 </code></pre>
python|pandas|tensorflow
1
1,905,526
64,808,626
Best way to handle socket connections with many MySQL updates
<p>JS is single-threaded, and I have vehicles connected with Socket.IO. These vehicles will send their location every second or less, and I update MySQL with it (and some other things too). What is the better approach for doing these heavy-load things? Node.js is single-threaded and I have many things in the same Node.js process, so I think that could be bad? Is Python better? Maybe having 2 servers, 1 Node and 1 Python? I have no idea what to do or which is better. If anything I didn't mention is better than these, please mention it, like Java or something. Any ideas?</p>
<p>First of all, you have to know that you are the developer and you have to choose what fits your needs.</p> <p>Anyway, let me give my opinion on this.</p> <p>If you want to create a high-load realtime server, you have to learn about <strong>multithreading</strong> and <strong>non-blocking sockets</strong>.</p> <p><strong>Why multithreading?</strong> To handle jobs in their own threads. OK, it seems easy, but it's not! You have to know how to share and manage threads. Creating too many threads causes mistakes and overloads the server, so you need to manage them correctly. For example, if you have many database updates, it is better to put these jobs in a queue and execute them one after another rather than create a new thread for each one. Of course, many of these things are handled and implemented in the low-level layers of libraries.</p> <p><strong>Why non-blocking sockets?</strong> To make server communication async. This is related to the first section (multithreading). If you use blocking sockets, you will need to open a new thread per socket to read messages (and maybe you will block other threads to send packets...), so if you have 1000 concurrent clients you need 1000 threads! That is very bad. On the other hand, with non-blocking sockets you can handle many connections in just one thread (<a href="https://stackoverflow.com/a/60749097/5004157">related question</a>).</p> <p><strong>Which programming language?</strong> In realtime servers, performance is a major concern. Interpreted languages can cause performance issues, and I prefer to use a compiled programming language like Java, C, Go and so on. For a better experience with multithreading you can use <strong>Go</strong> and <strong>Erlang</strong>, which have their own lightweight threads and let you handle async operations like sync ones. We can say these languages care about how threads are created, shared and so on.</p> <p>Keep in mind that many tools exist to handle this type of job. Even if you want to develop it by hand, it's not a bad idea to work with one of these tools to inform your design.</p> <p>If you work with Java:</p> <p><strong>Netty</strong> (transport lib),</p> <p><strong>Hazelcast</strong> (cluster (failure tolerance - horizontal scale and ...)),</p> <p><strong>Kafka</strong> (message broker),</p> <p><strong>Cassandra</strong> (db),</p> <p>and ...</p> <p>Last word: just do it.</p>
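<p>Since Python was one of the options the asker mentioned, here is a minimal sketch of the non-blocking idea in Python's <code>asyncio</code> — the handler name, the port and the queueing strategy are illustrative assumptions, not a finished implementation:</p> <pre><code>import asyncio

async def handle_vehicle(reader, writer):
    # each vehicle connection is a cheap coroutine task, not an OS thread
    while True:
        line = await reader.readline()  # non-blocking read
        if not line:
            break  # vehicle disconnected
        # parse the location update here and queue the MySQL write,
        # e.g. push it onto an asyncio.Queue drained by one writer task
    writer.close()

async def main():
    server = await asyncio.start_server(handle_vehicle, '0.0.0.0', 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
</code></pre> <p>One event loop serves thousands of sockets here, and funnelling the database writes through a single queue avoids opening a connection per update.</p>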
node.js|python-3.x
1
1,905,527
64,690,096
Find percent diff and diff with consecutive but odd number of dates
<p>I have a dataset, df, where I wish to find the percent diff and diff. I wish to look at the earliest date and compare this value to the next date:</p> <pre><code> id date value 1 11/01/2020 10 2 11/01/2020 5 1 10/01/2020 20 2 10/01/2020 30 1 09/01/2020 15 2 09/01/2020 10 3 11/01/2020 5 </code></pre> <p><em><strong>Desired output</strong></em></p> <pre><code> id date diff percent 1 10/01/2020 5 33 1 11/01/2020 -10 -50 2 10/01/2020 20 200 2 11/01/2020 -25 -83.33 3 11/01/2020 0 0 </code></pre> <p>I am wanting to look at one group at a time and compare the previous value to the next value and find the percent increase and diff.</p> <p><em><strong>For example</strong></em>,</p> <p><em><strong>ID 1, from 09/01/2020 to 10/01/2020</strong></em> : goes from <em><strong>15 to 20</strong></em>, giving a difference of <em><strong>5</strong></em> <em><strong>percent difference is 33%</strong></em></p> <p><em><strong>from 10/01/2020 to 11/01/2020:</strong></em> goes from <em><strong>20 to 10,</strong></em> <em><strong>difference of <em><strong>-10</strong></em> and a <em><strong>50% percent difference.</strong></em></strong></em></p> <p>This what I am doing:</p> <pre><code>a['date'] = pd.to_datetime(a['date']) grouped = a.sort_values('date').groupby(['id']) output = pd.DataFrame({ 'date': grouped['date'].agg(lambda x: x.iloc[-1]).values, 'diff': grouped['value'].agg(lambda x: x.diff().fillna(0).iloc[-1]).values, 'percentdiff': grouped['value'].agg(lambda x: x.pct_change().fillna(0).iloc[-1] * 100).values, 'type': grouped['id'].agg(lambda x: x.iloc[0]).values }) </code></pre> <p>However, I notice that some values are missing, as this is my output:</p> <p><a href="https://i.stack.imgur.com/sJ2qF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sJ2qF.png" alt="enter image description here" /></a></p> <p><em>Is it possible to achieve my desired output?</em> Perhaps a loop would have to be implemented to refer back to the previous date row and compare to the next?</p> <p>Any suggestion is appreciated</p>
<p>Here is one way around it, assuming I get your logic right :</p> <p>The idea is to use <code>shift</code> for each group to calculate the difference and percentage,</p> <pre><code>result = (df.sort_values([&quot;id&quot;, &quot;date&quot;, &quot;value&quot;]) # use this later to drop the first row per group # if number is greater than 1, else leave as-is .assign(counter=lambda x: x.groupby(&quot;id&quot;).date.transform(&quot;size&quot;), date_shift=lambda x: x.groupby([&quot;id&quot;]).date.shift(1), value_shift=lambda x: x.groupby(&quot;id&quot;).value.shift(1), diff=lambda x: x.value - x.value_shift, percent=lambda x: x[&quot;diff&quot;].div(x.value_shift).mul(100).round(2)) # here is where the counter column becomes useful # drop rows where date_shift is null and counter is &gt; 1 # this way if number of rows in the group is just one it is kept, # if greater than one, the first row is dropped, # as the first row would have nulls due to the `shift` method. .query(&quot;not (date_shift.isna() and counter&gt;1)&quot;) .loc[:, [&quot;id&quot;, &quot;date&quot;, &quot;diff&quot;, &quot;percent&quot;]] .fillna(0)) result id date diff percent 2 1 10/01/2020 5.0 33.33 0 1 11/01/2020 -10.0 -50.00 3 2 10/01/2020 20.0 200.00 1 2 11/01/2020 -25.0 -83.33 6 3 11/01/2020 0.0 0.00 </code></pre>
python|pandas|numpy|percentage
1
1,905,528
52,973,758
Is there any way to convert pretrained model from PyTorch to ONNX?
<p>I trained StarGAN model on my custom dataset. And I need to convert this model from .pth(Pytorch) to .pb for using on Android studio. I searched a lot and I found some ways for conversion. However all solutions don't work on my case.</p> <p>I tried on small network that includes only one nn.Linear layer. On this network, solutions work very well!</p> <p>I think, my network includes Conv2D layer and MaxPooling2D layer so conversion processing doesn't work.</p> <p>First, this is my network(StarGAN).</p> <pre><code>import torch import torch.nn as nn import numpy as np class ResidualBlock(nn.Module): def __init__(self, dim_in, dim_out): super(ResidualBlock, self).__init__() self.main = nn.Sequential( nn.Conv2d(dim_in, dim_out, kernel_size=3, stride=1, padding=1, bias=False), nn.InstanceNorm2d(dim_out, affine=True, track_running_stats=True), nn.ReLU(inplace=True), nn.Conv2d(dim_out, dim_out, kernel_size=3, stride=1, padding=1, bias=False), nn.InstanceNorm2d(dim_out, affine=True, track_running_stats=True)) def forward(self, x): return x + self.main(x) class Generator(nn.Module): def __init__(self, conv_dim=64, c_dim=5, repeat_num=6): super(Generator, self).__init__() layers = [] layers.append(nn.Conv2d(3 + c_dim, conv_dim, kernel_size=7, stride=1, padding=3, bias=False)) layers.append(nn.InstanceNorm2d(conv_dim, affine=True, track_running_stats=True)) layers.append(nn.ReLU(inplace=True)) curr_dim = conv_dim for _ in range(2): layers.append(nn.Conv2d(curr_dim, curr_dim * 2, kernel_size=4, stride=2, padding=1, bias=False)) layers.append(nn.InstanceNorm2d(curr_dim * 2, affine=True, track_running_stats=True)) layers.append(nn.ReLU(inplace=True)) curr_dim = curr_dim * 2 for _ in range(repeat_num): layers.append(ResidualBlock(dim_in=curr_dim, dim_out=curr_dim)) for _ in range(2): layers.append(nn.ConvTranspose2d(curr_dim, curr_dim // 2, kernel_size=4, stride=2, padding=1, bias=False)) layers.append(nn.InstanceNorm2d(curr_dim // 2, affine=True, track_running_stats=True)) layers.append(nn.ReLU(inplace=True)) curr_dim = curr_dim // 2 layers.append(nn.Conv2d(curr_dim, 3, kernel_size=7, stride=1, padding=3, bias=False)) layers.append(nn.Tanh()) self.main = nn.Sequential(*layers) def forward(self, x, c): c = c.view(c.size(0), c.size(1), 1, 1) c = c.repeat(1, 1, x.size(2), x.size(3)) x = torch.cat([x, c], dim=1) return self.main(x) class Discriminator(nn.Module): def __init__(self, image_size=128, conv_dim=64, c_dim=5, repeat_num=6): super(Discriminator, self).__init__() layers = [] layers.append(nn.Conv2d(3, conv_dim, kernel_size=4, stride=2, padding=1)) layers.append(nn.LeakyReLU(0.01)) curr_dim = conv_dim for _ in range(1, repeat_num): layers.append(nn.Conv2d(curr_dim, curr_dim * 2, kernel_size=4, stride=2, padding=1)) layers.append(nn.LeakyReLU(0.01)) curr_dim = curr_dim * 2 kernel_size = int(image_size / np.power(2, repeat_num)) self.main = nn.Sequential(*layers) self.conv1 = nn.Conv2d(curr_dim, 1, kernel_size=3, stride=1, padding=1, bias=False) self.conv2 = nn.Conv2d(curr_dim, c_dim, kernel_size=kernel_size, bias=False) def forward(self, x): h = self.main(x) out_src = self.conv1(h) out_cls = self.conv2(h) return out_src, out_cls.view(out_cls.size(0), out_cls.size(1)) </code></pre> <p>And this is the error message.</p> <pre><code>TypeError: object of type 'torch._C.Value' has no len() (occurred when translating repeat) </code></pre> <p>Is there any way for conversion? Help me.</p>
<p>I have the same issue when trying to produce a graph of my model using TensorboardX.</p> <p>I believe the error comes from which operators <code>torch.onnx</code> currently supports. You can check this link:<br> <a href="https://pytorch.org/docs/stable/onnx.html" rel="nofollow noreferrer">https://pytorch.org/docs/stable/onnx.html</a><br> Under the section <strong>Supported operators</strong>, you will see that <code>repeat</code> is not listed.</p> <p>To answer your question, it seems that you currently cannot convert a model using <code>repeat</code> with <code>torch.onnx</code>.</p>
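<p>One possible workaround — offered only as a hedged sketch, since whether it exports cleanly depends on your PyTorch and opset versions — is to rewrite the <code>repeat</code> in the generator's <code>forward</code> with <code>expand</code>, which is equivalent here because the repeated dimensions are size 1:</p> <pre><code>def forward(self, x, c):
    c = c.view(c.size(0), c.size(1), 1, 1)
    # expand broadcasts the singleton H/W dims instead of copying them,
    # which may map to an exportable op where repeat does not
    c = c.expand(-1, -1, x.size(2), x.size(3))
    x = torch.cat([x, c], dim=1)
    return self.main(x)
</code></pre> <p>Check the supported-operators list linked above for your version before relying on this.</p>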
python|deep-learning|pytorch|caffe2|onnx
0
1,905,529
52,933,369
Listen to websockets running Raspbian from Windows
<p>I've created a websocket server using Python's <code>asyncio</code> and <code>websockets</code> modules. This server works properly on the same machine. This is the actual code for the server:</p> <pre><code>import sys import os import asyncio import websockets @asyncio.coroutine def receive(websocket, path): data = yield from websocket.recv() print('&lt; {}'.format(data)) output = 'Sent data from server: {}'.format(data) yield from websocket.send(output) print('&gt; {}'.format(output)) start_server = websockets.serve(receive, '127.0.0.1', 8765) asyncio.get_event_loop().run_until_complete(start_server) asyncio.get_event_loop().run_forever() </code></pre> <p>It runs properly, and the connection from a client residing on the same machine connects to it without any problem.</p> <p>But when I try to access it from a client on a LAN network, it generates a <code>ConnectionRefusedError</code>. This is the client code:</p> <pre><code>import asyncio import websockets @asyncio.coroutine def hello(): websocket = yield from websockets.connect( 'ws://192.168.0.26:8765') try: name = input("What's your name? ") yield from websocket.send(name) print("&gt; {}".format(name)) greeting = yield from websocket.recv() print("&lt; {}".format(greeting)) finally: yield from websocket.close() asyncio.get_event_loop().run_until_complete(hello()) </code></pre> <p>I've installed <code>ufw</code> on Raspbian to enable the port 8765 with this command:</p> <pre><code>ufw allow 8765 </code></pre> <p>But it doesn't work. On the Windows machine, the command</p> <pre><code>nmap -p 8765 192.168.0.26 </code></pre> <p>generates this result:</p> <pre><code>PORT STATE SERVICE 8765/tcp closed ultraseek-http </code></pre> <p>And the command</p> <pre><code>ufw status </code></pre> <p>Could someone give some suggestions to solve this communication problem between the client and the server?</p>
<p>Here is one problem:</p> <pre><code>start_server = websockets.serve(receive, '127.0.0.1', 8765) </code></pre> <p>You have told websockets to <a href="https://websockets.readthedocs.io/en/stable/api.html#websockets.server.serve" rel="nofollow noreferrer">listen</a> only on 127.0.0.1, thus you can only receive connections originating from the local host, and only on legacy IPv4. Both localhost IPv6 connections (the default) and all connections from other computers will receive the connection refused error.</p> <p>If you want to receive connections from outside the local machine, you should set the <code>Host</code> to <code>None</code> or the empty string. This will accept connections from anywhere, on both IPv6 and IPv4, subject of course to any firewall rules.</p> <pre><code>start_server = websockets.serve(receive, None, 8765) </code></pre> <p>The host and port are passed directly to <a href="https://docs.python.org/3/library/asyncio-eventloop.html#creating-network-servers" rel="nofollow noreferrer"><code>asyncio.create_server()</code></a> which documents Host as:</p> <blockquote> <ul> <li>If host is a string, the TCP server is bound to a single network interface specified by host.</li> <li>If host is a sequence of strings, the TCP server is bound to all network interfaces specified by the sequence.</li> <li>If host is an empty string or None, all interfaces are assumed and a list of multiple sockets will be returned (most likely one for IPv4 and another one for IPv6).</li> </ul> </blockquote>
python|websocket|raspbian|python-asyncio
1
1,905,530
65,353,181
Extract day/year from dataframe string column and sum it [Python]
<p>I have a column in a dataframe called &quot;time&quot; that has a string format. I would like to extract the year and day digits from the string in each cell of that column and create a new column where the year digit is multiplied by 365 and, if a day digit is present, it is added, as per the calculation below. Any suggestion on how to solve this?</p> <p>Many thanks in advance.</p> <p><a href="https://i.stack.imgur.com/5Ipsp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Ipsp.png" alt="enter image description here" /></a></p>
<p>This is not the most efficient or robust solution. Here is a function that can take in one of your strings from the <code>time</code> column and return the <code>output</code> value</p> <pre><code>def foo(s): result = 0 l = s.split() for i, word in enumerate(l): if not word.isdigit(): continue # word is number if l[i+1] == 'year': # unit is years result += int(word) * 365 else: # unit is days result += int(word) return result print(foo('5 day')) # 5 print(foo('2 year')) # 730 print(foo('3 year 10 day')) # 1105 </code></pre> <p>Or if you prefer a one-liner</p> <pre><code>def foo(s): return sum(int(word) * (365 if s.split()[i+1] == 'year' else 1) for i, word in enumerate(s.split()) if word.isdigit()) </code></pre>
python
1
1,905,531
65,443,602
What is the equivalent syntax for `Tensor.grad` in Tensorflow 2.0
<p>In PyTorch, we can access the gradient of a variable <code>z</code> by</p> <pre><code>z.grad </code></pre> <p>What is the equivalent syntax in <code>Tensorflow 2</code>? My goal is to clip the gradient. Here is the Pytorch code:</p> <pre><code>if z.grad &gt; 1000: z.grad = 10 </code></pre> <p>Can TensorFlow 2 do the same?</p> <p>Thanks</p>
<p>So in TF2, assume we define the following variables and optimizer:</p> <pre><code>import tensorflow as tf from tensorflow import keras opt = tf.keras.optimizers.Adam(learning_rate=0.1) x = tf.Variable([3.0, 4.0]) y = tf.Variable([1.0, 1.0, 1.0, 1.0]) var_list = [x, y] </code></pre> <p>Then we can get gradients by using <code>tf.GradientTape()</code>:</p> <pre><code>with tf.GradientTape() as tape: loss = tf.reduce_sum(x ** 2) + tf.reduce_sum(y) grads = tape.gradient(loss, var_list) </code></pre> <p>Finally we can process the gradients with a custom function:</p> <pre><code>def clip_grad(grad): if grad &gt; 1000: grad = 10 return grad processed_grads = [tf.map_fn(clip_grad, g) for g in grads] opt.apply_gradients(zip(processed_grads, var_list)) </code></pre> <p>Note that you may find the Keras optimizers have a <code>get_gradients</code> method, but it won't work with <code>eager execution</code> enabled, which is the default in TF2. If you want to use it, you may have to write your code in the TF1 fashion.</p>
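<p>As an aside — a sketch that is not part of the original answer — the same per-element rule can be expressed without <code>tf.map_fn</code> using vectorized ops, which is usually faster:</p> <pre><code># same rule as clip_grad, applied elementwise in one op
processed_grads = [tf.where(g &gt; 1000.0, tf.ones_like(g) * 10.0, g) for g in grads]

# or, if ordinary clipping (rather than replacing with 10) is what you want:
processed_grads = [tf.clip_by_value(g, -1000.0, 1000.0) for g in grads]

opt.apply_gradients(zip(processed_grads, var_list))
</code></pre>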
python|pytorch|tensorflow2.0|gradient
1
1,905,532
68,525,076
Divide values of rows based on condition which are of running count
<p>Sample of the table for 1 id, exists multiple id in the original df.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>legend</th> <th>date</th> <th>running_count</th> </tr> </thead> <tbody> <tr> <td>101</td> <td>X</td> <td>24-07-2021</td> <td>3</td> </tr> <tr> <td>101</td> <td>Y</td> <td>24-07-2021</td> <td>5</td> </tr> <tr> <td>101</td> <td>X</td> <td>25-07-2021</td> <td>4</td> </tr> <tr> <td>101</td> <td>Y</td> <td>25-07-2021</td> <td>6</td> </tr> </tbody> </table> </div> <p>I want to create a new column where I have to perform division of the running_count on the basis of the id, legend and date - (X/Y) for the date 24-07-2021 for a particular id and so on.</p> <p>How shall I perform the calculation?</p>
<p>If there is same order <code>X, Y</code> for each <code>id</code> is possible use:</p> <pre><code>df['new'] = df['running_count'].div(df.groupby(['id','date'])['running_count'].shift(-1)) print (df) id legend date running_count new 0 101 X 24-07-2021 3 0.600000 1 101 Y 24-07-2021 5 NaN 2 101 X 25-07-2021 4 0.666667 3 101 Y 25-07-2021 6 NaN </code></pre> <p>If possible change ouput:</p> <pre><code>df1 = df.pivot(index=['id','date'], columns='legend', values='running_count') df1['new'] = df1['X'].div(df1['Y']) df1 = df1.reset_index() print (df1) legend id date X Y new 0 101 24-07-2021 3 5 0.600000 1 101 25-07-2021 4 6 0.666667 </code></pre>
python-3.x|pandas
1
1,905,533
61,755,056
In Matplotlib, trying to store mouse positions in a dictionary, but only the last item is kept
<p>I tried to use Matplotlib in Python 3.7 to get a series of coordinates into a dictionary (or list). I want to use SHIFT with the left mouse button to get current positions and store them in a temporary dictionary. Then I use CONTROL + the left mouse button to add the temporary dictionary to a permanent dictionary. The code is attached. When I select a coordinate, I have no trouble adding it to the permanent dictionary. I can even add the same coordinate multiple times. But whenever I try to get a new temporary coordinate, it wipes out all previously saved items in the permanent dictionary, except the last one. I do not want to use append on a list since I might want to modify previously stored data (not sure here). Any idea?</p> <pre><code>from matplotlib.backend_bases import MouseButton import matplotlib.pyplot as plt import numpy as np tmpDict={} PermDict={} i=0 t = np.arange(0.0, 1.0, 0.01) s = np.sin(2 * np.pi * t) fig, ax = plt.subplots() ax.plot(t, s) def on_click(event): global tmpDict, PermDict, i # get the x and y pixel coords x, y = event.x, event.y if event.key=='shift': if event.button is MouseButton.LEFT: ax = event.inaxes # the axes instance print('Data set: %d, data coords %f %f' % (i, event.xdata, event.ydata)) tmpDict={'x':event.xdata, 'y':event.ydata} print(tmpDict) print(PermDict) if event.key=='control': if event.button is MouseButton.LEFT: ax = event.inaxes # the axes instance PermDict={str(i):tmpDict} print(PermDict) i+=1 plt.connect('button_press_event', on_click) plt.show() </code></pre>
<p>I think you want to do</p> <pre><code>PermDict[str(i)] = tmpDict tmpDict = {} </code></pre> <p>instead of <code>PermDict={str(i):tmpDict}</code>. The latter builds a brand-new one-entry dictionary and rebinds <code>PermDict</code> to it, which is why everything you stored earlier disappears; assigning to a key adds the entry to the existing dictionary instead.</p>
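<p>In the context of the posted handler, the control branch would then look roughly like this (a sketch of the relevant lines only):</p> <pre><code>if event.key == 'control':
    if event.button is MouseButton.LEFT:
        PermDict[str(i)] = tmpDict  # adds an entry, keeps the earlier ones
        print(PermDict)
        i += 1
</code></pre>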
python|dictionary|matplotlib|mouseevent|store
0
1,905,534
71,186,196
Set tick labels for matplotlib Slider widgets
<p>The <a href="https://matplotlib.org/stable/api/widgets_api.html?highlight=slider#matplotlib.widgets.Slider" rel="nofollow noreferrer"><code>slider</code></a> behavior in matplotlib has changed with recent updates. With the following code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import Slider fig = plt.figure(figsize=(8, 4)) ax_main = plt.axes([0.15, 0.3, 0.7, 0.6]) ax_skal = plt.axes([0.2, 0.18, 0.65, 0.02], facecolor=&quot;lightgrey&quot;) s_skal = Slider(ax_skal, 'time scale', 0.5, 2, valinit=1, valfmt='%0.1f') #ax_skal.xaxis.set_visible(True) sl_xticks = np.arange(0.6, 2, 0.2) ax_skal.set_xticks(sl_xticks) plt.show() </code></pre> <p>we could generate in matplotlib 3.3.1 and earlier a <code>slider</code> object with tick labels. <a href="https://i.stack.imgur.com/gSpJw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gSpJw.png" alt="enter image description here" /></a></p> <p>However, in matplotlib 3.5.1 the tick labels have disappeared.</p> <p><a href="https://i.stack.imgur.com/gWXrS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gWXrS.png" alt="enter image description here" /></a></p> <p>The suggestion <a href="https://stackoverflow.com/a/42305217/8881141">in this thread</a> to set the <code>x-axis</code> to visible does not work, as the <code>x-axis</code> attribute <code>visible</code> is already <code>True</code>, as is the attribute <code>in_layout</code>. So, how can we set tick labels for slider objects?</p>
<p>As I spent some time on this problem, I thought I leave the answer here. Turns out the updated slider version hogs the axis space in which it is placed and removes the x- and y-axes with their spine objects from the list of artists used for rendering the layout. So, we have to add the <code>x-axis</code> object (or for vertical sliders the <code>y-axis</code> object) again to the axis after the creation of the <code>slider</code> object:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.widgets import Slider fig = plt.figure(figsize=(8, 4)) ax_main = plt.axes([0.15, 0.3, 0.7, 0.6]) ax_skal = plt.axes([0.2, 0.18, 0.65, 0.02], facecolor=&quot;lightgrey&quot;) s_skal = Slider(ax_skal, 'time scale', 0.5, 2, valinit=1, valfmt='%0.1f') ax_skal.add_artist(ax_skal.xaxis) sl_xticks = np.arange(0.6, 2, 0.2) ax_skal.set_xticks(sl_xticks) plt.show() </code></pre> <p>Sample output: <a href="https://i.stack.imgur.com/bYKFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bYKFY.png" alt="enter image description here" /></a></p>
python|matplotlib|interactive|matplotlib-widget
2
1,905,535
56,732,059
Feature scaling converts different values in columns onto the same scale
<p>Scaling puts different columns with different values on the same scale, for example with StandardScaler, but when building a model out of it, values that were different earlier are converted to the same values with mean=0 and std=1, so this should affect the model fit and results.</p> <p>I have taken a toy pandas dataframe with the 1st column going from 1 to 10 and the 2nd column going from 5 to 14, and scaled both using StandardScaler.</p> <pre><code>import pandas as pd ls1 = np.arange(1,10) ls2 = np.arange(5,14) before_scaling= pd.DataFrame() before_scaling['a'] = ls1 before_scaling['b'] = ls2 ''' a b 0 1 5 1 2 6 2 3 7 3 4 8 4 5 9 5 6 10 6 7 11 7 8 12 8 9 13 ''' from sklearn.preprocessing import StandardScaler,MinMaxScaler ss = StandardScaler() after_scaling = pd.DataFrame(ss.fit_transform(before_scaling),columns= ['a','b']) ''' a b 0 -1.549193 -1.549193 1 -1.161895 -1.161895 2 -0.774597 -0.774597 3 -0.387298 -0.387298 4 0.000000 0.000000 5 0.387298 0.387298 6 0.774597 0.774597 7 1.161895 1.161895 8 1.549193 1.549193 ''' </code></pre> <p>If a regression model is to be built using the above 2 independent variables, then I believe that fitting the model (linear regression) will produce different fits and results when using the before_scaling and after_scaling dataframes. If yes, then why do we use feature scaling? And if we use feature scaling on individual columns one by one, will it also produce the same results?</p>
<p>This is happening because the <code>fit_transform</code> function works as follows:</p> <p>For each feature you have ('a', 'b' in your case), it applies this equation:</p> <pre><code> X = (X - MEAN) / STD </code></pre> <p>where MEAN is the mean of the feature and STD is the standard deviation.</p> <p>The first feature <code>a</code> has a mean of '5' and std of '2.738613', while feature <code>b</code> has a mean of '9' and std of '2.738613'. So if you subtract from each value the mean of its corresponding feature, you will have two identical features, and since the std is equal in both features you will end up with identical transformations.</p> <pre><code>before_scaling['a'] = before_scaling['a'] - before_scaling['a'].mean() before_scaling['b'] = before_scaling['b'] - before_scaling['b'].mean() print(before_scaling) a b 0 -4.0 -4.0 1 -3.0 -3.0 2 -2.0 -2.0 3 -1.0 -1.0 4 0.0 0.0 5 1.0 1.0 6 2.0 2.0 7 3.0 3.0 8 4.0 4.0 </code></pre> <p>Finally, be aware that the last value in the <code>arange</code> function is not included.</p>
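<p>You can verify that the two standardized columns really do end up identical:</p> <pre><code>import numpy as np
print(np.allclose(after_scaling['a'], after_scaling['b']))  # True
</code></pre>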
python|python-3.x|pandas|feature-scaling
1
1,905,536
56,608,807
I am trying to extract some data from espn as a table and getting it as list
<p>start_urls = <a href="http://www.espncricinfo.com/series/18679/scorecard/1144998/australia-vs-india-2nd-odi-india-in-aus-2018-19" rel="nofollow noreferrer">http://www.espncricinfo.com/series/18679/scorecard/1144998/australia-vs-india-2nd-odi-india-in-aus-2018-19</a></p> <p>I scraped this site and extracted the match result (winning team), then I yielded the player URLs, and I want to print the player name and the batting style. My problems are: 1. I can't extract the player's batting style. It is under <code>&lt;p class="ciPlayerinformationtxt"&gt;&lt;b&gt;Batting style&lt;/b&gt; &lt;span&gt;Right-hand bat&lt;/span&gt;</code>. I was only able to extract the text 'Batting style'. How do I extract 'Right-hand bat'? 2. I was unable to yield the whole extracted data as a table. The result I got was like</p> <p>p link of all the players <a href="http://www.espncricinfo.com/ci/content/player/326434.html" rel="nofollow noreferrer">http://www.espncricinfo.com/ci/content/player/326434.html</a></p> <p>Player_name Country Alex Carey Australia<br> Kuldeep Yadav India<br> Mohammed Siraj India<br> Winning_Team:India</p> <pre><code>class ScoreSpider(scrapy.Spider): name = 'score' allowed_domains = ['espncricinfo.com'] def parse(self, response): Player_URLs=[] #got the result result= response.xpath('//div[@class="cscore_notes"]/span/text()').extract_first() result=result.split(" ") Winning_Team =result[0] #extracted player urls Batting_Player_URLs=response.xpath('//div[@class="cell batsmen"]/a/@href').extract() Bowling_Player_URLs=response.xpath('//*[@class="scorecard-section bowling"]/table/tbody/tr/td/a/@href').extract() #added to a list Player_URLs.extend(Batting_Player_URLs) Player_URLs.extend(Bowling_Player_URLs) for p in Player_URLs: yield Request(p,callback=self.parse_players,meta={'p':p}) yield{'Winning_Team':Winning_Team} def parse_players(self,response): Player_name=response.xpath('//div[@class="ciPlayernametxt"]/div/h1/text()').extract_first() Country=response.xpath('//div[@class="ciPlayernametxt"]/div/h3/b/text()').extract_first() #this won't give the batting style but the 'Batting style' label as text Batting_style=response.xpath('//div[@class="ciPlayerinformationtxt"]/p/text()').extract_first() yield{'Player_name':Player_name, 'Country':Country, 'Batting_style':Batting_style} </code></pre> <p>What I want is the extracted data as a single table, and I want to avoid repetition.</p> <pre><code> yield{'Winning_Team':Winning_Team, 'Player_name':Player_name, 'Country':Country, 'Batting_style':Batting_style} </code></pre> <p>Thanks in advance</p>
<p>You need to adjust your XPath: </p> <pre><code>batting_style = response.xpath('//p[@class="ciPlayerinformationtxt"]/b[.="Batting style"]/following-sibling::span[1]/text()').get() </code></pre> <p><strong>UPDATE</strong></p> <pre><code>for p in Player_URLs: yield Request(p,callback=self.parse_players,meta={'Winning_Team':Winning_Team}) </code></pre> <p>and later:</p> <pre><code>def parse_players(self,response): Winning_Team = response.meta["Winning_Team"] </code></pre>
python|web-scraping|scrapy
0
1,905,537
61,061,593
How to calculate the distance between binary messages?
<p>I have a project in which I need to find the distance between binary messages, e.g. the distance between 0001 and 1010. Interpreted as decimal values these are 1 and 10, so the distance in the decimal system would be 9 (10-1). So I was wondering: is there a formula for such a scenario?</p>
<ol> <li>One possibility is the <a href="https://en.wikipedia.org/wiki/Hamming_distance" rel="nofollow noreferrer">Hamming distance</a>. In short: the Hamming distance of two equal length strings (could be binary as in your case) is the number of positions with different characters.</li> <li>Another one is the <a href="https://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow noreferrer">Levenshtein distance</a>. In short: Levenshtein distance of two strings is the minimum number of operations (character insertions, deletions, substitutions) required to to transform one string to the other.</li> <li>The <a href="https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance" rel="nofollow noreferrer">Damerau–Levenshtein distance</a> is similar to the Levenshtein distance with one difference: the Levenshtein distance allows only the above three single character operation, and the Damerau–Levenshtein distance allows also the transposition of adjacent characters.</li> </ol>
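<p>For equal-length binary strings like the ones in the question, the Hamming distance is a one-liner in Python — a small sketch:</p> <pre><code>def hamming(a, b):
    if len(a) != len(b):
        raise ValueError('strings must have equal length')
    # count the positions where the characters differ
    return sum(x != y for x, y in zip(a, b))

print(hamming('0001', '1010'))  # 3

# equivalent bit-twiddling form: XOR the values and count the 1 bits
print(bin(int('0001', 2) ^ int('1010', 2)).count('1'))  # 3
</code></pre>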
python|machine-learning
0
1,905,538
66,052,628
How can I split the train, test, valid data from datasets and store it in pickle
<p>Currently my dataset contains 161 folders with 500 images (.img) inside each folder, 80500 in total. Is there any code I can change? I am currently stuck in the process of splitting into train/valid/test and saving it.</p> <p>The code below shows how my 161-folder dataset is loaded:</p> <pre><code> import os import numpy as np import cv2 import glob folders = glob.glob('C:/Users/Pc/Desktop/datasets/*') imagenames_list = [] for folder in folders: for f in glob.glob(folder+'/*.jpg'): imagenames_list.append(f) read_images = [] for image in imagenames_list: read_images.append(cv2.imread(image, cv2.IMREAD_GRAYSCALE)) images = np.array(read_images) </code></pre> <p>The code below shows how I split the data into 60% train / 20% test / 20% valid. Is my approach correct, and are the train/test/valid splits actually linked to my dataset? How can I store them in a pickle file?</p> <pre><code>from sklearn.model_selection import train_test_split X, y = np.random.random((80500,10)), np.random.random((80500,)) p = 0.2 new_p = (p*y.shape[0])/((1-p)*y.shape[0]) X, X_val, y, y_val = train_test_split(X, y, test_size=p) X_train, X_test, y, y_test = train_test_split(X, y, test_size=new_p) print([i.shape for i in [X_train, X_test, X_val]]) </code></pre> <p><a href="https://i.stack.imgur.com/xo5o1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xo5o1.png" alt="enter image description here" /></a><a href="https://i.stack.imgur.com/zQxxF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zQxxF.png" alt="enter image description here" /></a></p>
<p>You can store them in a pickle file like this:</p> <pre><code>import pickle dataset_dict = {&quot;X_train&quot;: X_train, &quot;X_test&quot;: X_test, &quot;X_val&quot;: X_val, &quot;y_train&quot;: y_train, &quot;y_test&quot;: y_test, &quot;y_val&quot;: y_val} with open('dataset_dict.pickle', 'wb') as file: pickle.dump(dataset_dict, file) </code></pre> <p>And load them back like this;</p> <pre><code>with open('dataset_dict.pickle', 'rb') as file: dataset_dict = pickle.load(file) </code></pre>
python|jupyter-notebook|jupyter
1
1,905,539
69,230,836
How to communicate with a python Twisted server from Java successfully?
<p>These questions may sound silly, but I am new to this networking thing. I have been trying for quite a few days now to implement a client that works with a Twisted server, but I am failing to get any response back from the server. I have read a lot of docs and watched a few tutorials, and I got some of the stuff fixed and some of the concepts better understood.</p> <p>Before I step on to asking any questions, I want to show you my code first. This is what I use to talk to the Twisted-based server:</p> <pre><code> val socketfactory: SocketFactory = SocketFactory.getDefault() val socket = socketfactory.createSocket(host, port) socket.keepAlive = true socket.tcpNoDelay = true val isSocketConnected = socket.isConnected //this checks socket's connectivity val dOut = DataOutputStream(socket.getOutputStream()) val dIn = DataInputStream(socket.getInputStream()) val teststring = &quot;Hi server!&quot; dOut.writeUTF(teststring) Log.d(&quot;MILESTONE&quot;, &quot;MESSAGE SENT AT THIS POINT, Socket is connected ?: $isSocketConnected&quot;) var testreader = &quot;&quot; while (true) { testreader = dIn.readUTF() Log.d(&quot;READING:&quot;, &quot;RECEIVED THIS: $testreader&quot;) } </code></pre> <p>My code never seems to get to the second &quot;Log&quot; line. I assume that's because I never get any input from the server. This is getting me confused, because &quot;socket.isConnected&quot; returns true. Doesn't that mean there is an ongoing connection between the client (me) and the server? But when I send any output, the server doesn't talk back.</p> <p><strong>So my questions are:</strong> 1- Am I doing something wrong? Why do I receive no talk from the server, and why does it block the code? 2- Is SocketFactory necessary? 3- Is there any library that communicates with Twisted from Java?</p> <p>Thanks in advance!</p>
<p>For everyone who's struggling to communicate with a Twisted-based Python server, I came up with a solution! After inspecting Twisted's open-source code, I came to realize it has a &quot;LineReceiver&quot; class that only responds to a message once the line is finished. In other words, you can keep sending data forever and it will never respond until you finish the line and start a new one. Twisted knows the line has finished when a delimiter is used (this is configured on the server side). Most servers running Twisted use the line delimiter &quot;\r\n&quot;.</p> <p>That's the tricky thing! Once you send that little string, it will start responding to you. Here it is in an example:</p> <pre><code> val dOut = DataOutputStream(socket.getOutputStream()) //This is my favorite way of sending data! val dIn = socket.getInputStream().bufferedReader(Charsets.UTF_8) //This is my favorite way of reading data! val teststring = &quot;hi server! \r\n&quot; //This is the tricky part! </code></pre> <p>That's it! All you have to do after that is read the lines from the bufferedReader, like this (note that <code>dIn</code> is already a BufferedReader, so it can be read directly):</p> <pre><code>var testreader: List&lt;String&gt; while (true) { testreader = dIn.readLines() for (line in testreader) Log.e(&quot;MILESTONE&quot;, line) } </code></pre> <p>After I started reading the input, I found out the server actually started sending me strings and communicating back with me. I hope everyone gets their code working on this or any other thing!</p>
java|python|kotlin|twisted|socketfactory
0
1,905,540
69,152,686
Unrecognised option moz:debuggerAddress
<p>I updated from selenium 3.141.0 to 4.0.0b4. The code below worked fine before (it opened up Google):</p> <pre><code>from selenium import webdriver from selenium.webdriver.firefox.options import Options options = Options() options.headless = True driver = webdriver.Firefox(options=options) driver.get(&quot;https://google.com.au&quot;) </code></pre> <p>But once I updated, selenium began to give me this error message:</p> <pre><code>Unrecognised option moz:debuggerAddress </code></pre> <p>I've tried to dig around on the internet for a while and I can't find any resolution. The closest thing I found was another <a href="https://github.com/webdriverio/webdriverio/issues/6087" rel="nofollow noreferrer">package</a> that had the same issue, but I'm not sure how the fix applies here. I wanted to try and &quot;disable&quot; the moz:debuggerAddress option, but I don't know how. I never had the option to add it - it seems like something else is mysteriously adding this option for me. How can I make the above code work in 4.0.0b4?</p>
<p>I don't have any insight on this, but it seems that this parameter is not present anymore, or has another name. This happened to me, so I dug into the problem and ended up with this.</p> <p>You can import the capabilities and pass them as an argument at webdriver creation. That way, you can remove the offending parameter. If you don't supply it manually, it gets created automatically and then miserably fails.</p> <p>Following your code results in this:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.desired_capabilities import DesiredCapabilities from selenium.webdriver.firefox.options import Options options = Options() capabilities = DesiredCapabilities.FIREFOX.copy() capabilities.pop(&quot;moz:debuggerAddress&quot;) options.headless = True driver = webdriver.Firefox(options=options, capabilities=capabilities) driver.get(&quot;https://google.com.au&quot;) </code></pre> <p>Remember this will work or fail depending on versions and/or configs, but at least you have a way to fix it when it doesn't want to work.</p>
python|windows|selenium|selenium-webdriver
0
1,905,541
72,698,924
Django Rest Framework: generics.RetrieveUpdateDestroyAPIView not working for the given below condition
<p>In this generic APIView, when I try to log in as a non-admin user it gives &quot;detail&quot;: &quot;You do not have permission to perform this action.&quot;, but it works fine for an admin user.</p> <p>I don't know whether the problem is in the code or in permissions.py.<br /> I've shared my views.py, permissions.py, and models.py below.<br /> If there is any mistake in my def get_queryset(self): please do let me know.</p> <p>Thank you!!</p> <pre><code>class BookingRetrtieveUpdateDestroyAPIView(generics.RetrieveUpdateDestroyAPIView): permission_classes = [IsUserOrIsAdmin] # queryset = Booking.objects.all() # serializer_class = BookingSerializer def get_queryset(self): if self.request.user.is_admin == False: user_data= self.request.user book = Booking.objects.filter(user= user_data) return book else: book = Booking.objects.all() return book serializer_class = BookingSerializer </code></pre> <pre><code>permissions.py from django.contrib.auth import get_user_model from rest_framework.permissions import BasePermission User = get_user_model() class IsUserOrIsAdmin(BasePermission): &quot;&quot;&quot;Allow access to the respective User object and to admin users.&quot;&quot;&quot; def has_object_permission(self, request, view, obj): return (request.user and request.user.is_staff) or ( isinstance(obj, User) and request.user == obj ) </code></pre> <pre><code>models.py class User(AbstractBaseUser): email = models.EmailField(verbose_name='Email',max_length=255,unique=True) name = models.CharField(max_length=200) contact_number= models.IntegerField() gender = models.IntegerField(choices=GENDER_CHOICES) address= models.CharField(max_length=100) state=models.CharField(max_length=100) city=models.CharField(max_length=100) country=models.CharField(max_length=100) pincode= models.IntegerField() dob = models.DateField(null= True) # is_staff = models.BooleanField(default=False) is_active = models.BooleanField(default=True) is_admin = models.BooleanField(default=False) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) objects = UserManager() USERNAME_FIELD = 'email' REQUIRED_FIELDS = ['name','contact_number','gender','address','state','city','country','pincode','dob'] def __str__(self): return self.email def has_perm(self, perm, obj=None): &quot;Does the user have a specific permission?&quot; # Simplest possible answer: Yes, always return self.is_admin def has_module_perms(self, app_label): &quot;Does the user have permissions to view the app `app_label`?&quot; # Simplest possible answer: Yes, always return True @property def is_staff(self): &quot;Is the user a member of staff?&quot; # Simplest possible answer: All admins are staff return self.is_admin class Booking(models.Model): user =models.ForeignKey(User,on_delete=models.CASCADE) flights =models.ForeignKey(Flight,on_delete=models.CASCADE) passenger =models.ManyToManyField(Passenger) booking_number= models.CharField(max_length= 100,default=0, blank= True) booking_time = models.DateTimeField(auto_now_add=True) no_of_passengers= models.IntegerField(default=0,blank= True) def __str__(self): return self.booking_number </code></pre>
<p>Assuming that you want to allow non-admin users to access their bookings the permission class would have to look like:</p> <pre class="lang-py prettyprint-override"><code> class IsUserOrIsAdmin(BasePermission): def has_object_permission(self, request, view, obj): return ( # staff can do everything (request.user and request.user.is_staff) or # accessed obj is a Booking and belongs to the user (isinstance(obj, Booking) and request.user.pk == obj.user.pk) or # user can access or modify his user object (isinstance(obj, User) and request.user.pk == obj.pk) ) </code></pre>
python|django|django-models|django-rest-framework|django-views
0
1,905,542
68,381,892
tf2onnx Tensorflow to Onnx inconsistent outputs
<p>I'm experimenting with creating a super simple Tensorflow network (one data processing Lambda layer), and then converting the model to ONNX and verifying the results match when calling the ONNX model from onnxruntime. I'm using Tensorflow v2.5.0. &amp; onnxruntime v1.8.1.</p> <pre><code>example_input2 = tf.convert_to_tensor([0,1000,2000,2000,2000,3000,3000,4000],dtype=tf.float32) </code></pre> <p>Model definition:</p> <pre><code>inp = keras.layers.Input(name=&quot;input&quot;, type_spec=tf.TensorSpec(shape=[None], dtype=tf.float32)) output = keras.layers.Lambda(lambda x: tf.roll(x,shift=-1,axis=0),name=&quot;output&quot;) (inp) model = keras.Model(inp,output,name=&quot;pipeline&quot;) </code></pre> <p>Then I can feed my example_input2 into the network:</p> <pre><code>model.predict(example_input2) </code></pre> <p>Which provides the desired output (simply a tf.roll operation):</p> <pre><code>array([1000., 2000., 2000., 2000., 3000., 3000., 4000., 0.], dtype=float32) </code></pre> <p>Great! Now I can save my tensorflow model,</p> <pre><code>model.save(&quot;OnnxInvestigateData/pipeline2&quot;, overwrite=True, include_optimizer=False, save_format='tf') </code></pre> <p>Then at the shell, I can convert it to an ONNX format using tf2onnx:</p> <pre><code>python -m tf2onnx.convert --opset 14 --saved-model pipeline2 --output pipeline2.onnx </code></pre> <p>Then, back in python, I can load the onnx model and try to feed through the same input:</p> <pre><code>sess = rt.InferenceSession(&quot;OnnxInvestigateData/pipeline2.onnx&quot;, log_verbosity_level=2) xinput = example_input2.numpy() sess.run(['output'],{&quot;args_0&quot;:xinput}) </code></pre> <p>Which provides output that matches the input, and not the desired output (which should have been tf.roll'd by -1):</p> <pre><code>[array([ 0., 1000., 2000., 2000., 2000., 3000., 3000., 4000.], dtype=float32)] </code></pre> <p>I'm totally at a loss as to why the output here isn't matching when I call <code>model.predict</code> from within python on my original keras model. Any ideas?</p>
<p>Looks like you found a bug in tf2onnx. Here's a fix: <a href="https://github.com/onnx/tensorflow-onnx/pull/1616" rel="nofollow noreferrer">https://github.com/onnx/tensorflow-onnx/pull/1616</a></p> <p>If you were just doing a test and don't want to wait for the fix to merge, try using a positive shift value.</p> <pre><code>output = keras.layers.Lambda(lambda x: tf.roll(x,shift=2,axis=0),name=&quot;output&quot;) (inp) </code></pre> <p>Also if you want to convert directly from keras in your script you can do:</p> <pre><code>onnx_model, _ = tf2onnx.convert.from_keras(model, opset=14) sess = rt.InferenceSession(onnx_model.SerializeToString()) </code></pre>
python|tensorflow|keras|onnx|onnxruntime
1
1,905,543
59,073,604
Can I make a server open HTML instead of downloading it? If so, how?
<p>I've got this server:</p> <pre><code>import http.server import socketserver PORT = 80 class MyRequestHandler(http.server.SimpleHTTPRequestHandler): def __init__(self, *args, directory=None, **kwargs): super().__init__(*args, directory=None, **kwargs) self.path = 'path.html' def do_GET(self): if self.path == '/': pass return http.server.SimpleHTTPRequestHandler.do_GET Handler = MyRequestHandler with socketserver.TCPServer(('', PORT), Handler) as httpd: print('Serving at port', PORT) httpd.serve_forever() </code></pre> <p>but it downloads the HTML file. Can I make it open it as the website? If yes, how?</p>
<p>According to the HTTP spec, for the content to be loaded in the browser instead of downloaded, you need to have the header below set:</p> <pre><code>Content-Disposition: inline </code></pre> <p>Python code for the server:</p> <pre><code>self.send_header('Content-Disposition', 'inline') </code></pre> <p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition" rel="nofollow noreferrer">Content-Disposition documentation</a></p>
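<p>A sketch of how that could be wired into the server from the question — overriding <code>end_headers</code> so the header is attached to every response (this assumes the handler already serves the file with a sensible Content-Type):</p> <pre><code>class MyRequestHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # ask the browser to render the body instead of saving it
        self.send_header('Content-Disposition', 'inline')
        super().end_headers()
</code></pre>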
python-3.x
0
1,905,544
59,183,601
How to properly append a list to a DataFrame in a for loop
<p>I'm attempting to add every item in a list of strings (lines from a file) to my DataFrame. Each line holds keys and values that I parse from JSON into a dict. The issue is I can't get pandas to properly make a DataFrame from them inside the loop (the code gets stuck in the for loop).</p> <pre><code>df = pd.DataFrame() df2 = pd.DataFrame() with open(log_file_path, "r") as file: for line in file: line = json.loads(line[1:]) items = line.items() all_list.append(list) df = df.append(pd.DataFrame.from_records([line])) continue print("work") print(df) print(df.head()) </code></pre> <p>Here is what each line looks like:</p> <pre><code>line = {'protocol': 'https', 'instanceid': 'beacond-lga13-1349-12003', 'raw_data': 'i|200|122!i|200|114!i|200|117', 'source_ip': '90.227.61.0', 'ts': 1549434199, 'jobid': '1uxw9ir', 'geocode': 'SE', 'referer': 'https://sv.cam4.com/female', 'user_agent': 'Mozilla/5.0 (Linux; Android 8.0.0; SAMSUNG SM-G935F/G935FXXS3ERL4 Build/R16NW) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/8.2 Chrome/63.0.3239.111 Mobile Safari/537.36', 'appid': '157pr4o', 'app_version': 1536174158, 'asn': 3301} </code></pre>
<p>I would make a list of lists and THEN construct your dataframe. For example:</p> <pre class="lang-py prettyprint-override"><code># After collecting each list lists = [['a', 'b'], ['c', 'd']] # Pass your list of lists (and you can name the columns too if you like!) pd.DataFrame(lists, columns=['col1', 'col2']) </code></pre> <p>Output:</p> <pre><code> col1 col2 0 a b 1 c d </code></pre>
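<p>Applied to the loop from the question (keeping the OP's <code>line[1:]</code> slicing as-is), a sketch: <code>json.loads</code> already yields dicts, and pandas builds a frame from a list of them directly:</p> <pre><code>import json
import pandas as pd

rows = []
with open(log_file_path, 'r') as file:
    for line in file:
        rows.append(json.loads(line[1:]))

df = pd.DataFrame(rows)  # one column per key: protocol, ts, geocode, ...
</code></pre>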
python|pandas|loops|dataframe
0
1,905,545
58,806,270
Counting number of HTML tags through webscraping
<p>My output should be the separate total number of each heading tag used ("H1"-"H6" header tags used on the page), plus paragraphs, images, and links.</p> <p>I am getting incorrect output: it is not finding the H tags at all, and the counter outputs 1 for each header tag. How do I count the correct number of HTML tags?</p> <pre><code>import re from bs4 import BeautifulSoup import requests from collections import Counter from string import punctuation #main program link_url = input("Please Enter the website address ") #retrieves url for parsing r = requests.get(link_url) b_soup = BeautifulSoup(r.content, features="html.parser") #Searching/parsing for various sized header content headerH1 = headH2 = headerH3 = headerH4 = headerH5 = headerH6 = 0 for header_tags in b_soup.findAll(): if(header_tags.name == "H1" or header_tags.name == "&lt;H1&gt;"): headerH1 = headerH1+1 if(header_tags.name == "H2" or header_tags.name == "&lt;H2 &gt;"): headH2 = headH2+1 if(header_tags.name == "H3" or header_tags.name == "&lt;H3 &gt;"): headerH3 = headerH3+1 if(header_tags.name == "H4" or header_tags.name == "&lt;H4 &gt;"): headerH4 = headerH4+1 if(header_tags.name == "H5" or header_tags.name == "&lt;H5 &gt;"): headerH5 = headerH5+1 if(header_tags.name == "H6" or header_tags.name == "&lt;H6 &gt;"): headerH6 = headerH6+1 print("Total Headings in H1: ", headerH1) print("Total Headings in H2: ", headH2) print("Total Headings in H3: ", headerH3) print("Total HeadingS in H4: ", headerH4) print("Total Headings in H4: ", headerH5) print("Total Headings in H5: ", headerH6) count = 0 #counting number of paragraphs for header_tags in b_soup.findAll(): if(header_tags.name == 'p' or header_tags.name == '&lt;p&gt;'): count = count+1 print("Paragraphs: ", count) #counting image total for img in b_soup.findAll(): if(img.name == 'img'): count = count+1 print("Images: ", count) count = 0 #counting number of links for link in b_soup.find_all('a', href=True): count = count+1 print("Links: ", count) </code></pre> <p>My output:</p> <pre><code> Total Headings in H1: 1 Total Headings in H2: 1 Total Headings in H3: 1 Total HeadingS in H4: 1 Total Headings in H4: 1 Total Headings in H5: 1 Paragraphs: 23 Images: 33 Links: 70 </code></pre> <p>The correct output for the website I was using should actually be similar to this:</p> <pre><code>Number of H1 Headings: 9 Number of images on this page: 10 </code></pre> <p>You don't need the website I used; you can use any link to test it.</p>
<p>Here is an example to count the number of <code>&lt;h1&gt;</code> tags from some HTML code:</p> <pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup html = "&lt;h1&gt;first&lt;/h1&gt;&lt;h1&gt;second&lt;/h1&gt;&lt;h2&gt;third&lt;/h2&gt;" soup = BeautifulSoup(html, 'html.parser') h1s = soup.find_all('h1') h1_count = len(h1s) # Gets the number of &lt;h1&gt; tags </code></pre> <p>In this example <code>h1_count</code> would be 2.</p> <p>You can do the same for other tag types by replacing <code>h1</code> in <code>find_all('h1')</code>:</p> <pre class="lang-py prettyprint-override"><code>h2s = soup.find_all('h2') h3s = soup.find_all('h3') ... h2_count = len(h2s) h3_count = len(h3s) </code></pre> <p>Hope this helps.</p>
python|html|python-3.x|parsing|beautifulsoup
3
1,905,546
59,850,674
How can I successively subtract values from int elements in a list until I reach the end of my list?
<p>So what I'm overall trying to accomplish is shifting a list containing multiple int elements successively by a variable that holds an unknown number of ints.</p> <p>Example: list = [11,2,107,103,97]</p> <p>key = '18'</p> <p>If we were subtracting, the updated list should look like this: updatedList = [10, -6, 106, 95, 96]</p> <p>In all the for loops that I've tried, my errors consist of ints not being iterable. When I tried iterating it as a string or a list, for some reason only one element was picked up. I could even specify which element by list slicing, but it was always one. </p>
<pre><code>my_list = [11,2,107,103,97] key = [1,8] updated_list = [my_list[i]-key[i % len(key)] for i in range(len(my_list))] </code></pre> <p>or, if key is a string as in the original post,</p> <pre><code>my_list = [11,2,107,103,97] key = '18' updated_list = [my_list[i]-int(key[i % len(key)]) for i in range(len(my_list))] </code></pre>
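<p>For the example in the question, both versions produce the result the question expects:</p> <pre><code>print(updated_list)  # [10, -6, 106, 95, 96]
</code></pre>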
python-3.x
2
1,905,547
49,130,343
Is there a way to get the error in fitting parameters from scipy.stats.norm.fit?
<p>I have some data which I have fitted a normal distribution to using the <code>fit</code> function of the <code>scipy.stats.norm</code> object, like so: </p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
import matplotlib.mlab as mlab

x = np.random.normal(size=50000)
fig, ax = plt.subplots()
nbins = 75
mu, sigma = norm.fit(x)
n, bins, patches = ax.hist(x,nbins,normed=1,facecolor = 'grey', alpha = 0.5, label='before');
y0 = mlab.normpdf(bins, mu, sigma) # Line of best fit
ax.plot(bins,y0,'k--',linewidth = 2, label='fit before')
ax.set_title('$\mu$={}, $\sigma$={}'.format(mu, sigma))
plt.show()
</code></pre> <p>I would now like to extract the uncertainty/error in the fitted mu and sigma values. How can I go about this? </p>
<p>You can use <a href="https://docs.scipy.org/doc/scipy-0.19.0/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer"><code>scipy.optimize.curve_fit</code></a>: this method not only returns the estimated optimal values of the parameters, but also the corresponding covariance matrix:</p> <blockquote> <p><strong>popt : array</strong></p> </blockquote> <blockquote> <p>Optimal values for the parameters so that the sum of the squared residuals of f(xdata, *popt) - ydata is minimized</p> </blockquote> <blockquote> <p><strong>pcov : 2d array</strong></p> </blockquote> <blockquote> <p>The estimated covariance of popt. The diagonals provide the variance of the parameter estimate. To compute one standard deviation errors on the parameters use perr = np.sqrt(np.diag(pcov)).</p> </blockquote> <blockquote> <p>How the sigma parameter affects the estimated covariance depends on absolute_sigma argument, as described above.</p> </blockquote> <blockquote> <p>If the Jacobian matrix at the solution doesn’t have a full rank, then ‘lm’ method returns a matrix filled with np.inf, on the other hand ‘trf’ and ‘dogbox’ methods use Moore-Penrose pseudoinverse to compute the covariance matrix.</p> </blockquote> <p>You can calculate the standard deviation errors of the parameters from the square roots of the diagonal elements of the covariance matrix as follows:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from scipy.optimize import curve_fit

x = np.random.normal(size=50000)
fig, ax = plt.subplots()
nbins = 75
n, bins, patches = ax.hist(x, nbins, density=True, facecolor='grey', alpha=0.5, label='before')
centers = (0.5*(bins[1:]+bins[:-1]))
pars, cov = curve_fit(lambda x, mu, sig: norm.pdf(x, loc=mu, scale=sig), centers, n, p0=[0, 1])

ax.plot(centers, norm.pdf(centers, *pars), 'k--', linewidth=2, label='fit before')
ax.set_title('$\mu={:.4f}\pm{:.4f}$, $\sigma={:.4f}\pm{:.4f}$'.format(pars[0], np.sqrt(cov[0,0]), pars[1], np.sqrt(cov[1,1])))
plt.show()
</code></pre> <p>This results in the following plot:</p> <p><a href="https://i.stack.imgur.com/rPlNS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rPlNS.png" alt="enter image description here" /></a></p>
python|statistics|curve-fitting|gaussian|data-fitting
8
1,905,548
60,011,230
Dictionary and function gene mapping output not returning expected frequencies
<p>I have been trying to figure out what's wrong with this bioinformatics code for hours and I can't see it. The pieces of my function appear to work, but it's not seeing certain patterns. I'm using a sliding window function to return the number of times a certain combination of base pairs of length <code>k</code> shows up in a piece of text.</p> <p>The first function I need essentially creates the index of the quaternary number of the nucleotide string:</p> <pre><code>nucs = {'A':0,'C':1,'G':2,'T':3}

def PatternToNumber(Pattern):
    index = 0
    power = []
    for i in range(len(Pattern)-1,-1,-1):
        power.append(i)
    for i in range(len(Pattern)):
        index += nucs[Pattern[i]]*(4**power[i])
    return index
</code></pre> <p>and the next function I use iterates down a chunk of text and adds 1 to the index in a frequency array.</p> <pre><code>def ComputingFrequencies(Text,k):
    FrequencyArray = [0]*(4**k)
    for i in range(len(Text)-k):
        Pattern = Text[i:i+k]
        index = PatternToNumber(Pattern)
        FrequencyArray[index] += 1
    print(*FrequencyArray)
</code></pre> <p>Like I said, I've looked into every line and it seems to work fine getting nucleotide patterns into index numbers the way I would expect them to, but the output you get running:</p> <pre><code>ComputingFrequencies('ACGCGGCTCTGAAA',2)
</code></pre> <p>is:</p> <pre><code>1 1 0 0 0 0 2 2 1 2 1 0 0 1 1 0
</code></pre> <p>If you look at the first number in the <code>FrequencyArray</code>, it would tell us that the string 'AA' only shows up once, but the last three characters in the Text input are 'AAA', which would mean 'AA' shows up twice and the first entry in the <code>FrequencyArray</code> should be 2 and not 1. What we should expect is:</p> <pre><code>2 1 0 0 0 0 2 2 1 2 1 0 0 1 1 0
</code></pre> <p>If I've not explained it well, I can try to clarify my code a bit if needed. </p>
<p>I'm pretty sure you just have an off-by-one error: you're not checking up to the last character.</p> <pre><code>for i in range(len(Text)-k):
</code></pre> <p>For length 2, this will only iterate up to the first <code>AA</code>, so you're only seeing it once. Change it to </p> <pre><code>for i in range(len(Text)-(k-1)):
</code></pre> <p>And I think that should give you what you want. </p>
python|bioinformatics|genetics
1
1,905,549
67,654,576
How to merge elements of dictionary with elements of list of lists of dictionaries?
<p>I have a dictionary in a list:</p> <pre><code>[{'selectionId': 47999}, {'selectionId': 55190}, {'selectionId': 58805}]
</code></pre> <p>and also a list of lists of dictionaries:</p> <pre><code>[[{'price': 2.04, 'size': 45.35}, {'price': 2.02, 'size': 7404.31}, {'price': 2.0, 'size': 15485.06}],
 [{'price': 4.3, 'size': 2493.19}, {'price': 4.2, 'size': 5627.74}, {'price': 4.1, 'size': 1489.93}],
 [{'price': 3.5, 'size': 5785.37}, {'price': 3.45, 'size': 4404.69}, {'price': 3.4, 'size': 4917.9}]]
</code></pre> <p>I want to 'zip' the elements of the first dictionary with each element in the dictionaries of the list of lists, like the following:</p> <pre><code>[[{'selectionId': 47999, 'price': 2.04, 'size': 45.35}, {'selectionId': 47999, 'price': 2.02, 'size': 7404.31}, {'selectionId': 47999, 'price': 2.0, 'size': 15485.06}]...
</code></pre> <p>and so on. How could I do this? I have tried the <code>{**x, **y}</code> syntax but it doesn't work how I want it to.</p>
<pre class="lang-py prettyprint-override"><code>from pprint import pprint l1 = [{&quot;selectionId&quot;: 47999}, {&quot;selectionId&quot;: 55190}, {&quot;selectionId&quot;: 58805}] l2 = [ [ {&quot;price&quot;: 2.04, &quot;size&quot;: 45.35}, {&quot;price&quot;: 2.02, &quot;size&quot;: 7404.31}, {&quot;price&quot;: 2.0, &quot;size&quot;: 15485.06}, ], [ {&quot;price&quot;: 4.3, &quot;size&quot;: 2493.19}, {&quot;price&quot;: 4.2, &quot;size&quot;: 5627.74}, {&quot;price&quot;: 4.1, &quot;size&quot;: 1489.93}, ], [ {&quot;price&quot;: 3.5, &quot;size&quot;: 5785.37}, {&quot;price&quot;: 3.45, &quot;size&quot;: 4404.69}, {&quot;price&quot;: 3.4, &quot;size&quot;: 4917.9}, ], ] for a, b in zip(l1, l2): for d in b: d.update(a) pprint(l2) </code></pre> <p>Prints:</p> <pre class="lang-py prettyprint-override"><code>[[{'price': 2.04, 'selectionId': 47999, 'size': 45.35}, {'price': 2.02, 'selectionId': 47999, 'size': 7404.31}, {'price': 2.0, 'selectionId': 47999, 'size': 15485.06}], [{'price': 4.3, 'selectionId': 55190, 'size': 2493.19}, {'price': 4.2, 'selectionId': 55190, 'size': 5627.74}, {'price': 4.1, 'selectionId': 55190, 'size': 1489.93}], [{'price': 3.5, 'selectionId': 58805, 'size': 5785.37}, {'price': 3.45, 'selectionId': 58805, 'size': 4404.69}, {'price': 3.4, 'selectionId': 58805, 'size': 4917.9}]] </code></pre>
python|dictionary|merge|zip
0
1,905,550
26,443,370
configuring Django urls between project and apps
<p>I want to add a accounts app to the mysite/polls project from the Django tutorial. I am not able to get the urls setup properly when I add a 'accounts' app to manage user authentication.</p> <p>My root URLconf is:</p> <pre class="lang-py prettyprint-override"><code>from django.conf.urls import patterns, include, url from django.contrib import admin urlpatterns = patterns( '', # Examples: # url(r'^$', 'uniVote.views.home', name='home'), # url(r'^blog/', include('blog.urls')), url(r'^elections/', include('elections.urls', namespace='elections')), url(r'^admin/', include(admin.site.urls)), # ex: accounts/... url(r'^accounts/', include('accounts.urls')), ) </code></pre> <p>and my URLconf in my accounts app is:</p> <pre class="lang-py prettyprint-override"><code>from django.conf.urls import patterns, url urlpatterns = patterns( '', # ex: /accounts/login/ url(r'^accounts/login/$', 'accounts.views.user_login'), # ex: /accounts/logout/ url(r'^accounts/logout/$', 'accounts.views.user_logout'), # ex: /accounts/register/ url(r'^accounts/register/$', 'accounts.views.user_register'), ) </code></pre> <p>Also, here is my accounts views.py:</p> <pre class="lang-py prettyprint-override"><code>from django.http import HttpResponse, HttpResponseRedirect from django.contrib.auth import authenticate, login, logout from django.shortcuts import render_to_response from django.core.context_processors import csrf #Import a user registration form from accounts.forms import UserRegisterForm # User Login View def user_login(request): if request.user.is_anonymous(): if request.method == 'POST': username = request.POST['username'] password = request.POST['password'] #This authenticates the user user = authenticate(username=username, password=password) if user is not None: if user.is_active: #This logs him in login(request, user) else: return HttpResponse("Not active") else: return HttpResponse("Wrong username/password") return HttpResponseRedirect("/") # User Logout View def user_logout(request): logout(request) return HttpResponseRedirect('/') # User Register View def user_register(request): if request.user.is_anonymous(): if request.method == 'POST': form = UserRegisterForm(request.POST) if form.is_valid: form.save() return HttpResponse('User created succcessfully.') else: form = UserRegisterForm() context = {} context.update(csrf(request)) context['form'] = form #Pass the context to a template return render_to_response('accounts/register.html', context) else: return HttpResponseRedirect('/') </code></pre> <p>Whenever I point my browser to: 0.0.0.0:8080/accounts/register, I get a 404 error and the following output:</p> <p>Using the URLconf defined in uniVote.urls, Django tried these URL patterns, in this order:</p> <ol> <li>^elections/ <li>^admin/ <li>^accounts/ ^accounts/login/$ <li>^accounts/ ^accounts/logout/$ <li>^accounts/ ^accounts/register/$ </ol> <p>The current URL, accounts/register/, didn't match any of these.</p> <hr> <p>From the tutorial, I believed that addr:port/accounts/register would load the root URLconf and see 'accounts' and load the accounts/urls.py and pass 'register' and see the third url pattern and load accounts.views.user_register.</p> <p>Is there something I am missing to reach the accounts/... urls?</p>
<p>Currently your app is configured to be viewed at the <code>accounts/accounts/register/</code> url.</p> <p>From your code, the URL regexes have "accounts" repeated.</p> <p>Basically you should keep:</p> <p><code>url(r'^accounts/', include('accounts.urls'))</code></p> <p>and then remove "accounts" from the account urls.</p> <pre><code>urlpatterns = patterns(
    '',
    # ex: /accounts/login/
    url(r'^login/$', 'accounts.views.user_login'),
    # ex: /accounts/logout/
    url(r'^logout/$', 'accounts.views.user_logout'),
    # ex: /accounts/register/
    url(r'^register/$', 'accounts.views.user_register'),
)
</code></pre>
python|django
3
1,905,551
26,438,814
How to call a client side method from the server using socket.io
<p>I am using flask-socketio for the server and have the following code to call a client function from the server. Is this the correct way to do it? Or is there a better way</p> <p>Server</p> <pre><code>@socketio.on('connect', namespace='/test') def test_connect(): sleep(5) emit('my response', {'data': 'myFunction', 'parameter': 2}) </code></pre> <p>Client</p> <pre><code>socket.on('my response', function(msg) { console.log(msg) window[msg.data]({parameter1 : msg.parameter}); } </code></pre>
<p><strong>The server should never try to explicitly call any function on the client.</strong> Client and server are different applications that "talk" to each other through requests, but I would consider it bad practice having a server that knows the names of functions in the client, how the response will be handled by the client, or how the visual interface works. Similarly, it would be really bad if the client knew about the server's specific implementation and told it which database to use, for instance.</p> <p>This is just simple <a href="http://en.wikipedia.org/wiki/Separation_of_concerns" rel="noreferrer" title="separation of concerns">separation of concerns</a>. Also, read about Client-Server domain separation. Anyhow, I don't know about your specific application, but <strong>let's get to a practical example</strong>:</p> <p>Let's assume your app is a modified version of stack overflow. When a user upvotes an answer, the server is notified, and the client app has to change the height of a bar for that certain user (according to the new number of points) on a page that shows all users...</p> <p>Following your current implementation design, this is what would happen:</p> <p>Server</p> <pre><code>emit('new-points', {
    'data': 'IncreaseBarHeight',
    'parameter': '10px',
    'user': '3476'
})
</code></pre> <p>Client</p> <pre><code>socket.on('new-points', function(msg) {
    window[msg.data]({height : msg.parameter}); //this would trigger global IncreaseBarHeight('10px', msg.user)
});
</code></pre> <p>So the client <em>has some knowledge</em> about the interface: it knows that there's a bar in the UI and that the height should be increased. If I decide to change the UI and have only the number instead of a bar, I would have to re-implement the server as well. They are too coupled.</p> <p><strong>This is a much better approach:</strong></p> <p>Server</p> <pre><code>emit('new-points', {'points': 3560, 'user': '8376'})
</code></pre> <p>Client</p> <pre><code>socket.on('new-points', function(msg) {
    SetBarHeight(msg.points, msg.user); //set height based on total points
});
</code></pre> <p>The server doesn't know how the client will use the information, and therefore they are not coupled. I can change the UI representation of the points without having to modify the server. Does it make sense?</p>
javascript|python|websocket|flask
5
1,905,552
61,203,352
How to rename python locust actions?
<p>I have the following code from the locustio documentation: <br></p> <pre><code>from locust import HttpLocust, TaskSet, between

def login(l):
    l.client.post("/login", {"username":"ellen_key", "password":"education"})

def logout(l):
    l.client.post("/logout", {"username":"ellen_key", "password":"education"})

def index(l):
    l.client.get("/")

def profile(l):
    l.client.get("/profile")

class UserBehavior(TaskSet):
    tasks = {index: 2, profile: 1}

    def on_start(self):
        login(self)

    def on_stop(self):
        logout(self)

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    wait_time = between(5.0, 9.0)
</code></pre> <p>In the locust logs and locust web (localhost:8089) I see the following tasks</p> <pre><code>- /login
- /logout
- /
- /profile
</code></pre> <p>But what if I need to have a few requests in one task and get the measurement for the full task (not one request)?<br> What I want to see is: <br></p> <pre><code>- login
- logout
- index
- profile
</code></pre> <p>I want to see the names of the tasks instead of the request urls. In JMeter I can put a few requests in one action and get the action time (not the request time).</p>
<p>You can set the name with the <code>name</code> argument on each request; see the example:</p> <pre><code>def index(l):
    l.client.get("/", name="index")

def profile(l):
    l.client.get("/profile", name="my-profile")
</code></pre>
python|performance-testing|locust|taurus
7
1,905,553
69,334,958
Taking a 3*3 subset matrix from from a really large numpy ndarray in Python
<p>I am trying to take a 3*3 subset from a really large 400 x 500 numpy ndarray. But for some reason, I am not getting the desired result. Rather, it is taking the first three rows as a whole.</p> <p>Here is the code that I wrote.</p> <pre><code>subset_matrix = mat[0:3][0:3]
</code></pre> <p>But this is what I am getting in the output of my Jupyter Notebook</p> <pre><code>array([[91, 88, 87, ..., 66, 75, 82],
       [91, 89, 88, ..., 68, 78, 84],
       [91, 89, 89, ..., 72, 80, 87]], dtype=uint8)
</code></pre>
<p><code>mat[0:3][0:3]</code> slices axis 0 of the 2D array twice, so it is equivalent to <code>mat[0:3]</code>. What you need is <code>mat[0:3,0:3]</code>.</p>
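<p>A quick illustration with a small array (the shapes here are just for demonstration):</p> <pre><code>import numpy as np

mat = np.arange(12).reshape(3, 4)
print(mat[0:3][0:3].shape)  # (3, 4) -- the same rows are selected twice
print(mat[0:3, 0:3].shape)  # (3, 3) -- the actual 3x3 subset
</code></pre>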
python-3.x|numpy|matrix|jupyter-notebook|numpy-ndarray
1
1,905,554
42,190,984
`dyld: Library not loaded` error preventing virtualenv from loading
<p>When I tried creating a virtual environment with python using the command <code>virtualenv venv</code> from Terminal, I got the following error: </p> <pre><code>Using base prefix '/Users/zacharythomas/anaconda3'
New python executable in /Users/zacharythomas/venv/bin/python
dyld: Library not loaded: @rpath/libpython3.6m.dylib
  Referenced from: /Users/zacharythomas/venv/bin/python
  Reason: image not found
ERROR: The executable /Users/zacharythomas/venv/bin/python is not functioning
ERROR: It thinks sys.prefix is '/Users/zacharythomas' (should be '/Users/zacharythomas/venv')
ERROR: virtualenv is not compatible with this system or executable
</code></pre> <p>I'm not the first person to encounter a similar error -- I tried following <a href="https://stackoverflow.com/a/25947333/3899919">this answer's</a> recommendations and running: </p> <pre><code>gfind ~/.virtualenvs/my-virtual-env/ -type l -xtype l -delete
</code></pre> <p>That didn't help. Nor did running <code>sudo virtualenv venv</code> to run the command as a super user. </p> <p>What should I investigate next?</p>
<p>I had the exact same error message. Ray Donnelly at Continuum Analytics Support Group provided the following solution, which resolved the issue for me:</p> <blockquote> <p>When you pip installed virtualenvwrapper, pip will have installed virtualenv for you as it is a dependency. Unfortunately, that virtualenv is not compatible with Anaconda Python. Fortunately, the Anaconda Distribution has a virtualenv that is compatible. To fix this:</p> <pre><code>pip uninstall virtualenv conda install virtualenv </code></pre> </blockquote> <p><a href="https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/XuXMiJLhMgk" rel="noreferrer">can't get virtualenv to work with anaconda3 v4.3 on mac</a></p>
python|virtualenv
51
1,905,555
28,521,971
BST total depth complication
<p>Here is an excerpt of a working BST:</p> <pre><code>class BinaryTree(): def __init__(self,rootid): self.left = None self.right = None self.rootid = rootid def getLeftChild(self): return self.left def getRightChild(self): return self.right def setNodeValue(self,value): self.rootid = value def getNodeValue(self): return self.rootid </code></pre> <p>I decided not to display every function of the class above, only the important ones for what I am trying to achieve.</p> <p>What I would like is to calculate the total depth of every node in the tree, and I attempted to use the following function:</p> <pre><code>def depth(tree, count=1): if tree != None: return count + depth(tree.getLeftChild(), count+1) + depth(tree.getRightChild(), count+1) </code></pre> <p>The <code>count=1</code> represents the idea that the root node has a depth of 1.</p> <p>The problem with this function, however, is that it crashes when it reaches a <code>None</code> node, and I do not know how to fix it.</p> <p>This is the error message I get when I try utilizing the function:</p> <pre><code>TypeError: unsupported operand type(s) for +: 'int' and 'NoneType' </code></pre> <p>Can someone help me out? </p>
<p>Your recursive function <code>depth</code> needs a base case:</p> <pre><code>def depth(tree):
    if tree == None:
        return 0
    else:
        return 1 + max(depth(tree.getLeftChild()), depth(tree.getRightChild()))
</code></pre> <p>And you forgot a <code>max</code> around the depths of the subtrees.</p> <p>Your snippet above fails as soon as <code>tree == None</code> (when a node has no child on either side). Then nothing is returned, which implicitly returns <code>None</code> in Python.</p>
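<p>A quick check with the <code>BinaryTree</code> class from the question (the tree shape here is just an example):</p> <pre><code>root = BinaryTree(8)
root.left = BinaryTree(3)
root.right = BinaryTree(10)
root.left.left = BinaryTree(1)

print(depth(root))  # 3 -- the longest root-to-leaf path has 3 nodes
</code></pre>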
python|binary-search-tree|depth
1
1,905,556
41,663,386
How is this 2sum Algorithm O(nlogn)?
<p>Given a list <code>L</code> and an int <code>c</code>, I have to find out if there are two elements in my list that add up to <code>c</code> (the 2Sum problem). I came up with the following algorithm:</p> <pre><code>def tsum(L,c):
    a=sorted(L)
    b=sorted(L,reverse=True)
    for kleineZahl in a:
        for großeZahl in b:
            sum=kleineZahl+großeZahl
            if sum&gt;c:
                continue
            elif sum==c:
                return(True)
            elif sum&lt;c:
                break
    return(False)
</code></pre> <p>Now I found out that this runs in <em>O(n log n)</em>, since the sorting takes <em>O(n log n)</em> actions. The "scanning" is supposed to take <em>O(n)</em> actions. How come?</p> <p>I figured the worst case scenario would be <code>L=[1,1,1,1,1,c,c,c,c,c]</code>. How is the runtime not n/2*n/2, so <em>O(n<sup>2</sup>)</em>?</p>
<p>The algorithm you discuss above indeed has a time complexity of <em>O(n<sup>2</sup>)</em>, because the nested loops can still compare up to <em>n<sup>2</sup></em> pairs; sorting alone does not help there. You can however implement a smarter one: first sort the list, then maintain two pointers, <code>left</code> and <code>right</code>, starting at the two ends of the sorted list. If <code>a[left]+a[right]</code> equals <code>c</code>, you return <code>True</code>; if the sum is too large, you move <code>right</code> one step to the left; if it is too small, you move <code>left</code> one step to the right. If the pointers cross, we know no such pair exists and we return <code>False</code>. Since <code>left</code> and <code>right</code> together perform at most <em>O(n)</em> steps, the scanning is <em>O(n)</em>, but the sorting step makes the total <em>O(n log n)</em>. The smarter algorithm thus is:</p> <pre><code>def tsum(L, c):
    a = sorted(L)
    left, right = 0, len(a) - 1
    while left &lt; right:
        s = a[left] + a[right]
        if s == c:
            return True
        elif s &gt; c:
            right -= 1  # sum too large: try a smaller right element
        else:
            left += 1   # sum too small: try a larger left element
    return False
</code></pre> <p>The difference is that each pointer only ever moves in one direction, so you never re-check pairs you have already ruled out.</p>
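<p>A quick sanity check (the inputs are arbitrary):</p> <pre><code>print(tsum([11, 2, 7, 3], 10))  # True  (3 + 7)
print(tsum([11, 2, 7, 3], 6))   # False
</code></pre>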
python|runtime|time-complexity
1
1,905,557
46,220,892
how to create a new column conditional on other column by using existing columns in python
<p>I have a <code>dataframe</code> in <strong>python</strong> which looks like this</p> <pre><code>dt = pd.DataFrame({"language1": ["english", "english123", "ingles", "ingles123", "14.0", "13", "french"],
                   "language2": ["englesh", "english123", "ingles", "ingles123", "14", "13", "french"],
                   "language3": ["englesh", "engl", "ingles", "ingles123", "14", "13", "spanish"]})
</code></pre> <p>What I would like to do is replicate this <strong>R</strong> code but in <strong>python</strong></p> <pre><code>dt[,language4:=ifelse(!language1%in%c("french"),paste0(language2,"_win"),paste0(language3,"_lose"))]
</code></pre> <p>I tried this but it does not work</p> <pre><code>dt['language4'] = dt.apply(lambda x: ~x['language1'].isin(['french']), x['language2'] + "_win", x['language3']+"_lose")
</code></pre> <p>So I came up with this</p> <pre><code>dt.loc[~dt['language1'].isin(["french"]),'language4'] = surv_dt_sd['language2'] + \
    "_win"
</code></pre> <p>but I do not know how to implement the <code>else</code> bit in one line</p>
<p><code>numpy.where</code> will work here. Note that the condition has to be a vectorized, element-wise test (a plain Python <code>in</code> check won't broadcast over the column), and the <code>_lose</code> branch should come from <code>language3</code>:</p> <pre><code>dt['language4'] = np.where(~dt['language1'].isin(['french']),
                           dt['language2'] + '_win',
                           dt['language3'] + '_lose')
</code></pre>
python|r|python-3.x|pandas|dataframe
2
1,905,558
46,595,157
How to apply the torch.inverse() function of PyTorch to every sample in the batch?
<p>This may seem like a basic question, but I am unable to work it through. </p> <p>In the forward pass of my neural network, I have an output tensor of shape 8x3x3, where 8 is my batch size. We can assume each 3x3 tensor to be a non-singular matrix. I need to find the inverse of these matrices. The PyTorch <a href="http://pytorch.org/docs/master/torch.html#torch.inverse" rel="noreferrer">inverse()</a> function only works on square matrices. Since I now have 8x3x3, how do I apply this function to every matrix in the batch in a differentiable manner?</p> <p>If I iterate through the samples and append the inverses to a python list, which I then convert to a PyTorch tensor, should it be a problem during backprop? (I am asking since converting PyTorch tensors to numpy to perform some operations and then back to a tensor won't compute gradients during backprop for such operations)</p> <p>I also get the following error when I try to do something like that.</p> <pre><code>a = torch.arange(0,8).view(-1,2,2) b = [m.inverse() for m in a] c = torch.FloatTensor(b) </code></pre> <blockquote> <p>TypeError: 'torch.FloatTensor' object does not support indexing</p> </blockquote>
<p><strong>EDIT:</strong></p> <p><strong>As of Pytorch version 1.0, <code>torch.inverse</code> now supports batches of tensors. See <a href="https://pytorch.org/docs/master/torch.html#torch.inverse" rel="noreferrer">here</a>. So you can simply use the built-in function <code>torch.inverse</code></strong></p> <p>OLD ANSWER</p> <p>There are plans to implement batched inverse soon. For discussion, see for example <a href="https://github.com/pytorch/pytorch/issues/7500" rel="noreferrer">issue 7500</a> or <a href="https://github.com/pytorch/pytorch/pull/9102" rel="noreferrer">issue 9102</a>. However, as of the time of writing, the current stable version (0.4.1), no batch inverse operation is available. </p> <p>Having said that, recently batch support for <code>torch.gesv</code> was added. This can be (ab)used to define your own batched inverse operation along the following lines:</p> <pre><code>def b_inv(b_mat): eye = b_mat.new_ones(b_mat.size(-1)).diag().expand_as(b_mat) b_inv, _ = torch.gesv(eye, b_mat) return b_inv </code></pre> <p>I found that this gives good speed-ups over a for loop when running on GPU. </p>
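<p>For recent versions, a minimal batched usage sketch (assuming the matrices are non-singular, which random matrices almost surely are):</p> <pre><code>import torch

a = torch.randn(8, 3, 3)
a_inv = torch.inverse(a)  # batched inverse, PyTorch &gt;= 1.0

# each product should be (close to) the identity matrix
print(torch.allclose(a @ a_inv, torch.eye(3).expand_as(a), atol=1e-4))
</code></pre>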
python|pytorch
11
1,905,559
46,474,219
python embedding on Mac ModuleNotFoundError: No module named 'encodings'
<p>I'm currently unable to use the Cython embedding feature. The binary compiles fine and <code>otool -L embedded</code> returns the following results.</p> <pre><code>embedded:
    @rpath/libpython3.6m.dylib (compatibility version 3.6.0, current version 3.6.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1238.60.2)
    /System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation (compatibility version 150.0.0, current version 1349.8.0)
</code></pre> <p>This is the command I ran. Any thoughts on why this is not working? Cython using setup.py works fine when I want to create a Cython module, i.e. I'm able to import the Cython module in Python.</p> <pre><code>$ make
gcc -c embedded.c -I/Users/$USER/miniconda3/include/python3.6m -I/Users/$USER/miniconda3/include/python3.6m
gcc -o embedded embedded.o -L/Users/$USER/miniconda3/lib -L/Users/$USER/miniconda3/lib/python3.6/config-3.6m-darwin -lpython3.6m -ldl -framework CoreFoundation -Wl,-stack_size,1000000 -framework CoreFoundation
$ ./embedded
Could not find platform independent libraries &lt;prefix&gt;
Could not find platform dependent libraries &lt;exec_prefix&gt;
Consider setting $PYTHONHOME to &lt;prefix&gt;[:&lt;exec_prefix&gt;]
Fatal Python error: Py_Initialize: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'

Current thread 0x000000010f8113c0 (most recent call first):
[1]    32931 abort      ./embedded
</code></pre> <p>Suggestions?</p>
<p>You are basically trying to run a Python native code extension as a stand alone binary without the Python interpreter. This will never work.</p> <p>Cython extension code produces extensions to the Python interpreter. </p> <p>They are shared modules that can only be loaded within a running Python interpreter. They cannot be used as stand alone binaries.</p> <p>If you want to make and distribute a stand alone binary of Python code with or without extensions, the interpreter will need to be bundled along with the code - see <a href="http://cx-freeze.readthedocs.io/en/latest/overview.html" rel="nofollow noreferrer">cx_freeze</a>.</p>
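<p>For illustration, a minimal cx_Freeze sketch — the script and app names here are hypothetical placeholders:</p> <pre><code># setup.py
from cx_Freeze import setup, Executable

setup(
    name="myapp",        # hypothetical application name
    version="0.1",
    executables=[Executable("embedded_script.py")],  # your Python entry point
)
</code></pre> <p>Running <code>python setup.py build</code> then produces a stand alone build directory with the interpreter and its standard library bundled alongside your code.</p>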
python|c|macos|embed|cython
0
1,905,560
55,133,456
Call class method from another class PYTHON
<p>I am just trying to get a program that receives a point from one class, and then in another class, it uses that point as the center of the circle. I imagine this is simple but I don't know how to do it.</p> <pre><code>class Point: def __init__(self, x, y): self.x = x self.y = y class Circle(Point): def circle(self, center, radius): Point.x = center Point.y = center self.radius = radius </code></pre>
<p>You shouldn't subclass Point for your Circle class; it doesn't make much sense, as they are two completely different things. Instead you can take a Point as the center of your circle and pass it into the Circle class in the <code>__init__</code>:</p> <pre><code>class Circle(object):
    def __init__(self, center: Point, radius):
        self.center = center
        self.radius = radius
</code></pre>
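<p>Usage would then look like this (the values are arbitrary):</p> <pre><code>center = Point(0, 0)
c = Circle(center, 5)
print(c.center.x, c.center.y, c.radius)  # 0 0 5
</code></pre>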
python
3
1,905,561
52,111,775
Removing annotation from for loop
<p>The problem is that <code>aux1.remove()</code> doesn't remove the annotations added to the scatter points.</p> <p>But <code>aux.remove()</code> does remove the scatter points. So in the end I get a lot of annotations when I keep adding / removing new points.</p> <pre><code>aux = plt.scatter(obj_dy[:], obj_dx[:], color='green') for k in range(len(obj_index)): aux1 = plt.annotate(str(obj_index[k]), xy = (obj_dy[k], obj_dx[k])) plt.pause(0.1000) aux.remove() aux1.remove() </code></pre>
<p>The problem is that the creation of the annotations is inside a for loop. When you do <code>aux1.remove()</code> you only remove the last annotation on the axes. </p> <p>One solution would be to put <code>aux1</code> into a list and then, after the for loop is finished, loop through the list and remove the annotations:</p> <pre><code>aux = plt.scatter(obj_dy[:], obj_dx[:], color='green')

aux1_list = []  # empty list that the annotations will go in
for k in range(len(obj_index)):
    aux1 = plt.annotate(str(obj_index[k]), xy = (obj_dy[k], obj_dx[k]))
    aux1_list.append(aux1)

plt.pause(0.1)
aux.remove()  # remove scatter points

# remove annotations
for ann in aux1_list:
    ann.remove()

plt.pause(0.01)
plt.show()
</code></pre> <p>Another way to do this without having to store the annotations in a list would be to loop through the <code>axes</code> children, check whether they are annotations and remove them if that is the case:</p> <pre><code>import matplotlib  # needed for the isinstance check below

for child in plt.gca().get_children():
    if isinstance(child, matplotlib.text.Annotation):
        child.remove()
</code></pre>
python|python-2.7|matplotlib|annotations
0
1,905,562
52,159,838
Keras saving weights of individual layers instead of a model
<p>I'm currently trying to create multiple models that will reuse certain layers, including their weights. I've achieved this by creating a list that initializes these layers, then calling them when creating each individual model.</p> <pre><code>column = []
column.append(Conv2D(self.out_filters, (3, 3), padding='same',
                     kernel_initializer='he_normal', activation='relu'))
column.append(Conv2D(self.out_filters, (5, 5), padding='same',
                     kernel_initializer='he_normal', activation='relu'))
</code></pre> <p>then when creating models</p> <pre><code>layer = column[0](input)
</code></pre> <p>Now my question is, how do I save the weights of all the layers in the list? As far as I know, keras' save function only saves entire models that have been properly built.</p> <p><strong>Edit:</strong> Just to clarify, I want to save the "column" list, and not the final models. I am randomly generating model structures while using the layers stored inside "column". So 2 models may have different architectures, but they have shared weights (training on one model will also affect the weights of the other model). </p>
<p>You can save the weights of a whole model like so:</p> <pre><code>model.save_weights('my_model_weights.h5')
</code></pre> <p><code>model.get_weights()</code> can also be used to get the weights of the model as a list of numpy arrays, which you can then save manually to use later:</p> <pre><code>weights = model.get_weights()
</code></pre> <p>See this <a href="https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model" rel="nofollow noreferrer">Link</a>. </p>
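<p>Since the question is about the shared layers in the <code>column</code> list rather than a whole model, one hedged sketch is to pickle each layer's weight arrays yourself. The file name is arbitrary, and the layers must already be built (i.e. used in at least one model) before they have weights:</p> <pre><code>import pickle

# save: get_weights() returns a list of numpy arrays per layer
weights = [layer.get_weights() for layer in column]
with open('column_weights.pkl', 'wb') as f:
    pickle.dump(weights, f)

# load: restore into a freshly built "column" with the same layer shapes
with open('column_weights.pkl', 'rb') as f:
    saved = pickle.load(f)
for layer, w in zip(column, saved):
    layer.set_weights(w)
</code></pre>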
python|keras|deep-learning|conv-neural-network
0
1,905,563
54,262,808
ImportError: cannot import name 'key' from 'pynput.keyboard'
<p>In the first place I want to apologize if this question is dumb.</p> <p>I have a problem with this error:</p> <pre><code> ImportError: cannot import name 'key' from 'pynput.keyboard' (C:\Users\richard\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pynput\keyboard\__init__.py)
</code></pre> <p>Can you please tell me how to fix it?</p> <p>I tried to find some advice on google, but I didn't find anything. Maybe it is just too dumb a "problem".</p> <p>This is the unfinished code; I wanted to try if it is working, then the error showed up.</p> <pre><code>import pynput
from pynput.keyboard import key, Listener

count = 0
keys = []

def on_press(key):
    global keys, count
    print("{0} pressed".format(key))

def write_file(keys):
    with open("log.txt", "w") as f:
        for key in keys:
            f.write(key)

def on_realease(key):
    if key == Key.esc:
        return False

with Listener (on_press=on_press, on_release=on_realease) as listener:
    listener.join()
</code></pre> <p>This is the whole problem: </p> <pre><code>Traceback (most recent call last):
  File "C:/Users/richard/AppData/Local/Programs/Python/Python37-32/Logger.py", line 3, in &lt;module&gt;
    from pynput.keyboard import key, Listener
ImportError: cannot import name 'key' from 'pynput.keyboard' (C:\Users\richard\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pynput\keyboard\__init__.py)

Process finished with exit code 1
</code></pre>
<p>It's <code>Key</code>, not <code>key</code>:</p> <pre><code>from pynput.keyboard import Key, Listener
</code></pre> <p>Take a look at the docs <a href="https://pypi.org/project/pynput/" rel="nofollow noreferrer">here</a>.</p>
python|python-3.x|pynput
4
1,905,564
26,071,863
Modify regular expression
<p>I am trying to get the first pair of numbers from "<strong>09</strong>_135624.jpg"</p> <p>My code now:</p> <pre><code>import re
string = "09_135624.jpg"
pattern = r"(?P&lt;pair&gt;(.*))_135624.jpg"
match = re.findall(pattern, string)
print match
</code></pre> <p>Output:</p> <pre><code>[('09', '09')]
</code></pre> <p>Why do I have a tuple in the output?</p> <p>Can you help me modify my code to get this:</p> <pre><code>['09']
</code></pre> <p>Or:</p> <pre><code>'09'
</code></pre>
<p><a href="https://docs.python.org/2/library/re.html#re.findall" rel="nofollow"><code>re.findall</code></a> returns differently according to the number of capturing group in the pattern:</p> <pre><code>&gt;&gt;&gt; re.findall(r"(?P&lt;pair&gt;.*)_135624\.jpg", "09_135624.jpg") ['09'] </code></pre> <p>According to the documentation:</p> <blockquote> <p>Return all non-overlapping matches of pattern in string, as a list of strings. The string is scanned left-to-right, and matches are returned in the order found. <strong>If one or more groups are present in the pattern, return a list of groups; this will be a list of tuples if the pattern has more than one group.</strong> Empty matches are included in the result unless they touch the beginning of another match.</p> </blockquote> <hr> <p>Alternative using <code>re.search</code>:</p> <pre><code>&gt;&gt;&gt; re.search(r"(?P&lt;pair&gt;.*)_135624\.jpg", "09_135624.jpg") &lt;_sre.SRE_Match object at 0x00000000025D0D50&gt; &gt;&gt;&gt; re.search(r"(?P&lt;pair&gt;.*)_135624\.jpg", "09_135624.jpg").group('pair') '09' &gt;&gt;&gt; re.search(r"(?P&lt;pair&gt;.*)_135624\.jpg", "09_135624.jpg").group(1) '09' </code></pre> <p><strong>UPDATE</strong></p> <p>To match <code>.</code> literally, you need to escape it: <code>\.</code>.</p>
python|regex|python-2.7
1
1,905,565
70,578,067
How to convert a FITS file to a numpy array
<p>I'm trying to train an AI algorithm to determine photometric redshifts of galaxies, and to do so I have a FITS file which contains the training data. I need to convert this FITS file into a format which can be manipulated easily in Python, specifically a numpy array. I have already tried using astropy and followed the below youtube video:</p> <p><a href="https://www.youtube.com/watch?v=goH9yXu4jWw" rel="nofollow noreferrer">https://www.youtube.com/watch?v=goH9yXu4jWw</a></p> <p>however, when I attempt to convert the file and then inspect the data type, it is still a FITS file and not a numpy array. If anyone can help it'd be much appreciated!</p> <pre><code>import astropy.io
from astropy.io import fits

truth_north = fits.open('dr9_pz_truth_north.fits')
data = truth_north[1].data
</code></pre> <p>when I then print the data type, it gives astropy.io.fits.fitsrec.FITS_rec</p> <p>I have been told that the FITS_rec class behaves as a numpy array; however, it is essential that I actually convert the file to a numpy array.</p> <p>NB: I have already posted this question on Physics Stack Exchange; however, my question wasn't really answered.</p> <p>Thank you!</p>
<p>Do you mean you want to get rid of all the metadata and retain only the values in the table? In that case, is this what you are looking for?</p> <pre><code>data = np.array(truth_north[1].data) </code></pre>
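<p>Assuming the usual <code>import numpy as np</code>, a quick way to verify the conversion — the result is a structured array, so the table's columns remain accessible by name:</p> <pre><code>import numpy as np

arr = np.array(truth_north[1].data)
print(type(arr))  # &lt;class 'numpy.ndarray'&gt;
print(arr.dtype)  # a structured dtype listing the table's columns
</code></pre>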
python|numpy|astropy|astronomy
0
1,905,566
55,780,287
How to apply Deeplab V3 in Android studio for segmentation purpose?
<p>Actually I am a beginner in TensorFlow and DeepLab V3. I literally don't know how to integrate DeepLab in Android Studio. I only just want to use a TensorFlow trained example model for semantic segmentation in Android, not real-time video images. I have seen a lot of GitHub code but wasn't able to run it on my Android phone. </p> <p>1.<a href="https://www.tensorflow.org/lite/models/segmentation/overview" rel="nofollow noreferrer">https://www.tensorflow.org/lite/models/segmentation/overview</a> </p> <p>If the above code would run well on my phone, then I would be able to train it for my own data set, but first I want to run the model on my phone to see how it actually works. It would be a great help if someone could show me exactly the right way, because I have been trying since yesterday morning.</p>
<p><a href="https://github.com/dailystudio/ml/tree/master/deeplab" rel="nofollow noreferrer">https://github.com/dailystudio/ml/tree/master/deeplab</a>. you can download a demo app here and start working. This app runs off the shelf, just download it from github and open in android studio</p>
tensorflow-lite|deeplab
0
1,905,567
73,448,262
Specify locations in pandas.DataFrame using combination of integer and string
<h2>What I would like</h2> <pre><code>## Sample DataFrame
data = [[0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 1]]
index = ['Item1', 'Item2']
columns = ['20220130', '20220131', '20220201', '20220202', '20220203', '20220204']
df = pd.DataFrame(data, index=index, columns=columns)
print(df)

## Output
#       20220130  20220131  20220201  20220202  20220203  20220204
#Item1         0         0         0         0         1         0
#Item2         0         1         0         0         0         1
</code></pre> <p>The column names are dates. I would like to change 0 to -1 if the values are 0 three or more days in a row.</p> <pre><code>print(df)

## Expected Output
#       20220130  20220131  20220201  20220202  20220203  20220204
#Item1        -1        -1        -1        -1         1         0
#Item2         0         1        -1        -1        -1         1
</code></pre> <h2>What I did</h2> <p>I tried to read values one by one and find where to update (0 to -1).<br /> The problem is <code>(date - 2)</code>. Is there a way to specify column locations using integers and names?</p> <pre><code>for item, row in df.iterrows():
    count = 0
    for date, value in row.iteritems():
        if value == 0:
            count += 1
        else:
            count = 0
        if count &gt;= 3:
            df.loc[item, (date - 2):date]

## Output
# TypeError: unsupported operand type(s) for -: 'str' and 'int'
</code></pre> <h2>Premises</h2> <p>I have other use cases unrelated to dates, so do not convert dates to <code>datetime</code> objects and use <code>timedelta</code>. Here I would like to know how I can specify columns like <code>column_name - 2</code>.</p> <h2>Environment</h2> <p>Python 3.10.5<br /> Pandas 1.4.3</p>
<p>For that to run, you need to cast <code>date</code> to an integer, say <code>df.loc[item, str(int(date) - 2):date]</code>, but the surrounding if/else logic is still incomplete, and the <code>df.loc</code> line as written doesn't actually assign anything.</p>

<p>I had a similar problem some time ago and solved it with a windowed list. This way you don't have to locate the specific element to change, and you don't rely on any convention for your column names. In short, iterate over the rows, make a windowed list of each row and replace the window with your replacement (in this case, <code>[-1,-1,-1]</code>) if the window contains all zeros, otherwise leave it unaltered.</p>

<p>Here is the function to create a windowed list:</p>

<pre><code>from typing import Any, Iterable, List


def windowed_list(
    original_list: Iterable[Any],
    window_size: int,
    minimum_window_size: int = None,
    disjunct: bool = True,
) -&gt; List[List[str]]:
    """Creates a windowed copy of the original_list

    Args:
        original_list (Iterable): list to window
        window_size (int): size of the windows
        minimum_window_size (int): minimum size of the windows. All windows will have
            size greater or equal than this parameter. Set to any number lesser than 2
            to keep all windows.
        disjunct (bool, optional): whether to produce disjunct windows or not. If
            False, produces windows which differ by one element from previous and next
            windows. Example: [[0,1,2], [1,2,3], [3,4,5]] if False, [[0,1,2], [3,4,5]]
            if True. Defaults to True.

    Returns:
        List[List[str]]: the windowed copy of original_list
    """
    if minimum_window_size is None:
        minimum_window_size = window_size
    if minimum_window_size &gt; window_size:
        raise ValueError(
            f"minimum_window_size={minimum_window_size} &gt; window_size={window_size}"
        )
    windowed_list = (
        [
            original_list[i : i + window_size]
            for i in range(len(original_list) - len(original_list) % window_size)
        ]
        if window_size &lt;= len(original_list)
        else original_list
    )
    if minimum_window_size:
        windowed_list = [
            window for window in windowed_list if len(window) &gt;= minimum_window_size
        ]
    return windowed_list[::window_size] if disjunct else windowed_list
</code></pre>

<p>And here it is applied to your problem:</p>

<pre><code>import pandas as pd

data = [[0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 1]]
index = ['Item1', 'Item2']
columns = ['20220130', '20220131', '20220201', '20220202', '20220203', '20220204']
df = pd.DataFrame(data, index=index, columns=columns)

window_size = 3
for row_index, row in df.iterrows():
    # This copy is necessary because pandas is peculiar about overwriting, so I solved it
    # by copying the row, modifying the copy and lastly overwriting the row
    copy_row = row.copy()
    windowed_row = windowed_list(row, window_size, disjunct=False)
    for idx, window in enumerate(windowed_row):
        if sum([val == 0 for val in window]) == window_size:
            copy_row[idx:idx+window_size] = [-1]*window_size
    df.loc[row_index] = copy_row  # overwrite the entire row

print(df)
</code></pre>

<p>Prints:</p>

<pre><code>       20220130  20220131  20220201  20220202  20220203  20220204
Item1        -1        -1        -1        -1         1         0
Item2         0         1        -1        -1        -1         1
</code></pre>

<p>I tested this approach with a couple of cases and it seems to work, but please tell me if you find a case where it doesn't work.</p>
python|pandas|dataframe
1
1,905,568
49,968,526
too many values to unpack with quiver
<p>I am trying to use the quiver function to draw the vector field of a dynamic system.</p> <p>I have 2 lists X and V.</p> <p>I need to build 2 lists UE and VE containing respectively the first and second value returned by f, but I have the following error:</p> <blockquote> <p>too many values to unpack.</p> </blockquote> <p>This is my code.</p> <pre><code>import numpy as np import scipy # donne acces aux librairies scipy, scipy.linalg et scipy.integrate import scipy.linalg import scipy.integrate import matplotlib.pyplot as plt import math %matplotlib inline def f(x,v,t): return v,-(float(g)/l)*np.sin(x) t0=0 x0=1 v0=0 T=20 l=1 g=9.81 UE, VE = np.array([f(x,v,0) for x,v in zip(X,V)]) plt.quiver(X, Y, UE, VE) </code></pre> <p>Any help is much appreciated.</p> <p>Thanks.</p>
<p>By looking at the details of the error, you should be able to find that the error in question occurs at</p> <pre><code>np.array([f(x,v,0) for x,v in zip(X,V)])
</code></pre> <p>where the issue is that the array contains more than two elements, thus unpacking it into <code>UE</code> and <code>VE</code> is impossible.</p> <p>In your case, it looks like what you actually want is the transpose of the array,</p> <pre><code>np.array([f(x,v,0) for x,v in zip(X,V)]).T
</code></pre>
python|numpy
1
1,905,569
49,955,595
Numpy lambda error using serverless
<p>I'm on mac OSX and deploying a python lambda on AWS. </p> <p>I have created a local env <code>source venv/bin/activate</code> following these instructions. </p> <p><a href="https://serverless.com/blog/serverless-python-packaging/" rel="nofollow noreferrer">https://serverless.com/blog/serverless-python-packaging/</a> </p> <p>I have installed all of the packages</p> <pre><code>$ pip install numpy
Requirement already satisfied: numpy in ./venv/lib/python3.5/site-packages (1.14.2)
</code></pre> <p>then I deploy the package using </p> <pre><code>pip freeze &gt; requirements.txt
serverless deploy
</code></pre> <p><strong>error when running on the lambda</strong></p> <blockquote> <p>START RequestId: ################### Version: $LATEST Unable to import module '<code>__main__</code>': Missing required dependencies ['numpy']</p> </blockquote> <p>Also note: my code is not calling numpy; it's calling quandl, and quandl is calling numpy. </p> <p><strong>requirements.txt</strong></p> <pre><code>asn1crypto==0.24.0
certifi==2018.4.16
cffi==1.11.5
chardet==3.0.4
cryptography==2.2.2
idna==2.6
inflection==0.3.1
more-itertools==4.1.0
ndg-httpsclient==0.4.4
numpy==1.14.2
pandas==0.22.0
pyasn1==0.4.2
pycparser==2.18
pyOpenSSL==17.5.0
python-dateutil==2.7.2
pytz==2018.4
Quandl==3.3.0
requests==2.18.4
six==1.11.0
urllib3==1.22
</code></pre> <p>Running the same code on an EC2 instance gives a similar error; it looks like the numpy import is failing.</p> <p>I added the below to the python file </p> <pre><code>import os
import sys

CWD = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, os.path.join(CWD, "lib"))
# end magic four lines
</code></pre> <p>error</p> <pre><code>Traceback (most recent call last):
  File "__main__.py", line 11, in &lt;module&gt;
    import quandl
  File "/home/ubuntu/bots/ssali/quandl/__init__.py", line 7, in &lt;module&gt;
    from .model.database import Database
  File "/home/ubuntu/bots/ssali/quandl/model/database.py", line 18, in &lt;module&gt;
    import quandl.model.dataset
  File "/home/ubuntu/bots/ssali/quandl/model/dataset.py", line 5, in &lt;module&gt;
    from .data import Data
  File "/home/ubuntu/bots/ssali/quandl/model/data.py", line 1, in &lt;module&gt;
    from quandl.operations.data_list import DataListOperation
  File "/home/ubuntu/bots/ssali/quandl/operations/data_list.py", line 1, in &lt;module&gt;
    from quandl.model.data_list import DataList
  File "/home/ubuntu/bots/ssali/quandl/model/data_list.py", line 2, in &lt;module&gt;
    from .data_mixin import DataMixin
  File "/home/ubuntu/bots/ssali/quandl/model/data_mixin.py", line 1, in &lt;module&gt;
    import pandas as pd
  File "/home/ubuntu/bots/ssali/pandas/__init__.py", line 19, in &lt;module&gt;
    "Missing required dependencies {0}".format(missing_dependencies))
ImportError: Missing required dependencies ['numpy']
</code></pre>
<p>I think there were two issues here:</p> <ol> <li>AWS Lambda only supports Python 2.7 and 3.6, so we should use 3.6 instead of 3.5.</li> <li>Packages like Numpy that need to be compiled need to be built for Linux. If you are on Windows or OSX, these packages need to be installed through Docker. Serverless contains a convenient configuration for this. Make sure the following is in your <code>serverless.yml</code>.</li> </ol> <p>From <a href="https://serverless.com/blog/serverless-python-packaging/" rel="nofollow noreferrer">https://serverless.com/blog/serverless-python-packaging/</a></p> <pre><code>plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: non-linux
</code></pre>
python|pandas|numpy|lambda|serverless-framework
1
1,905,570
66,709,299
How to mask the upper triangle of a covariance matrix in sns.heatmap along with a covariance threshold?
<p>To mask the correlations below a threshold I can use the following:</p> <pre><code>corr = df.corr(method = 'spearman')
sns.heatmap(corr, cmap = 'RdYlGn_r', mask = (corr &lt;= T))
</code></pre> <p>Now how can I mask the upper triangle along with the correlation threshold condition?</p>
<p>You can combine both masks using the <a href="https://numpy.org/doc/stable/reference/generated/numpy.logical_or.html" rel="nofollow noreferrer">logical <code>or</code></a> (<code>|</code>).</p> <p>The example code below supposes you want to remove all correlations for which the absolute value is below some threshold:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import seaborn as sns import pandas as pd import numpy as np df = pd.DataFrame(data=np.random.rand(7, 10), columns=[*'abcdefghij']) corr = df.corr(method='spearman') trimask = np.triu(np.ones_like(corr, dtype=bool)) sns.heatmap(corr, cmap='RdYlGn_r', mask=trimask | (np.abs(corr) &lt;= 0.4), annot=True) plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/KJK18.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KJK18.png" alt="triangular heatmap with mask" /></a></p>
python|seaborn|heatmap|correlation|mask
0
1,905,571
64,681,856
Tensorflow for XOR is not predicting correctly after 500 epochs
<p>I'm trying to implement a Neural Network to solve the XOR problem using TensorFlow. I chose sigmoid as the activation function, shape <code>(2, 2, 1)</code> and <code>optimizer=SGD()</code>. I chose <code>batch_size=1</code> because the universe of the problem is 4, so it is really small. The problem is that the predictions are not even close to the right answers. What am I doing wrong?</p> <p>I'm doing this on Google Colab, and the Tensorflow version is 2.3.0.</p> <pre><code>import tensorflow as tf
import numpy as np

x = np.array([[0, 0], [1, 1], [1, 0], [0, 1]], dtype=np.float32)
y = np.array([[0], [0], [1], [1]], dtype=np.float32)

model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(2,)))
model.add(tf.keras.layers.Dense(2, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(2, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(1, activation=tf.keras.activations.sigmoid))
model.compile(optimizer=tf.keras.optimizers.SGD(),
              loss=tf.keras.losses.MeanSquaredError(),
              metrics=['binary_accuracy'])

history = model.fit(x, y, batch_size=1, epochs=500, verbose=False)

print("Tensorflow version: ", tf.__version__)
predictions = model.predict_on_batch(x)
print(predictions)
</code></pre> <p>The output:</p> <pre><code>Tensorflow version:  2.3.0
WARNING:tensorflow:10 out of the last 10 calls to &lt;function Model.make_predict_function.&lt;locals&gt;.predict_function at 0x7f69f7a83a60&gt; triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[[0.5090364 ]
 [0.4890102 ]
 [0.50011414]
 [0.49678832]]
</code></pre>
<p>The problem is your learning rate and the way you are optimizing your weights.</p> <p>Another factor to keep in mind when we are training is the step size that we take in the direction of the gradient. If this step is too great, we can end up in a wrong position, jumping outside of our local minimum. If too small, we could never reach the minimum.</p> <p>By default the Stochastic Gradient Descent (SGD) in keras has a learning rate of 0.01, and this learning rate is fixed during training. If you check your training, the loss is moving too slowly toward the global minimum, or sometimes jumping to higher values. For your specific problem, it's quite difficult to reach the minimum with a fixed learning rate, because you are not taking into consideration the loss function landscape.</p> <p>For example, using <code>Adam</code> as the optimizer algorithm and a <code>learning_rate = 0.02</code>, I was able to reach an accuracy of 1:</p> <pre><code>import tensorflow as tf
import numpy as np

x = np.array([[0, 0], [1, 1], [1, 0], [0, 1]], dtype=np.float32)
y = np.array([[0], [0], [1], [1]], dtype=np.float32)

model = tf.keras.models.Sequential()
model.add(tf.keras.Input(shape=(2,)))
model.add(tf.keras.layers.Dense(2, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(2, activation=tf.keras.activations.sigmoid))
model.add(tf.keras.layers.Dense(1, activation=tf.keras.activations.sigmoid))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),  # learning rate was 0.001 prior to this change
              loss=tf.keras.losses.MeanSquaredError(),
              metrics=['mse', 'binary_accuracy'])
model.summary()

print("Tensorflow version: ", tf.__version__)
history = model.fit(x, y, batch_size=1, epochs=500)
predictions = model.predict_on_batch(x)
print(predictions)
</code></pre> <p>which prints:</p> <pre><code>[[0.05162644]
 [0.06670767]
 [0.9240402 ]
 [0.923379  ]]
</code></pre> <p>I used Adam because it has an adaptive learning rate which is tuned during training, depending on how the training is going.</p> <p>If you use a greater learning rate (0.1) but keep using SGD, in the training loss history you can see that at one point the accuracy reaches 1, but right after that it jumps to lower values. That's because you have a fixed learning rate. Another strategy would be to stop the training when you reach that value with SGD, maybe with a keras <code>callback</code>.</p> <p>Don't forget to tune your learning rate and to choose the right optimizer. It's fundamental to obtaining fast training and a good minimum.</p> <p>Also consider changing the net architecture (adding nodes, and using other activation functions for the hidden layers, like Relu).</p> <p><a href="https://www.jeremyjordan.me/nn-learning-rate/" rel="nofollow noreferrer">Here are some useful details on how to handle the learning rate</a> <a href="https://i.stack.imgur.com/BDCLv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BDCLv.png" alt="enter image description here" /></a></p>
python|tensorflow|machine-learning|keras|neural-network
4
1,905,572
64,773,878
How do I get a random item in a nested list in a dictionary?
<p>I am trying to have 2 values assigned to each key, and I'm not sure how to access just one of the values for a key.</p> <p>I want to display the shop value of each item (the second item in the list) randomly. E.g. printing <code>s1</code> might give <code>10</code> if it randomly chooses the key &quot;Tomato&quot;. Here is the code:</p> <pre><code>import random

# Shop Key {[item:[cost, value]
shop = {&quot;Tomato&quot;: [random.randint(4, 6), 10],
        &quot;Loaf of Bread&quot;: [random.randint(9, 11), 20],
        &quot;Banana&quot;: [random.randint(6, 8), 15],
        &quot;Apple&quot;: [random.randint(3, 5), 8]}

s1 = random.choice(list(shop.keys()))
print(s1)
s2 = random.choice(list(shop.keys()))
print(s2)
s3 = random.choice(list(shop.keys()))
print(s3)
</code></pre>
<p>To be clear, you are assigning one value to each key. The value is a list that contains two things, &quot;cost&quot; and &quot;value&quot;.</p> <p><code>s1 = random.choice(list(shop.keys()))</code> randomly selects a key from your dictionary, <code>shop</code>. To retrieve the value associated with the key you can use <code>s1_cost, s1_value = shop[s1]</code> or <code>s1_cost, s1_value = shop.get(s1)</code>.</p> <p>To retrieve a random item from the list you can use <code>random.choice(shop[s1])</code>.</p>
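<p>For instance, a minimal sketch putting those pieces together (the variable names are just illustrative):</p> <pre><code>import random

s1 = random.choice(list(shop))   # a random item name, e.g. 'Tomato'
s1_cost, s1_value = shop[s1]     # unpack its [cost, value] list
print(s1, 'has shop value', s1_value)
</code></pre>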
python|python-3.x|python-3.8
1
1,905,573
64,048,705
How to upload file using multipart/form-data through python for telegram bot
<p>I am making a simple Telegram bot in Python through Google Colab and want to upload a voice note through the sendVoice method, so that Telegram clients display the file as a playable voice message. I have found in the Telegram Bot API manual that I must use multipart/form-data to upload audio to the bot. What is this multipart/form-data, and how can I upload and use a file through the sendVoice method?</p> <p>Thank you.</p>
<p>It is possible to send a file in the following manner via telebot, and you don't need to worry about form-data.</p> <pre><code>f = open('file_path', 'rb')
bot.send_audio(chat_id, f)
</code></pre>
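<p>If you want to see what the multipart/form-data upload looks like without a bot library, here is a minimal sketch using <code>requests</code>: passing a file through the <code>files=</code> argument is exactly what makes the request multipart/form-data. <code>TOKEN</code>, <code>chat_id</code> and the file path are placeholders:</p> <pre><code>import requests

TOKEN = '123456:ABC...'   # placeholder bot token
chat_id = 123456789       # placeholder chat id

# files= makes requests encode the body as multipart/form-data
with open('note.ogg', 'rb') as f:
    r = requests.post(f'https://api.telegram.org/bot{TOKEN}/sendVoice',
                      data={'chat_id': chat_id},
                      files={'voice': f})
print(r.json())
</code></pre>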
python|telegram-bot|python-telegram-bot
0
1,905,574
63,896,424
set equal amount of y-ticks for every subplot
<p>I am trying to plot a nice subplot graph, but I am unable to fix the problem of an unequal number of y-ticks. In the image below, for example, the &quot;VVIX Beta&quot; plot has only 3 ticks, while others have four to five. Ideally I would like to have five y-ticks for each graph. Similar questions have already been asked, and suggested:</p> <pre><code>ax1.locator_params(axis='y', nbins=5)
</code></pre> <p>where ax1 to ax7 are the seven subplots. This, however, didn't work; the output seemed to ignore the command. Does anyone know an alternative approach? Any help is highly appreciated, and thanks in advance.</p> <p><a href="https://i.stack.imgur.com/DObYJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DObYJ.png" alt="enter image description here" /></a></p>
<p>One possible option is to use <em>MaxNLocator</em>, passing:</p> <ul> <li><em>nbins</em> as the number of ticks,</li> <li><em>min_n_ticks</em> smaller by one than <em>nbins</em>.</li> </ul> <p>Look at the following example:</p> <pre><code>import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

def draw(ax, nBins):
    ax.plot(x, y, 'g', label='line one', linewidth=3)
    ax.plot(x2, y2,'c', label='line two', linewidth=3)
    ax.set_title('Epic Info')
    ax.set_ylabel('Y axis')
    ax.set_xlabel('X axis')
    if nBins &gt; 0:
        ax.yaxis.set_major_locator(ticker.MaxNLocator(nbins=nBins, min_n_ticks=nBins-1))
    else:
        ax.yaxis.set_major_locator(ticker.MaxNLocator(nbins='auto'))
    ax.legend()
    ax.grid(True, color='k')

x  = [ 5,  8, 10];  y  = [12.1, 15.2,  6.3]
x2 = [ 6,  9, 11];  y2 = [ 5.4, 15.5,  7.6]

fig, axs = plt.subplots(2, 2, figsize=(10,7), constrained_layout=True)
fig.suptitle('Global title')
draw(axs.flat[0], 4)
draw(axs.flat[1], 5)
draw(axs.flat[2], 6)
draw(axs.flat[3], 0)
plt.show()
</code></pre> <p>As you can see, it generates 4 almost identical plots:</p> <ul> <li>3 with an explicitly defined number of <em>y</em> ticks,</li> <li>and the last with the <em>auto</em> setting.</li> </ul> <p>The result is:</p> <p><a href="https://i.stack.imgur.com/ydBl6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ydBl6.png" alt="enter image description here" /></a></p> <p>Caution: for some <em>nBins</em> values the actual number of ticks is different than expected, but at least you can experiment with my code to draw your data.</p>
python|matplotlib|plot
1
1,905,575
53,156,618
api.foursquare.com/v2/venues/search for every coordinate in python dataframe
<h1>result dataframe</h1> <pre><code>   Municipality     Latitude  Longitude  Population2011
0  Agia Paraskevi   38.01263  23.82055   59,704
1  Agios Dimitrios  37.93667  23.73320   71,294
2  Alimos           37.91368  23.71506   41,720
</code></pre> <pre><code>search_query = 'Hospital'
categoryId = '4bf58dd8d48988d104941735'
</code></pre> <h1>Define function to iterate over every row of the dataframe, using its coordinates to fetch all &quot;Hospital&quot; venues for that coordinate</h1> <pre><code>def getNearbyVenues(names, latitudes, longitudes, radius=500):
    venues_list=[]
    for name, lat, lng in zip(names, latitudes, longitudes):
        # create the API request URL
        url = 'https://api.foursquare.com/v2/venues/search?client_id={}&amp;client_secret={}&amp;ll={},{}&amp;v={}&amp;query={}&amp;radius={}&amp;limit={}&amp;locale={}&amp;categoryId={}'.format(CLIENT_ID, CLIENT_SECRET, latitudes, longitudes, VERSION, search_query, radius, LIMIT, locale, categoryId)
        # make the GET request
        results = requests.get(url).json()[&quot;response&quot;]
        # return only relevant information for each nearby venue
        venues_list.append([(
            name, lat, lng,
            v['venues']['name'],
            v['venues']['location']['lat'],
            v['venues']['location']['lng']) for v in results])
    nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
    return(nearby_venues)
</code></pre> <h1>Call the above function using the dataframe &quot;result&quot; columns as input:</h1> <pre><code>all_venues = getNearbyVenues(names=result['Municipality'],
                             latitudes=result['Latitude'],
                             longitudes=result['Longitude']
                             )
</code></pre> <p>Hello! Please, I need your help! I am trying to pass each of the dataframe's coordinates as input to get the hospitals in each municipality and create a new dataframe.</p> <p>But it seems I cannot parse the JSON correctly (I get an empty dictionary...).</p>
<p>Hello! I slightly modified the function and it worked:</p> <pre><code>def getNearbyVenues(names, lat1, long1, radius=3000):
    venues_list=[]
    for name, lat, lng in zip(names, lat1, long1):
        # create the API request URL
        url1 = 'https://api.foursquare.com/v2/venues/search?client_id={}&amp;client_secret={}&amp;ll={},{}&amp;v={}&amp;query={}&amp;radius={}&amp;limit={}&amp;locale={}&amp;categoryId={}'.format(CLIENT_ID, CLIENT_SECRET, lat, lng, VERSION, search_query, radius, LIMIT, locale, categoryId)
        # make the GET request
        results = requests.get(url1).json()[&quot;response&quot;][&quot;venues&quot;]
        # return only relevant information for each nearby venue
        venues_list.append([(
            name, lat, lng,
            v['name'],
            v['location']['lat'],
            v['location']['lng']) for v in results])
    nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
    return(nearby_venues)
</code></pre>
python-3.x|foursquare
0
1,905,576
71,448,719
client.get_channel(channelid) returns None
<p>I have searched the web several times for this issue, and there seem to be a lot of people having this problem. However, they all seemed to have fixed it by putting the expression in the on_ready() event. But I want to use it in a command, where the user passes a channel id, which will be a string, and I made this code:</p> <pre><code>@client.command()
async def send_message(ctx, channel, *, msg):
    channelid = int(channel)
    channel = client.get_channel(channelid)
    await channel.send(msg)
</code></pre> <p>When I run the code, and execute the command, it says:</p> <blockquote> <p>Command raised an exception: AttributeError: 'NoneType' object has no attribute 'send'</p> </blockquote> <p>Does anyone know how to fix this?</p>
<p>You're making it a little complicated for yourself. You can make it much simpler.</p> <p>In itself, there's not much wrong with the code for now, but it's best to use the tools at our disposal.</p> <p>To get a channel you can use <a href="https://discordpy.readthedocs.io/en/stable/api.html?highlight=discord%20textchannel#discord.TextChannel" rel="nofollow noreferrer">discord.TextChannel</a> as an argument annotation.</p> <p><strong>It will look like this:</strong></p> <pre class="lang-py prettyprint-override"><code>channel: discord.TextChannel
</code></pre> <p>Here you can use either the ID, the name of the channel or a mention.</p> <p>The rest of your code looks right to me at first glance.</p> <p><strong>Your new code:</strong></p> <pre class="lang-py prettyprint-override"><code>@client.command()
async def send_message(ctx, channel: discord.TextChannel, *, msg):
    await channel.send(msg)  # send the &quot;msg&quot; to your specified &quot;channel&quot;.
</code></pre>
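<p>Assuming a command prefix of <code>!</code> (the prefix isn't shown in the question), the command can then be invoked with a channel mention, name or ID, for example:</p> <pre><code>!send_message #general Hello everyone!
</code></pre>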
python|discord|discord.py
2
1,905,577
62,703,007
Getting coordinates from a numpy array
<p>So maybe this is a basic question about numpy, but I can't see how to do it. Let's say I have a 2D numpy array like this:</p> <pre><code>import numpy as np

arr = np.array([[  0., 460., 166., 167., 123.],
                [  0.,   0.,   0.,   0.,   0.],
                [  0.,  81.,   0.,  21.,   0.],
                [  0., 128.,  23.,   0.,  12.],
                [  0.,  36.,   0.,  13.,   0.]])
</code></pre> <p>And I want the coordinates from the subarray</p> <pre><code>[[ 0., 21.,  0.],
 [23.,  0., 12.],
 [ 0., 13.,  0.]]
</code></pre> <p>I tried slicing my original array and then finding the coordinates using <code>np.argwhere</code>, like this:</p> <pre><code>newarr = np.argwhere(arr[2:, 2:] != 0)

# output
# [[0 1]
#  [1 0]
#  [1 2]
#  [2 1]]
</code></pre> <p>Which are indeed the coordinates of the subarray, but I was expecting the coordinates corresponding to my original array. The desired output is:</p> <pre><code>[[2 3]
 [3 2]
 [3 4]
 [4 3]]
</code></pre> <p>If I use <code>np.argwhere</code> with my original array I get a bunch of coordinates that I don't need, so I can't figure out how to get what I need. Any help, or a pointer in the right direction, would be great. Thank you!</p>
<p>Assume the origin is at the top left corner of the matrix and the matrix itself is placed in the 4th quadrant of Cartesian space, with the horizontal axis holding the column indices and the vertical axis, coming down, holding the row indices.</p> <p>You will see the whole sub-matrix is origin-shifted to the <code>(2,2)</code> coordinate. The coordinates you get are with respect to the sub-matrix at the origin; to map them back to <code>(2,2)</code>, just add <code>(2,2)</code> to every element:</p> <pre><code>&gt;&gt;&gt; np.argwhere(arr[2:, 2:] != 0) + [2, 2]
array([[2, 3],
       [3, 2],
       [3, 4],
       [4, 3]])
</code></pre> <p>For other examples:</p> <pre><code>&gt;&gt;&gt; col_shift, row_shift = 3, 2
&gt;&gt;&gt; arr[row_shift:, col_shift:]
array([[21.,  0.],
       [ 0., 12.],
       [13.,  0.]])
&gt;&gt;&gt; np.argwhere(arr[row_shift:, col_shift:] != 0) + [row_shift, col_shift]
array([[2, 3],
       [3, 4],
       [4, 3]])
</code></pre> <p>For a fully inside sub-matrix, you can bound the columns and rows:</p> <pre><code>&gt;&gt;&gt; col_shift, row_shift = 0, 1
&gt;&gt;&gt; col_bound, row_bound = 4, 4
&gt;&gt;&gt; arr[row_shift:row_bound, col_shift:col_bound]
array([[  0.,   0.,   0.,   0.],
       [  0.,  81.,   0.,  21.],
       [  0., 128.,  23.,   0.]])
&gt;&gt;&gt; np.argwhere(arr[row_shift:row_bound, col_shift:col_bound] != 0) + [row_shift, col_shift]
array([[2, 1],
       [2, 3],
       [3, 1],
       [3, 2]])
</code></pre>
python|arrays|numpy
2
1,905,578
61,925,356
Selenium cannot click onto table row even though manually it works
<p>My page has a table. The rows get dynamically loaded into the table through an API call. A row in the finished table looks like this:</p> <pre><code>&lt;tr class="ant-table-row ant-table-row-level-0" data-row-key="0"&gt;
    &lt;td class=""&gt;Value1&lt;/td&gt;
    &lt;td class=""&gt;Value2&lt;/td&gt;
    &lt;td class=""&gt;Value3&lt;/td&gt;
    &lt;td class=""&gt;Value4&lt;/td&gt;
&lt;/tr&gt;
</code></pre> <p>I want Selenium to click on the first row. My code looks like this:</p> <pre><code>wait.until(EC.element_to_be_clickable((By.CLASS_NAME, 'ant-table-row-level-0')))
time.sleep(3)
rows = self.selenium.find_elements_by_class_name("ant-table-row-level-0")
rows[0].click()
</code></pre> <p>However, I am getting this error:</p> <blockquote> <p>selenium.common.exceptions.ElementNotInteractableException: Message: Element could not be scrolled into view</p> </blockquote> <p>When the Selenium browser comes up, I can see the table and I can click on the row (and then one div that was invisible becomes visible). So when I do it manually, it works, but Selenium somehow has problems clicking on this row.</p> <p>I also increased the timer to 10 and 20 seconds without success, and I tried this just to verify I am clicking the right row:</p> <pre><code>td = self.selenium.find_elements_by_tag_name("td")
for t in td:
    if "Value1" in t.get_attribute("innerHTML"):
        t.click()
</code></pre> <p>Nothing works. The onclick listener for the table is on the row itself, so clicking on a td tag should also trigger something.</p> <p>What am I doing wrong?</p>
<pre><code>td = self.selenium.find_elements_by_tag_name("td")
for t in td:
    if "Value1" in t.get_attribute("innerHTML"):
        t.click()
</code></pre> <p>Turns out the above code did work, but does anyone know why the regular approach does not?</p>
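<p>One common workaround for &quot;could not be scrolled into view&quot; failures is to scroll to the element and click it through JavaScript instead of the WebDriver click. A sketch (the class name is taken from the row in the question):</p> <pre><code>rows = self.selenium.find_elements_by_class_name('ant-table-row-level-0')
# scroll the row into view, then click it through JavaScript
self.selenium.execute_script('arguments[0].scrollIntoView(true);', rows[0])
self.selenium.execute_script('arguments[0].click();', rows[0])
</code></pre>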
python-3.x|selenium|selenium-webdriver
0
1,905,579
61,747,808
Convert dict to str
<p>I have a <code>str</code> payload that looks like this:</p> <pre><code>payload = "{\"fqdn\":\"examplazdazdazazdzadza.com\",\"duration\":5,\"owner\":{\"city\":\"Paris\",\"given\":\"Alice\",\"family\":\"Doe\",\"zip\":\"75001\",\"country\":\"FR\",\"streetaddr\":\"5 rue neuve\",\"phone\":\"+33.123456789\",\"type\":0,\"email\":\"alice@example.org\"}}"
</code></pre> <p>This is the payload from the Gandi <a href="https://api.gandi.net/docs/domains/#post-v5-domain-domains" rel="nofollow noreferrer">API</a>.</p> <p>I want to make the payload a bit more dynamic and have some flexibility, so I tried a <code>dict</code>:</p> <p><code>domain = 'example.com'</code></p> <pre><code>payload = {
    'fqdn': domain,
    'duration': 1,
    'owner': {
        "city": "Paris",
        "given": "Alice",
        "family": "Doe",
        "zip": "75001",
        "country": "FR",
        "streetaddr": "5 rue neuve",
        "phone": "+33.123456789",
        "state": "FR-J",
        "type": 0,
        "email": "alice@example.org"
    }
}
</code></pre> <p>After this I need to convert back to the original data type (str), which I do like so:</p> <p><code>payload = '\n'.join('\%s\: "\%s\"' % (k, v) for k, v in payload.items())</code></p> <p>However, this returns</p> <blockquote> <p>Bad Request</p> </blockquote> <p>Any ideas how to get this done properly?</p>
<p>You can do this using the <a href="https://docs.python.org/3/library/json.html" rel="nofollow noreferrer"><code>json</code> module</a>:</p> <pre><code>In [409]: import json

In [410]: json.dumps(payload)
Out[410]: '{"fqdn": "domain", "duration": 1, "owner": {"city": "Paris", "given": "Alice", "family": "Doe", "zip": "75001", "country": "FR", "streetaddr": "5 rue neuve", "phone": "+33.123456789", "state": "FR-J", "type": 0, "email": "alice@example.org"}}'
</code></pre> <h3>After OP's comments:</h3> <pre><code>In [411]: domain = 'example.com'

In [412]: payload = {
     ...:     'fqdn': domain,
     ...:     'duration': 1,
     ...:     'owner': {
     ...:         "city": "Paris",
     ...:         "given": "Alice",
     ...:         "family": "Doe",
     ...:         "zip": "75001",
     ...:         "country": "FR",
     ...:         "streetaddr": "5 rue neuve",
     ...:         "phone": "+33.123456789",
     ...:         "state": "FR-J",
     ...:         "type": 0,
     ...:         "email": "alice@example.org"
     ...:     }
     ...: }

In [413]: json.dumps(payload)
Out[413]: '{"fqdn": "example.com", "duration": 1, "owner": {"city": "Paris", "given": "Alice", "family": "Doe", "zip": "75001", "country": "FR", "streetaddr": "5 rue neuve", "phone": "+33.123456789", "state": "FR-J", "type": 0, "email": "alice@example.org"}}'
</code></pre>
python|python-3.x|dictionary
6
1,905,580
61,966,606
How to handle error in a for loop (python) and continue execution
<p>I'm trying to get my script to still print action 2, <code>print(list1[3])</code>, and skip action 1, <code>print(list1[9])</code>, if it cannot execute it. I apologise in advance if my question is not clear enough; I'm trying to do my best to explain the issue. I'm a beginner.</p> <pre><code>list1 = ['a','b','c','d','f']

try:
    for i in list1:
        #action 1
        print (list1[9])
        #action 2
        print (list1[3])
        break
except:
    pass
</code></pre>
<p>Just put a try for each action instead of both actions together, like this:</p> <pre class="lang-py prettyprint-override"><code>list1 = ['a','b','c','d','f']
for i in list1:
    try:
        #action 1
        print (list1[9])
    except IndexError:
        pass
    try:
        #action 2
        print (list1[3])
    except IndexError:
        pass
    break
</code></pre>
python
2
1,905,581
67,527,254
Is there a way to let selenium ignore the timeoutexception Error and continue if it happens?
<p>I am using Selenium with Python, and my chromedriver and chrome.exe are version 90.0.</p> <p>I am having a problem where the script stops once a timeout error occurs, and the problem is that it always happens, sometimes after a few hours, sometimes after a few minutes.</p> <p>It shows up something like:</p> <pre><code>selenium.common.exceptions.timeoutexception: message: timeout: timed out receiving message from renderer: -0.014
</code></pre> <p>When this happens, I see the webpage is usually loading and basically can't finish loading, resulting in the error occurring.</p> <p>Is there a way to ignore this error and let the script either refresh or just continue to the next webpage in the list? An example of the loop would be like this:</p> <pre><code>while True:
    driver.get(thislist[i])
    if .......:
        i = i + 1
</code></pre> <p>I tried many fixes, but none of them work. I tried the beta chrome.exe and chrome webdriver version 91.0. I also tried including chrome_options, and I also tried some headers that are not in this code, but I always ended up receiving the timeout error. I asked this question a few times before, but there hasn't been a fix that works.</p> <pre><code>chrome_options.add_argument('--enable-automation')
chrome_options.add_argument('--lang=en')
chrome_options.add_argument(&quot;window-size=1920,1080&quot;)
chrome_options.headless = True
chrome_options.add_experimental_option (&quot;debuggerAddress&quot;, &quot;localhost:8989&quot;)
driver = webdriver.Chrome(executable_path='C:/Webdriver/chromedriver', chrome_options=chrome_options)
</code></pre>
<p>What about adding a <code>try</code>/<code>except</code> around the operation that raises the error? If there is an error, you simply <code>continue</code> in your loop.</p> <p>You can catch <code>TimeoutException</code> directly, or the broader <code>Exception</code> if you don't know the kind of exception that will be thrown.</p> <hr /> <p>You can also lengthen Selenium's page-load timeout; in Python that is done with <code>driver.set_page_load_timeout(300)</code>.</p>
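<p>A minimal sketch of that loop, reusing <code>thislist</code> from the question (the processing step is a placeholder):</p> <pre><code>from selenium.common.exceptions import TimeoutException

i = 0
while i &lt; len(thislist):
    try:
        driver.get(thislist[i])
    except TimeoutException:
        # the page never finished loading: skip it (or driver.refresh() to retry)
        i = i + 1
        continue
    # ... process the loaded page here ...
    i = i + 1
</code></pre>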
python|selenium
1
1,905,582
60,595,408
Understanding Prime Sieve
<p>I'm trying to understand a user's implementation of the Sieve of Eratosthenes; the code is only a few lines, but I'm having remarkable difficulty understanding it:</p> <pre><code>def eratos_sieve(n):
    sieve = [True] * n
    for i in range(3,int(n**0.5)+1,2):
        if sieve[i]:
            sieve[i*i::2*i]=[False]*((n-i*i-1)//(2*i)+1)
    return len([2] + [i for i in range(3,n,2) if sieve[i]])
</code></pre> <p>My understanding thus far is this: we're defining a function with a parameter n. We begin by assuming that n is prime. Then, for some reason, we have a for loop with 3 parameters! And after that if statement, I'm just totally lost. If anyone can help, that would be great!</p>
<p>Perhaps my code commentary below might be of help to you. I also simplified the code where I could (and hopefully didn't break it!):</p> <pre><code>def eratos_sieve(n):
    ''' Return the number of primes less than n '''

    # Create an array [True, True, True, ...] of length n
    # i.e. assume all numbers are prime unless proven otherwise
    sieve = [True] * n

    # loop over odd numbers from 3 to sqrt(n)
    for i in range(3, int(n**0.5) + 1, 2):
        if sieve[i]:  # if sieve[i] is still True, i is a prime!

            # Assign elements of sieve from i squared to the
            # end of the array skipping by 2 * i (hit multiples
            # of i but skip the even ones) to False.  Since this
            # is an array to array assignment, create an array of
            # [False, False, False, ...] of the correct size:
            sieve[i*i::2*i] = [False] * ((n-i * i-1) // (i*2) + 1)

    # Add up odd elements of sieve (True = 1, False = 0),
    # Add one for '2' which we assumed prime:
    return 1 + sum(sieve[3::2])
</code></pre> <p>The code is complicated somewhat in that it's skipping over even numbers, which is fine. A more optimized solution would also reduce the number of elements in <code>sieve</code> by half instead of storing the even elements it's ignoring. This, of course, makes the indexing more complicated, but doable.</p>
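<p>For reference, here is one possible sketch of the half-size optimization just mentioned, where only odd numbers are stored and index <code>i</code> of the array stands for the number <code>2*i + 1</code>:</p> <pre><code>def eratos_sieve_compact(n):
    ''' Count primes below n, storing only odd numbers '''
    if n &lt;= 2:
        return 0
    sieve = [True] * (n // 2)   # sieve[i] represents the number 2*i + 1
    sieve[0] = False            # 1 is not prime
    for i in range(3, int(n**0.5) + 1, 2):
        if sieve[i // 2]:       # i is prime
            # odd multiples of i, starting at i squared
            sieve[i*i // 2::i] = [False] * len(sieve[i*i // 2::i])
    return 1 + sum(sieve)       # +1 accounts for the prime 2
</code></pre>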
python|primes|sieve
1
1,905,583
71,129,515
Azure Datalake python error: Invalid base64-encoded string: number of data characters (85) cannot be 1 more than a multiple of 4
<p>When trying to upload a CSV in a folder to the Data Lake blob, I get an error message related to base64 encoding. The CSV is in string format and seems correct.</p> <p>I have tried encoding the CSV many ways and nothing seems to work.</p> <pre><code>service_client = DataLakeServiceClient(
    account_url=datalake_url,
    credential=&quot;supersecret&quot;)
unstable_system_client = service_client.get_file_system_client(file_system=&quot;my_container&quot;)

def store_remote_csv(
        base_path: str,
        mega_csv: Optional[str],
        table_name: str,
        unstable_system_client: FileSystemClient):
    file_path = f&quot;{base_path}{table_name}&quot;
    try:
        logger.info(f&quot;Uploading csv to the datalake&quot;)
        unstable_system_client.create_file(file=file_path)
        output_file = unstable_system_client.get_file_client(file_path=file_path)
        output_file.upload_data(mega_csv, overwrite=True)
    except Exception as e:
        logger.error(f&quot;Failure {e}.&quot;)

store_remote_csv(&quot;/base/&quot;,
                 pd.read_csv(&quot;some.csv&quot;).to_csv(),
                 &quot;table_name&quot;,
                 unstable_system_client
                 )

OUTPUT:
Failure: Invalid base64-encoded string: number of data characters (85) cannot be 1 more than a multiple of 4.
</code></pre>
<p>The issue is in the credentials.</p> <pre><code>service_client = DataLakeServiceClient(
    account_url=datalake_url,
    credential=&quot;supersecret&quot;)
</code></pre> <p>credential should be of type ClientSecretCredential (from azure.identity), NOT a string.</p> <pre><code>from azure.identity import ClientSecretCredential

client_secret_credential = ClientSecretCredential(
    tenant_id=datalake_tenant_id,
    client_id=datalake_client_id,
    client_secret=datalake_client_secret)

service_client = DataLakeServiceClient(
    account_url=datalake_url,
    credential=client_secret_credential)
</code></pre> <p>The client is essentially trying to decode the string into an object.</p>
python|azure|csv|azure-data-lake
0
1,905,584
64,523,623
Inheriting and __init__
<p>I created my own class which is basically the same as <code>Button</code> from <strong>tkinter</strong>, but I want to change a few things. I need to redefine the <code>__init__</code> method so that it does everything the <strong>Button</strong> <code>__init__</code> method does, but I want to add some stuff, like appending the instance to the list of students. So I used <code>super().__init__()</code>, but I have no idea what goes inside the brackets. I know the basics of inheritance from YouTube tutorials, but inheriting from the Button class seems more complicated. I tried copying some stuff from the Button class in the tkinter module, but I still get errors. How do I fix this?</p> <pre><code>from tkinter import *

root = Tk()

class Student(Button):
    students = []
    def __init__(self, master=None, cnf={}, **kw):
        super().__init__(master, cnf, kw)
        Student.students.append(self)

enrique = Student(root,
                  padx=30,
                  pady=10,
                  text=&quot;Enrique&quot;,
                  fg=&quot;#000000&quot;,
                  bg=&quot;#00FFFF&quot;
                  )
enrique.grid(row=0, column=0)

root.mainloop()

&gt;&gt;&gt;TypeError: __init__() takes from 1 to 3 positional arguments but 4 were given
</code></pre>
<p>You missed the <code>**</code> from <code>kw</code>:</p> <pre class="lang-py prettyprint-override"><code>class Student(Button):
    students = []
    def __init__(self, master=None, cnf={}, **kw):
        super().__init__(master, cnf, **kw)
        Student.students.append(self)
</code></pre>
python|inheritance|tkinter
0
1,905,585
70,036,333
use singleton logic within a classmethod
<p>I am currently using this piece of code:</p> <pre class="lang-py prettyprint-override"><code>class FileSystem(metaclass=Singleton):
    &quot;&quot;&quot;File System manager based on Spark&quot;&quot;&quot;

    def __init__(self, spark):
        self._path = spark._jvm.org.apache.hadoop.fs.Path
        self._fs = spark._jvm.org.apache.hadoop.fs.FileSystem.get(
            spark._jsc.hadoopConfiguration()
        )

    @classmethod
    def without_spark(cls):
        with Spark() as spark:
            return cls(spark)
</code></pre> <p>My object obviously depends on the Spark object (another object that I created; if you need to see its code I can add it, but I do not think it is required for my current issue).</p> <p>It can be used in 2 different ways, resulting in the same behavior:</p> <pre class="lang-py prettyprint-override"><code>fs = FileSystem.without_spark()

# OR

with Spark() as spark:
    fs = FileSystem(spark)
</code></pre> <p>My problem is that, even though <code>FileSystem</code> is a singleton, using the class method <code>without_spark</code> makes me enter (<code>__enter__</code>) the context manager of Spark, which leads to a connection to the Spark cluster, and that takes a lot of time. How can I make it so that the first execution of <code>without_spark</code> makes the connection, but subsequent calls only return the already created instance?</p> <p>The expected behavior would be something like this:</p> <pre class="lang-py prettyprint-override"><code>    @classmethod
    def without_spark(cls):
        if not cls.exists:  # I do not know how to persist this information in the class
            with Spark() as spark:
                return cls(spark)
        else:
            return cls()
</code></pre>
<p>I think you are looking for something like</p> <pre><code>import contextlib

class FileSystem(metaclass=Singleton):
    &quot;&quot;&quot;File System manager based on Spark&quot;&quot;&quot;

    spark = None

    def __init__(self, spark):
        self._path = spark._jvm.org.apache.hadoop.fs.Path
        self._fs = spark._jvm.org.apache.hadoop.fs.FileSystem.get(
            spark._jsc.hadoopConfiguration()
        )

    @classmethod
    def without_spark(cls):
        if cls.spark is None:
            cm = cls.spark = Spark()
        else:
            cm = contextlib.nullcontext(cls.spark)
        with cm as s:
            return cls(s)
</code></pre> <p>The first time <code>without_spark</code> is called, a new instance of <code>Spark</code> is created and used as a context manager. Subsequent calls reuse the same <code>Spark</code> instance and use a null context manager.</p> <hr /> <p>I believe your approach will work as well; you just need to initialize <code>exists</code> to be <code>False</code>, then set it to <code>True</code> the first (and every, really) time you call the class method.</p> <pre><code>class FileSystem(metaclass=Singleton):
    &quot;&quot;&quot;File System manager based on Spark&quot;&quot;&quot;

    exists = False

    def __init__(self, spark):
        self._path = spark._jvm.org.apache.hadoop.fs.Path
        self._fs = spark._jvm.org.apache.hadoop.fs.FileSystem.get(
            spark._jsc.hadoopConfiguration()
        )

    @classmethod
    def without_spark(cls):
        if not cls.exists:
            cls.exists = True
            with Spark() as spark:
                return cls(spark)
        else:
            return cls()
</code></pre>
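<p>For completeness, here is a common way the <code>Singleton</code> metaclass referenced in the question is implemented (an assumption, since its code isn't shown); this is what makes repeated <code>FileSystem(...)</code> calls return the same instance:</p> <pre><code>class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # create the instance only once, then keep returning it
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]
</code></pre>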
python|python-3.x
1
1,905,586
11,377,028
Getting index of numpy.ndarray
<p>I have a one-dimensional array of the type numpy.ndarray and I want to know the index of its max entry. After finding the max, I used</p> <pre><code>peakIndex = numpy.where(myArray==max)
</code></pre> <p>to find the peak's index. But instead of the index, my script spits out</p> <pre><code>peakIndex = (array([1293]),)
</code></pre> <p>I want my code to spit out just the integer 1293. How can I clean up the output?</p>
<p>Rather than using <code>numpy.where</code>, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html"><code>numpy.argmax</code></a>.</p> <pre><code>peakIndex = numpy.argmax(myArray)
</code></pre> <p><code>numpy.argmax</code> returns a single number, the flattened index of the first occurrence of the maximum value. If <code>myArray</code> is multidimensional you might want to convert the flattened index to an index tuple:</p> <pre><code>peakIndexTuple = numpy.unravel_index(numpy.argmax(myArray), myArray.shape)
</code></pre>
python
7
1,905,587
17,962,005
Exiting a function using only basic programming
<p>I am working on my assignment, and I need to exit a function if a value returns True. It's a Go Fish game (you have all been a great help so far!) and I'm trying to figure out how to exit a function.</p> <pre><code>def TargetPlayer(player_number, pHands, sDeck):
    """User inputs which player they are choosing as the target player,
    returns target player's number"""

    gameoverhands = GameOverHands(pHands)
    if gameoverhands == True:
        **missing code here**
        return gameoverhands
    else:
        ShowMessage("TURN: Player " + str(player_number) + ", it's your turn. ")
        if player_number == 0:
            ask = raw_input("Who do you want to ask? (1-3) ")
            while not ask.isdigit() or ask not in "123":
                etc ....
        return other_values
</code></pre> <p>I guess the thing to ask is: can you have different return statements that only return a value if the if statement is executed? gameoverhands basically means you have no cards in your hand and the game is over, so I need to somehow jump directly to the final function in the game, whereas the else branch will (hopefully) execute the rest of the code repeatedly until game over occurs. Is this possible with very basic programming? Any input would be fantastic.</p>
<p>It's good style to have one single <code>return</code> statement in Python (as well as most other languages), but you can have several.</p> <p>Here is a sample with one final return at the end (note that the result of <code>GameOverHands</code> is stored and tested under one name):</p> <pre><code>def TargetPlayer(player_number, pHands, sDeck):
    """User inputs which player they are choosing as the target player,
    returns target player's number"""

    result = GameOverHands(pHands)
    if result == True:
        **missing code here**
    else:
        ShowMessage("TURN: Player " + str(player_number) + ", it's your turn. ")
        if player_number == 0:
            ask = raw_input("Who do you want to ask? (1-3) ")
            while not ask.isdigit() or ask not in "123":
                etc ....
            result = "Thisnthat"

    return result
</code></pre> <p>This one defines "inner" functions and chooses between them:</p> <pre><code>def outerfunc(cond):
    def inner1():
        print('inner1')

    def inner2():
        print('inner2')

    if cond:
        chosenfunc = inner1
    else:
        chosenfunc = inner2

    chosenfunc()

outerfunc(True)
outerfunc(False)
</code></pre>
python|return
1
1,905,588
61,153,837
Trying to get a specific number of characters from a list of characters which then pass the random function
<p>First of all I would like to point out that I am JUST STARTING to learn to code, so I apologise if my question is downright asinine (which I'm sure it will be), but either way I could use some help from those kind souls who don't mind helping a complete noob out.</p> <p>So as a starter project in Python I thought I would write a random password generator that takes the following inputs: 1. Number of characters (minimum being 6) 2. Number of uppercase characters 3. Number of lowercase characters 4. Number of numeric characters (0-9) 5. Number of special characters</p> <p>I'm facing two issues: 1. While I am getting output, it isn't the specific number of characters that I input. 2. The output always comes with [''] (as a list)</p> <p>I'm not too worried about 2, as I've already seen some answers about using the join function to remove the brackets.</p> <p>My main worry is running a for loop (any other ideas more than welcome) the number of times the user inputs, so that the random function can choose the specified amount of characters.</p> <p>Any and all help appreciated.</p> <p>Here's the code:</p> <pre><code>for l in range(lower):
    lower_pass = random.choices(lower_list)
</code></pre> <p>Output example:</p> <pre><code>./pass_gen2.py
How many characters would you like?:9
How many upper case characters?:3
How many lower case characters?:3
How many digits?:2
How many Special Characters?:1
[[0], ['&gt;'], ['R'], ['y']]
</code></pre> <p>Answer (full credit to @Arume):</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3.7
#Password Generator Attempt 2
import random

characters = int(input(&quot;How many characters would you like?:&quot;))
if characters &gt;= 6:
    pass
else:
    print(&quot;Minimum character length is 6&quot;)
    exit()
upper = int(input(&quot;How many upper case characters?:&quot;))
lower = int(input(&quot;How many lower case characters?:&quot;))
digits = int(input(&quot;How many digits?:&quot;))
spec_characters = int(input(&quot;How many Special Characters?:&quot;))
tot_char = int(upper) + int(lower) + int(digits) + int(spec_characters)
if tot_char == characters and int(upper) &gt;= 1 and int(lower) &gt;= 1 and int(digits) &gt;= 1 and int(spec_characters) &gt;= 1:
    pass
else:
    print(&quot;The total number of characters does not match what you had selected or one of the character types is set to 0&quot;)
    exit()

#upper_list = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z']
#lower_list = ['a','b','c','d','e','f','g,','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']
#digit_list = [1,2,3,4,5,6,7,8,9,0]
#spec_characters_list = ['!', '@', '#', '$', '%', '^', '&amp;', '*', '(', ')', ':', ';', '&lt;', '&gt;', '?', '|', '/']

pass_list = []

for u in range(upper):
    upper_pass = random.choice(&quot;ABCDEFGHIJKLMNOPQRSTUVWXYZ&quot;)
    pass_list.append(upper_pass)
for l in range(lower):
    lower_pass = random.choice(&quot;abcdefghijklmnopqrstuvwxyz&quot;)
    pass_list.append(lower_pass)
for d in range(digits):
    digit_pass = random.choice(&quot;0123456789&quot;)
    pass_list.append(digit_pass)
for s in range(spec_characters):
    spec_characters_pass = random.choice(&quot;!@:;#$%^&amp;*,&lt;&gt;?|/&quot;)
    pass_list.append(spec_characters_pass)

random.shuffle(pass_list)
password = &quot;&quot;.join(pass_list)
#print(upper_pass, lower_pass, digit_pass, spec_characters_pass)
print(&quot;Your random password is:&quot;, password)
</code></pre>
<p>I could not quite tell what you are having trouble with, but I tried writing three examples. Try executing them.</p> <ol> <li>An example using the <code>random.choice</code> function. This is lengthy but easy to understand.</li> </ol> <pre><code>import random

number_all = int(input(&quot;How many characters would you like? : &quot;))
if number_all &lt; 6:
    print(&quot;The total number of characters must be 6 or more!&quot;)
    exit()
number_upper = int(input(&quot;How many upper case characters? : &quot;))
number_lower = int(input(&quot;How many lower case characters? : &quot;))
number_digit = int(input(&quot;How many digits? : &quot;))
number_special = int(input(&quot;How many Special Characters? : &quot;))
if number_all != number_upper + number_lower + number_digit + number_special:
    print(&quot;The figures do not add up!&quot;)
    exit()

characters_list = []
for _ in range(number_upper):
    upper = random.choice(&quot;ABCDEFGHIJKLMNOPQRSTUVWXYZ&quot;)
    characters_list.append(upper)
for _ in range(number_lower):
    lower = random.choice(&quot;abcdefghijklmnopqrstuvwxyz&quot;)
    characters_list.append(lower)
for _ in range(number_digit):
    digit = random.choice(&quot;0123456789&quot;)
    characters_list.append(digit)
for _ in range(number_special):
    special = random.choice(&quot;,.;:()[]-+=&quot;)
    characters_list.append(special)

random.shuffle(characters_list)
password = &quot;&quot;.join(characters_list)
print(password)
</code></pre> <ol start="2"> <li>An example using the <code>random.choices</code> function. This is a little more elegant.</li> </ol> <pre><code>import random

number_all = int(input(&quot;How many characters would you like? : &quot;))
if number_all &lt; 6:
    print(&quot;The total number of characters must be 6 or more!&quot;)
    exit()
number_upper = int(input(&quot;How many upper case characters? : &quot;))
number_lower = int(input(&quot;How many lower case characters? : &quot;))
number_digit = int(input(&quot;How many digits? : &quot;))
number_special = int(input(&quot;How many Special Characters? : &quot;))
if number_all != number_upper + number_lower + number_digit + number_special:
    print(&quot;The figures do not add up!&quot;)
    exit()

characters_list = []
upper_list = random.choices(&quot;ABCDEFGHIJKLMNOPQRSTUVWXYZ&quot;, k=number_upper)
characters_list.extend(upper_list)
lower_list = random.choices(&quot;abcdefghijklmnopqrstuvwxyz&quot;, k=number_lower)
characters_list.extend(lower_list)
digits_list = random.choices(&quot;0123456789&quot;, k=number_digit)
characters_list.extend(digits_list)
special_list = random.choices(&quot;,.;:()[]-+=&quot;, k=number_special)
characters_list.extend(special_list)

random.shuffle(characters_list)
password = &quot;&quot;.join(characters_list)
print(password)
</code></pre> <ol start="3"> <li>An example using the <code>random.choices</code> function and a <code>for</code> statement. This is more elegant still.</li> </ol> <pre><code>import random

number_all = int(input(&quot;How many characters would you like? : &quot;))
if number_all &lt; 6:
    print(&quot;The total number of characters must be 6 or more!&quot;)
    exit()
number_upper = int(input(&quot;How many upper case characters? : &quot;))
number_lower = int(input(&quot;How many lower case characters? : &quot;))
number_digit = int(input(&quot;How many digits? : &quot;))
number_special = int(input(&quot;How many Special Characters? : &quot;))
if number_all != number_upper + number_lower + number_digit + number_special:
    print(&quot;The figures do not add up!&quot;)
    exit()

characters_and_numbers_list = (
    (&quot;ABCDEFGHIJKLMNOPQRSTUVWXYZ&quot;, number_upper),
    (&quot;abcdefghijklmnopqrstuvwxyz&quot;, number_lower),
    (&quot;0123456789&quot;, number_digit),
    (&quot;,.;:()[]-+=&quot;, number_special),
)

characters_list = []
for characters, number in characters_and_numbers_list:
    picked_list = random.choices(characters, k=number)
    characters_list.extend(picked_list)

random.shuffle(characters_list)
password = &quot;&quot;.join(characters_list)
print(password)
</code></pre>
python|for-loop
0
1,905,589
60,805,233
How to create an optimization where for a value of x I get y such that y is minimum and x is maximum
<pre><code>X = [23, 174, 3, 38, 22, 97, 11, 5, 36, 94, 25]
y = [8, 58, 2, 13, 8, 86, 5, 2, 23, 60, 20]
</code></pre> <p>Using linear regression I got coefficient = 0.46 and y-intercept = 4. Now I need to find the optimum proportion of y and x.</p> <p>I am not sure if linear regression can be of help here. Is there any optimization process that can take all this into consideration, or does the coefficient itself give that value?</p>
<p>This will make a dictionary out of your lists, with <code>x</code> values as keys and <code>y</code> values as values. You can then access the dictionary with an <code>x</code> value and get the corresponding <code>y</code> value.</p> <pre class="lang-py prettyprint-override"><code>min_max_values = dict(zip(x, y))
for k, v in min_max_values.items():
    print ("min: {min}, max: {max}".format(min=v, max=k))
</code></pre> <pre><code>Output:
min: 8, max: 23
min: 58, max: 174
...
</code></pre>
python|optimization|linear-programming
0
1,905,590
68,355,596
How to merge for two different rows in PANDAS?
<p>I want to merge two dataframes. The left dataframe has two identifiers, id1 and id2. The right dataframe has the string version of those identifiers. What I want to do is get both ids and the string version of both ids in the same row. Example:</p> <pre><code>left:             right:
id1  id2          id  string
0    1            0   &quot;a&quot;
3    4            1   &quot;b&quot;
10   0            3   &quot;c&quot;
1    4            4   &quot;d&quot;
                  10  &quot;e&quot;
</code></pre> <p>Output of merging:</p> <pre><code>id1  id2  string1  string2
0    1    &quot;a&quot;      &quot;b&quot;
3    4    &quot;c&quot;      &quot;d&quot;
10   0    &quot;e&quot;      &quot;a&quot;
1    4    &quot;b&quot;      &quot;d&quot;
</code></pre> <p>How would I do this?</p>
<p>Creating a mapper from the <code>right</code> DataFrame is probably best here, then using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> on each column, as it scales up very easily:</p> <pre><code>mapper = right.set_index('id')['string']

merged = left.copy()
for i, col in enumerate(merged.columns, 1):
    merged[f'{mapper.name}{i}'] = merged[col].map(mapper)
</code></pre> <p>Alternatively with chained <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> calls:</p> <pre><code>merged = (
    left.merge(right.rename(columns={'id': 'id1'}), on='id1', how='left')
        .merge(right.rename(columns={'id': 'id2'}), on='id2', how='left',
               suffixes=('1', '2'))
)
</code></pre> <p>Both produce <code>merged</code>:</p> <pre><code>   id1  id2 string1 string2
0    0    1       a       b
1    3    4       c       d
2   10    0       e       a
3    1    4       b       d
</code></pre> <hr /> <p>DataFrames:</p> <pre><code>import pandas as pd

left = pd.DataFrame({
    'id1': {0: 0, 1: 3, 2: 10, 3: 1},
    'id2': {0: 1, 1: 4, 2: 0, 3: 4}
})
right = pd.DataFrame({
    'id': {0: 0, 1: 1, 2: 3, 3: 4, 4: 10},
    'string': {0: 'a', 1: 'b', 2: 'c', 3: 'd', 4: 'e'}
})
</code></pre>
python|pandas|dataframe
0
1,905,591
68,242,870
How to make a new column with counter of the number of times a word from a predefined list appears in a text column of the dataframe?
<p>I want to build a new column which contains the count of the number of times a word from the ai_functional list occurs in a text column.</p> <p>The list given is:</p> <pre><code>ai_functional = ["natural language processing", "nlp", "A I ", "Aritificial intelligence",
                 "stemming", "lemmatization", "lemmatization", "information extraction",
                 "text mining", "text analytics", "data-mining"]
</code></pre> <p>The result I want is as follows:</p> <pre><code>text                                          counter
1. More details A I Artificial Intelligence  2
2. NLP works very well these days            1
3. receiving information at the right time   1
</code></pre> <p>The code I have been using is:</p> <pre><code>def func(stringans):
    for x in ai_tech:
        count = stringans.count(x)
    return count

df['counter'] = df['text'].apply(func)
</code></pre> <p>Please can someone help me with this? I am really stuck, because every time I apply this I get 0 in the counter column.</p>
<p>Because you reassign <code>count = </code> on every pass, you erase the previous value; you want to sum up the different counts instead:</p> <pre><code>def func(stringans):
    count = 0
    for x in ai_tech:
        count += stringans.count(x)
    return count

# with sum and a generator
def func(stringans):
    return sum(stringans.count(x) for x in ai_tech)
</code></pre> <p>Fixing some typos in <code>ai_tech</code> and setting everything to <code>.lower()</code> gives <code>2, 1, 0</code> in the counter column; the last row has no value in common.</p> <pre><code>import pandas as pd

ai_tech = [&quot;natural language processing&quot;, &quot;nlp&quot;, &quot;A I &quot;, &quot;Artificial intelligence&quot;,
           &quot;stemming&quot;, &quot;lemmatization&quot;, &quot;information extraction&quot;, &quot;text mining&quot;,
           &quot;text analytics&quot;, &quot;data - mining&quot;]

df = pd.DataFrame([[&quot;1. More details A I Artificial Intelligence&quot;],
                   [&quot;2. NLP works very well these days&quot;],
                   [&quot;3. receiving information at the right time&quot;]], columns=[&quot;text&quot;])

def func(stringans):
    return sum(stringans.lower().count(x.lower()) for x in ai_tech)

df['counter'] = df['text'].apply(func)
print(df)
# ------------------
                                           text  counter
0  1. More details A I Artificial Intelligence         2
1             2. NLP works very well these days        1
2   3. receiving information at the right time         0
</code></pre>
python|pandas|dataframe|counter|data-manipulation
1
1,905,592
68,302,430
How to remove items from list and make items join
<p>I have a list of numbers, but because it's appended to a list using a for loop it contains '\n' entries, and I don't know how to remove them.</p> <p>The list looks like this:</p> <pre><code>['3', '7', '4', '5', '5', '9', '2', '2', '7', '\n', '4', '3', '7', '1', '5', '9', '4', '3', '0', '\n', '3', '7', '2', '4', '1', '0', '2', '7', '5', '\n', '7', '8', '4', '5', '1', '6', '2', '5', '7', '\n', '2', '8', '0', '6', '6', '1', '1', '2', '3', '\n', '9', '3', '5', '6', '8', '3', '8', '7', '1', '\n', '6', '7', '5', '5', '4', '7', '4', '8', '6']
</code></pre> <p>I want to remove the separators and the '\n' entries so it would look like this:</p> <pre><code>[374559227, 437159430, 372410275, 784516257, 280661123, 935683871, 675547486]
</code></pre>
<p>Join to a string and split on the newlines:</p> <pre><code>l = [
    '3', '7', '4', '5', '5', '9', '2', '2', '7', '\n',
    '4', '3', '7', '1', '5', '9', '4', '3', '0', '\n',
    '3', '7', '2', '4', '1', '0', '2', '7', '5', '\n',
    '7', '8', '4', '5', '1', '6', '2', '5', '7', '\n',
    '2', '8', '0', '6', '6', '1', '1', '2', '3', '\n',
    '9', '3', '5', '6', '8', '3', '8', '7', '1', '\n',
    '6', '7', '5', '5', '4', '7', '4', '8', '6'
]

print([int(x) for x in ''.join(l).split('\n')])

&gt;&gt;&gt; [374559227, 437159430, 372410275, 784516257, 280661123, 935683871, 675547486]
</code></pre>
python|python-3.x|list
3
1,905,593
35,410,413
Random walk in python
<p>I am trying to implement a random walk in Python. This is the error I get. I feel my implementation is wrong, or at least not the best. Can someone have a look at it? Keep in mind I am a beginner in Python, and this is how I think someone would code something, so I can be totally off.</p> <pre><code>in randomWalk(stepSize, stepNumber)
     37     for _ in range(stepNumber):
     38         r = randint(1,4)
---&gt; 39         x,y = movement[r]
     40         xList.append(x)
     41         yList.append(y)

TypeError: 'function' object is not iterable
</code></pre> <p>This is my code:</p> <pre><code>from pylab import *
import numpy as np
import matplotlib.pyplot as plt
import random as rnd

matplotlib.rcParams.update({'font.size': 20})

x = 0.
y = 0.
xList = []
yList = []

def goRight(stepSize, y):
    direction = np.cos(0)
    x = stepSize*direction
    return [x,y]

def goUp(stepSize, x):
    direction = np.cos(90)
    y = stepSize*direction
    return [x,y]

def goLeft(stepSize, y):
    direction = np.cos(180)
    x = stepSize*direction
    return [x,y]

def goDown(stepSize, x):
    direction = np.cos(270)
    y = stepSize*direction
    return [x,y]

def randomWalk(stepSize, stepNumber):
    movement = {1: goRight, 2: goUp, 3: goLeft, 4: goDown}
    for _ in range(stepNumber):
        r = randint(1,4)
        x,y = movement[r]
        xList.append(x)
        yList.append(y)
    plt.ioff()
    plot(x, y)
    plt.show()

randomWalk(1.,4)
</code></pre>
<p>You are putting functions in your dict <code>movement</code>. <code>movement[r]</code> is not calling the function, only accessing it. What your line is basically doing is:</p> <pre><code>x, y = goDown
</code></pre> <p>If you want to call the function in that line, you have to add parentheses and arguments, something like:</p> <pre><code>x, y = movement[r](stepSize, x)
</code></pre> <p>Which shows that you have a problem in your design, since some functions expect <code>x</code> and some expect <code>y</code>. You could fix that by having all the functions take both coordinates, x and y, and then your line would go like:</p> <pre><code>x, y = movement[r](stepSize, x, y)
</code></pre>
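<p>Concretely, a sketch of the step functions refactored to take both coordinates. Note that the trigonometry isn't actually needed for axis-aligned steps (and as written, <code>np.cos(90)</code> treats 90 as radians, not degrees), so this version just adds or subtracts the step size:</p> <pre><code>def goRight(stepSize, x, y):
    return [x + stepSize, y]

def goUp(stepSize, x, y):
    return [x, y + stepSize]

def goLeft(stepSize, x, y):
    return [x - stepSize, y]

def goDown(stepSize, x, y):
    return [x, y - stepSize]
</code></pre>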
python|random
3
1,905,594
58,620,091
Python Nested List manipulation
<p>Hello all Python developers, I was playing with Python lists and the Pandas library, and am having trouble with a list manipulation task. I want to merge all the <code>test_list[i][1]</code> dictionary items into one nested list entry, according to the same state name at index 0 of each nested list.</p> <p>Sample Input:</p> <pre><code>test_list = [['Alabama', {'Baldwin County': 182265}],
             ['Alabama', {'Barbour County': 27457}],
             ['Arkansas', {'Newton County': 8330}],
             ['Arkansas', {'Perry County': 10445}],
             ['Arkansas', {'Phillips County': 21757}],
             ['California', {'Madera County': 150865}],
             ['California', {'Marin County': 252409}],
             ['Colorado', {'Adams County': 441603}],
             ['Colorado', {'Alamosa County': 15445}],
             ['Colorado', {'Arapahoe County': 572003}]
             ]
</code></pre> <p>Sample Output:</p> <pre><code>test_list1 = [['Alabama', {'Baldwin County': 182265, 'Barbour County': 27457}],
              ['Arkansas', {'Newton County': 8330, 'Perry County': 10445, 'Phillips County': 21757}],
              ['California', {'Madera County': 150865, 'Marin County': 252409}],
              ['Colorado', {'Adams County': 441603, 'Alamosa County': 15445, 'Arapahoe County': 572003}]
              ]
</code></pre> <p>I have tried many approaches to solve this but no success so far. I am a beginner Python developer. Thanks for helping in advance.</p>
<h2>Approach</h2> <ul> <li><p>Use <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow noreferrer">collections.defaultdict</a> as a way to group data by a common field (in this case, by state).</p></li> <li><p>For each state, the defaultdict creates a new <em>dict</em> which is updated with the <em>dict.update()</em> method.</p></li> <li><p>Turn the result back into a list with <code>list</code> applied to the dictionary's items (key/value pairs).</p></li> </ul> <h2>Working code</h2> <pre><code>&gt;&gt;&gt; from pprint import pprint
&gt;&gt;&gt; from collections import defaultdict

&gt;&gt;&gt; d = defaultdict(dict)
&gt;&gt;&gt; for state, info in test_list:
        d[state].update(info)

&gt;&gt;&gt; result = list(d.items())

&gt;&gt;&gt; pprint(result)
[('Alabama', {'Baldwin County': 182265, 'Barbour County': 27457}),
 ('Arkansas',
  {'Newton County': 8330, 'Perry County': 10445, 'Phillips County': 21757}),
 ('California', {'Madera County': 150865, 'Marin County': 252409}),
 ('Colorado',
  {'Adams County': 441603,
   'Alamosa County': 15445,
   'Arapahoe County': 572003})]
</code></pre>
python|dictionary|nested-lists|list-manipulation
3
1,905,595
15,909,279
How to print out the information about a list if info is in the list and an error message otherwise
<p>So I got this code that should let you display the information of a list if the 'name' is contained in the list. The list is loaded in IDLE by a different function, <code>d = load_info('info.csv')</code>.</p> <pre><code>def display_info(name, info_list):
    for name[0] in info_list:
        if name[0] == name:
            print ' '.join(name)
            break
    else:
        print False
</code></pre> <p>This function is run with the command <code>display_info('Greyson', d)</code>.</p> <p>However, I am getting this error:</p> <blockquote> <p>TypeError: 'str' object does not support item assignment.</p> </blockquote> <p>How do I fix this?</p>
<p>Iterate over the items of the list with a separate loop variable, instead of assigning into <code>name</code> (which is a string and does not support item assignment); the <code>for</code>/<code>else</code> prints <code>False</code> only if no match was found:</p> <pre><code>def display_info(name, info_list):
    for each_info in info_list:
        if each_info[0] == name:
            print ' '.join(each_info)
            break
    else:
        print False
</code></pre>
python
0
1,905,596
49,288,254
sqlalchemy UniqueConstraint with conditional?
<p>Is it possible to create a constraint on a table and specify a value on one or more of the columns? Consider this example:</p> <pre><code>mytable = Table('mytable', meta,
    # per-column anonymous unique constraint
    Column('col1', Integer,),
    Column('col2', Integer),
    Column('col3', ENUM('ready', 'pass', 'fail'),
    UniqueConstraint('col2', 'col2', 'col3', name='uix_1')
    )
</code></pre> <p>But I only want uniqueness when col3 is equal to a state like 'ready' (I WANT to allow multiple passes or failures).</p> <pre><code>UniqueConstraint('col2', 'col2', 'col3 == ready', name='uix_1')
</code></pre> <p>Is this possible in the sqlalchemy API?</p>
<p>There is a full example at this <a href="https://www.johbo.com/2016/creating-a-partial-unique-index-with-sqlalchemy-in-postgresql.html" rel="noreferrer">link</a>:</p> <pre><code>class ExampleTable(Base):
    __tablename__ = 'example_table'
    __table_args__ = (
        Index(
            'ix_unique_primary_content',  # Index name
            'object_type', 'object_id',   # Columns which are part of the index
            unique=True,
            postgresql_where=Column('is_primary')),  # The condition
    )

    id = Column(Integer, primary_key=True)
    object_type = Column(Unicode(50))
    object_id = Column(Integer)
    is_primary = Column(Boolean)
</code></pre> <p>So for your case you can use something like the following. Note that the first argument to <code>Index</code> is the index name, and the condition is best written as a SQL expression with <code>text</code>:</p> <pre><code>from sqlalchemy import text

Index(
    'uix_1',          # Index name
    'col1', 'col2',   # Columns which are part of the index
    unique=True,
    postgresql_where=text("col3 = 'ready'"))  # The condition
</code></pre>
python|sqlalchemy
10
1,905,597
49,036,715
Efficient method of concatenating non-sequential columns in 2d numpy array
<p>I'm using np.concatenate to concatenate a non-sequential column with some sequential columns in a large dataset, and I realized my method would look rather cumbersome if I wanted to do this with multiple non-sequential columns. Would I just chain-concatenate all of the individual columns? I'm looking for a broad answer, not a solution for, say, columns 2, 5 and 7.</p> <pre><code>import numpy as np

rand_data = np.random.rand(156,26)
new_array = np.concatenate((rand_data[:,22].reshape(-1,1), rand_data[:, 24:27]), axis = 1)
</code></pre>
<p>An alternative to indexing and then concatenating is to concatenate indices first.</p> <p><code>np.r_</code> is a handy way of doing this (though not the fastest):</p> <pre><code>In [40]: np.r_[22,24:27]
Out[40]: array([22, 24, 25, 26])
</code></pre> <p>Testing with your array:</p> <pre><code>In [29]: rand_data = np.random.rand(156,26)

In [31]: new_array = np.concatenate((rand_data[:,[22]], rand_data[:, 24:27]), axis = 1)

In [32]: new_array.shape
Out[32]: (156, 3)
</code></pre> <p>With <code>r_</code>:</p> <pre><code>In [41]: arr = rand_data[:,np.r_[22,24:27]]
....
IndexError: index 26 is out of bounds for axis 1 with size 26
</code></pre> <p>Oops: with advanced indexing, out-of-bounds values are not allowed (in contrast to slice indexing).</p> <pre><code>In [42]: arr = rand_data[:,np.r_[22,24:26]]

In [43]: arr.shape
Out[43]: (156, 3)
</code></pre> <p>Compare the times:</p> <pre><code>In [44]: timeit new_array = np.concatenate((rand_data[:,[22]], rand_data[:, 24:27]), axis = 1)
15 µs ± 20.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

In [45]: timeit arr = rand_data[:,np.r_[22,24:26]]
29.7 µs ± 111 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre> <p>The <code>r_</code> approach is more compact, but actually a bit slower.</p>
python|numpy
1
1,905,598
70,753,591
How to compile Python to DLL or alternative?
<h4>How to compile Python project?</h4> <p>I have a large Python project consisting of multiple scripts and importing some heavy libraries like PyTorch. <br> I need to use this project as a part of the final solution made in .NET. <br> Moreover, it should be <strong>standalone</strong> and <strong>distributable</strong>, since setting up a Python environment on the customer's side is not an option. <br> The best way is to be able to make a <strong>DLL</strong> or <strong>static library</strong>, but any alternative way such as <strong>executable</strong> works just fine.</p> <hr /> <h4>What I have already tried.</h4> <h5>Compilers</h5> <p>PyInstaller, Nuitka. <br> In both cases, I have encountered some dead-end issues with one or more packages. E.g. Nuitka failing with PyTorch</p> <h5>RPC</h5> <p>There is the prototype of the final solution which relies on RPC communication between running Python and .NET programs. <br> But distribution and source code protection are unresolvable issues with the architecture.</p> <hr /> <p><strong>Update:</strong> The target platform is Windows</p>
<p>You can try py2exe or PyInstaller:</p> <p><strong>Step 1.</strong> <code>pip install pyinstaller</code></p> <p><strong>Step 2.</strong> Create a new Python file; let's name it <code>code.py</code>.</p> <p><strong>Step 3.</strong> Write some lines of code, e.g. <code>print("Hello World")</code>.</p> <p><strong>Step 4.</strong> Open a Command Prompt in the same location, type <code>pyinstaller code.py</code> and hit enter.</p> <p><strong>Last step:</strong> in the same location, two folders named <code>build</code> and <code>dist</code> will be created. Inside the <code>dist</code> folder there is a folder <code>code</code>, and inside that folder there is an exe file <code>code.exe</code> along with the required .dll files.</p>
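<p>If a single standalone executable is preferred over a folder of files, PyInstaller's <code>--onefile</code> flag bundles everything into one exe:</p> <pre><code>pyinstaller --onefile code.py
</code></pre> <p>Be aware that heavy dependencies such as PyTorch can make the bundle very large and may need extra packaging hints (PyInstaller hooks, or options like <code>--collect-all</code>) before they work from the frozen exe.</p>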
python|dll|compilation|pyinstaller|nuitka
1
1,905,599
70,936,484
Trying to store values and loop calculations from a function
<p>The equation requires the previous output to calculate the next value: V(i) = V(i-1) + t*((c*V(i-1)^2)/m - g). With V(0) = 0 and all other values defined, I am not sure how to write loop code that stores the previous value and then calculates the next iteration. I tried a <code>def</code> approach, but I could never figure out how to recall the previous calculation to use in the next step.</p> <pre><code>Vi = Vo + Dt*((c*(Vo**2)/m)-g)
return Vi
</code></pre>
<p>Assuming t, c, m, and g are constants, you can pass them to a recursive function. Note that the time step multiplies the bracketed term (writing <code>t(...)</code> would try to call <code>t</code> as a function), and computing the previous value once avoids recomputing it:</p> <pre><code>def V(i, t, c, m, g) -&gt; float:
    if i == 0:
        return 0
    prev = V(i - 1, t, c, m, g)   # the previous output, V(i-1)
    return prev + t * ((c * prev ** 2) / m - g)
</code></pre>
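<p>Since the question explicitly asks for a loop, here is an equivalent iterative sketch that simply keeps the previous value in a variable between steps:</p> <pre><code>def V_loop(n, t, c, m, g):
    v = 0.0                                  # V(0) = 0
    for _ in range(n):
        v = v + t * ((c * v ** 2) / m - g)   # V(i) from V(i-1)
    return v
</code></pre>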
python
0