Unnamed: 0: int64, values 0 to 1.91M
id: int64, values 337 to 73.8M
title: string, lengths 10 to 150
question: string, lengths 21 to 64.2k
answer: string, lengths 19 to 59.4k
tags: string, lengths 5 to 112
score: int64, values -10 to 17.3k
1,906,000
71,498,526
TypeError: cannot use a string pattern on a bytes-like object python3
<p>I have updated my project to Python 3.7 and Django 3.0</p> <p>Here is code of models.py</p> <pre><code>def get_fields(self): fields = [] html_text = self.html_file.read() self.html_file.seek(0) # for now just find singleline, multiline, img editable # may put repeater in there later (!!) for m in re.findall(&quot;(&lt;(singleline|multiline|img editable)[^&gt;]*&gt;)&quot;, html_text): # m is ('&lt;img editable=&quot;true&quot; label=&quot;Image&quot; class=&quot;w300&quot; width=&quot;300&quot; border=&quot;0&quot;&gt;', 'img editable') # or similar # first is full tag, second is tag type # append as a list # MUST also save value in here data = {'tag':m[0], 'type':m[1], 'label':'', 'value':None} title_list = re.findall(&quot;label\s*=\s*\&quot;([^\&quot;]*)&quot;, m[0]) if(len(title_list) == 1): data['label'] = title_list[0] # store the data fields.append(data) return fields </code></pre> <p>Here is my error traceback</p> <pre><code> File &quot;/home/harika/krishna test/dev-1.8/mcam/server/mcam/emails/models.py&quot;, line 91, in get_fields for m in re.findall(&quot;(&lt;(singleline|multiline|img editable)[^&gt;]*&gt;)&quot;, html_text): File &quot;/usr/lib/python3.7/re.py&quot;, line 225, in findall return _compile(pattern, flags).findall(string) TypeError: cannot use a string pattern on a bytes-like object </code></pre> <p>How can I solve my issue?</p>
<p>The thing is that python3's <code>read</code> returns bytes (i.e. &quot;raw&quot; representation) and not <code>string</code>. You can convert between bytes and string if you specify encoding, i.e. how are characters converted to bytes:</p> <pre><code>&gt;&gt;&gt; '☺'.encode('utf8') b'\xe2\x98\xba' &gt;&gt;&gt; '☺'.encode('utf16') b'\xff\xfe:&amp;' </code></pre> <p>the <code>b</code> before string signifies that the value is not <code>string</code> but rather <code>bytes</code>. You can also supply raw bytes if you use that prefix:</p> <pre><code>&gt;&gt;&gt; bytes_x = b'x' &gt;&gt;&gt; string_x = 'x' &gt;&gt;&gt; bytes_x == string_x False &gt;&gt;&gt; bytes_x.decode('ascii') == string_x True &gt;&gt;&gt; bytes_x == string_x.encode('ascii') True </code></pre> <p>Note you can only use basic (ASCII) characters if you are using <code>b</code> prefix:</p> <pre><code>&gt;&gt;&gt; b'☺' File &quot;&lt;stdin&gt;&quot;, line 1 SyntaxError: bytes can only contain ASCII literal characters. </code></pre> <p>So to fix your problem you need to either convert the input to a string with appropriate encoding:</p> <pre><code>html_text = self.html_file.read().decode('utf-8') # or 'ascii' or something else </code></pre> <p>Or -- probably better option -- is to use <code>bytes</code> in the <code>findall</code>s instead of strings:</p> <pre><code> for m in re.findall(b&quot;(&lt;(singleline|multiline|img editable)[^&gt;]*&gt;)&quot;, html_text): ... title_list = re.findall(b&quot;label\s*=\s*\&quot;([^\&quot;]*)&quot;, m[0]) </code></pre> <p>(note the <code>b</code> in front of each &quot;string&quot;)</p>
python
0
1,906,001
5,171,845
Bulkloader import list of integers
<p>How should I configure import_transform and export_transform in my configuration yaml file to be able to export and import a multiple integer property?</p>
<p>I wrote this collection of helpers for bulk loading data:</p> <p><a href="http://code.google.com/p/bulkloader-gdata-connector/source/browse/bulk_helper.py" rel="nofollow">http://code.google.com/p/bulkloader-gdata-connector/source/browse/bulk_helper.py</a></p> <p>I think that list_to_json(int) and json_to_list(int) are what you need.</p> <p>Be aware that this will choke on entities that don't have the list defined yet.</p>
java|python|google-app-engine
1
1,906,002
62,790,394
Data created with class fails to be displayed with flask app into html
<p>I'm tring to make a flask application for a tool that I've created in my internship. I have a class Activities that can generate a pandas dataframe, and I wish to display it into a table in an HTML using flask. I tested a simple example that works just fine and displays the given dataframe, but as soon as I try doing the same thing with Activites class, it doesn't work. The code in itself doesn't result in any errors in the kernel but when I go to http://localhost:5000/, my web browser(Chrome) says localhost does not allow the connection : ERR_CONNECTION_REFUSED. The class takes up to 15 minutes just to create the data, I was thinking that maybe the Chrome does not like to wait such a long time and just refuses the connection ? I am new with Flask so I am very confused. Thank you for your help :)</p> <p>This is the simple working code :</p> <pre><code>import pandas as pd from flask import Flask, redirect, url_for, request, render_template app = Flask(__name__) df = pd.DataFrame({'A': [0, 1, 2, 3, 4], 'B': [5, 6, 7, 8, 9], 'C': ['a', 'b', 'c', 'd', 'e']}) @app.route('/') @app.route('/home') def home(): return render_template('home.html') @app.route('/task1') def task1(): return render_template('task1.html') @app.route('/handle_data', methods=['POST']) def handle_data(): return render_template('simple.html', tables=[df.to_html(classes='data')], titles=df.columns.values) if __name__ == '__main__': app.run(debug=True) </code></pre> <p>This is what I'm trying to do that doesn't work :</p> <pre><code>import pandas as pd from flask import Flask, redirect, url_for, request, render_template from activities import Activities app = Flask(__name__) activities = Activities() activities.load_data() df = activities.raw_data.head() @app.route('/') @app.route('/home') def home(): return render_template('home.html') @app.route('/task1') def search_for_sailors(): return render_template('task1.html') @app.route('/handle_data', methods=['POST']) def handle_data(): return render_template('simple.html', tables=[df.to_html(classes='data')], titles=df.columns.values) if __name__ == '__main__': app.run(debug=True) </code></pre>
<p>Approach that I use for data that takes time to get back to client is AJAX to get data post page load.</p> <ol> <li>changed <code>/handle_data</code> route to respond to <strong>GET</strong> plus move <em>heavy</em> lifting build of data frame into it (logically)</li> <li>use <code>Response()</code> flask object to return html of data frame</li> <li>use JQuery to do AJAX call to this <code>/handle_data</code> route to get HTML of table</li> </ol> <p><strong>app.py</strong></p> <pre><code>import pandas as pd from flask import Flask, redirect, url_for, request, render_template, Response app = Flask(__name__) @app.route('/') @app.route('/home') def home(): return render_template('home.html') @app.route('/task1') def task1(): return render_template('task1.html') @app.route('/handle_data') def handle_data(): df = pd.DataFrame({'A': [0, 1, 2, 3, 4], 'B': [5, 6, 7, 8, 9], 'C': ['a', 'b', 'c', 'd', 'e']}) return Response(df.to_html()) if __name__ == '__main__': app.run(debug=True) </code></pre> <p><strong>home.html</strong></p> <pre><code>&lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, minimum-scale=1.0, maximum-scale=1.0, user-scalable=no, viewport-fit=cover&quot;&gt; &lt;title&gt;Example&lt;/title&gt; &lt;script src=&quot;https://code.jquery.com/jquery-3.5.1.min.js&quot; integrity=&quot;sha256-9/aliU8dGd2tb6OSsuzixeV4y/faTqgFtohetphbbj0=&quot; crossorigin=&quot;anonymous&quot;&gt;&lt;/script&gt; &lt;/head&gt; &lt;body&gt; &lt;header&gt; &lt;h1&gt;Header 1&lt;/h1&gt; &lt;/header&gt; &lt;main id=&quot;main&quot;&gt; &lt;section id=&quot;data-section&quot;&gt; &lt;h2&gt;Data&lt;/h2&gt; &lt;div id=&quot;data&quot;/&gt; &lt;/section&gt; &lt;/main&gt; &lt;/body&gt; &lt;script&gt; function apicall(url) { $.ajax({ type:&quot;GET&quot;, url:url, success: (data) =&gt; { $(&quot;#data&quot;).html(data); } }); } window.onload = function () { apicall(&quot;/handle_data&quot;); } &lt;/script&gt; &lt;/html&gt; </code></pre>
python|html|pandas|flask|python-class
1
1,906,003
62,481,431
First project, Python 3 on a pi, can't seem to get it running right at start
<p>I needed to make a simple thing and it seemed like a good starter project to learn python. I followed this GPIO music box tutorial (<a href="https://projects.raspberrypi.org/en/projects/gpio-music-box" rel="nofollow noreferrer">https://projects.raspberrypi.org/en/projects/gpio-music-box</a>) and it runs fine in MU, Thonny Python IDE, but when I run on Geany it will open in a terminal, run end, produce no sound on button push. What I need is for this script to start automatically once raspbian is booted up and play back sounds at start. I've tried editing rc.local, bashrc, and crontab for automatic startup.</p> <p>So this is running on a pi3 and the script looks like this basically:</p> <pre class="lang-py prettyprint-override"><code>import pygame from gpiozero import Button pygame.init() drum = pygame.mixer.Sound(&quot;/home/pi/gpio-music-box/samples/drum_tom_mid_hard.wav&quot;) cymbal = pygame.mixer.Sound(&quot;/home/pi/gpio-music-box/samples/drum_cymbal_hard.wav&quot;) snare = pygame.mixer.Sound(&quot;/home/pi/gpio-music-box/samples/drum_snare_hard.wav&quot;) bell = pygame.mixer.Sound(&quot;/home/pi/gpio-music-box/samples/drum_cowbell.wav&quot;) btn_drum = Button(4) btn_drum.when_pressed = drum.play </code></pre> <p>Is this not working because when the script is run in a terminal it doesn't import this python library? My only other experience programming is simple projects C# on Crestron units.</p> <p>Thanks</p>
<p>All you did was load the sounds. To actually play one, you need to call, for example,</p> <p>drum.play()</p> <p>so that the drum sound plays.</p>
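<p>A minimal sketch of that call in context (the file path is copied from the question; the wait time is just an assumption so playback can finish):</p> <pre><code>import pygame

pygame.init()
drum = pygame.mixer.Sound('/home/pi/gpio-music-box/samples/drum_tom_mid_hard.wav')
drum.play()              # actually starts playback of the loaded sound
pygame.time.wait(1000)   # keep the script alive long enough to hear it
</code></pre>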
python|python-3.x|raspberry-pi|pygame|gpio
0
1,906,004
62,693,280
Attention Layer in TensorFlow 2: I get "TypeError: 'AdditiveAttention' object is not iterable"
<p>I got an error with <code>AdditiveAttention()</code> layers (i.e. <strong>Bahdanau</strong> Attention) in TensorFlow 2 that I don't fully understand. I want to train a chatbot with a seq2seq attentional model trained on two <code>Question</code> and <code>Answer</code> datasets.</p> <p>My problem is represented by an error I get when I try to add the Attention layer to the model. This is my build function:</p> <pre><code>def build_model(): import tensorflow as tf from tensorflow.keras.models import Model from tensorflow.keras.layers import Input, Embedding, LSTM, AdditiveAttention, Dense # Input: get char embeddings encoder_inputs = Input(shape=(200), name='encoder_inputs') encoder_embedding = Embedding(60, 200, name='encoder_embedding')(encoder_inputs) # LSTM Encoder receives Question - returns states encoder_lstm = LSTM(units=64, return_state=True, name='encoder_lstm') encoder_outputs, h, c = encoder_lstm(encoder_embedding) encoder_states = [h, c] # Bahdanau Attention context_vector, attention_weights = AdditiveAttention([h, encoder_outputs]) # Decoder Embedding layer receives Answer as input (teacher forcing) decoder_inputs = Input(shape=(None,), name='decoder_inputs') decoder_embedding = Embedding(60, 200, name='decoder_embedding')(decoder_inputs) concat = tf.concat([tf.expand_dims(context_vector, 1), decoder_embedding], axis=-1) # Decoder LSTM layer is set with Encoder LSTM's states as initial state decoder_lstm = LSTM(units=64, return_state=True, return_sequences=True, name='decoder_lstm') decoder_outputs, _, _ = decoder_lstm(concat) decoder_dense = Dense(units=60, activation='softmax', name='decoder_dense') decoder_outputs = decoder_dense(decoder_outputs) chatbot = Model(inputs=[encoder_inputs, decoder_inputs], outputs=[decoder_outputs]) return chatbot </code></pre> <p>When I run the function with:</p> <pre><code>bot = build_model() </code></pre> <p>I get the following error:</p> <blockquote> <p><code>TypeError: 'AdditiveAttention' object is not iterable</code></p> </blockquote> <p>Can someone help me understand the error, and make a correct implementation of an Attentional seq2seq model?</p>
<p>I had this same problem this week. It seems that the tf.keras Additive attention does not return the attention weights, only the context vector.</p> <p>Therefore you just need to eliminate &quot;attention_weights&quot; when calling AdditiveAttention() and you should be good.</p>
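<p>A minimal sketch of the corrected call, assuming h is the query and encoder_outputs are the values (names taken from the question); the layer is instantiated first and returns a single tensor:</p> <pre><code>from tensorflow.keras.layers import AdditiveAttention

attention = AdditiveAttention(name='bahdanau_attention')
context_vector = attention([h, encoder_outputs])   # single return value: the context vector
</code></pre>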
python|tensorflow|attention-model|tensorflow2
2
1,906,005
63,534,240
How to set window hint for GLFW in Python
<p>I have written this Python code which will draw a triangle in a window created using GLFW:</p> <pre class="lang-py prettyprint-override"><code>import glfw from OpenGL.GL import * from OpenGL.GL.shaders import compileProgram, compileShader import numpy as np vertex_src = &quot;&quot;&quot; # version 330 core in vec3 a_position; void main() { gl_position = vec4(a_position, 1.0); } &quot;&quot;&quot; fragment_src = &quot;&quot;&quot; # version 330 core out vec4 out_color; void main() { out_color = vec4(1.0, 0.0, 0.0, 1.0); } &quot;&quot;&quot; if not glfw.init(): print(&quot;Cannot initialize GLFW&quot;) exit() window = glfw.create_window(320, 240, &quot;OpenGL window&quot;, None, None) if not window: glfw.terminate() print(&quot;GLFW window cannot be creted&quot;) exit() glfw.set_window_pos(window, 100, 100) glfw.make_context_current(window) vertices = [-0.5, -0.5, 0.0, 0.5, -0.5, 0.0, 0.0, 0.5, 0.0] colors = [1, 0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0] vertices = np.array(vertices, dtype=np.float32) colors = np.array(colors, dtype=np.float32) shader = compileProgram(compileShader( vertex_src, GL_VERTEX_SHADER), compileShader(fragment_src, GL_FRAGMENT_SHADER)) buff_obj = glGenBuffers(1) glBindBuffer(GL_ARRAY_BUFFER, buff_obj) glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_STATIC_DRAW) position = glGetAttribLocation(shader, &quot;a_position&quot;) glEnableVertexAttribArray(position) glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0, ctypes.c_void_p(0)) glUseProgram(shader) glClearColor(0, 0.1, 0.1, 1) while not glfw.window_should_close(window): glfw.poll_events() glfw.swap_buffers(window) glfw.terminate() </code></pre> <p>On running the program, I got this error:</p> <pre><code>Traceback (most recent call last): File &quot;opengl.py&quot;, line 43, in &lt;module&gt; shader = compileProgram(compileShader( File &quot;/usr/local/lib/python3.8/dist-packages/OpenGL/GL/shaders.py&quot;, line 235, in compileShader raise ShaderCompilationError( OpenGL.GL.shaders.ShaderCompilationError: (&quot;Shader compile failure (0): b'0:2(10): error: GLSL 3.30 is not supported. Supported versions are: 1.10, 1.20, 1.30, 1.00 ES, and 3.00 ES\\n'&quot;, [b'\n# version 330 core\nin vec3 a_position;\nvoid main() {\n gl_position = vec4(a_position, 1.0);\n}\n'], GL_VERTEX_SHADER) </code></pre> <p>It clearly indicates that GLSL 3.30 is not supported. But, this does work in C by setting window hints:</p> <pre class="lang-c prettyprint-override"><code>glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3); glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3); </code></pre> <p>How can I set these window hints in Python?</p>
<p>With Python syntax it is</p> <pre class="lang-py prettyprint-override"><code>glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3) glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3) </code></pre> <hr /> <p>Note, there is a typo in your fragment shader. GLSL is case sensitive. It has to be <code>gl_Position</code> rather than <code>gl_position</code>.</p> <hr /> <p>In a core profile <a href="https://www.khronos.org/opengl/wiki/OpenGL_Context" rel="noreferrer">OpenGL Context</a> you've to use a named <a href="https://www.khronos.org/opengl/wiki/Vertex_Specification#Vertex_Array_Object" rel="noreferrer">Vertex Array Object</a>, because the default Vertex Array Object (0) is not valid:</p> <pre class="lang-py prettyprint-override"><code>vao = glGenVertexArrays(1) # &lt;---- glBindVertexArray(vao) # &lt;---- position = glGetAttribLocation(shader, &quot;a_position&quot;) glEnableVertexAttribArray(position) glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0, ctypes.c_void_p(0)) </code></pre> <hr /> <p>Finally you missed to draw the geometry. Clear the frame buffer and draw the array:</p> <pre class="lang-py prettyprint-override"><code>while not glfw.window_should_close(window): glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) glDrawArrays(GL_TRIANGLES, 0, 3) glfw.poll_events() glfw.swap_buffers(window) </code></pre> <hr /> <p>Complete example:</p> <p><a href="https://i.stack.imgur.com/KUKj4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KUKj4.png" alt="" /></a></p> <pre class="lang-py prettyprint-override"><code>import glfw from OpenGL.GL import * from OpenGL.GL.shaders import compileProgram, compileShader import numpy as np vertex_src = &quot;&quot;&quot; # version 330 core in vec3 a_position; void main() { gl_Position = vec4(a_position, 1.0); } &quot;&quot;&quot; fragment_src = &quot;&quot;&quot; # version 330 core out vec4 out_color; void main() { out_color = vec4(1.0, 0.0, 0.0, 1.0); } &quot;&quot;&quot; if not glfw.init(): print(&quot;Cannot initialize GLFW&quot;) exit() glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, 3) glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, 3) glfw.window_hint(glfw.OPENGL_FORWARD_COMPAT, True) glfw.window_hint(glfw.OPENGL_PROFILE, glfw.OPENGL_CORE_PROFILE) window = glfw.create_window(320, 240, &quot;OpenGL window&quot;, None, None) if not window: glfw.terminate() print(&quot;GLFW window cannot be creted&quot;) exit() glfw.set_window_pos(window, 100, 100) glfw.make_context_current(window) vertices = [-0.5, -0.5, 0.0, 0.5, -0.5, 0.0, 0.0, 0.5, 0.0] colors = [1, 0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0] vertices = np.array(vertices, dtype=np.float32) colors = np.array(colors, dtype=np.float32) shader = compileProgram(compileShader( vertex_src, GL_VERTEX_SHADER), compileShader(fragment_src, GL_FRAGMENT_SHADER)) buff_obj = glGenBuffers(1) glBindBuffer(GL_ARRAY_BUFFER, buff_obj) glBufferData(GL_ARRAY_BUFFER, vertices.nbytes, vertices, GL_STATIC_DRAW) vao = glGenVertexArrays(1) glBindVertexArray(vao) position = glGetAttribLocation(shader, &quot;a_position&quot;) glEnableVertexAttribArray(position) glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0, ctypes.c_void_p(0)) glUseProgram(shader) glClearColor(0, 0.1, 0.1, 1) while not glfw.window_should_close(window): glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) glDrawArrays(GL_TRIANGLES, 0, 3) glfw.poll_events() glfw.swap_buffers(window) glfw.terminate() </code></pre>
python|opengl|glsl|glfw|pyopengl
5
1,906,006
60,871,557
Pandas Pivot Multiple Date Columns
<p>I have a data frame like this:</p> <pre><code> ColumnA ColumnB 1/1/20 1/2/20 1/3/20 0 Thing1 Item1 4 5 3 1 Thing2 Item1 4 4 5 2 Thing3 Item2 3 4 5 </code></pre> <p>That I'd like to look like this:</p> <p><a href="https://i.stack.imgur.com/mnAvZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mnAvZ.png" alt="Result"></a></p> <p>But I can't figure out the best method to achieve this via pandas. Any help would be much appreciated. Thanks. </p>
<h3><code>melt</code></h3> <pre><code>df.melt(['ColumnA', 'ColumnB'], var_name='Date', value_name='Count') ColumnA ColumnB Date Count 0 Thing1 Item1 1/1/20 4 1 Thing2 Item1 1/1/20 4 2 Thing3 Item2 1/1/20 3 3 Thing1 Item1 1/2/20 5 4 Thing2 Item1 1/2/20 4 5 Thing3 Item2 1/2/20 4 6 Thing1 Item1 1/3/20 3 7 Thing2 Item1 1/3/20 5 8 Thing3 Item2 1/3/20 5 </code></pre> <hr> <h3>Clever Comprehension</h3> <pre><code>colname_1, colname_2, *dates = [*df] data = [ (c1, c2, date, count) for c1, c2, *counts in zip(*map(df.get, df)) for date, count in zip(dates, counts) ] pd.DataFrame(data, columns=[colname_1, colname_2, 'Date', 'Count']) ColumnA ColumnB Date Count 0 Thing1 Item1 1/1/20 4 1 Thing1 Item1 1/2/20 5 2 Thing1 Item1 1/3/20 3 3 Thing2 Item1 1/1/20 4 4 Thing2 Item1 1/2/20 4 5 Thing2 Item1 1/3/20 5 6 Thing3 Item2 1/1/20 3 7 Thing3 Item2 1/2/20 4 8 Thing3 Item2 1/3/20 5 </code></pre>
pandas
3
1,906,007
66,103,644
Using pyparsing, how can I group expressions that are matched by OneOrMore(expr1|expr2)?
<p>My website receives allows users to post a string that contains several Questions followed by multiple choice answers. There is an enforced style-guide that allows the results to be parsed by Regex and then Questions + MCQ choices are stored in a database, to be later returned in randomized practice exams.</p> <p>I wanted to transition over to pyparsing, because the regex is not immediately readable and I feel a little locked in with it. I would like to have the option to easily expand functionality of my questionparser, and with Regex it feels very cumbersome.</p> <p>User input is in the form of:</p> <pre><code>quiz = [&lt;question-answer&gt;, &lt;q-start&gt;] &lt;question-answer&gt; = &lt;question&gt; + &lt;answer&gt; &lt;question&gt; = [&lt;q-text&gt;, \n] ?!= &lt;a-start&gt; &lt;answer&gt; = [&lt;answer&gt;, &lt;a-start&gt;] ?!= &lt;q-start&gt; &lt;q-start&gt; = &lt;nums&gt; + &quot;.&quot; | &quot;)&quot; &lt;a-start&gt; = &lt;alphas&gt; + &quot;.&quot; | &quot;)&quot; </code></pre> <p>Long user-input string is separated into question-answers, deliminated by the the next question-answer group's q-start. Questions are all text between q-start and a-start. Answers are a list of all text between a-start and a-start or the following q-start.</p> <p>Sample text:</p> <pre><code>3. A lesion that affects N. Solitarius will result in the patient having problems related to: a. taste and blood pressure regulation c. swallowing and respiration b. smell and taste d. voice quality and taste e. whistling and chewing 4. A patient comes to your office complaining of weakness on the right side of their body. You notice that their head is turned slightly to the left and their right shoulder droops. When asked to protrude their tongue, it deviates to the right. Eye movements and eye-related reflexes appear to be normal. The lesion most likely is located in the: c. left ventral medulla a. left ventral midbrain b. right dorsal medulla d. left ventral pons e. right ventral pons 5. A colleague {...} </code></pre> <p>Regex I have been using:</p> <pre><code># matches a question-answer block. Matching q-start until an empty line. regex1 = r&quot;(^[\t ]*[0-9]+[\)\.][\t ]+[\s\S]*?(?=^[\n\r]))&quot; # Within question-answer block, matches everything that does not start with a-start regex6 = r&quot;(^(?!(^[a-fA-F][\)\.]\s+[\s\S]+)).*)&quot; # Matches all text between a-start and the following a-start, or until the question-answer substring block ends. regex5 = r&quot;(^[a-fA-F][\)\.]\s+[\s\S]+)&quot; </code></pre> <p>Then a little python and re to trim away question numbers, mcq letters, join all broken lines in question, append MCQs into a list.</p> <p>In pyparsing I have tried this:</p> <pre><code>EOL = Suppress(LineEnd()) delim = oneOf(&quot;. 
)&quot;) q_start = LineStart() + Word(nums) + delim a_start = LineStart() + Char(alphas) + delim question = Optional(EOL) + Group(Suppress(q_start) + OneOrMore(SkipTo(LineEnd()) + EOL, stopOn=a_start)).setResultsName('question', listAllMatches=True) answer = Optional(EOL) + Group(Suppress(a_start) + OneOrMore( SkipTo(LineEnd()) + EOL, stopOn=(a_start | q_start | StringEnd()))).setResultsName('answer', listAllMatches=True) qi = Group(OneOrMore(question|answer)).setResultsName('group', listAllMatches=True) t = qi.parseString(test) print(t.dump()) </code></pre> <p>Results:</p> <pre><code>[[['The tectum of the midbrain comprises the:'], ['superior and inferior colliculi'], ['reticular formation'], ['internal arcuate fibers'], ['cerebellar peduncles'], ['pyramids'], ['Damage to the dorsal columns on one side of the spinal cord would results in:'], ['loss of MVP ipsilaterally below the level of the lesion'], ['hypertonicity of the contralateral limbs'], ['loss of pain and temperature contralaterally below the level of the lesion'], ['loss of MVP contralaterally above the level of the lesion'], ['loss of pain and temperature ipsilaterally above the level of the lesion']]] - group: [[['The tectum of the midbrain comprises the:'], ['superior and inferior colliculi'], ['reticular formation'], ['internal arcuate fibers'], ['cerebellar peduncles'], ['pyramids'], ['Damage to the dorsal columns on one side of the spinal cord would results in:'], ['loss of MVP ipsilaterally below the level of the lesion'], ['hypertonicity of the contralateral limbs'], ['loss of pain and temperature contralaterally below the level of the lesion'], ['loss of MVP contralaterally above the level of the lesion'], ['loss of pain and temperature ipsilaterally above the level of the lesion']]] [0]: [['The tectum of the midbrain comprises the:'], ['superior and inferior colliculi'], ['reticular formation'], ['internal arcuate fibers'], ['cerebellar peduncles'], ['pyramids'], ['Damage to the dorsal columns on one side of the spinal cord would results in:'], ['loss of MVP ipsilaterally below the level of the lesion'], ['hypertonicity of the contralateral limbs'], ['loss of pain and temperature contralaterally below the level of the lesion'], ['loss of MVP contralaterally above the level of the lesion'], ['loss of pain and temperature ipsilaterally above the level of the lesion']] - answer: [['superior and inferior colliculi'], ['reticular formation'], ['internal arcuate fibers'], ['cerebellar peduncles'], ['pyramids'], ['loss of MVP ipsilaterally below the level of the lesion'], ['hypertonicity of the contralateral limbs'], ['loss of pain and temperature contralaterally below the level of the lesion'], ['loss of MVP contralaterally above the level of the lesion'], ['loss of pain and temperature ipsilaterally above the level of the lesion']] [0]: ['superior and inferior colliculi'] [1]: ['reticular formation'] [2]: ['internal arcuate fibers'] [3]: ['cerebellar peduncles'] [4]: ['pyramids'] [5]: ['loss of MVP ipsilaterally below the level of the lesion'] [6]: ['hypertonicity of the contralateral limbs'] [7]: ['loss of pain and temperature contralaterally below the level of the lesion'] [8]: ['loss of MVP contralaterally above the level of the lesion'] [9]: ['loss of pain and temperature ipsilaterally above the level of the lesion'] - question: [['The tectum of the midbrain comprises the:'], ['Damage to the dorsal columns on one side of the spinal cord would results in:']] [0]: ['The tectum of the midbrain comprises the:'] [1]: 
['Damage to the dorsal columns on one side of the spinal cord would results in:'] </code></pre> <p>This <strong>does</strong> match questions and answers, and properly bypasses linebreaks that may interrupt questions or answers. The issue I am having is that they are not grouped the way I expected. I was expecting something along the lines of group[0] = question, answer[1:4] group[2] = question, answer[1:4]</p> <p>Does anyone have any advice?</p> <p>Thanks!</p>
<p>I think you were on the right track - I took a separate pass at your parser and came up with very similar constructs, but just a few differences.</p> <pre><code>question = Combine(q_start.suppress() + SkipTo(EOL + a_start)) answer = Combine(a_start.suppress() + SkipTo(EOL + (a_start | q_start | StringEnd()))) q_a = Group(question(&quot;question&quot;) + answer[1, ...](&quot;answers&quot;)) for t in q_a[...].parseString(test): print(t.dump()) </code></pre> <p>The biggest difference was that the expression I used to parse your text did not just do <code>OneOrMore(question | answer)</code>, but instead defined a <code>Group(question + OneOrMore(answer))</code>. This creates a group for each question and its related answers. In your parser, using listAllMatches just creates one results name for all the questions, and another for all the answers, but loses all the associations between them. By creating the &quot;question + one or more answers&quot; group, then these associations are maintained.</p> <p>If you want to remove the '\n's, you can do that more easily with a parse action than with the EOL business.</p>
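<p>A small sketch of the parse-action idea mentioned at the end (collapse_newlines is a hypothetical helper that folds the internal newlines of each matched question or answer into single spaces):</p> <pre><code>def collapse_newlines(tokens):
    # each Combine'd match arrives as one string in tokens[0]
    return ' '.join(tokens[0].split())

question.addParseAction(collapse_newlines)
answer.addParseAction(collapse_newlines)
</code></pre>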
python|regex|pyparsing
0
1,906,008
68,907,849
How to infer the shape of the output when connecting convolution layer with dense layers?
<p>I am trying to construct a Convolutional Neural Network using <code>pytorch</code> and can not understand how to interpret the input neurons for the first densely connected layer. Say, for example, I have the following architecture:</p> <pre><code>self.conv_layer = nn.Sequential( nn.Conv2d(3, 32, 5), nn.Conv2d(32, 64, 5), nn.MaxPool2d(2, 2), nn.Conv2d(64, 128, 5), nn.Conv2d(128, 128, 5), nn.MaxPool2d(2, 2)) self.fc_layer = nn.Sequential( nn.Linear(X, 512), nn.Linear(512, 128), nn.Linear(128, 10)) </code></pre> <p>Here <code>X</code> would be the number of neurons in the first linear layer. So, do I need to keep track of the shape of the output tensor at each layer so that I can figure out <code>X</code>?</p> <p>Now, I can put the values in the formula <code>(W - F + 2P) / S + 1</code> and calculate the shape after each layer, that would be somewhat convenient.</p> <p>Isn't there something even more convenient which might do this automatically?</p>
<p>An easy solution would be to use <code>LazyLinear</code> layer: <a href="https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html" rel="noreferrer">https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html</a>.</p> <p>According to the documentation:</p> <blockquote> <p>A <code>torch.nn.Linear</code> module where <code>in_features</code> is inferred ... They will be initialized after the first call to <code>forward</code> is done and the module will become a regular <code>torch.nn.Linear</code> module. The <code>in_features</code> argument of the Linear is inferred from the <code>input.shape[-1]</code>.</p> </blockquote>
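<p>A minimal sketch of how that would slot into the question's model (layer sizes copied from the question; the added Flatten is an assumption about where the conv output gets flattened):</p> <pre><code>import torch.nn as nn

fc_layer = nn.Sequential(
    nn.Flatten(),
    nn.LazyLinear(512),   # in_features is inferred on the first forward pass
    nn.Linear(512, 128),
    nn.Linear(128, 10))
</code></pre>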
python|pytorch|conv-neural-network
5
1,906,009
62,065,336
Classification method for unevenly spread data
<p>I have several datasets with very unevenly distributed values: Most values are very low, but a few are very high, for example, in the histogram screenshot or even more extreme.</p> <p>I am actually interested in the differences in the high values.</p> <p>So what I am looking for is a classification method that sets many break values where there are few data values and large classes where there are many values. Maybe something like a reversed quantile classification.</p> <p>Do you have a suggestion on which algorithm could help with this task, preferably in Python?</p> <p><a href="https://i.stack.imgur.com/fEBQj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fEBQj.png" alt="enter image description here"></a></p>
<p>If you are using pandas, couldn't you just select the values above your chosen threshold and analyze the difference separately?</p> <pre><code>import pandas as pd df = pd.DataFrame(your_data) df_to_analyze_large_values = df[df.your_Column_of_interest &gt; 100000] </code></pre>
python|statistics|classification|distribution
0
1,906,010
35,416,209
How to save trajectories of tracked objects with trackpy?
<p>I am testing <a href="http://soft-matter.github.io/trackpy/stable/" rel="nofollow">http://soft-matter.github.io/trackpy/stable/</a></p> <p>You can access my image data here: <a href="http://goo.gl/fMv5oE" rel="nofollow">http://goo.gl/fMv5oE</a></p> <p>My code for tracking objects in subsequent video images is:</p> <pre><code>import matplotlib.pyplot as plt plt.rcParams['image.cmap'] = 'gray' # Set grayscale images as default. import trackpy as tp import pims v = pims.ImageSequence('F:/*.png') f = tp.batch(v[:100],diameter=21,threshold=25) t = tp.link_df(f, 5) </code></pre> <p>How can I save <code>t</code>? (I am new to Python)</p>
<p>As a rule of thumb you can serialize objects using <a href="https://docs.python.org/2/library/pickle.html" rel="nofollow">Pickle</a>. </p> <pre><code>import pickle pickle.dump(t,open("filename.pck","wb")) </code></pre> <p>Also looking at the <a href="http://soft-matter.github.io/trackpy/stable/tutorial/walkthrough.html?highlight=save#preview-of-advanced-features" rel="nofollow">documentation of TrackPy</a> you can find some ways to store data as a pandas DataFrame.</p>
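<p>Since t is already a pandas DataFrame, a sketch of that pandas route (the file name is just an example):</p> <pre><code>import pandas as pd

t.to_csv('trajectories.csv', index=False)   # save the linked trajectories
t = pd.read_csv('trajectories.csv')         # load them back later
</code></pre>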
python|trackpy
2
1,906,011
59,677,496
'numpy.float64' object has no attribute 'abs'
<p>I have this dataframe and code.</p> <pre><code>from pandas import DataFrame import pandas as pd import numpy as np df = pd.DataFrame({'userId': [10,20,10,20,10,20,10,20], 'movieId': [500,500,800,800,700,700,1100,1100], 'ratings': [4.5,4.5,2.0,2.0,4.0,1.5,3.5,2.5]}) def finding_rating(df): r = df.pivot(index="movieId",columns="userId") r.columns = ["u1","u2"] r["drate"] = r.u1.sub(r.u2).abs() v = r.drate.iloc[:-1].mean()-r.drate.iloc[-1].abs() print(r,v) finding_rating(df) </code></pre> <p>I'm trying to take the abs() value of <code>v</code> but it's giving this error: <code>'numpy.float64' object has no attribute 'abs'</code></p>
<p>Because working with scalars use:</p> <pre><code>v = abs(r.drate.iloc[:-1].mean()-r.drate.iloc[-1]) </code></pre> <p>Or</p> <pre><code>v = (r.drate.iloc[:-1].mean()-r.drate.iloc[-1]).__abs__() </code></pre>
python|pandas|numpy
7
1,906,012
60,320,137
Python 3 urllib.request data struggles
<p>Can anyone explain me why 'version 1' of my code doesn't work and 'version 2' does? Output of both versions are below. For some reason the <code>my_data</code> is incorrectly inserted in the urllib request but I cannot figure out why. I tried 20 different examples and methods also straight from Python documentation but it's no bueno. It's not only the 'action' key, also when i want to insert the username and password as keys it's no success. Scratching my brain here...</p> <pre><code>import urllib.request import urllib.error import urllib.parse my_url = 'http://{0}:{1}/?username=myuser&amp;password=mypass'.format('10.10.127.47', 80) my_headers = { "Content-Type" : "application/x-www-form-urlencoded" } # ---- begin version 1 / not working ---------------------------------------------- my_data = { "action" : "getmetadata" } my_uedata = urllib.parse.urlencode(my_data) my_edata = my_uedata.encode('utf-8') req = urllib.request.Request(url=my_url, data=my_edata, headers=my_headers) # ---- end version 1 -------------------------------------------------------------- # ---- begin version 2 / works fine ----------------------------------------------- req = urllib.request.Request(url=''.join([my_url, '&amp;action=getmetadata']), data=None, headers=my_headers) # ---- end version 2 -------------------------------------------------------------- response = urllib.request.urlopen(req) html = response.read() print(html) </code></pre> <p>Version 1 output: </p> <pre><code>b'{"schemaVersion":"3.0.0","action":"Unknown","actionDetail":null,"userName":"myuser","password":"mypass","metadata":[],"configurations":[],"commandItems":[]}' </code></pre> <p>Version 2 output:</p> <pre><code>b'{"schemaVersion":"3.0.0","action":"GetMetadata","actionDetail":null,"userName":"myuser","password":"mypass","metadata":[{"key":"permissions","value":[]},{"key":"Title","value":"Blabla"},{"key":"Description","value":"fjshkfsdhskjhfsk"},{"key":"Keyword","value":""},{"key":"Learningdescription","value":""},{"key":"Rightsdescription","value":"Creative Commons"},{"key":"SessionId","value":""},{"key":"PauseAndResumeVideoTime","value":""},{"key":"VideoSegments","value":""},{"key":"VideoTrackPosition","value":"0,25:50,75;50,25:100,75"},{"key":"SlideTrackPositionIndex","value":"2"},{"key":"Coverage","value":""},{"key":"Language","value":"en"},{"key":"Structure","value":"linear"},{"key":"Aggregationlevel","value":"3"},{"key":"SubjectAreas","value":"NBC1"},{"key":"Version","value":"1.0"},{"key":"Status","value":"final"},{"key":"EducationalLearningResourceType","value":"informatiebron"},{"key":"intendedenduserrole","value":"learner"},{"key":"EducationalContext","value":"HBO"},{"key":"Typicalagerange","value":"18-24"},{"key":"Difficulty","value":"medium"},{"key":"Typicallearningtime","value":""},{"key":"EducationalLanguage","value":"en"},{"key":"Cost","value":"No"},{"key":"Copyrights","value":"Yes"},{"key":"Showincatalogue","value":"True"},{"key":"PublishDate","value":""},{"key":"ExpirationTime","value":"730"},{"key":"Duration","value":"0"},{"key":"contributors","value":[]}],"configurations":[],"commandItems":[]}' </code></pre>
<p>Looks like in the first version you're specifying a payload that is to be sent, but eventually is not used, since <code>urllib.request.Request</code> assumes <code>GET</code> as default method, which does not include any body. If you'd like to send the payload, please specify as constructor argument <code>method='POST'</code>, which will allow the server to read the body of your request.</p> <p>In the second scenario, you're passing the payload as URL parameters, which are normally recognized by <code>GET</code> method.</p> <p>For reference, please see <a href="https://docs.python.org/3/library/urllib.request.html#urllib.request.Request" rel="nofollow noreferrer">the documentation</a>.</p>
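<p>A sketch of the version-2 route with the query string built by urlencode instead of hand-concatenation (host, credentials and action are the ones from the question):</p> <pre><code>import urllib.parse
import urllib.request

my_headers = {'Content-Type': 'application/x-www-form-urlencoded'}
params = urllib.parse.urlencode({'username': 'myuser', 'password': 'mypass',
                                 'action': 'getmetadata'})
req = urllib.request.Request(url='http://10.10.127.47:80/?' + params,
                             headers=my_headers)
response = urllib.request.urlopen(req)
print(response.read())
</code></pre>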
python|python-requests|urllib
0
1,906,013
67,836,561
convert a timestamp in a dictionary to a date in Python
<p>I would like to convert the dates in the dictionary to a date format instead of a timestamp</p> <pre><code>import pandas as pd import numpy as np from datetime import datetime rng = pd.date_range('2015-02-24', periods=5, freq='D') df = pd.DataFrame({ 'Date': rng, 'Val' : np.random.randn(len(rng))}) df_dict=dict(df.values) df_dict </code></pre>
<p>Do you need the <code>dataframe</code>?</p> <p>This will create a dictionary with the dates in <code>YYYY-MM-DD</code> format.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np from datetime import datetime rng = pd.date_range('2015-02-24', periods=5, freq='D') val = np.random.randn(len(rng)) a_dict = {k.strftime('%Y-%m-%d'): val[i] for i, k in enumerate(rng)} print(a_dict) &quot;&quot;&quot; Example Output {'2015-02-24': -0.07282471983155527, '2015-02-25': -0.41013201628459295, '2015-02-26': -0.30191959095195636, '2015-02-27': 2.294166235809919, '2015-02-28': 1.465762794927064} &quot;&quot;&quot; </code></pre> <p>If you do need the <code>dataframe</code> this will convert the 'timestamp' column to a date.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np from datetime import datetime rng = pd.date_range('2015-02-24', periods=5, freq='D') df = pd.DataFrame({ 'Date': rng, 'Val' : np.random.randn(len(rng))}) df['Date'] = df['Date'].dt.date df_dict=dict(df.values) print(df_dict) &quot;&quot;&quot; Example Output {datetime.date(2015, 2, 24): 0.9373082567531855, datetime.date(2015, 2, 25): -1.0216555373785885, datetime.date(2015, 2, 26): -0.678286157996744, datetime.date(2015, 2, 27): 1.6241576949559353, datetime.date(2015, 2, 28): 0.9808792692339873} &quot;&quot;&quot; </code></pre> <p>If you want a formatted date you could use this.</p> <pre class="lang-py prettyprint-override"><code>df['Date'] = df['Date'].dt.strftime('%Y-%m-%d') &quot;&quot;&quot; Example Output {'2015-02-24': -0.7589858965910741, '2015-02-25': -0.2388121120855477, '2015-02-26': 0.8886907324406109, '2015-02-27': -0.7603131636217634, '2015-02-28': 0.675297144576459} &quot;&quot;&quot; </code></pre>
python|pandas|date|dictionary
0
1,906,014
66,992,375
Kafka consumer.poll hangs with bitnami container
<p>I have the latest bitnami kafka container installed on a remote server.</p> <pre><code>[2021-04-07 18:05:38,263] INFO Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:53 GMT (org.apache.zookeeper.ZooKeeper) [2021-04-07 18:05:40,137] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser) </code></pre> <p>My kafka is configured so that I can have external connections.</p> <pre><code>kafka: image: 'bitnami/kafka:latest' container_name: kafka ports: - '9092:9092' - '9093:9093' environment: - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181 - ALLOW_PLAINTEXT_LISTENER=yes - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CLIENT:PLAINTEXT,EXTERNAL:PLAINTEXT - KAFKA_CFG_LISTENERS=CLIENT://:9092,EXTERNAL://:9093 - KAFKA_CFG_ADVERTISED_LISTENERS=CLIENT://kafka:9092,EXTERNAL://localhost:9093 - KAFKA_INTER_BROKER_LISTENER_NAME=CLIENT </code></pre> <p>ping and telnet to the ip address both work.</p> <p>I am able to run a producer and send data in python.</p> <pre><code>import kafka from time import sleep from json import dumps from kafka import KafkaProducer from kafka import KafkaConsumer #Producer--------------------------------------------------------------- producer = KafkaProducer(bootstrap_servers=['192.xxx.xx.xx:9093'], value_serializer=lambda x: dumps(x).encode('utf-8')) producer.send('TestTopic1', value='MyTest') </code></pre> <p>But, I am unable to consume the data. The script hangs at consumer.poll and never changes lines.</p> <pre><code>import kafka from time import sleep from json import dumps from kafka import KafkaConsumer # Consumer--------------------------------------------------------------- consumer = KafkaConsumer( 'TestTopic1', bootstrap_servers=['192.xxx.xx.xx:9093'], auto_offset_reset='earliest', enable_auto_commit=False, group_id='testgroup', value_deserializer=lambda x : loads(x.decode('utf-8'))) #I've tried both with group_id to None or with a group_id. print('BEFORE subscribe: ') consumer.subscribe(['TestTopic1']) print('BEFORE poll: ') # HANGS HERE!! 
Never gets to the print after consumer.poll(timeout_ms=500) print('AFTER POLL: ') consumer.seek_to_beginning() print('partitions of the topic: ', consumer.partitions_for_topic('TestTopic1')) for msg in consumer: print(type(msg)) </code></pre> <p>In the Kafka logs, I see the Topic getting created as well as other lines that I'm not quite sure what they mean.</p> <pre><code>[2021-04-07 18:05:40,234] INFO [broker-1001-to-controller-send-thread]: Recorded new controller, from now on will use broker 1001 (kafka.server.BrokerToControllerRequestThread) [2021-04-07 18:06:37,509] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment Map(23 -&gt; ArrayBuffer(1001), 32 -&gt; ArrayBuffer(1001), 41 -&gt; ArrayBuffer(1001), 17 -&gt; ArrayBuffer(1001), 8 -&gt; ArrayBuffer(1001), 35 -&gt; ArrayBuffer(1001), 44 -&gt; ArrayBuffer(1001), 26 -&gt; ArrayBuffer(1001), 11 -&gt; ArrayBuffer(1001), 29 -&gt; ArrayBuffer(1001), 38 -&gt; ArrayBuffer(1001), 47 -&gt; ArrayBuffer(1001), 20 -&gt; ArrayBuffer(1001), 2 -&gt; ArrayBuffer(1001), 5 -&gt; ArrayBuffer(1001), 14 -&gt; ArrayBuffer(1001), 46 -&gt; ArrayBuffer(1001), 49 -&gt; ArrayBuffer(1001), 40 -&gt; ArrayBuffer(1001), 13 -&gt; ArrayBuffer(1001), 4 -&gt; ArrayBuffer(1001), 22 -&gt; ArrayBuffer(1001), 31 -&gt; ArrayBuffer(1001), 16 -&gt; ArrayBuffer(1001), 7 -&gt; ArrayBuffer(1001), 43 -&gt; ArrayBuffer(1001), 25 -&gt; ArrayBuffer(1001), 34 -&gt; ArrayBuffer(1001), 10 -&gt; ArrayBuffer(1001), 37 -&gt; ArrayBuffer(1001), 1 -&gt; ArrayBuffer(1001), 19 -&gt; ArrayBuffer(1001), 28 -&gt; ArrayBuffer(1001), 45 -&gt; ArrayBuffer(1001), 27 -&gt; ArrayBuffer(1001), 36 -&gt; ArrayBuffer(1001), 18 -&gt; ArrayBuffer(1001), 9 -&gt; ArrayBuffer(1001), 21 -&gt; ArrayBuffer(1001), 48 -&gt; ArrayBuffer(1001), 3 -&gt; ArrayBuffer(1001), 12 -&gt; ArrayBuffer(1001), 30 -&gt; ArrayBuffer(1001), 39 -&gt; ArrayBuffer(1001), 15 -&gt; ArrayBuffer(1001), 42 -&gt; ArrayBuffer(1001), 24 -&gt; ArrayBuffer(1001), 6 -&gt; ArrayBuffer(1001), 33 -&gt; ArrayBuffer(1001), 0 -&gt; ArrayBuffer(1001)) (kafka.zk.AdminZkClient) [2021-04-07 18:06:37,534] INFO [KafkaApi-1001] Auto creation of topic __consumer_offsets with 50 partitions and replication factor 1 is successful (kafka.server.KafkaApis) [2021-04-07 18:06:37,547] INFO Creating topic TestTopic1 with configuration {} and initial partition assignment Map(0 -&gt; ArrayBuffer(1001)) (kafka.zk.AdminZkClient) [2021-04-07 18:06:37,557] INFO [KafkaApi-1001] Auto creation of topic TestTopic1 with 1 partitions and replication factor 1 is successful (kafka.server.KafkaApis) [2021-04-07 18:06:37,906] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-38, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-13, __consumer_offsets-2, __consumer_offsets-43, 
__consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) [2021-04-07 18:06:37,979] INFO [Log partition=__consumer_offsets-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [2021-04-07 18:06:37,991] INFO Created log for partition __consumer_offsets-0 in /bitnami/kafka/data/__consumer_offsets-0 with properties {compression.type -&gt; producer, min.insync.replicas -&gt; 1, message.downconversion.enable -&gt; true, segment.jitter.ms -&gt; 0, cleanup.policy -&gt; compact, flush.ms -&gt; 9223372036854775807, retention.ms -&gt; 604800000, segment.bytes -&gt; 104857600, flush.messages -&gt; 9223372036854775807, message.format.version -&gt; 2.7-IV2, max.compaction.lag.ms -&gt; 9223372036854775807, file.delete.delay.ms -&gt; 60000, max.message.bytes -&gt; 1048588, min.compaction.lag.ms -&gt; 0, message.timestamp.type -&gt; CreateTime, preallocate -&gt; false, index.interval.bytes -&gt; 4096, min.cleanable.dirty.ratio -&gt; 0.5, unclean.leader.election.enable -&gt; false, retention.bytes -&gt; -1, delete.retention.ms -&gt; 86400000, segment.ms -&gt; 604800000, message.timestamp.difference.max.ms -&gt; 9223372036854775807, segment.index.bytes -&gt; 10485760}. (kafka.log.LogManager) [2021-04-07 18:06:37,992] INFO [Partition __consumer_offsets-0 broker=1001] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) [2021-04-07 18:06:37,994] INFO [Partition __consumer_offsets-0 broker=1001] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) [2021-04-07 18:06:38,011] INFO [Log partition=__consumer_offsets-29, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) </code></pre> <p>Then I have another series of this type of messages in my log.</p> <pre><code>[2021-04-07 18:06:38,563] INFO Created log for partition __consumer_offsets-13 in /bitnami/kafka/data/__consumer_offsets-13 with properties {compression.type -&gt; producer, min.insync.replicas -&gt; 1, message.downconversion.enable -&gt; true, segment.jitter.ms -&gt; 0, cleanup.policy -&gt; compact, flush.ms -&gt; 9223372036854775807, retention.ms -&gt; 604800000, segment.bytes -&gt; 104857600, flush.messages -&gt; 9223372036854775807, message.format.version -&gt; 2.7-IV2, max.compaction.lag.ms -&gt; 9223372036854775807, file.delete.delay.ms -&gt; 60000, max.message.bytes -&gt; 1048588, min.compaction.lag.ms -&gt; 0, message.timestamp.type -&gt; CreateTime, preallocate -&gt; false, index.interval.bytes -&gt; 4096, min.cleanable.dirty.ratio -&gt; 0.5, unclean.leader.election.enable -&gt; false, retention.bytes -&gt; -1, delete.retention.ms -&gt; 86400000, segment.ms -&gt; 604800000, message.timestamp.difference.max.ms -&gt; 9223372036854775807, segment.index.bytes -&gt; 10485760}. 
(kafka.log.LogManager) [2021-04-07 18:06:38,563] INFO [Partition __consumer_offsets-13 broker=1001] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) [2021-04-07 18:06:38,563] INFO [Partition __consumer_offsets-13 broker=1001] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) [2021-04-07 18:06:38,577] INFO [GroupMetadataManager brokerId=1001] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager) [2021-04-07 18:06:38,579] INFO [GroupMetadataManager brokerId=1001] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager) [2021-04-07 18:06:38,579] INFO [GroupMetadataManager brokerId=1001] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager) .... [2021-04-07 18:06:38,589] INFO [GroupMetadataManager brokerId=1001] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2021-04-07 18:06:38,596] INFO [GroupMetadataManager brokerId=1001] Finished loading offsets and group metadata from __consumer_offsets-25 in 17 milliseconds, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) [2021-04-07 18:06:38,597] INFO [GroupMetadataManager brokerId=1001] Finished loading offsets and group metadata from __consumer_offsets-28 in 18 milliseconds, of which 17 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) .... [2021-04-07 18:06:38,638] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(TestTopic1-0) (kafka.server.ReplicaFetcherManager) [2021-04-07 18:06:38,643] INFO [Log partition=TestTopic1-0, dir=/bitnami/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.Log) [2021-04-07 18:06:38,644] INFO Created log for partition TestTopic1-0 in /bitnami/kafka/data/TestTopic1-0 with properties {compression.type -&gt; producer, min.insync.replicas -&gt; 1, message.downconversion.enable -&gt; true, segment.jitter.ms -&gt; 0, cleanup.policy -&gt; [delete], flush.ms -&gt; 9223372036854775807, retention.ms -&gt; 604800000, segment.bytes -&gt; 1073741824, flush.messages -&gt; 9223372036854775807, message.format.version -&gt; 2.7-IV2, max.compaction.lag.ms -&gt; 9223372036854775807, file.delete.delay.ms -&gt; 60000, max.message.bytes -&gt; 1048588, min.compaction.lag.ms -&gt; 0, message.timestamp.type -&gt; CreateTime, preallocate -&gt; false, index.interval.bytes -&gt; 4096, min.cleanable.dirty.ratio -&gt; 0.5, unclean.leader.election.enable -&gt; false, retention.bytes -&gt; -1, delete.retention.ms -&gt; 86400000, segment.ms -&gt; 604800000, message.timestamp.difference.max.ms -&gt; 9223372036854775807, segment.index.bytes -&gt; 10485760}. (kafka.log.LogManager) [2021-04-07 18:06:38,647] INFO [Partition TestTopic1-0 broker=1001] No checkpointed highwatermark is found for partition TestTopic1-0 (kafka.cluster.Partition) [2021-04-07 18:06:38,647] INFO [Partition TestTopic1-0 broker=1001] Log loaded for partition TestTopic1-0 with initial high watermark 0 (kafka.cluster.Partition) </code></pre> <p>I don't see anything related to the consumer in here.</p> <p>Note that this is only a dev server. 
We're supposed to use that as a proof of concept to see if Kafka works for us and see if we'll use it in prod.</p> <p>Any help would be appreciated as we'd really like to be able to make it work and use it in production.</p>
<blockquote> <p>installed on a remote server.</p> </blockquote> <p>Then you need to advertise <strong>that server's address</strong> in <code>KAFKA_CFG_ADVERTISED_LISTENERS</code>, just a port mapping isn't sufficient</p> <p>It's timing out because the bootstrap protocol returns the advertised address, so your remote consumer is trying to read from <code>localhost:9093</code></p> <p>Your producer would also have a similar issue, but you aren't flushing the producer to actually send data</p> <hr /> <p>If you productionize using Docker orchestration platforms, you'll need to work around other networking configurations</p>
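<p>A short sketch of that flush point (broker address, topic and value are the ones from the question, once the advertised listener points at the server's real address on the broker side):</p> <pre><code>from json import dumps
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=['192.xxx.xx.xx:9093'],
                         value_serializer=lambda x: dumps(x).encode('utf-8'))
producer.send('TestTopic1', value='MyTest')
producer.flush()   # block until the buffered record is actually delivered
</code></pre>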
python|docker|apache-kafka|kafka-consumer-api
1
1,906,015
66,833,728
Boxplots with Seaborn for all variables in a dataset at once
<p>I watched many videos, read the Seaborn documentation, checked many websites, but I still haven't found the answer to a question.</p> <p>This is from the Seaborn documentation:</p> <pre><code>iris = sns.load_dataset(&quot;iris&quot;) ax = sns.boxplot(data=iris, orient=&quot;h&quot;, palette=&quot;Set2&quot;) </code></pre> <p>This code creates boxplots for each numerical variable in a single graph.</p> <p><img src="https://i.stack.imgur.com/zQEK3.png" alt="Boxplots for Iris dataset" /></p> <p>When I tried to add the hue= &quot;species&quot;, ValueError: Cannot use <code>hue</code> without <code>x</code> and <code>y</code>. Is there a way to do this with Seaborn? I want to see Boxplots of all the numerical variables and explore a categorical variable. So the graph will show all numerical variables for each species. Since there are 3 species, the total of Boxplots will be 12 (3 species times 4 numerical variables).</p> <p>I am learning about EDA (exploratory data analysis). I think the above graph will help me explore many variables at once.</p> <p>Thank you for taking the time to read my question!</p>
<p>To apply &quot;hue&quot;, seaborn needs the dataframe in <a href="http://seaborn.pydata.org/tutorial/data_structure.html#long-form-vs-wide-form-data" rel="nofollow noreferrer">&quot;long&quot; form</a>. <a href="https://pandas.pydata.org/docs/reference/api/pandas.melt.html" rel="nofollow noreferrer"><code>df.melt()</code></a> is a pandas function that can help here. It converts the numeric columns into 2 new columns: one called &quot;variable&quot; with the old name of the column, and one called &quot;value&quot; with the values. The resulting dataframe will be 4 times as long so that &quot;value&quot; can be used for <code>x=</code>, and &quot;variable&quot; for <code>y=</code>.</p> <p>The long form looks like:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>species</th> <th>variable</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>setosa</td> <td>sepal_length</td> <td>5.1</td> </tr> <tr> <td>1</td> <td>setosa</td> <td>sepal_length</td> <td>4.9</td> </tr> <tr> <td>2</td> <td>setosa</td> <td>sepal_length</td> <td>4.7</td> </tr> <tr> <td>3</td> <td>setosa</td> <td>sepal_length</td> <td>4.6</td> </tr> <tr> <td>4</td> <td>setosa</td> <td>sepal_length</td> <td>5.0</td> </tr> <tr> <td></td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> </div> <pre class="lang-py prettyprint-override"><code>import seaborn as sns from matplotlib import pyplot as plt iris = sns.load_dataset(&quot;iris&quot;) iris_long = iris.melt(id_vars=['species']) ax = sns.boxplot(data=iris_long, x=&quot;value&quot;, y=&quot;variable&quot;, orient=&quot;h&quot;, palette=&quot;Set2&quot;, hue=&quot;species&quot;) plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/eSx7r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eSx7r.png" alt="boxplot with hue" /></a></p>
python|pandas|matplotlib|seaborn|iris-dataset
4
1,906,016
65,904,886
Print only once after max is reached
<p>I need to execute the final <code>print</code> statement only once, after <code>max_rows</code> is reached.</p> <p>Let's say you need to fill a list with 30 elements, but only want to print the first 20 times, then simply want to keep appending to the list without printing anything, except &quot;Truncating...&quot; (once).</p> <p>This is an ugly way to accomplish this. What could be a more elegant approach?</p> <pre><code>max_rows = 20 max_rows_exceeded = False rows = 0 i = 0 my_list = list() while i &lt; 30: i += 1 if rows &lt; max_rows: print(&quot;rows &lt; max_rows - %s&quot; % rows) rows += 1 else: max_rows_exceeded = True my_list.append(i) if max_rows_exceeded: print(&quot;Truncating due to max displayable row number exceeded: %s&quot; % max_rows) </code></pre>
<p>There is no reason to check if the rows exceeded <em><strong>inside</strong></em> the loop. You can check this <em>&quot;mathematically&quot;</em> after the loop:</p> <pre class="lang-py prettyprint-override"><code>max_rows = 20 my_list = [] upper_limit = 30 for rows in range(upper_limit): if rows &lt; max_rows: print(&quot;rows &lt; max_rows - %s&quot; % rows) my_list.append(rows) if max_rows &lt; upper_limit: print(&quot;Truncating due to max displayable row number exceeded: %s&quot; % max_rows) </code></pre>
python|python-3.x
1
1,906,017
65,883,869
Can I override fields from a Pydantic parent model to make them optional?
<p>I have two pydantic classes like this.</p> <pre><code>class Parent(BaseModel): id: int name: str email: str class ParentUpdate(BaseModel): id: Optional[int] name: Optional[str] email: Optional[str] </code></pre> <p>Both of these are practically the same but the <code>Parent</code> class makes all fields required. I want to use the <code>Parent</code> class for POST request body in FastAPI, hence all fields should be required. But I want to use the latter for PUT request body since the user can set selective fields and the remaining stays the same. I have taken a look at <a href="https://pydantic-docs.helpmanual.io/usage/models/#required-optional-fields" rel="nofollow noreferrer">Required Optional Fields</a> but they do not correspond to what I want to do.</p> <p>If there was a way I could inherit the <code>Parent</code> class in <code>ParentUpdate</code> and modified all the fields in <code>Parent</code> to make them <code>Optional</code> that would reduce the clutter. Additionally, there are some validators present in the <code>Parent</code> class which I have to rewrite in the <code>ParentUpdate</code> class which I also want to avoid.</p> <p>Is there any way of doing this?</p>
<p>You can make optional fields required in subclasses, but you cannot make required fields optional in subclasses. In fastapi author tiangolo's boilerplate projects, he utilizes a pattern like this for your example:</p> <pre><code>class ParentBase(BaseModel): &quot;&quot;&quot;Shared properties.&quot;&quot;&quot; name: str email: str class ParentCreate(ParentBase): &quot;&quot;&quot;Properties to receive on item creation.&quot;&quot;&quot; # dont need id here if your db autocreates it pass class ParentUpdate(ParentBase): &quot;&quot;&quot;Properties to receive on item update.&quot;&quot;&quot; # dont need id as you are likely PUTing to /parents/{id} # other fields should not be optional in a PUT # maybe what you are wanting is a PATCH schema? pass class ParentInDBBase(ParentBase): &quot;&quot;&quot;Properties shared by models stored in DB - !exposed in create/update.&quot;&quot;&quot; # primary key exists in db, but not in base/create/update id: int class Parent(ParentInDBBase): &quot;&quot;&quot;Properties to return to client.&quot;&quot;&quot; # optionally include things like relationships returned to consumer # related_things: List[Thing] pass class ParentInDB(ParentInDBBase): &quot;&quot;&quot;Additional properties stored in DB.&quot;&quot;&quot; # could be secure things like passwords? pass </code></pre> <p>Yes, I agree this is incredibly verbose and I wish it wasn't. You still likely end up with other schemas more specific to particular forms in your UI. Obviously, you can remove some of these as they aren't necessary in this example, but depending on other fields in your DB, they may be needed, or you may need to set defaults, validation, etc.</p> <p>In my experience for validators, you have to re-declare them but you can use a shared function, ie:</p> <pre><code>def clean_article_url(cls, v): return clean_context_url(v.strip()) class MyModel(BaseModel): article_url: str _clean_url = pydantic.validator(&quot;article_url&quot;, allow_reuse=True)(clean_article_url) </code></pre>
python|pydantic
9
1,906,018
65,551,840
How can I encode string data (convert to bytes) in Python 3.7
<p>I have a problem.</p> <p>I get data like:</p> <pre><code>hex_num='0EE6' data_decode=str(codecs.decode(hex_num, 'hex'))[(0):(80)] print(data_decode) &gt;&gt;&gt;b'\x0e\xe6' </code></pre> <p>And I want to encode this like:</p> <pre><code>data_enc=str(codecs.encode(data_decode, 'hex'))[(2):(6)] print(str(int(data_enc,16))) &gt;&gt;&gt;TypeError: encoding with 'hex' codec failed (TypeError: a bytes-like object is required, not 'str') </code></pre> <p>If I write this:</p> <pre><code>data_enc=str(codecs.encode(b'\x0e\xe6', 'hex'))[(2):(6)] print(str(int(data_enc,16))) &gt;&gt;&gt;3814 </code></pre> <p>It will return the number I want (3814).</p> <p>Please help.</p>
<p>You can remove the quotation marks like this: <code>data = b'\x0e\xe6'</code></p> <p>The <a href="https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals" rel="nofollow noreferrer">Python 3 documentation</a> states:</p> <blockquote> <p>Bytes literals are always prefixed with 'b' or 'B'; they produce an instance of the bytes type instead of the str type. They may only contain ASCII characters; bytes with a numeric value of 128 or greater must be expressed with escapes.</p> </blockquote> <p>When <code>b</code> is within a string, it will not behave like a string literal prefix, so you have to remove the quotations for the literal to work, and convert the text to bytes directly.</p> <p>Corrected code:</p> <pre><code>import codecs data = b'\x0e\xe6' data_enc=str(codecs.encode(data, 'hex'))[(2):(6)] print(str(int(data_enc,16))) </code></pre> <p>Output:</p> <pre><code>3814 </code></pre>
python|string
1
1,906,019
64,466,795
How can I search for a string from one column in a dataframe in a string value in another column in Python?
<p><a href="https://i.stack.imgur.com/bOuCi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bOuCi.png" alt="enter image description here" /></a> I want to search 'search_value' in 'Search in string' column for every row and return TRUE if any of the words separated by | in search_value are present in 'Search in string'</p> <p>Thanks in advance</p>
<p>You could make good use of the keyword <code>any</code> and use <code>apply(lambda x:)</code> to your dataframe. It would result in this:</p> <pre><code>df['Flag'] = df.apply(lambda x: True if any(i in x['Search in string'].split() for i in x['Search_value'].split('|')) else False,axis=1) </code></pre> <p>This results in the expected output:</p> <pre><code> search_value ... Flag 0 civic|men|boy|furnishing|clothing|non durable|... ... True 1 environmental|cosmetic|beauty|perfume|apparel|... ... True </code></pre>
python|pandas|string|nlp|contains
1
1,906,020
70,559,429
How to get the same class variable of a list of class instances of the same class?
<p>I have a class which has some class variables, methods, etc. Let's call it <code>Cell</code>.</p> <pre><code>class Cell: def __init__(self): self.status = 0 ... </code></pre> <p>I have a list of different instances of this class.</p> <pre><code>grid = [Cell.Cell() for i in range(x_size*y_size)] </code></pre> <p>Is it possible to get the upper shown <code>status</code> variable of each of the instances stored in <code>grid</code> in a vectorized manner without looping through the elements of the list?</p>
<p>Not in vanilla Python.</p> <pre><code>statuses = [x.status for x in grid] </code></pre> <p>If you are looking for something that abstracts away the explicit iteration, or even just the <code>for</code> keyword, perhaps you'd prefer</p> <pre><code>from operator import attrgetter statuses = list(map(attrgetter('status'), grid)) </code></pre> <p>?</p>
python
1
1,906,021
50,000,399
File not found from Python although file exists
<p>I'm trying to load a simple text file with an array of numbers into Python. A MWE is</p> <pre><code>import numpy as np BASE_FOLDER = 'C:\\path\\' BASE_NAME = 'DATA.txt' fname = BASE_FOLDER + BASE_NAME data = np.loadtxt(fname) </code></pre> <p>However, this gives an error while running:</p> <pre><code>OSError: C:\path\DATA.txt not found. </code></pre> <p>I'm using VSCode, so in the debug window the link to the path is clickable. And, of course, if I click it the file opens normally, so this tells me that the path is correct.</p> <p>Also, if I do <code>print(fname)</code>, VSCode also gives me a valid path.</p> <p>Is there anything I'm missing?</p> <h1>EDIT</h1> <p>As per your (very helpful for future reference) comments, I've changed my code using the <code>os</code> module and raw strings:</p> <pre><code>BASE_FOLDER = r'C:\path_to_folder' BASE_NAME = r'filename_DATA.txt' fname = os.path.join(BASE_FOLDER, BASE_NAME) </code></pre> <p>Still results in error.</p> <h1>Second EDIT</h1> <p>I've tried again with another file. Very basic path and filename</p> <pre><code>BASE_FOLDER = r'Z:\Data\Enzo\Waste_Code' BASE_NAME = r'run3b.txt' </code></pre> <p>And again, I get the same error. If I try an alternative approach,</p> <pre><code>os.chdir(BASE_FOLDER) a = os.listdir() </code></pre> <p>then select the right file,</p> <pre><code>fname = a[1] </code></pre> <p>I still get the error when trying to import it. Even though I'm retrieving it directly from <code>listdir</code>.</p> <pre><code>&gt;&gt; os.path.isfile(a[1]) False </code></pre>
<p>Using the module <code>os</code> you can check the existence of the file within python by running </p> <pre><code>import os os.path.isfile(fname) </code></pre> <p>If it returns <code>False</code>, that means that your file doesn't exist in the specified fname. If it returns <code>True</code>, it should be read by <code>np.loadtxt()</code>.</p> <p><em>Extra: good practice working with files and paths</em></p> <p>When working with files it is advisable to use the amazing functionality built in the Base Library, specifically the module <code>os</code>. Where <code>os.path.join()</code> will take care of the joins no matter the operating system you are using.</p> <pre><code>fname = os.path.join(BASE_FOLDER, BASE_NAME) </code></pre> <p>In addition it is advisable to use raw strings by adding an <code>r</code> to the beginning of the string. This will be less tedious when writing paths, as it allows you to copy-paste from the navigation bar. It will be something like <code>BASE_FOLDER = r'C:\path'</code>. Note that you don't need to add the latest '\' as <code>os.path.join</code> takes care of it.</p>
python|file-io
4
1,906,022
66,635,697
Pandas Time Series Plot: Secondary y-Axis Label , Set Lower Limit
<p>I would like to have a time series plot including 2 columns - which works fine the way described below. I tried many ways to introduce a label for the secondary y axis with no success.</p> <pre><code>df.plot(kind='line', x='measurement_date', y=['crack_total', &quot;crack_inv_vel&quot;], secondary_y=[&quot;crack_inv_vel&quot;], xlabel=&quot;date&quot;, ylabel=&quot;crack width [m]&quot;, title=&quot;crack survey&quot;) </code></pre> <p>Example Plot as produced with the code above:-</p> <p><a href="https://i.stack.imgur.com/dSPc6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dSPc6.png" alt="enter image description here" /></a></p>
<p>Assign the return value of <code>df.plot</code> to a variable; it will be of type <code>matplotlib.axes._subplots.AxesSubplot</code>. Then <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.twinx.html" rel="nofollow noreferrer"><code>twinx</code></a> it to get access to the secondary y-axis. Then you can adjust label etc.:</p> <pre class="lang-py prettyprint-override"><code>ax = df.plot(kind=&quot;line&quot;, x=&quot;measurement_date&quot;, y=[&quot;crack_total&quot;, &quot;crack_inv_vel&quot;], secondary_y=[&quot;crack_inv_vel&quot;], xlabel=&quot;date&quot;, ylabel=&quot;crack width [m]&quot;, title=&quot;crack survey&quot;) ax_secondary = ax.twinx() ax_secondary.set_ylabel(&quot;secondary y-label&quot;) </code></pre>
python|pandas|axis-labels
0
1,906,023
64,838,481
How can I download videos from reddit using praw
<p>So I want to download videos from reddit, I've seen projects on github but I'm very new to python and don't know how it works, if someone could explain I'd appreciate it, I found this <a href="https://pypi.org/project/redvid/" rel="nofollow noreferrer">project</a> It works but it seperates audio and video and I want it all in one, I think you can combine those using ffmpeg but I don't know how that works either, also how do I configure some of this stuff like where the videos save and quality, here's my code.</p> <pre><code>from redvid import Downloader import praw reddit = praw.Reddit(client_id = &quot;a&quot;, client_secret = &quot;b&quot;, user_agent = &quot;c&quot;) subreddit = reddit.subreddit(&quot;learnpython&quot;) hot = subreddit.hot(limit=5) reddit1 = Downloader(max_q=True) for submission in hot: reddit1.url = submission.url reddit.download() </code></pre>
<p>There is a very easy-to-use library that downloads Reddit videos with sound. Just install <a href="https://pypi.org/project/RedDownloader/" rel="nofollow noreferrer">RedDownloader</a>:</p> <pre><code>pip install RedDownloader </code></pre> <p>Use it with:</p> <pre><code>from RedDownloader import RedDownloader file = RedDownloader.Download(url = &quot;url of post&quot; , output=&quot;output file name here&quot; , quality = 720) </code></pre> <p><code>quality</code> is the video quality; possible values are 360, 720 and 1080.</p>
python|python-3.x|ffmpeg|praw
4
1,906,024
64,019,438
Python Speech Recognition - Sphinx
<p>I am make a simple speech recognition program that will enable me to control my robot with voice command. I only want the program to look for certain words and be relatively fast. My project is based on Micheal Reeves 'I made a robot that shines a laser in my eye' and am trying to create something similar to the voice commands seen in his video.</p> <p>The issue I am having is that sphinx is fast, but (EDIT: NOT ACCURATE). As well as this, when I enable keywords, the out goes weird. If I say command shutdown, the output will be :</p> <pre><code>&quot;three nine one four five eight two one eight nine three four two six zero eight nine two one six four eight seven one three four nine five eight two eight four five nine three one two eight six nine three five seven two zero one nine five eight two four four nine one five eight three two six four two zero seven one nine three four five eight two five one three four eight two six eight zero one three four five two seven eight eight three nine five two four eight one two eight two eight two eight command shutdown command eight one four three eight two two eight &quot; </code></pre> <p>I am not sure to fix this and I tried to do recognise_google but and it was much more accurate but really slow and i want the keywords enabled so that it only checks if a said a collection of words and then prints it the the screen if i did.</p> <p>The other issue I am having is with the listen_in_background() function. I cant seem to get it working properly.</p> <p>Here is my code:</p> <pre><code>import speech_recognition as sr import pocketsphinx keywords = [ (&quot;command&quot;, 1), (&quot;one&quot;, 0), (&quot;two&quot;, 0), (&quot;three&quot;, 0), (&quot;four&quot;, 0), (&quot;five&quot;, 0), (&quot;six&quot;, 0), (&quot;seven&quot;, 0), (&quot;eight&quot;, 0), (&quot;nine&quot;, 0), (&quot;zero&quot;, 0), (&quot;command x axis add&quot;, 0), (&quot;command y axis add&quot;, 0), (&quot;command x axis subtract&quot;, 0), (&quot;command y axis subtract&quot;, 0), (&quot;command clear shift string&quot;, 0), (&quot;command shutdown&quot;, 0), (&quot;command flip tracking&quot;, 0), (&quot;command pause&quot;, 0), (&quot;command detect face&quot;, 0), (&quot;command detect body&quot;, 0) ] def speech2text(): r = sr.Recognizer() with sr.Microphone() as source: r.adjust_for_ambient_noise(source) audio = r.listen(source) #this is were i want to listen in the background to run it at the same #time as other code try: data = r.recognize_sphinx(audio, keyword_entries = keywords) return data except: return &quot;Error...&quot; while True: print(speech2text()) </code></pre>
<p>I had the same issue. I tried different sensitivities from 0 to 1 and found that if all keywords have a sensitivity of over 0.9, they recognize equally and reasonably accurately and don't randomly spam the output phrase. If the value is any lower than that, it spits out far more keywords than is reasonable.</p> <p>I also got an UnknownValueError whenever a word that wasn't a keyword was spoken. If you're looking for a way to only detect those keywords, I'd definitely try setting their sensitivities all to 1 and seeing where that gets you. I think the only downside might be that if words in the keyword list are similar, you might get different hits than you'd expect.</p>
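<p>A minimal sketch of that tweak, reusing the keyword list and <code>recognize_sphinx</code> call from the question — nothing new here beyond raising every sensitivity to 1.0:</p> <pre><code># raise all keyword sensitivities to 1.0
keywords = [(phrase, 1.0) for phrase, _ in keywords]

def speech2text():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        r.adjust_for_ambient_noise(source)
        audio = r.listen(source)
    try:
        # only phrases from the keyword list should come back now
        return r.recognize_sphinx(audio, keyword_entries=keywords)
    except sr.UnknownValueError:
        # raised when nothing in the keyword list was recognized
        return 'Error...'
</code></pre>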
python|performance|speech-recognition|pocketsphinx
0
1,906,025
53,329,830
Python/Django: how do I determine the class to specify in an "except" statement?
<p>I am working with some Python/Django code that I inherited. In it there is a DB call surrounded by a try block, with an "except Exception ex" statement to catch all exceptions. I want to be more selective, and catch only the exception type that I anticipate. By examining the Exception object that is caught, I can tell that it's of type "DatabaseError". </p> <p>The comments in my code below show the many things I have tried, based on Googling the question and searching here, along with the errors that Python gives me when I try them. Most frustrating is that I've found plenty of examples on the web of code the people say works, that looks just like what I'm trying. But the code samples I find generally don't include the "import" lines.</p> <p>I suspect I need to import something more, but I can't figure out what.</p> <pre><code>import cx_Oracle ## import cx_Oracle.DatabaseError # creates an error saying "No name 'DatabaseError in module 'cx_Oracle'" ## from cx_Oracle import DatabaseError # creates an error saying "No name 'DatabaseError in module 'cx_Oracle'" . . . connection = connections['ims_db'] sqlUpdateQuery = "SELECT * FROM LOCKING WHERE IDENT={0} AND KEY_ID='SL_KEY' FOR UPDATE NOWAIT".format(self.serviceUid) cursor = connection.cursor() try: cursor.execute(sqlUpdateQuery) # this gets the error "Undefined variable 'Database'" ## except Database.DatabaseError as dbe3: # this gets the error "Undefined variable 'Oracle'" ## except Oracle.DatabaseError as dbe2: # this gets the error "Module 'cx_Oracle' has no 'DatabaseError' member" ## except cx_Oracle.DatabaseError as dbe: # this gets the error "Undefined variable 'DatabaseError'" ## except DatabaseError as dbe: # this gets the error "Catching an exception which doesn't inherit from BaseException: cx_Oracle" ## except cx_Oracle as dbe: # this gets the error "Module cx_Oracle has no '_error' member" ## except cx_Oracle._Error as dbe: except Exception as ex: # This gets the error "Module cx_Oracle has no 'DatabaseError' member" ## if isinstance(ex, cx_Oracle.DatabaseError) : # This gets the error "Undefined variable 'DatabaseError'" ## if isinstance(ex, DatabaseError) : className = type(ex).__name__ # This logs "... class = DatabaseError ..." log.error("Exception (class = {1}) while locking service {0}".format(self.serviceUid, className)) args = ex.args arg0=args[0] # arg0ClassName is "_Error" arg0ClassName = type(arg0).__name__ code = arg0.code # codeClassName is "int" codeClassName = type(code).__name__ msg = "Exception, code = {0}".format(code) log.debug(msg) raise </code></pre>
<p>The correct syntax is as follows:</p> <pre><code>try: cur.execute("some_sql_statement") except cx_Oracle.DatabaseError as e: error, = e.args print("CONTEXT:", error.context) print("MESSAGE:", error.message) </code></pre> <p>You can see that syntax in a few of the samples (like TypeHandlers.py) that you can find here: <a href="https://github.com/oracle/python-cx_Oracle/tree/master/samples" rel="nofollow noreferrer">https://github.com/oracle/python-cx_Oracle/tree/master/samples</a>.</p> <p>Try running the samples and working with them to see if you can resolve your issue. If not, please create an issue containing a complete runnable sample here: <a href="https://github.com/oracle/python-cx_Oracle/issues" rel="nofollow noreferrer">https://github.com/oracle/python-cx_Oracle/issues</a>.</p>
python|exception|exception-handling|cx-oracle
1
1,906,026
68,659,748
How to merge items based on first element in a python nested list and at the same time sum specific items and combine last item by dashes?
<p>I am new to Python and I am working on nested list in python. I am actually trying to integrate multiple operations within a nested list of items. I am not sure how to execute everything to achieve intended output.</p> <p>I have this nested list</p> <pre><code>lst = [['ABC', 'A-1', 10, 1], ['BCD', 'B-1', 5, 1], ['ABC', 'A-1', 15, 2], ['ABC', 'B-1', 3, 3], ['BCD', 'B-1', 20, 4], ['ABC', 'A-1', 5, 4]] </code></pre> <p>I am planning to do the following things within the nested list.</p> <ol> <li><p>If the first two elements of list within the nested list are same, then I have to sum the third element and combine fourth element by dashes(-). So, for example, three of the lists within the nested lists would merge and become</p> <p>['ABC', 'A-1', 30, '1-2-4']</p> </li> <li><p>Then, For each of the existing lists, if first element is same, I had to merge them based on first element. So I get the final output as mentioned below.</p> <p>output = [['ABC', ['A-1', 30, '1-2-4'], ['B-1', 3, '3']], ['BCD', ['B-1', 25, '1-4']]]</p> </li> </ol>
<p>You can use <code>collections.defaultdict</code>:</p> <pre><code>import collections lst = [['ABC', 'A-1', 10, 1], ['BCD', 'B-1', 5, 1], ['ABC', 'A-1', 15, 2], ['ABC', 'B-1', 3, 3], ['BCD', 'B-1', 20, 4], ['ABC', 'A-1', 5, 4]] d = collections.defaultdict(dict) for a, b, *c in lst: if b not in d[a]: d[a][b] = [c] else: d[a][b].append(c) result = [[a, *[[j, sum((l:=[*zip(*k)])[0]), '-'.join(map(str, l[1]))] for j, k in b.items()]] for a, b in d.items()] </code></pre> <p>Output:</p> <pre><code>[['ABC', ['A-1', 30, '1-2-4'], ['B-1', 3, '3']], ['BCD', ['B-1', 25, '1-4']]] </code></pre>
python|merge|nested-lists
0
1,906,027
68,836,335
Delete post in django
<p>I know a similar kind of question has been asked before, but I was not able to get it working. This is my first project as a Django beginner.</p> <p>In my Django blog app, I made a delete button but it is not working. I have been looking for answers and trying different methods from the web, but it did not help.</p> <p>What I am trying to do is: when the admin opens the post and clicks the delete button, it should take the post id, delete that post and redirect to the home page, but it is not working as expected. So, lastly, I came here. Any help will be appreciated. Thanks!</p> <p>This is my urls.py file:</p> <pre><code>from django.urls import path from . import views urlpatterns = [ path('', views.index, name='index'), path('post/&lt;int:pk&gt;', views.post, name='post'), path('about', views.about, name='about'), path('contact_us', views.contact_us, name='contact_us'), path('register', views.register, name='register'), path('login', views.login, name='login'), path('logout', views.logout, name='logout'), path('create_post', views.create_post, name='create_post'), path('delete_post', views.delete_post, name='delete_post') ] </code></pre> <p>This is my views.py file:</p> <pre><code>def delete_post(request, *args, **kwargs): pk = kwargs.get('pk') post = get_object_or_404(Post, pk=pk) if request.method == 'POST': post.delete() return redirect('/') return render(request, 'delete-post.html') </code></pre> <p>This is the delete post HTML form:</p> <pre><code>&lt;form action=&quot;{% url 'delete_post' post.id %}&quot; method=&quot;post&quot;&gt; {% csrf_token %} &lt;input type=&quot;submit&quot; value=&quot;Delete post&quot;&gt; &lt;/form&gt; </code></pre> <p>Delete button:</p> <pre><code>&lt;a href=&quot;delete_post&quot;&gt;&lt;button type=&quot;button&quot; class=&quot;btn btn-danger&quot; style=&quot;position:relative; right: -1145px;&quot;&gt;Delete&lt;/button&gt;&lt;/a&gt; </code></pre>
<p>For deleting a post:</p> <pre><code>def delete_post(request, id): post = Post.objects.filter(id=id) post.delete() return redirect('/') </code></pre> <p>and in your HTML:</p> <pre><code>&lt;a class=&quot;btn btn-outline-danger&quot; href=&quot;{% url 'appname:delete_post' id=post.id %}&quot;&gt;Delete It&lt;/a&gt; </code></pre> <p>and in your urls.py:</p> <pre><code>path('&lt;int:id&gt;/delete-post',views.delete_post,name='delete_post') </code></pre>
python|django
1
1,906,028
71,529,078
Filtering pandas dataframe from user input
<p>I have successfully pulled a table from a website, but I want to filter it by a list of numbers in different variables that the user inputs.</p> <p>In basic terms (as I am still a little new to all this):</p> <p>( if input = list_1 then filter header 1 (table) by values in list_1 )</p> <p>Apologies if I have not explained this clearly enough!</p> <pre><code>import pandas as pd import ssl list_1 = (1,2,3,4,5,6,) list_2 = (5,6,7,8,9,10) pd.set_option('display.max_rows', None) pd.set_option('display.max_columns', None) pd.set_option('display.width', None) pd.set_option('display.max_colwidth', None) ssl._create_default_https_context = ssl._create_unverified_context dfs = pd.read_html(&quot;Some_Wesbite&quot;) print(dfs) output: Header 1 | Header 2 | 1 text 234 text 56 text 23 text 7664 text etc etc </code></pre>
<p>Try:</p> <pre><code>dfs[dfs['Header 1'].isin(list_1)] </code></pre> <p>If this doesn't work make a minimal reproducible example with what you want.</p> <p>Edit:</p> <pre><code>list_1 = (1,2,3,4,5,6,) list_2 = (3,4,5,24) df= pd.DataFrame({'header 1': [2,1,4242,24,22,13], 'header 2': ['a','b','c','d','e','f']}) dic_list = {'list_1':list_1,'list_2':list_2} mask = dic_list[input(&quot;write list_1 or list2&quot;)] df[df['header 1'].isin(list_1)] </code></pre>
python|pandas|dataframe|web-scraping
0
1,906,029
10,345,278
Understanding lambda in python and using it to pass multiple arguments
<p>After reading everything I can find on lambda, I still don't understand how to make it do what I want.</p> <p>Everyone uses the example:</p> <pre><code>lambda x, y : x + y </code></pre> <p>Why do you need to state both <code>x</code> and <code>y</code> before the <code>:</code>? Also how do you make it return multiple arguments?</p> <p>for example:</p> <pre><code>self.buttonAdd_1 = Button(self, text='+', command=lambda : self.calculate(self.buttonOut_1.grid_info(), 1)) </code></pre> <p>This works just fine. But the following code does not:</p> <pre><code>self.entry_1.bind("&lt;Return&gt;", lambda : self.calculate(self.buttonOut_1.grid_info(), 1)) </code></pre> <p>It yields the error:</p> <blockquote> <p>TypeError: () takes no arguments (1 given)</p> </blockquote>
<blockquote> <p>Why do you need to state both 'x' and 'y' before the ':'?</p> </blockquote> <p>Because a lambda is (conceptually) the same as a function, just written inline. Your example is equivalent to </p> <pre><code>def f(x, y) : return x + y </code></pre> <p>just without binding it to a name like <code>f</code>.</p> <blockquote> <p>Also how do you make it return multiple arguments?</p> </blockquote> <p>The same way like with a function. Preferably, you return a tuple:</p> <pre><code>lambda x, y: (x+y, x-y) </code></pre> <p>Or a list, or a class, or whatever.</p> <p>The thing with <code>self.entry_1.bind</code> should be answered by Demosthenex.</p>
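<p>As for the <code>bind</code> error from the question: Tkinter passes an event object to callbacks registered with <code>bind</code>, so the lambda there has to accept one argument even if it ignores it. A minimal sketch, reusing the widget and method names from the question:</p> <pre><code># the event parameter receives the Tkinter event object and is simply ignored here
self.entry_1.bind(&quot;&lt;Return&gt;&quot;, lambda event: self.calculate(self.buttonOut_1.grid_info(), 1))
</code></pre>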
python|python-3.x|tkinter
159
1,906,030
10,272,417
Checking if word segmentation is possible
<p>This is a follow up question to <a href="https://stackoverflow.com/questions/3466972/how-to-split-a-string-into-words-ex-stringintowords-string-into-words/3469228#3469228">this response</a> and the pseudo-code algorithm that the user posted. I didn't comment on that question because of its age. I am only interested in validating whether or not a string can be split into words. The algorithm doesn't need to actually split the string. This is the response from the linked question:</p> <blockquote> <p>Let S[1..length(w)] be a table with Boolean entries. S[i] is true if the word w[1..i] can be split. Then set S[1] = isWord(w[1]) and for i=2 to length(w) calculate</p> <p>S[i] = (isWord[w[1..i] or for any j in {2..i}: S[j-1] and isWord[j..i]).</p> </blockquote> <p>I'm translating this algorithm into simple python code, but I'm not sure if I'm understanding it properly. Code:</p> <pre><code>def is_all_words(a_string, dictionary)): str_len = len(a_string) S = [False] * str_len S[0] = is_word(a_string[0], dictionary) for i in range(1, str_len): check = is_word(a_string[0:i], dictionary) if (check): S[i] = check else: for j in range(1, str_len): check = (S[j - 1] and is_word(a_string[j:i]), dictionary) if (check): S[i] == True break return S </code></pre> <p>I have two related questions. 1) Is this code a proper translation of the linked algorithm into Python, and if it is, 2) Now that I have S, how do I use it to tell if the string <em>is</em> only comprised of words? In this case, <code>is_word</code> is a function that simply looks a given word up in a list. I haven't implemented it as a trie yet. </p> <p>UPDATE: After updating the code to include the suggested change, it doesn't work. This is the updated code:</p> <pre><code>def is_all_words(a_string, dictionary)): str_len = len(a_string) S = [False] * str_len S[0] = is_word(a_string[0], dictionary) for i in range(1, str_len): check = is_word(a_string[0:i], dictionary) if (check): S[i] = check else: for j in range(1, i): #THIS LINE WAS UPDATED check = (S[j - 1] and is_word(a_string[j:i]), dictionary) if (check): S[i] == True break return S a_string = "carrotforever" S = is_all_words(a_string, dictionary) print(S[len(S) - 1]) #prints FALSE a_string = "hello" S = is_all_words(a_string, dictionary) print(S[len(S) - 1]) #prints TRUE </code></pre> <p>It should return <code>True</code> for both of these. </p>
<p>Here is a modified version of your code that should return good results. Notice that your mistake was simply in the translation from pseudocode array indexing (starting at 1) to python array indexing (starting at 0) therefore S[0] and S[1] where populated with the same value where S[L-1] was actually never computed. You can easily trace this mistake by printing the whole S values. You will find that S[3] is set true in the first example where it should be S[2] for the word "car". Also you could speed up the process by storing the index of composite words found so far, instead of testing each position.</p> <pre><code>def is_all_words(a_string, dictionary): str_len = len(a_string) S = [False] * (str_len) # I replaced is_word function by a simple list lookup, # feel free to replace it with whatever function you use. # tries or suffix tree are best for this. S[0] = (a_string[0] in dictionary) for i in range(1, str_len): check = a_string[0:i+1] in dictionary # i+1 instead of i if (check): S[i] = check else: for j in range(0,i+1): # i+1 instead of i if (S[j-1] and (a_string[j:i+1] in dictionary)): # i+1 instead of i S[i] = True break return S a_string = "carrotforever" S = is_all_words(a_string, ["a","car","carrot","for","eve","forever"]) print(S[len(a_string)-1]) #prints TRUE a_string = "helloworld" S = is_all_words(a_string, ["hello","world"]) print(S[len(a_string)-1]) #prints TRUE </code></pre>
python|algorithm|nlp|dynamic-programming|text-segmentation
2
1,906,031
62,722,728
AssertionError when comparing pd DataFrame
<p>I'm developing a test for a function that I created. My function returns a pandas DataFrame and my test consists in comparing it with a csv file that is stored. I'm using the following script to do so. When I run it, I get <code>AssertionError</code> with no other message.</p> <pre><code>rates_over = get_rates_over(args) gabarito = pd.read_csv(f'{ROOT_DIR}/data/static/rates_over_teste.csv', parse_dates=['date']) assert rates_over.equals(gabarito) </code></pre> <p>But I suspected that my function was good, so I did the following and it didn't print anything, showing that my intuition was right. What is happening?</p> <pre><code>for index, row in gabarito.iterrows(): if not row.equals(rates_over.iloc[index]): print('Not equal!') </code></pre> <p>EDIT: As suggested by @gallen, here is a print for type and head of both <code>gabarito</code> and <code>Rates_over</code>.</p> <p><a href="https://i.stack.imgur.com/7o8nz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7o8nz.png" alt="enter image description here" /></a></p>
<p>A DataFrame is <strong>never</strong> equal to a Series.</p> <h3><code>pd.DataFrame.equals</code></h3> <blockquote> <p>This function allows two Series or DataFrames to be compared against each other to see if they have the same shape and elements.</p> </blockquote> <p>It is meant to compare a <strong>DataFrame with a DataFrame</strong>, or a <strong>Series with a Series</strong>, not a mixture of a Series with a DataFrame.</p> <p>A Series and a DataFrame have entirely different dimensionality.</p> <pre><code>import pandas as pd df = pd.DataFrame({'foo': [1,2,3]}) s = df['foo'] print(df.shape) #(3, 1) print(s.shape) #(3,) </code></pre> <p>The first check in the <a href="https://github.com/pandas-dev/pandas/blob/v1.0.5/pandas/core/internals/managers.py#L1397-L1400" rel="nofollow noreferrer"><code>equals</code></a> method is to check the dimensionality, so it quickly returns False without ever checking the data.</p> <pre><code>def equals(self, other): self_axes, other_axes = self.axes, other.axes if len(self_axes) != len(other_axes): return False #... len(s._data.axes) #1 len(df._data.axes) #2 </code></pre> <hr /> <p>If you are <em>certain</em> your DataFrame only has a single column, then you can <code>squeeze</code> it before comparing with your Series.</p> <pre><code>df.squeeze().equals(s) #True </code></pre> <p>Alternatively convert your Series to a DataFrame using the Series name.</p> <pre><code>df.equals(s.to_frame(s.name)) #True </code></pre>
python|pandas|assertion
2
1,906,032
62,669,269
Crawl table data without 'next button' with Scrapy
<p>I am quite new to <code>Scrapy</code> and I try to get table data from every page from this <a href="https://www.uaeonlinedirectory.com/UFZOnlineDirectory.aspx?item=A" rel="nofollow noreferrer">website</a>.</p> <p><a href="https://i.stack.imgur.com/faPMh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/faPMh.png" alt="enter image description here" /></a></p> <p>This is my code:</p> <pre><code>import scrapy class UAESpider(scrapy.Spider): name = 'uae_free' allowed_domains = ['https://www.uaeonlinedirectory.com'] start_urls = [ 'https://www.uaeonlinedirectory.com/UFZOnlineDirectory.aspx?item=A' ] def parse(self, response): pages = response.xpath('//table[@class=&quot;GridViewStyle&quot;]//tr[12]') for page in pages[1:11]: rows = page.xpath('//table[@class=&quot;GridViewStyle&quot;]//tr') for row in rows[1:11]: yield { 'company_name': row.xpath('.//td[2]//text()').get(), 'company_name_link': row.xpath('.//td[2]//a/@href').get(), 'zone': row.xpath('.//td[4]//text()').get(), 'category': row.xpath('.//td[6]//text()').get(), 'category_link': row.xpath('.//td[6]//a/@href').get() } next_page = response.xpath('//table[@class=&quot;GridViewStyle&quot;]//tr[12]//td[11]//a/@href').get() if next_page: yield scrapy.Request(url=next_page, callback=self.parse) </code></pre> <p>But it doesn't work, I get this error, the URL below is the link to <code>page 11</code>:</p> <pre><code>ValueError: Missing scheme in request url: javascript:__doPostBack('ctl00$ContentPlaceHolder2$grdDirectory','Page$11') </code></pre> <p>Do you guys know how to fix the bug?</p> <p><strong>Update:</strong></p> <p>Follow the instruction from this <a href="https://stackoverflow.com/a/28976674/9500955">answer</a> suggested by @zmike, this is what I have done so far:</p> <pre><code>import scrapy from scrapy.http import FormRequest URL = 'https://www.uaeonlinedirectory.com/UFZOnlineDirectory.aspx?item=A' class UAESpider(scrapy.Spider): name = 'uae_free' allowed_domains = ['https://www.uaeonlinedirectory.com/UFZOnlineDirectory.aspx?item=A'] start_urls = [ 'https://www.uaeonlinedirectory.com/UFZOnlineDirectory.aspx?item=A' ] def parse(self, response): self.data = {} for form_input in response.css('form#aspnetForm input'): name = form_input.xpath('@name').extract()[0] try: value = form_input.xpath('@value').extract()[0] except IndexError: value = &quot;&quot; self.data[name] = value self.data['ctl00_ContentPlaceHolder2_panelGrid'] = 'ctl00$ContentPlaceHolder2$grdDirectory' self.data['__EVENTTARGET'] = 'ctl00$ContentPlaceHolder2$grdDirectory' self.data['__EVENTARGUMENT'] = 'Page$1' return FormRequest(url=URL, method='POST', callback=self.parse_page, formdata=self.data, meta={'page':1}, dont_filter=True) def parse_page(self, response): current_page = response.meta['page'] + 1 rows = response.xpath('//table[@class=&quot;GridViewStyle&quot;]//tr') for row in rows[1:11]: yield { 'company_name': row.xpath('.//td[2]//text()').get(), 'company_name_link': row.xpath('.//td[2]//a/@href').get(), 'zone': row.xpath('.//td[4]//text()').get(), 'category': row.xpath('.//td[6]//text()').get(), 'category_link': row.xpath('.//td[6]//a/@href').get() } return FormRequest(url=URL, method='POST', formdata={ '__EVENTARGUMENT': 'Page$%d' % current_page, '__EVENTTARGET': 'ctl00$ContentPlaceHolder2$grdDirectory', 'ctl00_ContentPlaceHolder2_panelGrid':'ctl00$ContentPlaceHolder2$grdDirectory', '':''}, meta={'page': current_page}, dont_filter=True) </code></pre> <p>And this code only gets table data from the first page, it doesn't move to the 
remaining page. Do you know where I went wrong?</p>
<p>Here is a working (albeit rough) implementation of the crawler that goes through all the pages. Some notes:</p> <ul> <li>The Form data required different parameters e.g. <code>__EVENTTARGET</code>, <code>__EVENTVALIDATION</code>, <code>__VIEWSTATEGENERATOR</code>, etc. <ul> <li>I used XPath to get them instead of regex</li> </ul> </li> <li>The following was not necessary: <code>self.data['ctl00_ContentPlaceHolder2_panelGrid'] = 'ctl00$ContentPlaceHolder2$grdDirectory'</code></li> <li>I combined the functions for simplicity's sake. <strong>The callback allows it to loop through all the pages</strong>.</li> </ul> <pre class="lang-py prettyprint-override"><code>import scrapy from scrapy.http import FormRequest class UAESpider(scrapy.Spider): name = 'uae_free' headers = { 'X-MicrosoftAjax': 'Delta=true', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.76 Safari/537.36' } allowed_domains = ['www.uaeonlinedirectory.com'] # TODO: Include the urls for all other items (e.g. A-Z) start_urls = ['https://www.uaeonlinedirectory.com/UFZOnlineDirectory.aspx?item=A'] current_page = 0 def parse(self, response): # request the next page self.current_page = self.current_page + 1 if self.current_page == 1: # submit a form (first page) data = {} for form_input in response.css('form#aspnetForm input'): name = form_input.xpath('@name').extract()[0] try: value = form_input.xpath('@value').extract()[0] except IndexError: value = &quot;&quot; data[name] = value data['__EVENTTARGET'] = 'ctl00$MainContent$List' data['__EVENTARGUMENT'] = 'Page$1' else: # Extract the form fields and arguments using XPATH event_validation = response.xpath('//input[@id=&quot;__EVENTVALIDATION&quot;]/@value').extract() view_state = response.xpath('//input[@id=&quot;__VIEWSTATE&quot;]/@value').extract() view_state_generator = response.xpath('//input[@id=&quot;__VIEWSTATEGENERATOR&quot;]/@value').extract() view_state_encrypted = response.xpath('//input[@id=&quot;__VIEWSTATEENCRYPTED&quot;]/@value').extract() data = { '__EVENTTARGET': 'ctl00$ContentPlaceHolder2$grdDirectory', '__EVENTARGUMENT': 'Page$%d' % self.current_page, '__EVENTVALIDATION': event_validation, '__VIEWSTATE': view_state, '__VIEWSTATEGENERATOR': view_state_generator, '__VIEWSTATEENCRYPTED': view_state_encrypted, '__ASYNCPOST': 'true', '': '' } # Yield the companies # TODO: move this to a different function rows = response.xpath('//table[@class=&quot;GridViewStyle&quot;]//tr') for row in rows[1:11]: result = { 'company_name': row.xpath('.//td[2]//text()').get(), 'company_name_link': row.xpath('.//td[2]//a/@href').get(), 'zone': row.xpath('.//td[4]//text()').get(), 'category': row.xpath('.//td[6]//text()').get(), 'category_link': row.xpath('.//td[6]//a/@href').get() } print(result) yield result # TODO: check if there is a next page, and only yield if there is one yield FormRequest(url=self.start_urls[0], # TODO: change this so that index is not hardcoded method='POST', formdata=data, callback=self.parse, meta={'page': self.current_page}, dont_filter=True, headers=self.headers) </code></pre>
python|web-scraping|scrapy
1
1,906,033
61,789,846
IntegrityError in Django even after defining form_valid
<p>I'm trying to create and store the model form in the database. The model represents a product, and has field like the category, image, price... etc.</p> <p>Here's the model</p> <pre class="lang-py prettyprint-override"><code>class Product(models.Model): categories = (("Books", "Books/Study Materials"), ("Notebooks", "Notebooks/Rough Pads"), ("Equipments", "Equipments/Tools"), ("Cloths", "Cloths/Uniforms"), ("Sports", "Sports/Sportswear"), ("Miscellaneous", "Miscellaneous")) user = models.ForeignKey(User, on_delete = models.CASCADE) date_posted = models.DateTimeField(default = timezone.now) category = models.CharField(max_length = 13, choices = categories, default = "Miscellaneous") description = models.CharField(max_length = 75, null = True, blank = True) image = models.ImageField(default = "product/default.png", upload_to = "product") price = models.PositiveIntegerField() def __str__(self): return f"{self.category} by {self.user.username} for {self.price}" def save(self, *args, **kwargs): super().save() image = Image.open(self.image.path) image.thumbnail((600, 600), Image.ANTIALIAS) image = image.crop(((image.width - 600)//2, (image.height - 400)//2, (image.width + 600)//2, (image.height + 400)//2)) image.save(self.image.path) </code></pre> <p>Here's the form that same model</p> <pre class="lang-py prettyprint-override"><code>class ProductAddForm(forms.ModelForm): description = forms.CharField(max_length = 75, widget = forms.TextInput(attrs = {'placeholder': 'Description'}), help_text = "Not more than 75 characters") image = forms.ImageField(required = False) price = forms.IntegerField(required = False, widget = forms.TextInput(attrs = {'placeholder': 'Price'})) class Meta: model = Product fields = ('category', 'description', 'image', 'price') def clean_description(self, *args, **kwargs): description = self.cleaned_data.get('description') if len(description) == 0: raise forms.ValidationError('Description is required!') if len(description) &gt; 75: raise forms.ValidationError(f'Description should contains at most 75 characters, but bot {len(description)} characters!') return description def clean_price(self, *args, **kwargs): price = self.cleaned_data.get('price') if len(str(price)) == 0: raise forms.ValidationError('Product price is required!') elif price &lt; 0: raise forms.ValidationError('Negative price..... seriously?') return price </code></pre> <p>And below is the view I'm creating using django's generic CreateView</p> <pre class="lang-py prettyprint-override"><code>class product_add(CreateView): model = Product form_class = ProductAddForm template_name = 'Product/product_add.html' def form_valid(self, form, *args, **kwargs): form.instance.author = self.request.user return super().form_valid(form) </code></pre> <p>Above I'm defining the <code>form_valid</code> method to set the user of the product as the current user. But when submitting the form, the error still says-</p> <pre><code>IntegrityError at /product/add/ NOT NULL constraint failed: Product_product.user_id </code></pre> <p>And even if I don't define the <code>form_valid</code> I'm still getting the same error!</p> <p>The error is at <code>super.save()</code>, and it says <code>Error in formatting: RelatedObjectDoesNotExist: Product has no user.</code></p>
<p>Your <code>Product</code> relation to <code>User</code> is named <strong><code>user</code></strong> and not <code>author</code>, so set that field and return the parent class's response:</p> <pre><code>def form_valid(self, form): form.instance.user = self.request.user return super().form_valid(form) </code></pre>
python|django|django-models|django-forms|django-views
1
1,906,034
62,028,686
How to convert month name and year to datetime with pandas
<p>I am trying to convert the index of my data frame to a datetime object using pandas, but keep receiving this error--</p> <p>"ValueError: time data 'Jan 20' does not match format '%b, %y' (match)". </p> <p>I double checked the datetime format and removed the hyphen, but still no luck. How can I fix this? </p> <p>Here is what I've tried:</p> <pre><code>import pandas as pd table = pd.read_html('https://www.finra.org/investors/learn-to-invest/advanced-investing/margin-statistics') #set the index to date column df = table[0].set_index('Month/Year') df.index = df.index.str.replace("-", " ") df.index = pd.to_datetime(df.index, format="%b, %y") </code></pre>
<p>I think you had an extra comma. This is working fine for me:</p> <pre><code>df.index = pd.to_datetime(df.index, format="%b %y") print(df.index) </code></pre> <p>Output:</p> <pre><code>DatetimeIndex(['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01'], dtype='datetime64[ns]', name='Month/Year', freq=None) </code></pre>
python|pandas|dataframe|datetime
1
1,906,035
67,278,707
How to find specific script tag from a webpage using Beautifulsoup
<p>I'm new to python and beautifulsoup. I'm trying to find a json data inside script tag. My problem is the webpage contains many script tags.</p> <p>I need to get this script tag :</p> <pre><code>&lt;script type=&quot;text/javascript&quot;&gt; P.when('A').register(&quot;ImageBlockATF&quot;, function(A){ var data = { 'colorImages': { 'initial': [{&quot;hiRes&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SL1003_.jpg&quot;, &quot;thumb&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/41lv4ReBL4L._AC_US40_.jpg&quot;, &quot;large&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/41lv4ReBL4L._AC_.jpg&quot;, &quot;main&quot;:{&quot;https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SY355_.jpg&quot;:[355,355], &quot;https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SY450_.jpg&quot;:[450,450], &quot;https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SX425_.jpg&quot;:[425,425], &quot;https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SX466_.jpg&quot;:[466,466], &quot;https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SX522_.jpg&quot;:[522,522], &quot;https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SX569_.jpg&quot;:[569,569], &quot;https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SX679_.jpg&quot;:[679,679]}, &quot;variant&quot;:&quot;MAIN&quot;,&quot;lowRes&quot;:null},{&quot;hiRes&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/61kOw5lC%2B%2BL._AC_SL1005_.jpg&quot;,&quot;thumb&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/41shdN1aAoL._AC_US40_.jpg&quot;,&quot;large&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/41shdN1aAoL._AC_.jpg&quot;,&quot;main&quot;:{&quot;https://images-na.ssl-images-amazon.com/images/I/61kOw5lC%2B%2BL._AC_SY355_.jpg&quot;:[355,355],&quot;https://images-na.ssl-images-amazon.com/images/I/61kOw5lC%2B%2BL._AC_SY450_.jpg&quot;:[450,450],&quot;https://images-na.ssl-images-amazon.com/images/I/61kOw5lC%2B%2BL._AC_SX425_.jpg&quot;:[425,425],&quot;https://images-na.ssl-images-amazon.com/images/I/61kOw5lC%2B%2BL._AC_SX466_.jpg&quot;:[466,466],&quot;https://images-na.ssl-images-amazon.com/images/I/61kOw5lC%2B%2BL._AC_SX522_.jpg&quot;:[522,522],&quot;https://images-na.ssl-images-amazon.com/images/I/61kOw5lC%2B%2BL._AC_SX569_.jpg&quot;:[569,569],&quot;https://images-na.ssl-images-amazon.com/images/I/61kOw5lC%2B%2BL._AC_SX679_.jpg&quot;:[679,679]},&quot;variant&quot;:&quot;PT01&quot;,&quot;lowRes&quot;:null},{&quot;hiRes&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/511019WE7xL._AC_SL1005_.jpg&quot;,&quot;thumb&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/41pt8OOHsaL._AC_US40_.jpg&quot;,&quot;large&quot;:&quot;https://images-na.ssl-images-amazon.com/images/I/41pt8OOHsaL._AC_.jpg&quot;,&quot;main&quot;:{&quot;https://images-na.ssl-images-amazon.com/images/I/511019WE7xL._AC_SY355_.jpg&quot;:[355,355],&quot;https://images-na.ssl-images-amazon.com/images/I/511019WE7xL._AC_SY450_.jpg&quot;:[450,450],&quot;https://images-na.ssl-images-amazon.com/images/I/511019WE7xL._AC_SX425_.jpg&quot;:[425,425],&quot;https://images-na.ssl-images-amazon.com/images/I/511019WE7xL._AC_SX466_.jpg&quot;:[466,466],&quot;https://images-na.ssl-images-amazon.com/images/I/511019WE7xL._AC_SX522_.jpg&quot;:[522,522],&quot;https://images-na.ssl-images-amazon.com/images/I/511019WE7xL._AC_SX569_.jpg&quot;:[569,569],&quot;https://images-na.ssl-images-amazon.com/images/I/511019WE7xL._AC_SX679_.jpg&quot
;:[679,679]},&quot;variant&quot;:&quot;PT02&quot;,&quot;lowRes&quot;:null}]}, 'colorToAsin': {'initial': {}}, 'holderRatio': 1.0, 'holderMaxHeight': 700, 'heroImage': {'initial': []}, 'heroVideo': {'initial': []}, 'spin360ColorData': {'initial': {}}, 'spin360ColorEnabled': {'initial': 0}, 'spin360ConfigEnabled': false, 'spin360LazyLoadEnabled': false, 'showroomEnabled': false, 'showroomViewModel': {'initial': {}}, 'playVideoInImmersiveView':true, 'useTabbedImmersiveView':true, 'totalVideoCount':'0', 'videoIngressATFSlateThumbURL':'', 'mediaTypeCount':'0', 'atfEnhancedHoverOverlay' : true, 'winningAsin': 'B08373YYCM', 'weblabs' : {}, 'aibExp3Layout' : 1, 'aibRuleName' : 'frank-powered', 'acEnabled' : true, 'dp60VideoPosition': 0, 'dp60VariantList': '', 'dp60VideoThumb': '', 'dp60MainImage': 'https://images-na.ssl-images-amazon.com/images/I/61mw5BDEYoL._AC_SY355_.jpg', 'airyConfig' :A.$.parseJSON('{&quot;jsUrl&quot;:&quot;https://images-na.ssl-images-amazon.com/images/G/01/vap/video/airy2/prod/2.0.1460.0/js/airy.skin._CB485981857_.js&quot;,&quot;cssUrl&quot;:&quot;https://images-na.ssl-images-amazon.com/images/G/01/vap/video/airy2/prod/2.0.1460.0/css/beacon._CB485971591_.css&quot;,&quot;swfUrl&quot;:&quot;https://images-na.ssl-images-amazon.com/images/G/01/vap/video/airy2/prod/2.0.1460.0/flash/AiryBasicRenderer._CB485925577_.swf&quot;,&quot;foresterMetadataParams&quot;:{&quot;marketplaceId&quot;:&quot;A2VIGQ35RCS4UG&quot;,&quot;method&quot;:&quot;Kitchen.ImageBlock&quot;,&quot;requestId&quot;:&quot;4MGH16D6R7WCR018779W&quot;,&quot;session&quot;:&quot;259-8488476-1037262&quot;,&quot;client&quot;:&quot;Dpx&quot;}}') }; A.trigger('P.AboveTheFold'); // trigger ATF event. return data; }); &lt;/script&gt; </code></pre> <p>How i can get this script tag which starts &quot;P.when('A').register(&quot;ImageBlockATF&quot;, function(A){&quot; from the webpage using reqular expression ?</p>
<p>you can get all <code>script</code> tags by</p> <pre><code>page = requests.get(&quot;url&quot;) soup = BeautifulSoup(page.text, &quot;html.parser&quot;) results = soup.find_all(&quot;script&quot;) </code></pre> <p>and then you could have your <strong>filtering</strong> as</p> <pre><code>your_script_tag = [x for x in results if str(x).__contains__(&quot;P.when('A').register&quot;)] print(your_script_tag) </code></pre>
python|json|beautifulsoup
1
1,906,036
60,625,263
Time difference between two event rows for each user in Pandas df
<p>I have a dataframe as follows:</p> <pre><code> imei event_type time 1107 alarm 2020-01-28 11:32:42+00:00 1107 alarm_restored 2020-01-28 11:32:53+00:00 1107 alarm_emergency 2020-01-28 11:33:03+00:00 1107 alarm_emergency_restored 2020-01-28 11:33:06+00:00 1108 alarm 2020-01-28 11:42:42+00:00 1108 alarm_restored 2020-01-28 11:43:53+00:00 1109 alarm_emergency 2020-01-28 11:53:23+00:00 1109 alarm_emergency 2020-01-28 11:53:23+00:00 1109 alarm_emergency_restored 2020-01-28 11:57:06+00:00 1110 alarm_emergency 2020-01-29 10:23:05+00:00 1111 alarm_restored 2020-01-29 11:10:53+00:00 1112 alarm_emergency_restored 2020-01-29 12:13:23+00:00 </code></pre> <p>I want to find the time difference between alarm and restored type events for every user. I have no idea how to proceed with it. I tried <a href="https://stackoverflow.com/questions/54020248/calculate-the-time-difference-between-two-consecutive-rows-in-pandas">calculate the time difference between two consecutive rows in pandas</a> I tried </p> <pre><code> df_alarm['time'].diff(3) </code></pre> <p>and got :</p> <pre><code> 0 NaT 1 NaT 2 NaT 3 0 days 00:00:23.706000 4 0 days 00:27:28.364000 ... </code></pre> <p>Which is not how I expected the results. I want results in minutes/seconds</p> <p>UPDATE:</p> <p>I want to find time difference in every consecutive alarm and alarm_restored, alarm_emergency and alarm_emergency_restored only if they are consecutive rows. All other rows should be NaT.</p> <p>Expected Output:</p> <pre><code> imei event_type time time_diff 1107 alarm 2020-01-28 11:32:42+00:00 NaT 1107 alarm_restored 2020-01-28 11:32:53+00:00 00:00:11 1107 alarm_emergency 2020-01-28 11:33:03+00:00 NaT 1107 alarm_emergency_restored 2020-01-28 11:33:06+00:00 00:00:03 1108 alarm 2020-01-28 11:42:42+00:00 NaT 1108 alarm_restored 2020-01-28 11:43:53+00:00 00:01:11 1109 alarm_emergency 2020-01-28 11:14:27+00:00 NaT 1109 alarm_emergency 2020-01-28 11:53:23+00:00 NaT 1109 alarm_emergency_restored 2020-01-28 11:57:06+00:00 00:03:43 1110 alarm_emergency 2020-01-29 10:23:05+00:00 NaT 1111 alarm_restored 2020-01-29 11:10:53+00:00 NaT 1112 alarm_emergency_restored 2020-01-29 12:13:23+00:00 NaT </code></pre> <p>As you see, if there are two consecutive alarm_* events and one restoral after that(as in rows 1109-1109), I want to find difference only between row 2 and row 3 for 1109.</p>
<p>This will work.</p> <p>First you get the time difference between all the rows.</p> <pre><code>df["timediff"] = df.groupby(df.imei)["time"].diff() </code></pre> <p>Then, you just set timediff to NaT (not a time) for all rows that are not "*_restored" because you don't care about the time from "*_restored" to any other alarm events:</p> <pre><code>import numpy as np df["timediff"][df.event_type.str.contains("restored") == False] = np.datetime64('NaT') </code></pre> <p>This gives exactly what you want.</p>
python|pandas|dataframe
1
1,906,037
60,340,727
How to get the column name of the max value in a data frame in Python?
<p>I want to get the column name of the max value of each row.</p> <pre><code> S1 S2 S3 S4 Con1 -0.166277 0.329279 5.4941 3.6587 Con2 -0.015557 0.063506 6.5012 -2.6939 Con3 -0.230677 0.525414 7.2712 8.8743 Con4 -0.155739 0.335635 -6.2533 -4.6159 </code></pre> <p>When I use df.idxmax(axis=1), the result is shown below.</p> <pre><code>Con1 S1 Con2 S1 Con3 S4 Con4 S2 </code></pre> <pre><code>maxdf = df.idxmax(axis=1) </code></pre> <p>Expected result:</p> <pre><code>S1: {Con1,Con2,} S2: {Con4} S3: {} S4: {Con3} </code></pre>
<p>Create <code>DataFrame</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>Series.reset_index</code></a> and aggregate <code>set</code>s, last add missing values by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>Series.reindex</code></a>:</p> <pre><code>maxdf = df.idxmax(axis=1) print (maxdf) Con1 S3 Con2 S3 Con3 S4 Con4 S2 dtype: object s = maxdf.reset_index().groupby(0)['index'].apply(set).reindex(df.columns, fill_value={}) print (s) S1 {} S2 {Con4} S3 {Con2, Con1} S4 {Con3} Name: index, dtype: object </code></pre> <p>If want lists in output use:</p> <pre><code>s = maxdf.reset_index().groupby(0)['index'].apply(list).reindex(df.columns, fill_value=[]) print (s) S1 [] S2 [Con4] S3 [Con1, Con2] S4 [Con3] Name: index, dtype: object </code></pre>
python-3.x|pandas
2
1,906,038
60,736,380
My postgresql database not showing many to many field column but django admin page does. Why?
<p>I hope you got the question from the title itself.</p> <p>After migrating, I can select multiple values for that many-to-many field in the Django admin page, and when I click on save it saves in the admin page. But when I check the PostgreSQL database, everything that is not a many-to-many field is saved, while the table lacks a column for the many-to-many field.</p>
<p>There are no many-to-many connections in Postgres, nor in other SQL databases as far as I know. These connections are generally made through a third table (sometimes called a through table), connecting values from the two tables.<br /> Django does this behind the scenes for you.<br /> You should find that third table in the database. There are default names for them, and you can choose a name too.</p>
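<p>A small sketch with hypothetical model names (the app is assumed to be called <code>myapp</code>):</p> <pre><code>from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Book(models.Model):
    title = models.CharField(max_length=100)
    # no column is added to the book table for this field;
    # Django instead creates a join table named myapp_book_authors
    # holding book_id / author_id pairs
    authors = models.ManyToManyField(Author)
</code></pre> <p>You can inspect that hidden model from the Django shell via <code>Book.authors.through</code>, or list the tables in psql with <code>\dt</code> to see the join table sitting next to <code>myapp_book</code> and <code>myapp_author</code>.</p>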
python|django|postgresql|django-admin
2
1,906,039
60,627,698
why "num = 1" inside the loop is not incrementing?
<p>I was debugging the code and I don't understand why the <code>num</code> variable inside the for-loop is not incrementing and only (1) is incrementing?</p> <pre><code>def numpat(n): num = 1 ----- (1) for i in range(n): num = 1 ------ (2) for j in range(i + 1): print(num, end=" ") num = num + 1 ----- (3) print("\r") </code></pre>
<p>Is the below output not what you expected?</p> <pre><code>In [1]: n = 5 In [2]: for i in range(n): ...: ...: num = 1 ...: ...: for j in range(i + 1): ...: print(num, end=" ") ...: num = num + 1 ...: ...: print("\r") ...: 1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 </code></pre> <p>Edit: Counterfactual output if n=1 wasn't in the inner loop.</p> <pre><code>In [1]: n = 5 In [2]: num = 1 In [3]: for i in range(n): ...: for j in range(i + 1): ...: print(num, end=" ") ...: num = num + 1 ...: print("\r") ...: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 </code></pre>
python
3
1,906,040
70,194,145
Trigonometric simultaneous equation
<p>I can't solve this trigonometric simultaneous equation.</p> <blockquote> <p>(1) cos(C)=-sin(B)*sin(D)*cos(E)+sin(D)*sin(E)*cos(B)</p> <p>(2) -sin(A)*sin(B)*sin(D)*sin(E)-sin(A)*sin(D)*cos(B)*cos(E)-cos(A)*cos(D)=0</p> </blockquote> <p>I'd like to get sin(D), sin(E) only using the angle A, B, C which are constant numbers.</p> <p>I tried the code below and got the results in the <a href="https://i.stack.imgur.com/z2pju.png" rel="nofollow noreferrer">figure</a> which contains cos (D) and cos(E). Variables D and E need to be eliminated. How should I do for this?</p> <pre><code>import sympy as sp from sympy import sin, cos sp.init_printing() sp.var('A,B,C,D,E') eq1=sp.Eq(cos(C),-sin(B)*sin(D)*cos(E)+sin(D)*sin(E)*cos(B)) eq2=sp.Eq(0,-sin(A)*sin(B)*sin(D)*sin(E)-sin(A)*sin(D)*cos(B)*cos(E)-cos(A)*cos(D)) sp.solve ([eq1, eq2], [sin(D), sin(E)]) </code></pre>
<p>I think that the result you are looking for here is more complicated than you might expect. Your equations are polynomial in <code>sin(A)</code>, <code>cos(A)</code> etc. We also have polynomial relations like <code>sin(A)**2 + cos(A)**2 - 1 = 0</code>. Putting these together we can systematically eliminate variables using the method of resultants. Here I've replaced <code>sin(A)</code> and <code>cos(A)</code> with symbols <code>sA</code> and <code>cA</code> so that we can more clearly see that we are working with polynomials:</p> <pre><code>from sympy import * sA, cA, sB, cB, sC, cC, sD, cD, sE, cE = symbols('sA, cA, sB, cB, sC, cC, sD, cD, sE, cE') eq1 = Eq(cC, -sB*sD*cE + sD*sE*cB) eq2 = Eq(0, -sA*sB*sD*sE - sA*sD*cB*cE - cA*cD) p1 = eq1.rewrite(Add) p2 = eq2.rewrite(Add) # Eliminate cD p3 = sD**2 + cD**2 - 1 p1_1 = resultant(p1, p3, cD) p2_1 = resultant(p2, p3, cD) # Eliminate cE p4 = sE**2 + cE**2 - 1 p1_2 = resultant(p1_1, p4, cE) p2_2 = resultant(p2_1, p4, cE) # Now eliminate sD to get an univariate polynomial for sE p_final = resultant(p1_2, p2_2, sD) </code></pre> <p>This now gives us a polynomial equation for <code>sE</code> with coefficients that include <code>cA,cB,cC,sA,sB</code> i.e. we have eliminated <code>cos(E)</code>, <code>sin(D)</code> and <code>cos(D)</code> from the system reducing it down to a single polynomial equation for the single unknown <code>sin(E)</code>. The roots of this polynomial will give us the values of <code>sin(E)</code> that would be the solutions of the system of equations that you showed (taking into account the important relationships between <code>sin(D)</code> and <code>cos(D)</code> and so on). The polynomial for <code>sE</code> is of degree 16 and the expressions for the coefficients are actually too complicated to be included in a SO post which is limited to 30000 characters. 
Instead I will just show the leading coefficient (the coefficient of <code>sE**16</code>):</p> <pre><code>In [47]: p = Poly(p_final, sE) In [48]: str(p.LC()) Out[48]: 'cA**16*cB**16 + 8*cA**16*cB**14*sB**2 + 28*cA**16*cB**12*sB**4 + 56*cA**16*cB**10*sB**6 + 70*cA**16*cB**8*sB**8 + 56*cA**16*cB**6*sB**10 + 28*cA**16*cB**4*sB**12 + 8*cA**16*cB**2*sB**14 + cA**16*sB**16 + 8*cA**14*cB**16*cC**2*sA**2 + 32*cA**14*cB**14*cC**2*sA**2*sB**2 + 32*cA**14*cB**12*cC**2*sA**2*sB**4 - 32*cA**14*cB**10*cC**2*sA**2*sB**6 - 80*cA**14*cB**8*cC**2*sA**2*sB**8 - 32*cA**14*cB**6*cC**2*sA**2*sB**10 + 32*cA**14*cB**4*cC**2*sA**2*sB**12 + 32*cA**14*cB**2*cC**2*sA**2*sB**14 + 8*cA**14*cC**2*sA**2*sB**16 + 28*cA**12*cB**16*cC**4*sA**4 + 32*cA**12*cB**14*cC**4*sA**4*sB**2 - 112*cA**12*cB**12*cC**4*sA**4*sB**4 - 288*cA**12*cB**10*cC**4*sA**4*sB**6 - 344*cA**12*cB**8*cC**4*sA**4*sB**8 - 288*cA**12*cB**6*cC**4*sA**4*sB**10 - 112*cA**12*cB**4*cC**4*sA**4*sB**12 + 32*cA**12*cB**2*cC**4*sA**4*sB**14 + 28*cA**12*cC**4*sA**4*sB**16 + 56*cA**10*cB**16*cC**6*sA**6 - 32*cA**10*cB**14*cC**6*sA**6*sB**2 - 288*cA**10*cB**12*cC**6*sA**6*sB**4 + 32*cA**10*cB**10*cC**6*sA**6*sB**6 + 464*cA**10*cB**8*cC**6*sA**6*sB**8 + 32*cA**10*cB**6*cC**6*sA**6*sB**10 - 288*cA**10*cB**4*cC**6*sA**6*sB**12 - 32*cA**10*cB**2*cC**6*sA**6*sB**14 + 56*cA**10*cC**6*sA**6*sB**16 + 70*cA**8*cB**16*cC**8*sA**8 - 80*cA**8*cB**14*cC**8*sA**8*sB**2 - 344*cA**8*cB**12*cC**8*sA**8*sB**4 + 464*cA**8*cB**10*cC**8*sA**8*sB**6 + 1316*cA**8*cB**8*cC**8*sA**8*sB**8 + 464*cA**8*cB**6*cC**8*sA**8*sB**10 - 344*cA**8*cB**4*cC**8*sA**8*sB**12 - 80*cA**8*cB**2*cC**8*sA**8*sB**14 + 70*cA**8*cC**8*sA**8*sB**16 + 56*cA**6*cB**16*cC**10*sA**10 - 32*cA**6*cB**14*cC**10*sA**10*sB**2 - 288*cA**6*cB**12*cC**10*sA**10*sB**4 + 32*cA**6*cB**10*cC**10*sA**10*sB**6 + 464*cA**6*cB**8*cC**10*sA**10*sB**8 + 32*cA**6*cB**6*cC**10*sA**10*sB**10 - 288*cA**6*cB**4*cC**10*sA**10*sB**12 - 32*cA**6*cB**2*cC**10*sA**10*sB**14 + 56*cA**6*cC**10*sA**10*sB**16 + 28*cA**4*cB**16*cC**12*sA**12 + 32*cA**4*cB**14*cC**12*sA**12*sB**2 - 112*cA**4*cB**12*cC**12*sA**12*sB**4 - 288*cA**4*cB**10*cC**12*sA**12*sB**6 - 344*cA**4*cB**8*cC**12*sA**12*sB**8 - 288*cA**4*cB**6*cC**12*sA**12*sB**10 - 112*cA**4*cB**4*cC**12*sA**12*sB**12 + 32*cA**4*cB**2*cC**12*sA**12*sB**14 + 28*cA**4*cC**12*sA**12*sB**16 + 8*cA**2*cB**16*cC**14*sA**14 + 32*cA**2*cB**14*cC**14*sA**14*sB**2 + 32*cA**2*cB**12*cC**14*sA**14*sB**4 - 32*cA**2*cB**10*cC**14*sA**14*sB**6 - 80*cA**2*cB**8*cC**14*sA**14*sB**8 - 32*cA**2*cB**6*cC**14*sA**14*sB**10 + 32*cA**2*cB**4*cC**14*sA**14*sB**12 + 32*cA**2*cB**2*cC**14*sA**14*sB**14 + 8*cA**2*cC**14*sA**14*sB**16 + cB**16*cC**16*sA**16 + 8*cB**14*cC**16*sA**16*sB**2 + 28*cB**12*cC**16*sA**16*sB**4 + 56*cB**10*cC**16*sA**16*sB**6 + 70*cB**8*cC**16*sA**16*sB**8 + 56*cB**6*cC**16*sA**16*sB**10 + 28*cB**4*cC**16*sA**16*sB**12 + 8*cB**2*cC**16*sA**16*sB**14 + cC**16*sA**16*sB**16' </code></pre> <p>In general there are limitations on giving radical expressions for the roots of polynomials of degree 5 or more except in certain special cases due to the Abel-Ruffini theorem. If the coefficients are numeric rather than symbolic then we can work around those with things like RootOf. In this case though I doubt that it is possible to express the solutions of this system of equations using anything that you might recognise as an explicit closed form.</p> <p>If you substitute numeric values for the symbolic unknowns then it should be possible to get approximate roots numerically (e.g. 
with <code>nroots</code>) or it might become possible to get exact solutions:</p> <pre><code>In [62]: p_numeric = p.subs({cA:0,sA:1,cB:1/sqrt(2),sB:1/sqrt(2),cC:1}) In [63]: p_numeric Out[63]: 8 6 4 2 16 14 12 10 35⋅sE 7⋅sE 7⋅sE sE 1 sE - 4⋅sE + 7⋅sE - 7⋅sE + ────── - ───── + ───── - ─── + ─── 8 4 16 16 256 In [64]: roots(p_numeric) Out[64]: ⎧-√2 √2 ⎫ ⎨────: 8, ──: 8⎬ ⎩ 2 2 ⎭ In [69]: nroots(p_numeric, n=8, maxsteps=500) Out[69]: [-0.70710678, 0.70710677, 0.70710679, -0.70710679 + 2.4757313e-9⋅ⅈ, -0.70710679 - 2.9650933e-9⋅ⅈ, - 0.70710678 + 4.9626324e-9⋅ⅈ, -0.70710678 - 6.2970508e-9⋅ⅈ, -0.70710678 + 5.3076725e-9⋅ⅈ, -0.7071067 8 - 5.388738e-9⋅ⅈ, -0.70710678 + 2.7281099e-9⋅ⅈ, 0.70710678 + 5.5825484e-9⋅ⅈ, 0.70710678 - 6.328749 4e-9⋅ⅈ, 0.70710678 + 7.8957797e-9⋅ⅈ, 0.70710678 - 8.1564975e-9⋅ⅈ, 0.70710679 + 5.6265597e-9⋅ⅈ, 0.70 710679 - 5.3161607e-9⋅ⅈ] </code></pre> <p>Note here that <code>nsolve</code> has been unable to determine that there are repeated roots or whether or not the roots are real or have a small imaginary part. This is because finding the roots of a polynomial can be very ill-conditioned especially where there are many repeated roots (possibly the values that I chose to substitute are somewhat degenerate).</p> <p>Most likely though if you are going to substitute values then it is better to do that at the start and e.g. just call <code>solve/nsolve</code> with the substituted values.</p> <p><a href="https://en.wikipedia.org/wiki/Resultant" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Resultant</a></p> <p><a href="https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem</a></p>
python|sympy|equation-solving
3
1,906,041
63,610,515
Compare differences of branches and categorise
<p>I'm new to Python. Suppose test-file changes go into a "testing" category, while buildpack or code-level changes go into a "functional changes" category. I'm trying to find the difference between two branches (master, feature) of a project, then read the code differences and categorise them.</p>
<p>You can check out <a href="https://tortoisesvn.net/TortoiseMerge.html" rel="nofollow noreferrer">TortoiseMerge</a> for viewing code differences and the <a href="https://riptutorial.com/git/example/18336/gitk-and-git-gui" rel="nofollow noreferrer">gitk command</a> for browsing the changes that happened in the repo.</p>
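<p>If you want to do the categorisation from Python rather than from a GUI, a rough sketch (the branch names and the categorisation rules are assumptions, not something given in the question) is to list the files that differ between the two branches with <code>git diff --name-only</code> and bucket them by path:</p> <pre><code># Rough sketch: categorise files changed between two branches.
# Branch names and the rules in categorise() are assumptions.
import subprocess

def changed_files(base='master', feature='feature'):
    out = subprocess.run(
        ['git', 'diff', '--name-only', base + '...' + feature],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def categorise(path):
    if 'test' in path.lower():
        return 'testing'
    return 'functional changes'

for path in changed_files():
    print(categorise(path), '-', path)
</code></pre>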
python|python-3.x|git-diff|github-api-v3
0
1,906,042
63,590,234
Why does this OpenCV threshold call return a blank image?
<p>I am following this tutorial on OpenCV, where the tutorial uses the following code:</p> <pre><code>import argparse
import imutils
import cv2

ap = argparse.ArgumentParser()
ap.add_argument(&quot;-i&quot;, &quot;--image&quot;, required=True, help=&quot;path to input image&quot;)
args = vars(ap.parse_args())

image = cv2.imread(args[&quot;image&quot;])
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 255, 255, cv2.THRESH_BINARY_INV)[1]
cv2.imshow(&quot;Thresh&quot;, thresh)
cv2.waitKey()
</code></pre> <p>Yet the image <code>thresh</code> displays as a blank window with a light grey background, whereas the tutorial is supposed to show a monochrome profile of an image that came with the tutorial source code.</p> <p>I'm using the same code, and the same input image, yet the tutorial tells me to expect a monochrome image showing object outlines, while I only get a blank, grey image. What could be wrong here?</p>
<p>Your <code>thresh</code> parameter value should be different from the <code>maxval</code> value.</p> <pre><code>thresh = cv2.threshold(src=gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY_INV)[1]
</code></pre> <p>From the <a href="https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#gae8a4a146d1ca78c626a53577199e9c57" rel="nofollow noreferrer">documentation</a>:</p> <ul> <li><code>gray</code> is your source image.</li> <li><code>thresh</code> is the threshold value.</li> <li><code>maxval</code> is the maximum value.</li> <li><code>type</code> is the thresholding type.</li> </ul> <p>With <code>THRESH_BINARY_INV</code>, pixels whose value is above <code>thresh</code> become 0 and every other pixel becomes <code>maxval</code>. When you set <code>thresh</code> to 255, no 8-bit pixel can be above the threshold, so every pixel is set to <code>maxval</code> and you get a uniform image with no visible structure.</p> <p>One possible fix is setting <code>thresh</code> to 127.</p> <p>For instance:</p> <p>Original Image (left) and Thresholded Image (right):</p> <p><a href="https://i.stack.imgur.com/T9PAt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T9PAt.jpg" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/KDrix.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KDrix.jpg" alt="enter image description here" /></a></p> <pre><code>import cv2

image = cv2.imread(&quot;fgtc7_256_256.jpg&quot;)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(src=gray, thresh=127, maxval=255, type=cv2.THRESH_BINARY_INV)[1]
cv2.imshow(&quot;Thresh&quot;, thresh)
cv2.waitKey(0)
</code></pre>
python|python-3.x|opencv|graphics|opencv3.0
0
1,906,043
55,928,554
How to perform bitwise and operation on matrices of a tensor in numpy
<p>I have the following numpy tensor:</p> <pre><code>M = np.zeros((a,b,c), dtype=bool) </code></pre> <p>I wish to perform a bitwise and on all <code>a</code> matrices of dimension <code>b,c</code> to give a final matrix of dimensions <code>b,c</code>. I do not know how to achieve this efficiently. Something like</p> <p><code>np.apply_along_axis(func1d=np.bitwise_and, axis=0, arr=M)</code> but I get the error message: <code>ValueError: invalid number of arguments</code> and I am unclear why.</p> <p>UPDATE: This works, but is there a more (time) efficient way?</p> <pre><code>v = np.ones((b,c),dtype=bool) for i in range(0, a): v = v &amp; M[i] </code></pre>
<p>You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.all.html" rel="nofollow noreferrer"><code>all</code></a> for this:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt; M = np.zeros((8,9,10), dtype=np.bool) &gt;&gt; M.all(0).shape (9, 10) </code></pre>
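<p>For what it's worth, the <code>ValueError</code> comes from the fact that <code>np.apply_along_axis</code> passes a single 1-D slice at a time to <code>func1d</code>, while <code>np.bitwise_and</code> is a binary ufunc that expects two arrays. If you prefer to keep the logical wording explicit, reducing the ufunc over axis 0 gives the same result as <code>M.all(0)</code> for a boolean array (small sketch with made-up sizes):</p> <pre><code>import numpy as np

a, b, c = 8, 9, 10
M = np.random.rand(a, b, c) &gt; 0.5        # example boolean tensor

v1 = np.logical_and.reduce(M, axis=0)    # AND over the a matrices
v2 = M.all(axis=0)                       # equivalent for boolean arrays
print(v1.shape, np.array_equal(v1, v2))  # (9, 10) True
</code></pre>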
python|numpy|matrix|bitwise-operators|logical-operators
1
1,906,044
56,735,601
Translation between languages with different numbers of plurals
<p>I am trying to create a translation file in simplified Chinese from an English source. This is all happening in a Flask project and I've been using Flask-Babel (so far successfully) to translate to Spanish and French. I added a Chinese translation file but I'm running into the following issue.</p> <p>Consider the following strings in English:</p> <pre><code>msgid "One message" msgid_plural "%(num)d messages" </code></pre> <p>When there is only one message, I don't want to display the number <code>1</code>, I want the spelled out version.</p> <p>Apparenly, in Chinese, there is no grammatical difference between singular and plural, so our translators only included one translation for both versions:</p> <pre><code>msgstr[0] "%(num)d [something in Chinese]" </code></pre> <p>When I tried to compile this file, I got the following error message:</p> <blockquote> <p>unknown named placeholder 'num'</p> </blockquote> <p>So I tried to duplicate the line like this (<code>一</code> is 1 in Chinese):</p> <pre><code>msgstr[0] "一 [something in Chinese]" msgstr[1] "%(num)d [something in Chinese]" </code></pre> <p>But then I got this error:</p> <blockquote> <p>msg has more translations than num_plurals of catalog</p> </blockquote> <p>which makes sense: Chinese has a <code>nplurals</code> of 1 so there shouldn't be more than one <code>msgstr</code>.</p> <p>I see two options at this stage:</p> <ol> <li>Cheat on my Chinese .po file and declare that <code>nplurals = 2</code> with the same rule as English: <code>"Plural-Forms: nplurals=2; plural=(n &gt; 1)"</code>.</li> <li>Update all my source strings so that I always use <code>%(num)d</code> in the singular version if I need it in the plural version.</li> </ol> <p>I'm not really satisfied with either option. Is there an alternative?</p>
<p>This seems to be a bug in babel, not a problem of gettext.</p> <p>The following PO excerpt is correct and compiles with <code>msgfmt --check</code>:</p> <pre><code>msgid "" msgstr "" ... "Language-Team: Chinese (simplified) &lt;i18n-zh@googlegroups.com&gt;\n" "Language: zh_CN\n" ... "Plural-Forms: nplurals=1; plural=0;\n" #: example.c:1 #, python-format msgid "One message" msgid_plural "%(num)d messages" msgstr[0] "%(num)d [something in Chinese]" </code></pre> <p>Until the bug is fixed, using option 1 (cheating on the plural-forms header) seems to be a viable workaround. Or just use <code>msgfmt --check</code> instead of the Python script for compiling the po file.</p>
python|internationalization|gettext|python-babel
0
1,906,045
69,991,773
Python Glob in Classes to access elements from file
<p>Hi there I'm having issues using glob for accessing list of files from multiple subdirectories.</p> <p>Here is my code:</p> <pre><code>class Folder: &quot;&quot;&quot;Attempt to model typical folder.&quot;&quot;&quot; def __init__(self, path): self.path = path def get_folder(self): files = glob.glob(os.path.join(self.path, &quot;*&quot;)) dir_list = [] for f in files: if os.path.isdir(f): dir_list = dir_list + [ os.path.join(self.path, elt) for elt in os.listdir(f) ] return dir_list def get_file(self): dir_list = self.get_folder() sf_list = [] for line in dir_list: sep = os.path.sep subfiles = glob.glob(sep.join([line, &quot;*&quot;])) sf_list.append(subfiles) return sf_list </code></pre> <p>Having this as code structure:</p> <pre><code> datasets Actor1, emotion1 one.wav two.wav emotion2 one.wav two.wav Actor2 emotion1 one.wav two.wav emotion2 one.wav two.wav </code></pre> <p>The output of my <code>Folder(datasets).get_folder()</code> is the following.</p> <pre><code>['...\\datasets\\Actor1\\emotion1', '...\\datasets\\Actor1\\emotion2', '...\\datasets\\Actor2\\emotion2', '...\\datasets\\Actor2\\emotion2',] </code></pre> <p>But when I try to access and save into variables the files in the subfolders, calling <code>Folder(datasets).get_file()</code> Instead of returning all sub-files,</p> <pre><code>['...\\datasets\\Actor1\\emotion1\\one.wav', '...\\datasets\\Actor1\\emotion1'\\two.wav', '...\\datasets\\Actor2\\emotion2\\one.wav', '...\\datasets\\Actor2\\emotion2\\two.wav',] </code></pre> <p>it outputs a list of multiple empty values: <code>Out[3]: [ [], [], [], []]</code></p> <p>Could you help?</p>
<p>I haven't tested this, but this might be a good start for you. Note that before Python 3.12 you cannot subclass <code>pathlib.Path</code> directly (instantiating the subclass fails because <code>Path</code> dispatches to <code>PosixPath</code>/<code>WindowsPath</code>), so subclass the concrete class for the current platform instead:</p> <pre><code>import pathlib

class Folder(type(pathlib.Path())):   # concrete Path class for this OS
    def files(self):
        return [f for f in self.iterdir() if f.is_file()]

    def folders(self):
        return [f for f in self.iterdir() if f.is_dir()]
</code></pre> <p>See <a href="https://docs.python.org/3/library/pathlib.html#pathlib.Path.iterdir" rel="nofollow noreferrer">https://docs.python.org/3/library/pathlib.html#pathlib.Path.iterdir</a>.</p>
python|glob
1
1,906,046
69,899,099
Is there a way to isolate the elements of list without using a for loop?
<p>I'm trying to make a basic Python program where a user inputs a number at random, and if it matches one of the numbers in a list they get a printed &quot;congratulations&quot;. And if not, they get to choose again.</p> <p><img src="https://i.stack.imgur.com/KvnEd.png" alt="line 5 of this code is my problem" /></p> <p>My problem is, I don't know how isolate the elements of this list without using a <code>for</code> loop.</p> <p>For example, suppose I have a list: <code>numbers = [1, 9, 8,33,78,89,235]</code>. And I also have a <code>UserInput</code> function that will allow the user to select any number.</p> <p>I don't understand how to properly code <code>if UserInput == an *element in numbers*: print(&quot;congratulations&quot;)</code>.</p> <p>Does anyone know how to do this?</p>
<h4>This is how you do it:-</h4> <pre class="lang-py prettyprint-override"><code>numbers = [1, 9, 8,33,78,89,235] user_input = int(input(&quot;Enter a number: &quot;)) if user_input in numbers: print(&quot;Correct answer&quot;) else: print(&quot;Wrong answer&quot;) </code></pre>
python|list
1
1,906,047
66,171,914
Click events in python plotly-dash Scatterpolar plot
<p>I'd like to create two web pages which contain iframed plotly-dash graphs. I'd like to implement click events of the first graph to redirect to the other page (with more info about that specific datum)</p> <p>I followed <a href="https://plotly.com/python/click-events/" rel="noreferrer">this</a> guide, but the <strong>click event is not triggered</strong> in my setup even when I installed ipywidgets, node.js and plotlywidgets extension to the python venv I'm running it in. There is a note in the exaple above:</p> <blockquote> <p>Note: Callbacks will only be triggered when the trace belongs to a instance of plotly.graph_objs.FigureWidget and it is displayed in an ipywidget context. Callbacks will not be triggered on figures that are displayed using plot/iplot.</p> </blockquote> <p>The example is using the <code>FigureWidget</code> and I'm drawing it by passing the figure to <code>dcc.Graph</code> which should be the same as <code>fig.show()</code>. The whole code looks like this</p> <pre><code>import plotly.graph_objects as go import numpy as np np.random.seed(1) x = np.random.rand(100) y = np.random.rand(100) fig = go.FigureWidget([go.Scatter(x=x, y=y, mode='markers')]) scatter = fig.data[0] colors = ['#a3a7e4'] * 100 scatter.marker.color = colors scatter.marker.size = [10] * 100 fig.layout.hovermode = 'closest' # create our callback function def update_point(trace, points, selector): c = list(scatter.marker.color) s = list(scatter.marker.size) for i in points.point_inds: c[i] = '#bae2be' s[i] = 20 with fig.batch_update(): scatter.marker.color = c scatter.marker.size = s scatter.on_click(update_point) import dash import dash_core_components as dcc import dash_html_components as html app = dash.Dash() app.layout = html.Div([ dcc.Graph(figure=fig) ]) app.run_server(debug=True, use_reloader=False) </code></pre> <p>I don't know how to make/be sure that it is displayed in ipywidget context though.</p> <p>I'm aware of a workaround like <a href="https://stackoverflow.com/questions/58310199/select-and-delete-data-points-in-plotly-dash-3d-scatter">this</a> with <code>layout.clickmode = select</code> and a separate button, but I'd like to make the figure's <code>on_click()</code> events working.</p> <p>Could someone please help me to make this work?</p>
<p>Regarding this part of the documentation:</p> <blockquote> <p>in an ipywidget context</p> </blockquote> <p>It means that this will only work in a jupyter notebook for example.</p> <p>If you wish to have this kind of graph interaction when using dash, you should use the <code>dcc.Graph</code> attributes to fire your callbacks (<code>hoverData</code>, <code>clickData</code>, <code>selectedData</code>, <code>relayoutData</code>).</p> <p>See more here: <a href="https://dash.plotly.com/interactive-graphing" rel="nofollow noreferrer">Interactive Visualizations</a></p>
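<p>As a minimal, untested sketch of that idea (component IDs and the detail URL are placeholders, not something from your code), give the <code>dcc.Graph</code> an <code>id</code> and register a callback on its <code>clickData</code> property; inside the callback you can build a link to your second page from the clicked point:</p> <pre><code>import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

app = dash.Dash(__name__)

app.layout = html.Div([
    dcc.Graph(id='scatter', figure=fig),   # fig built as in the question
    html.Div(id='click-output'),
])

@app.callback(Output('click-output', 'children'),
              [Input('scatter', 'clickData')])
def show_click(click_data):
    if click_data is None:
        return 'Click a point'
    point = click_data['points'][0]
    # placeholder URL for the page with more info about this datum
    return html.A('More info', href='/detail?i={}'.format(point['pointIndex']))

app.run_server(debug=True, use_reloader=False)
</code></pre>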
python|onclick|plotly-dash|ipywidgets
1
1,906,048
66,073,873
Pygame Scroll Bar To Play Volume Low OR High.?
<p>so I have a scroll bar in my game what Im trying to do is make it so if <em><strong>my mouse is over the bar1 button and we are on the moving_spot of the bar1 button then we can move it up and down on its y axis</strong></em></p> <p>how can I move the bar up and down and if its colliding with with any of the volume buttons I can change the volume of my background music either 0.1 or 0.2 or 0.3 so it controls my game volume<code>pygame.mixer.music.set_volume(0.3)</code> <a href="https://i.stack.imgur.com/8IjD4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8IjD4.png" alt="enter image description here" /></a></p> <p>my problem is im not sure how I could get this started I have everything in place but not sure where to start ***how can I move the bar with my mouse on the moving_spot on its y values only and if the bar1 is over and of the volume1 2 or 3 4 buttons then it should play the volume at defferent level im not sure how to approach this problem any help is appreciated I just need a way to adjust my music of my game if the player moves the bar up or down</p> <pre><code>while run: # Making game run with fps clock.tick(fps) for event in pygame.event.get(): if event.type == pygame.QUIT: run = False if event.type == pygame.MOUSEBUTTONDOWN: pos = pygame.mouse.get_pos() # this is our bar1 the gray part that we will be able to move if bar1.isOver(pos): bar1.y = pos print(&quot;WORKING{&quot;) </code></pre> <p>here are my buttons and positions places the <code>move_spot</code> is where the bar1 can only move up and down the bar1 is the bar that the player can control to control the volume and also the volume 1 2 3 4 are where the defferent volume of our background music will be set</p> <pre><code>move_spot = button((colors),720,125,25,260, '') bar1 = button((colors),715,125,30,60, '') volume1 = button((colors2),715,145,30,60, '') volume2 = button((colors2),715,210,30,60, '') volume3 = button((colors2),715,280,30,60, '') volume4 = button((colors2),715,350,30,60, '') buttons = [bar1,move_spot,volume1,volume2,volume3,volume4] </code></pre> <p>this is my buttons class</p> <pre><code># our buttons class button(): def __init__(self, color, x,y,width,height, text=''): self.color = color self.x = x self.y = y self.width = width self.height = height self.text = text self.over = False def draw(self,window,outline=None): #Call this method to draw the button on the screen if outline: pygame.draw.rect(window, outline, (self.x-2,self.y-2,self.width+4,self.height+4),0) pygame.draw.rect(window, self.color, (self.x,self.y,self.width,self.height),0) if self.text != '': font = pygame.font.SysFont('image/abya.ttf', 60) text = font.render(self.text, 1, (255,255,255)) window.blit(text, (self.x + (self.width/2 - text.get_width()/2), self.y + (self.height/2 - text.get_height()/2))) def isOver(self, pos): #Pos is the mouse position or a tuple of (x,y) coordinates if pos[0] &gt; self.x and pos[0] &lt; self.x + self.width: if pos[1] &gt; self.y and pos[1] &lt; self.y + self.height: return True return False def playSoundIfMouseIsOver(self, pos, sound): if self.isOver(pos): if not self.over: click.play() self.over = True else: self.over = False </code></pre> <p>here a minimal code you can run and test with this bar image<a href="https://i.stack.imgur.com/eLAMi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eLAMi.png" alt="enter image description here" /></a></p> <p>heres the background music <a href="http://soundimage.org/wp-content/uploads/2014/04/City-Beneath-the-Waves.mp3" 
rel="nofollow noreferrer">music</a></p> <pre><code>import pygame pygame.init() window = pygame.display.set_mode((800,800)) # our buttons class button(): def __init__(self, color, x,y,width,height, text=''): self.color = color self.x = x self.y = y self.width = width self.height = height self.text = text self.over = False def draw(self,window,outline=None): #Call this method to draw the button on the screen if outline: pygame.draw.rect(window, outline, (self.x-2,self.y-2,self.width+4,self.height+4),0) pygame.draw.rect(window, self.color, (self.x,self.y,self.width,self.height),0) if self.text != '': font = pygame.font.SysFont('freesansbold.ttf', 60) text = font.render(self.text, 1, (255,255,255)) window.blit(text, (self.x + (self.width/2 - text.get_width()/2), self.y + (self.height/2 - text.get_height()/2))) def isOver(self, pos): #Pos is the mouse position or a tuple of (x,y) coordinates if pos[0] &gt; self.x and pos[0] &lt; self.x + self.width: if pos[1] &gt; self.y and pos[1] &lt; self.y + self.height: return True return False def playSoundIfMouseIsOver(self, pos, sound): if self.isOver(pos): if not self.over: click.play() self.over = True else: self.over = False colors = 0,23,56 colors2 = 0,123,56 bar11 = pygame.image.load(&quot;bar.png&quot;).convert_alpha() move_spot = button((colors),720,125,25,260, '') bar1 = button((colors),715,125,30,60, '') volume1 = button((colors2),715,145,30,60, '') volume2 = button((colors2),715,210,30,60, '') volume3 = button((colors2),715,280,30,60, '') volume4 = button((colors2),715,350,30,60, '') buttons = [bar1,move_spot,volume1,volume2,volume3,volume4] # fos fps = 60 clock = pygame.time.Clock() # redraw def redraw(): window.fill((40,100,200)) for button in buttons: button.draw(window) window.blit(bar11,(bar1.x,bar1.y)) # main loop run = True while run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False redraw() pygame.display.update() pygame.quit() </code></pre>
<p>As a general rule of thumb, you want to delegate all the heavy lifting to classes, and leave the &quot;good&quot; stuff to the mainloop of your program. I have created a simple class here, which takes some inputs (width, height, number of slider options), and will take care of all the drawing, positioning, etc. for you. It also has an option of <code>self.chosen</code>, which tells you which option is picked. I then used this to set the volume outputted by the mixer, based on which option is chosen, in the <code>update()</code> function:</p> <pre class="lang-py prettyprint-override"><code>class VolumeBar(pygame.sprite.Sprite): def __init__(self, options, width, height): # Store these useful variables to the class self.options = options self.width = width self.height = height # The slider self.slider = pygame.image.load('slider.png') self.slider_rect = self.slider.get_rect() # The background &quot;green&quot; rectangles, mostly for decoration self.back = [] objectHeight = (height-options*6)/(options-1) self.backHeight = objectHeight for index in range(options-1): self.back.append(pygame.Rect((0, rint((6*index+6)+index*objectHeight)), (width, rint(objectHeight)))) # The foreground &quot;blue&quot; rectangles, mostly for decoration self.fore = [] for index in range(options): self.fore.append(pygame.Rect((0, rint(index*(objectHeight+6))), (width, 6))) # Get list of possible &quot;snaps&quot;, which the slider can be dragged to self.snaps = [] for index in range(options): self.snaps.append((width/2, 3+(index*(objectHeight+6)))) # A simple variable to tell you which option has been chosen self.chosen = 0 # Generate the image surface, then render the bar to it self.image = pygame.Surface((width, height)) self.rect = self.image.get_rect() self.render() self.focus = False def render(self): self.image.fill([255, 255, 255]) for back in self.back: pygame.draw.rect(self.image, [0, 192, 0], back) for fore in self.fore: pygame.draw.rect(self.image, [0, 0, 0], fore) self.image.blit(self.slider, (rint((self.width-self.slider_rect.width)/2), rint(self.snaps[self.chosen][1]-(self.slider_rect.height/2)))) def draw(self, surface): surface.blit(self.image, self.rect.topleft) def mouseDown(self, pos): if self.rect.collidepoint(pos): self.focus = True return True return False def mouseUp(self, pos): if not self.focus: return self.focus = False if not self.rect.collidepoint(pos): return pos = list(pos) # We've made sure the mouse started in our widget (self.focus), and ended in our widget (collidepoint) # So there is no reason to care about the exact position of the mouse, only where it is relative # to this widget pos[0] -= self.rect.x pos[1] -= self.rect.y # Calculating max distance from a given selection, then comparing that to mouse pos dis = self.backHeight/2 + 3 for index, snap in enumerate(self.snaps): if abs(snap[1]-pos[1]) &lt;= dis: self.chosen = index break self.render() def update(self): pygame.mixer.music.set_volume((self.options-self.chosen)*0.1) </code></pre> <p>Most of the <code>__init__</code> function is spent calculating positions for each of the green and black rectangles, which are drawn in <code>render()</code> and displayed in <code>draw()</code>. 
The other functions are there for the mouse input, the first checks if the <code>mouseDown</code> happened on said button, and if it did, it sets <code>self.focus</code> to <code>True</code>, so that the <code>mouseUp</code> handler knows that it should change the slider position.</p> <p>All of this works together to make a working <code>VolumeBar</code>. Below is an example of how it would be used in a mainloop:</p> <pre class="lang-py prettyprint-override"><code>import pygame pygame.init() rint = lambda x: int(round(x, 0)) # Rounds to the nearest integer. Very useful. class VolumeBar(pygame.sprite.Sprite): def __init__(self, options, width, height): # Store these useful variables to the class self.options = options self.width = width self.height = height # The slider self.slider = pygame.image.load('slider.png') self.slider_rect = self.slider.get_rect() # The background &quot;green&quot; rectangles, mostly for decoration self.back = [] objectHeight = (height-options*6)/(options-1) self.backHeight = objectHeight for index in range(options-1): self.back.append(pygame.Rect((0, rint((6*index+6)+index*objectHeight)), (width, rint(objectHeight)))) # The foreground &quot;blue&quot; rectangles, mostly for decoration self.fore = [] for index in range(options): self.fore.append(pygame.Rect((0, rint(index*(objectHeight+6))), (width, 6))) # Get list of possible &quot;snaps&quot;, which the slider can be dragged to self.snaps = [] for index in range(options): self.snaps.append((width/2, 3+(index*(objectHeight+6)))) # A simple variable to tell you which option has been chosen self.chosen = 0 # Generate the image surface, then render the bar to it self.image = pygame.Surface((width, height)) self.rect = self.image.get_rect() self.render() self.focus = False def render(self): self.image.fill([255, 255, 255]) for back in self.back: pygame.draw.rect(self.image, [0, 192, 0], back) for fore in self.fore: pygame.draw.rect(self.image, [0, 0, 0], fore) self.image.blit(self.slider, (rint((self.width-self.slider_rect.width)/2), rint(self.snaps[self.chosen][1]-(self.slider_rect.height/2)))) def draw(self, surface): surface.blit(self.image, self.rect.topleft) def mouseDown(self, pos): if self.rect.collidepoint(pos): self.focus = True return True return False def mouseUp(self, pos): if not self.focus: return self.focus = False if not self.rect.collidepoint(pos): return pos = list(pos) # We've made sure the mouse started in our widget (self.focus), and ended in our widget (collidepoint) # So there is no reason to care about the exact position of the mouse, only where it is relative # to this widget pos[0] -= self.rect.x pos[1] -= self.rect.y # Calculating max distance from a given selection, then comparing that to mouse pos dis = self.backHeight/2 + 3 for index, snap in enumerate(self.snaps): if abs(snap[1]-pos[1]) &lt;= dis: self.chosen = index break self.render() def update(self): pygame.mixer.music.set_volume((self.options-self.chosen)*0.1) screen = pygame.display.set_mode([500, 500]) test = VolumeBar(10, 30, 300) test.rect.x = 50 test.rect.y = 50 clock = pygame.time.Clock() game = True while game: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() game = False if event.type == pygame.MOUSEBUTTONDOWN: test.mouseDown(pygame.mouse.get_pos()) if event.type == pygame.MOUSEBUTTONUP: test.mouseUp(pygame.mouse.get_pos()) if not game: break screen.fill([255, 255, 255]) test.update() test.draw(screen) pygame.display.update() clock.tick(60) </code></pre> <p>The final product:</p> <p><a 
href="https://i.gyazo.com/f6c2b5ede828f7715e5fd53a65c32c13.mp4" rel="nofollow noreferrer">https://i.gyazo.com/f6c2b5ede828f7715e5fd53a65c32c13.mp4</a></p> <p>As long as your <code>mouseDown</code> happened on this widget, <code>mouseUp</code> will determine where the slider ends up. Thusly, you can click, drag, or tap the slider anywhere on it, and the slider will go to the correct position.</p> <p>Accessing the position of the slider is quite simple, just look at <code>self.chosen</code>. It defaults to 0 (Because it was set to that in the <code>__init__</code>) function, which is at the very top.</p>
python|pygame
2
1,906,049
68,958,798
Visual Studio 2019 - Python
<p>Is there a recommended way to use Python in Visual Studio? I worked so far with C/C++ and I already have Visual Studio; however, when trying to open a Python project I get this message:</p> <p><img src="https://i.stack.imgur.com/fwBr6.png" alt="enter image description here" /></p> <p>What is the best way to use Python in Visual Studio?</p> <p>Or it's better to use other workspace for python ?</p>
<p>It depends on your preference. You can work with Visual Studio; you just need to install the components required for Python support (the &quot;Python development&quot; workload in the Visual Studio Installer). You can also search YouTube for <code>how to use Python in Visual Studio</code> and follow a tutorial, and everything should work fine.</p> <p>I personally prefer to program in PyCharm, which is a highly recommended IDE for Python.</p>
python|visual-studio
0
1,906,050
68,878,864
How to fix "UnboundLocalError: local variable 'open' referenced before assignment" in write file?
<p>I am doing an assistant and I want it to write file all the conversations I have spoken but this happens when I run it &quot;UnboundLocalError: local variable 'open' referenced before assignment&quot; and I try many ways to fix it but I could not fix it. Hope anyone can help me to fix it. <em><strong>Thank you for any help</strong></em> Here is my code</p> <pre><code>#import import datetime import os import pyautogui import pyjokes import pyttsx3 import pywhatkit import requests import smtplib import speech_recognition as sr import webbrowser as we from email.message import EmailMessage from datetime import date from newsapi import NewsApiClient import random import os import wikipedia from subprocess import run from typing import Text import time user = &quot;Tom&quot; #your name assistant= &quot;Jarvis&quot; # Iron man Fan Jarvis_mouth = pyttsx3.init() Jarvis_mouth.setProperty(&quot;rate&quot;, 165) voices = Jarvis_mouth.getProperty(&quot;voices&quot;) # For Mail voice AKA Jarvis Jarvis_mouth.setProperty(&quot;voice&quot;, voices[1].id) def Jarvis_brain(audio): robot = Jarvis_brain print(&quot;Jarvis: &quot; + audio) Jarvis_mouth.say(audio) Jarvis_mouth.runAndWait() #Jarvis speech #Jarvis_ear = listener #Jarvis_brain = speak #Jarvis_mouth = engine #you = command def inputCommand(): # you = input() # For getting input from CLI Jarvis_ear = sr.Recognizer() you = &quot;&quot; with sr.Microphone(device_index=1) as mic: print(&quot;Listening...&quot;) Jarvis_ear.pause_threshold = 1 try: you = Jarvis_ear.recognize_google(Jarvis_ear.listen(mic, timeout=3), language=&quot;en-IN&quot;) except Exception as e: Jarvis_brain(&quot; &quot;) except: you = &quot;&quot; print(&quot;You: &quot; + you) return you print(&quot;Jarvis AI system is booting, please wait a moment&quot;) Jarvis_brain(&quot;Start the system, your AI personal assistant Jarvis&quot;) def greet(): hour=datetime.datetime.now().hour if hour&gt;=0 and hour&lt;12: Jarvis_brain(f&quot;Hello, Good Morning {user}&quot;) print(&quot;Hello,Good Morning&quot;) elif hour&gt;=12 and hour&lt;18: Jarvis_brain(f&quot;Hello, Good Afternoon {user}&quot;) print(&quot;Hello, Good Afternoon&quot;) else: Jarvis_brain(f&quot;Hello, Good Evening {user}&quot;) print(&quot;Hello,Good Evening&quot;) greet() def main(): while True: with open (&quot;Jarvis.txt&quot;, &quot;a&quot;) as Jarvis: Jarvis.write(&quot;Jarvis: &quot; + str(Jarvis_brain) + &quot;\n&quot; + &quot;You: &quot; + str(you) + &quot;\n&quot;) # Getting input from the user you = inputCommand().lower() #General question. 
if (&quot;hello&quot; in you) or (&quot;Hi&quot; in you): Jarvis_brain(random.choice([&quot;how may i help you, sir.&quot;,&quot;Hi,sir&quot;])) elif (&quot;time&quot; in you): now = datetime.datetime.now() robot_brain = now.strftime(&quot;%H hour %M minutes %S seconds&quot;) Jarvis_brain(robot_brain) elif (&quot;date&quot; in you): today = date.today() robot_brain = today.strftime(&quot;%B %d, %Y&quot;) Jarvis_brain(robot_brain) elif (&quot;joke&quot; in you): Jarvis_brain(pyjokes.get_joke()) elif &quot;president of America&quot; in you: Jarvis_brain(&quot;Donald Trump&quot;) #open application elif (&quot;play&quot; in you): song = you.replace('play', '') Jarvis_brain('playing ' + song) pywhatkit.playonyt(song) elif (&quot;open youtube&quot; in you): Jarvis_brain(&quot;opening YouTube&quot;) we.open('https://www.youtube.com/') elif (&quot;open google&quot; in you): Jarvis_brain(&quot;opening google&quot;) we.open('https://www.google.com/') elif &quot;information&quot; in you: you = you.replace(&quot;find imformation&quot;, &quot;&quot;) Jarvis_brain(&quot;what news you what to know about&quot;) topic=inputCommand() Jarvis_brain(&quot;open &quot; + topic) we.open(&quot;https://www.google.com/search?q=&quot; + topic) elif (&quot;open gmail&quot; in you): Jarvis_brain(&quot;opening gmail&quot;) we.open('https://mail.google.com/mail/u/2/#inbox') elif &quot;vietnamese translate&quot; in you: Jarvis_brain(&quot;opening Vietnamese Translate&quot;) we.open('https://translate.google.com/?hl=vi') elif &quot;english translate&quot; in you: Jarvis_brain(&quot;opening English Translate&quot;) we.open('https://translate.google.com/?hl=vi&amp;sl=en&amp;tl=vi&amp;op=translate') elif (&quot;open internet&quot;) in you: Jarvis_brain(&quot;opening internet&quot;) open = &quot;C:\Program Files (x86)\BraveSoftware\Brave-Browser\Application\Brave.exe&quot; os.startfile(open) elif &quot;wikipedia&quot; in you: you = you.replace(&quot;wikipedia&quot;, &quot;&quot;) Jarvis_brain(&quot;what topic you need to listen to&quot;) topic=inputCommand() results = wikipedia.summary(topic, sentences=2, auto_suggest=False, redirect=True) print(results) Jarvis_brain(results) #New&amp;Covid-19 elif (&quot;news&quot; in you): newsapi = NewsApiClient(api_key='d4eb31a4b9f34011a0c243d47b9aed4d') Jarvis_brain(&quot;What topic you need the news about&quot;) topic = inputCommand() data = newsapi.get_top_headlines( q=topic, language=&quot;en&quot;, page_size=5) newsData = data[&quot;articles&quot;] for y in newsData: Jarvis_brain(y[&quot;description&quot;]) elif (&quot;covid-19&quot; in you): r = requests.get('https://coronavirus-19-api.herokuapp.com/all').json() Jarvis_brain(f'Confirmed Cases: {r[&quot;cases&quot;]} \nDeaths: {r[&quot;deaths&quot;]} \nRecovered {r[&quot;recovered&quot;]}') else: if &quot;goodbye&quot; in you: hour = datetime.datetime.now().hour if (hour &gt;= 21) and (hour &lt; 5): Jarvis_brain(f&quot;Good Night {user}! Have a nice Sleep&quot;) else: Jarvis_brain(f&quot;Bye {user}&quot;) quit() main() </code></pre> <p>Here is my terminal</p> <pre><code>Traceback (most recent call last): File &quot;c:\Users\PC\Documents\Code\assistant\Jarvis.py&quot;, line 181, in &lt;module&gt; main() File &quot;c:\Users\PC\Documents\Code\assistant\Jarvis.py&quot;, line 86, in main with open (&quot;Jarvis.txt&quot;, &quot;a&quot;) as Jarvis: UnboundLocalError: local variable 'open' referenced before assignment </code></pre> <p><em><strong>Thanks for any help</strong></em></p>
<p>This is because you have assigned to the name <strong>open</strong>, which is one of Python's built-in functions, inside your function:</p> <pre class="lang-py prettyprint-override"><code>open = &quot;C:\Program Files (x86)\BraveSoftware\Brave-Browser\Application\Brave.exe&quot;
</code></pre> <p>Because this assignment happens somewhere inside <code>main()</code>, Python treats <code>open</code> as a local variable for the <em>whole</em> function. The <code>with open(&quot;Jarvis.txt&quot;, &quot;a&quot;)</code> call at the top of the loop therefore refers to that local variable before it has been assigned, which raises <code>UnboundLocalError</code> instead of calling the built-in.</p> <p>Rename the <code>open</code> variable you defined on line 146 to another name like <code>open_</code> (or better, something like <code>browser_path</code>) to fix your problem.</p>
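<p>A tiny standalone example of the same scoping rule (nothing here is taken from your assistant code):</p> <pre class="lang-py prettyprint-override"><code>def demo():
    # 'open' is assigned later in this function, so Python treats it as a
    # local name for the whole function body...
    with open('demo.txt', 'a') as f:   # ...so this line raises UnboundLocalError
        f.write('hi')
    open = 'C:/some/path.exe'

demo()
</code></pre>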
python|python-3.x|python-3.9|writefile
1
1,906,051
59,439,268
Python tuple() : when does it reorder?
<p>I am using Python 3.7 and get confused at tuple(). Sometimes it re-order the data, sometimes not:</p> <pre><code>&gt;&gt;&gt; a=tuple([2, 1, 3]) &gt;&gt;&gt; print(a) (2, 1, 3) &lt;== the tuple from list is not re-ordered &gt;&gt;&gt; s={2, 1, 3} &gt;&gt;&gt; b=tuple(s) &gt;&gt;&gt; print(b) (1, 2, 3) &lt;== the tuple from set is re-ordered &gt;&gt;&gt; print(tuple({10, 5, 30})) (10, 5, 30) &lt;== the tuple from set is not re-ordered &gt;&gt;&gt; print(s) {1, 2, 3} &lt;== the set itself is re-ordered </code></pre> <p>I have 2 questions:</p> <blockquote> <ol> <li><p>What are expected behavior of tuple() as of whether:</p> <p>1.1 Generate an ordered tuple?</p> <p>1.2 Modify the input?</p></li> <li><p>Where can I find the definitive documentation? I checked Python on-line doc, <a href="https://docs.python.org/3/library/stdtypes.html#tuple" rel="nofollow noreferrer">https://docs.python.org/3/library/stdtypes.html#tuple</a>, it does not provide such info at all.</p></li> </ol> </blockquote> <p>Thanks.</p>
<p>A set is unordered, and a list is ordered. So <code>tuple(a_list)</code> retains the order of the list, but <code>tuple(a_set)</code> has no stated order to follow: it simply takes the elements in whatever order the set happens to iterate, which is determined by how the elements hash into the set's internal table, not by insertion order. As for 1.2, <code>tuple()</code> never modifies its input; the set only <em>looks</em> reordered because a set has no order to preserve in the first place (printing the set itself shows the same iteration order).</p>
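<p>A quick illustrative check that <code>tuple()</code> leaves its input untouched:</p> <pre><code>&gt;&gt;&gt; s = {2, 1, 3}
&gt;&gt;&gt; before = list(s)      # snapshot of the set's current iteration order
&gt;&gt;&gt; t = tuple(s)
&gt;&gt;&gt; list(s) == before     # tuple() did not change the set
True
&gt;&gt;&gt; t
(1, 2, 3)
</code></pre>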
python|set
5
1,906,052
72,876,514
Type two statements. The first reads user input into person_name. The second reads user input into person_age. Use the int() function to convert
<p>[problem being presented]<br /> <img src="https://i.stack.imgur.com/QYi8k.png" alt="1" /></p> <p>I have tried every which way that I have learned, and nothing seems to be giving me the answer that zybooks wants. It wants me to use the int() function, but every time I run it, it pops up with the error code that I cannot use a built-in function. Any help or guidance would be greatly appreciated.</p>
<p>Here is a snippet of code that should work.</p> <pre><code>name = input('Enter Name: ') age = int(input('Enter Age: ')) print('Name entered: ', name, 'Age entered: ', age) </code></pre> <p>I just tried it out and it stored the name and age. Perhaps you weren't wrapping the age &quot;input&quot; inside the &quot;int&quot; function.</p> <p>See if that helps.</p> <p>Regards.</p>
python
0
1,906,053
62,188,838
Login to mt4 account by python script
<p>I know integrating Python with MetaTrader 5 isn't that hard; it's relatively easy compared to Python integration with MT4. But the issue is I don't need the MT5 integration, I only need MT4 integration.</p> <p>I wanted to know if it's possible to log in to an MT4 account via a Python script (Python 3.6 and above) where I enter the account number, password and server address in the Python script.</p> <p>Doing this in MT5 is simple, but I can't seem to find a way to do it in MT4. Is it even possible for MT4, or am I just pushing it for no reason?</p>
<p>You would need to hop through MQL and a system-level DLL implementation of some kind of messaging-service wrapper. MQL can load a DLL and you can expose functions from that DLL in MQL: <a href="https://docs.mql4.com/basis/preprosessor/import" rel="nofollow noreferrer">https://docs.mql4.com/basis/preprosessor/import</a></p> <p><a href="https://docs.mql4.com/runtime/imports" rel="nofollow noreferrer">https://docs.mql4.com/runtime/imports</a></p> <p>You would then post a login message to a topic; the DLL will get the message (loop or callback), the MQL script will pick up the request (loop or <a href="https://docs.mql4.com/basis/function/events" rel="nofollow noreferrer">ontimer</a>) and log you in.</p>
python|mt4
1
1,906,054
62,186,251
iterate over unique elements of df.index to find minimum in column
<p>My df looks like this:</p> <pre><code>import datetime as dt
import pandas as pd

data = [{'expiry': dt.datetime(2020,6,26), 'strike': 137.5, 'diff': 0.797},
        {'expiry': dt.datetime(2020,6,26), 'strike': 138.0, 'diff': 0.305},
        {'expiry': dt.datetime(2020,6,26), 'strike': 138.5, 'diff': 0.188},
        {'expiry': dt.datetime(2020,6,26), 'strike': 139.0, 'diff': 0.688},
        {'expiry': dt.datetime(2020,7,24), 'strike': 137.5, 'diff': 0.805},
        {'expiry': dt.datetime(2020,7,24), 'strike': 138.0, 'diff': 0.305},
        {'expiry': dt.datetime(2020,7,24), 'strike': 138.5, 'diff': 0.203},
        {'expiry': dt.datetime(2020,7,24), 'strike': 139.0, 'diff': 0.703}]
df = pd.DataFrame(data).set_index('expiry')
</code></pre> <p>I am looking to find the minimum per unique index (expiry). The following works but is rather slow. I am looking for a faster way to do this, either in pure Python, NumPy or pandas.</p> <pre><code>atm_df = pd.DataFrame()
for date in df.index.unique():
    _df = df.loc[date]
    atm_df = atm_df.append(_df.loc[(_df['diff'] == _df['diff'].min())])
atm_df
</code></pre> <p>Desired output looks like this (but I don't mind if this is a df or a dict):</p> <pre><code>            strike   diff
expiry
2020-06-26   138.5  0.188
2020-07-24   138.5  0.203
</code></pre>
<p>One based on <a href="https://numpy.org/doc/stable/reference/generated/numpy.ufunc.reduceat.html" rel="nofollow noreferrer"><code>np.minimum.reduceat</code></a> -</p> <pre><code>sidx = df.index.argsort() df_s = df.iloc[sidx] I = df_s.index.values cutidx = np.flatnonzero(np.r_[True,I[:-1]!=I[1:]]) out = np.minimum.reduceat(df_s.values, cutidx, axis=0) df_out = pd.DataFrame(out, index=I[cutidx], columns=df_s.columns) </code></pre> <p>If the input dataframe is already sorted by <code>index</code>, use <code>df</code> as <code>df_s</code> directly.</p>
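<p>If you prefer to stay in pandas and want the whole row of the minimum <code>diff</code> per expiry (as in the desired output), a short groupby-based sketch is:</p> <pre><code>out = (df.reset_index()
         .loc[lambda d: d.groupby('expiry')['diff'].idxmin()]
         .set_index('expiry'))
print(out)
#             strike   diff
# expiry
# 2020-06-26   138.5  0.188
# 2020-07-24   138.5  0.203
</code></pre>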
pandas|numpy|python-3.8
2
1,906,055
62,071,354
Creating multiple folders dynamically if not exist in Box using Python
<p>Can anybody help me with how to create multiple folders in year format (folders like 2018, 2019, etc., if they do not already exist) in Box, inside a folder named Archive, using Python?</p> <p>I have a piece of code like the one below. However, I am unable to create any folder dynamically.</p> <pre><code>it = shared_folder.get_items()
for i in it:
    if (i.name == 'Lithuansa'):
        #print('{0} {1} is named "{2}"'.format(i.type.capitalize(), i.id, i.name))
</code></pre>
<p>You can use the Python SDK.</p> <p>Starting at your root folder ('0') you can create subsequent subfolders using this API call.</p> <pre><code>subfolder = client.folder('0').create_subfolder('Folder 1') </code></pre> <p>This subfolder will then have an ID which you can then use to create subfolders in that folder.</p> <p>Additionally, you might want to run a check to see if a folder already exists before this is created. You can do this by listing the files and folders in the folder.</p> <pre><code>items = client.folder(folder_id='22222').get_items() for item in items: print('{0} {1} is named "{2}"'.format(item.type.capitalize(), item.id, item.name)) </code></pre>
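<p>Putting those two calls together for the year-folder case, a rough sketch (the Archive folder ID is a placeholder you would replace with your own) could look like this:</p> <pre><code>archive_id = '22222'   # placeholder: the ID of your Archive folder

existing = {item.name for item in client.folder(folder_id=archive_id).get_items()
            if item.type == 'folder'}

for year in ('2018', '2019', '2020'):
    if year not in existing:
        client.folder(archive_id).create_subfolder(year)
</code></pre>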
python|box-api|box
0
1,906,056
62,405,541
Web scraping BeautifulSoup (Python)
<p>I have a jupyter notebook script extracting text from a <a href="http://arhiva.zdravlje.gov.rs/showelement.php?id=8464" rel="nofollow noreferrer">webpage</a> and putting it into a dataframe. I need to get each line of the <code>("div",{"align":"justify"})</code> tag: the first line is hospital name, second is address, third is phone number, and fourth is url. </p> <p>I am iterating over the <code>&lt;strong&gt;</code> element, but this hasn't worked. With the code below I have only managed to get the first name plus the weird spaces after it.</p> <pre><code>from selenium import webdriver from bs4 import BeautifulSoup as soup import pandas as pd from urllib.request import urlopen as uReq myurl = 'http://arhiva.zdravlje.gov.rs/showelement.php?id=8464' #opening up connection, grabbing the page uClient = uReq(myurl) #put content into a variable and close connection page_html = uClient.read() uClient.close() page_soup = soup(page_html, 'html') divTag = page_soup.findAll("div",{"align":"justify"}) #iterate over 'strong' tag and put into list mylist = [] for tag in divTag: # print(tag.text) hospital_name = tag.strong.get_text() mylist.append(str(hospital_name)) print(hospital_name) df = pd.DataFrame({'address':mylist}) </code></pre> <p>This is what <code>mylist</code> looks like: </p> <pre><code>['Северно Бачки округ', 'Дом здравља Бачка Топола \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0\xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0\xa0 \xa0 \xa0 \xa0 \xa0 \xa0\xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 \xa0 ', 'Дом здравља Алибунар'] </code></pre> <p>Here's a sample of the <code>&lt;div align="justify"&gt;</code> section of the <code>page_soup</code> variable (note the spaces):</p> <pre><code>&lt;div align="justify"&gt;&lt;div align="center"&gt;&lt;hr/&gt;&lt;strong&gt;Северно Бачки округ&lt;br/&gt;&lt;/strong&gt;&lt;hr/&gt;&lt;strong&gt;&lt;br/&gt;&lt;/strong&gt;&lt;/div&gt;&lt;/div&gt;&lt;div align="justify"&gt;&lt;strong&gt;Дом здравља Бачка Топола &lt;/strong&gt;&lt;br/&gt;Адреса: Светог Стефана 1, Бачка Топола&lt;br/&gt;Број телефона: 024/715-425&lt;br/&gt;Званична интернет презентација: &lt;a href="http://www.dzbt.co.rs/"&gt;www.dzbt.co.rs&lt;/a&gt;&lt;br/&gt;&lt;br/&gt;&lt;strong&gt;Дом здравља Мали Иђош&lt;/strong&gt;&lt;br/&gt;Адреса: Занатлијска 1, 24321 Мали Иђош&lt;br/&gt;Број телефона: 024/730-236&lt;br/&gt;Званична интернет презентација: &lt;a href="http://www.dzmi.rs/"&gt;www.dzmi.rs&lt;br/&gt;&lt;/a&gt;&lt;br/&gt;&lt;strong&gt;Дом здравља Суботица&lt;/strong&gt;&lt;br/&gt;Адреса: Петефи Шандора 7, 24000 Суботица&lt;br/&gt;Број телефона: 024/600735&lt;br/&gt;Званична интернет презентација: &lt;a href="http://domzdravlja.org.rs/"&gt;domzdravlja.org.rs&lt;br/&gt;&lt;/a&gt;&lt;br/&gt;&lt;strong&gt;Општа Болница Суботица&lt;/strong&gt;&lt;br/&gt;Адреса: Изворска 3, 24000 Суботица&lt; </code></pre> <p>Thank you very much in advance.</p>
<p>Parsing this kind of document is quite hard (it seems that the document is not machine generated, but generated by hand).</p> <p>You can try this example to get all addresses into a DataFrame:</p> <pre><code>import pandas as pd
from bs4 import BeautifulSoup as soup, Tag, Comment
from urllib.request import urlopen as uReq

myurl = 'http://arhiva.zdravlje.gov.rs/showelement.php?id=8464'

#opening up connection, grabbing the page
uClient = uReq(myurl)

#put content into a variable and close connection
page_html = uClient.read()
uClient.close()

page_soup = soup(page_html, 'html.parser')

all_strongs = page_soup.select('div[align="justify"] &gt; strong:not(:contains("нема"))')

data = []
for s in all_strongs:
    out = ''
    n = s.next_sibling
    while n:
        if isinstance(n, Tag) and n.name == 'strong' and n in all_strongs:
            break
        if isinstance(n, Tag) and n.name == 'div' and 'align' in n.attrs and n['align'] == 'center':
            n = n.next_sibling
            continue
        if not isinstance(n, Comment):
            out += str(n)
        n = n.next_sibling

    # note: BeautifulSoup was imported as `soup` above, so use that name here
    data.append(soup(out, 'html.parser').get_text(strip=True, separator='\n'))

df = pd.DataFrame({'Text': data})
print(df)
</code></pre> <p>Prints:</p> <pre><code>                                                  Text
0    Адреса: Светог Стефана 1, Бачка Топола\nБрој т...
1    Адреса: Занатлијска 1, 24321 Мали Иђош\nБрој т...
2    Адреса: Петефи Шандора 7, 24000 Суботица\nБрој...
3    Адреса: Изворска 3, 24000 Суботица\nБрој телеф...
4    Адреса: Матије Гупца 26, 24000 Суботица\nБрој ...
..                                                 ...
340  Адреса: Требевићка 16, 11030 Београд\nБрој тел...
341  Адреса: др Суботића 5, 11000 Београд\nБрој тел...
342  Адреса: Војводе Степе 458, 11152 Београд\nБрој...
343  Адреса: Стари град Булевар деспота Стефана 54а...
344  Адреса: 38252 Шилово\nБрој телефона:\nЗванична...

[345 rows x 1 columns]
</code></pre>
python|web-scraping|beautifulsoup
0
1,906,057
35,715,218
How do I set the attributes of a SQLAlchemy ORM class before query?
<p>For example, using Flask-SQLAlchemy and jsontools to serialize to JSON like shown <a href="https://stackoverflow.com/a/31569287/680464">-here-</a>, and given a model like this:</p> <pre><code>class Engine(db.Model): __tablename__ = "engines" id = db.Column(db.Integer, primary_key=True) this = db.Column(db.String(10)) that = db.Column(db.String(10)) parts = db.relationship("Part") schema = ["id" , "this" , "that" , "parts" ] def __json__(self): return self.schema class Part(db.Model): __tablename__ = "parts" id = db.Column(db.Integer, primary_key=True) engine_id = db.Column(db.Integer, db.ForeignKey("engines.id")) code = db.Column(db.String(10)) def __json__(self): return ["id", "code"] </code></pre> <p>How do I change the <code>schema</code> attribute before query so that it takes effect on the return data?</p> <pre><code>enginelist = db.session.query(Engine).all() return enginelist </code></pre> <p>So far, I have succeeded with subclassing and single-table inheritance like so:</p> <pre><code>class Engine_smallschema(Engine): __mapper_args__ = {'polymorphic_identity': 'smallschema'} schema = ["id" , "this" , "that" ] </code></pre> <p>and</p> <pre><code>enginelist = db.session.query(Engine_smallschema).all() return enginelist </code></pre> <p>...but it seems there should be a better way without needing to subclass (I'm not sure if this is wise). I've tried various things such as setting an attribute or calling a method to set an internal variable. Problem is, when trying such things, the query doesn't like the instance object given it and I don't know SQLAlchemy well enough yet to know if queries can be executed on pre-made instances of these classes.</p> <p>I can also loop through the returned objects, setting a new schema, and get the wanted JSON, but this isn't a solution for me because it launches new queries (I usually request the small dataset first).</p> <p>Any other ideas?</p>
<p>The JSON serialization takes place in flask, not in SQLAlchemy. Thus, the <code>__json__</code> function is not consulted until after you return from your view function. This has therefore nothing to do with SQLAlchemy, and instead it has to do with the custom encoding function, which presumably you can change.</p> <p>I would actually suggest not attempting to do it this way if you have different sets of attributes you want to serialize for a model. Setting a magic attribute on an instance that affects how it's serialized violates the principle of least surprise. Instead, you can, for example, make a <code>Serializer</code> class that you can initialize with the list of fields you want to be serialized, then pass your <code>Engine</code> to it to produce a <code>dict</code> that can be readily converted to JSON.</p> <p>If you insist on doing it your way, you can probably just do this:</p> <pre><code>for e in enginelist: e.__json__ = lambda: ["id", "this", "that"] </code></pre> <p>Of course, you can change <code>__json__</code> to be a property instead if you want to avoid the lambda.</p>
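<p>For completeness, a bare-bones sketch of the <code>Serializer</code> approach suggested above (names are illustrative, and it only handles plain column attributes, not relationships like <code>parts</code>):</p> <pre><code>from flask import jsonify

class Serializer(object):
    def __init__(self, fields):
        self.fields = fields

    def dump(self, obj):
        return {name: getattr(obj, name) for name in self.fields}

small = Serializer(['id', 'this', 'that'])

@app.route('/engines')
def engines():
    enginelist = db.session.query(Engine).all()
    return jsonify(engines=[small.dump(e) for e in enginelist])
</code></pre>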
python|json|flask|sqlalchemy|flask-sqlalchemy
0
1,906,058
35,618,307
How to transform string of space-separated key,value pairs of unique words into a dict
<p>I've got a string with words that are separated by spaces (all words are unique, no duplicates). I turn this string into a list: </p> <pre><code>s = "#one cat #two dogs #three birds" out = s.split() </code></pre> <p>And count how many values are created: </p> <pre><code>print len(out) # Says 192 </code></pre> <p>Then I try to delete everything from the list: </p> <pre><code>for x in out: out.remove(x) </code></pre> <p>And then count again: </p> <pre><code>print len(out) # Says 96 </code></pre> <p>Can someone please explain why it says 96 instead of 0?</p> <p>MORE INFO </p> <p>Each line starts with '#' and is in fact a space-separated pair of words: the first in the pair is the key and the second is the value.</p> <p>So, what I am doing is: </p> <pre><code>for x in out: if '#' in x: ind = out.index(x) # Get current index nextValue = out[ind+1] # Get next value myDictionary[x] = nextValue out.remove(nextValue) out.remove(x) </code></pre> <p>The problem is I cannot move all key,value pairs into a dictionary since I only iterate through 96 items.</p>
<p>As for what actually happened in the <strong>for</strong> loop: </p> <blockquote> <p><strong>From the</strong> <a href="https://docs.python.org/2/reference/compound_stmts.html#the-for-statement" rel="noreferrer"><strong>Python for statement documentation</strong></a>:</p> <p>The expression list is evaluated <em>once</em>; it should yield an iterable object. An iterator is created for the result of the <code>expression_list</code>. The suite is then executed <em>once</em> for each item provided by the iterator, <strong>in the order of ascending indices</strong>. Each item in turn is assigned to the target <em>list</em> using the standard rules for assignments, and then the suite is executed. <strong>When the items are exhausted</strong> (which is immediately when the sequence is <strong>empty</strong>), the suite in the <code>else</code> clause, if present, is executed, and the <code>loop</code> <strong>terminates</strong>.</p> </blockquote> <p>I think it is best shown with the aid of an <strong>illustration</strong>. </p> <p>Now, suppose you have an <code>iterable object</code> (such as a <code>list</code>) like this:</p> <pre><code>out = [a, b, c, d, e, f] </code></pre> <p>What happens when you do <code>for x in out</code> is that it <strong>creates an internal indexer</strong> which goes like this (I illustrate it with the symbol <code>^</code>):</p> <pre><code>[a, b, c, d, e, f] ^ &lt;-- here is the indexer </code></pre> <p>What normally happens is that as you finish one cycle of your loop, <strong>the indexer moves forward</strong> like this:</p> <pre><code>[a, b, c, d, e, f] #cycle 1 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 2 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 3 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 4 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 5 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 6 ^ &lt;-- here is the indexer #finish, no element is found anymore! </code></pre> <blockquote> <p>As you can see, the indexer <strong>keeps moving forward till the end of your list, regardless of what happened to the list</strong>!</p> </blockquote> <p>Thus when you do <code>remove</code>, this is what happens internally:</p> <pre><code>[a, b, c, d, e, f] #cycle 1 ^ &lt;-- here is the indexer [b, c, d, e, f] #cycle 1 - a is removed! ^ &lt;-- here is the indexer [b, c, d, e, f] #cycle 2 ^ &lt;-- here is the indexer [c, d, e, f] #cycle 2 - c is removed ^ &lt;-- here is the indexer [c, d, e, f] #cycle 3 ^ &lt;-- here is the indexer [c, d, f] #cycle 3 - e is removed ^ &lt;-- here is the indexer #the for loop ends </code></pre> <p>Notice that there are only <strong>3 cycles</strong> there instead of <strong>6 cycles</strong>(!!) (which is the number of elements in the original list). And that's why you are left with <strong>half</strong> the <code>len</code> of your original list, because that is the number of cycles it takes to complete the loop when you remove one element from it in each cycle.</p> <hr> <p>If you want to clear the list, simply do:</p> <pre><code>if (out != []): out.clear() </code></pre> <p>Or, alternatively, to remove the elements one by one, you need to do it <strong>the other way around - from the end to the beginning</strong>. Use <code>reversed</code>:</p> <pre><code>for x in reversed(out): out.remove(x) </code></pre> <hr> <p>Now, why would <code>reversed</code> work? 
If the indexer keeps moving forward, wouldn't <code>reversed</code> also fail, because the number of elements is reduced by one per cycle anyway?</p> <p>No, it is not like that,</p> <blockquote> <p>Because the <code>reversed</code> method <strong>changes</strong> the way the internal indexer works! What happens when you use the <code>reversed</code> method is that it <strong>makes the internal indexer move backward</strong> (from the end) instead of <strong>forward</strong>.</p> </blockquote> <p>To illustrate, this is what happens with <code>reversed</code> when nothing is removed:</p> <pre><code>[a, b, c, d, e, f] #cycle 1 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 2 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 3 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 4 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 5 ^ &lt;-- here is the indexer [a, b, c, d, e, f] #cycle 6 ^ &lt;-- here is the indexer #finish, no element is found anymore! </code></pre> <p>And thus when you do one removal per cycle, it doesn't affect how the indexer works:</p> <pre><code>[a, b, c, d, e, f] #cycle 1 ^ &lt;-- here is the indexer [a, b, c, d, e] #cycle 1 - f is removed ^ &lt;-- here is the indexer [a, b, c, d, e] #cycle 2 ^ &lt;-- here is the indexer [a, b, c, d] #cycle 2 - e is removed ^ &lt;-- here is the indexer [a, b, c, d] #cycle 3 ^ &lt;-- here is the indexer [a, b, c] #cycle 3 - d is removed ^ &lt;-- here is the indexer [a, b, c] #cycle 4 ^ &lt;-- here is the indexer [a, b] #cycle 4 - c is removed ^ &lt;-- here is the indexer [a, b] #cycle 5 ^ &lt;-- here is the indexer [a] #cycle 5 - b is removed ^ &lt;-- here is the indexer [a] #cycle 6 ^ &lt;-- here is the indexer [] #cycle 6 - a is removed ^ &lt;-- here is the indexer </code></pre> <p>Hope the illustration helps you understand what's going on internally...</p>
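<p>Side note on the underlying goal - building the dictionary of '#key value' pairs - here is a small sketch that avoids removing from the list while iterating altogether (assumption: the input really does alternate key, value as described in the question):</p>
<pre><code>out = s.split()
# pair every '#key' with the word that follows it; nothing needs to be removed
myDictionary = dict(zip(out[0::2], out[1::2]))
# e.g. for the sample string: {'#one': 'cat', '#two': 'dogs', '#three': 'birds'}
</code></pre>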
python|list|loops|split|iteration
13
1,906,059
58,738,544
How to read and write from datatap using Tensorflow on BlueData?
<p>I want to be able to use BlueData's <a href="https://bluedata.zendesk.com/hc/en-us/articles/220223188-DataTap-Overview" rel="nofollow noreferrer">datatap</a> directly from TensorFlow.</p> <p>With pyspark, I can do something like this:</p> <pre><code>df.write.parquet('dtap://OtherDataTap/airline-safety_zero_incidents.parquet') </code></pre> <p>Note that I don't need to set up any libraries - it's ready to go out of the box.</p> <p>What do I need to do for reading/writing data over DataTap from Tensorflow?</p>
<p>As per the docs: <a href="http://docs.bluedata.com/40_datatap-tensorflow-support" rel="nofollow noreferrer">http://docs.bluedata.com/40_datatap-tensorflow-support</a></p> <pre><code>import tensorflow as tf import os from tensorflow.python.framework.versions import CXX11_ABI_FLAG CXX11_ABI_FLAG bdfs_file_system_library = os.path.join("/opt/bluedata","libbdfs_file_system_shared_r1_9.so") tf.load_file_system_library(bdfs_file_system_library) with tf.gfile.Open("dtap://TenantStorage/tmp/tensorflow/dtap.txt", 'w') as f: f.write("This is the dtap test file") with tf.gfile.Open("dtap://TenantStorage/tmp/tensorflow/dtap.txt", 'r') as f: content = f.read() </code></pre>
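<p>Untested assumption on my side: once <code>tf.load_file_system_library</code> has registered the DataTap file system, the other <code>tf.gfile</code> helpers should work against <code>dtap://</code> paths as well, e.g.:</p>
<pre><code># check for a file / list a directory on the DataTap (assuming the plugin above is loaded)
print(tf.gfile.Exists("dtap://TenantStorage/tmp/tensorflow/dtap.txt"))
print(tf.gfile.ListDirectory("dtap://TenantStorage/tmp/tensorflow"))
</code></pre>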
python|tensorflow|bluedata
0
1,906,060
58,753,854
Use python to check age of Servicenow ticket
<p>Hey all I am new to Python and I'm trying to do the following:</p> <p>I'm hitting a Servicenow API to get the date that a ticket was created and it comes back in the following format:</p> <p>2019-06-06 13:11:39</p> <p>I'm struggling with using datetime, date, strftime, strptime, etc to format the output above into something I can do math on. Basically I need to close any tickets that are older than 90 days, so the time in the output above can be discarded. I need to:</p> <ol> <li>Get the numerical age in days from the output above based on the current date.</li> <li>If that value is greater than 90, close the ticket.</li> </ol> <p>Here is some of what I have:</p> <pre><code>from datetime import datetime, date today = date.today() print(today) # Print for debug purposes #API query omitted..... getdata = response.json() for item in getdata["result"]: print(item["sys_created_on"]) createdstr = datetime(item["sys_created_on"]) created = createdstr.strftime("%Y-%m-%d") delta = today - date(created) print(delta.days) if delta.day &gt; 90: #close the ticket </code></pre> <p>....and here is the output:</p> <pre><code>2019-11-07 2019-06-06 13:11:39 Traceback (most recent call last): File "./getUnapprovedSIDs.py", line 32, in &lt;module&gt; createdstr = datetime(item["sys_created_on"]) TypeError: an integer is required (got type str) </code></pre> <p>I realize looking at my code is like staring into the sun. Any help is greatly appreciated!</p> <p>Thanks.</p>
<pre><code>from datetime import datetime created = datetime.strptime(item["sys_created_on"].split(' ')[0], '%Y-%m-%d') delta = datetime.now() - created print(delta.days) </code></pre>
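<p>Plugged back into the loop from the question, this could look roughly like the following (the ticket-closing step itself is a placeholder - whatever ServiceNow API call you use for that goes there):</p>
<pre><code>from datetime import datetime

for item in getdata["result"]:
    created = datetime.strptime(item["sys_created_on"].split(' ')[0], '%Y-%m-%d')
    delta = datetime.now() - created
    if delta.days &gt; 90:
        # close the ticket here, e.g. via another API call
        pass
</code></pre>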
python|date|datetime
0
1,906,061
73,318,272
Image can't be inverted by the "~" operator
<p>When <code>inverting</code> the image with &quot;<code>~</code>&quot; operator, it will cause the following error.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt test_bytes = b'\x8bE\xd4=\xacF\xdc\xbd\xfc\x02\x1f&gt;=\r\xba\xbd\x0c\x9a\xbd&gt;\x11tP&gt;E,\x10?\x9d(\x03?\xb0\x18\xfa\xbd-$\xd4=#P\\=k\xe9}?\xac\r\x93\xbe\x87\xe1\xfd&gt;\xff0e=C\xdf-\xbcCc\xe4\xbd?ff\xbf\x989\xbf\xbe5\x8ag\xbf\xaa_\x14?\xb7\xd6;\xbe\x9c*`\xbe\xf12\xe2=U\xc5l&gt;)7\xa3&gt;F^\xea\xbe\xc0\\8&lt;\xc5\xb2\x03?\x9c\x8e\x1a\xbf\xe5lV:\x8b\xbf\x98&gt;\x9e\xc5Y\xbf\x93\xec\xe2&gt;e\xd0\x81\xbe\xf8\x94[&gt;\x01S5=\xdb\xab\xb8&gt;\xef\x05\xb6\xbd.\xaaS\xbf\xfaS\xa2\xbe\x14\x0f0\xbe\xed\xaf\x8c&gt;/\xdc\x86\xbf\xb8\x8e\xbb&gt;*\x97\xed\xbeS@\x0f;$c\x82\xbe_\xd5\xb2\xbe)\xa6\xa1=\xe9\xe9\x04\xbe9J\x9a?\x99\xbf\x02&gt;\xcaA\x8d&gt;C\xe73?\xbbgI&gt;\x93\xf8\xea&gt;\x92\xbd\x10?A\x85\x93\xbf\x18Q&amp;?;\x80V\xbe\xe6\xaa;\xbf\x07\x02\xae&gt;C1\x89&gt;F\x12\xb5&gt;\x90\xf6\xbe\xbc\xb2\xf4\x95\xbeo\x0e\xae\xbe\x97\xdfV=\xef\xeb\xe2\xbdNc0\xbf@zx\xbfv\x86Y?\x1c\xc7l\xbf\xe7\x0ed?\x19Wk\xbf\xcb_.\xbd\x17\xdd\xb7=\xb7\xfc\xe1\xbd \x07\xb9=\xaa\xa3\xd3&gt;t\x8f6\xbe\x7fWC&gt;\xa6\x07\x11?iN\x9d\xbd\xa2g\x1d?\xef\xff\x16?\x9b\xde\xf3\xbc\xbdn\x14\xbf\xfe\x9dL\xbe\xb3W0&gt;\x8fqJ\xbf\xcd\xb8\x9e&lt;\xc6\n\xdf\xbeU\x18\x8b&gt;\x1e\xf8\x9c&gt;\x84&amp;\xb9\xbdK\x13`\xbel\xd0B&gt;\x88\xb6\x8e&gt;\xb8`\xa2&gt;2#\\\xbf\xb7\xdb\xaf=\x91\x12\x93&gt;\xbf\xcc.\xbf\xfd\x88\xe3\xbe\xc9\xab\xe6=|`a?3\xfb\x94&gt;\x9d\xb9\x0e?&gt;\xf0\xab&gt;o\xf3\x9e\xbe,\xb3\x11?`V\x1a\xbf\x849\xa8&gt;\xca\x04}?\x96{\xb5=\x0f\x80\xa5&gt;\'o\xef&lt;\x99\xd9\x81\xbe \xed\xa2&gt;\xa3j&lt;?&amp;a~?\xe8R\x05=\xce\xea\xb1\xbef8`&gt;\xc5 \x05\xbf\xb0\xc6\x83\xbeY\x88\x0f\xbfC\xff}\xbeU\xc3\x11&gt;0\x18&gt;\xbf\xbe\x82\'\xbf4\xe9\x93\xbe\x84\xc0\xa7&gt;\xfb\xf6\x8d\xbeI\xbe\xb8\xbd\xd2\x964\xbe\xa5L\xa0\xbe\x93\x1d\x9c\xbe\xac\xea\xb8\xbd\xd5\xf5y?6\x81&gt;?\x01I\xae&gt;\xdc\xd8\x11\xbfHW\x1e?\xdc_\x87\xbe\xd3\x17&quot;?`w\xb3\xbc\xa053&gt;,P8=P\xb50&gt;\x8b\xc4\xba&gt;\t\xbc%\xbe\x13Q\x82&gt;\xba.\x9e=\xc5f\n\xbe\xd9\xc9\xed=\xf7\xf0+?n\x1f\xe5&gt;7\xd1\xdb&gt;\xea\xef&quot;\xbe\xd1/3?x\x15*?.\xdcO&gt;3\x99\x90=8&amp;\xa0?\x9b\xdc#&gt;:&gt;\xd4&gt;\xa3\x9f\xd0\xbe\x16\xc6&amp;?\'@|\xbf\x18\x05\xf2=\x08\xc4~&gt;\x87XY&gt;\xd8=\x19\xbe2\xeb,?-\x08\xaa\xbde\x93\n&gt;n[\xce\xbem\xe0\xa2\xbe\xfe\x1e\xa0&gt;(\xd6?&gt;\xfe\xf9\x14?\xea\x0e\xc9\xbeh\x0c\x94?\xd1m\xc4\xbfS\x0c3\xbe\xb3\x9e@&gt;\xb5`\x82\xbe\x8e\x8b\xd4&gt;\x1c\xa4\xad&gt;\x96(\xfc\xbbp;P&gt;\xb0\xc6\xb2&gt;R\xeb\xc3\xbe\tj\xdd&gt;x\x95!\xbf*\xaa\x1a\xbf\xb8\xa7~&gt;z\xdc\x13?\xcf\xd3Y\xbf\xe5\xcd\x0b&gt;\x95\x12\xfb\xbek\xa5m\xbeB\xbd\x14?\xe6p_&gt;\xe6\xcb\x9b\xbee\xc1&amp;?U\x9d\xb0\xbfjL@?\xc0-N&gt;\x11\x9d \xbcU\xdf@?\xaex%&gt;\xee{\x08?\xa2\xc3k\xbf&lt;\xed\xe5\xbc{T\x96\xbc0&quot;*&gt;w6%\xbfPh\r\xbe\x12y:\xbf\xdd\xaa\xdd=\x17t &gt;\xb4c\xfd==\x84\x10\xbfl\'\x0c\xbd\x9a\x0f\x1d\xbe\xb2\x03\xec\xbe\xf0\xa6\xa3&gt;\xe2d\x00&gt;h\xb8\xfd\xbd7\xac\xa1&gt;J\xa7\xfa=\xe9\xab\x9e&gt;l\xb7/\xbe\xa3`\x05?\xa9\x1e\x85?\x05\x05*?&quot;\xec\xb4&gt;\x0c\xe8\xee&gt;\xbc\xbaS?i\x16d\xbd\xba\xed3&gt;\xd8S[&gt;\x1d\x7f\x94?\x084\xe0\xbd+\n\xb4\xbe\x89HG\xbf\xf1\x93\x0f?\x987\n\xbf\\\x8fs?\xf6\xbe ?\xa8\xca\xf3=\x84\x07U\xberJ\x92\xbe\xc0\xad\xac&gt;\x12MY\xbf\x12{\xf4\xbe\xbc\xc6.?\x9f%\xd8\xbe\xa8\xaaL&gt;y\x99\x95&gt;\x8f\xc0\x07?\xa5\x84\x89\xbe\xd6\xb1\xa3\xbeT\xec\x0c?\xddZ\x88&gt;\xf5\xd3*\xbe\xc1\xa3\xbd\xbd\x853\x91\xbe\xca\x0c\x14&gt;\x11\x18x\xbe:q]\xbf\xfc\xb1\x82\xbf 
;\xe6&gt;\xdc;\xcc\xbd\xe9X\x05\xbf\xcf\xcd\xb9\xbfe]t&gt;\xf0\x84\xc1?\x8c\xffM\xbe\xc8\xb5 &gt;\xa5\x9d\xdd=\xe8c\r\xbf\x89\xd8\x1f&gt;\xaf\xe8P\xbfI\\\x83\xbdGH\xe3\xbe\\3\xbf&gt;\xa2\x1b\xd5&gt;\x0c\x1a\x04?\x18\x17\xad?\xf0\r\xc6&gt;\xa2\xc2\x9f\xbd\xe9M\x80?\xa62\x95\xbd9H\xb9&gt;w,\x1a\xbe\xb0ch?\x89&quot;m&gt;\x8ex\x12=\xc2[Z=\xc2\xa3\xc3\xbf\x86dB&gt;\xff\xee\x1e\xbfi\xa4\x9e&gt;(P9\xbe\xa5\xf0\x1d?=\x17U\xbe\xe9\x0e\x18=\xea\xb6\xaf&gt;\xb8t\'\xbe\xca\xbe[\xbf\x7f\xd9\x8f=&quot;\x9aL\xbe\xa4\x10?&gt;2\xa2\x9b\xbeb\xbcR\xbd\x15^\x9a=)\xb8\xe7\xbdJg\x03?\xe6\xf6g&gt;#\xde#\xbe\xca\x92\x19\xbe\x8d\x95\\\xbe\x97I\x12\xbe\x9eF\x03\xbf\xc04\xbf=u\xa9\xe9\xbe+\x1b\xb1&gt;\xff\x8bB\xbd\x16V\xde\xbe\x82\x8b\xe4\xbe\xb0\xb8\x88&gt;\x19\xb3\x9b\xbc\x8f\x82\x90\xbc\x7f\x14\xc2=G{\xe5&gt;\x90\x1b\xbd=\xcf\x14c\xbf\t\xf7O&gt;\xe2\xbc_&gt;\x1e&quot;G\xbd\x14]@&gt;\x15G\xce\xbe2\x8a\xa7\xbe\x9b\x033?\x9f\x11G\xbfT\x0e\xbf\xbe\xe2\xc5\xf9&gt;\xb9\xcd\xd5\xbc\xd6#\x96&gt;\x83W\r&lt;!U6&gt;\xe2\xcd\xfa=E\x80\x81\xbe\xa8\xaa\x83\xbdk=B&gt;^\x03\x89&gt;E\xeb\x13\xbe\x07\x8c\x8f?\xbe\xbb\x12\xbe\x9c.\xaa\xbeY\xd6\x04=\xd9\x8c\xa7&gt;7s\'&gt;\xc0\x18\xb0\xbe\xe0)z?\xa1^\xad&gt;1\x96\x8d\xbea\x8c\xba&gt;\r\x04\xb1&gt;\xdf\xa4C\xbf\x05AS?\x07+\r&gt;\xb7L\xa2\xbfw\x83&amp;?\x9e]Y\xbf[+\x15=\x84\xfb\x0b\xbe\xda\x12\xe3&gt;\x9b:e?\x82\x07=\xbe\xe7{\xcd&gt;\xb2\xbae&gt;\xa0Jj\xbf\x86Y_\xbf\xdb\xd58&gt;:a\xcf=\xe7[\x13\xbe\\\x18*\xbfFE\xd1\xbe\x91\xfe\xa9&gt;\xd1\x006\xbfX\x1f:?\xc7\xfd\x87\xbeR\xf3\n\xbf\t5.?\xe4\xe0!?^\xd8.\xbf0\x0c\xb2\xbeK\xf06?I \xab\xbd\xcd\xf7\x15?\xeat\x97\xbe`_\xc9&gt;\r\x88i=\xa5v\x0f?\x91\xe9\xa3\xbeYB\xdb&gt;\xebG6?\xcd\xb2\x06?\x9f\xc7\xc3&gt;\xb7Qg=A\xff\x10\xbf\\^\xc6\xbe\xbd\x83\x11\xbf\x8b;\xee\xbc\xe9c\x88\xbe+V\x92?\x06\xc2p&gt;\xa1\x87\x15\xbfr*\xff\xbe\t\xd5\x98&gt;\x8b\xee+\xbe&amp;H_?\xa6\x009&gt;\xd6\x039\xbf0\x85\x16?\x9d\xd1\x90\xbdDl\x99\xbe\xa5.\xa1&gt;\xa39Y\xbe|\x87\xa5\xbe?r\xb8\xbc\xf2c.&gt;#H&quot;\xbf\xde\\\x89&lt;\x08\x85n\xbe\xe7\xe3\x1e\xbfG\xfc\xad\xbe\xcf\xfc\xfb&gt;\xc8a\xd5\xbe|!\xc8&gt;|5W\xbe\x85\x91\x04\xbf\xaa\x15a\xbeN\x7f\xae\xbe\xe4)=\xbe\x97\x02w&lt;S\x0c\x11=\xdfh\x8a\xbe\xaa\x92\xb4&gt;\xc2\xac\xdd=\xe5\xe1\x1b?\xed\n]&gt;\xf75f\xbf\xc5d\xa4\xbeF\x1f\x9e\xbf\x06\xd8)&gt;\x9c\x7fQ\xbf\xee\x95-?\x94Mv=\xc5\xa8\xff=\x04\xc0\xd4\xbcm\x88\xb3\xbe\x02a^\xbfG&lt;\xa4\xbe+\x16\x8b\xbepl\x83\xbd\x99\x9a\xb4&gt;\xd3t\x02&gt;\x85\x00\xb0=\x89\xcc\xb6=\r\x03\xbd&gt;[\xa4\xc1&gt;\x81\xb6g\xbes&quot;)?\x92\xce\x1b\xbe(9 
&gt;+\x02u\xbeK,\x10\xbfr\xf6s\xbfe\x96\xed\xbe\xc4\xb3Q\xbdE-a&gt;\x08A\x86?\xa3\x92\x82=\xae\xdf\xb2\xbd\x1f\xea\x0b\xbf\x07\xcb&gt;\xbf)\xf6\xeb&gt;\xf2o\x07\xbeZ4\x85?\x9fN\x8a\xbef\xb6\xb0\xbcy\x12{\xbd%\x0e\xba\xbe\xcd&quot;&amp;&gt;\x1d\xff&gt;\xbf\xc3P\x8c&lt;G\xbf\xb4=\xbd7\x9a\xbe\xe6\xc1\x89&gt;c\xecY?C\xc5\x80=XHD\xbf\xfe\xef\xf6\xbe\xc9\x95\xa4\xbd\xa0\xef\xd4&gt;R\x9a\xb5\xbeVz\xb4&gt;\xb6\x94\xe3\xbc+\xad\xc6&gt;\'\xa5\xd5\xbe\xab\x90\xbb\xbe\xc2%d&gt;\xcc|-?B\x96,\xbe\xa4J\xa5&gt;\x92j\x8e\xbdZ\xb2\x8b?A@\x88?\xc5\xdf\x8e&gt;\xef\x1dH=\xb9\x8e\x13&gt;\xdbK\xcd&gt;\xe2n\x96&gt;\xef(t\xbe\xb0\x7f\r?\\\xa8\xe3&gt;\xedz\x0c\xbf\xa5a\xa3\xbe\xc2\xb4\x80&gt;W1-\xbeD\xb5]&gt;\xad,\x8f&gt;m\x00\x17?XX&quot;\xbe\x8c\xaf\x04\xbf\xe6\xd0\x1a\xbe\xd8\xbb\xeb=\xfe_\xb7&lt;\xff\x12\xcf&gt;;\x9bs&gt;\xd4\xb5\x12\xbe&amp;\xc8\xb7\xbed\xcb\xd2\xbd\xcfb&quot;=\xee.\xf6\xbe\x01\xb5\xf7&gt;:I\xa7\xbd\xb1\x93\x01\xbd\xad4\xe1&gt;\x94\xcb\x88\xbd]\x9d\x80\xbf1\xac\xb9&lt;q\xccL\xbeUsv\xbe\x8a\x9b\xc6\xbc\xa2U5\xbf\xc7\xdb\x1b?,\xd1\x93&gt;\xc7\xa5\x1b&gt;\xaf\xba\x06&gt;p\xb1P?h\xb8\x01=\xee6\x0b\xbf&amp;`)&gt;!\x11\x11?\x9a\xd51?PG\x86&gt;\r\x9bz?\xf6\x81\xdb&gt;!\xc8\xa4&gt;L\x96j&gt;.\x0b\x91&gt;\x94\xec!&gt;\xef4\xcd\xbe\xdf3??\x1a\xff\xa2\xbe|[\x8f&gt;\xac)\xa8\xbe\x89H\x8a?\x85&quot;\x05\xbf\xf7[\xa7&gt;.\xeeW?\x91\xf2\x90&gt;\xa9\x1a\xb2=0\xf1\xe5\xbeG\xbf\x8c\xb9NH\x96?,)\x8f&gt;\x0f\xb1\xd6\xbe4-9&gt;1pH?\x1c\xbd\xe6&gt;!\x82-&gt;\xfe\xa0\xf6\xbe\x01\xfd\xc5&gt;\xd020\xbf\x98\xa8\xb3\xbf\xa3\xad\x14\xbd\xf6\xa08?\xcc\x0c\xa7B\xab\xd1\xa8B\xe4$\xa8B\xc4\x999C\xd3\xa69Coq9C\x19\xbf\x1fC\xeb\x8d\x1fC\xfc\'\x1eC\x16I\x17C-\xbd\x17Cn\x8b\x16C,\xbfoBl\xc8pBJ\xb0nBb\xbc\x12B\xb5\x00\x0eBF&lt;\x11B\xa4g\xe4\xbe\xe1\x15V?\xfd\x00\xa5\xbe\xec\x8fb&gt;R=\x04\xbf\xb7u\x98&gt;L\xda\xcf=#\xfb\x1c\xbf\x1672\xbfp\xb6\xbd&gt;\x06w\x91&gt;\x8f\x0c\x87=\xeb\xd3n&gt;\xc9\ri&gt;\xc8+\xee=\xd7H\xb6=\x04\x01\xa6\xbd\xb7r!\xbf@#\xe3&gt;oL\x04\xbfr\xe5?\xbee\x1f\x88=ZM\xe7\xbeg\xf1\x0c\xbf\xa9\xcd)?\xcf\xd5\x81\xbd\xb1\x0b\x19\xbf\x162j&gt;[Q\x0e\xbe\xea\x8b\x94?\x15\xc9\x92&gt;\x9f\xa0\xa8\xbe\x0f~\x10&gt;\xda&amp;\x8d=\xf3\x18\x93\xbf\xdf)\xd8&gt;9@\x00?\xa2|m\xbe\xd3\x1fG\xbfE\x11l\xbd\xe1\x88\x0c?\xa7\xed\xa0\xbe\xdeY\x07&gt;\xac3\x1a\xbe\xee\x1f\xff=$-H\xbeg\x04\x83=CP\xb5\xbe\x02\x06&quot;&lt;\x10j\x8a&gt;\xdf7\xf3=\x1eTl&gt;\\\x88K&gt;P\xbf\x17\xbe\xcb2\x8c&gt;\xf6v\xcc&gt;k\xb8}&gt;gnb\xbfK\x0f3\xbf&lt;uI&gt;&gt;\x98\x90&gt;+z\xe0&gt;\xd5\xdfE\xbf\xcc\n2\xbe7\x1e\x02\xbf\xd3\x91{\xbd\x04*]C\x92\xc0]CMv^C\xd2`~C]\xfb}C\x81\xb4}C\x90\xf6}C\x89*\x7fC\xd1\xf8}C1Y~C\x95\x80}CO\x91~C\xf3&quot;~CsF~C_\xeb}C\x0cCqC\xe8\xe6pC\t\xe3pC\xad\xe8EC&lt;\xa9EC/XFC\x0e:FCK;FC\xcb\xc9EC\xcd}FC\x1a\xcbFC\xe9\x89EC\x0b\x06FC\xc3\x1aFCB\xc7EC\x01\xa2EC\x9ecFC\x12\xd5EC\xa9\xe6ECO&gt;EC\x1f3FC 
8FC\x1e\xd9EC\xefjFC\xd7\xfeFC\xe75EC\xc2:ECNx*Cm**Cc\x17*C\x03\xf1QB\x99\xceKB\xefyTB\x9d8\xc8\xbe\xa6\x9d~?\xcc\xf1\x93\xbe\xe5\xceJ&gt;\x1cM\x83\xbb\xba\xdaF&gt;VxC?Y1\x1b?\x94\x9e\xb6\xbd_MK&gt;\xf7\x0b\x89&gt;&quot;\xe3\x07\xbf_*p&gt;\xf5/\t&gt;1\x92\x81\xbf2\xd1\x1b\xbd\xc9;m\xbf\x84\xe6I\xbd\xfc|\x8d?1\xc6h&lt;o\x0b\x19\xbf\xf9\xee\xa4\xbe\x14I\xa8\xbeL7\'\xbfm\xb9\x9b=\xdd\'\x84\xbd\xaf\xa5\r\xbf\xb2\x80\xed\xbe\xb5\xc9\x0e&gt;\x82\xe6\xe1\xbd\xb3\xa3\xdb\xbd)\x1f\x7f\xbfa6&quot;?s\x11\xed&gt;\x8dR.&gt;Co\xea\xbd6\xd9\x85B\xcc\xe7\x86B\xd0\xf9\x85B\xcd\x94\xe6B\xbd\xb4\xe3B\xcav\xe5B\x82%\x90B\xb3\xff\x90BDS\x91B\xdfZ\xe3B\xf8\x80\xe2B\xc0\xa5\xe3B;\xd6&quot;C\xe6\x99#C\xdb\x97#C\xafnbC0(cC\xa8\xfdbCD\xaf}C\x95\xf7~C\x1a\x85~CZ\x1faC\'F`C\xb7\x99`C\x129}C/\x08}C\xe2|~C[\xdd}C\xdf\xc4~C+\xae}C\x9a\x81}C\x9c}~C\x19\xe8~Cm|yC\xaf\x0bzC\x878yC\xab\xa8dC+\x1aeC{\xb7dC\xa9t~CA\xda}C\xef\xc8~C:\xdd~C\xaf\xee}CVc~Cyi\x0cC\x949\x0cC\x98\xf3\x0bCD(\x99&gt;-#\x85&gt;\x97\xb9\x80?`\xef\xcf=\x05&amp;\x92&gt;E\x0bx=\xc0\xb5\x8f&gt; n\xa8&gt;\xb9R\xaf\xbe\x1c\x88\x10\xbf\xd5e\xe6\xbd\xa3\xd6_&gt;Nz\xcf\xbe\x9c\xb3E&gt;\xe4\xdf\xcd&gt;\xc5-,\xbf\xf5{U&gt;\xffA\x81\xbf\xc0\x9b]&gt;Y0\x98\xbdU\x01\xc8\xbf&amp;\xc5r?\x9f\xa7\xcf=\xce\x1f\x15&gt;\xae\xd03&gt;\xe4:v&gt;\xd9\xa1\x8a\xbe\xe8\x8e\t\xbfe\xa5\x08\xbf \x98\xc1&gt;t\xc2:\xbf\x08\xd7b?\x1dA==\x1ck\x00?\xceU\xeb&gt;\x84\xad\r=\x05\xd7\x8f?X\x96\x82\xbe\xd9\xd18\xbf@31\xbe&amp;7\x05?lpw&gt;o\xbe\x17\xbe@*\x95\xb9\x1d&gt;V&lt;Z\x05\xf2\xbc$9\xa3\xbf\xc7\xf2\\?U\xff\x81\xbe0\x17C\xbf\x03\xe71\xbf\xd2\xb6\x89A/\x1f\x81A\x9b[\x84A\xdf\xd6\x82B\xac\xfe\x82B6\xb0\x85BZ-eAH\xb9mAt\x11WA\xe86\x85B#\xac\x86B\xbb:\x87Blm\x85B\xdb\x8f\x85B\xf2V\x86B,\xf8\x86B\x94X\x84B\xa5\xb7\x86B PoB\xf6\x98nBIYjB\xcd\x9d\xa1A\x7fW\xaeAZ\x1a\xa7A\xe6?lC\xbfSlC\xe5\x82kC\x8e+~C\x89\xa2~C\xef`~C\xf8\x1d\xd3B\x1e\xf4\xd2B\xbf^\xd3Bi=\xe6\xbebiZ?W\xc2\xfb\xbe\x1c\x11f\xbetZ]\xbe(\x9f\x93\xbf\x9910\xbe\xe1$\n?\x07\xe4\xad\xbe\x8d9\x83?\x12A&gt;\xbfn.$\xbft\x82\xd1\xbe\x8c\xd9R?&quot;\xd9\xd9=p\xf7\x1e?\xe5\xebX?\xa5tJ&gt;\t\x91q\xbd\xd4p\xd9\xbe\x96\xf6N\xbez\x8c\x97\xbe\xe8.Z\xbf*\x02]&gt;d\xdb\xd0\xbe\x84\x91\xb6\xbe V\xa7\xbe\xa7\xa8i&gt;\xca\x88&quot;=\x99\xe4\x0b?\xceK\x80\xbf_\x95k\xbd\xf2Gy&gt;\xd1\xde\xc1&gt;2\x1a\xf1\xbd\x11\x1c\xed\xbe%m\x1f\xbe&quot;\xf2&quot;&gt;HX\xe4=\xf5\xa4\xc2&gt;\x91\xf7\x8d\xbe\x16\xec\xcc&gt;\x02\x8bg\xbe\x03n\x84&gt;\xbe\xca\x94?\x19\x80\xe0\xbe\xe1\xd9\x17?q\x15=?\xdf\xa4]\xbd\x896\x1d\xbeM\xe5h&gt;N\xf7\x07?i\x9c\xaa&gt;\x01\xef\x15?\xc6V\xcb&gt;\x8c\x1cv&gt;\xe5\t\x9c\xbe\xedz\xee&gt;w\xc5&lt;&gt;\x1a\xed\x99\xbd\x1a\x1e\xd9\xbejY4\xbf\xcb-&gt;\xbe\xa7$\xaf&gt;\x93\x99D?T\xe4\x0b?\xa4\xb1\xbb&gt;\xa2E\x88\xbe\xfc\xcd\xdc\xbeF\xe9/\xbfH\x08\xc6&gt;)\x8ek?\xb1\xde\xa5BK\x10\xa4B9w\xa5B\xe4*}C|\xcb|C\xd4\xc4|Cy\xe1PC#\xe6PCp\xb3PCQ1\x8dA\xae\xff\x90A\xcd\x9e\x8cA\x83\xa1\x19\xbf2\xa1z\xbf`\rU&gt;z\xc6\x05?=\x89\xb9&gt;\x85\xb9\x8c=~\x18\xc6&lt;`K\x89&gt;\x93\xd2\xf4&gt;\xae\x04p\xbf\xcd\x0e\xcb\xbe \x8f\xa2\xbf\x0b\x15\xb1&gt;O\x9b\xd7=\x82\x04\x04?*\x8b\x04\xbf\xbd])?I\xf3\xf5\xbeTWT\xbf\xb8 
\x14\xbfDc,?\x17\xd9\x17?\xf6\x1b`?\x88&amp;K\xbd\x07\x82\x00&gt;8q?\xbd\x85\xca\xf0&gt;\xbd\xec\xd4&gt;9q\xa0=H\x0f\\&gt;X\x8d\x83\xbd\xae\xe0\x04\xbdG&amp;\x0b?\xf1\xd4W\xbf\xa1\xbdY\xbd\x80#r\xbe\xd0\x7f\xe3&gt;\x8a\x19\r\xbf*w]\xbe\xc6\xf6\xc6&gt;.8S\xbe\xb7\xacM&gt;\x1a\xd2\x11\xbf\xe7\xe9\xf4\xbeH\x18=?e@#\xbfJ=c=\x0f35?\x03G5\xbf\x13E!&gt;\'\xe3S?\xb7/\x93\xbd\xc6\xf00\xbf_L\x16&gt;t2\x01?h\xfe\xb8\xbe\x9c\xaa\xa7\xbet\x96\xd4\xbd\x82\xfe\x90&gt;]\x97\x84&gt;\xbe\x18F;\x93\x0c\x1a?\xb7\x1e\xb2&lt;\xa7n\x06\xbf\xaa\xf8\'\xbf\xa9\xe2q&gt;\x88\xc8\n?\x95R\xf5\xbe\xa9\xc7\x9f&gt;\xcd\x07\xb4A\xc4P\xb3A\xfe\xc4\xb3A\n\xa3hC\x9a\xc9hCH\x16iC\xb7\x0f\x7fC\x83h\x7fC\x8c\xf6~CSG\xa7BMa\xa6B\x80&gt;\xa6B\xef^+\xbf8O\xcc\xbd\xb4=\xd3\xbd\x97\x1d\xae&gt;k%\xc2\xbe\xee)\x0b\xbd\x1a\xf8\x85?\x83H\x86\xbf\xd4\xc7\xdf&gt;c\xd3\xd3\xbe\xa4\xf8\x9f&gt;\xdcL\xc8&gt;\x06\xb0\xa5\xbe\xc9\x02\t\xbe\x8c3\xd1\xbdXD\x9b\xbe\xb7\x8e\xd1&gt;\xc6V9?\xe8\x87\xa8\xbf\x103\x1e?\x81\x9e~\xbf\xd4b\r\xbf\xf9\xdco?\xef\x99\x19?\txq\xbe \x82\x00\xbfAb\x18&gt;w\x97C\xbf\x82\xa5\xe9&gt;\xd0d\xd0&gt;\xd1\xbft=\xe6\xdd\xa6:\xd5\x96\x06??\x91(\xbf\xb4N\xad\xbe\xa7\xf6\xd8\xbe5&lt;-\xbf\xba\xbe\x9b\xbc\xb7q\n&gt;\xecL\x1d=\x00\xca4\xbf\xa3f{\xbe\xfe\xc6\x19\xbe\x1c\xb1\x1a\xbf\x08\xaeB\xbf\x92\xa98?\x06\x94\x1c&gt;r\xd9\xe7\xbb\xd8P\x0b?7o\xf8\xbeS\x90C?&gt;\xf9\x17\xbf\x10\xf2\x1c?\xd0U.\xbd\x8fb\x00\xbe\xc3&gt;Q?\xe1\x14\xb4=\xdd\x07j=\x89\xf1\x9d&gt;q\xf7\x8b\xbf\xd2\xc4\x93&gt;\x94sL\xbfXO\x1d\xbf\xb8:\xfd\xbe\x1b\xebH=p\xd3/?=\x98\x0c?O\x1cM?\xf99\xc5&gt;\xcfy\xfd&gt;\x8c\xe7\xb6\xbd\x12O\x0e\xbfR@\x00C\xaf\xeb\x00C&gt;\x0b\x01C\x1fM}Cpo}C`\xb7}C\xd2OnC\x10xmC\xe9\xa5mCv\xad1B\xbc%-B\xf3\x8f/B\xea\xe3\x87\xbed\x89\x1d\xbf\x96%\xb3&gt;\x8a\x8a#?h\x10\xae&gt;\xefx\x12=\xfb\xcb\x91\xbe\xe1\xcf\xcf&gt;A\x96%?I \xdf\xbdbBO\xbfj\xaeb\xbf=\x0e\xcc&lt;\x18\xb4\x87&gt;;c\xef\xbe\xaf\xf5\r\xbf\x1b\xe1-\xbffy\x8c=9*\xaa&gt;rr\xb1=\xb9\xef)?\x18\xab\xd4\xbec$\xd6=\x85\xaco\xbd\x0b*0?p\x99/\xbe\x99\xb4\x0c\xbe\x1e\xb94?\xad\x05P&gt;O\xda9\xbd\x8d)C\xbf\x92x\xc6&gt;&quot;k\x95?:\x9e+&gt;\x08\xdd&amp;\xbf\x82\x9c\x83\xbeJ\x8e\x01\xbcQ}\xb4&gt;n\x82\x1c?\x86#t=\x1f0\xd8\xbe\xa8n\xa3\xbf.3\x1b\xbf\xb9\xc6\x07\xbfq\xb0\xad=)\x9cm\xbe\x93^N?o}F&gt;\xcd\xcf\xd2&gt;&amp;\xc4+\xbd\xc9\x05\xc8&gt;\xb3D.\xbe\xec\xbc\x01&gt;\xa6\x15\xd3\xbd\x1a\xf7#?x\xf3\x13?J\xb3\xea&gt;\x97\x02&gt;\xbe\xc3\x81\x1c\xbdu\x01r\xbe|\n/\xbd\xb7\x84\x1d?\xbb\x0e\xf8\xbeO\x8cp?\xdf\x91b\xbfw\xc0*\xbea\xbe)\xbd\xd0F;\xbf\x8d\xa3\xba&gt;\x80\x98hBb9oBWFlB\x1eDyC\xfa\x87xC\x8b\xddxC*D}CK\xce~C\x18\x99~C\x80\xe2wB\xc2\xd4zB3OvB[)K?\x0b\xa0\x12&gt;\x82!\x15\xbfB 
\x00&gt;\x00t\xca&gt;G\xb7+?\xdf\xfd\xf4&gt;\xe7\x00\xaa&gt;wZ\x83=L\xd8\x8b\xbe\x0e\xc5\xcc\xbe\x9d\x7f==\xd4KE&gt;\x02\xf4!&gt;\xbe\xb0%\xbf\xbf\xd8\xc8\xbe\x85&gt;\xfb\xbe\xeb\xf9{?.\xdb\xce=@\x84\x1d\xbf=\x8e\xd1&gt;\xd8\xe1\x8a\xbf\xcc\x95o=\x9aK\x85?&gt;\x151?\x07\xac\x03\xbf\xcb\xe2\x17&gt;\xae).\xbe\xa4=\xf7&gt;\xb8\xf7(\xbf2\xd3\xe1\xbd\xedw\x9c=\\\xda\x8d&gt;\xcc5\xe7&gt;g\xb8\x15?\x91Vb\xbf4r\x80?\x8e\x03\xe6\xbeU\xf3\xde\xbe\'\xb4\x85\xbf\xec\'\x98\xbf\x0f\xe3\xba\xbe\xa3_|&gt;+X\xb4\xbe\xaf\xdf\x1c?M\x9d\n\xbf\xa1\x86\xe6\xba\xae\rC?i\xf5U&gt;\xa0Ab&gt;\xda\xb9\xc1=\xa7\xc2\x81\xbe\xcd\x9a\x06\xbf\xcb\xa9\xb5\xbdd\x15\xd3&gt;ed\n&gt;\xa3\xbbp\xbf\xb7\xfat&gt;\xa3\xe6\xd7\xbd\x82\xca\x98?6x\xfd\xbe\x9d.O?\x8d\xb5\xa1&lt;\xd6\x02\xa3&gt;.\xc5q\xbe\xcd\xd7F\xbe\xd8\xcbj\xbe\x0b\xbb\xc0&gt;\xc8.\xa7&gt;i\xe4\xd0\xbe\x00\xcf\xb9&gt;)\xffG=\x9a\xc7\x04C\xa1\xee\x04C\xf17\x05C,\xd3}C\x12\xf2}C\xa8-~C\xcf=:Cj\x7f:C\xda\xad:C\xb7i\x8c@\x8c=\xc0@\xc0\xf4\x97@\xd4K=\xbe\xd3\xa2\xdd\xbd\\o(&gt;s\xcf\xb5\xbf\xd48\xcf\xbe&lt;\xbf\x83\xbfU\x97\xf8&lt;\xfa\x89\x02?\x81%\x7f\xbd$\x7fh\xbfQa\x05=\xb2\x0c\xb9\xbce\x9d\xa1?\xb4\xf4\x02?\xe1&lt;\x7f&gt;;\x8f&lt;\xbf\xa6\xb9\xd9\xbe8\xe9B\xbf\xaa\xc3\xa0\xbf\x85\xb4T\xbd =\x01?\xf1\xb6\xc4&gt;\x1d\xf1\xbb&gt;r\x80\xb5\xbf0i\x06\xbf\xc1\x10t\xbe\x9f\xd4\x92=\t\x19]\xbf2\xe3\x1e\xbe\x08\'#\xbf\xd3K\xf2\xbc\xfep\x96\xbe\xa6\xb7Y&gt;&lt;\r_?\xd6j\t=Yy\x07\xbf`\x91e?2\xe2\x1d&gt;r\r\x17\xbe\x9c\xfe[\xbd\xd6\x88\xe8=\x1c\xea\xcb\xbe;\xf7!?j\xc0(\xbe\xb8\xc1\xb6&gt;Uf\xf8&gt;\'\x05E\xbeFA\xc8\xbe\xab}\x8f\xbc\x85\x089&lt;\x07\xb9R?\x023\x19\xbf3\xcd{&gt;\xd85\x07&gt;\x1a`\x02\xbf3\xb6\xb6\xbd&amp;\x94u?\xedY\x8b&gt;\x0e\xa8\x82&gt;}s\xe4\xbd&lt;\x87\x8d\xbe1\x8cy&gt;G\xe9E?\xd3J\x87\xbe\xc2\x0f\xc1?\xea\x1bx\xbe\x95]\xea\xbe\x00x+\xbf\xac\xd1\xb1\xbe\x92\xe5\rA\t\x98\x0fA\x1aL\x10A\x8fOMC\xb2TLC\xc5\xa4LC\x06vxC\\\xc0xC(\xaawC]shB\x95\x0ejB\xda\x0biB\x89.\xa1\xbf6\xcd\xc6&gt;\x8by\x84\xbf\x05\xfa\xd5&gt;\x8b\x1aN\xbdz\xd9&amp;\xbd\xbf\x13\xda&gt;\x9cke?\xe4gl&gt;\x85\xa0P&gt;\xfe\x95\x9f&lt;\xa3\xab&amp;\xbf[^E?c`\xa1\xbe\xe8\xbc\xa6=E\xd3\xe3&gt;\xa7\x7f\xd0\xbc\x89FI\xbd_\xba\xa8\xbf5?Z\xbe\x0e\xban&gt;\x8e\xf1\xf0&gt;\x9f\xe8\x05\xbdQ\x1a\x8b\xbaJ,\x04?\xcd\x0f\x90?\xd1\xd8\xa1&gt;\xa6\xb3y&gt;\xfd^&quot;&gt;5\xb1&lt;\xbe\x0c\x08\xce=:l\\&gt;\xd0\xc8G\xbf\xb8\xe2d?\x85\x02B\xbf1\x05z\xbfP\xf6\x1e\xbe\xdc(\x06\xbf\xcde\x0f\xbd\xf4B\x1e\xbe\xa3QJ\xbd\x04Z\x06\xbf\x03L\x00&gt;b\xbf\xa3&gt;\x026\xf0&gt;\xfe(0&gt;\xc5\xfe_\xbe\xae\x05/\xbf\xd8F\xc5\xbb\x05\xf1\xad\xbe\xb4x\xfe\xbe\xbf\xfe\xe3=\xd6\x9a*\xbfin&lt;&gt;\x90#!&gt;\x04i3\xbf\x8c;-\xbf\x95\x8e\x13\xbf/\xf1j=H\xb7\xd4=\x1d\n\xfe&gt;`\xa1\x0e\xbf\x1f\x8f\xd7=\xd0C\xfd=\xb3G.\xbe\xef\xc3\xf4&gt;\n{\x0c&gt;\x9d\xff\x10?l\xa2\x8b&gt;iG\x18\xbfKE\x81&gt;\x94\xd0\xa5\xbei\xdc\xfbBw\x8f\xfdB\xb7@\xfdB\xd8\xc9~C\xb0\xb8}Cp\xd9|C\x8c&lt;6C\xa2\xe75CG\xec5C@\x11I&gt;j\xe7\xec=\xd5\x87;\xbf%7\x9f&gt;\xebG\xb5\xbd\xbf&lt;\x1f\xbf/\x98\x1b&gt;\xeaj\xd4&gt;\x0c\xa6\x16\xbe\xb1\xb2\xf3&gt;L\xc9\xb2\xbe\xf6&quot;\x08\xbe\xad\xe4\xe1\xbe\xd2\x1eY?\xe9\xbb(\xbfQ\xce\xf5\xbe\xf4&lt;\x9d\xbeSq\xb9&gt;\xbb\x1d\xb8&gt;\xd3\xad\x01?@\x89\xac\xbe~-\xa0\xbe\x99\xa7\x80&gt;\x9f*\x9c\xbe\xaf6\xff&gt;E\xbe\\?\xcb\xb7\x0c\xbf#!\xc7\xbe)4\x94=\xda)T\xbfe\x7fk\xbe\x94\x00\x15?\x8e\xd2J\xbb\n\xe2\x03\xbfV\xcb\xa5&gt;\x8d\r\x15&gt;\xdci\x16\xbf\x10\xff\xd9\xbe\xf8.~?\xbc\xbc\xb2&gt;\xdd\x06\xca\xbe\xef\xf62\xbf^\xfc\x04\xbfwY&quot;&gt;~\x90\x98\xbe|\xdf\xde\xbem&amp;2\xbfx\xac\x8e?\x80\x99\xdf=Wm\xec\xbcm\xdb\xa5\xbe\r\xfe\xdb&gt;\
\\x1cg\xbed\x9c\xbe\xbe&lt;\x9b\xaa=q\xa9W?\xd5\x91\x12?&quot;$\xbb\xbaH\xd7\x1e&gt;\xc6N\x97\xbe\xaa\x8d\x8a\xbf\xa5\xff\x06\xbf\xe9u\xd5\xbe\x1f\xa1*&gt;\rr\x1f\xbf\xfdG\x15\xbf\xd6\xfd\x0e\xbe/,\xe7&gt;\xa4NU\xbe]\x99j=\xd0Lq\xbdk0\xa2\xbf\x97\xeb\x95B\xfe\xdb\x95B}!\x95B\xecjzC\xe8\xc7yC\xf6\xeczC\xfcbpC\xa8{pC\xa8\x05pC4\'aB\'\x03bB\n\x9eeB}2\x01?\x14H\x8f&gt;\xf5U*?K|7&gt;\xefx1\xbf\xd2\x1b\x9a?WL\x8a\xbf\xfb\n\xc2\xbe\\\xb4\xa5\xbd\xe1\xdc\xd3\xbd\xf9\xcf\xc3\xbc\xfc\xda5&gt;%\xd4\x02\xbf\xf8\x7fy\xbb\xa2-\xe1&gt;\xc1\xbdP\xbf)\x8c6?Z\xd6\x08?\xd1\xe2X\xbeJ\x00\xd9\xbf\xefo,?\xff\xb5\xa9&gt;]\x92\x02\xbe\x8b\xb8\xfa\xbd\x03\xc1\xdd&gt;\xfa\x04\xb6&gt;\xce7\x06&gt;\t4\x06\xbf\xaf\xea\xd2\xbd?1\x04\xbe\xfa\xcb\xfa=\xfc\xd8\x86&gt;\x84\x96\x85\xbe\xe1\xc2i\xbf\xd4\x06J\xbeK\x0f\x0e?*-\x9d&gt;H\x02\x03?n\xf9\xa0\xbd\xd9\xea\x18\xbe\xb0\xd5\x15=\x1b\x87\x08\xbf\xaa\x0cE?\xa8b\x9a\xbd\xa2n\\\xbf\xc6\x82)\xbf\x18\x1e\x89\xbf1lN?\xd8\xa1\xc4\xbe\xb5\x90\xe7\xbe\x86P??9\xfd\xa4=c\xcaH\xbfb\x04#\xbe\xfd\xfb`?\x16\xd7#=\xeb\x9c\xad?\xdeD\xef&gt;\x1e\t\xd0\xbel\x99\xb6\xbd\xd5\x92\xc7\xbe\x9b\x81?&gt;\xd1\x9c\x01?\xaaj\xab&gt;\xf4\xa8N&gt;;\xf6F= s%\xbf\x7f\xf0\x91&gt;wW\\\xbf\xd1D\x92A\xec\xef\x98AC\xfc\x9cA\xf8\x9b\\C\xdb*]Ck\x1f]C\xe9;~C]Y~C\xef*~C\xd0\xee%C\x04u%C\x1f-&amp;C\xba\x08\x86&gt;\xd8\xb5\x07?\xc2\x17\x93\xbe&lt;\xaf\xaa?(\x8f\x9a\xbe\xdeFP&gt;ww\x11\xbfo\xe2J\xbd\x7f\x14\xb8\xbd\xc7.0\xbf\xb1\xbf2&gt;|\xd2e\xbf\x9d\x0c\xa5?_\x11\x11\xbf2m\xc2\xbea\xcc5=?R\x81\xbf\xa7\x9f\x97?\xfd\xae\xe5\xbe\x82\xbd\xe6\xbe\x86\tA\xbe\x9eP\x8a\xbfJey=TY\x08&gt;\x11\xaa\xa9\xbf\xc5\xb9^\xbe\x97\x8c2\xbf\xbc(\xd4\xbe\x90(W\xbcU\xcc\xed&gt;=\x1bh\xbe\xe6\xf36\xbe \xe3e\xbf\xea\x8e??T*\xa2?\x81\xb8c?\xa5&quot;\xd3&gt;a\xa3\xe0&gt;\xdc\x15\x8b\xbe\x04P\x15\xbf\x8b\x1bV&gt;{\xa9:\xbd\x10(w&gt;\xc8\x9a1?a\x06}&gt;\xb8\xfb\x14\xbe\xa9S\x06?\xb5\xb4\xba&gt;\x03\xaf\xed\xbe\x05)\xfb&gt;_\xa1\x84&gt;l\x85\x12&gt;\xbfpv\xbf\x90u:\xbe^c9\xbf\xeb\xb1\x82?\xe0\x9f\x0f\xbf\xa1;\x1a\xbfP\xa3\xc6&gt;\x08\xa0\xbe&lt;\x82\xc0\x94\xbd%\xdeJ\xbf\xc3\xd3\x03?\xdb\xa7 &gt;\xee\x01\xf2\xbdw\xed\x08\xbf\xd6XP?f\'\x0b\xbe2\x05\xde\xbe\x0b\\@@\x19]\x1c@\xa3ZE@\xb3bJC\xc0{JC\xab\xe4JC\x1b\xea}CuK}C\xee\x93}C\xc5\xa4ZCC\x8a[C\xf7\xabZC\x8f\x04\nB\x91y\rB\x9e\xf7\x0cB\x9d\x19\xdd&gt;\xf4\xbc\t?\x83\xfd\xf3&gt;Z\x07\xe9\xbe e\xd7&gt;\xe0\xf2\xbf&gt;\xbc\xb2{\xbf\x07}\xf0\xbc\xc0\x1fI&gt;\xcc\x12\x04?{yd=qY\xa6\xbe\xad7C&gt;\x84\xc1\x82=\xbbn\xd9\xbe\xc0\x84,?\xe3\x12\xa2\xbe\x8a\x81(?\x06\xccH\xbe\xec\x95T&gt;\xcb\x13\x8b\xbe\xb5I\xe9\xbe&amp;_\x02&gt;\xafq\x04?\xad\xbe\xfb\xbd\xbaiU?\xbe\xca\x16\xbfJ^F\xbf&gt;6\x89&gt;\xd8\xac\'\xbf\xcaJ\xe9&gt;I_\x8f&gt;\xa2)\x8b?f.D&gt;\xf2&quot;\xde&gt;LD\xca=(Ox\xbdD\xf7v=\xec\x01;\xbf\x1c!l&gt;\x92\x98&quot;&gt;d\xdcM\xbe:\xfcF\xbf\xd0I\x82&gt;|\x0b\xcd\xbd\xd3W\x0b?b{\xac\xbct5\x0b?\xc5\xc9?\xbe\x8fh\x91?\x93\xdc\xcf\xbe\x8by\xee\xbd\xcc %?\xca\xc6\xd0&gt;\x8c\x92\xec\xbck\x8d\x85\xbf\x0bt\x9b\xbf\x92Z0\xbf\x1fW\x1a?\xfd\x904?P`.&gt;\xa0\x84\xb1&gt;t\xe0\xaa=|\xa5\xfa\xbe&gt;zC?\xca\xf4\x01\xbf\x00\x0e\xc5\xbd\xb9\xcb\xeb\xbe\xd7Gt?\xf4\xff\x17B\xbe\x12\x17B\xfc\xe8\x19B\x18\x18~C\x1b1~C:T}C\x9c\xf7}C\xfa\x0e~C\xaa\xc2}C\xa4H\x99B\x15\xf4\x9bB\xdcq\x97B\xfbWh=r&lt; 
\xbfDQi&gt;\xfd\xf6\x02\xbf\x10&amp;\xbb&gt;\xb5\x90\xff&gt;\xab\xf2Y\xbfg&lt;9\xbeh\xc6]?S\xe2\xa5&gt;s\x7f\x95&gt;\x18^F\xbf\xcd\x8ac&gt;\xf9\x87s&gt;\xf3\xaf\xac&gt;\xed\x18\xc2&gt;m\x7f\x04?y\xf8i&gt;\xaa\x93&lt;\xbe/\x9d\x04\xbfS\x17\x8b&gt;\x88\r\x9b&gt;\x18\xc4Q?V\x1c,?W\\\x1e?yW\xb9\xbe`.m&gt;\x1e?&amp;\xbf\xe2\xff/&gt;\xa8\x88\xe1=\xd8\xaa\x82&gt;\xc4\xce\xc1\xbe\xed\x0c%\xbf7\x14b&gt;\xac\xb3\x1e?\x87j\xb9\xbf\xb1\xf3\xba&gt;\xdf=%\xbf\x003:?(\xc9\t\xbfD\x17\xc4\xbdt\xf2\n?\xce\x19\xd6=+\xc0\x8c&gt;\xf3mL?\xde\xef\x9b?@\x87\x9d&gt;\xe4\xf7\xc3\xbe4\xe2\x9e\xbe\x97\xfb\x01?\xb76\x9b\xbd\xc1\xb8\x0e?\xb5(\xd6&gt;\xf3\xa3\xa8\xbe}$\x9d&gt;\xd3\x1e6?\xc4H\x8e\xbe\xa10\x9b&gt;`\x10#?\xf2i\xf4\xbeX\xf6\xe0=^L\x14\xbftX\x04\xbf\x8e\x17\xb7&gt;\xe3\xa0\xc8&gt;o\x8e~\xbd;&amp;??\xe3\xa6A?u\xa61\xbeG\xde\xfaA\xb2\xf6\xf6A\xed\x03\xf0A\xfb\x93_CE\x02`Co\xa1`C)\xc8}C\x1bx}C\x11\xed}C\xf9\xe2\xe4B\x06\xa4\xe4B\xd7\xdd\xe6B\x9a\xce\xcd?\xda\\\x88?\x8f[\x8f?Wx\x16?Q!\x11\xbe\x8b\x0b=?\x13)\xf9:\xb9\x04\x9a\xbf\xf1\xcf\xcb\xbe\xb2\xca\x8f\xbf\xa7\xa1\x87&gt;\xfb\xaa\x1c\xbfxU0\xbf\xf7\xfd\xa9&lt;\xab\x03\xab==\x91F&gt;\x13\xdcB\xbf\x80\x1b\xda&gt;\xf7\xd55\xbeN\xca\x8b&lt;\xf5i\x1a&gt;:-L\xbe\x9f\n\xe4\xbdZ\xce\x89\xbe\x99P\x91&gt;\xc9\x10 \xbf\xb9Oh&gt;\x856\x02?\xf5\xa3\x0e&lt;V\xa3\xab\xbde\xc2$&gt;\xe1Ye?;\xfe\x93\xbd\xdf~\xfe=]\x18\xa5\xbe\xd0\t\xe1&lt;\x9e:\x0e?K\x1a\xab?\xe0\x95O?\xf4\xb9\x1d&gt;\xc5\xd4\xda=[Q@\xbf7`1&gt;_9\xec\xbd\x1a\x1e\x1c\xbf\xe4\xd7\xf7\xbe\xf0#\x17\xbdY%\xa0=\xdd\xf8\x1a\xbf:B\xf8\xbd\xcd*\x8f&gt;h\xc6x&gt;x6\xf6=\xc1\x0c\xa6\xbf\xea\nw\xbf\x8d\xa4K\xbf\xcf\xc8\x7f\xbezH#\xbf\xe5\xf0S&gt;$\xc5\x04\xbf\x97\x8fF\xbe\x0b\\\x95\xbeh\xc8\x89\xbf\xa0W\x1c?\x9c!\x92?&lt;\x1c\xc4&gt;\xc5\x13\x14&gt;\xa3nE\xbd4\x01\xa9&gt;A\x82\x84\xbf 
\x0b\xac\xbf\x1a\xf0\x9d&gt;\xe5\x03\x05Ck\x08\x04C\x8aC\x05C\xc2\x9e}C2h}C\x1d)~CS\t~C\xa4b~Cf\xa8}C\x81mOB\x16EOBF\xc0QBs\xf9r\xbfo\x8aP\xbf\xb4\x80\xca&gt;Y\xa2\xc0&gt;L\x176\xbf\x98T$\xbf\xbc,\x95?G\xe4M&gt;D\xd0\xa4\xbd\xb7\xb5\x16?G\x08B?X\xfdI&gt;\x8e\x0e\x9a&gt;\xeb\x93\t&gt;\xe9\xfe\x9c&gt;t\x8cM?RS\xa6&gt;4\x15@=\xf2N\xc0\xbe\xc4|\x1f&gt;E\xe4\x80\xbd|@\xf2&gt;\x97\xb11\xbeY\xcd&amp;\xbf\xe8j\xf0\xbeYG8&gt;\xc4\x1a\x11\xbeY\xe0m\xbf\xf3?\x06\xbe\xf4\xbe\x9c&gt;-\x88Z\xbek\xd0\xfd\xbeD\xed\x8c\xbd\xa8\x92\x0c?\xd4\\\x1b&gt;F\xbe\n\xbf]\xaf+?x\xd7\x16\xbe\x81-\x03?\xd79\xb2\xbd\x95\xdf\xa6&gt;\xe0\xe26?B\xbb&quot;\xbf!\x11\xf5&gt;\x87\xfa\x0f\xbf\xe4\xced&gt;\x92h\xe9&gt;\x8a\xac\x13\xbf\x9c\xe5@\xbeFJ\x1e\xbf\xf1#\xaf&gt;\xbc\x1f\xa7?\x9e\xc2\xa1\xbe\n\xb1\xac&gt;~\x94\xa0\xbei\xef\x17?\x1c\x9b\x94?k\xef\xdb&gt;\xb3\xfb\x93\xbfT\xfa\x89\xbe\xd1\xe8z?\xc6\xac\x1f\xbf&amp;\xb8\xcc\xbe\xa1|\x00?\xab\xc9\x16?\xb2\x00\xbb&gt;\x9a\x10\xe4=\xeb\x85\xd1\xbe\xaa\x12\xae\xbe\xc2\xc4rB\\8pB\x04nuB}\x89qCj\xb4qC{\xb5qC\x97$~C\x92g~C9\x87}C\x0b\\~C\x8a\xc3}C\xee\xd3}C\xbd/RB\xde\xc0PB\xcfGNBf\xc4W?[L\xa3&gt;\xf5\x84\xc5\xbe\x07\x07\xf4&gt;\xb5\x9b\x8d\xbe\x9dG\xde\xbe\xb2\x05\x85&gt;\x02\xe0\xa7&gt;V\x91\x93&gt;\xc5\xfb\xbb\xbe\x15\xd4\xcf&gt;$\x91\xf3=\x17\x94C??F$\xbfa&quot;\x11?_\x01\x00&gt;0\x1f\x82\xbfC\x02o&gt;\x9a\\H\xbf(\xcfZ\xbf)B\x97\xbcjk\x91&gt;\xaf\x9c\xb5\xbe\x8a\x1aF?&quot;&quot;N&gt;5Zw&gt;\x83\xb91?1\xecO&gt;\x8b\xfa\xee&gt;\x9d\xef\x9e=\xf0\xc4\xe9\xbe\xa5\x04\xe0&lt;\xd9\xca\xb0?\xabs\xed\xbe\xad\x12\x80=\x06\x9d\x02\xbf\\\n&amp;\xbf}\xd2\xae\xbe\xcd\xfeC?\x10i\x9d&gt;(\xafm?&gt;#\x9c=\xe4\xd8\x92=\xdb9\xdc\xbeI\x11\x97&gt;\x15\t\xd3=\x9e2\x1d?)c\xfb&gt;\xef`\xab&gt;\xc7\x85\xe4\xbe\xeb-\x0c\xbe\x91\xbe\xda&gt;\\\xdf\x9e?\xcb\xafl\xbeN\xd2\x83\xbe\x8a\xb2:\xbevA\x8b\xbdrJh\xbe\x95\xe1\xd4\xbe\xa4r\x80?\xd7\xbe\x01\xbf$^u&gt;\xbf\x1b6?\x8bi.?n\xd3\x93&gt;\xe34\x12\xbdT(\x02?\xd2\xcd\xdc&gt;\xb4t;&gt;uJ\xf2B\x0b\x87\xf2B\xec\x90\xf1B\x1b\x18~CW|~C`\x8c}C^\x1e~CD~~C\xbaN~C\x8e)[C\xcc\x04[C\xf9\xb8[C\x9aG\x1fB\xc0\xa4 BI\'!Bd\xf3A\xbfm.\xd6\xbe\x13\x05\xc9\xbe\xe5\xb4\xeb\xbd\xe63\xeb\xbd#\x9c\x04\xbe\xdc\xa2\x06\xbf\tc\x02?\xda%o&gt;\x92\x18S\xbf\x8dD\x8b?\x0e\x8f\xbe=@_*?\xd6\x12C?w+\x91=\xdf\xf3A?\xbd\xc4\x1c?\xcf\xa7\x1d?v\xbdA?\x07\x9b\xa1\xbc\x99\xca\xa9&gt;\xacA\xfe\xbb.\x8aK\xbe\xda \xee&gt;Q\xe2g&lt;\x0br\x13?&amp;\x8b|\xbd\x80\x9a,?\x03\xd7-&gt;`\xd3\xa1\xbd\xa3g[\xbe\xdf\x99\xe9&lt;?\xe2\x06=?{\x04\xbe\xe1\xe7\x16?\xfeU\x11=\\\xc1O?H\xda\xf8\xbe\xe3\xfeu\xbd\xd5\xdf\x19\xbe\'w\xa7&gt;{H \xbeN_1&gt;\x8b\xe3\x8e=\xfdE\xe8&gt;\xff\x173\xbd\xda\x07&amp;\xbf\\ 
B=\xb3\xb2t\xbeh\x0cg\xbf\x8cw\xeb\xbc\x85\xa1\x18&gt;\xe0\x0c\n?F\xdbL&gt;\x02\xe3\x00?R\xe7\x0b\xbf\xb9\xcc\x08&gt;,\x00&lt;\xbd\xcd\x87\n?\x1a\xec\x90\xbeL&amp;}9\x95\t\xd6\xbd)\x1a\xb0=\x0c\xf1\x87?\n\xd0;\xbfB`\xa4?\x8f\xd5\xd1\xbe~\x14r?L\xc7b&gt;\xc5a\xf0B\xf7y\xf1B\xf13\xf3B\xae!~C\xfb\x0e\x7fC\x91=}C0INCK*NC\xc6\x8eNC\xaa\n\x91A\xeb\x01\x91Aor\x8bA\xbf\x0e.&gt;3\x93D&gt;U&quot;-&gt;\xe7\xed\xd4&gt;\xb1\xed\x8f\xbe\x07\x865?f\x85\xfe\xbey\xbe\x03?m\xbd\xd7=\\Z\x01?]z\xc6\xbe\x9ag\x81\xbfa\xa2:\xbe\xd1\x0c\x86\xbe\xdeu\xaa&gt;~\x89\x8c?m{\x8a?\xad\xbb\x97\xbf@\x9d\xda&gt;\xe9\x15\x86\xbeY\x80*\xbf\x8fd&lt;\xbfH~9?\x1b\x81e?E\xac\xfd=\x08\xc32&gt;\x86\x95\xb9\xbe\xb5\xcc%?\xc5s\'\xbd\xeb\xf1\xc2\xbe\x92\xab\x18?\x1e\x9a\xfe&gt;p\x1d\xa8?Y&quot;\x1a\xbd\xce\xd5{=0K]\xbf\xa2?\xe3&gt;\x02\xba\xe6\xbeV\xe2\x94?\x1d\xe3\xfc&gt;\xea\x9cg\xbe\xba\xb4R?\xa5q\x1e\xbf\xff\xf47\xbf\xf5\x80F&gt;\xbclA\xbb\xe5\x14\x11\xbf\xd7y\xec\xbey3\x0f?\x86\x91\xc6=\xe8\x96~&gt;\xf3d\x16\xbf1v\xd0&gt;\xdd\xa1$&gt;k\xa5 &gt;\xf9k\x82&gt;\xc1\xbf\x9f&gt;\n\x94y&lt;\x96_6?\xbb\x8b\xe5&gt;\xa8N{&gt;\xd4\xfd\t?\x1f\x1f6\xbe\x99\xd5\xd0;G\xc9\xde=nSa&gt;\x9d\x00N?5z\x0c?-xJ&gt;\xb5P\x08?\xa3\xed\x9a\xbe\xf7\xdc\x04\xbf.,#\xbe\xc3l%?l\x95R&gt;2\x1a\xc1\xbd5=\xde\xbd\xf8\xecE&gt;X\xfd\x89\xbe\x1d4\xaa\xbd\xe3\x9ag&gt;vT\xbc:/s\xe3&gt;\x17\xe0C?\x8d\x8f=\xbf&lt;\\2=\xfbA?\xbf\x0cG\xf5=\xdb\x80\x12&gt;\xbb\xad\x00\xbf\xea\x96C\xbfq\xa8\xf8&gt;\xde\x84\xbb\xbeq\x19F\xbe\xe6\xbbw?_H\x1b\xbf(\xbe\x87\xbe%\xa1\xfe\xbaW\xc3Q\xbe%\x87D?\n\x8a]\xbe.P\xa0\xbe/u\xce?\x87&amp;\x92&gt;h\xb7\xf9&gt;md&quot;\xbc\x1d\xc6\x85\xbeG\x1a:?\xa0\xbc\x8c\xbe\xa8\xd8)&gt;\xbd\xfb\xdf&gt;\xaa;4\xbe!E\xf0\xbe\x90Ql&gt;\x075\xdc?\\\xf9$\xbf*M\x83\xbe\xa9\xb6R=;\xb5\x94&gt;\xec\xdd\x07&gt;\xc6C\xcf\xbd\xf3fE\xbe!A\x8b\xbc\xe2\x18\xa1\xbe\xff\x89\x1a?tN\x8f&gt;' test_bytes = bytes(test_bytes) test_image = np.frombuffer(bytes(test_bytes), dtype=np.float32).reshape(28, 28, 3) fig, axes = plt.subplots(1, 2, figsize=(16, 4)) fig.subplots_adjust(wspace=0.1, hspace=0.1) plt.subplot(1, 2, 1) plt.imshow(test_image) plt.title(&quot;test_image&quot;) plt.subplot(1, 2, 2) plt.imshow(~test_image) plt.title(&quot;test_image - invert&quot;) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/byejG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/byejG.png" alt="enter image description here" /></a></p> <p><strong>Here is the error:</strong></p> <pre><code>Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_28462/820192811.py in &lt;cell line: 17&gt;() 15 16 plt.subplot(1, 2, 2) ---&gt; 17 plt.imshow(~test_image) 18 plt.title(&quot;test_image - invert&quot;) 19 TypeError: ufunc 'invert' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' </code></pre> <p>How to <code>invert</code> this test image with &quot;<code>~</code>&quot; operator?</p> <p>And how to fix this error in this image:</p> <pre><code>Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers). </code></pre>
<p>You need to rescale your data from the -1.695 ... 255.408 to the range of 0 ... 255. Due to floating point operations, you still need to <a href="https://numpy.org/doc/stable/reference/generated/numpy.clip.html" rel="nofollow noreferrer"><code>clip</code></a> the rescaled image to the range of exactly 0 ... 255. After converting to integers inverting is just doing <code>255 - image</code>:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt test_bytes = b'\x8bE\xd4=\xacF\xdc\xbd\xfc\x02\x1f&gt;=\r\xba\xbd\x0c\x9a\xbd&gt;\x11tP&gt;E,\x10?\x9d(\x03?\xb0\x18\xfa\xbd-$\xd4=#P\\=k\xe9}?\xac\r\x93\xbe\x87\xe1\xfd&gt;\xff0e=C\xdf-\xbcCc\xe4\xbd?ff\xbf\x989\xbf\xbe5\x8ag\xbf\xaa_\x14?\xb7\xd6;\xbe\x9c*`\xbe\xf12\xe2=U\xc5l&gt;)7\xa3&gt;F^\xea\xbe\xc0\\8&lt;\xc5\xb2\x03?\x9c\x8e\x1a\xbf\xe5lV:\x8b\xbf\x98&gt;\x9e\xc5Y\xbf\x93\xec\xe2&gt;e\xd0\x81\xbe\xf8\x94[&gt;\x01S5=\xdb\xab\xb8&gt;\xef\x05\xb6\xbd.\xaaS\xbf\xfaS\xa2\xbe\x14\x0f0\xbe\xed\xaf\x8c&gt;/\xdc\x86\xbf\xb8\x8e\xbb&gt;*\x97\xed\xbeS@\x0f;$c\x82\xbe_\xd5\xb2\xbe)\xa6\xa1=\xe9\xe9\x04\xbe9J\x9a?\x99\xbf\x02&gt;\xcaA\x8d&gt;C\xe73?\xbbgI&gt;\x93\xf8\xea&gt;\x92\xbd\x10?A\x85\x93\xbf\x18Q&amp;?;\x80V\xbe\xe6\xaa;\xbf\x07\x02\xae&gt;C1\x89&gt;F\x12\xb5&gt;\x90\xf6\xbe\xbc\xb2\xf4\x95\xbeo\x0e\xae\xbe\x97\xdfV=\xef\xeb\xe2\xbdNc0\xbf@zx\xbfv\x86Y?\x1c\xc7l\xbf\xe7\x0ed?\x19Wk\xbf\xcb_.\xbd\x17\xdd\xb7=\xb7\xfc\xe1\xbd \x07\xb9=\xaa\xa3\xd3&gt;t\x8f6\xbe\x7fWC&gt;\xa6\x07\x11?iN\x9d\xbd\xa2g\x1d?\xef\xff\x16?\x9b\xde\xf3\xbc\xbdn\x14\xbf\xfe\x9dL\xbe\xb3W0&gt;\x8fqJ\xbf\xcd\xb8\x9e&lt;\xc6\n\xdf\xbeU\x18\x8b&gt;\x1e\xf8\x9c&gt;\x84&amp;\xb9\xbdK\x13`\xbel\xd0B&gt;\x88\xb6\x8e&gt;\xb8`\xa2&gt;2#\\\xbf\xb7\xdb\xaf=\x91\x12\x93&gt;\xbf\xcc.\xbf\xfd\x88\xe3\xbe\xc9\xab\xe6=|`a?3\xfb\x94&gt;\x9d\xb9\x0e?&gt;\xf0\xab&gt;o\xf3\x9e\xbe,\xb3\x11?`V\x1a\xbf\x849\xa8&gt;\xca\x04}?\x96{\xb5=\x0f\x80\xa5&gt;\'o\xef&lt;\x99\xd9\x81\xbe \xed\xa2&gt;\xa3j&lt;?&amp;a~?\xe8R\x05=\xce\xea\xb1\xbef8`&gt;\xc5 \x05\xbf\xb0\xc6\x83\xbeY\x88\x0f\xbfC\xff}\xbeU\xc3\x11&gt;0\x18&gt;\xbf\xbe\x82\'\xbf4\xe9\x93\xbe\x84\xc0\xa7&gt;\xfb\xf6\x8d\xbeI\xbe\xb8\xbd\xd2\x964\xbe\xa5L\xa0\xbe\x93\x1d\x9c\xbe\xac\xea\xb8\xbd\xd5\xf5y?6\x81&gt;?\x01I\xae&gt;\xdc\xd8\x11\xbfHW\x1e?\xdc_\x87\xbe\xd3\x17&quot;?`w\xb3\xbc\xa053&gt;,P8=P\xb50&gt;\x8b\xc4\xba&gt;\t\xbc%\xbe\x13Q\x82&gt;\xba.\x9e=\xc5f\n\xbe\xd9\xc9\xed=\xf7\xf0+?n\x1f\xe5&gt;7\xd1\xdb&gt;\xea\xef&quot;\xbe\xd1/3?x\x15*?.\xdcO&gt;3\x99\x90=8&amp;\xa0?\x9b\xdc#&gt;:&gt;\xd4&gt;\xa3\x9f\xd0\xbe\x16\xc6&amp;?\'@|\xbf\x18\x05\xf2=\x08\xc4~&gt;\x87XY&gt;\xd8=\x19\xbe2\xeb,?-\x08\xaa\xbde\x93\n&gt;n[\xce\xbem\xe0\xa2\xbe\xfe\x1e\xa0&gt;(\xd6?&gt;\xfe\xf9\x14?\xea\x0e\xc9\xbeh\x0c\x94?\xd1m\xc4\xbfS\x0c3\xbe\xb3\x9e@&gt;\xb5`\x82\xbe\x8e\x8b\xd4&gt;\x1c\xa4\xad&gt;\x96(\xfc\xbbp;P&gt;\xb0\xc6\xb2&gt;R\xeb\xc3\xbe\tj\xdd&gt;x\x95!\xbf*\xaa\x1a\xbf\xb8\xa7~&gt;z\xdc\x13?\xcf\xd3Y\xbf\xe5\xcd\x0b&gt;\x95\x12\xfb\xbek\xa5m\xbeB\xbd\x14?\xe6p_&gt;\xe6\xcb\x9b\xbee\xc1&amp;?U\x9d\xb0\xbfjL@?\xc0-N&gt;\x11\x9d \xbcU\xdf@?\xaex%&gt;\xee{\x08?\xa2\xc3k\xbf&lt;\xed\xe5\xbc{T\x96\xbc0&quot;*&gt;w6%\xbfPh\r\xbe\x12y:\xbf\xdd\xaa\xdd=\x17t &gt;\xb4c\xfd==\x84\x10\xbfl\'\x0c\xbd\x9a\x0f\x1d\xbe\xb2\x03\xec\xbe\xf0\xa6\xa3&gt;\xe2d\x00&gt;h\xb8\xfd\xbd7\xac\xa1&gt;J\xa7\xfa=\xe9\xab\x9e&gt;l\xb7/\xbe\xa3`\x05?\xa9\x1e\x85?\x05\x05*?&quot;\xec\xb4&gt;\x0c\xe8\xee&gt;\xbc\xbaS?i\x16d\xbd\xba\xed3&gt;\xd8S[&gt;\x1d\x7f\x94?\x084\xe0\xbd+\n\xb4\xbe\x89HG\xbf\xf1\x93\x0f?\x987\n\xbf\\\x8fs?\xf6\xbe 
?\xa8\xca\xf3=\x84\x07U\xberJ\x92\xbe\xc0\xad\xac&gt;\x12MY\xbf\x12{\xf4\xbe\xbc\xc6.?\x9f%\xd8\xbe\xa8\xaaL&gt;y\x99\x95&gt;\x8f\xc0\x07?\xa5\x84\x89\xbe\xd6\xb1\xa3\xbeT\xec\x0c?\xddZ\x88&gt;\xf5\xd3*\xbe\xc1\xa3\xbd\xbd\x853\x91\xbe\xca\x0c\x14&gt;\x11\x18x\xbe:q]\xbf\xfc\xb1\x82\xbf ;\xe6&gt;\xdc;\xcc\xbd\xe9X\x05\xbf\xcf\xcd\xb9\xbfe]t&gt;\xf0\x84\xc1?\x8c\xffM\xbe\xc8\xb5 &gt;\xa5\x9d\xdd=\xe8c\r\xbf\x89\xd8\x1f&gt;\xaf\xe8P\xbfI\\\x83\xbdGH\xe3\xbe\\3\xbf&gt;\xa2\x1b\xd5&gt;\x0c\x1a\x04?\x18\x17\xad?\xf0\r\xc6&gt;\xa2\xc2\x9f\xbd\xe9M\x80?\xa62\x95\xbd9H\xb9&gt;w,\x1a\xbe\xb0ch?\x89&quot;m&gt;\x8ex\x12=\xc2[Z=\xc2\xa3\xc3\xbf\x86dB&gt;\xff\xee\x1e\xbfi\xa4\x9e&gt;(P9\xbe\xa5\xf0\x1d?=\x17U\xbe\xe9\x0e\x18=\xea\xb6\xaf&gt;\xb8t\'\xbe\xca\xbe[\xbf\x7f\xd9\x8f=&quot;\x9aL\xbe\xa4\x10?&gt;2\xa2\x9b\xbeb\xbcR\xbd\x15^\x9a=)\xb8\xe7\xbdJg\x03?\xe6\xf6g&gt;#\xde#\xbe\xca\x92\x19\xbe\x8d\x95\\\xbe\x97I\x12\xbe\x9eF\x03\xbf\xc04\xbf=u\xa9\xe9\xbe+\x1b\xb1&gt;\xff\x8bB\xbd\x16V\xde\xbe\x82\x8b\xe4\xbe\xb0\xb8\x88&gt;\x19\xb3\x9b\xbc\x8f\x82\x90\xbc\x7f\x14\xc2=G{\xe5&gt;\x90\x1b\xbd=\xcf\x14c\xbf\t\xf7O&gt;\xe2\xbc_&gt;\x1e&quot;G\xbd\x14]@&gt;\x15G\xce\xbe2\x8a\xa7\xbe\x9b\x033?\x9f\x11G\xbfT\x0e\xbf\xbe\xe2\xc5\xf9&gt;\xb9\xcd\xd5\xbc\xd6#\x96&gt;\x83W\r&lt;!U6&gt;\xe2\xcd\xfa=E\x80\x81\xbe\xa8\xaa\x83\xbdk=B&gt;^\x03\x89&gt;E\xeb\x13\xbe\x07\x8c\x8f?\xbe\xbb\x12\xbe\x9c.\xaa\xbeY\xd6\x04=\xd9\x8c\xa7&gt;7s\'&gt;\xc0\x18\xb0\xbe\xe0)z?\xa1^\xad&gt;1\x96\x8d\xbea\x8c\xba&gt;\r\x04\xb1&gt;\xdf\xa4C\xbf\x05AS?\x07+\r&gt;\xb7L\xa2\xbfw\x83&amp;?\x9e]Y\xbf[+\x15=\x84\xfb\x0b\xbe\xda\x12\xe3&gt;\x9b:e?\x82\x07=\xbe\xe7{\xcd&gt;\xb2\xbae&gt;\xa0Jj\xbf\x86Y_\xbf\xdb\xd58&gt;:a\xcf=\xe7[\x13\xbe\\\x18*\xbfFE\xd1\xbe\x91\xfe\xa9&gt;\xd1\x006\xbfX\x1f:?\xc7\xfd\x87\xbeR\xf3\n\xbf\t5.?\xe4\xe0!?^\xd8.\xbf0\x0c\xb2\xbeK\xf06?I \xab\xbd\xcd\xf7\x15?\xeat\x97\xbe`_\xc9&gt;\r\x88i=\xa5v\x0f?\x91\xe9\xa3\xbeYB\xdb&gt;\xebG6?\xcd\xb2\x06?\x9f\xc7\xc3&gt;\xb7Qg=A\xff\x10\xbf\\^\xc6\xbe\xbd\x83\x11\xbf\x8b;\xee\xbc\xe9c\x88\xbe+V\x92?\x06\xc2p&gt;\xa1\x87\x15\xbfr*\xff\xbe\t\xd5\x98&gt;\x8b\xee+\xbe&amp;H_?\xa6\x009&gt;\xd6\x039\xbf0\x85\x16?\x9d\xd1\x90\xbdDl\x99\xbe\xa5.\xa1&gt;\xa39Y\xbe|\x87\xa5\xbe?r\xb8\xbc\xf2c.&gt;#H&quot;\xbf\xde\\\x89&lt;\x08\x85n\xbe\xe7\xe3\x1e\xbfG\xfc\xad\xbe\xcf\xfc\xfb&gt;\xc8a\xd5\xbe|!\xc8&gt;|5W\xbe\x85\x91\x04\xbf\xaa\x15a\xbeN\x7f\xae\xbe\xe4)=\xbe\x97\x02w&lt;S\x0c\x11=\xdfh\x8a\xbe\xaa\x92\xb4&gt;\xc2\xac\xdd=\xe5\xe1\x1b?\xed\n]&gt;\xf75f\xbf\xc5d\xa4\xbeF\x1f\x9e\xbf\x06\xd8)&gt;\x9c\x7fQ\xbf\xee\x95-?\x94Mv=\xc5\xa8\xff=\x04\xc0\xd4\xbcm\x88\xb3\xbe\x02a^\xbfG&lt;\xa4\xbe+\x16\x8b\xbepl\x83\xbd\x99\x9a\xb4&gt;\xd3t\x02&gt;\x85\x00\xb0=\x89\xcc\xb6=\r\x03\xbd&gt;[\xa4\xc1&gt;\x81\xb6g\xbes&quot;)?\x92\xce\x1b\xbe(9 
&gt;+\x02u\xbeK,\x10\xbfr\xf6s\xbfe\x96\xed\xbe\xc4\xb3Q\xbdE-a&gt;\x08A\x86?\xa3\x92\x82=\xae\xdf\xb2\xbd\x1f\xea\x0b\xbf\x07\xcb&gt;\xbf)\xf6\xeb&gt;\xf2o\x07\xbeZ4\x85?\x9fN\x8a\xbef\xb6\xb0\xbcy\x12{\xbd%\x0e\xba\xbe\xcd&quot;&amp;&gt;\x1d\xff&gt;\xbf\xc3P\x8c&lt;G\xbf\xb4=\xbd7\x9a\xbe\xe6\xc1\x89&gt;c\xecY?C\xc5\x80=XHD\xbf\xfe\xef\xf6\xbe\xc9\x95\xa4\xbd\xa0\xef\xd4&gt;R\x9a\xb5\xbeVz\xb4&gt;\xb6\x94\xe3\xbc+\xad\xc6&gt;\'\xa5\xd5\xbe\xab\x90\xbb\xbe\xc2%d&gt;\xcc|-?B\x96,\xbe\xa4J\xa5&gt;\x92j\x8e\xbdZ\xb2\x8b?A@\x88?\xc5\xdf\x8e&gt;\xef\x1dH=\xb9\x8e\x13&gt;\xdbK\xcd&gt;\xe2n\x96&gt;\xef(t\xbe\xb0\x7f\r?\\\xa8\xe3&gt;\xedz\x0c\xbf\xa5a\xa3\xbe\xc2\xb4\x80&gt;W1-\xbeD\xb5]&gt;\xad,\x8f&gt;m\x00\x17?XX&quot;\xbe\x8c\xaf\x04\xbf\xe6\xd0\x1a\xbe\xd8\xbb\xeb=\xfe_\xb7&lt;\xff\x12\xcf&gt;;\x9bs&gt;\xd4\xb5\x12\xbe&amp;\xc8\xb7\xbed\xcb\xd2\xbd\xcfb&quot;=\xee.\xf6\xbe\x01\xb5\xf7&gt;:I\xa7\xbd\xb1\x93\x01\xbd\xad4\xe1&gt;\x94\xcb\x88\xbd]\x9d\x80\xbf1\xac\xb9&lt;q\xccL\xbeUsv\xbe\x8a\x9b\xc6\xbc\xa2U5\xbf\xc7\xdb\x1b?,\xd1\x93&gt;\xc7\xa5\x1b&gt;\xaf\xba\x06&gt;p\xb1P?h\xb8\x01=\xee6\x0b\xbf&amp;`)&gt;!\x11\x11?\x9a\xd51?PG\x86&gt;\r\x9bz?\xf6\x81\xdb&gt;!\xc8\xa4&gt;L\x96j&gt;.\x0b\x91&gt;\x94\xec!&gt;\xef4\xcd\xbe\xdf3??\x1a\xff\xa2\xbe|[\x8f&gt;\xac)\xa8\xbe\x89H\x8a?\x85&quot;\x05\xbf\xf7[\xa7&gt;.\xeeW?\x91\xf2\x90&gt;\xa9\x1a\xb2=0\xf1\xe5\xbeG\xbf\x8c\xb9NH\x96?,)\x8f&gt;\x0f\xb1\xd6\xbe4-9&gt;1pH?\x1c\xbd\xe6&gt;!\x82-&gt;\xfe\xa0\xf6\xbe\x01\xfd\xc5&gt;\xd020\xbf\x98\xa8\xb3\xbf\xa3\xad\x14\xbd\xf6\xa08?\xcc\x0c\xa7B\xab\xd1\xa8B\xe4$\xa8B\xc4\x999C\xd3\xa69Coq9C\x19\xbf\x1fC\xeb\x8d\x1fC\xfc\'\x1eC\x16I\x17C-\xbd\x17Cn\x8b\x16C,\xbfoBl\xc8pBJ\xb0nBb\xbc\x12B\xb5\x00\x0eBF&lt;\x11B\xa4g\xe4\xbe\xe1\x15V?\xfd\x00\xa5\xbe\xec\x8fb&gt;R=\x04\xbf\xb7u\x98&gt;L\xda\xcf=#\xfb\x1c\xbf\x1672\xbfp\xb6\xbd&gt;\x06w\x91&gt;\x8f\x0c\x87=\xeb\xd3n&gt;\xc9\ri&gt;\xc8+\xee=\xd7H\xb6=\x04\x01\xa6\xbd\xb7r!\xbf@#\xe3&gt;oL\x04\xbfr\xe5?\xbee\x1f\x88=ZM\xe7\xbeg\xf1\x0c\xbf\xa9\xcd)?\xcf\xd5\x81\xbd\xb1\x0b\x19\xbf\x162j&gt;[Q\x0e\xbe\xea\x8b\x94?\x15\xc9\x92&gt;\x9f\xa0\xa8\xbe\x0f~\x10&gt;\xda&amp;\x8d=\xf3\x18\x93\xbf\xdf)\xd8&gt;9@\x00?\xa2|m\xbe\xd3\x1fG\xbfE\x11l\xbd\xe1\x88\x0c?\xa7\xed\xa0\xbe\xdeY\x07&gt;\xac3\x1a\xbe\xee\x1f\xff=$-H\xbeg\x04\x83=CP\xb5\xbe\x02\x06&quot;&lt;\x10j\x8a&gt;\xdf7\xf3=\x1eTl&gt;\\\x88K&gt;P\xbf\x17\xbe\xcb2\x8c&gt;\xf6v\xcc&gt;k\xb8}&gt;gnb\xbfK\x0f3\xbf&lt;uI&gt;&gt;\x98\x90&gt;+z\xe0&gt;\xd5\xdfE\xbf\xcc\n2\xbe7\x1e\x02\xbf\xd3\x91{\xbd\x04*]C\x92\xc0]CMv^C\xd2`~C]\xfb}C\x81\xb4}C\x90\xf6}C\x89*\x7fC\xd1\xf8}C1Y~C\x95\x80}CO\x91~C\xf3&quot;~CsF~C_\xeb}C\x0cCqC\xe8\xe6pC\t\xe3pC\xad\xe8EC&lt;\xa9EC/XFC\x0e:FCK;FC\xcb\xc9EC\xcd}FC\x1a\xcbFC\xe9\x89EC\x0b\x06FC\xc3\x1aFCB\xc7EC\x01\xa2EC\x9ecFC\x12\xd5EC\xa9\xe6ECO&gt;EC\x1f3FC 
8FC\x1e\xd9EC\xefjFC\xd7\xfeFC\xe75EC\xc2:ECNx*Cm**Cc\x17*C\x03\xf1QB\x99\xceKB\xefyTB\x9d8\xc8\xbe\xa6\x9d~?\xcc\xf1\x93\xbe\xe5\xceJ&gt;\x1cM\x83\xbb\xba\xdaF&gt;VxC?Y1\x1b?\x94\x9e\xb6\xbd_MK&gt;\xf7\x0b\x89&gt;&quot;\xe3\x07\xbf_*p&gt;\xf5/\t&gt;1\x92\x81\xbf2\xd1\x1b\xbd\xc9;m\xbf\x84\xe6I\xbd\xfc|\x8d?1\xc6h&lt;o\x0b\x19\xbf\xf9\xee\xa4\xbe\x14I\xa8\xbeL7\'\xbfm\xb9\x9b=\xdd\'\x84\xbd\xaf\xa5\r\xbf\xb2\x80\xed\xbe\xb5\xc9\x0e&gt;\x82\xe6\xe1\xbd\xb3\xa3\xdb\xbd)\x1f\x7f\xbfa6&quot;?s\x11\xed&gt;\x8dR.&gt;Co\xea\xbd6\xd9\x85B\xcc\xe7\x86B\xd0\xf9\x85B\xcd\x94\xe6B\xbd\xb4\xe3B\xcav\xe5B\x82%\x90B\xb3\xff\x90BDS\x91B\xdfZ\xe3B\xf8\x80\xe2B\xc0\xa5\xe3B;\xd6&quot;C\xe6\x99#C\xdb\x97#C\xafnbC0(cC\xa8\xfdbCD\xaf}C\x95\xf7~C\x1a\x85~CZ\x1faC\'F`C\xb7\x99`C\x129}C/\x08}C\xe2|~C[\xdd}C\xdf\xc4~C+\xae}C\x9a\x81}C\x9c}~C\x19\xe8~Cm|yC\xaf\x0bzC\x878yC\xab\xa8dC+\x1aeC{\xb7dC\xa9t~CA\xda}C\xef\xc8~C:\xdd~C\xaf\xee}CVc~Cyi\x0cC\x949\x0cC\x98\xf3\x0bCD(\x99&gt;-#\x85&gt;\x97\xb9\x80?`\xef\xcf=\x05&amp;\x92&gt;E\x0bx=\xc0\xb5\x8f&gt; n\xa8&gt;\xb9R\xaf\xbe\x1c\x88\x10\xbf\xd5e\xe6\xbd\xa3\xd6_&gt;Nz\xcf\xbe\x9c\xb3E&gt;\xe4\xdf\xcd&gt;\xc5-,\xbf\xf5{U&gt;\xffA\x81\xbf\xc0\x9b]&gt;Y0\x98\xbdU\x01\xc8\xbf&amp;\xc5r?\x9f\xa7\xcf=\xce\x1f\x15&gt;\xae\xd03&gt;\xe4:v&gt;\xd9\xa1\x8a\xbe\xe8\x8e\t\xbfe\xa5\x08\xbf \x98\xc1&gt;t\xc2:\xbf\x08\xd7b?\x1dA==\x1ck\x00?\xceU\xeb&gt;\x84\xad\r=\x05\xd7\x8f?X\x96\x82\xbe\xd9\xd18\xbf@31\xbe&amp;7\x05?lpw&gt;o\xbe\x17\xbe@*\x95\xb9\x1d&gt;V&lt;Z\x05\xf2\xbc$9\xa3\xbf\xc7\xf2\\?U\xff\x81\xbe0\x17C\xbf\x03\xe71\xbf\xd2\xb6\x89A/\x1f\x81A\x9b[\x84A\xdf\xd6\x82B\xac\xfe\x82B6\xb0\x85BZ-eAH\xb9mAt\x11WA\xe86\x85B#\xac\x86B\xbb:\x87Blm\x85B\xdb\x8f\x85B\xf2V\x86B,\xf8\x86B\x94X\x84B\xa5\xb7\x86B PoB\xf6\x98nBIYjB\xcd\x9d\xa1A\x7fW\xaeAZ\x1a\xa7A\xe6?lC\xbfSlC\xe5\x82kC\x8e+~C\x89\xa2~C\xef`~C\xf8\x1d\xd3B\x1e\xf4\xd2B\xbf^\xd3Bi=\xe6\xbebiZ?W\xc2\xfb\xbe\x1c\x11f\xbetZ]\xbe(\x9f\x93\xbf\x9910\xbe\xe1$\n?\x07\xe4\xad\xbe\x8d9\x83?\x12A&gt;\xbfn.$\xbft\x82\xd1\xbe\x8c\xd9R?&quot;\xd9\xd9=p\xf7\x1e?\xe5\xebX?\xa5tJ&gt;\t\x91q\xbd\xd4p\xd9\xbe\x96\xf6N\xbez\x8c\x97\xbe\xe8.Z\xbf*\x02]&gt;d\xdb\xd0\xbe\x84\x91\xb6\xbe V\xa7\xbe\xa7\xa8i&gt;\xca\x88&quot;=\x99\xe4\x0b?\xceK\x80\xbf_\x95k\xbd\xf2Gy&gt;\xd1\xde\xc1&gt;2\x1a\xf1\xbd\x11\x1c\xed\xbe%m\x1f\xbe&quot;\xf2&quot;&gt;HX\xe4=\xf5\xa4\xc2&gt;\x91\xf7\x8d\xbe\x16\xec\xcc&gt;\x02\x8bg\xbe\x03n\x84&gt;\xbe\xca\x94?\x19\x80\xe0\xbe\xe1\xd9\x17?q\x15=?\xdf\xa4]\xbd\x896\x1d\xbeM\xe5h&gt;N\xf7\x07?i\x9c\xaa&gt;\x01\xef\x15?\xc6V\xcb&gt;\x8c\x1cv&gt;\xe5\t\x9c\xbe\xedz\xee&gt;w\xc5&lt;&gt;\x1a\xed\x99\xbd\x1a\x1e\xd9\xbejY4\xbf\xcb-&gt;\xbe\xa7$\xaf&gt;\x93\x99D?T\xe4\x0b?\xa4\xb1\xbb&gt;\xa2E\x88\xbe\xfc\xcd\xdc\xbeF\xe9/\xbfH\x08\xc6&gt;)\x8ek?\xb1\xde\xa5BK\x10\xa4B9w\xa5B\xe4*}C|\xcb|C\xd4\xc4|Cy\xe1PC#\xe6PCp\xb3PCQ1\x8dA\xae\xff\x90A\xcd\x9e\x8cA\x83\xa1\x19\xbf2\xa1z\xbf`\rU&gt;z\xc6\x05?=\x89\xb9&gt;\x85\xb9\x8c=~\x18\xc6&lt;`K\x89&gt;\x93\xd2\xf4&gt;\xae\x04p\xbf\xcd\x0e\xcb\xbe \x8f\xa2\xbf\x0b\x15\xb1&gt;O\x9b\xd7=\x82\x04\x04?*\x8b\x04\xbf\xbd])?I\xf3\xf5\xbeTWT\xbf\xb8 
\x14\xbfDc,?\x17\xd9\x17?\xf6\x1b`?\x88&amp;K\xbd\x07\x82\x00&gt;8q?\xbd\x85\xca\xf0&gt;\xbd\xec\xd4&gt;9q\xa0=H\x0f\\&gt;X\x8d\x83\xbd\xae\xe0\x04\xbdG&amp;\x0b?\xf1\xd4W\xbf\xa1\xbdY\xbd\x80#r\xbe\xd0\x7f\xe3&gt;\x8a\x19\r\xbf*w]\xbe\xc6\xf6\xc6&gt;.8S\xbe\xb7\xacM&gt;\x1a\xd2\x11\xbf\xe7\xe9\xf4\xbeH\x18=?e@#\xbfJ=c=\x0f35?\x03G5\xbf\x13E!&gt;\'\xe3S?\xb7/\x93\xbd\xc6\xf00\xbf_L\x16&gt;t2\x01?h\xfe\xb8\xbe\x9c\xaa\xa7\xbet\x96\xd4\xbd\x82\xfe\x90&gt;]\x97\x84&gt;\xbe\x18F;\x93\x0c\x1a?\xb7\x1e\xb2&lt;\xa7n\x06\xbf\xaa\xf8\'\xbf\xa9\xe2q&gt;\x88\xc8\n?\x95R\xf5\xbe\xa9\xc7\x9f&gt;\xcd\x07\xb4A\xc4P\xb3A\xfe\xc4\xb3A\n\xa3hC\x9a\xc9hCH\x16iC\xb7\x0f\x7fC\x83h\x7fC\x8c\xf6~CSG\xa7BMa\xa6B\x80&gt;\xa6B\xef^+\xbf8O\xcc\xbd\xb4=\xd3\xbd\x97\x1d\xae&gt;k%\xc2\xbe\xee)\x0b\xbd\x1a\xf8\x85?\x83H\x86\xbf\xd4\xc7\xdf&gt;c\xd3\xd3\xbe\xa4\xf8\x9f&gt;\xdcL\xc8&gt;\x06\xb0\xa5\xbe\xc9\x02\t\xbe\x8c3\xd1\xbdXD\x9b\xbe\xb7\x8e\xd1&gt;\xc6V9?\xe8\x87\xa8\xbf\x103\x1e?\x81\x9e~\xbf\xd4b\r\xbf\xf9\xdco?\xef\x99\x19?\txq\xbe \x82\x00\xbfAb\x18&gt;w\x97C\xbf\x82\xa5\xe9&gt;\xd0d\xd0&gt;\xd1\xbft=\xe6\xdd\xa6:\xd5\x96\x06??\x91(\xbf\xb4N\xad\xbe\xa7\xf6\xd8\xbe5&lt;-\xbf\xba\xbe\x9b\xbc\xb7q\n&gt;\xecL\x1d=\x00\xca4\xbf\xa3f{\xbe\xfe\xc6\x19\xbe\x1c\xb1\x1a\xbf\x08\xaeB\xbf\x92\xa98?\x06\x94\x1c&gt;r\xd9\xe7\xbb\xd8P\x0b?7o\xf8\xbeS\x90C?&gt;\xf9\x17\xbf\x10\xf2\x1c?\xd0U.\xbd\x8fb\x00\xbe\xc3&gt;Q?\xe1\x14\xb4=\xdd\x07j=\x89\xf1\x9d&gt;q\xf7\x8b\xbf\xd2\xc4\x93&gt;\x94sL\xbfXO\x1d\xbf\xb8:\xfd\xbe\x1b\xebH=p\xd3/?=\x98\x0c?O\x1cM?\xf99\xc5&gt;\xcfy\xfd&gt;\x8c\xe7\xb6\xbd\x12O\x0e\xbfR@\x00C\xaf\xeb\x00C&gt;\x0b\x01C\x1fM}Cpo}C`\xb7}C\xd2OnC\x10xmC\xe9\xa5mCv\xad1B\xbc%-B\xf3\x8f/B\xea\xe3\x87\xbed\x89\x1d\xbf\x96%\xb3&gt;\x8a\x8a#?h\x10\xae&gt;\xefx\x12=\xfb\xcb\x91\xbe\xe1\xcf\xcf&gt;A\x96%?I \xdf\xbdbBO\xbfj\xaeb\xbf=\x0e\xcc&lt;\x18\xb4\x87&gt;;c\xef\xbe\xaf\xf5\r\xbf\x1b\xe1-\xbffy\x8c=9*\xaa&gt;rr\xb1=\xb9\xef)?\x18\xab\xd4\xbec$\xd6=\x85\xaco\xbd\x0b*0?p\x99/\xbe\x99\xb4\x0c\xbe\x1e\xb94?\xad\x05P&gt;O\xda9\xbd\x8d)C\xbf\x92x\xc6&gt;&quot;k\x95?:\x9e+&gt;\x08\xdd&amp;\xbf\x82\x9c\x83\xbeJ\x8e\x01\xbcQ}\xb4&gt;n\x82\x1c?\x86#t=\x1f0\xd8\xbe\xa8n\xa3\xbf.3\x1b\xbf\xb9\xc6\x07\xbfq\xb0\xad=)\x9cm\xbe\x93^N?o}F&gt;\xcd\xcf\xd2&gt;&amp;\xc4+\xbd\xc9\x05\xc8&gt;\xb3D.\xbe\xec\xbc\x01&gt;\xa6\x15\xd3\xbd\x1a\xf7#?x\xf3\x13?J\xb3\xea&gt;\x97\x02&gt;\xbe\xc3\x81\x1c\xbdu\x01r\xbe|\n/\xbd\xb7\x84\x1d?\xbb\x0e\xf8\xbeO\x8cp?\xdf\x91b\xbfw\xc0*\xbea\xbe)\xbd\xd0F;\xbf\x8d\xa3\xba&gt;\x80\x98hBb9oBWFlB\x1eDyC\xfa\x87xC\x8b\xddxC*D}CK\xce~C\x18\x99~C\x80\xe2wB\xc2\xd4zB3OvB[)K?\x0b\xa0\x12&gt;\x82!\x15\xbfB 
\x00&gt;\x00t\xca&gt;G\xb7+?\xdf\xfd\xf4&gt;\xe7\x00\xaa&gt;wZ\x83=L\xd8\x8b\xbe\x0e\xc5\xcc\xbe\x9d\x7f==\xd4KE&gt;\x02\xf4!&gt;\xbe\xb0%\xbf\xbf\xd8\xc8\xbe\x85&gt;\xfb\xbe\xeb\xf9{?.\xdb\xce=@\x84\x1d\xbf=\x8e\xd1&gt;\xd8\xe1\x8a\xbf\xcc\x95o=\x9aK\x85?&gt;\x151?\x07\xac\x03\xbf\xcb\xe2\x17&gt;\xae).\xbe\xa4=\xf7&gt;\xb8\xf7(\xbf2\xd3\xe1\xbd\xedw\x9c=\\\xda\x8d&gt;\xcc5\xe7&gt;g\xb8\x15?\x91Vb\xbf4r\x80?\x8e\x03\xe6\xbeU\xf3\xde\xbe\'\xb4\x85\xbf\xec\'\x98\xbf\x0f\xe3\xba\xbe\xa3_|&gt;+X\xb4\xbe\xaf\xdf\x1c?M\x9d\n\xbf\xa1\x86\xe6\xba\xae\rC?i\xf5U&gt;\xa0Ab&gt;\xda\xb9\xc1=\xa7\xc2\x81\xbe\xcd\x9a\x06\xbf\xcb\xa9\xb5\xbdd\x15\xd3&gt;ed\n&gt;\xa3\xbbp\xbf\xb7\xfat&gt;\xa3\xe6\xd7\xbd\x82\xca\x98?6x\xfd\xbe\x9d.O?\x8d\xb5\xa1&lt;\xd6\x02\xa3&gt;.\xc5q\xbe\xcd\xd7F\xbe\xd8\xcbj\xbe\x0b\xbb\xc0&gt;\xc8.\xa7&gt;i\xe4\xd0\xbe\x00\xcf\xb9&gt;)\xffG=\x9a\xc7\x04C\xa1\xee\x04C\xf17\x05C,\xd3}C\x12\xf2}C\xa8-~C\xcf=:Cj\x7f:C\xda\xad:C\xb7i\x8c@\x8c=\xc0@\xc0\xf4\x97@\xd4K=\xbe\xd3\xa2\xdd\xbd\\o(&gt;s\xcf\xb5\xbf\xd48\xcf\xbe&lt;\xbf\x83\xbfU\x97\xf8&lt;\xfa\x89\x02?\x81%\x7f\xbd$\x7fh\xbfQa\x05=\xb2\x0c\xb9\xbce\x9d\xa1?\xb4\xf4\x02?\xe1&lt;\x7f&gt;;\x8f&lt;\xbf\xa6\xb9\xd9\xbe8\xe9B\xbf\xaa\xc3\xa0\xbf\x85\xb4T\xbd =\x01?\xf1\xb6\xc4&gt;\x1d\xf1\xbb&gt;r\x80\xb5\xbf0i\x06\xbf\xc1\x10t\xbe\x9f\xd4\x92=\t\x19]\xbf2\xe3\x1e\xbe\x08\'#\xbf\xd3K\xf2\xbc\xfep\x96\xbe\xa6\xb7Y&gt;&lt;\r_?\xd6j\t=Yy\x07\xbf`\x91e?2\xe2\x1d&gt;r\r\x17\xbe\x9c\xfe[\xbd\xd6\x88\xe8=\x1c\xea\xcb\xbe;\xf7!?j\xc0(\xbe\xb8\xc1\xb6&gt;Uf\xf8&gt;\'\x05E\xbeFA\xc8\xbe\xab}\x8f\xbc\x85\x089&lt;\x07\xb9R?\x023\x19\xbf3\xcd{&gt;\xd85\x07&gt;\x1a`\x02\xbf3\xb6\xb6\xbd&amp;\x94u?\xedY\x8b&gt;\x0e\xa8\x82&gt;}s\xe4\xbd&lt;\x87\x8d\xbe1\x8cy&gt;G\xe9E?\xd3J\x87\xbe\xc2\x0f\xc1?\xea\x1bx\xbe\x95]\xea\xbe\x00x+\xbf\xac\xd1\xb1\xbe\x92\xe5\rA\t\x98\x0fA\x1aL\x10A\x8fOMC\xb2TLC\xc5\xa4LC\x06vxC\\\xc0xC(\xaawC]shB\x95\x0ejB\xda\x0biB\x89.\xa1\xbf6\xcd\xc6&gt;\x8by\x84\xbf\x05\xfa\xd5&gt;\x8b\x1aN\xbdz\xd9&amp;\xbd\xbf\x13\xda&gt;\x9cke?\xe4gl&gt;\x85\xa0P&gt;\xfe\x95\x9f&lt;\xa3\xab&amp;\xbf[^E?c`\xa1\xbe\xe8\xbc\xa6=E\xd3\xe3&gt;\xa7\x7f\xd0\xbc\x89FI\xbd_\xba\xa8\xbf5?Z\xbe\x0e\xban&gt;\x8e\xf1\xf0&gt;\x9f\xe8\x05\xbdQ\x1a\x8b\xbaJ,\x04?\xcd\x0f\x90?\xd1\xd8\xa1&gt;\xa6\xb3y&gt;\xfd^&quot;&gt;5\xb1&lt;\xbe\x0c\x08\xce=:l\\&gt;\xd0\xc8G\xbf\xb8\xe2d?\x85\x02B\xbf1\x05z\xbfP\xf6\x1e\xbe\xdc(\x06\xbf\xcde\x0f\xbd\xf4B\x1e\xbe\xa3QJ\xbd\x04Z\x06\xbf\x03L\x00&gt;b\xbf\xa3&gt;\x026\xf0&gt;\xfe(0&gt;\xc5\xfe_\xbe\xae\x05/\xbf\xd8F\xc5\xbb\x05\xf1\xad\xbe\xb4x\xfe\xbe\xbf\xfe\xe3=\xd6\x9a*\xbfin&lt;&gt;\x90#!&gt;\x04i3\xbf\x8c;-\xbf\x95\x8e\x13\xbf/\xf1j=H\xb7\xd4=\x1d\n\xfe&gt;`\xa1\x0e\xbf\x1f\x8f\xd7=\xd0C\xfd=\xb3G.\xbe\xef\xc3\xf4&gt;\n{\x0c&gt;\x9d\xff\x10?l\xa2\x8b&gt;iG\x18\xbfKE\x81&gt;\x94\xd0\xa5\xbei\xdc\xfbBw\x8f\xfdB\xb7@\xfdB\xd8\xc9~C\xb0\xb8}Cp\xd9|C\x8c&lt;6C\xa2\xe75CG\xec5C@\x11I&gt;j\xe7\xec=\xd5\x87;\xbf%7\x9f&gt;\xebG\xb5\xbd\xbf&lt;\x1f\xbf/\x98\x1b&gt;\xeaj\xd4&gt;\x0c\xa6\x16\xbe\xb1\xb2\xf3&gt;L\xc9\xb2\xbe\xf6&quot;\x08\xbe\xad\xe4\xe1\xbe\xd2\x1eY?\xe9\xbb(\xbfQ\xce\xf5\xbe\xf4&lt;\x9d\xbeSq\xb9&gt;\xbb\x1d\xb8&gt;\xd3\xad\x01?@\x89\xac\xbe~-\xa0\xbe\x99\xa7\x80&gt;\x9f*\x9c\xbe\xaf6\xff&gt;E\xbe\\?\xcb\xb7\x0c\xbf#!\xc7\xbe)4\x94=\xda)T\xbfe\x7fk\xbe\x94\x00\x15?\x8e\xd2J\xbb\n\xe2\x03\xbfV\xcb\xa5&gt;\x8d\r\x15&gt;\xdci\x16\xbf\x10\xff\xd9\xbe\xf8.~?\xbc\xbc\xb2&gt;\xdd\x06\xca\xbe\xef\xf62\xbf^\xfc\x04\xbfwY&quot;&gt;~\x90\x98\xbe|\xdf\xde\xbem&amp;2\xbfx\xac\x8e?\x80\x99\xdf=Wm\xec\xbcm\xdb\xa5\xbe\r\xfe\xdb&gt;\
\\x1cg\xbed\x9c\xbe\xbe&lt;\x9b\xaa=q\xa9W?\xd5\x91\x12?&quot;$\xbb\xbaH\xd7\x1e&gt;\xc6N\x97\xbe\xaa\x8d\x8a\xbf\xa5\xff\x06\xbf\xe9u\xd5\xbe\x1f\xa1*&gt;\rr\x1f\xbf\xfdG\x15\xbf\xd6\xfd\x0e\xbe/,\xe7&gt;\xa4NU\xbe]\x99j=\xd0Lq\xbdk0\xa2\xbf\x97\xeb\x95B\xfe\xdb\x95B}!\x95B\xecjzC\xe8\xc7yC\xf6\xeczC\xfcbpC\xa8{pC\xa8\x05pC4\'aB\'\x03bB\n\x9eeB}2\x01?\x14H\x8f&gt;\xf5U*?K|7&gt;\xefx1\xbf\xd2\x1b\x9a?WL\x8a\xbf\xfb\n\xc2\xbe\\\xb4\xa5\xbd\xe1\xdc\xd3\xbd\xf9\xcf\xc3\xbc\xfc\xda5&gt;%\xd4\x02\xbf\xf8\x7fy\xbb\xa2-\xe1&gt;\xc1\xbdP\xbf)\x8c6?Z\xd6\x08?\xd1\xe2X\xbeJ\x00\xd9\xbf\xefo,?\xff\xb5\xa9&gt;]\x92\x02\xbe\x8b\xb8\xfa\xbd\x03\xc1\xdd&gt;\xfa\x04\xb6&gt;\xce7\x06&gt;\t4\x06\xbf\xaf\xea\xd2\xbd?1\x04\xbe\xfa\xcb\xfa=\xfc\xd8\x86&gt;\x84\x96\x85\xbe\xe1\xc2i\xbf\xd4\x06J\xbeK\x0f\x0e?*-\x9d&gt;H\x02\x03?n\xf9\xa0\xbd\xd9\xea\x18\xbe\xb0\xd5\x15=\x1b\x87\x08\xbf\xaa\x0cE?\xa8b\x9a\xbd\xa2n\\\xbf\xc6\x82)\xbf\x18\x1e\x89\xbf1lN?\xd8\xa1\xc4\xbe\xb5\x90\xe7\xbe\x86P??9\xfd\xa4=c\xcaH\xbfb\x04#\xbe\xfd\xfb`?\x16\xd7#=\xeb\x9c\xad?\xdeD\xef&gt;\x1e\t\xd0\xbel\x99\xb6\xbd\xd5\x92\xc7\xbe\x9b\x81?&gt;\xd1\x9c\x01?\xaaj\xab&gt;\xf4\xa8N&gt;;\xf6F= s%\xbf\x7f\xf0\x91&gt;wW\\\xbf\xd1D\x92A\xec\xef\x98AC\xfc\x9cA\xf8\x9b\\C\xdb*]Ck\x1f]C\xe9;~C]Y~C\xef*~C\xd0\xee%C\x04u%C\x1f-&amp;C\xba\x08\x86&gt;\xd8\xb5\x07?\xc2\x17\x93\xbe&lt;\xaf\xaa?(\x8f\x9a\xbe\xdeFP&gt;ww\x11\xbfo\xe2J\xbd\x7f\x14\xb8\xbd\xc7.0\xbf\xb1\xbf2&gt;|\xd2e\xbf\x9d\x0c\xa5?_\x11\x11\xbf2m\xc2\xbea\xcc5=?R\x81\xbf\xa7\x9f\x97?\xfd\xae\xe5\xbe\x82\xbd\xe6\xbe\x86\tA\xbe\x9eP\x8a\xbfJey=TY\x08&gt;\x11\xaa\xa9\xbf\xc5\xb9^\xbe\x97\x8c2\xbf\xbc(\xd4\xbe\x90(W\xbcU\xcc\xed&gt;=\x1bh\xbe\xe6\xf36\xbe \xe3e\xbf\xea\x8e??T*\xa2?\x81\xb8c?\xa5&quot;\xd3&gt;a\xa3\xe0&gt;\xdc\x15\x8b\xbe\x04P\x15\xbf\x8b\x1bV&gt;{\xa9:\xbd\x10(w&gt;\xc8\x9a1?a\x06}&gt;\xb8\xfb\x14\xbe\xa9S\x06?\xb5\xb4\xba&gt;\x03\xaf\xed\xbe\x05)\xfb&gt;_\xa1\x84&gt;l\x85\x12&gt;\xbfpv\xbf\x90u:\xbe^c9\xbf\xeb\xb1\x82?\xe0\x9f\x0f\xbf\xa1;\x1a\xbfP\xa3\xc6&gt;\x08\xa0\xbe&lt;\x82\xc0\x94\xbd%\xdeJ\xbf\xc3\xd3\x03?\xdb\xa7 &gt;\xee\x01\xf2\xbdw\xed\x08\xbf\xd6XP?f\'\x0b\xbe2\x05\xde\xbe\x0b\\@@\x19]\x1c@\xa3ZE@\xb3bJC\xc0{JC\xab\xe4JC\x1b\xea}CuK}C\xee\x93}C\xc5\xa4ZCC\x8a[C\xf7\xabZC\x8f\x04\nB\x91y\rB\x9e\xf7\x0cB\x9d\x19\xdd&gt;\xf4\xbc\t?\x83\xfd\xf3&gt;Z\x07\xe9\xbe e\xd7&gt;\xe0\xf2\xbf&gt;\xbc\xb2{\xbf\x07}\xf0\xbc\xc0\x1fI&gt;\xcc\x12\x04?{yd=qY\xa6\xbe\xad7C&gt;\x84\xc1\x82=\xbbn\xd9\xbe\xc0\x84,?\xe3\x12\xa2\xbe\x8a\x81(?\x06\xccH\xbe\xec\x95T&gt;\xcb\x13\x8b\xbe\xb5I\xe9\xbe&amp;_\x02&gt;\xafq\x04?\xad\xbe\xfb\xbd\xbaiU?\xbe\xca\x16\xbfJ^F\xbf&gt;6\x89&gt;\xd8\xac\'\xbf\xcaJ\xe9&gt;I_\x8f&gt;\xa2)\x8b?f.D&gt;\xf2&quot;\xde&gt;LD\xca=(Ox\xbdD\xf7v=\xec\x01;\xbf\x1c!l&gt;\x92\x98&quot;&gt;d\xdcM\xbe:\xfcF\xbf\xd0I\x82&gt;|\x0b\xcd\xbd\xd3W\x0b?b{\xac\xbct5\x0b?\xc5\xc9?\xbe\x8fh\x91?\x93\xdc\xcf\xbe\x8by\xee\xbd\xcc %?\xca\xc6\xd0&gt;\x8c\x92\xec\xbck\x8d\x85\xbf\x0bt\x9b\xbf\x92Z0\xbf\x1fW\x1a?\xfd\x904?P`.&gt;\xa0\x84\xb1&gt;t\xe0\xaa=|\xa5\xfa\xbe&gt;zC?\xca\xf4\x01\xbf\x00\x0e\xc5\xbd\xb9\xcb\xeb\xbe\xd7Gt?\xf4\xff\x17B\xbe\x12\x17B\xfc\xe8\x19B\x18\x18~C\x1b1~C:T}C\x9c\xf7}C\xfa\x0e~C\xaa\xc2}C\xa4H\x99B\x15\xf4\x9bB\xdcq\x97B\xfbWh=r&lt; 
\xbfDQi&gt;\xfd\xf6\x02\xbf\x10&amp;\xbb&gt;\xb5\x90\xff&gt;\xab\xf2Y\xbfg&lt;9\xbeh\xc6]?S\xe2\xa5&gt;s\x7f\x95&gt;\x18^F\xbf\xcd\x8ac&gt;\xf9\x87s&gt;\xf3\xaf\xac&gt;\xed\x18\xc2&gt;m\x7f\x04?y\xf8i&gt;\xaa\x93&lt;\xbe/\x9d\x04\xbfS\x17\x8b&gt;\x88\r\x9b&gt;\x18\xc4Q?V\x1c,?W\\\x1e?yW\xb9\xbe`.m&gt;\x1e?&amp;\xbf\xe2\xff/&gt;\xa8\x88\xe1=\xd8\xaa\x82&gt;\xc4\xce\xc1\xbe\xed\x0c%\xbf7\x14b&gt;\xac\xb3\x1e?\x87j\xb9\xbf\xb1\xf3\xba&gt;\xdf=%\xbf\x003:?(\xc9\t\xbfD\x17\xc4\xbdt\xf2\n?\xce\x19\xd6=+\xc0\x8c&gt;\xf3mL?\xde\xef\x9b?@\x87\x9d&gt;\xe4\xf7\xc3\xbe4\xe2\x9e\xbe\x97\xfb\x01?\xb76\x9b\xbd\xc1\xb8\x0e?\xb5(\xd6&gt;\xf3\xa3\xa8\xbe}$\x9d&gt;\xd3\x1e6?\xc4H\x8e\xbe\xa10\x9b&gt;`\x10#?\xf2i\xf4\xbeX\xf6\xe0=^L\x14\xbftX\x04\xbf\x8e\x17\xb7&gt;\xe3\xa0\xc8&gt;o\x8e~\xbd;&amp;??\xe3\xa6A?u\xa61\xbeG\xde\xfaA\xb2\xf6\xf6A\xed\x03\xf0A\xfb\x93_CE\x02`Co\xa1`C)\xc8}C\x1bx}C\x11\xed}C\xf9\xe2\xe4B\x06\xa4\xe4B\xd7\xdd\xe6B\x9a\xce\xcd?\xda\\\x88?\x8f[\x8f?Wx\x16?Q!\x11\xbe\x8b\x0b=?\x13)\xf9:\xb9\x04\x9a\xbf\xf1\xcf\xcb\xbe\xb2\xca\x8f\xbf\xa7\xa1\x87&gt;\xfb\xaa\x1c\xbfxU0\xbf\xf7\xfd\xa9&lt;\xab\x03\xab==\x91F&gt;\x13\xdcB\xbf\x80\x1b\xda&gt;\xf7\xd55\xbeN\xca\x8b&lt;\xf5i\x1a&gt;:-L\xbe\x9f\n\xe4\xbdZ\xce\x89\xbe\x99P\x91&gt;\xc9\x10 \xbf\xb9Oh&gt;\x856\x02?\xf5\xa3\x0e&lt;V\xa3\xab\xbde\xc2$&gt;\xe1Ye?;\xfe\x93\xbd\xdf~\xfe=]\x18\xa5\xbe\xd0\t\xe1&lt;\x9e:\x0e?K\x1a\xab?\xe0\x95O?\xf4\xb9\x1d&gt;\xc5\xd4\xda=[Q@\xbf7`1&gt;_9\xec\xbd\x1a\x1e\x1c\xbf\xe4\xd7\xf7\xbe\xf0#\x17\xbdY%\xa0=\xdd\xf8\x1a\xbf:B\xf8\xbd\xcd*\x8f&gt;h\xc6x&gt;x6\xf6=\xc1\x0c\xa6\xbf\xea\nw\xbf\x8d\xa4K\xbf\xcf\xc8\x7f\xbezH#\xbf\xe5\xf0S&gt;$\xc5\x04\xbf\x97\x8fF\xbe\x0b\\\x95\xbeh\xc8\x89\xbf\xa0W\x1c?\x9c!\x92?&lt;\x1c\xc4&gt;\xc5\x13\x14&gt;\xa3nE\xbd4\x01\xa9&gt;A\x82\x84\xbf 
\x0b\xac\xbf\x1a\xf0\x9d&gt;\xe5\x03\x05Ck\x08\x04C\x8aC\x05C\xc2\x9e}C2h}C\x1d)~CS\t~C\xa4b~Cf\xa8}C\x81mOB\x16EOBF\xc0QBs\xf9r\xbfo\x8aP\xbf\xb4\x80\xca&gt;Y\xa2\xc0&gt;L\x176\xbf\x98T$\xbf\xbc,\x95?G\xe4M&gt;D\xd0\xa4\xbd\xb7\xb5\x16?G\x08B?X\xfdI&gt;\x8e\x0e\x9a&gt;\xeb\x93\t&gt;\xe9\xfe\x9c&gt;t\x8cM?RS\xa6&gt;4\x15@=\xf2N\xc0\xbe\xc4|\x1f&gt;E\xe4\x80\xbd|@\xf2&gt;\x97\xb11\xbeY\xcd&amp;\xbf\xe8j\xf0\xbeYG8&gt;\xc4\x1a\x11\xbeY\xe0m\xbf\xf3?\x06\xbe\xf4\xbe\x9c&gt;-\x88Z\xbek\xd0\xfd\xbeD\xed\x8c\xbd\xa8\x92\x0c?\xd4\\\x1b&gt;F\xbe\n\xbf]\xaf+?x\xd7\x16\xbe\x81-\x03?\xd79\xb2\xbd\x95\xdf\xa6&gt;\xe0\xe26?B\xbb&quot;\xbf!\x11\xf5&gt;\x87\xfa\x0f\xbf\xe4\xced&gt;\x92h\xe9&gt;\x8a\xac\x13\xbf\x9c\xe5@\xbeFJ\x1e\xbf\xf1#\xaf&gt;\xbc\x1f\xa7?\x9e\xc2\xa1\xbe\n\xb1\xac&gt;~\x94\xa0\xbei\xef\x17?\x1c\x9b\x94?k\xef\xdb&gt;\xb3\xfb\x93\xbfT\xfa\x89\xbe\xd1\xe8z?\xc6\xac\x1f\xbf&amp;\xb8\xcc\xbe\xa1|\x00?\xab\xc9\x16?\xb2\x00\xbb&gt;\x9a\x10\xe4=\xeb\x85\xd1\xbe\xaa\x12\xae\xbe\xc2\xc4rB\\8pB\x04nuB}\x89qCj\xb4qC{\xb5qC\x97$~C\x92g~C9\x87}C\x0b\\~C\x8a\xc3}C\xee\xd3}C\xbd/RB\xde\xc0PB\xcfGNBf\xc4W?[L\xa3&gt;\xf5\x84\xc5\xbe\x07\x07\xf4&gt;\xb5\x9b\x8d\xbe\x9dG\xde\xbe\xb2\x05\x85&gt;\x02\xe0\xa7&gt;V\x91\x93&gt;\xc5\xfb\xbb\xbe\x15\xd4\xcf&gt;$\x91\xf3=\x17\x94C??F$\xbfa&quot;\x11?_\x01\x00&gt;0\x1f\x82\xbfC\x02o&gt;\x9a\\H\xbf(\xcfZ\xbf)B\x97\xbcjk\x91&gt;\xaf\x9c\xb5\xbe\x8a\x1aF?&quot;&quot;N&gt;5Zw&gt;\x83\xb91?1\xecO&gt;\x8b\xfa\xee&gt;\x9d\xef\x9e=\xf0\xc4\xe9\xbe\xa5\x04\xe0&lt;\xd9\xca\xb0?\xabs\xed\xbe\xad\x12\x80=\x06\x9d\x02\xbf\\\n&amp;\xbf}\xd2\xae\xbe\xcd\xfeC?\x10i\x9d&gt;(\xafm?&gt;#\x9c=\xe4\xd8\x92=\xdb9\xdc\xbeI\x11\x97&gt;\x15\t\xd3=\x9e2\x1d?)c\xfb&gt;\xef`\xab&gt;\xc7\x85\xe4\xbe\xeb-\x0c\xbe\x91\xbe\xda&gt;\\\xdf\x9e?\xcb\xafl\xbeN\xd2\x83\xbe\x8a\xb2:\xbevA\x8b\xbdrJh\xbe\x95\xe1\xd4\xbe\xa4r\x80?\xd7\xbe\x01\xbf$^u&gt;\xbf\x1b6?\x8bi.?n\xd3\x93&gt;\xe34\x12\xbdT(\x02?\xd2\xcd\xdc&gt;\xb4t;&gt;uJ\xf2B\x0b\x87\xf2B\xec\x90\xf1B\x1b\x18~CW|~C`\x8c}C^\x1e~CD~~C\xbaN~C\x8e)[C\xcc\x04[C\xf9\xb8[C\x9aG\x1fB\xc0\xa4 BI\'!Bd\xf3A\xbfm.\xd6\xbe\x13\x05\xc9\xbe\xe5\xb4\xeb\xbd\xe63\xeb\xbd#\x9c\x04\xbe\xdc\xa2\x06\xbf\tc\x02?\xda%o&gt;\x92\x18S\xbf\x8dD\x8b?\x0e\x8f\xbe=@_*?\xd6\x12C?w+\x91=\xdf\xf3A?\xbd\xc4\x1c?\xcf\xa7\x1d?v\xbdA?\x07\x9b\xa1\xbc\x99\xca\xa9&gt;\xacA\xfe\xbb.\x8aK\xbe\xda \xee&gt;Q\xe2g&lt;\x0br\x13?&amp;\x8b|\xbd\x80\x9a,?\x03\xd7-&gt;`\xd3\xa1\xbd\xa3g[\xbe\xdf\x99\xe9&lt;?\xe2\x06=?{\x04\xbe\xe1\xe7\x16?\xfeU\x11=\\\xc1O?H\xda\xf8\xbe\xe3\xfeu\xbd\xd5\xdf\x19\xbe\'w\xa7&gt;{H \xbeN_1&gt;\x8b\xe3\x8e=\xfdE\xe8&gt;\xff\x173\xbd\xda\x07&amp;\xbf\\ 
B=\xb3\xb2t\xbeh\x0cg\xbf\x8cw\xeb\xbc\x85\xa1\x18&gt;\xe0\x0c\n?F\xdbL&gt;\x02\xe3\x00?R\xe7\x0b\xbf\xb9\xcc\x08&gt;,\x00&lt;\xbd\xcd\x87\n?\x1a\xec\x90\xbeL&amp;}9\x95\t\xd6\xbd)\x1a\xb0=\x0c\xf1\x87?\n\xd0;\xbfB`\xa4?\x8f\xd5\xd1\xbe~\x14r?L\xc7b&gt;\xc5a\xf0B\xf7y\xf1B\xf13\xf3B\xae!~C\xfb\x0e\x7fC\x91=}C0INCK*NC\xc6\x8eNC\xaa\n\x91A\xeb\x01\x91Aor\x8bA\xbf\x0e.&gt;3\x93D&gt;U&quot;-&gt;\xe7\xed\xd4&gt;\xb1\xed\x8f\xbe\x07\x865?f\x85\xfe\xbey\xbe\x03?m\xbd\xd7=\\Z\x01?]z\xc6\xbe\x9ag\x81\xbfa\xa2:\xbe\xd1\x0c\x86\xbe\xdeu\xaa&gt;~\x89\x8c?m{\x8a?\xad\xbb\x97\xbf@\x9d\xda&gt;\xe9\x15\x86\xbeY\x80*\xbf\x8fd&lt;\xbfH~9?\x1b\x81e?E\xac\xfd=\x08\xc32&gt;\x86\x95\xb9\xbe\xb5\xcc%?\xc5s\'\xbd\xeb\xf1\xc2\xbe\x92\xab\x18?\x1e\x9a\xfe&gt;p\x1d\xa8?Y&quot;\x1a\xbd\xce\xd5{=0K]\xbf\xa2?\xe3&gt;\x02\xba\xe6\xbeV\xe2\x94?\x1d\xe3\xfc&gt;\xea\x9cg\xbe\xba\xb4R?\xa5q\x1e\xbf\xff\xf47\xbf\xf5\x80F&gt;\xbclA\xbb\xe5\x14\x11\xbf\xd7y\xec\xbey3\x0f?\x86\x91\xc6=\xe8\x96~&gt;\xf3d\x16\xbf1v\xd0&gt;\xdd\xa1$&gt;k\xa5 &gt;\xf9k\x82&gt;\xc1\xbf\x9f&gt;\n\x94y&lt;\x96_6?\xbb\x8b\xe5&gt;\xa8N{&gt;\xd4\xfd\t?\x1f\x1f6\xbe\x99\xd5\xd0;G\xc9\xde=nSa&gt;\x9d\x00N?5z\x0c?-xJ&gt;\xb5P\x08?\xa3\xed\x9a\xbe\xf7\xdc\x04\xbf.,#\xbe\xc3l%?l\x95R&gt;2\x1a\xc1\xbd5=\xde\xbd\xf8\xecE&gt;X\xfd\x89\xbe\x1d4\xaa\xbd\xe3\x9ag&gt;vT\xbc:/s\xe3&gt;\x17\xe0C?\x8d\x8f=\xbf&lt;\\2=\xfbA?\xbf\x0cG\xf5=\xdb\x80\x12&gt;\xbb\xad\x00\xbf\xea\x96C\xbfq\xa8\xf8&gt;\xde\x84\xbb\xbeq\x19F\xbe\xe6\xbbw?_H\x1b\xbf(\xbe\x87\xbe%\xa1\xfe\xbaW\xc3Q\xbe%\x87D?\n\x8a]\xbe.P\xa0\xbe/u\xce?\x87&amp;\x92&gt;h\xb7\xf9&gt;md&quot;\xbc\x1d\xc6\x85\xbeG\x1a:?\xa0\xbc\x8c\xbe\xa8\xd8)&gt;\xbd\xfb\xdf&gt;\xaa;4\xbe!E\xf0\xbe\x90Ql&gt;\x075\xdc?\\\xf9$\xbf*M\x83\xbe\xa9\xb6R=;\xb5\x94&gt;\xec\xdd\x07&gt;\xc6C\xcf\xbd\xf3fE\xbe!A\x8b\xbc\xe2\x18\xa1\xbe\xff\x89\x1a?tN\x8f&gt;' test_bytes = bytes(test_bytes) test_image = np.frombuffer(bytes(test_bytes), dtype=np.float32).reshape(28, 28, 3) # rescale to range 0 ... 255 test_image = np.clip((test_image - test_image.min()) * 255 / (test_image.max() - test_image.min()), 0, 255).astype(np.uint8) fig, axes = plt.subplots(1, 2, figsize=(16, 4)) fig.subplots_adjust(wspace=0.1, hspace=0.1) plt.subplot(1, 2, 1) plt.imshow(test_image) plt.title(&quot;test_image&quot;) plt.subplot(1, 2, 2) plt.imshow(255 - test_image) plt.title(&quot;test_image - invert&quot;) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/6Behv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Behv.png" alt="enter image description here" /></a></p> <p>Alternatively you could also keep the image as floats and rescale to the range of 0.0 ... 1.0:</p> <pre><code>test_image = np.clip((test_image - test_image.min()) / (test_image.max() - test_image.min()), 0, 1) </code></pre> <p>Inversion is then <code>1.0 - test_image</code>.</p> <p>PS: I also changed <code>fromstring</code> to <code>frombuffer</code> to suppress the warning.</p>
python|numpy|matplotlib
1
1,906,062
31,266,249
command for inverse ERF function in python
<p>What is the command to calculate the inverse error function (the inverse of erf) in Python, and which module needs to be imported?</p>
<p>For the <em>inverse</em> error function, <code>scipy.special</code> has <code>erfinv</code>:</p> <p><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfinv.html#scipy.special.erfinv">http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.erfinv.html#scipy.special.erfinv</a></p> <pre><code>In [4]: from scipy.special import erfinv In [5]: erfinv(1) Out[5]: inf In [6]: erfinv(0.4) Out[6]: 0.37080715859355784 </code></pre>
python|python-2.7|python-3.x|numpy
17
1,906,063
15,600,280
assign value in a list
<pre><code>while True:
    p = []
    q = []
    H = []
    E = []
    for m in range(n):
        p = (i-d[m-2][1])/d[m-2][0]
        q.append(p)
    print q
    r = q[0]
    if r &lt; g[1][0]:
        r == g[1][0]
    elif r &gt; g[1][1]:
        r == g[1][1]
    print r
</code></pre> <p>To explain the problem in the code above in a simpler manner:</p> <pre><code>i = [3,5]
r = [5,6]
if i[0] &lt; r[0]:
    i[0] == r[0]
print i
</code></pre> <p>However, I could not get <code>[5,5]</code> in this example. Why? The output is still <code>[3,5]</code>.</p>
<p>The following lines are comparisons rather than assignments:</p> <pre><code>r == g[1][0]
r == g[1][1]
i[0] == r[0]
</code></pre> <p>I suspect you meant to write</p> <pre><code>r = g[1][0]
r = g[1][1]
i[0] = r[0]
</code></pre>
python|list|limit
1
1,906,064
15,863,618
python dictionary first value
<p>The following code saves candle objects in a dictionary keyed by timestamp. I have a problem with the initialisation and I get the output shown below.</p> <pre><code>class CandleBuffer:
    def __init__(self,candle=None):
        self.candlebuffer={0: candle}

    def update(self,candle):
        #print candlebuffer
        self.candlebuffer[candle.ref_timestamp]= candle
        #print self.candlebuffer[candle.ref_timestamp].last_price
        #print candle.last_price
        for matel in self.candlebuffer:
            print matel

candleBuffer = CandleBuffer()
</code></pre> <p>output</p> <pre>
0
2013-04-04 15:38:00
</pre> <p>Would you know how I can get rid of that zero "first" value?</p>
<p>In <code>__init__</code> set</p> <pre><code>self.candlebuffer = {} </code></pre> <p>I don't see a reason to put in a value you don't want.</p>
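<p>For completeness, a minimal sketch of the adjusted class (keeping the Python 2 <code>print</code> and the candle objects with a <code>ref_timestamp</code> attribute from the question):</p> <pre><code>class CandleBuffer:
    def __init__(self):
        self.candlebuffer = {}          # start empty instead of seeding key 0 with None

    def update(self, candle):
        self.candlebuffer[candle.ref_timestamp] = candle
        for matel in self.candlebuffer:
            print matel

candleBuffer = CandleBuffer()
</code></pre>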
python
3
1,906,065
59,565,679
Python: Importing file from another location
<p>I'm not sure where I'm going wrong, I've tried a bunch of the methods listed in other questions, so I'm going to re-ask in case I'm missing something.</p> <p>I have the following structure:</p> <pre><code>|-bin/
   -file.py
|-unittests/
   -__init__.py
   |-test_bin/
      -__init__.py
      -test_file.py
</code></pre> <p>I've tried the following inside <code>test_file.py</code> to no avail:</p> <p>1) <code>Import Error: No module named bin.file</code></p> <pre><code>from bin.file import *
</code></pre> <p>2) <code>Import Error: No module named bin.file</code></p> <pre><code>import sys
from os import path
sys.path.append(path.dirname(path.dirname(path.abspath(__file__))))
from bin.file import *
</code></pre> <p>3) <code>ValueError: Attempted relative import in non-package</code></p> <pre><code>from ...bin.file import *
</code></pre> <p>The command I'm using is <code>python test_file.py</code></p>
<p>For your import to work you need to be at the root of your project.</p> <p>However, you can add a setup.py file at the root of your package, which will allow you to import your functions from anywhere (see below):</p> <p>Setup.py:</p> <pre><code>from setuptools import setup

setup(
    name='test',
    version='0.1',
    packages=['bin']
)
</code></pre> <p>and then you run the following shell command at the root of your project:</p> <pre><code>python setup.py develop
</code></pre> <p><a href="https://i.stack.imgur.com/t0RDM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t0RDM.png" alt="enter image description here"></a></p> <p>With these steps you should be able to import your file as wanted.</p> <p>Note that the test.egg-info folder is automatically created.</p> <p>I hope it helps.</p>
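<p>A quick sketch of what the test module can then do, assuming <code>bin/</code> also contains an <code>__init__.py</code> so setuptools treats it as a package:</p> <pre><code># unittests/test_bin/test_file.py
# (run `python setup.py develop` once from the project root first)
from bin.file import *   # or: from bin import file

if __name__ == '__main__':
    print('import worked')   # the test can now be run from any directory
</code></pre>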
python|unit-testing|python-import
2
1,906,066
59,609,732
Equivalent tensorflow.js function for tf.compat.v1.image.resize?
<p>I want to replicate the functionality of <code>tf.compat.v1.image.resize(inputs, [height, width], align_corners=True)</code> using tensorflow.js. What function or combination of functions in tensorflow.js would give the equivalent result as the above function?</p>
<p>There is <code>tf.image.resizeBilinear</code> (respectively <code>tf.image.resizeNearestNeighbor</code>), which performs a resize using bilinear sampling (respectively nearest-neighbor sampling). Both also take an optional <code>alignCorners</code> argument, which corresponds to the <code>align_corners=True</code> flag in the question.</p>
javascript|tensorflow|keras|tensorflow.js
0
1,906,067
59,714,742
GeoJSON file not being rendered using Folium
<p>I am trying to create a choropleth map using Folium. I exported a GeoJSON file for London boroughs from an official GIS shapefile. After hours of researching about possible causes, I noticed in my file that the features come up in a different order compared to another GeoJSON file that works, which I assume is the reason for it not appearing on the map. Basically the order in mine is something like</p> <pre><code>"features": [
    "geometry": {...},
    "properties": {...},
    etc
</code></pre> <p>and the working GeoJSON has</p> <pre><code>"features": [
    "properties": {...},
    "geometry": {...},
</code></pre> <p>My question is: how can I change the order of the features, or how else can I make it render with Folium?</p> <p>The code for creating the map is as follows:</p> <pre><code>london = r'london_simple.json' # geojson file

# create a plain London map
london_map = folium.Map(location=[51.5074, 0.1278], zoom_start=10)

london_map.choropleth(
    geo_data = london,
    data = dfl1,
    columns = ['Area_name', 'GLA_Population_Estimate_2017'],
    key_on='feature.properties.Counties_1',
    fill_color = 'YlOrRd',
    fill_opacity = 0.7,
    line_opacity=0.2,
    legend_name='Population size in London'
)

london_map
</code></pre> <p>I'm working in a jupyter notebook on IBM Watson, if that makes any difference. If I use my geojson file, no choropleth regions appear. If I change to the other file, it works (provided I change the map coordinates to Toronto ([37.7749, -122.4194])).</p> <p>My code doesn't generate any error, just the plain map focused on London without the choropleth regions.</p> <p><a href="https://github.com/adrianturculet/Coursera_Capstone/blob/master/sanfran.json" rel="nofollow noreferrer">Link to working geojson</a></p> <p><a href="https://github.com/adrianturculet/Coursera_Capstone/blob/master/london_simple.json" rel="nofollow noreferrer">Link to my problematic geojson</a></p>
<p>Have you tried this instead?</p> <pre><code>key_on='feature.properties.Counties_a' </code></pre> <p>I think the code beginning with E should identify the relevant part of the shapefile.</p>
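<p>For illustration, that would mean changing only the <code>key_on</code> argument in the call from the question (everything else stays the same) - with the caveat that the values in the first column passed to <code>columns</code> have to match whatever is stored under <code>Counties_a</code>; if <code>Counties_a</code> holds the codes beginning with E, the data column must hold those codes too:</p> <pre><code>london_map.choropleth(
    geo_data = london,
    data = dfl1,
    columns = ['Area_name', 'GLA_Population_Estimate_2017'],  # first column must match the key_on values
    key_on='feature.properties.Counties_a',                   # was 'feature.properties.Counties_1'
    fill_color = 'YlOrRd',
    fill_opacity = 0.7,
    line_opacity=0.2,
    legend_name='Population size in London'
)
</code></pre>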
python-3.x|geojson|folium|choropleth
0
1,906,068
59,681,344
How to check that all elements in a list are unique
<p>How can I write a program to check that all elements in a list are unique? I have a list that is entered by the user, and I would like the program to check that the elements are unique. If they are, say list=[1,2,3,4,5], then the program continues. If not, say list=[1,2,3,4,5,5,5], then the user must re-enter the list. Thank you.</p>
<p>This link explains it in detail: <a href="https://www.geeksforgeeks.org/python-check-if-list-contains-all-unique-elements/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-check-if-list-contains-all-unique-elements/</a></p> <pre><code># to check that all list elements are unique
flag = len(set(test_list)) == len(test_list)
</code></pre>
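<p>To get the re-entering behaviour described in the question, that check can sit inside a loop. A rough sketch, assuming Python 3 and a space-separated line of integers as input (adjust the parsing to however the list is actually entered):</p> <pre><code>while True:
    test_list = [int(x) for x in input('Enter the list values separated by spaces: ').split()]
    if len(set(test_list)) == len(test_list):
        break  # all elements are unique -- the program continues
    print('Duplicate values found, please re-enter the list.')
</code></pre>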
python|list|loops|pairwise
2
1,906,069
59,711,256
expire JWT token if new JWT token is generated
<p>How do I expire a previously generated JWT token when a new JWT token is generated for the same credentials?</p> <pre><code>user = User.objects.get(email=req['email'])
payload = jwt_payload_handler(user)
token = jwt_encode_handler(payload)
</code></pre>
<p>This is a core feature of JWT tokens - <strong>the token carries its validity period in itself</strong>, so there is no need to store the token in a database or make a database (or other) call to validate it - just check its expiration time field.</p> <p><em><strong>The token expires</strong></em> and is no longer accepted <em><strong>automatically after its expiration date</strong></em>. One approach is to use <em><strong>short-lived</strong></em> tokens, so that in case of manual invalidation (which is usually the generation of a new token) at the backend there is only a small, acceptable window in which the old token is still active.</p> <p>Another option is to add the manually expired token to a <strong>blacklist</strong> and, for each incoming request / token, check that it is not present in the blacklist.</p>
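<p>A rough sketch of the blacklist idea is below. The model and helper names are invented here for illustration; <code>jwt_payload_handler</code> / <code>jwt_encode_handler</code> are the ones already imported in your snippet:</p> <pre><code>from django.db import models
from django.utils import timezone

class BlacklistedToken(models.Model):
    token = models.CharField(max_length=500, unique=True)   # field sizes are arbitrary
    blacklisted_at = models.DateTimeField(default=timezone.now)

def issue_new_token(user, old_token=None):
    if old_token:
        # invalidate the token previously issued for the same credentials
        BlacklistedToken.objects.get_or_create(token=old_token)
    payload = jwt_payload_handler(user)
    return jwt_encode_handler(payload)

def is_token_allowed(token):
    # call this from your authentication code on every incoming request
    return not BlacklistedToken.objects.filter(token=token).exists()
</code></pre>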
python|django|django-rest-framework|django-rest-framework-jwt
3
1,906,070
49,258,897
Converting an entire Python project to an exe
<p>So I've made a project which takes a photo of your face or you can use any image and let it be fed into a neural network trained to classify celebrities. With that, you can figure out what celeb you look like.</p> <p>Here is the <a href="https://www.youtube.com/watch?v=TLlnnqRcJXw" rel="nofollow noreferrer">Video</a> and <a href="https://github.com/5Volts/Celebrities" rel="nofollow noreferrer">GitHub</a> link.</p> <p>I wanted to convert it to exe so that it's easier for people to download and use it but so far I've tried cx_freeze, pyinstaller and none of them work. I can't use py2exe because I'm on python 3.5.</p> <p>I was able to convert live_detect.py(which is the main python script that runs the program) to an exe with pyinstaller but when I run it I got the following error.<a href="https://i.stack.imgur.com/zP4Vb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zP4Vb.png" alt="img of error"></a></p> <p>I think a lot of the python to exe converter would have trouble converting a project this size. Other than modules, there are a lot of different items needed in different directory for the neural network model and so on. Was wondering if any of you had any suggestion.</p>
<p>There is a program called <code>auto-py-to-exe</code> which creates an exe out of your program without requiring you to create a setup file. The interface is great and allows you to easily create an exe without writing a setup.py. This also allows you to package your app as a single exe, without any other files. Below is a screenshot:</p> <p><a href="https://i.stack.imgur.com/nEgR2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nEgR2.png" alt="screenshot" /></a></p> <p>You can install the program by typing into the command line:</p> <p><code>python -m pip install auto-py-to-exe</code></p> <p>You can run it by typing in:</p> <p><code>auto-py-to-exe</code></p> <p>To see more about auto-py-to-exe, please visit the PyPI page at <a href="https://pypi.org/project/auto-py-to-exe/" rel="nofollow noreferrer">https://pypi.org/project/auto-py-to-exe/</a>.</p>
python|exe|pyinstaller|py2exe|cx-freeze
0
1,906,071
25,222,833
Change the "send code to interpreter" (C-c |) command in python-mode
<p>I am used to "C-c C-r" command to send code to the interpreter in R with Emacs speaks statistics. How can I set python-mode to use "C-c C-r" rather than "C-c |" to evaulate code?</p> <p>Thanks! </p>
<p>It's already bound to <code>C-c C-r</code> in the built-in python.el, but here's a command that binds the key. If you're using python-mode.el, you'll have to change the library name, the command, and maybe the map.</p> <pre><code>(eval-after-load "python" '(progn (define-key python-mode-map (kbd "C-c C-r") 'python-shell-send-region))) </code></pre>
python|emacs|python-mode
1
1,906,072
24,977,898
Why does collections.MutableSet not bestow an update method?
<p>When implementing a class that works like a set, one can inherit from <code>collections.MutableSet</code>, which will bestow the new class with several mixin methods, if you implement the methods they require. (Said otherwise, some of the methods of a set can be implemented in terms of other methods. To save you from this boredom, <code>collections.MutableSet</code> and friends contain just those implementations.)</p> <p><a href="https://docs.python.org/2/library/collections.html#collections-abstract-base-classes" rel="noreferrer">The docs</a> say the abstract methods are:</p> <blockquote> <p><code>__contains__</code>, <code>__iter__</code>, <code>__len__</code>, <code>add</code>, <code>discard</code></p> </blockquote> <p>and that the mixin methods are</p> <blockquote> <p>Inherited <code>Set</code> methods and <code>clear</code>, <code>pop</code>, <code>remove</code>, <code>__ior__</code>, <code>__iand__</code>, <code>__ixor__</code>, and <code>__isub__</code></p> </blockquote> <p>(And, just to be clear that <code>update</code> is not part of the "Inherited <code>Set</code> methods, <code>Set</code>'s mixin methods are:</p> <blockquote> <p><code>__le__</code>, <code>__lt__</code>, <code>__eq__</code>, <code>__ne__</code>, <code>__gt__</code>, <code>__ge__</code>, <code>__and__</code>, <code>__or__</code>, <code>__sub__</code>, <code>__xor__</code>, and <code>isdisjoint</code></p> </blockquote> <p>However, <code>Set</code> refers to an immutable set, which naturally would not have <code>update</code>.)</p> <p><strong>Why is <code>update</code> not among these methods?</strong> I find it surprising — unintuitive even — that <code>set</code> contains this method, but <code>collections.Set</code> does not. For example, it causes the following:</p> <pre><code>In [12]: my_set Out[12]: &lt;ms.MySet at 0x7f947819a5d0&gt; In [13]: s Out[13]: set() In [14]: isinstance(my_set, collections.MutableSet) Out[14]: True In [15]: isinstance(s, collections.MutableSet) Out[15]: True In [16]: s.update Out[16]: &lt;function update&gt; In [17]: my_set.update --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-17-9ed968a9eb18&gt; in &lt;module&gt;() ----&gt; 1 my_set.update AttributeError: 'MySet' object has no attribute 'update' </code></pre> <p>Perhaps stranger is that <code>MutableMapping</code> <em>does</em> bestow an <code>update</code> method, while <code>MutableSet</code> does not. AFAICT, the <a href="http://hg.python.org/cpython/file/default/Lib/_collections_abc.py" rel="noreferrer">source code</a> does not mention any reason for this.</p>
<p>The API for <em>MutableSet</em> was designed by Guido van Rossum. His proposal is articulated in <a href="http://legacy.python.org/dev/peps/pep-3119/#sets">PEP 3119's section on for Sets</a>. Without elaboration, he specified that:</p> <blockquote> <p>"This class also defines concrete operators to compute union, intersection, symmetric and asymmetric difference, respectively __or__, __and__, __xor__ and __sub__"</p> </blockquote> <p>... </p> <blockquote> <p>"This also supports the in-place mutating operations |=, &amp;=, ^=, -=. These are concrete methods whose right operand can be an arbitrary Iterable, except for &amp;=, whose right operand must be a Container. This ABC does not provide the named methods present on the built-in concrete set type that perform (almost) the same operations."</p> </blockquote> <p>There is not a bug or oversight here; rather, there is a matter of opinion about whether you like or don't like Guido's design.</p> <p>The <em>Zen of Python</em> has something to say about that:</p> <ul> <li>There should be one-- and preferably only one --obvious way to do it.</li> <li>Although that way may not be obvious at first unless you're Dutch.</li> </ul> <p>That said, abstract base classes were designed to be easy to extend. It is trivial to add your own <em>update()</em> method to your concrete class with <code>update = Set.__ior__</code>. </p>
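<p>Since the question's <code>MySet</code> already implements the five abstract methods, a minimal sketch of opting back in to the named method looks like this (using the Python 2-era <code>collections.MutableSet</code> to match the question; the internal <code>set</code> attribute is just one possible way to store the data):</p> <pre><code>import collections

class MySet(collections.MutableSet):
    def __init__(self, iterable=()):
        self._data = set(iterable)

    # the five abstract methods
    def __contains__(self, value):
        return value in self._data

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    def add(self, value):
        self._data.add(value)

    def discard(self, value):
        self._data.discard(value)

    # opt back in to the built-in spelling; note it returns self, not None
    update = collections.MutableSet.__ior__

s = MySet([1, 2])
s.update([2, 3, 4])
print(sorted(s))   # [1, 2, 3, 4]
</code></pre>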
python|collections|set|mixins
9
1,906,073
59,946,059
How to manually reset the data in a step of a formtool wizard?
<p>I have a wizard with several steps organized as follows:</p> <pre><code>1 -&gt; 2 -&gt; 3 -&gt; 4 -&gt; 6
          |         ^
          |         |
          --&gt; 5 ---
</code></pre> <p>where step 6 is just a review of the data in previous steps. Steps 4 and 5 are mutually exclusive. </p> <p>If a user travels the form 1,2,3,4,6 and then decides to use 5 instead of 4, I want to be able to reset the data in step 4. How can I manually reset the data already stored for step 4 (or any step) of the wizard?</p>
<p>This is not really documented in django-formtools, but you'll find that the <code>WizardView</code> has a property <code>self.storage</code> which is an instance of <code>BaseStorage</code> (in 'formtools.wizard.storage.base'). </p> <p><code>self.storage.data</code> is a dictionary of all the stored data. It's a bit dangerous to manipulate this dictionary directly, better use the method <code>self.storage.set_step_data(step, data)</code> to change the data for specific step:</p> <pre><code>self.storage.set_step_data('4', {}) </code></pre> <p>will empty the data for step '4'.</p> <p>Note: If you're also uploading files, you should remove them, which is a bit tricky, because <code>self.storage.set_step_files(step, files)</code> doesn't do anything if <code>files</code> is empty (<code>{}</code>). Look at that method to either override it or see how to remove the files from the data dictionary.</p>
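<p>As an illustration, one place to call it is a <code>process_step()</code> override, so the stale step is wiped the moment the user goes down the other branch; the step names <code>'4'</code> and <code>'5'</code> here just stand in for whatever your wizard actually calls its steps:</p> <pre><code>from formtools.wizard.views import SessionWizardView

class MyWizard(SessionWizardView):
    def process_step(self, form):
        if self.steps.current == '5':
            # the user chose the step-5 branch, so drop whatever step 4 stored
            self.storage.set_step_data('4', {})
        return super(MyWizard, self).process_step(form)
</code></pre>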
django|python-3.x|django-formtools
1
1,906,074
67,981,723
Unable to locate element href </a>
<p>Please find the attached image. I want to fetch the admins' and moderators' names and href links.</p> <p>I have tried the following:</p> <pre><code>grp=&quot;https://m.facebook.com/groups/162265541050378?view=members&amp;ref=m_notif&amp;notif_t=group_r2j_approved&quot;
driver.get(grp)
root1=driver.find_element_by_id(&quot;//*[@id='rootcontainer']&quot;)
if root1&gt;0:
    admin=driver.find_elements_by_xpath(&quot;//*[@class='_4kk6 _5b6s']&quot;)
    ilink = admin.get_attribute('href')
    ilink2=admin.get_attribute('&lt;a&gt;')
    print(ilink)
</code></pre> <p>error</p> <pre><code>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {&quot;method&quot;:&quot;css selector&quot;,&quot;selector&quot;:&quot;[id=&quot;//*[@id='rootcontainer']&quot;]&quot;}
  (Session info: chrome=91.0.4472.101)
</code></pre> <p><a href="https://i.stack.imgur.com/LesB4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LesB4.png" alt="enter image description here" /></a></p>
<p>First of all, <code>//*[@id='rootcontainer']</code> is an XPath expression, not an ID. So you can use it like this:</p> <pre><code>root1=driver.find_element_by_xpath(&quot;//*[@id='rootcontainer']&quot;) </code></pre> <p>or this:</p> <pre><code>root1=driver.find_element_by_id(&quot;rootcontainer&quot;) </code></pre> <p>Also, this <code>ilink2=admin.get_attribute('&lt;a&gt;')</code> is not correct and will not work.</p>
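<p>To get the names and links once the locator works, a sketch continuing from the driver session in the question could look like this - it assumes each matched block contains the admin's <code>&lt;a&gt;</code> tag, so adjust the inner locator to the real markup:</p> <pre><code>admins = driver.find_elements_by_xpath(&quot;//*[@class='_4kk6 _5b6s']&quot;)  # note: elements, plural -- this is a list
for admin in admins:
    link = admin.find_element_by_tag_name('a')          # the anchor inside each block (assumption)
    print(link.text, link.get_attribute('href'))
</code></pre>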
python-3.x|selenium|selenium-webdriver|web-scraping|href
1
1,906,075
30,575,072
Error when trying to scrape images
<p>I'm trying to download images via URLs stored in a .txt file using Python 3, and I'm getting an error when trying to do so on some websites. This is the error I get:</p> <pre><code>  File "C:/Scripts/ImageScraper/ImageScraper.py", line 14, in &lt;module&gt;
    dl()
  File "C:/Scripts/ImageScraper/ImageScraper.py", line 10, in dl
    urlretrieve(URL, IMAGE)
  File "C:\Python34\lib\urllib\request.py", line 186, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "C:\Python34\lib\urllib\request.py", line 161, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Python34\lib\urllib\request.py", line 469, in open
    response = meth(req, response)
  File "C:\Python34\lib\urllib\request.py", line 579, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Python34\lib\urllib\request.py", line 507, in error
    return self._call_chain(*args)
  File "C:\Python34\lib\urllib\request.py", line 441, in _call_chain
    result = func(*args)
  File "C:\Python34\lib\urllib\request.py", line 650, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
</code></pre> <p>using this code:</p> <pre><code>from urllib.request import urlretrieve

def dl():
    with open('links.txt', 'r') as input_file:
        for line in input_file:
            URL = line
            IMAGE = URL.rsplit('/',1)[1]
            urlretrieve(URL, IMAGE)

if __name__ == '__main__':
    dl()
</code></pre> <p>I'm assuming it's because they do not allow 'bots' to access their website, but with some research I found out there is a way around it, at least when using urlopen. However, I can't manage to apply that workaround to my code when I'm using urlretrieve. Is it possible to get this to work?</p>
<p>I think the error is an actual HTTP Error : 403, saying Access is forbidden to that URL. You might want to try and print the URL before it is accessed and try accessing the URL through your browser. You should also get a forbidden error (403). Learn more about <a href="https://en.wikipedia.org/wiki/List_of_HTTP_status_codes" rel="nofollow">http_status_codes</a> and specifically <a href="https://en.wikipedia.org/wiki/HTTP_403" rel="nofollow">403 forbidden</a></p>
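<p>If the same URL does open in a browser, the 403 often comes from the site rejecting urllib's default user agent. A sketch of the urlopen workaround applied to this script is below - whether it helps depends entirely on the site, and some sites forbid automated downloads no matter what headers you send:</p> <pre><code>from urllib.request import Request, urlopen

def dl():
    with open('links.txt', 'r') as input_file:
        for line in input_file:
            URL = line.strip()               # drop the trailing newline from the file
            IMAGE = URL.rsplit('/', 1)[1]
            req = Request(URL, headers={'User-Agent': 'Mozilla/5.0'})
            with urlopen(req) as response, open(IMAGE, 'wb') as out_file:
                out_file.write(response.read())

if __name__ == '__main__':
    dl()
</code></pre>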
python|python-3.x|urllib
1
1,906,076
42,834,490
Too few parameters error, while no parameters placeholders used
<p>I am trying to execute an SQL query against an Access database using pyodbc, and I get the following error: </p> <blockquote> <p>pyodbc.Error: ('07002', '[07002] [Microsoft][ODBC Microsoft Access Driver] Too few parameters. Expected 1. (-3010) (SQLExecDirectW)') </p> </blockquote> <p>The problem is that I am not using any additional parameters. Here is the code:</p> <pre><code>access_con_string = r"Driver={};Dbq={};".format(driver, base)
cnn = pyodbc.connect(access_con_string)
db_cursor = cnn.cursor()

expression = """SELECT F_ARODES.ARODES_INT_NUM, F_ARODES.TEMP_ADRESS_FOREST,F_AROD_LAND_USE.ARODES_INT_NUM, F_ARODES.ARODES_TYP_CD
FROM F_ARODES LEFT JOIN F_AROD_LAND_USE ON F_ARODES.ARODES_INT_NUM = F_AROD_LAND_USE.ARODES_INT_NUM
WHERE (((F_AROD_LAND_USE.ARODES_INT_NUM) Is Null) AND ((F_ARODES.ARODES_TYP_CD)="wydziel") AND ((F_ARODES.TEMP_ACT_ADRESS)=True));"""

db_cursor.execute(expression)
</code></pre> <p>The query itself works fine if used inside MS Access. The connection is also OK, as other queries are executed properly. What am I doing wrong?</p>
<p>I had a similar problem, with an update I was trying to perform with pyodbc. When executed in Access, the query worked fine, same for when using the application (it allows some queries from within the app). But when ran in python with pyodbc the same text would throw errors. I determined the problem is the double quote (OP's query has a set of them as well). The query began to work when I replaced them with single quotes.</p> <p>This does not work:</p> <pre><code>Update ApplicationStandards Set ShortCutKey = "I" Where ShortName = "ISO" </code></pre> <p>This does:</p> <pre><code>Update ApplicationStandards Set ShortCutKey = 'I' Where ShortName = 'ISO' </code></pre>
python|ms-access|pyodbc
2
1,906,077
42,699,230
How to auto-extract data from an HTML file with Python?
<p>I'm beginning to learn python (2.7) and would like to extract certain information from a html code stored in a text file. The code below is just a snippet of the whole html code. In the full html text file the code structure is the same for all other firms data as well and these html code "blocks" are positioned underneath each other (if the latter info helps). </p> <p>The html snippet code:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code> &lt;body&gt;&lt;div class="tab_content-wrapper noPrint"&gt;&lt;div class="tab_content_card"&gt; &lt;div class="card-header"&gt; &lt;strong title="" d.="" kon.="" nl=""&gt;"Liberty Associates LLC"&lt;/strong&gt; &lt;span class="tel" title="Phone contacts"&gt;Phone contacts&lt;/span&gt; &lt;/div&gt; &lt;div class="card-content"&gt; &lt;table&gt; &lt;tbody&gt; &lt;tr&gt; &lt;td colspan="4"&gt; &lt;label class="downdrill-sbi" title="Industry: Immigration"&gt;Industry: Immigration&lt;/label&gt; &lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td width="20"&gt;&amp;nbsp;&lt;/td&gt; &lt;td width="245"&gt;&amp;nbsp;&lt;/td&gt; &lt;td width="50"&gt;&amp;nbsp;&lt;/td&gt; &lt;td width="80"&gt;&amp;nbsp;&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td colspan="2"&gt; 59 Wall St&lt;/td&gt; &lt;td&gt;&lt;/td&gt; &lt;td&gt;&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td colspan="2"&gt;NJ 07105&amp;nbsp;&amp;nbsp; &lt;label class="downdrill-sbi" title="New York"&gt;New York&lt;/label&gt; &lt;/td&gt; &lt;td&gt;&lt;/td&gt; &lt;td&gt;&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;&amp;nbsp;&lt;/td&gt; &lt;td&gt;&amp;nbsp;&lt;/td&gt; &lt;td&gt;&amp;nbsp;&lt;/td&gt; &lt;td&gt;&amp;nbsp;&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt;&lt;td&gt;Phone:&lt;/td&gt;&lt;td&gt;+1 973-344-8300&lt;/td&gt;&lt;td&gt;Firm Nr:&lt;/td&gt;&lt;td&gt;KL4568TL&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt;&lt;td&gt;Fax:&lt;/td&gt;&lt;td&gt;+1 973-344-8300&lt;/td&gt;&lt;td colspan="2"&gt;&lt;/td&gt;&lt;/tr&gt; &lt;tr&gt; &lt;td colspan="2"&gt; &lt;a href="http://www.liberty.edu/" target="_blank"&gt;www.liberty.edu&lt;/a&gt; &lt;/td&gt; &lt;td&gt;Active:&lt;/td&gt; &lt;td&gt;Yes&lt;/td&gt; &lt;/tr&gt; &lt;/tbody&gt; &lt;/table&gt; &lt;/div&gt; &lt;/div&gt;&lt;/div&gt;&lt;/body&gt;</code></pre> </div> </div> </p> <p>How it looks like on a webpage: <a href="https://i.stack.imgur.com/BRreD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BRreD.png" alt="enter image description here"></a></p> <p>Right now im using the following script to extract the desired information:</p> <pre><code>from lxml import html str = open('html1.txt', 'r').read() tree = html.fromstring(str) for variable in tree.xpath('/html/body/div/div'): company_name = variable.xpath('/html/body/div/div/div[1]/strong/text()') location = variable.xpath('/html/body/div/div/div[2]/table/tbody/tr[4]/td[1]/label/text()') website = variable.xpath('/html/body/div/div/div[2]/table/tbody/tr[8]/td[1]/a/text()') print(company_name, location, website) </code></pre> <p>Printed result:</p> <pre><code>('"Liberty Associates LLC"', 'New York', 'www.liberty.edu') </code></pre> <p>So far so good. However, when I use the script above to scape the <strong>whole</strong> html file, results are <em>printed</em> right after each other on one single line. 
But I would like to print the data (html code "blocks") under each other like this:</p> <pre><code>Liberty Associates LLC | New York | +1 973-344-8300 | www.liberty.edu
Company B | Los Angeles | +1 213-802-1770 | perchla.com
</code></pre> <p>I know I can use <code>[0]</code>, <code>[1]</code>, <code>[2]</code> etc. to get the data under each other like I would like, but doing this manually for all thousands of html "blocks" is just not really feasible.</p> <p>So my question: how can I automatically extract the data "block by block" from the html code and print the results <em>under each other</em> as illustrated above? </p>
<p>I think what you want is:</p> <pre><code>print(company_name, location, website, '\n') </code></pre>
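<p>If the goal is the one-row-per-firm output shown in the question, a sketch along these lines may be closer to what is needed. It reuses the XPath expressions from the question but makes them relative to each block (no leading <code>/html/body/...</code>), since an absolute path inside the loop searches the whole document again; treat the exact paths as assumptions about the markup:</p> <pre><code>for variable in tree.xpath('/html/body/div/div'):
    company_name = variable.xpath('div[1]/strong/text()')
    location = variable.xpath('div[2]/table/tbody/tr[4]/td[1]/label/text()')
    website = variable.xpath('div[2]/table/tbody/tr[8]/td[1]/a/text()')
    # take the first match of each query (or '' if nothing matched) and join with |
    fields = [items[0].strip() if items else '' for items in (company_name, location, website)]
    print(' | '.join(fields))
</code></pre>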
python|html|xml-parsing
0
1,906,078
72,361,243
Why is installing KivyMD showing an error (Windows)?
<p>I just installed Kivy on Windows. After that, I tried to install KivyMD via the <code>python3 -m pip install kivymd</code> command, but it gives an error that says <code>the system cannot execute the specified program</code>.</p>
<p>What <code>CMD</code> or <code>command-line-interface</code> are you using?</p> <p>Try:</p> <p><code>pip install kivymd</code></p> <p>without mentioning <code>python</code>.</p> <p><a href="https://www.freecodecamp.org/news/how-to-use-pip-install-in-python/" rel="nofollow noreferrer">https://www.freecodecamp.org/news/how-to-use-pip-install-in-python/</a></p>
python|kivymd
0
1,906,079
65,587,625
How to efficiently load mixed-type pandas DataFrame into an Oracle DB
<p>Happy new year everyone!</p> <p>I'm currently struggling with <strong>ETL performance issues</strong> as I'm trying to write <strong>larger Pandas DataFrames (1-2 mio rows, 150 columns) into an Oracle database</strong>. Even for just 1000 rows, Pandas' default <code>to_sql()</code> method runs well over 2 minutes (see code snippet below).</p> <p>My strong hypothesis is that these performance issues are in some way related to the <strong>underlying data types</strong> (mostly strings). I ran the same job on 1000 rows of random strings (benchmark: 3 min) and 1000 rows of large random floats (benchmark: 15 seconds).</p> <pre><code>def _save(self, data: pd.DataFrame):
    engine = sqlalchemy.create_engine(self._load_args['con'])
    table_name = self._load_args[&quot;table_name&quot;]
    if self._load_args.get(&quot;schema&quot;, None) is not None:
        table_name = self._load_args['schema'] + &quot;.&quot; + table_name

    with engine.connect() as conn:
        data.to_sql(
            name=table_name,
            conn=conn,
            if_exists='replace',
            index=False,
            method=None  # oracle dialect does not support multiline inserts
        )
    return
</code></pre> <p>Anyone here who has experience in efficiently loading mixed data into an Oracle database using Python?</p> <p>Any hints, code snippets and/or API recommendations are very much appreciated.</p> <p>Cheers,</p>
<p>As said in your question, you are not able to use <code>method='multi'</code> with your db flavor. This is the key reason inserts are so slow, as the data goes in row by row.</p> <p>Using SQL*Loader as suggested by @GordThompson may be the fastest route for a relatively wide/big table. <a href="https://stackoverflow.com/a/41080/327165">Example on setting up SQL*Loader</a></p> <p>Another option to consider is <a href="https://oracle.github.io/python-cx_Oracle/" rel="nofollow noreferrer">cx_Oracle</a>. See <a href="https://stackoverflow.com/a/42769557/327165">Speed up to_sql() when writing Pandas DataFrame to Oracle database using SqlAlchemy and cx_Oracle</a>.</p>
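<p>For orientation, a rough sketch of a batched insert with cx_Oracle's <code>executemany()</code> is below. Credentials, table name and column handling are placeholders, and NaN/dtype handling is deliberately left out, so treat it as a starting point rather than a drop-in replacement for <code>to_sql()</code>:</p> <pre><code>import cx_Oracle
import pandas as pd

def save_frame(df: pd.DataFrame, user: str, password: str, dsn: str, table: str):
    cols = list(df.columns)
    placeholders = ', '.join(':{}'.format(i + 1) for i in range(len(cols)))
    sql = 'INSERT INTO {} ({}) VALUES ({})'.format(table, ', '.join(cols), placeholders)
    rows = [tuple(r) for r in df.itertuples(index=False, name=None)]

    conn = cx_Oracle.connect(user, password, dsn)
    try:
        cur = conn.cursor()
        cur.executemany(sql, rows)   # one round trip per batch instead of per row
        conn.commit()
    finally:
        conn.close()
</code></pre>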
python|pandas|oracle|sqlalchemy|cx-oracle
3
1,906,080
50,918,538
Initializing or populating multiple numpy arrays from h5 file groups
<p>I have an h5 file with 5 groups, each group containing a 3D dataset. I am looking to build a for loop that allows me to extract each group into a numpy array and assign the numpy array to an object with the group header name. I am able to get a number of different methods to work with one group, but when I try to build a for loop that applies to code to all 5 groups, it breaks. For example:</p> <pre><code>import h5py as h5 import numpy as np f = h5.File("FFM0012.h5", "r+") #read in h5 file print(list(f.keys())) #['FFM', 'Image'] for my dataset FFM = f['FFM'] #Generate object with all 5 groups print(list(FFM.keys())) #['Amp', 'Drive', 'Phase', 'Raw', 'Zsnsr'] for my dataset Amp = FFM['Amp'] #Generate object for 1 group Amp = np.array(Amp) #Turn into numpy array, this works. </code></pre> <p>Now when I try to apply the same logic with a for loop:</p> <pre><code>h5_keys = [] FFM.visit(h5_keys.append) #Create list of group names ['Amp', 'Drive', 'Phase', 'Raw', 'Zsnsr'] for h5_key in h5_keys: tmp = FFM[h5_key] h5_key = np.array(tmp) print(Amp[30,30,30]) #To check that array is populated </code></pre> <p>When I run this code I get "NameError: name 'Amp' is not defined". I've tried initializing the numpy array before the for loop with:</p> <pre><code>h5_keys = [] FFM.visit(h5_keys.append) #Create list of group names Amp = np.array([]) for h5_key in h5_keys: tmp = FFM[h5_key] h5_key = np.array(tmp) print(Amp[30,30,30]) #To check that array is populated </code></pre> <p>This produces the error message "IndexError: too many indices for array"</p> <p>I've also tried generating a dictionary and creating numpy arrays from the dictionary. That is a similar story where I can get the code to work for one h5 group, but it falls apart when I build the for loop. </p> <p>Any suggestions are appreciated!</p>
<p>You seem to have jumped to using <code>h5py</code> and <code>numpy</code> before learning much of Python</p> <pre><code>Amp = np.array([]) # creates a numpy array with 0 elements for h5_key in h5_keys: # h5_key is set of a new value each iteration tmp = FFM[h5_key] h5_key = np.array(tmp) # now you reassign h5_key print(Amp[30,30,30]) # Amp is the original (0,) shape array </code></pre> <p>Try this basic python loop, paying attention to the value of <code>i</code>:</p> <pre><code>alist = [1,2,3] for i in alist: print(i) i = 10 print(i) print(alist) # no change to alist </code></pre> <hr> <p><code>f</code> is the file.</p> <pre><code>FFM = f['FFM'] </code></pre> <p>is a <code>group</code></p> <pre><code>Amp = FFM['Amp'] </code></pre> <p>is a dataset. There are various ways of load the dataset into an numpy array. I believe the <code>[...]</code> slicing is the current preferred one. <code>.value</code> used to used but is now <a href="http://docs.h5py.org/en/latest/whatsnew/2.1.html#dataset-value-property-is-now-deprecated" rel="nofollow noreferrer">deprecated</a> (<a href="http://docs.h5py.org/en/latest/high/dataset.html#reading-writing-data" rel="nofollow noreferrer">loading dataset</a>)</p> <pre><code>Amp = FFM['Amp'][...] </code></pre> <p>is an array.</p> <pre><code>alist = [FFM[key][...] for key in h5_keys] </code></pre> <p>should create a list of arrays from the <code>FFM</code> group.</p> <p>If the shapes are compatible, you can concatenate the arrays into one array:</p> <pre><code>np.array(alist) np.stack(alist) np.concatatenate(alist, axis=0) # or other axis </code></pre> <p>etc</p> <pre><code>adict = {key: FFM[key][...] for key in h5_keys} </code></pre> <p>should crate of dictionary of array keyed by dataset names.</p> <p>In Python, lists and dictionaries are the ways of accumulating objects. The <code>h5py</code> groups behave much like dictionaries. Datasets behave much like numpy arrays, though they remain on the disk until loaded with <code>[...]</code>.</p>
python|numpy|h5py
1
1,906,081
50,576,617
HTTP Error 404: Not Found when starting up pdfbox
<p>I wan't to use pdfbox in python, I have installed using this <a href="https://pypi.org/project/python-pdfbox/" rel="nofollow noreferrer">https://pypi.org/project/python-pdfbox/</a> , but when I try to run <code>p = pdfbox.PDFBox()</code> I am getting following error.</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/home/suyog/anaconda3/lib/python3.6/site-packages/pdfbox/__init__.py", line 81, in __init__ self.pdfbox_path = self._get_pdfbox_path() File "/home/suyog/anaconda3/lib/python3.6/site-packages/pdfbox/__init__.py", line 57, in _get_pdfbox_path r = urllib.request.urlopen(pdfbox_url) File "/home/suyog/anaconda3/lib/python3.6/urllib/request.py", line 223, in urlopen return opener.open(url, data, timeout) File "/home/suyog/anaconda3/lib/python3.6/urllib/request.py", line 532, in open response = meth(req, response) File "/home/suyog/anaconda3/lib/python3.6/urllib/request.py", line 642, in http_response 'http', request, response, code, msg, hdrs) File "/home/suyog/anaconda3/lib/python3.6/urllib/request.py", line 570, in error return self._call_chain(*args) File "/home/suyog/anaconda3/lib/python3.6/urllib/request.py", line 504, in _call_chain result = func(*args) File "/home/suyog/anaconda3/lib/python3.6/urllib/request.py", line 650, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 404: Not Found </code></pre> <p>Any idea how to use PDFBOX in ubuntu?</p>
<p>Adding on to this answer, since it feels incomplete to a person installing this for the first time.</p> <p>Doing a <code>pip install python-pdfbox</code> points to the project <a href="https://pypi.org/project/python-pdfbox/" rel="nofollow noreferrer">https://pypi.org/project/python-pdfbox/</a>, that is the expected behavior.</p> <p>The usage instructions indicate to instantiate the pdfbox object like so: <code>p = pdfbox.PDFbox()</code>.</p> <p>At this point, some of us seeking answers may encounter said HTTP Error in this question.</p> <p>Looking into the repository, notice that the version of pdfbox to download is <a href="https://github.com/lebedov/python-pdfbox/blob/master/pdfbox/__init__.py" rel="nofollow noreferrer">hardcoded</a>. This would imply anyone who pip installs this package will need to be "lucky" enough to have the version of apache pdfbox (which is a java library) at the same version as that.</p> <h2>Solution:</h2> <p>Disclaimer: I sought to make this work for Windows 10.</p> <p>The package init looks for pdfbox-app on the <a href="https://pypi.org/project/python-pdfbox/" rel="nofollow noreferrer">environment variable</a>. If it does not find it, it tries to download one. Hence the error.</p> <ol> <li>Download the latest <code>pdfbox-app-{version}.jar</code> from <a href="https://pdfbox.apache.org/download.cgi" rel="nofollow noreferrer">pdfbox apache</a>.</li> <li>Set the environment variable for PDFBOX e.g <code>set PDFBOX=C:\Dev\pdfbox-app-2.0.11.jar</code></li> <li>Start a new command line and try: <ul> <li><code>import pdfbox</code></li> <li><code>p = pdfbox.PDFBox()</code></li> <li><code>p.extract_text("some_filename")</code></li> </ul></li> </ol> <p>Caveat: extract_text() does not recognize spaces file names with spaces, somehow...</p>
python|pdfbox
1
1,906,082
50,522,042
Python JSON format issue
<pre><code>&lt;pixel: u'Crop Marks', size=2478x3509, x=1, y=0, visible=1, mask=None, effects=[]&gt; </code></pre> <p>I got an output from a <strong>psd parser</strong> in Python.</p> <p>Which type of format is this? </p>
<p>This is an instance of the <code>psd_tools.user_api.layers.PixelLayer</code> class of the psd_tools library. Everything in python is an instance of some type and therefore this. You can know it using the <code>type(&lt;object&gt;)</code> function.</p> <p>Try <code>dir(&lt;object&gt;)</code> for seeing the list of attributes/ properties of that particular object. In your case, <code>dir(p)</code> where <code>p = &lt;pixel: u'Crop Marks', size=2478x3509, x=1, y=0, visible=1, mask=None, effects=[]&gt;</code> outputs,</p> <p><code>['__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_channels', '_clip_layers', '_effects', '_index', '_info', '_mask', '_parent', '_psd', '_record', '_tagged_blocks', 'as_PIL', 'as_pymaging', 'bbox', 'blend_mode', 'bottom', 'clip_layers', 'effects', 'flags', 'get_tag', 'has_box', 'has_clip_layers', 'has_effects', 'has_mask', 'has_pixels', 'has_relevant_pixels', 'has_tag', 'has_vector_mask', 'height', 'is_group', 'is_visible', 'kind', 'layer_id', 'left', 'mask', 'name', 'opacity', 'parent', 'right', 'tagged_blocks', 'top', 'vector_mask', 'visible', 'width'] </code></p> <p>It is a list of all the attributes, functions or the properties you can access from the psd instance. We can see that a custom <code>__repr__</code> function has been defined for this which upon calling using <code>p.__repr__()</code> outputs the following format as a string <code>"&lt;pixel: u'Crop Marks', size=2478x3509, x=1, y=0, visible=1, mask=None, effects=[]&gt;"</code>. Hope it answers your question.</p>
python|json|xml|psd
0
1,906,083
50,631,393
Python replace values in unknown structure JSON file
<p>Say that I have a JSON file whose structure is either unknown or may change over time - I want to replace all values of "REPLACE_ME" with a string of my choice in Python.</p> <p>Everything I have found assumes I know the structure. For example, I can read the JSON in with <code>json.load</code> and walk through the dictionary to do replacements, then write it back. This assumes I know key names, structure, etc.</p> <p>How can I replace ALL of a given string value in a JSON file with something else?</p>
<p>This function recursively replaces all strings which equal the value <code>original</code> with the value <code>new</code>.</p> <p>The function works on the Python structure - but of course you can use it on a JSON file by loading it first with <code>json.load</code>.</p> <p>It doesn't replace keys in the dictionary - just the values.</p> <pre><code>def nested_replace( structure, original, new ):
    if type(structure) == list:
        return [nested_replace( item, original, new) for item in structure]

    if type(structure) == dict:
        return {key : nested_replace(value, original, new)
                     for key, value in structure.items() }

    if structure == original:
        return new
    else:
        return structure

d = [ 'replace', {'key1': 'replace', 'key2': ['replace', 'don\'t replace'] } ]
new_d = nested_replace(d, 'replace', 'now replaced')
print(new_d)

['now replaced', {'key1': 'now replaced', 'key2': ['now replaced', "don't replace"]}]
</code></pre>
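<p>For the file-based case from the question, a short sketch of the load -&gt; replace -&gt; write-back round trip (the file name and replacement string are just examples):</p> <pre><code>import json

with open('data.json') as f:
    data = json.load(f)

data = nested_replace(data, 'REPLACE_ME', 'my new value')

with open('data.json', 'w') as f:
    json.dump(data, f, indent=2)
</code></pre>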
python|json
6
1,906,084
26,495,953
youtube-dl python library documentation
<p>Is there any documentation for using youtube-dl as a Python library in a project?</p> <p>I know that I can use the main class, but I can't find any documentation or examples...</p> <pre><code>import youtube_dl
ydl = youtube_dl.YoutubeDL(params)
... ?
</code></pre>
<p>similar question: <a href="https://stackoverflow.com/questions/18054500/how-to-use-youtube-dl-from-a-python-program">How to use youtube-dl from a python program</a></p> <p>Check this file in sources: <a href="https://github.com/rg3/youtube-dl/blob/master/youtube_dl/__init__.py" rel="noreferrer"><code>https://github.com/rg3/youtube-dl/blob/master/youtube_dl/__init__.py</code></a></p> <p>You need options dict (originally generated using parameters received from command line):</p> <pre><code>ydl_opts = { 'usenetrc': opts.usenetrc, 'username': opts.username, 'password': opts.password, # ... all options list available in sources 'exec_cmd': opts.exec_cmd, } </code></pre> <p>and then create <code>YoutubeDL</code> instance and call some methods with self-described names:</p> <pre><code>with YoutubeDL(ydl_opts) as ydl: ydl.print_debug_header() ydl.add_default_info_extractors() # PostProcessors # Add the metadata pp first, the other pps will copy it if opts.addmetadata: ydl.add_post_processor(FFmpegMetadataPP()) if opts.extractaudio: ydl.add_post_processor(FFmpegExtractAudioPP(preferredcodec=opts.audioformat, preferredquality=opts.audioquality, nopostoverwrites=opts.nopostoverwrites)) if opts.recodevideo: ydl.add_post_processor(FFmpegVideoConvertor(preferedformat=opts.recodevideo)) if opts.embedsubtitles: ydl.add_post_processor(FFmpegEmbedSubtitlePP(subtitlesformat=opts.subtitlesformat)) if opts.xattrs: ydl.add_post_processor(XAttrMetadataPP()) if opts.embedthumbnail: if not opts.addmetadata: ydl.add_post_processor(FFmpegAudioFixPP()) ydl.add_post_processor(AtomicParsleyPP()) # Please keep ExecAfterDownload towards the bottom as it allows the user to modify the final file in any way. # So if the user is able to remove the file before your postprocessor runs it might cause a few problems. if opts.exec_cmd: ydl.add_post_processor(ExecAfterDownloadPP( verboseOutput=opts.verbose, exec_cmd=opts.exec_cmd)) # Update version if opts.update_self: update_self(ydl.to_screen, opts.verbose) # Remove cache dir if opts.rm_cachedir: ydl.cache.remove() # Maybe do nothing if (len(all_urls) &lt; 1) and (opts.load_info_filename is None): if not (opts.update_self or opts.rm_cachedir): parser.error(u'you must provide at least one URL') else: sys.exit() try: if opts.load_info_filename is not None: retcode = ydl.download_with_info_file(opts.load_info_filename) else: retcode = ydl.download(all_urls) except MaxDownloadsReached: ydl.to_screen(u'--max-download limit reached, aborting.') retcode = 101 </code></pre>
python|youtube-dl
9
1,906,085
56,657,219
How to apply a function on a DataFrame Column using multiple rows and columns as input?
<p>I have a sequence of events, and based on some variables (previous command, previous/current code and previous/current status) I need to decide which command is related to that event.</p> <p>I actually have a code that works as expected, but it's kind of slow. So I've tried to use df.apply, but I don't think it's possible to use more than the current element as input. (The code starts at 1 because the first row is always a "begin" command)</p> <pre class="lang-py prettyprint-override"><code>def mark_commands(df): for i in range(1, len(df)): prev_command = df.loc[i-1, 'Command'] prev_code, cur_code = df.loc[i-1, 'Code'], df.loc[i, 'Code'] prev_status, cur_status = df.loc[i-1, 'Status'], df.loc[i, 'Status'] if (prev_command == "end" and ((cur_code == 810 and cur_status in [10, 15]) or (cur_code == 830 and cur_status == 15))): df.loc[i, 'Command'] = "ignore" elif ((cur_code == 800 and cur_status in [20, 25]) or (cur_code in [810, 830] and cur_status in [10, 15])): df.loc[i, 'Command'] = "end" elif ((prev_code != 800) and ((cur_code == 820 and cur_status == 25) or (cur_code == 820 and cur_status == 20 and prev_code in [810, 820] and prev_status == 20) or (cur_code == 830 and cur_status == 25 and prev_code == 820 and prev_status == 20))): df.loc[i, 'Command'] = "continue" else: df.loc[i, 'Command'] = "begin" return df </code></pre> <p>And here is a correctly labeled sample in a CSV format (Which can serve as input, since the only difference is that everything on the command line is empty after the first begin):</p> <pre><code>Code,Status,Command 810,20,begin 810,10,end 810,25,begin 810,15,end 810,15,ignore 810,20,begin 810,10,end 810,25,begin 810,15,end 810,15,ignore 810,20,begin 800,20,end 810,10,ignore 810,25,begin 820,25,continue 820,25,continue 820,25,continue 820,25,continue 800,25,end </code></pre>
<p>Your code is mostly perfect (you could have used <code>df.iterrows()</code>, which is more bulletproof if your index is not linear, in the <code>for</code> loop, but it wouldn't have changed the speed).</p> <p>After trying extensively to use <code>df.apply</code>, I realized there was a fatal flaw, since your <code>"Command"</code> column is continuously updating from one row to another. The following wouldn't work since <code>df</code> is somehow "static":</p> <pre><code>df['Command'] = df.apply(lambda row: mark_commands(row), axis=1) </code></pre> <p>Eventually, to save you some calculation, you could insert a <code>continue</code> statement each time a condition is met in your <code>if</code>/<code>elif</code> statements, to go directly to the next iteration:</p> <pre><code>if (prev_command == "end" and ....) :
    df.loc[i, 'Command'] = "ignore"
    continue
</code></pre> <p>That being said, your code works great.</p>
python|pandas
0
1,906,086
45,063,099
Python logger per function or per module
<p>I am trying to start using logging in python and have read several blogs. One issue that is causing confusion for me is whether to create the logger per function or per module. In this <a href="https://fangpenlin.com/posts/2012/08/26/good-logging-practice-in-python/" rel="noreferrer">Blog: Good logging practice in Python</a> it is recommended to get a logger per function. For example:</p> <pre><code>import logging def foo(): logger = logging.getLogger(__name__) logger.info('Hi, foo') class Bar(object): def __init__(self, logger=None): self.logger = logger or logging.getLogger(__name__) def bar(self): self.logger.info('Hi, bar') </code></pre> <p>The reasoning given is that </p> <blockquote> <p>The logging.fileConfig and logging.dictConfig disables existing loggers by default. So, those setting in file will not be applied to your logger. It’s better to get the logger when you need it. It’s cheap to create or get a logger.</p> </blockquote> <p>The recommended way I read everywhere else is like as shown below. The blog states that this approach <code>"looks harmless, but actually, there is a pitfall"</code>.</p> <pre><code>import logging logger = logging.getLogger(__name__) def foo(): logger.info('Hi, foo') class Bar(object): def bar(self): logger.info('Hi, bar') </code></pre> <p>I find the former approach to be tedious as I would have to remember to get the logger in each function. Additionally getting the logger in each function is surely more expensive than once per module. Is the author of the blog advocating a non-issue? Would following logging best practices avoid this issue?</p>
<p>I would agree with you; getting logger in each and every function you use creates too much unnecessary cognitive overhead, to say the least.</p> <p>The author of the blog is right about the fact that <em>you should be careful to properly initialize</em> (configure) your logger(s) before using them.</p> <p>But the approach he suggests makes sense only in the case you have no control over your application loading and the application entry point (which usually you do).</p> <p>To avoid premature (implicit) creation of loggers <a href="https://docs.python.org/3/library/logging.html#logging.basicConfig" rel="noreferrer">that happens with a first call to any of the message logging functions</a> (like <code>logging.info()</code>, <code>logging.error()</code>, etc.) if a root logger hasn't been configured beforehand, <strong>simply make sure you configure your logger before logging</strong>.</p> <p>Initializing the logger from the main thread before starting other threads is also recommended in <a href="https://docs.python.org/2/library/logging.html#logging.basicConfig" rel="noreferrer">Python docs</a>.</p> <p>Python's logging tutorial (<a href="https://docs.python.org/3/howto/logging.html#logging-basic-tutorial" rel="noreferrer">basic</a> and <a href="https://docs.python.org/3/howto/logging.html#logging-advanced-tutorial" rel="noreferrer">advanced</a>) can serve you as a reference, but for a more concise overview, have a look at the <a href="https://python-guide.readthedocs.io/en/latest/writing/logging/" rel="noreferrer">logging section of The Hitchhiker's Guide to Python</a>.</p> <h3>A simple blueprint for logging from multiple modules</h3> <p>Have a look at this modified <a href="https://docs.python.org/3/howto/logging.html#logging-from-multiple-modules" rel="noreferrer">example from Python's logging tutorial</a>:</p> <pre><code># myapp.py import logging import mylib # get the fully-qualified logger (here: `root.__main__`) logger = logging.getLogger(__name__) def main(): logging.basicConfig(format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s', level=logging.DEBUG) # note the `logger` from above is now properly configured logger.debug("started") mylib.something() if __name__ == "__main__": main() </code></pre> <p>And</p> <pre><code># mylib.py import logging # get the fully-qualified logger (here: `root.mylib`) logger = logging.getLogger(__name__) def something(): logger.info("something") </code></pre> <p>Producing this on <code>stdout</code> (note the correct <code>name</code>):</p> <pre><code>$ python myapp.py 2017-07-12 21:15:53,334 __main__ DEBUG started 2017-07-12 21:15:53,334 mylib INFO something </code></pre>
python|logging
19
1,906,087
61,229,316
Can't fix ValueError: Building a simple neural network model in Keras
<p>I am new to TensorFlow and Keras and I wanted to build a simple neural network in Keras that can count from 0 to 7 in binary (i.e. 000-111). The network should have:</p> <ul> <li>1 input layer with 3 nodes,</li> <li>1 hidden layer with 8 nodes,</li> <li>1 output layer with 3 nodes. </li> </ul> <p>It sounds simple but I have problems with building the model. I get the following error:</p> <pre><code>ValueError: Error when checking target: expected dense_2 to have shape (1,) but got array with shape (3,) </code></pre> <p>The code I've tried so far:</p> <pre><code>import plaidml.keras plaidml.keras.install_backend() import os os.environ["KERAS_BACKEND"] = plaidml.keras.backend import keras from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras import backend as K import numpy as np x_train = [ [0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]] y_train = [ [0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]] x_train = np.array(x_train) y_train = np.array(y_train) x_test = x_train y_test = y_train print(x_train) print(y_train) print("x_test_len", len(x_test)) print("y_test_len", len(y_test)) # Build a CNN model. You should see INFO:plaidml:Opening device xxx after you run this chunk model = keras.Sequential() model.add(Dense(input_dim=3, output_dim=8, activation='relu')) model.add(Dense(input_dim=8, output_dim=3, activation='relu')) # Compile the model model.compile(optimizer='adam', loss=keras.losses.sparse_categorical_crossentropy, metrics=['accura cy']) # Fit the model on training set model.fit(x_train, y_train, epochs=10) # Evaluate the model on test set score = model.evaluate(x_test, y_test, verbose=0) # Print test accuracy print('\n', 'Test accuracy ', score[1]) </code></pre> <p>I think there are probably a couple of things I did't get right.</p>
<p>There are two problems:</p> <ol> <li><p>You are using <code>'relu'</code> as the activation of last layer (i.e. output layer), however your model should produce vectors of zeros and ones. Therefore, you need to use <code>'sigmoid'</code> instead.</p></li> <li><p>Since <code>'sigmoid'</code> would be used for the activation of last layer, you also need to use <code>binary_crossentropy</code> for the loss function.</p></li> </ol> <p>To give you a better understanding of the problem, you can view it as a multi-label classification problem: each of the 3 nodes in the output layer should act as an independent binary classifier (hence using sigmoid as the activation and binary crossentropy as the loss function).</p>
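<p>Putting both fixes together, a minimal sketch of the corrected model definition (layer sizes taken from the question; passing the number of units as the first positional argument works with both the old and the current <code>Dense</code> signature):</p> <pre><code>model = keras.Sequential()
model.add(Dense(8, input_dim=3, activation='relu'))
model.add(Dense(3, activation='sigmoid'))  # sigmoid: each output node is an independent 0/1 classifier

model.compile(optimizer='adam',
              loss='binary_crossentropy',  # matches the sigmoid outputs
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10)
</code></pre>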
python|tensorflow|machine-learning|keras|neural-network
1
1,906,088
61,380,281
Tkinter Menubar
<p>I just started learning Tkinter and had an issue in making the menu bar as the error and code attached. Thanks in advance. </p> <pre><code>from tkinter import * root = Tk() root.title("HP SIMPLE FINANCE") w, h = root.winfo_screenwidth(), root.winfo_screenheight() root.geometry("%dx%d+0+0" % (w, h)) def donothing(): pass menubar = Menu(root) root.config(menu=menubar) filemenu = Menu(menubar) menubar.add_cascade(label="File", Menu=filemenu) filemenu.add_command(label="Open Portfolio file", command=donothing) filemenu.add_command(label="New Portfolio", command=donothing) filemenu.add_command(label="Reports", command=donothing) filemenu.add_command(label="Restore from backup", command=donothing) filemenu.add_command(label="Exit", command=root.quit) root.mainloop() </code></pre> <pre><code>File "F:/Finance software/Main window.py", line 28, in &lt;module&gt; menubar.add_cascade(label="File", Menu=filemenu) File "C:\Users\harshparmar\AppData\Local\Programs\Python\Python38-32\lib\tkinter\__init__.py", line 3289, in add_cascade self.add('cascade', cnf or kw) File "C:\Users\harshparmar\AppData\Local\Programs\Python\Python38-32\lib\tkinter\__init__.py", line 3284, in add self.tk.call((self._w, 'add', itemType) + _tkinter.TclError: unknown option "-Menu" </code></pre> <p>Process finished with exit code 1</p>
<p>Replace</p> <pre><code>menubar.add_cascade(label="File", Menu=filemenu)
</code></pre> <p>with</p> <pre><code>menubar.add_cascade(label="File", menu=filemenu)
</code></pre> <p>Tkinter option names are lowercase, so <code>add_cascade</code> expects the keyword <code>menu=</code>. The capitalized <code>Menu=</code> is passed through to Tcl as-is, which rejects it with <code>unknown option "-Menu"</code>.</p>
python|tkinter|menubar
1
1,906,089
58,152,060
How to get episodes from exact story
<p>I have episode, which is related with story(foreign key) with url</p> <pre><code>router = routers.DefaultRouter() router.register('', StoryView, basename='stories') router.register('episodes', EpisodeView, basename='episodes') </code></pre> <p>View:</p> <pre><code>class EpisodeView(viewsets.ModelViewSet): </code></pre> <p>models:</p> <pre><code>class Story(models.Model): title = models.CharField(max_length=255) description = models.TextField(max_length=255) cover = models.ImageField(upload_to=upload_location) genre = models.ManyToManyField(Genre) author = models.ForeignKey(get_user_model(), on_delete=models.CASCADE) created_at = models.DateTimeField(auto_now_add=True) class Episode(models.Model): title = models.CharField(max_length=255) cover = models.ImageField(upload_to=upload_location) story = models.ForeignKey(Story, on_delete=models.CASCADE) created_at = models.DateTimeField(auto_now_add=True) episode_number = models.IntegerField(null=True) </code></pre> <p>I need to get episodes of story. How to do that is this case?</p>
<p>You need to register a route that captures the story id:</p> <pre><code>router = routers.DefaultRouter()
router.register('', StoryView, basename='stories')
router.register('episodes', EpisodeView, basename='episodes')
router.register('(?P&lt;story_id&gt;[0-9]+)/episodes', StoryEpisodeView, basename='story-episodes')
</code></pre> <p>then override <code>get_queryset</code> and <code>list</code> on <code>StoryEpisodeView</code> so the queryset is filtered by that id:</p> <pre><code>def get_queryset(self):
    return Episode.objects.filter(story_id=self.story_id)

def list(self, request, *args, **kwargs):
    self.story_id = kwargs.get('story_id')
    return super().list(request, *args, **kwargs)
</code></pre>
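<p>A simpler and more idiomatic alternative is to read the URL keyword argument directly in <code>get_queryset</code>, which avoids overriding <code>list</code> at all. A minimal sketch (the serializer name is an assumption):</p> <pre><code>class StoryEpisodeView(viewsets.ModelViewSet):
    serializer_class = EpisodeSerializer  # assumed name

    def get_queryset(self):
        # story_id is captured by the router regex above
        return Episode.objects.filter(story_id=self.kwargs['story_id'])
</code></pre> <p>With this in place, a request like <code>GET /3/episodes/</code> should return only the episodes of story 3.</p>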
python|django|django-rest-framework
0
1,906,090
56,311,034
Pytorch: no CUDA-capable device is detected on Linux
<p>When trying to run some Pytorch code I get this error:</p> <pre><code>THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=74 error=38 : no CUDA-capable device is detected Traceback (most recent call last): File "demo.py", line 173, in test pca = torch.FloatTensor( np.load('../basics/U_lrw1.npy')[:,:6]).cuda() RuntimeError: cuda runtime error (38) : no CUDA-capable device is detected at /pytorch/aten/src/THC/THCGeneral.cpp:74 </code></pre> <p>I a running a cloud virtual machine using the 'Google Deep Learning VM' Version: tf-gpu.1-13.m25 Based on: Debian GNU/Linux 9.9 (stretch) (GNU/Linux 4.9.0-9-amd64 x86_64\n) Linux tf-gpu-interruptible 4.9.0-9-amd64 #1 SMP Debian 4.9.168-1 (2019-04-12) x86_64</p> <p>Environment info:</p> <pre><code>$ nvidia-smi Sun May 26 05:32:33 2019 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 410.72 Driver Version: 410.72 CUDA Version: 10.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 | | N/A 42C P0 74W / 149W | 0MiB / 11441MiB | 100% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ $ echo $CUDA_PATH $ echo $LD_LIBRARY_PATH /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64 $ env | grep CUDA CUDA_VISIBLE_DEVICES=0 $ pip freeze DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2. 7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. audioread==2.1.7 backports.functools-lru-cache==1.5 certifi==2019.3.9 chardet==3.0.4 cloudpickle==1.1.1 cycler==0.10.0 dask==1.2.2 decorator==4.4.0 dlib==19.17.0 enum34==1.1.6 filelock==3.0.12 funcsigs==1.0.2 future==0.17.1 gdown==3.8.1 idna==2.8 joblib==0.13.2 kiwisolver==1.1.0 librosa==0.6.3 llvmlite==0.28.0 </code></pre>
<p>I can't pinpoint the root cause from the output alone, but one thing stands out: GPU-Util is at 100% even though no processes are running.</p> <p>You can try the following:</p> <ol> <li><p>Enable persistence mode with <code>sudo nvidia-smi -pm 1</code>. This might solve your problem: the combination of ECC with persistence mode disabled can lead to 100% GPU utilization with no running processes.</p></li> <li><p>Alternatively, disable ECC with <code>nvidia-smi -e 0</code>.</p></li> <li><p>If neither helps, the best option is to start over from scratch, i.e. reboot the operating system.</p></li> </ol> <p>Note: I'm not certain this will fix your case; I faced a similar issue earlier and am going from that experience. Hope it helps.</p>
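<p>Once you have tried any of the above (or rebooted), a quick sanity check from Python confirms whether PyTorch can see the GPU again before you rerun the full script:</p> <pre><code>import torch

print(torch.cuda.is_available())       # should print True
print(torch.cuda.device_count())       # should print 1 for a single K80
print(torch.cuda.get_device_name(0))   # e.g. 'Tesla K80'
</code></pre>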
pytorch|google-dl-platform
1
1,906,091
56,089,019
numba does not accept numpy arrays with dtype=object
<p>I have an empty array that I want to fill with arbitrary length lists at each index [i,j]. So I initialize an empty array that is supposed to hold objects like this:</p> <pre><code>@jit(nopython=True, parrallel=True) def numba_function(): values = np.empty((length, length), dtype=object) for i in range(10): for j in range(10): a_list_of_things = [1,2,3,4] values[i,j] = a_list_of_things </code></pre> <p>This fails with:</p> <pre><code> TypingError: Failed in nopython mode pipeline (step: nopython frontend) Untyped global name 'object': cannot determine Numba type of &lt;class 'type'&gt; </code></pre> <p>If I turn off numba by setting <code>nopython=False</code> the code works fine. Setting <code>dtype=list</code> in the <code>values</code> array does not improve things.</p> <p>Any smart tricks to overcome this?</p>
<p>Numba in nopython mode (as of version 0.43.1) does not support object arrays.</p> <p>The correct way to type an object array would be:</p> <pre><code>import numba as nb import numpy as np @nb.njit def numba_function(): values = np.empty((2, 2), np.object_) return values </code></pre> <p>But as stated, that doesn't work:</p> <pre><code>TypingError: Failed in nopython mode pipeline (step: nopython frontend) Internal error at resolving type of attribute "object_" of "$0.4": NotImplementedError: object </code></pre> <p>This is also mentioned in <a href="https://numba.pydata.org/numba-doc/0.43.0/reference/numpysupported.html#scalar-types" rel="nofollow noreferrer">the numba documentation</a>:</p> <blockquote> <h1>2.7.1. Scalar types</h1> <p>Numba supports the following Numpy scalar types:</p> <ul> <li>Integers: all integers of either signedness, and any width up to 64 bits</li> <li>Booleans</li> <li>Real numbers: single-precision (32-bit) and double-precision (64-bit) reals</li> <li>Complex numbers: single-precision (2x32-bit) and double-precision (2x64-bit) complex numbers</li> <li>Datetimes and timestamps: of any unit</li> <li>Character sequences (but no operations are available on them)</li> <li>Structured scalars: structured scalars made of any of the types above and arrays of the types above</li> </ul> <p><strong>The following scalar types and features are not supported:</strong></p> <ul> <li><strong>Arbitrary Python objects</strong></li> <li>Half-precision and extended-precision real and complex numbers</li> <li>Nested structured scalars the fields of structured scalars may not contain other structured scalars</li> </ul> <p>[...]</p> <h1>2.7.2. Array types</h1> <p>Numpy arrays of any of the scalar types above are supported, regardless of the shape or layout.</p> </blockquote> <p>(Emphasis mine)</p> <p>Since <code>dtype=object</code> would allow arbitrary Python objects it's not supported. And <code>dtype=list</code> is just equivalent to <code>dtype=object</code> (<a href="https://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html#specifying-and-constructing-data-types" rel="nofollow noreferrer">documentation</a>)</p> <blockquote> <h1>Built-in Python types</h1> <p>Several python types are equivalent to a corresponding array scalar when used to generate a dtype object:</p> <pre><code>int np.int_ bool np.bool_ float np.float_ complex np.cfloat bytes np.bytes_ str np.bytes_ (Python2) or np.unicode_ (Python3) unicode np.unicode_ buffer np.void (all others) np.object_ </code></pre> </blockquote> <hr> <p>All that said: It would be quite slow to have <code>object</code> arrays, that applies to NumPy arrays and numba functions. Whenever you choose to use such <code>object</code> arrays you <em>implicitly</em> decide that you don't want high-performance.</p> <p>So if you want performance and use NumPy arrays, then you need to rewrite it so you don't use object arrays first and if it's still to slow, then you can think about throwing numba at the non-object arrays.</p>
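<p>Since every cell in the question holds a list of numbers of the same length, one workaround that stays within nopython mode is to add that length as an extra array dimension instead of storing a Python list per cell. A sketch (assuming the inner lists really are all the same, fixed length):</p> <pre><code>import numba as nb
import numpy as np

@nb.njit
def numba_function(length, inner):
    # one extra axis replaces the per-cell Python list
    values = np.empty((length, length, inner), np.int64)
    for i in range(length):
        for j in range(length):
            for k in range(inner):
                values[i, j, k] = k + 1  # stands in for [1, 2, 3, 4]
    return values

values = numba_function(10, 4)  # values[i, j] is now the array [1, 2, 3, 4]
</code></pre>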
python|arrays|numba
4
1,906,092
56,306,151
How to run Python code using a specific Python environment in Visual Studio?
<p>I've started working on a large-scale Python program, and I'm using Visual Studio as my main IDE.</p> <p>I have the entirety of Visual Studio's extensions for the use of Python, including Anaconda, which I used to create an environment from which I want to run my Python code. </p> <p>Any ideas on how I can get that to happen?</p>
<p>Okay, so the answer was going into</p> <pre><code>View &gt; Other Windows &gt; Python Environments </code></pre> <p>and selecting the version of Python to work with.</p> <p><a href="https://docs.microsoft.com/en-us/visualstudio/python/managing-python-environments-in-visual-studio?view=vs-2019" rel="nofollow noreferrer">https://docs.microsoft.com/en-us/visualstudio/python/managing-python-environments-in-visual-studio?view=vs-2019</a> <a href="https://i.stack.imgur.com/9c3KR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9c3KR.png" alt="https://docs.microsoft.com/en-us/visualstudio/python/managing-python-environments-in-visual-studio?view=vs-2019"></a></p>
python|visual-studio
0
1,906,093
18,677,543
how to add a dict as a value to a key in python
<p>I have a dict that is - <code>team={ludo:4,monopoly:5}</code></p> <p>How can I form a new dict that has a key called <code>board_games</code> with value has another dict which has a key the team dict above which should look like - </p> <pre><code>new_team = { board_games : {junior:{ludo:4,monopoly:5}}} </code></pre> <p>basically I am trying to do something like perlish - </p> <pre><code>new_team['board_games']['junior'] = team </code></pre>
<p>I fail to see the problem:</p> <pre><code>&gt;&gt;&gt; team = {"ludo": 4, "monopoly": 5} &gt;&gt;&gt; new_team = {"board_games": {"junior": team}} &gt;&gt;&gt; new_team {'board_games': {'junior': {'ludo': 4, 'monopoly': 5}}} </code></pre> <p>If you want to construct it dynamically, <a href="http://docs.python.org/2/library/collections.html#collections.defaultdict" rel="nofollow"><code>collections.defaultdict</code></a> is what you need:</p> <pre><code>&gt;&gt;&gt; from collections import defaultdict &gt;&gt;&gt; new_dict = defaultdict(dict) &gt;&gt;&gt; new_dict['board_games']['junior'] = team &gt;&gt;&gt; new_dict defaultdict(&lt;type 'dict'&gt;, {'board_games': {'junior': {'ludo': 4, 'monopoly': 5}}}) </code></pre>
python|dictionary
3
1,906,094
69,520,475
How to sum elements in a nested list?
<p>I want to sum the elements <code>i[1]</code> (the second element in each inner list) of the nested list so that the running total doesn't exceed 23, i.e. stays &lt;= 23, and create a sublist from that. Right now I only get the same list back, but I want a list where the total value doesn't exceed 23:</p> <pre><code>this = [['A', 5, 310], ['B', 3, 270], ['C', 4.5, 220], ['D', 1, 150], ['E', 3.5, 140],
        ['F', 2.5, 90], ['G', 4, 70], ['H', 3, 60], ['I', 2, 50], ['J', 1, 30]]

max_weight = [i for i in this if i[1] &lt;= 23]
print(max_weight)
</code></pre>
<p>Do you want to sum all the items?</p> <pre class="lang-py prettyprint-override"><code>max_weight = sum(i[1] for i in this if i[1] &lt;= 23)
print(max_weight)  # 29.5
</code></pre> <p>EDIT:</p> <p>I think what you actually want is to keep rows while the running total stays at or below 23, which I'm not sure can be done in a plain one-line comprehension:</p> <pre class="lang-py prettyprint-override"><code>this = [['A', 5, 310], ['B', 3, 270], ['C', 4.5, 220], ['D', 1, 150], ['E', 3.5, 140],
        ['F', 2.5, 90], ['G', 4, 70], ['H', 3, 60], ['I', 2, 50], ['J', 1, 30]]

total = 0
new = []
for i in this:
    total += i[1]
    if total &lt;= 23:
        new.append(i)

print(new)
# [['A', 5, 310], ['B', 3, 270], ['C', 4.5, 220], ['D', 1, 150], ['E', 3.5, 140], ['F', 2.5, 90]]
</code></pre>
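<p>For what it's worth, the same "keep rows while the running total stays &lt;= 23" filter can be written with <code>itertools.accumulate</code>, which gets it close to a one-liner (equivalent to the loop above, since each row is compared against its own running total):</p> <pre class="lang-py prettyprint-override"><code>from itertools import accumulate

totals = accumulate(i[1] for i in this)
new = [row for row, total in zip(this, totals) if total &lt;= 23]
print(new)
</code></pre>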
python
1
1,906,095
55,517,950
How to write multiprocessing code correctly in Python 2
<p>I'm trying to implement very simple multiprocessing code in python 2.7, but it looks like the code run serially and not parallel. The following code prints *****1***** while I expect it to print *****2***** immediately after *****1*****.</p> <pre class="lang-py prettyprint-override"><code> import os import multiprocessing from time import sleep def main(): func1_proc = multiprocessing.Process(target=func1()) func2_proc = multiprocessing.Process(target=func2()) func1_proc.start() func2_proc.start() pass def func1(): print "*****1*****" sleep(100) def func2(): print "*****2*****" sleep(100) if __name__ == "__main__": main() </code></pre>
<p>You're calling <code>func1()</code> and <code>func2()</code> yourself and passing their return values to <code>Process</code>. That means <code>func1</code> prints and sleeps for 100 seconds in the main process before the first <code>Process</code> object is even created; the child processes then get <code>target=None</code> and do nothing.</p> <p>Pass the function objects themselves (no parentheses) so that <code>Process</code> runs them in separate processes:</p> <pre><code>func1_proc = multiprocessing.Process(target=func1)
func2_proc = multiprocessing.Process(target=func2)
</code></pre>
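<p>For completeness, a corrected <code>main</code> along those lines — the rest of the script stays the same; the <code>join</code> calls are optional but make the parent wait for both children:</p> <pre><code>def main():
    func1_proc = multiprocessing.Process(target=func1)
    func2_proc = multiprocessing.Process(target=func2)

    func1_proc.start()
    func2_proc.start()

    func1_proc.join()
    func2_proc.join()
</code></pre>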
python|multiprocessing
3
1,906,096
57,361,860
Loop over multiple Dataframes
<p>I have to resample a bunch of dataframes. My dataframes are called simply df_1, df_2 and so on (I have about 50 of them). I can easily resample each one separately, this way:</p> <pre><code>df_out_1 = resample(df_1, replace=False, n_samples=50, random_state=11) df_out_2 = resample(df_2, replace=False, n_samples=50, random_state=11) .... </code></pre> <p>It works, but it's not very intelligent to write 50 almost same lines of code. So I tried a loop:</p> <pre><code>df_list=[('df_'+str(i),'df_out_'+str(i)) for i in range(1,52)] for (df,df_out) in df_list: # Downsample majority class df_out = resample(df, replace=False, n_samples=50, random_state=11) </code></pre> <p>It doesn't work because for python df and df_out in the loop are not dataframes but strings. I don't know how I can cure it. :(</p> <p>Thanks in advance, D.</p>
<p>Use <code>globals()[string]</code> to look up (or create) the variable named by the string. Note that assigning to a plain local name like <code>df_out</code> would not update the global <code>df_out_1</code>, <code>df_out_2</code>, ...; you have to assign through <code>globals()</code> as well.</p> <p>Full code:</p> <pre><code>df_list = [('df_' + str(i), 'df_out_' + str(i)) for i in range(1, 52)]

for (df_str, df_out_str) in df_list:
    df = globals()[df_str]
    # Downsample majority class and store the result back as df_out_1, df_out_2, ...
    globals()[df_out_str] = resample(df, replace=False, n_samples=50, random_state=11)
</code></pre>
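<p>If you can change how the frames are collected in the first place, a dict keyed by number avoids <code>globals()</code> entirely and is usually easier to work with. A sketch (here the dict is bootstrapped from the existing variables, but ideally you would fill it directly when the frames are created):</p> <pre><code>dfs = {i: globals()['df_' + str(i)] for i in range(1, 52)}

df_outs = {
    i: resample(df, replace=False, n_samples=50, random_state=11)
    for i, df in dfs.items()
}

# df_outs[1] now plays the role of df_out_1, and so on
</code></pre>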
python|pandas
1
1,906,097
42,579,202
UnboundLocalError: local variable 'y' referenced before assignment
<p>I have below code on list comprehension.</p> <pre><code>x = 2 y = 3 [x*y for x in range(x) for y in range(y)] </code></pre> <p>This is giving me below error</p> <pre><code>Traceback (most recent call last): File "&lt;pyshell#35&gt;", line 1, in &lt;module&gt; [x*y for x in range(x) for y in range(y)] File "&lt;pyshell#35&gt;", line 1, in &lt;listcomp&gt; [x*y for x in range(x) for y in range(y)] UnboundLocalError: local variable 'y' referenced before assignment </code></pre> <p>However, below code works.</p> <pre><code>[x*y for x in range(x)] [0, 5] </code></pre> <p>Is there any scoping rule for the second <code>for</code> loop in list comprehension?</p> <p>I am using Python 3.6.</p>
<p>Good question — this code runs fine in <code>Python 2.x</code> but throws <code>UnboundLocalError</code> in <code>Python 3.x</code>.</p> <blockquote> <p>It can be a surprise to get the <strong>UnboundLocalError</strong> in previously working code when it is modified by adding an assignment statement somewhere in the body of a function.</p> </blockquote> <p>The reason is scoping: when you assign to a name in a scope, that name becomes <strong>local to that scope and shadows any similarly named variable in the outer scope</strong>. In Python 3 a list comprehension has its own scope (it is compiled much like a nested function). Only the outermost iterable, <code>range(x)</code>, is evaluated in the enclosing scope; the inner iterable <code>range(y)</code> is evaluated inside the comprehension, where <code>y</code> is the comprehension's own loop variable. At that point the local <code>y</code> has not been assigned yet, so you get <code>UnboundLocalError: local variable 'y' referenced before assignment</code>. In Python 2, comprehensions share the enclosing scope, so <code>range(y)</code> simply sees the outer <code>y = 3</code>.</p> <p>See more details in <a href="https://docs.python.org/3/faq/programming.html#why-am-i-getting-an-unboundlocalerror-when-the-variable-has-a-value" rel="nofollow noreferrer">Why am I getting an UnboundLocalError when the variable has a value?</a>.</p>
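<p>The simplest fix in Python 3 is to give the comprehension its own loop-variable names, so the outer <code>x</code> and <code>y</code> are not shadowed:</p> <pre><code>x = 2
y = 3
result = [a * b for a in range(x) for b in range(y)]
print(result)  # [0, 0, 0, 0, 1, 2]
</code></pre>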
python|python-3.x
2
1,906,098
42,405,186
Python - Function does not produce same dimensional array
<p>I'm ultimately trying to plot three combined integrals to where each integral evaluates a certain portion of array <code>z=np.linspace(1e+9,0)</code></p> <p><a href="https://i.stack.imgur.com/1moBq.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1moBq.gif" alt="enter image description here"></a></p> <pre><code>import numpy as np import matplotlib.pyplab as plt import scipy as sp import scipy.integrate as integrate z = np.linspace(1e+9, 0, 1000) mass = 1000 Omega_m0 = 0.3 Omega_L0 = 0.7 h = 0.7 def FreeStreamLength(z, mass, Omega_m0, Omega_L0, h): kb = 8.617e-5 ## kev K^-1 c = 3e+5 ## km/s T0 = 2.7 ## K T_uni = mass/kb a = 1./(z+1.) z_nr = T_uni/T0 - 1. ## redshift at non relativistic a_nr = 1/(z_nr + 1.) ## scale factor at non relativistic Omega_r0 = (4.2e-5)/h/h a_eq = Omega_r0/Omega_m0 z_eq = 1/a_eq - 1 a1 = a[a &lt;= a_nr] ## scale factor before particles become non-relativistic a2 = a[a_nr &lt;= a.all() &lt;= a_eq] a3 = a[a_eq &lt;= a] integrand = lambda x: 1./x/x/np.sqrt( Omega_m0/x/x/x + Omega_L0 ) epoch_nr = [ c/H0 *integrate.quad(integrand, 0, i )[0] for i in a1] epoch_nreq = [c/H0/a_nr * integrate.quad( integrand, a2, a_eq )[0] ] epoch_eq = [c/H0/a_eq * integrate.quad( integrand, i, 1 )[0] for i in a3] return epoch_nr + epoch_nreq + epoch_eq </code></pre> <p><code>z</code> should pass through <code>a</code>, so effectively, these values should correlate with one another. </p> <p>For the <code>return</code> line I combined all the lists to create this new array for my function.</p> <pre><code>FSL = FreeStreamLength(z, mass, Omega_m0, Omega_L0, h) fig = plt.figure() ax = fig.add_subplot(111) ax.plot(z, FSL, color="blue", label=r"$z=0$") plt.show() </code></pre> <p>I'm returned with <code>ValueError: x and y must have same first dimension</code></p> <p>How come my new list does not match with the list I had before?</p> <p>I believe it has to with how i'm iterating out elements from the passed array before I defined the integrand in my function.</p>
<p>I don't know exactly what you are trying to do there, but when you index a NumPy array with a boolean array you get back only the elements where the boolean array is True — so the result can be shorter than the original:</p> <pre><code>&gt;&gt;&gt; import numpy as np
&gt;&gt;&gt; a = np.asarray([1, 2, 3])
&gt;&gt;&gt; b = np.asarray([True, False, True])
&gt;&gt;&gt; a[b]
array([1, 3])
</code></pre> <p>That is what's happening in this line:</p> <pre><code>a1 = a[a &lt;= a_nr]  ## scale factor before particles become non-relativistic
</code></pre> <p>Also, here</p> <pre><code>epoch_nr + epoch_nreq + epoch_eq
</code></pre> <p>you are adding three lists (two of them have length 1). Adding lists concatenates them, so the result has the combined length of all three — which need not equal <code>len(z)</code>:</p> <pre><code>&gt;&gt;&gt; a = [1, 2, 3]
&gt;&gt;&gt; b = [1]
&gt;&gt;&gt; c = [1]
&gt;&gt;&gt; a + b + c
[1, 2, 3, 1, 1]
</code></pre>
python|arrays|numpy|scipy
0
1,906,099
54,012,328
Workflow framework in Python
<p>I'm trying to create workflow in <code>Python</code> that will have the following features below:</p> <blockquote> <ol> <li>Dynamic scheduling.</li> <li>Parallelism - many threads within one process.</li> <li>Running a flow same as running a task.</li> <li>Works on Windows.</li> </ol> </blockquote> <p>From the knowledge that i get it's seems that 1) &amp; 3) are achievable in many workflow frameworks, but 2) is not that easy. In my research i was mostly looked at <code>Celery</code> &amp; <code>Luigi</code> frameworks.</p> <ul> <li>For <code>Celery</code> I did found out that 2 could be done using the <code>--pool</code> argument, so I would like to know if I can combine the worker &amp; the trigger to the same python module?</li> <li>For <code>Luigi</code> I would like to know if it's possible to run tasks as multi-threads and not multi-processes?</li> </ul> <p>In addition I would appreciate any suggestions for other <code>Python</code> framework that could help me before I start to create my own workflow?</p>
<p>In Luigi, the tasks themselves can use multithreading, but each Luigi worker is its own process, since workers may run on different machines. If you want to avoid multiple processes, you can run with just a single worker.</p> <p>Another framework worth looking at is Airflow.</p>
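<p>For reference, a minimal Luigi run with a single worker process looks roughly like this; any threading then happens inside the task's <code>run()</code> method (a sketch — the task and file names are placeholders):</p> <pre><code>import luigi
from concurrent.futures import ThreadPoolExecutor

class MyTask(luigi.Task):
    def output(self):
        return luigi.LocalTarget('my_task_done.txt')

    def run(self):
        # the task itself is free to use many threads
        with ThreadPoolExecutor(max_workers=8) as pool:
            results = list(pool.map(str.upper, ['a', 'b', 'c']))
        with self.output().open('w') as f:
            f.write('\n'.join(results))

if __name__ == '__main__':
    luigi.build([MyTask()], workers=1, local_scheduler=True)
</code></pre>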
python|frameworks|celery|workflow|luigi
0