Columns: Unnamed: 0 (int64, 0 to 1.91M); id (int64, 337 to 73.8M); title (string, 10 to 150 chars); question (string, 21 to 64.2k chars); answer (string, 19 to 59.4k chars); tags (string, 5 to 112 chars); score (int64, -10 to 17.3k)
500
72,985,072
Memory Error when applying spacy model to large log file
<p>I am currently working on tokenizing a large log file that contains 39296844 characters. I am using the <code>nlp = spacy.load('en_core_web_sm')</code> model for this text file. Additionally I established the <code>nlp.max_length = 100000000000</code> so that I can read very large files. However, when I run the code <code>doc = nlp(df.iloc[161][1], disable=['ner', 'parser', &quot;textcat&quot;])</code> where <code>df.iloc[161][1]</code> contains the text of the log file, I run into the following memory error:</p> <pre><code>--------------------------------------------------------------------------- MemoryError Traceback (most recent call last) Input In [36], in &lt;cell line: 1&gt;() ----&gt; 1 df[&quot;build_log&quot;] = df[&quot;build_log&quot;].apply(preprocess) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\series.py:4433, in Series.apply(self, func, convert_dtype, args, **kwargs) 4323 def apply( 4324 self, 4325 func: AggFuncType, (...) 4328 **kwargs, 4329 ) -&gt; DataFrame | Series: 4330 &quot;&quot;&quot; 4331 Invoke function on values of Series. 4332 (...) 
4431 dtype: float64 4432 &quot;&quot;&quot; -&gt; 4433 return SeriesApply(self, func, convert_dtype, args, kwargs).apply() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\apply.py:1088, in SeriesApply.apply(self) 1084 if isinstance(self.f, str): 1085 # if we are a string, try to dispatch 1086 return self.apply_str() -&gt; 1088 return self.apply_standard() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\core\apply.py:1143, in SeriesApply.apply_standard(self) 1137 values = obj.astype(object)._values 1138 # error: Argument 2 to &quot;map_infer&quot; has incompatible type 1139 # &quot;Union[Callable[..., Any], str, List[Union[Callable[..., Any], str]], 1140 # Dict[Hashable, Union[Union[Callable[..., Any], str], 1141 # List[Union[Callable[..., Any], str]]]]]&quot;; expected 1142 # &quot;Callable[[Any], Any]&quot; -&gt; 1143 mapped = lib.map_infer( 1144 values, 1145 f, # type: ignore[arg-type] 1146 convert=self.convert_dtype, 1147 ) 1149 if len(mapped) and isinstance(mapped[0], ABCSeries): 1150 # GH#43986 Need to do list(mapped) in order to get treated as nested 1151 # See also GH#25959 regarding EA support 1152 return obj._constructor_expanddim(list(mapped), index=obj.index) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\_libs\lib.pyx:2870, in pandas._libs.lib.map_infer() Input In [35], in preprocess(text) 1 def preprocess(text): ----&gt; 2 doc = nlp(text, disable=['ner', 'parser']) 3 lemmas = [token.lemma_ for token in doc] 4 commands = get_commands(&quot;command-words.txt&quot;) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\spacy\language.py:1025, in Language.__call__(self, text, disable, component_cfg) 1023 raise ValueError(Errors.E109.format(name=name)) from e 1024 except Exception as e: -&gt; 1025 error_handler(name, proc, [doc], e) 1026 if doc is None: 1027 raise ValueError(Errors.E005.format(name=name)) File 
~\AppData\Local\Programs\Python\Python310\lib\site-packages\spacy\util.py:1630, in raise_error(proc_name, proc, docs, e) 1629 def raise_error(proc_name, proc, docs, e): -&gt; 1630 raise e File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\spacy\language.py:1020, in Language.__call__(self, text, disable, component_cfg) 1018 error_handler = proc.get_error_handler() 1019 try: -&gt; 1020 doc = proc(doc, **component_cfg.get(name, {})) # type: ignore[call-arg] 1021 except KeyError as e: 1022 # This typically happens if a component is not initialized 1023 raise ValueError(Errors.E109.format(name=name)) from e File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\spacy\pipeline\trainable_pipe.pyx:56, in spacy.pipeline.trainable_pipe.TrainablePipe.__call__() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\spacy\util.py:1630, in raise_error(proc_name, proc, docs, e) 1629 def raise_error(proc_name, proc, docs, e): -&gt; 1630 raise e File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\spacy\pipeline\trainable_pipe.pyx:52, in spacy.pipeline.trainable_pipe.TrainablePipe.__call__() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\spacy\pipeline\tok2vec.py:125, in Tok2Vec.predict(self, docs) 123 width = self.model.get_dim(&quot;nO&quot;) 124 return [self.model.ops.alloc((0, width)) for doc in docs] --&gt; 125 tokvecs = self.model.predict(docs) 126 batch_id = Tok2VecListener.get_batch_id(docs) 127 for listener in self.listeners: File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\model.py:315, in Model.predict(self, X) 311 def predict(self, X: InT) -&gt; OutT: 312 &quot;&quot;&quot;Call the model's `forward` function with `is_train=False`, and return 313 only the output, instead of the `(output, callback)` tuple. 
314 &quot;&quot;&quot; --&gt; 315 return self._func(self, X, is_train=False)[0] File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\chain.py:54, in forward(model, X, is_train) 52 callbacks = [] 53 for layer in model.layers: ---&gt; 54 Y, inc_layer_grad = layer(X, is_train=is_train) 55 callbacks.append(inc_layer_grad) 56 X = Y File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\model.py:291, in Model.__call__(self, X, is_train) 288 def __call__(self, X: InT, is_train: bool) -&gt; Tuple[OutT, Callable]: 289 &quot;&quot;&quot;Call the model's `forward` function, returning the output and a 290 callback to compute the gradients via backpropagation.&quot;&quot;&quot; --&gt; 291 return self._func(self, X, is_train=is_train) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\with_array.py:40, in forward(model, Xseq, is_train) 38 return model.layers[0](Xseq, is_train) 39 else: ---&gt; 40 return _list_forward(cast(Model[List2d, List2d], model), Xseq, is_train) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\with_array.py:75, in _list_forward(model, Xs, is_train) 73 lengths = layer.ops.asarray1i([len(seq) for seq in Xs]) 74 Xf = layer.ops.flatten(Xs, pad=pad) # type: ignore ---&gt; 75 Yf, get_dXf = layer(Xf, is_train) 77 def backprop(dYs: List2d) -&gt; List2d: 78 dYf = layer.ops.flatten(dYs, pad=pad) # type: ignore File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\model.py:291, in Model.__call__(self, X, is_train) 288 def __call__(self, X: InT, is_train: bool) -&gt; Tuple[OutT, Callable]: 289 &quot;&quot;&quot;Call the model's `forward` function, returning the output and a 290 callback to compute the gradients via backpropagation.&quot;&quot;&quot; --&gt; 291 return self._func(self, X, is_train=is_train) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\chain.py:54, in forward(model, X, is_train) 52 callbacks = [] 53 for layer 
in model.layers: ---&gt; 54 Y, inc_layer_grad = layer(X, is_train=is_train) 55 callbacks.append(inc_layer_grad) 56 X = Y File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\model.py:291, in Model.__call__(self, X, is_train) 288 def __call__(self, X: InT, is_train: bool) -&gt; Tuple[OutT, Callable]: 289 &quot;&quot;&quot;Call the model's `forward` function, returning the output and a 290 callback to compute the gradients via backpropagation.&quot;&quot;&quot; --&gt; 291 return self._func(self, X, is_train=is_train) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\residual.py:40, in forward(model, X, is_train) 37 else: 38 return d_output + dX ---&gt; 40 Y, backprop_layer = model.layers[0](X, is_train) 41 if isinstance(X, list): 42 return [X[i] + Y[i] for i in range(len(X))], backprop File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\model.py:291, in Model.__call__(self, X, is_train) 288 def __call__(self, X: InT, is_train: bool) -&gt; Tuple[OutT, Callable]: 289 &quot;&quot;&quot;Call the model's `forward` function, returning the output and a 290 callback to compute the gradients via backpropagation.&quot;&quot;&quot; --&gt; 291 return self._func(self, X, is_train=is_train) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\chain.py:54, in forward(model, X, is_train) 52 callbacks = [] 53 for layer in model.layers: ---&gt; 54 Y, inc_layer_grad = layer(X, is_train=is_train) 55 callbacks.append(inc_layer_grad) 56 X = Y File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\model.py:291, in Model.__call__(self, X, is_train) 288 def __call__(self, X: InT, is_train: bool) -&gt; Tuple[OutT, Callable]: 289 &quot;&quot;&quot;Call the model's `forward` function, returning the output and a 290 callback to compute the gradients via backpropagation.&quot;&quot;&quot; --&gt; 291 return self._func(self, X, is_train=is_train) File 
~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\chain.py:54, in forward(model, X, is_train) 52 callbacks = [] 53 for layer in model.layers: ---&gt; 54 Y, inc_layer_grad = layer(X, is_train=is_train) 55 callbacks.append(inc_layer_grad) 56 X = Y [... skipping similar frames: Model.__call__ at line 291 (1 times)] File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\chain.py:54, in forward(model, X, is_train) 52 callbacks = [] 53 for layer in model.layers: ---&gt; 54 Y, inc_layer_grad = layer(X, is_train=is_train) 55 callbacks.append(inc_layer_grad) 56 X = Y File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\model.py:291, in Model.__call__(self, X, is_train) 288 def __call__(self, X: InT, is_train: bool) -&gt; Tuple[OutT, Callable]: 289 &quot;&quot;&quot;Call the model's `forward` function, returning the output and a 290 callback to compute the gradients via backpropagation.&quot;&quot;&quot; --&gt; 291 return self._func(self, X, is_train=is_train) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\layers\maxout.py:49, in forward(model, X, is_train) 47 W = model.get_param(&quot;W&quot;) 48 W = model.ops.reshape2f(W, nO * nP, nI) ---&gt; 49 Y = model.ops.gemm(X, W, trans2=True) 50 Y += model.ops.reshape1f(b, nO * nP) 51 Z = model.ops.reshape3f(Y, Y.shape[0], nO, nP) File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\thinc\backends\numpy_ops.pyx:94, in thinc.backends.numpy_ops.NumpyOps.gemm() File ~\AppData\Local\Programs\Python\Python310\lib\site-packages\blis\py.pyx:79, in blis.py.gemm() MemoryError: Unable to allocate 7.87 GiB for an array with shape (7331886, 288) and data type float32 </code></pre> <p>I have been trying to figure out the issue for a while and was wondering if anyone knew how to fix this issue? I thought disabling certain components would help but that doesn't seem to be the case. Any suggestion would be greatly appreciated!</p>
<p>I don't see the point of processing ~40m characters as a single string. Do lines separated by <code>\n</code> form logical units? If so, read the string line by line and process each line using <code>pipe()</code>.</p> <pre><code>text = df.iloc[161][1] lines = text.split('\n') processed_lines = nlp.pipe(lines, disable=['ner', 'parser', &quot;textcat&quot;]) # get for example lemmas, nested by line lemmas_per_line = [[tok.lemma_ for tok in line] for line in processed_lines] # or if you need them as a flat list lemmas_flat = [lem for line in lemmas_per_line for lem in line] </code></pre> <p>Note that even when using the faster <code>pipe()</code> I wouldn't expect spaCy to process more than ~50k characters per second, so this should take at least 10-12 minutes, or possibly much longer depending on your PC and the model used. If you want a progress bar, you can use <a href="https://tqdm.github.io/" rel="nofollow noreferrer">tqdm</a>:</p> <pre><code>from tqdm import tqdm ... processed_lines = tqdm(nlp.pipe(lines, disable=['ner', 'parser', &quot;textcat&quot;])) ... </code></pre>
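A further memory saving, if the log also exists as a file on disk (a sketch; the path and helper name are assumptions, not from the question): stream the lines lazily into `pipe()` with a generator, so the full ~40M-character string never has to sit in memory at once.

```python
def iter_log_lines(path):
    # Yield one line at a time; the whole log is never loaded at once
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield line.rstrip("\n")

# nlp.pipe() accepts any iterable of strings and consumes it batch by batch:
# docs = nlp.pipe(iter_log_lines("build.log"), disable=['ner', 'parser', 'textcat'])
```

Because `pipe()` consumes the iterable lazily, this combines with the line-by-line approach above without any extra memory cost.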
python|nlp|spacy
1
501
55,791,341
Persistent storage of data?
<p>My use case needs to store the data on a disk immediately when the data is available. I'm using a Raspberry Pi and a few lasers. Once a laser is activated/deactivated, a timestamp is taken and it should be stored on the disk. Data is only stored when the lasers are "armed". They can also be in an "idle" state (they're still working, but timestamps are ignored). Also, lasers can be armed/disarmed multiple times. </p> <p>What would be the most efficient way of doing this? Using plain csv/xml/txt or something else? The SD card used in the RPi is limited to 8GB. </p> <p>Another question: when using the <code>open()</code> method, should I <code>close()</code> the file once I have executed the <code>write()</code> method, or should I keep it open as long as the script itself is running (the script runs all the time until the user decides to quit)?</p>
<p>Sounds like Python?</p> <p>If so, you can write to your file using <code>with</code>:</p> <blockquote> <p><code>with open('/path', 'w') as f: f.write('stuff')</code></p> </blockquote> <p>and the file will close automatically when execution exits the block.</p> <p>However, regarding your other questions, it depends on your use case. Why does it need to be available immediately? Will another process be reading it? How quickly will this be happening? Are there any other bits of data you need to save along with the timestamp - presumably whether the laser is on or off at that time?</p> <p>Likely, a good solution for you would be a lightweight database such as SQLite. The storage on disk is approximately what it would be in a "flat" file, such as the .txt or .csv you reference. It will be fast. And it eliminates concern about managing the actual writing.</p>
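The SQLite route mentioned above can be sketched as follows (the table name, column names, and file path are assumptions, not from the question):

```python
import sqlite3
import time

def open_log(path):
    # One connection for the lifetime of the script
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS laser_events ("
        " ts REAL NOT NULL,"          # event timestamp (seconds since epoch)
        " laser_id INTEGER NOT NULL,"
        " state TEXT NOT NULL)"       # e.g. 'activated' / 'deactivated'
    )
    return conn

def record_event(conn, laser_id, state, ts=None):
    # Commit immediately so the row survives a sudden power loss
    conn.execute(
        "INSERT INTO laser_events (ts, laser_id, state) VALUES (?, ?, ?)",
        (ts if ts is not None else time.time(), laser_id, state),
    )
    conn.commit()
```

Usage: call `conn = open_log("events.db")` once at startup, `record_event(conn, 1, "activated")` whenever an armed laser fires, and `conn.close()` when the user quits. Committing per event trades a little throughput for durability, which fits the "store immediately" requirement.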
python-2.7|persistence
0
502
73,468,846
Django: typehinting backward / related_name / ForeignKey relationships
<p>Let's say we have the following models:</p> <pre><code>class Site(models.Model): # This is Django's built-in Site model pass class Organization(models.Model): site = models.OneToOneField(Site) </code></pre> <p>And if I use this somewhere in some other class:</p> <pre><code>organization = self.site.organization </code></pre> <p>Then mypy complains:</p> <pre><code>Site has no attribute &quot;organization&quot; </code></pre> <p>How can I make mypy happy here?</p>
<p>Django adds backwards relations at runtime which aren't caught by <code>mypy</code> which only does static analysis.</p> <p>To make <code>mypy</code> happy (and to make it work with your editor's autocomplete) you need to add an explicit type hint to <code>Site</code>:</p> <pre class="lang-py prettyprint-override"><code>class Site(models.Model): organization: &quot;Organization&quot; class Organization(models.Model): site = models.OneToOneField(Site) </code></pre> <p>Using quotes around the type is needed since we are doing a <a href="https://peps.python.org/pep-0484/#forward-references" rel="nofollow noreferrer">forward reference</a> to <code>Organization</code> before it has been defined.</p> <p>For foreign keys and many-to-many relationships, you can do the same thing, but using a <code>QuerySet</code> type hint instead:</p> <pre class="lang-py prettyprint-override"><code>class Organization(models.Model): site = models.OneToOneField(Site) employees: models.QuerySet[&quot;Employee&quot;] class Employee(models.Model): organization = models.ForeignKey( Organization, on_delete=models.CASCADE, related_name=&quot;employees&quot;, ) </code></pre> <hr /> <p>EDIT: There is a <a href="https://pypi.org/project/django-stubs/" rel="nofollow noreferrer">django-stubs</a> package which is meant to integrate with <code>mypy</code>, however I haven't used it personally. It may provide a solution for this without having to explicitly add type hints to models.</p>
python|django|type-hinting|django-stubs
1
503
49,833,144
Efficient way of replacing values from a data set with values from another one
<p>I have this code:</p> <pre><code>for index, row in df.iterrows(): for index1, row1 in df1.iterrows(): if df['budget'].iloc[index] == 0: if df['production_companies'].iloc[index] == df1['production_companies'].iloc[index1] and df['release_date'].iloc[index].year == df1['release_year'].iloc[index1] : df['budget'].iloc[index] = df1['mean'].iloc[index1] </code></pre> <p>It works, but it would take too long to finish. How can I make it run faster? I also tried:</p> <pre><code>df.where((df['budget'] != 0 and df['production_companies'] != df1['production_companies'] and df['release_date'] != df1['release_year']), other = pd.replace(to_replace = df['budget'], value = df1['mean'], inplace = True)) </code></pre> <p>It should be faster but it doesn't work. How do I achieve this? Thank you!</p> <p><code>df</code> looks like this: </p> <pre><code>budget; production_companies; release_date ;title 0; Villealfa Filmproduction Oy ;10/21/1988; Ariel 0; Villealfa Filmproduction Oy ;10/16/1986; Shadows in Paradise 4000000; Miramax Films; 12/25/1995; Four Rooms 0; Universal Pictures; 10/15/1993; Judgment Night 42000; inLoops ;1/1/2006; Life in Loops (A Megacities RMX) ... </code></pre> <p>and <code>df1</code>: </p> <pre><code>production_companies; release_year; mean; Metro-Goldwyn-Mayer (MGM); 1998; 17500000 Metro-Goldwyn-Mayer (MGM); 1999; 12500000 Metro-Goldwyn-Mayer (MGM); 2000; 12000000 Metro-Goldwyn-Mayer (MGM) ;2001 ;43500000 Metro-Goldwyn-Mayer (MGM); 2002 ;12000000 Metro-Goldwyn-Mayer (MGM) ;2003; 36000000 Metro-Goldwyn-Mayer (MGM); 2004 ;27500000 ... </code></pre> <p>I want to replace the value 0 from <code>df</code> with the "mean" value from <code>df1</code> if the year and the production company are the same. </p>
<p>Get rid of all of the loops, you can accomplish this efficiently with a merge. Here I provided some example data, since none of the data you provided will actually merge. You want to make sure <code>release_date</code> in <code>df</code> is a datetime, if it isn't already. </p> <pre><code>import pandas as pd import numpy as np df = pd.DataFrame({'budget': [0, 100, 0, 1000, 0], 'production_company': ['Villealfa Filmproduction Oy', 'Villealfa Filmproduction Oy', 'Villealfa Filmproduction Oy', 'Miramax Films', 'Miramax Films'], 'release_date': ['10/21/1988', '10/18/1986', '12/25/1955', '1/1/2006', '4/13/2017'], 'title': ['AAA', 'BBB', 'CCC', 'DDD', 'EEE']}) df1 = pd.DataFrame({'production_companies': ['Villealfa Filmproduction Oy', 'Villealfa Filmproduction Oy', 'Villealfa Filmproduction Oy', 'Miramax Films', 'Miramax Films'], 'release_year': [1988, 1986, 1955, 2006, 2017], 'mean': [1000000, 2000000, 30000000, 4000000, 5000000]}) df['release_date'] = pd.to_datetime(df.release_date, format='%m/%d/%Y') # budget production_company release_date title #0 0 Villealfa Filmproduction Oy 1988-10-21 AAA #1 100 Villealfa Filmproduction Oy 1986-10-18 BBB #2 0 Villealfa Filmproduction Oy 1955-12-25 CCC #3 1000 Miramax Films 2006-01-01 DDD #4 0 Miramax Films 2017-04-13 EEE </code></pre> <p>Then you want to replace budget where it is 0 with the mean if production company and year match. 
So as a merge this is:</p> <pre><code>df.loc[df.budget==0, 'budget'] = (df.merge(df1, left_on=['production_company', df.release_date.dt.year], right_on=['production_companies', 'release_year'], how='left') .loc[df.budget==0, 'mean']) # budget production_company release_date title #0 1000000 Villealfa Filmproduction Oy 1988-10-21 AAA #1 100 Villealfa Filmproduction Oy 1986-10-18 BBB #2 30000000 Villealfa Filmproduction Oy 1955-12-25 CCC #3 1000 Miramax Films 2006-01-01 DDD #4 5000000 Miramax Films 2017-04-13 EEE </code></pre> <p>If you don't have <code>mean</code> data for a given production company and year, the <code>0</code>s in <code>budget</code> will be replaced with <code>np.NaN</code>, so you can either leave them or replace them back to 0 if you want. </p>
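The optional clean-up mentioned at the end (turning the unmatched `NaN`s back into 0) is a one-liner with `fillna`; a small sketch on a stand-alone Series:

```python
import pandas as pd

# Hypothetical post-merge budget column: NaN where no mean was found
budget = pd.Series([1000000.0, 100.0, float("nan"), 5000000.0])

# Replace the NaNs back with 0 if you prefer zeros over missing values
budget = budget.fillna(0)
print(budget.tolist())  # [1000000.0, 100.0, 0.0, 5000000.0]
```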
python|performance|pandas|numpy|dataframe
1
504
53,287,108
pass wx.grid to wx.Frame in wxPython
<p>All im trying to do is have 2 classes </p> <p>1- creates a grid</p> <p>2- takes the grid and puts it into a wx.notebook </p> <p>so basically one class makes the grid the other class takes the grid as parameter and add it to the wx.notebook</p> <p>but I keep getting an error that says</p> <pre><code> self.m_grid1 = wx.grid.Grid(self) TypeError: Grid(): arguments did not match any overloaded call: </code></pre> <p><code>overload 1: too many arguments</code> <code>overload 2: argument 1 has unexpected type 'reportGrid'</code></p> <p>and here is the code for the Grid class is called <strong>reportGrid</strong></p> <pre><code>class reportGrid (): def __init__( self, list): self.m_grid1 = wx.grid.Grid(self) self.m_grid1.Create(parent = None, id=wx.ID_ANY, pos=wx.DefaultPosition, size=wx.DefaultSize, style=wx.WANTS_CHARS, name="Grid") # Grid self.m_grid1.CreateGrid( 7, 18 ) self.m_grid1.EnableEditing( True ) self.m_grid1.EnableGridLines( True ) self.m_grid1.SetGridLineColour( wx.SystemSettings.GetColour( wx.SYS_COLOUR_WINDOWTEXT ) ) self.m_grid1.EnableDragGridSize( True ) self.m_grid1.SetMargins( 0, 0 ) # Columns self.m_grid1.EnableDragColMove( False ) self.m_grid1.EnableDragColSize( True ) self.m_grid1.SetColLabelSize( 30 ) self.m_grid1.SetColLabelAlignment( wx.ALIGN_CENTRE, wx.ALIGN_CENTRE ) # Rows self.m_grid1.EnableDragRowSize( True ) self.m_grid1.SetRowLabelSize( 80 ) self.m_grid1.SetRowLabelAlignment( wx.ALIGN_CENTRE, wx.ALIGN_CENTRE ) # Label Appearance self.m_grid1.SetColLabelValue(0, "Yield") self.m_grid1.SetColLabelValue(1, "64CU") self.m_grid1.SetColLabelValue(2, "Yield") self.m_grid1.SetColLabelValue(3, "60CU") self.m_grid1.SetColLabelValue(4, "Chain") self.m_grid1.SetColLabelValue(5, "Logic") self.m_grid1.SetColLabelValue(6, "Delay") self.m_grid1.SetColLabelValue(7, "BIST") self.m_grid1.SetColLabelValue(8, "CREST") self.m_grid1.SetColLabelValue(9, "HSIO") self.m_grid1.SetColLabelValue(10, "DC-Spec") self.m_grid1.SetColLabelValue(11, "HBM") 
self.m_grid1.SetColLabelValue(12, "OS") self.m_grid1.SetColLabelValue(13, "PS") self.m_grid1.SetColLabelValue(14, "Alarm") self.m_grid1.SetColLabelValue(15, "JTAG") self.m_grid1.SetColLabelValue(16, "Thermal IDD") self.m_grid1.SetColLabelValue(17, "Insuff Config") self.m_grid1.SetRowLabelValue(0, "Today") self.m_grid1.SetRowLabelValue(1, "WTD") self.m_grid1.SetRowLabelValue(2, "WW45") self.m_grid1.SetRowLabelValue(3, "WW44") self.m_grid1.SetRowLabelValue(4, "WW43") self.m_grid1.SetRowLabelValue(5, "Monthly") self.m_grid1.SetRowLabelValue(6, "QTD") # Cell Defaults for i in range(len(list)): for j in range(len(list[i])): self.m_grid1.SetCellValue(i,j, list[i][j]) self.m_grid1.SetDefaultCellAlignment( wx.ALIGN_LEFT, wx.ALIGN_TOP ) </code></pre> <p>and here the class that takes it as a parameter and suppose to create notebook </p> <pre><code>class reportFrame ( wx.Frame ): def __init__( self, parent , grid1): wx.Frame.__init__ ( self, parent, id = wx.ID_ANY, title = u"Report", pos = wx.DefaultPosition, size = wx.Size( 7990,210 ), style = wx.DEFAULT_FRAME_STYLE|wx.TAB_TRAVERSAL ) self.SetSizeHints( wx.DefaultSize, wx.DefaultSize ) bSizer6 = wx.BoxSizer( wx.VERTICAL ) self.m_notebook1 = wx.Notebook( self, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, 0 ) self.m_notebook1.SetBackgroundColour( wx.SystemSettings.GetColour( wx.SYS_COLOUR_INFOBK ) ) self.m_panel2 = wx.Panel( self.m_notebook1, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL ) bSizer14 = wx.BoxSizer( wx.HORIZONTAL ) bSizer14.Add( grid1, 0, wx.ALL, 5 ) self.m_panel2.SetSizer( bSizer14 ) self.m_panel2.Layout() bSizer14.Fit( self.m_panel2 ) self.m_notebook1.AddPage( self.m_panel2, u"a page", False ) self.m_panel3 = wx.Panel( self.m_notebook1, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.TAB_TRAVERSAL ) bSizer17 = wx.BoxSizer( wx.VERTICAL ) bSizer17.Add( grid1, 0, wx.ALL, 5 ) self.m_panel3.SetSizer( bSizer17 ) self.m_panel3.Layout() bSizer17.Fit( self.m_panel3 ) self.m_notebook1.AddPage( 
self.m_panel3, u"a page", True ) bSizer6.Add( self.m_notebook1, 1, wx.EXPAND |wx.ALL, 3 ) self.SetSizer( bSizer6 ) self.Layout() self.Centre( wx.BOTH ) self.Show(show=True) </code></pre>
<p>In <code>wx.grid.Grid(self)</code>, <code>self</code> must be of type wx.Window (or a subclass). In your code it is of type <code>reportGrid</code>.</p> <p>But <code>reportGrid</code> is neither wx.Window nor a subclass of wx.Window.</p> <p>If you want a page "pagegrid" (for example, of type wx.Panel or a subclass) of the wx.Notebook, then you can write</p> <pre><code>class reportGrid (wx.Panel): def __init__( self, parent, list): wx.Panel.__init__(self, parent) self.m_grid1 = wx.grid.Grid(self) </code></pre> <p>(note the added parent argument and <code>wx.Panel.__init__</code> call, which make <code>reportGrid</code> a real window) and inside your notebook definition</p> <pre><code>pagegrid = reportGrid(nb, data) nb.AddPage(pagegrid, "Grid Page") </code></pre> <p>where <code>data</code> is the list of cell values.</p>
python|python-3.x|wxwidgets|wxpython
1
505
65,361,807
select the rows of a table according to an id that is in a JSON in a column of the table
<p>I need to select the rows of a table according to the id in a JSON in one of the columns, using Pandas.</p> <p>Example:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">column_a</th> <th style="text-align: center;">column_b</th> <th style="text-align: left;">column_c</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">aaaa</td> <td style="text-align: center;">bbbbb</td> <td style="text-align: left;">{'id' : cc, 'name' : xx ...}</td> </tr> <tr> <td style="text-align: center;">xxxx</td> <td style="text-align: center;">yyyy</td> <td style="text-align: left;">{'id' : ff, 'name' : gg ...}</td> </tr> </tbody> </table> </div> <p>So I want to select all the rows where the id of the JSON in column_c is equal to 'cc', so the result will be:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">column_a</th> <th style="text-align: center;">column_b</th> <th style="text-align: left;">column_c</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">aaaa</td> <td style="text-align: center;">bbbbb</td> <td style="text-align: left;">{'id' : cc, 'name' : xx ...}</td> </tr> </tbody> </table> </div>
<p>If you have already loaded this into pandas, <code>df['column_c']</code> is a Series whose elements are dictionaries, so you need to extract the <code>id</code> key element-wise before comparing:</p> <pre><code>df[df['column_c'].apply(lambda d: d.get('id')) == 'cc'] </code></pre> <p>Note that <code>df['column_c']['id']</code> would not work: it would try to look up a row labelled <code>'id'</code> in the Series, not the dictionary key inside each element.</p>
python|json|pandas|dataframe
0
506
71,780,524
How to change decimal separator from dot to comma in Pandas when columns have NaN values?
<p>When I try to open my finished files in Excel, it changes my decimal numbers to dates. I tried to change the dot to a comma in the decimal numbers, and it works. I used this code to do it:</p> <pre class="lang-py prettyprint-override"><code>def convert_df(df): return df.to_csv(sep=';',decimal=',').encode('utf-8') </code></pre> <p>The problem is that I have some NaN values in my DataFrame. I changed the NaN values to '-' to make it look prettier. The function above does not change the dot to a comma in columns that contain this '-' value.</p> <p>I tried this code too:</p> <pre class="lang-py prettyprint-override"><code>DF['Age'].replace('.',',',inplace=True) </code></pre> <p>But this solution behaves the same way as the first one.</p> <p>Does anyone have a solution for this problem? Thanks for the help.</p>
<p>Here is one way to do it:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.DataFrame( { &quot;col1&quot;: [1.1, 1.2, 1.3], &quot;col2&quot;: [1.1, pd.NA, 1.3], } ) print(df) # Toy dataframe col1 col2 0 1.1 1.1 1 1.2 &lt;NA&gt; 2 1.3 1.3 </code></pre> <pre class="lang-py prettyprint-override"><code>df.fillna(&quot;-&quot;).applymap(lambda x: str(x).replace(&quot;.&quot;, &quot;,&quot;)).to_csv( path_or_buf=&quot;df.csv&quot;, sep=&quot;;&quot;, index=False ) </code></pre> <p>When you open <code>df.csv</code> in Excel:</p> <p><a href="https://i.stack.imgur.com/i4Dav.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i4Dav.png" alt="enter image description here" /></a></p>
python|pandas
1
507
62,687,732
Linking two files by ID, and then removing data values from one file by referencing the other in Python using DataFrames
<p>I don't think this problem is that complex, I'm just dumb and I'm not sure how to word my search.</p> <p>I have two files, and they are linked by a common ID. In one file (FileA), there is an upper year and a lower year listed in each row. In the other file (FileB), there is a range of years. I don't need the years defined by the intervals in FileA to appear in FileB. How do I remove them by referencing the common ID? It needs to be done per ID group, which adds to the complexity.</p> <blockquote> <p>File A:</p> <p>ID, uyear, lyear</p> <p>2341, 2005, 1995</p> <p>2341, 2013, 2010</p> </blockquote> <p>So I don't need the years from 1995 - 2005, and 2010 - 2013, for the ID 2341 in FileB.</p> <blockquote> <p>Example FileB:</p> <p>ID, year, price,</p> <p>4321, 1991, 2.45</p> <p>4321, 1992, 2.47</p> <p>4321, 1993, 3.4</p> <p>4321, 1994, 3.4</p> <p>4321, 1995, 2.34</p> <p>4321, 1996, 2.44</p> <p>3214, 1990, 2.33</p> <p>3214, 1991, 2.44</p> <p>3214, 1992, 2.55</p> </blockquote>
<p>I added some 2341 references in your file_b example to show that they would be filtered out:</p> <pre><code>import pandas as pd file_a = pd.DataFrame( data=[[2341, 2005, 1995], [2341, 2013, 2010]], columns=[&quot;id&quot;, &quot;uyear&quot;, &quot;year&quot;] ) file_b = pd.DataFrame( data=[[4321, 1991, 2.45], [4321, 1992, 2.47], [4321, 1993, 3.4], [4321, 1994, 3.4], [4321, 1995, 2.34], [4321, 1996, 2.44], [2341, 1994, 2.34], [2341, 1995, 2.34], [2341, 1996, 2.44], [3214, 1990, 2.33], [3214, 1991, 2.44], [3214, 1992, 2.55]], columns=[&quot;id&quot;, &quot;year&quot;, &quot;price&quot;] ) </code></pre> <p>Note that we'd expect one of the 2341s to be kept: 2341 for 1994. The other two rows fall into one of the ranges in file_a.</p> <pre><code>remove_indexes = (file_b .assign(file_b_index=lambda x: x.index) .merge(file_a, on=&quot;id&quot;, how=&quot;left&quot;) .query(&quot;year_x &gt;= year_y and year_x &lt;= uyear&quot;) .file_b_index) file_b[~file_b.index.isin(remove_indexes)].reset_index()[[&quot;id&quot;, &quot;year&quot;, &quot;price&quot;]] </code></pre> <p>Yields</p> <pre><code> id year price 0 4321 1991 2.45 1 4321 1992 2.47 2 4321 1993 3.40 3 4321 1994 3.40 4 4321 1995 2.34 5 4321 1996 2.44 6 2341 1994 2.34 7 3214 1990 2.33 8 3214 1991 2.44 9 3214 1992 2.55 </code></pre> <p>The basic idea is to determine which indexes from file_b you need removed (because they match on id and fall into at least one range), then remove rows from the original file by index.</p>
python-3.x|pandas
0
508
62,021,220
is there a way to make python type in google without getting into anything?
<p>so im trying to make a code that you copy what you want then you press the hotkey and i want python to open google and type there "what is the meaning of (what ever word you want) in Hebrew" and then close python after the code is complete is there a way to do that? this is the code:</p> <pre><code>from pynput.keyboard import Key, KeyCode, Listener import webbrowser from googlesearch import search import pyperclip def function_1(): """ One of your functions to be executed by a combination """ query='what is the mening of '+pyperclip.paste()+'in hebrew' for res in search(query, tld="co.in", num=10, stop=10, pause=2): webbrowser.open(res) combination_to_function = { frozenset([Key.delete, KeyCode(vk=67)]): function_1 # delete + c } pressed_vks = set() def get_vk(key): """ Get the virtual key code from a key. These are used so case/shift modifications are ignored. """ return key.vk if hasattr(key, 'vk') else key.value.vk def is_combination_pressed(combination): """ Check if a combination is satisfied using the keys pressed in pressed_vks """ return all([get_vk(key) in pressed_vks for key in combination]) def on_press(key): """ When a key is pressed """ vk = get_vk(key) # Get the key's vk pressed_vks.add(vk) # Add it to the set of currently pressed keys for combination in combination_to_function: # Loop through each combination if is_combination_pressed(combination): # Check if all keys in the combination are pressed combination_to_function[combination]() # If so, execute the function def on_release(key): """ When a key is released """ vk = get_vk(key) # Get the key's vk pressed_vks.remove(vk) # Remove it from the set of currently pressed keys with Listener(on_press=on_press, on_release=on_release) as listener: listener.join() </code></pre>
<p>If you simply need to open the browser and execute a search, you can use this:</p> <pre class="lang-py prettyprint-override"><code>import webbrowser def search_google(subject): webbrowser.open("https://www.google.com/search?q=What is the meaning of " + subject + " in Hebrew") search_google("Sample") </code></pre> <p>Additional parameters can also be used; take a look at <a href="https://moz.com/blog/the-ultimate-guide-to-the-google-search-parameters" rel="nofollow noreferrer">this blog post by Pete Watson-Wailes</a>.</p>
python
1
509
60,672,863
Biopython PDBIO assembly chain IDs
<p>I am using Bio.PDB to parse structures in mmCIF and PDB format. I realised that PDBIO does not deal well with two-character chain identifiers (like ‘AA’ or ‘AB’) found in <strong>assembly</strong> structures. I have made a slight change to the code that fits me. Attached you will find the modified PDBIO module. What it does basically is checking the length of the chain identifier string and adds a space in front of it, if is a single character. The formatting string is modified accordingly.</p> <p>These are my changes in Bio.PDB.PDBIO module. Please consider it putting it in a future update.</p> <p><strong>Modified:</strong></p> <p><code>_ATOM_FORMAT_STRING = "%s%5i %-4s%c%3s%s%4i%c %8.3f%8.3f%8.3f%s%6.2f %4s%2s%2s\n"</code></p> <p><strong>Modified:</strong></p> <pre><code>for chain in model.get_list(): if not select.accept_chain(chain): continue chain_id = chain.get_id() if len(chain_id)==1: #Added line chain_id = ' {}'.format(chain_id) #Added line </code></pre> <p><strong>Modified:</strong></p> <p><code>fp.write("TER %5i %3s %s%4i%c \n</code></p>
<p>Stackoverflow is a site to ask questions. What you are proposing is a change to BioPython software. Luckily, BioPython is open-source, so you can create a <a href="https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests" rel="nofollow noreferrer">pull request</a> so that your change can be added to the software.</p> <ul> <li>Go to <a href="https://github.com/biopython/biopython/blob/master/Bio/PDB/PDBIO.py" rel="nofollow noreferrer">https://github.com/biopython/biopython/blob/master/Bio/PDB/PDBIO.py</a></li> <li><p>Click on the arrow icon in the top right corner. This will create a <a href="https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/about-forks" rel="nofollow noreferrer">fork</a> of the BioPython repository </p> <p><a href="https://i.stack.imgur.com/tzXBA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tzXBA.png" alt="screenshot"></a></p></li> <li><p>Make the changes that you mentioned above in your fork and add a title and a description:</p> <p><a href="https://i.stack.imgur.com/qNRZ0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qNRZ0.png" alt="screenshot"></a></p></li> <li><p>Click on <code>propose file change</code>. You can now visually compare your modifications side by side.</p> <p><a href="https://i.stack.imgur.com/ktQmt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ktQmt.png" alt="screenshot"></a></p></li> <li><p>If everything looks OK, click on <code>create pull request</code>. This will send a pull request to the master branch of the BioPython repository. There it will be reviewed. If the authors of the BioPython software agree that this is a useful change, they will merge it into the software.</p> <p><a href="https://i.stack.imgur.com/mMyzB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mMyzB.png" alt="screenshot"></a></p></li> </ul>
biopython|pdb
0
510
60,608,693
Django, get value of the ChoiceField form
<p>I have a form which contain a choicefield of items on my database. My question is How can I get the selected value of my choicheField?</p> <p>forms.py</p> <pre><code>class list_data(forms.Form): message = forms.CharField(widget=forms.Textarea) def __init__(self, author, *args, **kwargs): super(list_data, self).__init__(*args, **kwargs) self.fields['List'] = forms.ChoiceField( choices=[(o.id, str(o)) for o in List.objects.filter(author=author)] ) </code></pre> <p>views.py</p> <pre><code>def sms(request): form2 = list_data(author=request.user) if request.method == "POST": form2 = list_data(request.POST) if form2.is_valid(): choice = form2.cleaned_data["List"] print(choice) else: return render(request, "data_list/sms.html", {"form2": form2}) return render(request, "data_list/sms.html", {"form2": form2}) </code></pre> <p>When I try to press the submit button it give me this error: </p> <pre><code> int() argument must be a string, a bytes-like object or a number, not 'QueryDict' </code></pre> <p>So I changed the <code>form2 = list_data(request.POST)</code> for <code>form2 = list_data(author=request.user)</code> the error is gone but it print nothing else.</p> <p>Thanks for helping</p> <p>models.py</p> <pre><code>class List(models.Model): item = models.CharField(max_length=100) content = models.TextField() site = models.CharField(max_length=11, choices=THE_SITE) content_list = models.TextField() author = models.ForeignKey(User, on_delete=models.CASCADE) def __str__(self): return self.item </code></pre>
<p>In case of a POST request, you pass <code>request.POST</code> as first parameter, and thus as <code>author</code>, and not as data. You can rewrite the view to:</p> <pre><code>def sms(request): if request.method == 'POST': form2 = <b>list_data(request.user, data=request.POST)</b> if form2.is_valid(): choice = form2.cleaned_data[&quot;List&quot;] print(choice) else: form2 = list_data(author=request.user) return render(request, &quot;data_list/sms.html&quot;, {&quot;form2&quot;: form2})</code></pre> <p>I would however advise using a <a href="https://docs.djangoproject.com/en/3.0/ref/forms/fields/#django.forms.ModelChoiceField" rel="nofollow noreferrer"><strong><code>ModelChoiceField</code></strong> [Django-doc]</a> here that will remove some boilerplate logic, and then you can work with model objects:</p> <pre><code>class ListDataForm(forms.Form): message = forms.CharField(widget=forms.Textarea) list = <b>forms.ModelChoiceField(</b>queryset=List.objects.none()<b>)</b> def __init__(self, author, *args, **kwargs): super(ListDataForm, self).__init__(*args, **kwargs) self.fields['list']<b>.queryset</b> = List.objects.filter(author=author)</code></pre> <p>Note that according to the <a href="https://www.python.org/dev/peps/pep-0008/" rel="nofollow noreferrer"><strong>PEP-0008</strong> style guidelines</a>, class names should be written in CapWords (PascalCase, so <code>ListDataForm</code>, not <code>list_data</code>), and attributes should be written in snake_case, so <code>list</code>, not <code>List</code>.</p>
python|django|forms
2
511
60,779,818
Set conditional constraint, Pulp
<p>First time using pulp and I am trying to set a conditional constraint on a production problem I am working on. Unfortunately I cannot find any examples in the documentation as to how to do so either. </p> <p>The objective function is to maximise revenue by informing monthly plant production on which product to produce based on the forecast price per product minus costs (Naturally there are lots of other constraints omitted here, otherwise it would be far simpler). </p> <p>For the below data I need to set the following constraints:</p> <ol> <li>A plant can only produce a single product each month, despite having the capability to produce multiple products. </li> </ol> <p>that limits a plant to producing only ONE product in a month. I am quite new to pulp but despite trawling the documentation and S.O. I cannot find an example implementation. </p> <p>Production data:</p> <p><a href="https://i.stack.imgur.com/ztvcY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ztvcY.png" alt="production data"></a></p> <p><a href="https://i.stack.imgur.com/RTEav.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RTEav.png" alt="product forecast price data"></a></p> <p>My code so far </p> <pre><code># omitted data etl logic - it is formatted as per the above images # Get production info plants = LpVariable.dicts('plants', ((month, plant, product) for month, plant, product in wp_df.index if month &gt;= 4), lowBound = 0, cat='Integer') # Get forecast price info by product forecast_prices = LpVariable.dicts('price_by_prod', ((month, contract) for month, contract in fcst_diffs.index if month &gt;= 4), lowBound = 0, cat='Integer') # Prod costs for each month, plant. 
costs = LpVariable.dicts( 'prod costs', ((month, plant) for month, plant in prod_costs_df.index), lowBound=0, cat='Integer') # Define problem model = LpProblem('Revenue Maximising Production Optimisation', LpMaximize) # Define objective function model += lpSum( [plants[m,w,g] * wp_df.loc[(m,w,g), 'production_output'] for m, w, g in wp_df.index] + [costs[m, w] * costs_df.loc[(m,w), 'prod_costs_usd'] for m, w in prod_costs_df.index] ) </code></pre> <p>I am omitting constraints for now as I have quite a few to set. </p> <p>Appreciate the help, thank you.</p>
<p>Introduce a set of binary variables indexed by {plant, product, month}, which determine whether plant <code>i</code> is being used to make product <code>j</code> during month <code>k</code>. The variable will be <code>1</code> when this is true, and <code>0</code> otherwise.</p> <p>You'll then need to add constraints so that the <em>amount</em> of product <code>j</code> being produced in plant <code>i</code> during month <code>k</code> is limited. Typically this is done with a linking constraint that limits this <em>amount</em> variable to be <code>&lt;= b*C</code>, where <code>b</code> is the binary variable and <code>C</code> is the capacity of that plant to make that product.</p> <p>Finally you need to constrain each plant to only make a single product during each month. For each month, and for each plant, the sum of these binary variables across all the products is limited to be <code>&lt;= 1</code>.</p> <p>Good luck!</p>
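To make the two constraint families concrete, here is a plain-Python sketch (no solver involved; the plant/product/month names and capacities are purely illustrative, not from the question) that checks a candidate assignment against exactly those constraints:

```python
# b[(i, j, k)] is the binary "plant i makes product j in month k" variable;
# amount[(i, j, k)] is the corresponding production quantity.
plants, products, months = ["P1", "P2"], ["g1", "g2"], [4, 5]
capacity = {(i, j): 100 for i in plants for j in products}  # made-up capacities

def feasible(b, amount):
    # Linking constraint: amount <= b * C, so production is only
    # possible where the binary variable is switched on.
    for key in amount:
        if amount[key] > b[key] * capacity[key[:2]]:
            return False
    # Single-product constraint: sum over products of b[(i, j, k)] <= 1
    # for each plant i and month k.
    for i in plants:
        for k in months:
            if sum(b[(i, j, k)] for j in products) > 1:
                return False
    return True

# Candidate: P1 makes g1 in both months, P2 stays idle.
b = {(i, j, k): 0 for i in plants for j in products for k in months}
amount = dict(b)  # copy while everything is still zero
b[("P1", "g1", 4)] = b[("P1", "g1", 5)] = 1
amount[("P1", "g1", 4)] = amount[("P1", "g1", 5)] = 80
```

In the PuLP model these same two families would be written with `LpVariable.dicts(..., cat='Binary')` and `lpSum` constraints; the brute-force check above just shows what the solver is being asked to enforce.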
python|linear-programming|pulp
0
512
66,042,721
pandas grouping and visualization
<p>I have to do some analysis using Python3 and pandas with a dataset which is shown as a toy example-</p> <pre><code>data ''' location importance agent count 0 London Low chatbot 2 1 NYC Medium chatbot 1 2 London High human 3 3 London Low human 4 4 NYC High human 1 5 NYC Medium chatbot 2 6 Melbourne Low chatbot 3 7 Melbourne Low human 4 8 Melbourne High human 5 9 NYC High chatbot 5 ''' </code></pre> <p>My aim is to group the location and then count the number of Low, Medium and/or High 'importance' column for each location. So far, the code I have come up with is-</p> <pre><code>data.groupby(['location', 'importance']).aggregate(np.size) ''' agent count location importance London High 1 1 Low 2 2 Melbourne High 1 1 Low 2 2 NYC High 2 2 Medium 2 2 ''' </code></pre> <p>This grouping and count aggregation contains index as the grouping objects-</p> <pre><code>data.groupby(['location', 'importance']).aggregate(np.size).index </code></pre> <p>I don't know how to proceed next? Also, how can I visualize this?</p> <p>Help?</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a> with <code>aggfunc='sum'</code> to aggregate duplicates, and then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>DataFrame.plot</code></a>:</p> <pre><code>df = data.pivot_table(index='location', columns='importance', values='count', aggfunc='sum') df.plot() </code></pre> <p>If you need counts of <code>location</code>/<code>importance</code> pairs, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>crosstab</code></a>:</p> <pre><code>df = pd.crosstab(data['location'], data['importance']) df.plot() </code></pre>
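Rebuilding the toy frame from the question, both variants run end-to-end as a sketch (the `.plot()` calls are left out here so it stays headless); `crosstab` answers "how many rows per pair", while `pivot_table` with `aggfunc='sum'` totals the `count` column per pair:

```python
import pandas as pd

data = pd.DataFrame({
    "location":   ["London", "NYC", "London", "London", "NYC",
                   "NYC", "Melbourne", "Melbourne", "Melbourne", "NYC"],
    "importance": ["Low", "Medium", "High", "Low", "High",
                   "Medium", "Low", "Low", "High", "High"],
    "count":      [2, 1, 3, 4, 1, 2, 3, 4, 5, 5],
})

# Number of rows per (location, importance) pair.
ct = pd.crosstab(data["location"], data["importance"])

# Total of the 'count' column per (location, importance) pair.
pt = data.pivot_table(index="location", columns="importance",
                      values="count", aggfunc="sum")
```

Either wide table can be fed straight to `.plot()` (or `.plot.bar()`) for visualization.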
python|pandas
2
513
72,510,330
How to get first element of a list inside dictionary and add to Pandas Dataframe column in Python?
<p>I have a dictionary like this:</p> <pre><code>dict = {&quot;key 1&quot;: [&quot;val 1&quot;, &quot;val 2&quot;], &quot;key 2&quot;: [&quot;val 3&quot;, &quot;val 4&quot;, &quot;val 5&quot;], &quot;key 3&quot;: [&quot;val 6&quot;, &quot;val 7&quot;], ... } </code></pre> <p>I also have a pandas dataframe that contains all the keys like this:</p> <pre><code> key 0 key 1 1 key 2 2 key 3 ... </code></pre> <p>I need to add a new column to the dataframe called first_key that takes the first element of the list inside the dictionary for each key in the dict, so it ends up like this:</p> <pre><code> key first_key 0 key 1 val 1 1 key 2 val 3 2 key 3 val 6 ... </code></pre> <p>which I have had some trouble with... doing something like this doesn't work:</p> <pre><code>df['first_key'] = df['key'].map(dict[WHAT HERE][0]) </code></pre> <p>:D</p>
<p>Try:</p> <pre class="lang-py prettyprint-override"><code>dct = { &quot;key 1&quot;: [&quot;val 1&quot;, &quot;val 2&quot;], &quot;key 2&quot;: [&quot;val 3&quot;, &quot;val 4&quot;, &quot;val 5&quot;], &quot;key 3&quot;: [&quot;val 6&quot;, &quot;val 7&quot;], } df[&quot;first_key&quot;] = df[&quot;key&quot;].apply(dct.get).str[0] print(df) </code></pre> <p>Prints:</p> <pre class="lang-none prettyprint-override"><code> key first_key 0 key 1 val 1 1 key 2 val 3 2 key 3 val 6 </code></pre> <hr /> <p>Or:</p> <pre class="lang-py prettyprint-override"><code>df[&quot;first_key&quot;] = df[&quot;key&quot;].map(dct).str[0] </code></pre>
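One behavioural detail worth knowing (an observation about pandas, not something stated in the question): `.map(dct)` yields `NaN` for any key missing from the dict, and `.str[0]` propagates that `NaN` rather than raising, so the whole pipeline is safe against unmapped keys:

```python
import pandas as pd

dct = {"key 1": ["val 1", "val 2"], "key 2": ["val 3"]}
df = pd.DataFrame({"key": ["key 1", "key 2", "key 3"]})  # "key 3" is not in dct

# Missing keys become NaN instead of raising a KeyError.
df["first_key"] = df["key"].map(dct).str[0]
```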
python|pandas|list|dataframe|dictionary
2
514
63,192,438
Unbound local error does not occur consistently
<p>I am trying to add data to my SQlite3 table which runs on a function that takes two arguments to find a city and a neighbourhood <code>def scrapecafes(city, area)</code> Strangely, this works well with some of the arguments I am entering but not with others. For example if I run <code>scrapecafes(melbourne, thornbury)</code> the code works fine, but if I run <code>scrapecafes(melbourne, carlton</code> I get the following error: <code>UnboundLocalError: local variable 'lat' referenced before assignment</code></p> <p>I know the function definitely works, but I can't figure out why I am getting the UnboundLocalError for some arguments but not for others. Here is the code:</p> <pre><code>import folium from bs4 import BeautifulSoup import requests from requests import get import sqlite3 import geopandas import geopy from geopy.geocoders import Nominatim from geopy.extra.rate_limiter import RateLimiter #cafeNames def scrapecafes(city, area): #url = 'https://www.broadsheet.com.au/melbourne/guides/best-cafes-thornbury' #go to the website url = f&quot;https://www.broadsheet.com.au/{city}/guides/best-cafes-{area}&quot; response = requests.get(url, timeout=5) soup_cafe_names = BeautifulSoup(response.content, &quot;html.parser&quot;) type(soup_cafe_names) cafeNames = soup_cafe_names.findAll('h2', attrs={&quot;class&quot;:&quot;venue-title&quot;, }) #scrape the elements cafeNamesClean = [cafe.text.strip() for cafe in cafeNames] #clean the elements #cafeNameTuple = [(cafe,) for cafe in cafeNamesCleans #print(cafeNamesClean) #addresses soup_cafe_addresses = BeautifulSoup(response.content, &quot;html.parser&quot;) type(soup_cafe_addresses) cafeAddresses = soup_cafe_addresses.findAll( attrs={&quot;class&quot;:&quot;address-content&quot; }) cafeAddressesClean = [address.text for address in cafeAddresses] #cafeAddressesTuple = [(address,) for address in cafeAddressesClean] #print(cafeAddressesClean) ##geocode addresses locator = Nominatim(user_agent=&quot;myGeocoder&quot;) geocode 
= RateLimiter(locator.geocode, min_delay_seconds=1) try: location = [] for item in cafeAddressesClean: location.append(locator.geocode(item)) lat = [loc.latitude for loc in location] long = [loc.longitude for loc in location] except: pass #zip up for table fortable = list(zip(cafeNamesClean, cafeAddressesClean, lat, long)) print(fortable) ##connect to database try: sqliteConnection = sqlite3.connect('25july_database.db') cursor = sqliteConnection.cursor() print(&quot;Database created and Successfully Connected to 25july_database&quot;) sqlite_select_Query = &quot;select sqlite_version();&quot; cursor.execute(sqlite_select_Query) record = cursor.fetchall() print(&quot;SQLite Database Version is: &quot;, record) cursor.close() except sqlite3.Error as error: print(&quot;Error while connecting to sqlite&quot;, error) #create table try: sqlite_create_table_query = ''' CREATE TABLE IF NOT EXISTS test555 ( name TEXT NOT NULL, address TEXT NOT NULL, latitude FLOAT NOT NULL, longitude FLOAT NOT NULL );''' cursor = sqliteConnection.cursor() print(&quot;Successfully Connected to SQLite&quot;) cursor.execute(sqlite_create_table_query) sqliteConnection.commit() print(&quot;SQLite table created&quot;) except sqlite3.Error as error: print(&quot;Error while creating a sqlite table&quot;, error) ##enter data into table try: sqlite_insert_name_param = &quot;&quot;&quot;INSERT INTO test555 (name, address, latitude, longitude) VALUES (?,?,?,?);&quot;&quot;&quot; cursor.executemany(sqlite_insert_name_param, fortable) sqliteConnection.commit() print(&quot;Total&quot;, cursor.rowcount, &quot;Records inserted successfully into table&quot;) sqliteConnection.commit() cursor.close() except sqlite3.Error as error: print(&quot;Failed to insert data into sqlite table&quot;, error) finally: if (sqliteConnection): sqliteConnection.close() print(&quot;The SQLite connection is closed&quot;) </code></pre>
<p>The problem is <code>geopy</code> doesn't have co-ordinates for <code>Carlton</code>. Hence, you should change your table schema and insert <code>null</code> in those cases.</p> <p>When <code>geopy</code> doesn't have data, it returns <code>None</code> and when try to call something on <code>None</code> it throws exception. You have to put the <code>try/except</code> block inside the <code>for</code> loop.</p> <pre><code>from bs4 import BeautifulSoup import requests from requests import get import sqlite3 import geopandas import geopy from geopy.geocoders import Nominatim from geopy.extra.rate_limiter import RateLimiter #cafeNames def scrapecafes(city, area): #url = 'https://www.broadsheet.com.au/melbourne/guides/best-cafes-thornbury' #go to the website url = f&quot;https://www.broadsheet.com.au/{city}/guides/best-cafes-{area}&quot; response = requests.get(url, timeout=5) soup_cafe_names = BeautifulSoup(response.content, &quot;html.parser&quot;) cafeNames = soup_cafe_names.findAll('h2', attrs={&quot;class&quot;:&quot;venue-title&quot;, }) #scrape the elements cafeNamesClean = [cafe.text.strip() for cafe in cafeNames] #clean the elements #cafeNameTuple = [(cafe,) for cafe in cafeNamesCleans #addresses soup_cafe_addresses = BeautifulSoup(response.content, &quot;html.parser&quot;) cafeAddresses = soup_cafe_addresses.findAll( attrs={&quot;class&quot;:&quot;address-content&quot; }) cafeAddressesClean = [address.text for address in cafeAddresses] #cafeAddressesTuple = [(address,) for address in cafeAddressesClean] ##geocode addresses locator = Nominatim(user_agent=&quot;myGeocoder&quot;) geocode = RateLimiter(locator.geocode, min_delay_seconds=1) lat = [] long = [] for item in cafeAddressesClean: try: location = locator.geocode(item.strip().replace(',','')) lat.append(location.latitude) long.append(location.longitude) except: lat.append(None) long.append(None) #zip up for table fortable = list(zip(cafeNamesClean, cafeAddressesClean, lat, long)) print(fortable) 
##connect to database try: sqliteConnection = sqlite3.connect('25july_database.db') cursor = sqliteConnection.cursor() print(&quot;Database created and Successfully Connected to 25july_database&quot;) sqlite_select_Query = &quot;select sqlite_version();&quot; cursor.execute(sqlite_select_Query) record = cursor.fetchall() print(&quot;SQLite Database Version is: &quot;, record) cursor.close() except sqlite3.Error as error: print(&quot;Error while connecting to sqlite&quot;, error) #create table try: sqlite_create_table_query = ''' CREATE TABLE IF NOT EXISTS test ( name TEXT NOT NULL, address TEXT NOT NULL, latitude FLOAT, longitude FLOAT );''' cursor = sqliteConnection.cursor() print(&quot;Successfully Connected to SQLite&quot;) cursor.execute(sqlite_create_table_query) sqliteConnection.commit() print(&quot;SQLite table created&quot;) except sqlite3.Error as error: print(&quot;Error while creating a sqlite table&quot;, error) ##enter data into table try: sqlite_insert_name_param = &quot;&quot;&quot;INSERT INTO test (name, address, latitude, longitude) VALUES (?,?,?,?);&quot;&quot;&quot; cursor.executemany(sqlite_insert_name_param, fortable) sqliteConnection.commit() print(&quot;Total&quot;, cursor.rowcount, &quot;Records inserted successfully into table&quot;) sqliteConnection.commit() cursor.close() except sqlite3.Error as error: print(&quot;Failed to insert data into sqlite table&quot;, error) finally: if (sqliteConnection): sqliteConnection.close() print(&quot;The SQLite connection is closed&quot;) scrapecafes('melbourne', 'carlton') </code></pre>
python|sqlite|beautifulsoup|geopy
1
515
67,814,858
Cannot create a PySimpleGUI table with my data
<p><strong>My table does not accept data in format, that I put in var <strong>dataT</strong></strong></p> <pre class="lang-py prettyprint-override"><code>import PySimpleGUI as sg dataT = [[''], [''], [''], [''], [''], [''], [''], [''], ['']] def edit(): sg.theme('Light Green 1') headings = ['CPF', 'NAME', 'ENDEREÇO', 'CITY', 'STATE', 'GENDER', 'EMAIL', 'BIRTH', 'FAQ'] # ------ Window Layout ------ layout = [ [sg.Table(values=dataT[1:][:], headings=headings, max_col_width=55, auto_size_columns=True, display_row_numbers=True, justification='center', key='-TABLE-', size=(920,390))], [sg.Button('Delete')], ] # ------ Create Window ------ window = sg.Window('MyTable', layout) # ------ Event Loop ------ while True: event, values = window.read() print(event, values) if event is None: break window.close() edit() </code></pre>
<p>There are 9 columns for headings,</p> <pre class="lang-py prettyprint-override"><code>headings = ['CPF', 'NAME', 'ENDEREÇO', 'CITY', 'STATE', 'GENDER', 'EMAIL', 'BIRTH', 'FAQ'] </code></pre> <p>Here the table data means 9 rows, each with only one column.</p> <pre class="lang-py prettyprint-override"><code>dataT = [[''], [''], [''], [''], [''], [''], [''], [''], ['']] </code></pre> <p>The option <code>size</code> may also be a problem; the documentation says <strong>'DO NOT USE! Use num_rows instead'</strong>.</p> <p>To avoid each column width exactly matching the length of its heading, set each column width with 2 extra chars.</p> <p>Character widths may not be exactly as expected for a non-monospace font, so set a monospace font with <code>sg.set_options</code>.</p> <p>Putting it all together, the code is as follows:</p> <pre class="lang-py prettyprint-override"><code>import PySimpleGUI as sg dataT = [ ['', '', '', '', '', '', '', '', ''], ] def edit(): sg.theme('LightGreen1') sg.set_options(font=(&quot;Courier New&quot;, 12)) headings = ['CPF', 'NAME', 'ENDEREÇO', 'CITY', 'STATE', 'GENDER', 'EMAIL', 'BIRTH', 'FAQ'] # ------ Window Layout ------ layout = [ [sg.Table(values=dataT, headings=headings, max_col_width=55, auto_size_columns=False, col_widths=list(map(lambda i:len(i)+2, headings)), display_row_numbers=True, justification='center', key='-TABLE-', num_rows=20)], [sg.Button('Delete')], ] # ------ Create Window ------ window = sg.Window('MyTable', layout) # ------ Event Loop ------ while True: event, values = window.read() print(event, values) if event is None: break window.close() edit() </code></pre> <p><a href="https://i.stack.imgur.com/ohr6U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ohr6U.png" alt="enter image description here" /></a></p>
python|pysimplegui
0
516
67,845,221
Webscraping with beautiful soup 4, class not working
<p>I'm trying to webscrape, as a personal excersie, the players data from this page: <a href="https://sofifa.com/players" rel="nofollow noreferrer">https://sofifa.com/players</a> So I want to grab the players ID which is in this kind of line of HTML:</p> <pre><code>&lt;td class = &quot;col col-pi&quot; data-col=&quot;pi&quot;&gt; 11111 &lt;/td&gt; </code></pre> <p>So what I do is this: First I get my soup</p> <pre><code>url = 'http://sofifa.com/players' def soup_making(url): my_page = requests.get(url) soup = bs(my_page.text, &quot;html.parser&quot;) return soup soup = soup_making(url) </code></pre> <p>The I try to do my <strong>scraping</strong> with find_all:</p> <pre><code>test = soup.find_all('td',{'class':'col col-pi'}) print(test) </code></pre> <p>And the output is [], this method has worked for other classes of the same page, but it doesn't work for this particular &quot;col col-pi&quot; as well as some others like &quot;col col-name&quot;, but if I scrape this:</p> <pre><code>&lt;td class = &quot;col col-ae&quot; data-col=&quot;ae&quot;&gt; 26 &lt;/td&gt; test = soup.find_all('td',{'class':'col col-ae'}) print(test) </code></pre> <p>This works, does anyone knows why is working with some clasess and not with others when I'm using the same method for both? Do you recomend a better way of doing it?</p> <p>Thanks for the answer @myz540 is so weird that is no picking all the td classes, here is an image of the source code I see: <a href="https://i.stack.imgur.com/dkVmr.png" rel="nofollow noreferrer">Example of the sofifa source code td classes</a></p>
<p>I went to the site and inspected the source. I copied your code and grabbed all the <code>td</code> elements but I did not find any with <code>class=&quot;col col-pi&quot;</code>.</p> <pre class="lang-py prettyprint-override"><code>soup = soup_making(url) tags = soup.find_all('td') all_td_classes = set() for tag in tags: for c in tag.attrs['class']: all_td_classes.add(c) print(all_td_classes) </code></pre> <p>Outputs:</p> <pre><code>{'col-oa', 'col-name', 'col-pt', 'col-vl', 'col', 'col-wg', 'col-comment', 'col-ae', 'col-tt', 'col-avatar'} False </code></pre> <p>Where are you seeing the player ID?</p> <p><a href="https://i.stack.imgur.com/BVG74.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BVG74.png" alt="enter image description here" /></a></p>
python|html|web-scraping|beautifulsoup|css-selectors
0
517
67,019,451
Is it possible to choose at runtime to import uic compiled files or dynamically load the ui with QUiLoader()?
<p>As stated in the <a href="https://doc.qt.io/qtforpython/tutorials/basictutorial/uifiles.html" rel="nofollow noreferrer">official documentation</a> there are 2 ways of importing <code>.ui</code> files in your code:</p> <ul> <li><a href="https://doc.qt.io/qtforpython/tutorials/basictutorial/uifiles.html#option-a-generating-a-python-class" rel="nofollow noreferrer">Option A: Generating a Python class</a></li> <li><a href="https://doc.qt.io/qtforpython/tutorials/basictutorial/uifiles.html#option-b-loading-it-directly" rel="nofollow noreferrer">Option B: Loading it directly</a></li> </ul> <p>In my project I'm using Option A, but now I'm wondering if it would be possible to choose <strong>at a project level</strong> Option A or Option B <strong>at runtime</strong>, because it would avoid having to compile the widgets after each change while in development</p>
<p>In the case of Qt for Python the option is to use <a href="https://doc.qt.io/qtforpython/PySide6/QtUiTools/loadUiType.html" rel="nofollow noreferrer"><code>loadUiType</code></a>, subclassing the returned Qt base class:</p> <pre class="lang-py prettyprint-override"><code>ui_class, qt_class = loadUiType(&quot;filename.ui&quot;) class FooWidget(qt_class): def __init__(self, parent=None): super().__init__(parent) self.ui = ui_class() self.ui.setupUi(self) </code></pre>
python|qt|pyside2
2
518
42,936,464
How to read into a pandas dataframe the following json?
<p>I have the following json:</p> <pre><code>[ [ { "A": "2017-02-02T11:57:41+0000", "B": "agent", "C": "hi how are you son." }, { "A": "2017-02-01T22:19:58+0000", "B": "user2", "C": "M contestan" }, { "A": "2017-02-01T22:19:42+0000", "B": "user2", "C": "preetty thanks you?" }, { "A": "2017-02-01T22:19:28+0000", "B": "user2", "C": "the cat sat over the fox" } ] ] </code></pre> <p>How can I compose it into a pandas dataframe like this?:</p> <pre><code>A B C 2017-02-02T11:57:41+0000 agent Hola Alex, si no has realizado la modificación de los datos afiliados, por favor confírmanos tu DNI, celular y operador para revisarlo. Gracias. .... 2017-02-01T16:22:30+0000 user1 Hola me han depositado un dinero a mi nombre, no tengo cuenta en este banco, puedo saber por aquí si ya puedo cobrar? DNI 42782263 gracias </code></pre> <p>I tried to build it with:</p> <pre><code>df = pd.DataFrame.apply(lambda x: map(x.from_records, json_path)) </code></pre> <p>And</p> <pre><code>df = pd.DataFrame('../path/file.json') </code></pre> <p>And with <code>read_json()</code>, However it is not working. Thus, How can I build the dataframe from the json?.</p>
<pre><code>In [17]: import json </code></pre> <p>Assuming you have the following JSON string:</p> <pre><code>In [18]: s Out[18]: '[[{"A": "2017-02-02T11:57:41+0000", "B": "agent", "C": "Hola Alex, si no has realizado la modificacin de los datos afiliados, por favor confrmanos tu DNI, celular y operador para revisarlo. Gracias."}, {"A": "2017-02-01T22:19:58+0000", "B": "user2", "C": "Me podran ayud ar?, estoy llamando al CC y no contestan"}, {"A": "2017-02-01T22:19:42+0000", "B": "user2", "C": "No me llega el sms con la clave token"}, { "A": "2017-02-01T22:19:28+0000", "B": "user2", "C": "Tengo problemas para hacer pagos de servicios desde la app"}, {"A": "2017-02-01T22:19:1 8+0000", "B": "user2", "C": "Buenas tardes"}], [{"A": "2017-02-01T22:19:12+0000", "B": "agent", "C": "Hola Alexander, as es, el dinero ya se encuentra disponible puedes acercarte a cualquiera de nuestras tiendas el nmero es 1703070024597. Buenas noches"}, {"A": "2017-02-01T16:22: 30+0000", "B": "user1", "C": "Hola me han depositado un dinero a mi nombre, no tengo cuenta en este banco, puedo saber por aqu si ya puedo c obrar? DNI 42782263 gracias"}]]' </code></pre> <p>you can parse it:</p> <pre><code>In [19]: data = json.loads(s) </code></pre> <p>and build a DataFrame:</p> <pre><code>In [31]: pd.DataFrame.from_records(np.concatenate(data)) Out[31]: A B C 0 2017-02-02T11:57:41+0000 agent Hola Alex, si no has realizado la mo... 1 2017-02-01T22:19:58+0000 user2 Me podran ayudar?, estoy llamando al... 2 2017-02-01T22:19:42+0000 user2 No me llega el sms con la clave token 3 2017-02-01T22:19:28+0000 user2 Tengo problemas para hacer pagos de ... 4 2017-02-01T22:19:18+0000 user2 Buenas tardes 5 2017-02-01T22:19:12+0000 agent Hola Alexander, as es, el dinero ya ... 6 2017-02-01T16:22:30+0000 user1 Hola me han depositado un dinero a m... </code></pre>
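If you'd rather not pull in NumPy just for the flattening step, `itertools.chain.from_iterable` does the same job on the nested list; a sketch using a trimmed-down version of the question's data:

```python
import json
from itertools import chain

import pandas as pd

s = '''[[{"A": "2017-02-02T11:57:41+0000", "B": "agent", "C": "hi how are you son."},
        {"A": "2017-02-01T22:19:58+0000", "B": "user2", "C": "M contestan"}],
       [{"A": "2017-02-01T22:19:28+0000", "B": "user2", "C": "the cat sat over the fox"}]]'''

data = json.loads(s)

# chain.from_iterable flattens the list of lists of dicts into one
# iterable of dicts, which from_records turns into rows.
df = pd.DataFrame.from_records(chain.from_iterable(data))
```

To load straight from a file instead of a string, replace `json.loads(s)` with `json.load(open(path))`.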
python|json|python-3.x|pandas
1
519
69,513,415
Python Selenium: extraction of rating given by individual reviewer
<p>I am trying to extract google reviews of a resturant using Python Selenium. I tried to extract the reviews posted by each reviewers. Here is my code:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.action_chains import ActionChains import time driver = webdriver.Chrome('') base_url = 'https://www.google.com/search?tbs=lf:1,lf_ui:9&amp;tbm=lcl&amp;sxsrf=AOaemvJFjYToqQmQGGnZUovsXC1CObNK1g:1633336974491&amp;q=10+famous+restaurants+in+Dunedin&amp;rflfq=1&amp;num=10&amp;sa=X&amp;ved=2ahUKEwiTsqaxrrDzAhXe4zgGHZPODcoQjGp6BAgKEGo&amp;biw=1280&amp;bih=557&amp;dpr=2#lrd=0xa82eac0dc8bdbb4b:0x4fc9070ad0f2ac70,1,,,&amp;rlfi=hd:;si:5749134142351780976,l,CiAxMCBmYW1vdXMgcmVzdGF1cmFudHMgaW4gRHVuZWRpbiJDUjEvZ2VvL3R5cGUvZXN0YWJsaXNobWVudF9wb2kvcG9wdWxhcl93aXRoX3RvdXJpc3Rz2gENCgcI5Q8QChgFEgIIFkiDlJ7y7YCAgAhaMhAAEAEQAhgCGAQiIDEwIGZhbW91cyByZXN0YXVyYW50cyBpbiBkdW5lZGluKgQIAxACkgESaXRhbGlhbl9yZXN0YXVyYW50mgEkQ2hkRFNVaE5NRzluUzBWSlEwRm5TVU56ZW5WaFVsOUJSUkFCqgEMEAEqCCIEZm9vZCgA,y,2qOYUvKQ1C8;mv:[[-45.8349553,170.6616387],[-45.9156414,170.4803685]]' driver.get(base_url) WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.XPATH,&quot;//div[./span[text()='Newest']]&quot;))).click() total_reviews_text =driver.find_element_by_xpath(&quot;//div[@class='review-score-container']//div//div//span//span[@class='z5jxId']&quot;).text num_reviews = int (total_reviews_text.split()[0]) all_reviews = WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div.gws-localreviews__google-review'))) time.sleep(2) total_reviews = len(all_reviews) while total_reviews &lt; num_reviews: driver.execute_script('arguments[0].scrollIntoView(true);', all_reviews[-1]) WebDriverWait(driver, 5, 
0.25).until_not(EC.presence_of_element_located((By.CSS_SELECTOR, 'div[class$=&quot;activityIndicator&quot;]'))) #all_reviews = driver.find_elements_by_css_selector('div.gws-localreviews__google-review') time.sleep(5) all_reviews = WebDriverWait(driver, 5).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, 'div.gws-localreviews__google-review'))) print(total_reviews) total_reviews +=5 review_info = driver.find_elements_by_xpath(&quot;//div[@class='PuaHbe']&quot;) for person in person_infos: rating = person.find_element_by_xpath(&quot;./span&quot;).get_attribute('aria-label') print(rating) </code></pre> <p>However, the above code produces/print 'none'. I am not sure where I made the mistake. Any help to fix the issue would be appreciated.</p>
<p>You are using a wrong XPath locator.<br /> Instead of</p> <pre class="lang-py prettyprint-override"><code>rating = person.find_element_by_xpath(&quot;./span&quot;).get_attribute('aria-label') </code></pre> <p>Try using</p> <pre class="lang-py prettyprint-override"><code>rating = person.find_element_by_xpath(&quot;./g-review-stars/span&quot;).get_attribute('aria-label') </code></pre>
python|python-3.x|selenium|selenium-webdriver|xpath
1
520
69,373,046
How do I extract a specific column from a dataset using pandas that's imported from a HTML file?
<pre><code>import requests import os import pandas as pd from bs4 import BeautifulSoup #Importing html df = pd.read_html(os.path.expanduser(&quot;~/Documents/HTMLSpider/HTMLSpider_test/spotgamma.html&quot;)) print (df['Latest Data']) </code></pre> <p>All of the documentation I can find online states that extracting a specific column from a dataset requires you to specify the name of the column header in square brackets, yet this is returning a TypeError when I try to do so:</p> <pre><code>&gt; print (df['Latest Data']) TypeError: list indices must be integers or slices, not str </code></pre> <p>If you're curious as to what the dataset looks like without trying to specify the column:</p> <pre><code> SpotGamma Proprietary Levels Latest Data ... NDX QQQ 0 Ref Price: 4465 ... 15283 372 1 SpotGamma Imp. 1 Day Move: 0.91%, ... NaN NaN 2 SpotGamma Imp. 5 Day Move: 2.11% ... NaN NaN 3 SpotGamma Gamma Index™: 0.48 ... 0.04 -0.08 4 Volatility Trigger™: 4415 ... 15075 373 5 SpotGamma Absolute Gamma Strike: 4450 ... 15500 370 6 Gamma Notional(MM): $157 ... $4 $-397 </code></pre>
<p>Note that</p> <pre><code>df = pd.read_html(os.path.expanduser(&quot;~/Documents/HTMLSpider/HTMLSpider_test/spotgamma.html&quot;)) </code></pre> <p>will return a <strong>list of</strong> dataframes, not a single one.</p> <p>See: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html</a> (&quot;Read HTML tables into a list of DataFrame objects.&quot;)</p> <p>Better do</p> <pre><code>ldf = pd.read_html(os.path.expanduser(&quot;~/Documents/HTMLSpider/HTMLSpider_test/spotgamma.html&quot;)) </code></pre> <p>and then</p> <pre><code>df = ldf[0] # replace 0 with the number of the dataframe you want </code></pre> <p>to get the first dataframe (there may be more, check <code>len(ldf)</code> to see how many you got and which one has the column you need).</p>
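The underlying TypeError can be reproduced without pandas at all; this stand-in sketch (the `frames` list is hypothetical, standing in for the list `read_html` returns) shows why string indexing fails on a list:

```python
# Stand-in for the list of DataFrames that pd.read_html returns;
# the string elements here are hypothetical placeholders.
frames = ["first table", "second table"]

# Integer indexing works: lists are positional containers.
first = frames[0]

# String indexing raises the same TypeError seen in the question.
try:
    frames["Latest Data"]
except TypeError as exc:
    message = str(exc)

print(first)
print(message)  # list indices must be integers or slices, not str
```

Once the right list element is selected, column access by name works on the DataFrame itself.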
python|pandas|beautifulsoup
3
521
42,170,046
Python serial port returning null string
<p>Reading data from the serial port: <code>readline()</code> in the code below returns an empty string. The data arriving on the serial port is a hexadecimal number like AABB00EF. PuTTY shows the output, so the communication is working, but nothing works via Python. Here is the code:</p> <pre><code>#!/usr/bin/python import serial, time ser = serial.Serial() ser.port = "/dev/ttyUSB0" ser.baudrate = 115200 #ser.bytesize = serial.EIGHTBITS #ser.parity = serial.PARITY_NONE #ser.stopbits = serial.STOPBITS_ONE #ser.timeout = None ser.timeout = 1 #ser.xonxoff = False #ser.rtscts = False #ser.dsrdtr = False #ser.writeTimeout = 2 try: ser.open() except Exception, e: print "error open serial port: " + str(e) exit() if ser.isOpen(): try: #ser.flushInput() #ser.flushOutput() #time.sleep(0.5) # numOfLines = 0 # f=open('signature.txt','w+') while True: response = ser.readline() print len(response) #f=ser.write(response) print response # numOfLines = numOfLines + 1 f.close() ser.close() except Exception, e1: print "error communicating...: " + str(e1) else: print "cannot open serial port " </code></pre>
<p><code>readline</code> will try to read until the end of the line is reached. If there is no <code>\r</code> or <code>\n</code>, it will wait forever (with a timeout set it may still return). Instead, try something like this:</p> <pre><code>ser.setTimeout(1) result = ser.read(1000) # read 1000 characters or until our timeout occurs, whichever comes first print repr(result) </code></pre> <p>Or just use this code:</p> <pre><code>ser = serial.Serial("/dev/ttyUSB0",115200,timeout=1) print "OK OPENED SERIAL:",ser time.sleep(1)# if this is arduino ... wait longer time.sleep(5) ser.write("\r") # send newline time.sleep(0.1) print "READ:",repr(ser.read(8)) </code></pre> <p>You can also create a <code>read_until</code> method:</p> <pre><code>def read_until(ser,terminator="\n"): resp = "" while not resp.endswith(terminator): tmp = ser.read(1) if not tmp: return resp # timeout occurred resp += tmp return resp </code></pre> <p>Then just use it like:</p> <pre><code>read_until(ser,"\r") </code></pre>
python|pyserial
0
522
54,154,770
When does it make sense to use a public package as a Submodule in Python vs installing using pip?
<p>I am working on a python project that has many open sourced dependencies that may not be regularly maintained. I tried using packages as submodules by adding them with Git; but then I get an error saying the module I want is not available when I try to use the submodule; when I install the package with pip it works fine. This hasn't happened with every package. I am wondering why I can't use the submodule like I would the installed package simply by importing it? </p> <p>(Modules seem to be missing from the submodule import vs the pip package installed import.)</p> <p>However is it better to use these packages as submodules or just add the required package and version number to a requirements.txt file to be installed for production deployment?</p> <p>(Any additional functionality required for a submodule or package is added with a wrapper)</p>
<p><code>git</code> is a development tool; you use it during development but not deployment. <code>pip</code> is a deployment tool; during development you use it to install necessary libraries; during deployment your users use it to install your package with dependencies.</p> <p>Use submodules when you need something from a remote repository in your development environment. For example, if said remote repository contains Makefile(s) or other non-python files that you need and that usually aren't installed with <code>pip</code>.</p> <p>For everything else <code>pip</code> is preferable.</p>
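As a sketch of the requirements.txt route (package names and the URL below are made-up placeholders), pip can pin an exact release, or even a specific git commit of a rarely-maintained dependency, which often removes the need for a submodule:

```text
# requirements.txt -- hypothetical entries for illustration
somepackage==1.4.2
# PEP 508 direct reference: install straight from a git commit via pip
otherpackage @ git+https://github.com/example/otherpackage.git@abc1234
```

A pinned direct reference like the second line gives you submodule-like reproducibility while the package is still installed (and importable) the normal way.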
python|git
7
523
23,656,404
Error on downloading scrapy image
<p>i have a <code>scrapy spider</code> to fetch images and content from some ecommerce sites. Now i want to download images, i write a few codes but i got this error :</p> <pre><code>.. File "/usr/lib/python2.7/pprint.py", line 238, in format return _safe_repr(object, context, maxlevels, level) File "/usr/lib/python2.7/pprint.py", line 282, in _safe_repr vrepr, vreadable, vrecur = saferepr(v, context, maxlevels, level) File "/usr/lib/python2.7/pprint.py", line 323, in _safe_repr rep = repr(object) File "/usr/local/lib/python2.7/dist-packages/Scrapy-0.23.0-py2.7.egg/scrapy/item.py", line 77, in __repr__ return pformat(dict(self)) File "/usr/lib/python2.7/pprint.py", line 63, in pformat return PrettyPrinter(indent=indent, width=width, depth=depth).pformat(object) File "/usr/lib/python2.7/pprint.py", line 122, in pformat self._format(object, sio, 0, 0, {}, 0) File "/usr/lib/python2.7/pprint.py", line 140, in _format rep = self._repr(object, context, level - 1) File "/usr/lib/python2.7/pprint.py", line 226, in _repr self._depth, level) File "/usr/lib/python2.7/pprint.py", line 238, in format return _safe_repr(object, context, maxlevels, level) File "/usr/lib/python2.7/pprint.py", line 282, in _safe_repr vrepr, vreadable, vrecur = saferepr(v, context, maxlevels, level) File "/usr/lib/python2.7/pprint.py", line 323, in _safe_repr rep = repr(object) File "/usr/local/lib/python2.7/dist-packages/Scrapy-0.23.0-py2.7.egg/scrapy/item.py", line 77, in __repr__ return pformat(dict(self)) File "/usr/lib/python2.7/pprint.py", line 63, in pformat return PrettyPrinter(indent=indent, width=width, depth=depth).pformat(object) File "/usr/lib/python2.7/pprint.py", line 122, in pformat self._format(object, sio, 0, 0, {}, 0) File "/usr/lib/python2.7/pprint.py", line 140, in _format rep = self._repr(object, context, level - 1) File "/usr/lib/python2.7/pprint.py", line 226, in _repr self._depth, level) File "/usr/lib/python2.7/pprint.py", line 238, in format return _safe_repr(object, 
context, maxlevels, level) File "/usr/lib/python2.7/pprint.py", line 280, in _safe_repr for k, v in _sorted(object.items()): File "/usr/lib/python2.7/pprint.py", line 78, in _sorted with warnings.catch_warnings(): exceptions.RuntimeError: maximum recursion depth exceeded </code></pre> <p>My <code>spider</code> : </p> <pre><code>from scrapy.spider import Spider from scrapy.selector import Selector from scrapy.http import Request from loom.items import LoomItem import sys from scrapy.contrib.loader import XPathItemLoader from scrapy.utils.response import get_base_url from scrapy.contrib.spiders import CrawlSpider, Rule from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor class LoomSpider(CrawlSpider): name = "loom_org" allowed_domains = ["2loom.com"] start_urls = [ "http://2loom.com", "http://2loom.com/collections/basic", "http://2loom.com/collections/design", "http://2loom.com/collections/tum-koleksiyon" ] rules = [ Rule(SgmlLinkExtractor(allow='products'), callback='parse_items',follow = True), Rule(SgmlLinkExtractor(allow=()), follow=True), ] def parse_items(self, response): sys.setrecursionlimit(10000) item = LoomItem() items = [] sel = Selector(response) name = sel.xpath('//h1[@itemprop="name"]/text()').extract() brand = "2loom" price_lower = sel.xpath('//h1[@class="product-price"]/text()').extract() price = "0" image = sel.xpath('//meta[@property="og:image"]/@content').extract() description = sel.xpath('//meta[@property="og:description"]/@content').extract() print image ##image indiriliyor loader = XPathItemLoader(item, response = response) loader.add_xpath('image_urls', '//meta[@property="og:image"]/@content') ##ID Split ediliyor (10. Design | Siyah &amp; beyaz kalpli) id = name[0].strip().split(". 
") id = id[0] item['id'] = id item['name'] = name item['url'] = response.url item['image'] = loader.load_item() item['category'] = "Basic" item['description'] = description item["brand"] = "2Loom" item['price'] = price item['price_lower'] = price_lower print item items.append(item) return items Items # Define here the models for your scraped items # # See documentation in: # http://doc.scrapy.org/en/latest/topics/items.html from scrapy.item import Item, Field class LoomItem(Item): # define the fields for your item here like: # name = Field() id = Field() name = Field() brand = Field() image = Field() category = Field() description = Field() price_lower = Field() price = Field() url = Field() images = Field() image_urls = Field() </code></pre> <p><code>Pipeline</code> : </p> <pre><code>from scrapy.contrib.pipeline.images import ImagesPipeline, ImageException from scrapy.http import Request from cStringIO import StringIO import psycopg2 import hashlib from scrapy.conf import settings class MyImagePipeline(ImagesPipeline): def get_media_requests(self, item, info): return [Request(x) for x in item.get('image_urls', [])] def item_completed(self, results, item, info): item['images'] = [x for ok, x in results if ok] return item # Override the convert_image method to disable image conversion def convert_image(self, image, size=None): buf = StringIO() try: image.save(buf, image.format) except Exception, ex: raise ImageException("Cannot process image. 
Error: %s" % ex) return image, buf def image_key(self, url): image_guid = hashlib.sha1(url).hexdigest() return 'full/%s.jpg' % (image_guid) </code></pre> <p><code>Settings</code> : </p> <pre><code>BOT_NAME = 'loom' SPIDER_MODULES = ['loom.spiders'] NEWSPIDER_MODULE = 'loom.spiders' DOWNLOAD_DELAY = 5 ITEM_PIPELINES = {'scrapy.contrib.pipeline.images.ImagesPipeline': 1} IMAGES_STORE = '/root/loom/images/' IMAGES_THUMBS = { 'small': (90, 90), 'big': (300, 300), } USER_AGENT = "Mozilla/5.0 (Windows NT 6.0; rv:2.0) Gecko/20100101 Firefox/4.0" IM_MODULE = 'loom.pipelines.MyImagePipeline' ITEM_PIPELINES = ['loom.pipelines.MyImagePipeline'] LOG_LEVEL = 'INFO' </code></pre> <p>I dont know why I got this error. So thanks for help</p>
<p>Try changing the recursion limit with <code>sys.setrecursionlimit(10000)</code> in your spider. My Python interpreter allowed about 900 recursions before raising the "RuntimeError".</p>
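As an aside, separate from the scraper code: here is a minimal Python 3 sketch of what raising the recursion limit actually changes (the probe function and the two limits are made up for illustration; Python 2 raises `RuntimeError` where Python 3 raises `RecursionError`):

```python
import sys

def depth_probe(n=0):
    """Recurse until the interpreter refuses, and report the depth reached."""
    try:
        return depth_probe(n + 1)
    except RecursionError:  # "maximum recursion depth exceeded"
        return n

sys.setrecursionlimit(1000)
shallow = depth_probe()

sys.setrecursionlimit(5000)
deep = depth_probe()

# The second probe goes roughly five times deeper before hitting the limit.
print(shallow < deep)  # True
```

Note the limit only governs Python frames; setting it extremely high can still crash the process by exhausting the C stack, so a very large value like 10000 is a workaround rather than a fix.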
python|scrapy|web-crawler
1
524
53,686,899
Firestore updates using python api are not persisting
<p>I have the following code. </p> <pre><code>from firebase_admin import firestore db = firestore.client() collection = db.collection('word_lists') word_list = collection.get() for item in word_list: item_dict = item.to_dict() print item_dict['next_practice_date'] item.reference.update({'next_practice_date': 0.0}) </code></pre> <p>When I run the code the first time everything is fine, no errors. The second time I run it I expect all the prints to print <code>0.0</code> but instead many print <code>None</code>, particularly the ones at the end. What is going on?</p>
<p>I did not find the solution to the problem but instead switched <code>from firebase_admin import firestore</code></p> <p>to <code>from google.cloud import firestore</code> and everything works well now.</p>
python|firebase|google-cloud-firestore|firebase-admin
0
525
55,060,950
Sparse matrix hstack getting error regarding subscriptability
<p>Would someone please explain why this does not work?</p> <pre><code>from scipy.sparse import coo_matrix, hstack row = np.array([0,3,1,0]) col = np.array([0,3,1,2]) data = np.array([4,5,7,9]) temp = coo_matrix((data, (row, col))) temp_stack = coo_matrix([0, 11,22,33], ([0, 1,2,3], [0, 0,0,0])) temp_res = hstack(temp, temp_stack) </code></pre> <p>I get an error that <code>coo_matrix</code> is not subscriptable, but I don't understand why, it appears that I am concatenating matrices of compatible dimension.</p>
<p>First note that the first argument of <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.hstack.html" rel="nofollow noreferrer"><code>hstack</code></a> is expected to be a tuple containing the arrays to be stacked, so you should call it with <code>hstack((temp, temp_stack))</code>.</p> <p>Next, <code>temp</code> has shape <code>(4, 4)</code> and <code>temp_stack</code> has shape <code>(1, 4)</code>. These shapes can not be <code>hstack</code>ed. What shape do expect the result to be? If you are trying to create a result that has shape <code>(5, 4)</code>, use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.vstack.html" rel="nofollow noreferrer"><code>vstack</code></a>:</p> <pre><code>In [28]: result = vstack((temp, temp_stack)) In [29]: result.A Out[29]: array([[ 4, 0, 9, 0], [ 0, 7, 0, 0], [ 0, 0, 0, 0], [ 0, 0, 0, 5], [ 0, 11, 22, 33]], dtype=int64) </code></pre> <p>If you meant for <code>temp_stack</code> to have shape <code>(4, 1)</code>, then fix how it is created by adding an extra level of parentheses in the call of <code>coo_matrix</code>:</p> <pre><code>In [38]: temp_stack = coo_matrix(([0, 11, 22, 33], ([0, 1, 2, 3], [0, 0, 0, 0]))) In [39]: temp_stack.shape Out[39]: (4, 1) In [40]: result = hstack((temp, temp_stack)) In [41]: result.A Out[41]: array([[ 4, 0, 9, 0, 0], [ 0, 7, 0, 0, 11], [ 0, 0, 0, 0, 22], [ 0, 0, 0, 5, 33]], dtype=int64) </code></pre> <hr> <p>By the way, I think it is a SciPy bug that this call</p> <pre><code>temp_stack = coo_matrix([0, 11,22,33], ([0, 1,2,3], [0, 0,0,0])) </code></pre> <p>does not raise an error. That call is equivalent to</p> <pre><code>temp_stack = coo_matrix(arg1=[0, 11,22,33], shape=([0, 1,2,3], [0, 0,0,0])) </code></pre> <p>and that <code>shape</code> value is clearly not valid. That call to <code>coo_matrix</code> should raise a <code>ValueError</code>. 
I created an issue for this on the SciPy github site: <a href="https://github.com/scipy/scipy/issues/9919" rel="nofollow noreferrer">https://github.com/scipy/scipy/issues/9919</a></p>
python|scipy
1
526
33,303,067
How to assign project category to a project using JIRA rest apis
<p>How to assign project category to a project using JIRA rest apis.<br> My Jira server version is 6.3.13</p>
<p>All the following is using python!</p> <p>If you are creating a new issue you can do it in two different ways, the first being a dict:</p> <pre><code> issue_dict = { 'project': {'id': 123}, 'summary': 'New issue from jira-python', 'description': 'Look into this one', 'issuetype': {'name': 'Bug'}, } new_issue = jira.create_issue(fields=issue_dict) </code></pre> <p>The second way is to do it all in the function call:</p> <pre><code> new_issue = jira.create_issue(project='PROJ_key_or_id', summary='New issue from jira-python', description='Look into this one', issuetype={'name': 'Bug'}) </code></pre> <p>However if you are updating a existing issue then you have to use the <code>.update()</code> function. Which would look like this:</p> <pre><code> new_issue.update(issuetype={'name' : 'Bug'}) </code></pre> <p>Source: <a href="http://pythonhosted.org/jira/" rel="nofollow">http://pythonhosted.org/jira/</a></p>
jira-rest-api|python-jira
-1
527
40,797,026
How to make multiple updates in Django?
<p>I'm trying to make multiple update in django by checking in checkbox then push the update button. </p> <p>This is my view.py</p> <pre><code>def update_kel_stat(request, id, kelid): if request.method == "POST": cursor = connection.cursor() sql = "UPDATE keluargapeg_dipkeluargapeg SET KelStatApprov='3' WHERE (PegUser = %s AND KelID=%s )" % (id, kelid,) cursor.execute(sql) </code></pre> <p>where 'id' is user parameter and 'kelid' is row paramater where 'kelid' become multiple parameter.</p> <p>This is my url.py</p> <pre><code>url(r'^karyawan/update_status/(?P&lt;id&gt;\d+)/(?P&lt;kelid&gt;\d+)/$', views.pesan_update, name='update_pesan') </code></pre> <p>template.html, I use JavaScript to load url where use to update</p> <pre><code> &lt;script&gt; function setDeleteAction() { if (confirm("Are you sure want to delete these rows?")) { document.kel.action = "{% url 'update_pesan' %}"; document.kel.submit(); } } &lt;/script&gt; &lt;form method="post" action="" name="kel" enctype="multipart/form-data"&gt; {% for keluarga in kels %} &lt;tr id="{{ keluarga.KelID }}"&gt; &lt;td&gt; &lt;a href="#"&gt;{{ keluarga.KelNamaLengkap }}&lt;/a&gt; &lt;/td&gt; &lt;td&gt;{{ keluarga.KelHubungan }}&lt;/td&gt; &lt;td class="hidden-480"&gt;{{ keluarga.KelTglLahir }}&lt;/td&gt; &lt;td&gt;{{ keluarga.KelJenisKel }}&lt;/td&gt; &lt;td class="hidden-480"&gt;{{ keluarga.KelIjazahAkhir }} &lt;/td&gt; &lt;td&gt;{{ keluarga.KelPekerjaan }}&lt;/td&gt; {% if keluarga.KelStatApprov == '1' %} &lt;td&gt;&lt;span class="label label-sm label-danger"&gt;Draft&lt;/span&gt; &lt;/td&gt; {% elif keluarga.KelStatApprov == '2' %} &lt;td&gt; &lt;span class="label label-sm label-warning"&gt;Revisi&lt;/span&gt; &lt;/td&gt; {% elif keluarga.KelStatApprov == '3' %} &lt;td&gt; &lt;span class="label label-sm label-success"&gt;Setuju&lt;/span&gt; &lt;/td&gt; {% endif %} &lt;td&gt;{{ keluarga.KelKetRevisi }}&lt;/td&gt; &lt;td&gt; &lt;a href=" {{ MEDIA_URL }}{{ keluarga.KelFileUpload }}"&gt;{{ keluarga.KelNamaFile 
}}&lt;/a&gt; &lt;/td&gt; &lt;td&gt;&lt;input type="checkbox" name="kel[]" value="{{ keluarga.KelID }}"&gt;&lt;/td&gt; &lt;td&gt; &lt;div class="hidden-sm hidden-xs action-buttons"&gt; &lt;a class="green" href="{% url 'edit_keluarga' keluarga.PegUser keluarga.KelID %}"&gt; &lt;i class="ace-icon fa fa-pencil bigger-130"&gt;&lt;/i&gt; &lt;/a&gt; &lt;a class="red" href="#"&gt; &lt;i class="ace-icon fa fa-trash-o bigger-130"&gt;&lt;/i&gt; &lt;/a&gt; &lt;/div&gt; &lt;/td&gt; &lt;/tr&gt; {% endfor %} &lt;tr&gt; &lt;td&gt; &lt;button type="button" name="btn_delete" id="btn_delete" class="btn btn-success" onClick="setDeleteAction();"&gt;Approve &lt;/button&gt; &lt;/td&gt; &lt;/tr&gt; </code></pre> <p></p> <p>How can I get multiple row(like array in php) in view and url?</p>
<p>Were you looking for <a href="https://docs.djangoproject.com/en/1.10/ref/request-response/#django.http.QueryDict.getlist" rel="nofollow noreferrer">getlist</a>?</p> <blockquote> <p>QueryDict.getlist(key, default=None)<br> Returns the data with the requested key, as a Python list. Returns an empty list if the key doesn’t exist and no default value was provided. It’s guaranteed to return a list of some sort unless the default value provided is not a list.</p> </blockquote> <pre><code>request.POST.getlist('kel') </code></pre>
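Outside Django, the same many-values-per-one-key behaviour can be sketched with just the standard library; `urllib.parse.parse_qs` mirrors what `getlist` does for repeated checkbox fields (the field name and values below are made up for illustration):

```python
from urllib.parse import parse_qs

# A form posting several checked checkboxes repeats the field name,
# much like the template's name="kel[]" inputs repeat theirs.
body = "kel=3&kel=7&kel=12"

fields = parse_qs(body)
print(fields["kel"])  # every submitted value, as strings

kel_ids = [int(v) for v in fields["kel"]]
print(kel_ids)
```

In the view, the equivalent is iterating over `request.POST.getlist(...)` and updating each row id in turn.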
javascript|python|django|web
2
528
19,085,274
Open New Thunderbird Email Using Python
<p>I'm trying to open just a new Thunderbird email and attach a file to it, for me to fill out the recipient email addresses instead of hardcoding them. I'm using Windows 7, Python 2.7 and the latest version of Thunderbird.</p> <p>I noticed some other questions like this but they all involved writing a Thunderbird plugin which isn't what I want to do. I know how to do this for Outlook like below and want to do the same thing:</p> <pre><code> # open new e-mail in Outlook and attach the Map Package outlook = win32com.client.Dispatch("Outlook.Application") email = outlook.CreateItem(0) email.Subject = "Map Package Area of Interest" email.Attachments.Add(pkgPath) email.Display() </code></pre> <p>Thanks</p>
<p>Thunderbird and other programs from Mozilla don't use <code>win32com</code>. Instead, they use <code>xpcom</code>. See <a href="http://kb.mozillazine.org/Calling_Thunderbird_from_other_programs" rel="nofollow">http://kb.mozillazine.org/Calling_Thunderbird_from_other_programs</a>. </p> <p>There is a Python module, <a href="https://developer.mozilla.org/en-US/docs/PyXPCOM" rel="nofollow">PyXPCOM</a>, which could help you out with controlling Mozilla from Python, if you really want to.</p> <p>You can also use <a href="http://www.autohotkey.com/" rel="nofollow">AutoHotKey</a> to script Thunderbird and many other programs, too.</p>
python|email|python-2.7|thunderbird
2
529
19,299,168
Why a child class doesn't override fields from the base class in Python, and how to deal with that
<p>I created a base abstract class in Python which is the base class for all child classes and implements some functions that would be redundant to write in every child class.</p> <pre><code>class Element: ###SITE### __sitedefs = [None] def getSitedefs(self): return self.__sitedefs class SRL16(Element): ###SITE### __sitedefs = ['SLICEM'] </code></pre> <p>The result is logical on the one hand, because I get the value from the base class where I declared it, but on the other hand I override it in the child class. My question is how to get from</p> <pre><code>srl = SRL16() srl.getSitedefs() </code></pre> <p>SLICEM, not NONE.</p> <p>Probably I am misunderstanding something very basic, but please help.</p> <p>Best regards</p>
<p>Your problem is due to name mangling. See e.g.: <a href="https://stackoverflow.com/questions/1301346/the-meaning-of-a-single-and-a-double-underscore-before-an-object-name-in-python">What is the meaning of a single- and a double-underscore before an object name?</a>.</p> <p>If you change all the <code>__sitedefs</code> to <code>_sitedefs</code> then everything should work as expected.</p>
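A minimal sketch of both behaviours, mirroring the classes in the question (the `Fixed*` class names are made up for the single-underscore variant):

```python
class Element:
    __sitedefs = [None]          # mangled to _Element__sitedefs

    def get_sitedefs(self):
        return self.__sitedefs   # compiled as self._Element__sitedefs


class SRL16(Element):
    __sitedefs = ['SLICEM']      # mangled to _SRL16__sitedefs -- invisible to Element


class FixedElement:
    _sitedefs = [None]           # single underscore: no mangling

    def get_sitedefs(self):
        return self._sitedefs    # ordinary lookup, so subclasses can override


class FixedSRL16(FixedElement):
    _sitedefs = ['SLICEM']


print(SRL16().get_sitedefs())       # the base-class value, [None]
print(FixedSRL16().get_sitedefs())  # the child-class value, ['SLICEM']
```

Because `self.__sitedefs` inside `Element` is rewritten at compile time to `self._Element__sitedefs`, the child's `_SRL16__sitedefs` is simply a different attribute; the single-underscore version uses one shared name that the normal MRO lookup resolves.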
python
7
530
41,719,006
What type of field do I have to use in order to associate a related parent object in a serializer
<p>I have two models with one to many relation. I will use the default example.</p> <pre><code>class Album(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) album_name = models.CharField(max_length=100) artist = models.CharField(max_length=100) class Track(models.Model): album = models.ForeignKey(Album, related_name='tracks', on_delete=models.CASCADE) order = models.IntegerField() title = models.CharField(max_length=100) duration = models.IntegerField() </code></pre> <p>The question I have is how do I implement serializer in order to associate a track with an album by providing only Album 'id' key. What I want is to know which type of serializers.Field do I have to declare is the serializer.</p> <p>Here is an example</p> <pre><code>class TrackSerializer(serializers.Serializer): album = serializers.MagiclyRelatedFieldByUUID() # &lt;---- ??? title = serializers.CharField() order = serializers.IntegerField() duration = serializers.IntegerField() class Meta: model = models.Track </code></pre> <p>Request looks like this:</p> <pre><code>{ 'album': '137b5a6c-dd76-11e6-bf26-cec0c932ce01', 'title': 'my new track', 'duration': 10 'order': 31 } </code></pre> <p><strong>Updated</strong></p> <p>I managed to solve it with a HyperlinkedRelatedField by specifying view_name='album-detail', queryset=models.Album.objects.all() and lookup_field='uuid', but in this case I have to send a valid url to the album. Is it the only way to get instance of related model in serializer?</p> <p>So far it my solution is following:</p> <pre><code>class TrackSerializer(serializers.Serializer): album = serializers.HyperlinkedRelatedField(view_name='album-detail', queryset=models.Album.objects.all(), lookup_field='uuid') </code></pre>
<p>You can even try this</p> <pre><code>album = serializers.SlugRelatedField( queryset=models.Album.objects.all(), slug_field='uuid' ) </code></pre> <p>It will accept you model uid to get the object.</p> <pre><code>{ "album": "ed79716c-ba5d-4d3f-bb96-2685b38139e5", "title": "Eleanor Rigby", "order": 2, "duration": 206 } </code></pre>
python|django|serialization|django-rest-framework
1
531
27,829,259
Organize subplots using matplotlib
<p>I am trying to plot the content of a json file. The script should generate 64 subplots. Each subplot consists of 128 samples (voltage levels). "ElementSig" is a "key" in that json file for a list of 8192 samples. I am taking 128 samples at a time and generate a subplot of it as you see in my following script:</p> <pre><code>import json import matplotlib.pyplot as plt json_data = open('txrx.json') loaded_data = json.load(json_data) json_data.close() j = 0 E = loaded_data['ElementSig'] for i in range(64): plt.ylabel('E%s' % str(i+1)) print 'E', i, ':' plt.figure(1) plt.subplot(64, 2, i+1) print E[0+j:127+j] plt.plot(E[0+j:127+j]) j += 128 plt.show() </code></pre> <p>The results is very packed and the figures are overlapping. <img src="https://i.stack.imgur.com/AK5CR.png" alt="enter image description here"></p> <p>Any help is appreciated. </p>
<p>I got a better figure when I saved it as a .png file. Note that <code>j</code> must be initialized to 0 before the loop (and the stray <code>i += 1</code> is unnecessary, since the for loop already advances <code>i</code>):</p> <pre><code>fig = plt.figure(figsize=(20, 222)) plt.subplots_adjust(top=.9, bottom=0.1, wspace=0.2, hspace=0.2) j = 0 for i in range(1, 65): print 'E', i, ':' plt.subplot(64, 2, i) plt.ylabel('E%s' % str(i)) print E[0+j:127+j] plt.plot(E[0+j:127+j]) j += 128 plt.savefig('foo.png', bbox_inches='tight') plt.show() </code></pre> <p>Though I believe there is a better solution.</p>
python|matplotlib
0
532
48,646,779
TypeError: cupcake_flour() missing 1 required positional argument: 'cu_flour'. What am I doing wrong?
<p>This is my first time taking python and i'm having a hard time understanding what I've done wrong to receive this error? This code is supposed to change grams to cups for a cupcake recipe and this is just the first step converting the flour. The input function works but after that I get the above error.</p> <pre><code>user = input("How many cookies do you want to make? ") def cupcake_flour(cu_flour): cu_flour = user * 100 / 120 print(cu_flour + "cups of flour") def main(): cupcake_flour() main() </code></pre>
<p>You have defined your function <code>cupcake_flour</code> to take an argument, but you are not providing one when you are calling <code>cupcake_flour()</code>. You probably want to pass the user input to the function and then print the amount of flour needed like so:</p> <pre><code>def cupcake_flour(cookies): cu_flour = cookies * 100 / 120 print(str(cu_flour) + "cups of flour") def main(): num_cookies = int(input("How many cookies do you want to make? ")) cupcake_flour(num_cookies) main() </code></pre> <p>Note a few minor changes: </p> <ol> <li><code>int(input("How many cookies do you want to make? "))</code> since the input is supposed to be interpreted as a number (and used as such in the calculation)</li> <li>Moved the user input into the main, as it makes more sense to only ask for it when <code>main()</code> is called</li> <li><code>str(cu_flour)</code> as it needs to be a string</li> </ol>
python|python-3.x
1
533
64,291,796
subset a python dataframe by conditions
<p>I am trying to select the name rows with count&gt;250, which are called effective here, so we can find the mean of their ratings.</p> <pre><code>t3=dfnew.groupby('name')['ratings'] t4=t3.count() t5=t4[t4.values&gt;250] t6=t3.mean() t6[(t6.index==t5.index)] </code></pre> <p>Obviously the problem is in the last row of my code, where I want to match t6's index with t5's index. If they match, then save it; otherwise leave it out. It is kind of like an inner join in SQL.</p> <p>What should I do to modify the last row?</p> <p>Suppose a dataframe like this</p> <pre><code>input: name ratings A 1 A 2 : A 251 B 1 B 2 : B 230 </code></pre> <p>so the intended result should be 126 ((1+251)/2)</p> <pre><code>Output A 126 </code></pre>
<pre><code>t3=dfnew.groupby('name')['ratings'].agg(['count','mean']) t5=t3[t3['count']&gt;250] t5 </code></pre> <p>It works fine when I aggregate two functions at the same time.</p>
python|pandas|numpy
0
534
64,462,917
"view_as_windows" from skimage but in Pytorch
<p>Is there any Pytorch version of <code>view_as_windows</code> from skimage? I want to create the view while the tensor is on the GPU.</p>
<p>I needed the same functionality from Pytorch and ended up implementing it myself:</p> <pre class="lang-py prettyprint-override"><code>def view_as_windows_torch(image, shape, stride=None): &quot;&quot;&quot;View tensor as overlapping rectangular windows, with a given stride. Parameters ---------- image : `~torch.Tensor` 4D image tensor, with the last two dimensions being the image dimensions shape : tuple of int Shape of the window. stride : tuple of int Stride of the windows. By default it is half of the window size. Returns ------- windows : `~torch.Tensor` Tensor of overlapping windows &quot;&quot;&quot; if stride is None: stride = shape[0] // 2, shape[1] // 2 windows = image.unfold(2, shape[0], stride[0]) return windows.unfold(3, shape[1], stride[1]) </code></pre> <p>Essentially it is just two lines of Pytorch code relying on <a href="https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html?highlight=unfold#torch.Tensor.unfold" rel="nofollow noreferrer">torch.Tensor.unfold</a>. You can easily convince yourself, that it does the same as <code>skimage.util.view_as_windows</code>:</p> <pre class="lang-py prettyprint-override"><code>import torch x = torch.arange(16).reshape((1, 1, 4, 4)) patches = view_as_windows_torch(image=x, shape=(2, 2)) print(patches) </code></pre> <p>Gives:</p> <pre><code>tensor([[[[[[ 0, 1], [ 4, 5]], [[ 1, 2], [ 5, 6]], [[ 2, 3], [ 6, 7]]], [[[ 4, 5], [ 8, 9]], [[ 5, 6], [ 9, 10]], [[ 6, 7], [10, 11]]], [[[ 8, 9], [12, 13]], [[ 9, 10], [13, 14]], [[10, 11], [14, 15]]]]]]) </code></pre> <p>I hope this helps!</p>
python|pytorch|scikit-image
1
535
70,578,538
literal_eval and boolean Logic in Python
<pre><code>&gt;&gt;&gt; from ast import literal_eval &gt;&gt;&gt; H = {&quot;('a','b')&quot;:1} &gt;&gt;&gt; x = ('a','b') &gt;&gt;&gt; str(x) &quot;('a', 'b')&quot; &gt;&gt;&gt; list(H.keys())[0] &quot;('a','b')&quot; &gt;&gt;&gt; str(x) == list(H.keys())[0] False </code></pre> <p>Why do I get a False statement? However, when I do</p> <pre><code>&gt;&gt;&gt; x == literal_eval(list(H.keys())[0]) True </code></pre> <p>I get a True statement.</p>
<p>In my tests, <code>str(x)</code> is <code>&quot;('a', 'b')&quot;</code>. Do you notice the space after the comma?</p> <p>That is enough to explain why the strings are different (one contains a space while the other does not), while the tuples are equal.</p>
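<p>A minimal sketch making the mismatch explicit, plus one way to normalize the keys so lookups compare tuples by value (the dictionary here is illustrative):</p>

```python
from ast import literal_eval

H = {"('a','b')": 1}        # key written without a space after the comma
x = ('a', 'b')

# str() of a tuple always inserts a space after the comma, so the strings differ
assert str(x) == "('a', 'b')"
assert str(x) != list(H.keys())[0]

# Parsing the key back into a tuple compares by value, ignoring formatting
assert literal_eval(list(H.keys())[0]) == x

# One way to look values up reliably: normalize the keys once
H_norm = {literal_eval(k): v for k, v in H.items()}
assert H_norm[x] == 1
```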
python|boolean
1
536
70,546,285
How can I find the second smallest output for my function?
<p>I used this function to find the biggest pullback $ wise for my data frame column with stock prices. I need help to figure out how to get the X following output. Basically the plan is to join those outputs into a new data frame to get the X biggest pullbacks within my data frame.</p> <p><strong>Main question:</strong> How could I loop through the X biggest pullback, starting at the biggest pullback and finding the X next biggest pullback?</p> <pre><code>def maxdrop(p): bestdrop = 0 wheredrop = -1,-1 i = 0 while i &lt; len(p) - 1: if p[i+1] &lt; p[i]: bestlocal = p[i+1] wherelocal = i+1 j = i + 1 while j &lt; len(p) - 1 and p[j + 1] &lt; p[i]: j += 1 if p[j] &lt; bestlocal: bestlocal = p[j] wherelocal = j if p[i] - bestlocal &gt; bestdrop: bestdrop = p[i] - bestlocal wheredrop = i, wherelocal i = j+1 else: i += 1 return bestdrop,wheredrop </code></pre> <p>maxdrop(df1['price'])</p> <p>Here is the current output for the code:</p> <pre><code>(782.5300000000001, (1640, 1657)) </code></pre>
<p>The strategy you can use is to first find the biggest pullback, then exclude the range where that pullback is, and then calculate the biggest pullback for all valid ranges that are left.</p> <p>I made my own <code>maxdrop</code> function that works in a similar fashion to yours, except it only looks within specified bounds. Then <code>alldrops</code> returns an array of all draw-downs without overlap. You could then sort this array by the $ draw-down to get what you want. (Note: this code expects a plain Python list, so for a DataFrame column call it as <code>alldrops(df1['price'].tolist())</code>.)</p> <pre><code>def maxdrop(pricearray, leftbound=0, rightbound=-1): # Calculate the pullback/drop by splitting the array in half, # then calculating the max of the first and the min of the second. # By testing all &quot;splitting points&quot; and selecting the maximum we get the biggest drop drops = [] begin, end = -1, -1 if rightbound == -1: rightbound = len(pricearray) for i in range(leftbound+1, rightbound-1): leftpart = pricearray[leftbound:i] rightpart = pricearray[i:rightbound] begin = pricearray.index(max(leftpart)) end = pricearray.index(min(rightpart)) delta = max(leftpart)-min(rightpart) drops.append([delta, begin, end]) if len(drops) &gt; 0: return max(drops) else: return None def alldrops(pricearray): droplist = [] # Stores all the drops droplist.append(maxdrop(pricearray)) while True: terms_were_added = False validranges = [] # Stores all ranges that are not part of a drawdown # Get ranges that are not already part of a drawdown for i in range(-1, len(droplist)): if i == -1: b = 0 else: b = droplist[i][2] if i == len(droplist)-1: e = len(pricearray)-1 else: e = droplist[i+1][1] if b &lt; e-1: validranges.append((b, e)) # If there are no valid ranges left, we are finished if len(validranges) == 0: break # Calculate the biggest drawdown in all those valid ranges for vrange in validranges: drop = maxdrop(pricearray, vrange[0], vrange[1]) if drop is not None: if drop[0] &gt; 0: droplist.append(drop) terms_were_added = True droplist.sort(key=lambda n: n[1]) # If no drawdown was added we are finished if not terms_were_added: break return droplist </code></pre> <p>For an array with a hundred random elements you get (the first element in each array is the pullback, the second where it starts, the third where it ends):</p> <blockquote> <p>[[0.6391820462436719, 0, 1], [4.945067107442718, 3, 7], [0.38440828483857103, 10, 11], [0.44438096165870533, 14, 15], [1.2783529599412589, 23, 24], [0.20126563551455945, 25, 26], [1.1957951552365884, 28, 30], [0.5895546638677374, 32, 37], [1.5337809447945148, 40, 41], [3.0108867730327518, 43, 60], [1.0752516082881058, 67, 68], [1.0413928565593054, 70, 71], [3.039113846862932, 82, 87], [6.364453213541438, 92, 99]]</p> </blockquote> <p>This is what it looks like when you plot the pullbacks in matplotlib: <a href="https://i.stack.imgur.com/mp6fm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mp6fm.png" alt="plotting in matplotlib" /></a></p>
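<p>For reference, the single biggest pullback can also be found in one linear pass using the usual peak-to-trough definition (a sketch with illustrative names; it does not replicate the exact bounds logic of the answer above, but the repeated-exclusion idea applies the same way):</p>

```python
def max_drawdown(p):
    """Return (largest peak-to-trough drop, (peak index, trough index))."""
    best = 0.0
    best_span = (-1, -1)
    peak_i = 0                      # index of the running maximum so far
    for j in range(1, len(p)):
        if p[j] > p[peak_i]:
            peak_i = j              # new peak: later drops measure from here
        elif p[peak_i] - p[j] > best:
            best = p[peak_i] - p[j]
            best_span = (peak_i, j)
    return best, best_span

print(max_drawdown([3, 5, 1, 4, 2]))   # (4, (1, 2)): from 5 down to 1
```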
python|pandas|function|format
1
537
72,948,512
Is there any method to replace selectROI with auto selection?
<p>I have finished detecting faces through videos and generating a bounding box if detected by Haar Cascade classifier. And now I only want to analyze the particular part of the face such as foreheads or cheeks, but I could just choose the place manually through selectROI in OpenCV. Is there any method to revise my code or I could just do it manually?</p> <pre><code>import cv2 as cv import argparse import numpy as np parser = argparse.ArgumentParser() parser.add_argument('--face_cascade', help='Path to face cascade.',default='opencv-3.4/data/haarcascades/haarcascade_frontalface_alt2.xml') parser.add_argument('--camera', help='Camera divide number.', type=int, default=0) args = parser.parse_args() face_cascade_name = args.face_cascade face_cascade = cv.CascadeClassifier() if not face_cascade.load(cv.samples.findFile(face_cascade_name)): print('Error loading face cascade') exit(0) camera_device = args.camera # for build-in camera cap = cv.VideoCapture(camera_device) if not cap.isOpened: print('Error opening video capture') exit(0) tracker = cv.TrackerCSRT_create() roi = None while True: ret, frame = cap.read() if frame is None: print('No captured frame, Break!') break frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY) frame_gray = cv.equalizeHist(frame_gray) faces = face_cascade.detectMultiScale( frame_gray, scaleFactor=1.1, minNeighbors=3) for (x, y, w, h) in faces: cv.rectangle(frame, (x, y), (x+w, y+h), (0, 0, 255), 3) if roi is None: roi = cv.selectROI('frame', frame, False, False) if roi != (0, 0, 0, 0): tracker.init(frame, roi) success, rect = tracker.update(frame) if success: (x, y, w, h) = [int(i) for i in rect] cv.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 3) cv.imshow('Face Detection', frame) if (cv.waitKey(1) == ord('q') or cv.waitKey(1) == 27): break </code></pre>
<p>There are different ways you can go about detecting and analysing facial regions; I am listing a few:</p> <ul> <li>You can use <a href="http://dlib.net/face_landmark_detection.py.html" rel="nofollow noreferrer"><code>Dlib's Landmark Detector</code></a> to detect facial landmarks and classify the facial regions based on the landmarks' positions. Example: the face portion above the eyebrow landmarks is the forehead region, etc. For more clarity see the image below. <a href="https://i.stack.imgur.com/7xze9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7xze9.png" alt="enter image description here" /></a></li> <li>You can use object detectors that detect the facial regions you want, but it will be difficult to find a pre-trained model for this; you would have to train your own model.</li> </ul>
python|opencv|face-detection
1
538
64,801,774
AttributeError: module 'numexpr' has no attribute '__version__'
<p>Trying to import some modules written below:</p> <pre><code>import numpy as np import os.path import pandas as pd import math import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D </code></pre> <p>However I get an AttributeError: module 'numexpr' has no attribute '<strong>version</strong>' which I don't know how to solve. I've already tried to uninstall and install numpy. I've added the full error message below I apologize if it's a bit lengthy.</p> <pre><code>AttributeError Traceback (most recent call last) &lt;ipython-input-1-c0202e9c9cc8&gt; in &lt;module&gt;() 1 import numpy as np 2 import os.path ----&gt; 3 import pandas as pd 4 import math 5 import matplotlib.pyplot as plt ~\Anaconda3\lib\site-packages\pandas\__init__.py in &lt;module&gt;() 40 import pandas.core.config_init 41 ---&gt; 42 from pandas.core.api import * 43 from pandas.core.sparse.api import * 44 from pandas.stats.api import * ~\Anaconda3\lib\site-packages\pandas\core\api.py in &lt;module&gt;() 8 from pandas.core.dtypes.missing import isnull, notnull 9 from pandas.core.categorical import Categorical ---&gt; 10 from pandas.core.groupby import Grouper 11 from pandas.io.formats.format import set_eng_float_format 12 from pandas.core.index import (Index, CategoricalIndex, Int64Index, ~\Anaconda3\lib\site-packages\pandas\core\groupby.py in &lt;module&gt;() 44 from pandas.core.base import (PandasObject, SelectionMixin, GroupByError, 45 DataError, SpecificationError) ---&gt; 46 from pandas.core.index import (Index, MultiIndex, 47 CategoricalIndex, _ensure_index) 48 from pandas.core.categorical import Categorical ~\Anaconda3\lib\site-packages\pandas\core\index.py in &lt;module&gt;() 1 # flake8: noqa ----&gt; 2 from pandas.core.indexes.api import * 3 from pandas.core.indexes.multi import _sparsify ~\Anaconda3\lib\site-packages\pandas\core\indexes\api.py in &lt;module&gt;() ----&gt; 1 from pandas.core.indexes.base import (Index, _new_Index, # noqa 2 _ensure_index, _get_na_value, 3 InvalidIndexError) 4 from pandas.core.indexes.category import CategoricalIndex # noqa 5 from pandas.core.indexes.multi import MultiIndex # noqa ~\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in &lt;module&gt;() 50 import pandas.core.algorithms as algos 51 from pandas.io.formats.printing import pprint_thing ---&gt; 52 from pandas.core.ops import _comp_method_OBJECT_ARRAY 53 from pandas.core.strings import StringAccessorMixin 54 from pandas.core.config import get_option ~\Anaconda3\lib\site-packages\pandas\core\ops.py in &lt;module&gt;() 17 from pandas import compat 18 from pandas.util._decorators import Appender ---&gt; 19 import pandas.core.computation.expressions as expressions 20 21 from pandas.compat import bind_method ~\Anaconda3\lib\site-packages\pandas\core\computation\__init__.py in &lt;module&gt;() 8 try: 9 import numexpr as ne ---&gt; 10 ver = ne.__version__ 11 _NUMEXPR_INSTALLED = ver &gt;= LooseVersion(_MIN_NUMEXPR_VERSION) 12 AttributeError: module 'numexpr' has no attribute '__version__' </code></pre>
<p>I experienced the same problem a while back and solved it by doing the following in <code>Anaconda</code>:</p> <pre><code>pip uninstall -y numpy pip uninstall -y setuptools pip install setuptools pip install numpy </code></pre> <p>If you are using Anaconda3, try the same thing using <code>pip3</code>.</p>
python|pandas
0
539
65,008,711
What is the optimal way to create a new column in Pandas dataframe based on conditions from another row?
<p>I have a Pandas dataframe, <code>week1_plays</code> in the following format:</p> <p><a href="https://i.stack.imgur.com/18A4N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/18A4N.png" alt="enter image description here" /></a></p> <p>What I want to do is add a column <code>week1_plays['distance_from_receiver']</code> such that for each row in the dataframe, we grab the keys of <code>gameId, playId, frameId</code> and find the x and y position of the player with those keys and <code>position == 'WR'</code>. Then I'll calculate the distance from the receiver with the following function:</p> <pre><code>def get_distance(rec_x, rec_y, def_x, def_y): distance = np.sqrt( ((def_x - rec_x)**2) + ((def_y - rec_y)**2) ) return distance </code></pre> <p>For example using the sample provided, the row 0 input to the function would be</p> <pre><code>get_distance(91.35, 44.16, 88.89, 36.47) </code></pre> <p>The current solution I have is to use a lambda function on the dataframe as such:</p> <pre><code>week1_topReceivers['distance_from_receiver'] = week1_topReceivers.apply(lambda row: get_distance(week1_wr_position.loc[np.where((week1_topReceivers['playId'] == row['playId']) &amp; (week1_topReceivers['frameId'] == row['frameId']) &amp; (week1_topReceivers['gameId'] == row['frameId']))]['x'], week1_topReceivers.loc[np.where((week1_topReceivers['playId'] == row['playId']) &amp; (week1_topReceivers['frameId'] == row['frameId']) &amp; (week1_topReceivers['gameId'] == row['frameId']))]['y'], row['x'], row['y']), axis = 1) </code></pre> <p>but querying the dataframe for the first two inputs takes a very long time with a large dataframe. I know there has to be a more optimal solution to this but my searches online aren't turning up any better options.</p> <p>EDIT: Here is a larger sample and the expected output:</p> <p>SAMPLE</p> <pre><code>x y o dir event position frameId team gameId playId playDirection route 88.89 36.47 105.63 66.66 None SS 1 home 2018090600 75 left NaN 91.35 44.16 290.45 16.86 None WR 1 away 2018090600 75 left HITCH 86.31 22.01 70.12 168.91 None FS 1 home 2018090600 75 left NaN 73.64 28.70 103.05 219.41 None FS 1 home 2018090600 75 left NaN 86.48 31.12 95.90 33.36 None MLB 1 home 2018090600 75 left NaN 82.67 20.53 81.14 174.57 None CB 1 home 2018090600 75 left NaN 84.00 43.49 108.23 110.32 None CB 1 home 2018090600 75 left NaN 85.63 26.59 87.69 38.80 None LB 1 home 2018090600 75 left NaN 88.89 36.47 105.63 68.49 None SS 2 home 2018090600 75 left NaN 91.37 44.17 290.45 29.61 None WR 2 away 2018090600 75 left HITCH 86.32 22.00 70.88 119.04 None FS 2 home 2018090600 75 left NaN 73.64 28.70 104.57 228.17 None FS 2 home 2018090600 75 left NaN 86.48 31.11 101.10 30.26 None MLB 2 home 2018090600 75 left NaN 82.68 20.53 82.24 147.46 None CB 2 home 2018090600 75 left NaN 84.02 43.49 107.33 106.73 None CB 2 home 2018090600 75 left NaN 85.64 26.61 87.69 37.51 None LB 2 home 2018090600 75 left NaN 88.88 36.47 107.02 57.53 None SS 3 home 2018090600 75 left NaN 91.37 44.17 290.45 32.20 None WR 3 away 2018090600 75 left HITCH 86.33 22.00 71.88 93.49 None FS 3 home 2018090600 75 left NaN 73.63 28.69 104.57 227.74 None FS 3 home 2018090600 75 left NaN </code></pre> <p>EXPECTED OUTPUT:</p> <pre><code>x y o dir event position frameId team gameId playId playDirection route distance_from_receiver 88.89 36.47 105.63 66.66 None SS 1 home 2018090600 75 left NaN 8.07 91.35 44.16 290.45 16.86 None WR 1 away 2018090600 75 left HITCH 0.00 86.31 22.01 70.12 168.91 None FS 1 home 2018090600 75 left NaN 22.72 73.64 28.70 103.05 219.41 None FS 1 home 2018090600 75 left NaN 23.51 86.48 31.12 95.90 33.36 None MLB 1 home 2018090600 75 left NaN 13.92 82.67 20.53 81.14 174.57 None CB 1 home 2018090600 75 left NaN 25.17 84.00 43.49 108.23 110.32 None CB 1 home 2018090600 75 left NaN 7.38 85.63 26.59 87.69 38.80 None LB 1 home 2018090600 75 left NaN 18.48 88.89 36.47 105.63 68.49 None SS 2 home 2018090600 75 left NaN 8.09 91.37 44.17 290.45 29.61 None WR 2 away 2018090600 75 left HITCH 0.00 86.32 22.00 70.88 119.04 None FS 2 home 2018090600 75 left NaN 22.74 73.64 28.70 104.57 228.17 None FS 2 home 2018090600 75 left NaN 23.53 86.48 31.11 101.10 30.26 None MLB 2 home 2018090600 75 left NaN 13.95 82.68 20.53 82.24 147.46 None CB 2 home 2018090600 75 left NaN 25.19 84.02 43.49 107.33 106.73 None CB 2 home 2018090600 75 left NaN 7.39 85.64 26.61 87.69 37.51 None LB 2 home 2018090600 75 left NaN 18.47 88.88 36.47 107.02 57.53 None SS 3 home 2018090600 75 left NaN 8.09 91.37 44.17 290.45 32.20 None WR 3 away 2018090600 75 left HITCH 0.00 86.33 22.00 71.88 93.49 None FS 3 home 2018090600 75 left NaN 22.74 73.63 28.69 104.57 227.74 None FS 3 home 2018090600 75 left NaN 23.54 </code></pre>
<p>You are looking for a <code>merge</code> or <code>join</code> operation. Try something like this:</p> <pre><code>df = pd.DataFrame({'gameId':[1,1,1,1,1,1],'playId':[1,1,1,1,1,1], 'frameId':[1,1,1,2,2,2], 'position':['A','B','WR','C','WR','D'], 'x':[87,56,45,34,45,67], 'y':[25,36,47,365,25,36]}) # create a table with just the wide receiver positions: wr = df.loc[df.position=='WR'].drop(columns='position') # merge the wide receiver x,y values into the original table based on the keys: df = df.merge(wr, how='outer', on=['gameId', 'playId', 'frameId'], suffixes=['', '_wr']) # apply your function to calculate the column (avoid using apply because it's super slow) df['dist_from_wr'] = [get_distance(x, y, x_wr, y_wr) for x, y, x_wr, y_wr in zip(df.x, df.y, df.x_wr, df.y_wr)] </code></pre> <p>Note as well, that you're lucky here because your function is already vectorized (which is not always the case) so you can actually do this even more efficiently by passing entire columns as input arguments as follows:</p> <pre><code>df['dist_from_wr'] = get_distance(df.x, df.y, df.x_wr, df.y_wr) </code></pre> <p>Result:</p> <pre><code>| gameId | playId | frameId | position | x | y | x_wr | y_wr | dist_from_wr | |-------:|-------:|--------:|:---------|----:|----:|-----:|-----:|-------------:| | 1 | 1 | 1 | A | 87 | 25 | 45 | 47 | 47.4131 | | 1 | 1 | 1 | B | 56 | 36 | 45 | 47 | 15.5563 | | 1 | 1 | 1 | WR | 45 | 47 | 45 | 47 | 0 | | 1 | 1 | 2 | C | 34 | 365 | 45 | 25 | 340.178 | | 1 | 1 | 2 | WR | 45 | 25 | 45 | 25 | 0 | | 1 | 1 | 2 | D | 67 | 36 | 45 | 25 | 24.5967 | </code></pre>
python|pandas|dataframe
2
540
64,810,833
Python 3.8 sort - Lambda function behaving differently for lists, strings
<p>I'm trying to sort a list of objects based on the frequency of occurrence (increasing order) of characters. I'm seeing that the sort behaves differently if the list has numbers versus characters. Does anyone know why this is happening?</p> <p>Below is a list of numbers sorted by frequency of occurrence.</p> <pre><code>import collections # Sort list of numbers based on increasing order of frequency nums = [1,1,2,2,2,3] countMap = collections.Counter(nums) nums.sort(key = lambda x: countMap[x]) print(nums) # Returns correct output [3, 1, 1, 2, 2, 2] </code></pre> <p>But if I sort a list of characters, the order of 'l' and 'o' is incorrect in the below example:</p> <pre><code># Sort list of characters based on increasing order of frequency alp = ['l', 'o', 'v', 'e', 'l', 'e', 'e', 't', 'c', 'o', 'd', 'e'] countMap = collections.Counter(alp) alp.sort(key = lambda x: countMap[x]) print(alp) # Returns below output - characters 'l' and 'o' are not in the correct sorted order ['v', 't', 'c', 'd', 'l', 'o', 'l', 'o', 'e', 'e', 'e', 'e'] # Expected output ['v', 't', 'c', 'd', 'l', 'l', 'o', 'o', 'e', 'e', 'e', 'e'] </code></pre>
<p>Sorting uses stable sort - that means if you have the same sorting criteria for two elements they keep their <em>relative</em> order/positioning (here it being the amount of 2 for both of them).</p> <pre><code>from collections import Counter # Sort list of characters based on increasing order of frequency alp = ['l', 'o', 'v', 'e', 'l', 'e', 'e', 't', 'c', 'o', 'd', 'e'] countMap = Counter(alp) alp.sort(key = lambda x: (countMap[x], x)) # in a tie, the letter will be used to un-tie print(alp) ['c', 'd', 't', 'v', 'l', 'l', 'o', 'o', 'e', 'e', 'e', 'e'] </code></pre> <p>This fixes it by using the letter as second criteria.</p> <p>To get your exact output you can use:</p> <pre><code># use original position as tie-breaker in case counts are identical countMap = Counter(alp) pos = {k:alp.index(k) for k in countMap} alp.sort(key = lambda x: (countMap[x], pos[x])) print(alp) ['v', 't', 'c', 'd', 'l', 'l', 'o', 'o', 'e', 'e', 'e', 'e'] </code></pre> <hr /> <p>See <a href="https://stackoverflow.com/questions/1915376/is-pythons-sorted-function-guaranteed-to-be-stable">Is python&#39;s sorted() function guaranteed to be stable?</a> or <a href="https://wiki.python.org/moin/HowTo/Sorting/" rel="nofollow noreferrer">https://wiki.python.org/moin/HowTo/Sorting/</a> for details on sorting.</p>
python-3.x
3
541
63,857,586
How do I fix a value error when using scipy.integrate odeint function?
<p>I'm an engineering student and I'm trying to figure out how to use the odeint function from the scipy.integrate module (I've only ever used ode45 in MATLAB). I'm attempting to numerically solve a simple second order mass, spring, dashpot system. Below is the code I've written (specifically I'm using Jupyter Notebook and running the latest version of Python 3):</p> <pre><code>import numpy as np from scipy.integrate import odeint from matplotlib.pyplot as plt %matplotlib inline # Numerical solution to mx&quot; + bx' + kx = f(t) # Define state vector y and its derivative def translational(x,t,m,b,k,f): y = [x[0], x[1]] # state vector ydot = [x[1], f -b/m*x[1] - k/m*x[0]] # derivative of state vector return ydot # Parameters for the system t = np.arange(0,10,0.01) IC = [0, 0] #[x0 v0] m = 10 # kg b = 2 # N*s/m k = 5 # N/m f = 5*np.cos(10*t) y = odeint(translational,IC,t,args=(m,b,k,f)) </code></pre> <p>When I execute the code it returns the following error:</p> <pre><code>TypeError Traceback (most recent call last) TypeError: only size-1 arrays can be converted to Python scalars The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) &lt;ipython-input-7-423018367c52&gt; in &lt;module&gt; 20 k = 5 # N/m 21 f = 5*np.cos(10*t) ---&gt; 22 y = odeint(translational,IC,t,args=(m,b,k,f)) ~\anaconda3\lib\site-packages\scipy\integrate\odepack.py in odeint(func, y0, t, args, Dfun, col_deriv, full_output, ml, mu, rtol, atol, tcrit, h0, hmax, hmin, ixpr, mxstep, mxhnil, mxordn, mxords, printmessg, tfirst) 239 t = copy(t) 240 y0 = copy(y0) --&gt; 241 output = _odepack.odeint(func, y0, t, args, Dfun, col_deriv, ml, mu, 242 full_output, rtol, atol, tcrit, h0, hmax, hmin, 243 ixpr, mxstep, mxhnil, mxordn, mxords, ValueError: setting an array element with a sequence. </code></pre> <p>For the life of me I can't figure out what's wrong. Any help is much appreciated! Thanks.</p>
<p><code>f</code> is an array of numbers, and therefore so is <code>f -b/m*x[1] - k/m*x[0]</code>, so the return value of your function <code>translational</code> is not correct.</p> <p>Instead of attempting to precompute the values of <code>f</code>, what you should do is use the expression for the function in <code>translational</code>:</p> <pre><code>def translational(x,t,m,b,k): y = [x[0], x[1]] # state vector f = 5*np.cos(10*t) ydot = [x[1], f -b/m*x[1] - k/m*x[0]] # derivative of state vector return ydot </code></pre> <p>and remove <code>f</code> from the <code>args</code> parameter of the <code>odeint</code> function call.</p>
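<p>Putting the fix together, a runnable sketch might look like the following. (Note also that for m·x″ + b·x′ + k·x = f(t), the forcing term should be divided by m as well, which the original code omitted.)</p>

```python
import numpy as np
from scipy.integrate import odeint

def translational(x, t, m, b, k):
    f = 5 * np.cos(10 * t)                      # forcing evaluated at the solver's own t
    return [x[1], f/m - b/m*x[1] - k/m*x[0]]    # x'' = (f - b*x' - k*x) / m

t = np.arange(0, 10, 0.01)
y = odeint(translational, [0, 0], t, args=(10, 2, 5))   # m=10 kg, b=2 N*s/m, k=5 N/m
print(y.shape)   # (1000, 2): columns are position and velocity
```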
python|numpy|scipy
0
542
53,291,663
How to build a dictionary that map nodes to its degree in networkx2.1,python3?
<p>Here is what I tried:</p> <pre><code>def comm_deg(G): nodes = G.nodes() A = nx.adj_matrix(G) deg_dict = {} n = len(nodes) degree = A.sum(axis = 1) for i in range(n): deg_dict[nodes[i]] = degree[i,0] return deg_dict </code></pre> <p>It raises a KeyError: 0; I find that both the <code>nodes[]</code> and <code>degree[,]</code> indexing can trigger this issue.</p> <p>Here is the full error message:</p> <pre><code>&gt; File "/Users/shaoyupei/Desktop/code/untitled1.py", line 25, in comm_deg &gt; deg_dict[nodes[i]] = degrees[i,0] &gt; File "/anaconda3/lib/python3.6/site-packages/networkx/classes/reportviews.py", line 178, in __getitem__ &gt; return self._nodes[n] &gt; KeyError: 0 </code></pre>
<p>So there are several issues here.</p> <p>First, there's a better way to create a dict than what you're doing. In fact it's basically already built in. <code>G.degree</code> is already a dict-like object, so <code>G.degree[node]</code> will give the degree of <code>node</code>.</p> <p>If you really want it to be a dict, the best way to do that is probably</p> <pre><code>deg_dict = dict(G.degree) </code></pre> <hr> <p>Now let's look at the error you're getting. <code>G.nodes()</code> is not a list (it's also something dict-like). So when you set <code>nodes=G.nodes()</code>, then <code>nodes</code> isn't a list. Here <code>nodes[0]</code> is trying to return the attributes of node <code>0</code> (and for what it's worth, if your nodes don't have any attributes <code>nodes[node]</code> will return an empty dict). But (I believe) <code>0</code> is not a node in your graph <code>G</code>. So this is the meaning of your error message.</p> <p>Also, as a general rule, if you ever do <code>n=len(x)</code> and then <code>for i in range(n):</code>, you almost always really want to do <code>for name in x:</code> or, if you really need the index, <code>for i, name in enumerate(x)</code>.</p> <p>So if you want to use the approach you did,</p> <pre><code>for i, node in enumerate(nodes): deg_dict[node] = degree[i] </code></pre>
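<p>If you only have the edge list and want to avoid the adjacency matrix entirely, the same node-to-degree mapping can be built with <code>collections.Counter</code> (a stdlib-only sketch; the edges here are made up):</p>

```python
from collections import Counter

edges = [('a', 'b'), ('b', 'c'), ('a', 'c'), ('a', 'd')]

deg = Counter()
for u, v in edges:
    deg[u] += 1        # each edge contributes one degree to both endpoints
    deg[v] += 1

print(dict(deg))       # {'a': 3, 'b': 2, 'c': 2, 'd': 1}
```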
python|python-3.x|networkx
2
543
72,091,852
Pandas datetime filter
<p>I want to get subset of my dataframe if date is before 2022-04-22. The original df is like below</p> <p>df:</p> <pre><code> date hour value 0 2022-04-21 0 10 1 2022-04-21 1 12 2 2022-04-21 2 14 3 2022-04-23 0 10 4 2022-04-23 1 12 5 2022-04-23 2 14 </code></pre> <p>I checked data type by <strong>df.dtypes</strong> and it told me <strong>'date'</strong> column is <strong>'object'</strong>.</p> <p>So I checked individual cell using <strong>df['date'][0]</strong> and it is <strong>datetime.date(2022, 4, 21)</strong>.</p> <p>Also, <strong>df['date'][0] &lt; datetime.date(2022, 4, 22)</strong> gave me <strong>'True'</strong></p> <p>However, when I wanted to apply this smaller than in whole dataframe by</p> <p><strong>df2 = df[df['date'] &lt; datetime.date(2022, 4, 22)]</strong>,</p> <p>it showed <em>TypeError: '&lt;' not supported between instances of 'str' and 'datetime.date'</em></p> <p>Why was this happening? Thanks in advance!</p>
<p>You most likely still have some string dates in some of your rows, so the first element might be fine but a comparison across all values using &quot;&lt;&quot; will fail.</p> <p>You can either use timegeb's answer from the comments:</p> <pre><code>df['date'] = pd.to_datetime(df['date']) </code></pre> <p>or convert them element-wise:</p> <pre><code>import datetime df['date'] = [datetime.datetime.strptime(d, '%Y-%m-%d') if type(d) == str else d for d in df['date']] </code></pre> <p>Both methods might fail if you have an odd string in any of your rows. In that case you can use:</p> <pre><code>import datetime import numpy as np def convstr2date(d): if type(d) == str: try: d = datetime.datetime.strptime(str(d), '%Y-%m-%d') except ValueError: d = np.datetime64('NaT') return d df['date'] = [convstr2date(d) for d in df['date']] </code></pre>
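<p>The element-wise idea can be sketched without pandas as well (names and the date format here are illustrative):</p>

```python
import datetime

def to_date(d):
    """Convert ISO-format strings to dates; pass dates through; None on bad input."""
    if isinstance(d, str):
        try:
            return datetime.datetime.strptime(d, '%Y-%m-%d').date()
        except ValueError:
            return None
    return d

mixed = ['2022-04-21', datetime.date(2022, 4, 23), 'oops']
dates = [to_date(d) for d in mixed]

# With every element a real date (or None), the comparison works as expected
cutoff = datetime.date(2022, 4, 22)
early = [d for d in dates if d is not None and d < cutoff]
print(early)   # [datetime.date(2022, 4, 21)]
```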
python|pandas|dataframe|datetime
1
544
68,581,994
How to get the "element name" for selenium?
<p>I have (found) a python script whose purpose is to click a certain button on a certain web page. This is the script:</p> <pre><code>from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager driver = webdriver.Chrome(ChromeDriverManager().install()) url = &quot;https://url&quot; driver.get(url) button = driver.find_element_by_class_name(&quot;name-of-the-button&quot;) button.click() </code></pre> <p>As I understand, &quot;class name&quot; is the name of the button. My question is how do I know the class name of the specific button on that specific website? What's the proper way to get this information from?</p>
<p>What you're looking for is called the developer tools. If you're using Chrome, <a href="https://nira.com/chrome-developer-tools/#:%7E:text=From%20the%20Chrome%20menu%3A,web%20page%20you%27re%20on." rel="nofollow noreferrer">here's</a> a good tutorial. Most browsers have a very similar layout.</p> <p>Essentially you want to hit F12 to open up the developer tools, hit CTRL+SHIFT+C and click the button you want. On the right, the button's source code should be highlighted, and you should be able to see a <code>class=</code> attribute.</p>
python|selenium
1
545
10,583,640
Is this a correct implementation for a curried operator.add function?
<p>This <a href="http://www.uncarved.com/blog/not_currying.mrk" rel="nofollow">blogpost</a> proposed this implementation of a curried addition function:</p> <pre><code>def addN(n): return lambda x: x + n def plus(a, b): addA=addN(a) return addA(b) </code></pre> <p>I believe my version is more correct because it uses unary functions all the way.</p> <pre><code>from operator import add plus = lambda a: lambda b: add(a, b) plus(1)(2) </code></pre> <p>What do you think ?</p>
<p>Your <code>plus</code> function is identical to the <code>addN</code> function from the blog post. The only differences are:</p> <ol> <li>You used a <code>lambda</code> where the blog post used a <code>def</code>.</li> <li>For some reason you used <code>operator.add(a,b)</code> instead of simply <code>a+b</code>.</li> </ol> <p>Usage of <code>plus</code> and <code>addN</code> will give the same results so neither one is more "correct".</p> <p>Note that the function naming you chose is confusing since your <code>plus</code> function corresponds to the <code>addN</code> function (rather than the <code>plus</code> function) from the blog post.</p>
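<p>For comparison, the standard library's <code>functools.partial</code> achieves the same partial application without a hand-rolled closure:</p>

```python
from functools import partial
from operator import add

plus = lambda a: lambda b: add(a, b)   # the curried version from the question
add_one = partial(add, 1)              # partial application via the stdlib

assert plus(1)(2) == 3
assert add_one(2) == 3
assert partial(add, 10)(5) == 15
```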
python|functional-programming
5
546
5,394,474
Django: Edit Function while not changing the Image data
<p>I have an edit function that I want the user to be able to edit the Picture object (tags), while keeping the old image. The form is looking for a photo but I do want the user to be able to change the image - just the other information. </p> <p>How do you pass the original image data from the picture object into the PictureForm so it validates?</p> <p>My view:</p> <pre><code>@csrf_protect @login_required def edit_picture(request, picture_id, template_name="picture/newuserpicture.html"): picture = get_object_or_404(Picture, id=picture_id) if request.user != picture.user: return HttpResponseForbidden() if request.method == 'POST': form = PictureForm(request.POST or None, request.FILES or None, instance=picture) if form.is_valid(): form.save() return HttpResponseRedirect('/picture/%d/' % picture.id ) else: form = PictureForm(instance=picture) data = { "picture":picture, "form":form } return render_to_response(template_name, data, context_instance=RequestContext(request)) </code></pre>
<p>I think this thread should give you a clue about how to make existing fields readonly: <a href="https://stackoverflow.com/questions/324477/in-a-django-form-how-to-make-a-field-readonly-or-disabled-so-that-it-cannot-be">In a Django form, how do I make a field readonly (or disabled) so that it cannot be edited?</a></p> <p>If you want to hide the picture completely and stumble across validation errors because the field is marked as required in your model definition (<code>blank=True</code>), another option would be to override the form's <code>__init__</code> method and tweak the fields' required attributes.</p> <p>Something along these lines:</p> <pre><code> def __init__(self, *args, **kwargs): super(PictureForm, self).__init__(*args, **kwargs) for key in self.fields: self.fields[key].required = False </code></pre>
python|django-forms|django-views
0
547
5,484,900
Does local GAE read and write to a local datastore file on the hard drive while it's running?
<p>I have just noticed that when I have a running instance of my GAE application, nothing happens to the datastore file when I add or remove entries using Python code or in the admin console. I can even remove the file and still have all the data safe and sound in the admin area and accessible from code. But when I restart my application, all the data obviously goes away and I have a blank datastore. So, the question - does GAE read all the data from the file only when it starts and then deal with it in memory, saving the data after I stop the application? Does it make any requests to the datastore file while the application is running? If it doesn't save anything to the file while it's running, could data be lost if the application unexpectedly stops? Please clarify this for me if you know how it works in this respect.</p>
<p>How the datastore reads and writes its underlying files varies - the standard datastore is read on startup, and written progressively, journal-style, as the app modifies data. The SQLite backend uses a SQLite database.</p> <p>You shouldn't have to care, though - neither backend is designed for robustness in the face of failure, as they're development backends. You shouldn't be modifying or deleting the underlying files, either.</p>
python|google-app-engine|local-storage
3
548
5,313,513
Is there any python implement of edonkey/emule
<p>I want deploy a project in google appengine to search edonkey/emule, Is there any python implement of edonkey/emule or ed2k protocol library ?</p>
<p>After 20 minutes of googling all combinations of python and edonkey/emule/ed2k and visiting all sites of all clients listed under the "eDonkey network" Wikipedia page I can say with near certainty that the answer is "No."</p>
python|p2p
1
549
61,848,676
Is there a way to label the mean and median in matplotlib boxplot legend?
<p>I have the following box plot which plots some values with different mean and median values for each box; I am wondering if there is any way to label them so that they appear on the graph legend (because the current box plot plots an orange line for the median and a blue dot for the mean and it is not so clear which is which)? Also is there a way to make one legend for these subplots, instead of having a legend for each one, since they are essentially the same objects just different data?</p> <p>Here's a code example for one of the subplots, the other subplots are the same but have different data:</p> <pre><code>fig = plt.figure() xlim = (4, 24) ylim = (0, 3700) plt.subplot(1,5,5) x_5_diff = {5: [200, 200, 291, 200, 291, 200, 291, 200, 291, 200, 291, 200, 291, 200, 291], 7: [161, 161, 179, 161, 179, 161, 179, 161, 179, 161, 179, 161, 179, 161, 179], 9: [205, 205, 109, 205, 109, 205, 109, 205, 109, 205, 109, 205, 109, 205, 109], 11: [169, 169, 95, 169, 95, 169, 95, 169, 95, 169, 95, 169, 95, 169, 95], 13: [43, 43, 70, 43, 70, 43, 70, 43, 70, 43, 70, 43, 70, 43, 70], 15: [33, 33, 39, 33, 39, 33, 39, 33, 39, 33, 39, 33, 39, 33, 39], 17: [23, 23, 126, 23, 126, 23, 126, 23, 126, 23, 126, 23, 126, 23, 126], 19: [17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17, 17], 21: [15, 15, 120, 15, 120, 15, 120, 15, 120, 15, 120, 15, 120, 15, 120], 23: [63, 63, 25, 63, 25, 63, 25, 63, 25, 63, 25, 63, 25, 63, 25]} keys = sorted(x_5_diff) plt.boxplot([x_5_diff[k] for k in keys], positions=keys) # box-and-whisker plot plt.hlines(y = 1600, colors= 'r', xmin = 5, xmax = 23, label = "Level 1 Completed") plt.title("x = 5 enemies") plt.ylim(0,3700) plt.plot(keys, [sum(x_5_diff[k]) / len(x_5_diff[k]) for k in keys], '-o') plt.legend() plt.show() </code></pre> <p>Any help would be appreciated.</p>
<p>It's a bit late, but try this:</p> <pre><code> bp = plt.boxplot([x_5_diff[k] for k in keys], positions=keys) # You can access boxplot items using its dictionary plt.legend([bp['medians'][0], bp['means'][0]], ['median', 'mean']) </code></pre>
python|matplotlib|boxplot
5
550
61,801,990
Tensorflow 2.0 : AttributeError: module 'tensorflow' has no attribute 'matrix_band_part'
<p>While running the code <code>tf.matrix_band_part</code>, I get the following error:</p> <pre><code>AttributeError: module 'tensorflow' has no attribute 'matrix_band_part' </code></pre> <p>My tensorflow version: 2.0</p> <p>Any solution for this problem is appreciated.</p>
<p>I have found the answer, so I would like to share it.</p> <p>The compatible version of the function for tensorflow 2.0 is</p> <pre><code>tf.compat.v1.matrix_band_part </code></pre> <p>Ref: <a href="https://www.tensorflow.org/api_docs/python/tf/linalg/band_part" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/linalg/band_part</a></p>
tensorflow2.0|attributeerror
1
551
61,789,921
Cant See Data Inside of Section Tag with Selenium
<p>I am trying to count how many buttons are on a page. And then later press them. However to access these buttons I have to go through an iframe, some generic (div) layers, and a region (section) layer.</p> <p>I'm able to get through the iframe layer with</p> <p><code>driver.switch_to.frame("iframeID")</code></p> <p>but cant figure out how to gain access to elements within the secion layers.</p> <p>html looks something like this:</p> <pre><code>&lt;iframe id="iframeID" resize="" src="about:blank;" seamless="" scrolling="no" allowfullscreen="" style="height: 2135px;" xpath="1"&gt; #document &lt;!document&gt; &lt;html&gt; &lt;head&gt;...&lt;/head&gt; &lt;body&gt; &lt;section class="sectionC"&gt; &lt;div class="divC"&gt; &lt;button type="button" class="buttonC" data-id="1234" style=""&gt;Done&lt;/button&gt; &lt;/div&gt; &lt;/section&gt; &lt;/body&gt; &lt;/html&gt; &lt;/iframe&gt; </code></pre>
<p>It is simple to achieve with Beautiful Soup:</p> <pre><code>from bs4 import BeautifulSoup soup = BeautifulSoup(driver.page_source, 'html.parser') len(soup.find_all('button', {'type' : 'button'})) </code></pre> <p>Hope this helps.</p>
python|selenium|selenium-webdriver|webdriver
0
552
67,538,117
Python Countdown but in Year, Month, Week, Days, Hours, Minutes, Sec
<p>I would like to have my lifetime displayed in the form of a countdown. Unfortunately, Python datetime only allows days. And couldn't program a conversion</p> <p>this is what i tried:</p> <pre><code>#!/usr/bin/env python3 import time import datetime from dateutil.relativedelta import relativedelta from datetime import timedelta while True: lebenszeit = datetime.datetime(2085,7,6) - datetime.datetime.now() jahr = str(int((lebenszeit.days)/365.25)) monate = str('%0.2d' %(int((((lebenszeit.days)*365)-int((lebenszeit.days)/365))*12))) tage = str('%0.2d' %(int(((((lebenszeit.days)/365)-int((lebenszeit.days)/365))*12)-((((lebenszeit.days)/365)-int((lebenszeit.days)/365))*12)*30))) print(jahr+&quot;.&quot;+monate+&quot;.&quot;+tag) i = i+1 </code></pre> <p>as you can see very complicated...</p> <p>I would like to have a countdown that should look like this ( Year, Month, Week, Days, Hours, Minutes, Secounds):</p> <pre><code>68.02.04.29.07.40.44 </code></pre>
<p>Here's how I'd do it. Note that &quot;months&quot; is approximate, assuming 30 days per month. Using only &quot;weeks&quot; would be more accurate.</p> <pre><code>import time import datetime from datetime import timedelta lebenszeit = datetime.datetime(2085,7,6) - datetime.datetime.now() alldays = lebenszeit.days jahr = int((alldays)/365.25) alldays -= int(jahr * 365.25) months = int((alldays)/30.0) alldays -= months * 30 weeks = int((alldays)/7.0) alldays -= weeks * 7 days = alldays print(f&quot;{jahr}.{months:02d}.{weeks:02d}.{days:02d}&quot;) </code></pre>
python|datetime|countdown
1
553
71,257,349
Linear discriminant Analysis Sklearn
<p>I’m running LDA on a dataset and the outcome was good across all metrics. However I can’t seem to extract the top features or loadings like I can for PCA.</p> <p>Is anyone familiar with extracting top features / loadings from LDA when using sklearn python3?</p>
<p>try this:</p> <pre><code>import numpy as np from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA X = training_input y = training_label.ravel() clf = LDA(n_components=1) clf.fit(X, y) clf.coef_ beste_Merkmal = np.argsort(clf.coef_)[0][::-1][0:25] print('beste_Merkmal =', beste_Merkmal) </code></pre>
python|python-3.x|lda|linear-discriminant
0
554
70,326,423
Python: Large float arithmetic for El Gamal decryption
<h2>Context</h2> <p>The decryption math formula for the El Gamal method is the following:</p> <pre><code>m = ab^(-k) mod p </code></pre> <p>Specifically in Python, I want to compute the following equivalent:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; m = (b**(-k) * a) % p </code></pre> <p>The issue in the above Python code is that the numbers inserted would overflow or result in 0.0 due to precision. Consider the following example:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; (15653**(-3632) * 923) % 262643 0.0 </code></pre> <p>The expected answer for the above example is 152015.</p> <h2>More Examples</h2> <p><a href="https://i.stack.imgur.com/M2YNr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M2YNr.png" alt="enter image description here" /></a></p> <h2>Attempts</h2> <p>I've tried to research a strategy to deal with this problem and found that using Python's default <strong>pow(x,y,z)</strong>, which differs from <strong>math.pow()</strong>, can help.</p> <p><strong>pow(x,y,z)</strong> is equivalent to <strong>x**y % z</strong></p> <p>However, I cannot use <code>pow(x,y,z)</code>. I tried to use <strong>pow(15653, -3632, 262643)</strong>, but I cannot multiply the result of <strong>pow(15653, -3632)</strong> by 923 to then, as a final step, mod by 262643.</p> <p>In other words, instead of <strong>x**y % z</strong>, I am trying to perform <strong>(x**y * a ) % z</strong>, but there is clearly a 3-parameter limit or number of operations from <code>pow(x,y,z)</code>.</p> <p>What can I do to compute the math formula in Python?</p>
<p>Very easily: just multiply the two, and do an explicit mod:</p> <pre><code>&gt;&gt;&gt; p = 262643 &gt;&gt;&gt; pow(15653, -3632, p) 86669 &gt;&gt;&gt; 86669 * 923 % p 152015 </code></pre> <p>Done!</p>
python|math|cryptography|precision|elgamal
2
555
11,237,527
I have a set of points along the oval. How do I create a filled binary mask
<p>I am trying to get an filled binary mask of a contour of this image. <img src="https://i.stack.imgur.com/rp469.png" alt="The contour of the image"></p> <p>I took a look this question <a href="https://stackoverflow.com/questions/3654289/scipy-create-2d-polygon-mask">SciPy Create 2D Polygon Mask</a>; however it does not seem to like my set of data. </p> <pre><code>import numpy as np from matplotlib.nxutils import points_inside_poly nx, ny = 10, 10 poly_verts = [(1,1), (5,1), (5,9),(3,2),(1,1)] # Create vertex coordinates for each grid cell... # (&lt;0,0&gt; is at the top left of the grid in this system) x, y = np.meshgrid(np.arange(nx), np.arange(ny)) x, y = x.flatten(), y.flatten() points = np.vstack((x,y)).T grid = points_inside_poly(points, poly_verts) grid = grid.reshape((ny,nx)) print grid </code></pre> <p>I wonder if there is another way that I can try to return a binary mask or someone to explain the limitations of points_inside_poly</p> <p>because it seems to end up something like this <img src="https://i.stack.imgur.com/ybK18.png" alt="badMaskScatter"></p>
<p>I'm not sure what you're plotting at the end, but your example works for me:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from matplotlib.nxutils import points_inside_poly from itertools import product, compress pv = [(1,1),(5,1),(5,9),(3,2),(1,1)] x, y = np.meshgrid(np.arange(10),np.arange(10)) x, y = x.flatten(), y.flatten() xy = np.vstack((x,y)).T grid = points_inside_poly(xy,pv) xv, yv = zip(*pv) xp, yp = zip(*compress(xy,grid)) plt.plot(xp,yp,'o',color='red',label='points') plt.plot(xv,yv,'o',color='blue',label='vertices') plt.xlim((0,10)) plt.ylim((0,10)) plt.legend() plt.show() </code></pre> <p><img src="https://i.stack.imgur.com/tbuz5.png" alt="points_inside_poly"></p>
python|image-processing|numpy|matplotlib
2
556
10,966,006
Django Middleware - How to edit the HTML of a Django Response object?
<p>I'm creating a custom middleware to django edit response object to act as a censor. I would like to find a way to do a kind of search and replace, replacing all instances of some word with one that I choose.</p> <p>I've created my middleware object, added it to my <code>MIDDLEWARE_CLASSES</code> in settings and have it set up to process the response. But so far, I've only found methods to add/edit cookies, set/delete dictionary items, or write to the end of the html:</p> <pre class="lang-py prettyprint-override"><code>class CensorWare(object): def process_response(self, request, response): """ Directly edit response object here, searching for and replacing terms in the html. """ return response </code></pre> <p>Thanks in advance.</p>
<p>You can simply modify the <code>response.content</code> string:</p> <pre><code>response.content = response.content.replace("BAD", "GOOD") </code></pre>
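A minimal, self-contained sketch of how that one-liner could slot into the middleware skeleton from the question. The `Content-Type` guard and the placeholder `"BAD"`/`"GOOD"` terms are illustrative additions, not part of the original answer:

```python
# Sketch of the censor middleware from the question, with the one-line fix
# wired in. "BAD" -> "GOOD" is a placeholder substitution; a real censor
# would load its word list from settings.
class CensorWare(object):
    def process_response(self, request, response):
        # Only rewrite HTML responses; skip images and other binary content.
        content_type = response.get("Content-Type", "")
        if content_type.startswith("text/html"):
            # Note: under Python 3 / newer Django, response.content is bytes,
            # so the search/replace terms would need to be bytes (b"BAD").
            response.content = response.content.replace("BAD", "GOOD")
        return response
```

`HttpResponse` exposes `get(header, alternate)` for header lookup, which is what the guard relies on here.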
python|html|django
9
557
56,617,528
Keras model does not provide same results after converting into tensorflow-js model
<p>Keras model performs as expected in python but after converting the model the results are different on the same data.</p> <p>I tried updating the keras and tensorflow-js version but still the same issue.</p> <p>Python code for testing:</p> <pre><code> import keras import cv2 model = keras.models.load_model("keras_model.h5") img = cv2.imread("test_image.jpg") def preprocessing_img(img): img = cv2.resize(img, (50,50)) x = np.array(img) image = np.expand_dims(x, axis=0) return image/255 prediction_array= model.predict(preprocessing_img(img)) print(prediction_array) print(np.argmax(prediction_array)) </code></pre> <p>Results: [[1.9591815e-16 1.0000000e+00 3.8602989e-18 3.2472009e-19 5.8910814e-11]] 1</p> <p>These results are correct.</p> <p>Javascript Code:</p> <p>tfjs version:</p> <pre><code>&lt;script type="text/javascript" src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.13.5"&gt; &lt;/script&gt; </code></pre> <p>preprocessing_img method and prediction in js:</p> <pre><code>function preprocessing_img(img) { let tensor = tf.fromPixels(img) const resized = tf.image.resizeBilinear(tensor, [50, 50]).toFloat() const offset = tf.scalar(255.0); const normalized = tf.scalar(1.0).sub(resized.div(offset)); const batched = normalized.expandDims(0) return batched } const pred = model.predict(preprocessing_img(imgEl)).dataSync() const class_index = tf.argMax(pred); </code></pre> <p>In this case the results are not same and the last index in the pred array is 1 90% of the time.</p> <p>I think there is something wrong with the preprocessing method of image in javascript since i am not an expert in javascript or am i missing something in javascript part?</p>
<p>It has to do with the image used for the prediction. The image needs to have completely loaded before the prediction runs.</p> <pre><code>imgEl.onload = function (){ const pred = model.predict(preprocessing_img(imgEl)).dataSync() const class_index = tf.argMax(pred); } </code></pre>
javascript|python|keras|tensorflowjs
1
558
69,730,687
Is there a way to make Class[key] work to extract from a static container?
<p>I'm trying to build a class that maintains an internal list of all objects of that class and can look them up by ID. While I could use <code>myClass.get(objectID)</code> to get the objects, I would really prefer to use <code>myClass[objectID]</code> but this throws <code>TypeError: 'type' object is not subscriptable</code>. Is there any permutation of the sample case below that would work?</p> <pre class="lang-py prettyprint-override"><code>class Bucket(object): bucket = set() def __init__(self, id, name): self.id = id self.name = name Bucket.bucket.add(self) def get(id): return Bucket.__getitem__(None, id) def __getitem__(self, id): for i in Bucket.bucket: if i.id == id: return i.name return None b = Bucket(&quot;foo&quot;, &quot;bar&quot;) print(1, Bucket.get(&quot;foo&quot;)) print(2, b[&quot;foo&quot;]) print(3, Bucket[&quot;foo&quot;]) </code></pre> <pre><code>1 bar 2 bar Traceback (most recent call last): File &quot;{snip}\bucketTest.py&quot;, line 22, in &lt;module&gt; print(3, Bucket[&quot;foo&quot;]) TypeError: 'type' object is not subscriptable </code></pre> <hr /> <p><strong>EDIT</strong></p> <p>With a hint in the direction of metaclasses, I've come up with this. As I have honestly never stumbled across them before, I have to ask: am I doing this right? Am I missing some fundamental bit, or is this vaguely correct? How could I improve it?</p> <pre class="lang-py prettyprint-override"><code>class MetaBucket(type): def __init__(cls, name, bases, dct): cls.bucket = set() def __getitem__(cls, key): for i in cls.bucket: if i.id == key: return i.name class Bucket(metaclass = MetaBucket): def __init__(self, id, name): self.id = id self.name = name Bucket.bucket.add(self) b = Bucket(&quot;foo&quot;, &quot;bar&quot;) print(3, Bucket[&quot;foo&quot;]) </code></pre>
<p>With respect to your <strong>EDIT</strong> that uses a metaclass, I'd suggest using a <code>dict</code> instead of a <code>set</code> for the <code>bucket</code> attribute since it makes things easier and more succinct:</p> <pre><code>class MetaBucket(type): def __init__(cls, name, bases, dct): cls.bucket = {} def __getitem__(cls, id): return cls.bucket[id] def __setitem__(cls, id, name): cls.bucket[id] = name class Bucket(metaclass=MetaBucket): def __init__(self, id, name): self.bucket[id] = name b = Bucket(&quot;foo&quot;, &quot;bar&quot;) print(3, Bucket[&quot;foo&quot;]) # -&gt; 3 bar print(4, Bucket[&quot;nonesuch&quot;]) # -&gt; KeyError: 'nonesuch' </code></pre>
python|class
0
559
17,737,914
Unable to iterate over the "tr" element of a table using beautiful soup
<pre><code>from bs4 import BeautifulSoup import re import urllib2 url = 'http://sports.yahoo.com/nfl/players/5228/gamelog' page = urllib2.urlopen(url) soup = BeautifulSoup(page) table = soup.find(id='player-game_log-season').find('tbody').find_all('tr') for rows in tr: data = raws.find_all("td") print data </code></pre> <p>I'm trying to go through the table for a certain player's stats last year and grab their stats, however, I get a <code>AttributeError: 'NoneType' object has no attribute 'find_all'</code> When I try to run this code. I'm new to beautiful soup so I'm not really sure what the problem is. </p> <p>Also if anyone has any good tutorials to recommend me that would be awesome. Reading through the documentation is sort of confusing as I am fairly new to programming. </p>
<p>There's no <code>tbody</code> in the table under <code>div#player-game_log-season</code>. And your code has some typos.</p> <ul> <li><code>raws</code> -> <code>rows</code></li> <li><code>table</code> -> <code>tr</code></li> </ul> <hr> <pre><code>... tr = soup.find(id='player-game_log-season').find_all('tr') for rows in tr: data = rows.find_all("td") print data </code></pre>
python|web-scraping|beautifulsoup
1
560
60,895,313
Pandas Dataframe: New Column that uses Country if Province is empty, else use the Province
<p>The meat of what I'm trying to do can be seen at the bottom. Here's the dataset I'm using: <a href="https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv" rel="nofollow noreferrer">https://github.com/CSSEGISandData/COVID-19/blob/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv</a></p> <p>What I want is to add to ['Names'] the data from ['Province/State'] if it isn't empty, else the data from ['Country/Region'].</p> <p>I'm building an interactive heat map using plotly, and it works. But the problem is, there are multiple markers named "Canada" (for each of the states there) and Greenland is named "Denmark," because in the CSV file, "Greenland" is under "State/Province" and "Denmark" is under "Country/Region."</p> <pre><code>import pandas as pd import plotly.graph_objects as go import requests from datetime import date, timedelta yesterday = date.today() - timedelta(days=1) confirmed_url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv' deaths_url = 'https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv' yesterdays_date = yesterday.strftime('%-m/%d/%y') confirmed = pd.read_csv(confirmed_url) deaths = pd.read_csv(deaths_url) confirmed.iloc[0]['Country/Region'] #Test for place in deaths[['Province/State','Country/Region']]: if place is float: deaths_names.append('Country/Region') else: deaths_names.append('Province/State') confirmed['Name'] = df(confirmed_names) deaths['Name'] = df(deaths_names) </code></pre>
<p>This worked:</p> <pre><code>def names_column(frame, lst): #Makes a new column called Name for i in range(len(frame)): if type(frame['Province/State'][i]) is str: lst.append(frame['Province/State'][i]) else: lst.append(frame['Country/Region'][i]) frame['Name'] = lst names_column(confirmed, confirmed_names) names_column(deaths, deaths_names) </code></pre>
python|pandas|dataframe
-1
561
72,775,645
in class, pass from method to method the values of local variables
<p>I have a problem to pass from method to method the values of local variables. I didn't put them in the constructor because I would like some processing to be done in the methods</p> <pre><code>class Myclass: def __init__(self,nbr1,nbr2): self.nbr1 = nbr1 self.nbr2 = nbr2 def operation1(self): nbr3 =nbr1+nbr2 return nbr3 #I would like to pass the nbr3 value in the operation2 function # for some treatments def operation2(self): nbr4= nbr3*2 return nbr4, nbr3 #and return value of def operation2 in showMe function def showMe(self,param): showresult = param() print(f'this a result : {showresult[0]} and another result {showresult[1]}') nbr1 = 5 nbr2 = 7 result = Myclass(nbr1,nbr2) result.showMe(result.operation2) </code></pre> <p>but I have an error nbr3 is not defined</p> <p>thank for helps</p>
<p>You need to actually call <code>operation1()</code> somewhere:</p> <pre><code>def operation2(self): nbr3 = self.operation1() nbr4 = nbr3 * 2 return nbr4, nbr3 </code></pre> <p>Or set the instance variable:</p> <pre><code>def operation1(self): self.nbr3 = self.nbr1 + self.nbr2 def operation2(self): nbr4 = self.nbr3 * 2 return nbr4, self.nbr3 ... result = Myclass(nbr1,nbr2) result.operation1() result.showMe(result.operation2) </code></pre> <p>Or remove <code>operation1</code> and set the instance variable in the constructor:</p> <pre><code>class Myclass: def __init__(self,nbr1,nbr2): self.nbr1 = nbr1 self.nbr2 = nbr2 self.nbr3 = self.nbr1 + self.nbr2 def operation2(self): nbr4 = self.nbr3 * 2 return nbr4, self.nbr3 </code></pre>
python|oop
0
562
68,110,565
Does Google Drive API search responds not only files/folder metadata but also the matched content w.r.t query in the search response?
<pre><code>response = DRIVE.files().list(q=&quot;fullText contains 'what is python?'&quot;, spaces='drive', fields='*', pageToken=page_token).execute() </code></pre> <p>Given the sample Python code above, what extra parameter can I pass, or what can I extract, to get the matched content along with the files?</p> <p>Example response (current)</p> <blockquote> <p>{'kind': 'drive#file', 'id': '1acVspMMcliVE8M6WzNL14sdvXYT-dScw', 'name': '4590764611082297754.txt', 'mimeType': 'text/plain',.....}</p> </blockquote> <p>So can this JSON response also include the matched content from the query, and also the score, in any form? Please let me know if this feature is available or can be coded/extracted somehow.</p> <p>Thanks</p>
<p>The <a href="https://developers.google.com/drive/api/v2/search-shareddrives" rel="nofollow noreferrer">q</a> parameter for the files.list method allows you to search for things like files with a specific title or file type.</p> <p>The Google Drive API is just a file storage system: although a <code>fullText</code> query is matched against indexed file content on the server, the response only ever contains file metadata. There is no method for retrieving the matched snippets or a relevance score.</p>
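To illustrate the answer's point: the `q` parameter is just a query-string filter, and only file metadata comes back. Below is a small, hypothetical helper for composing such a query; the helper name and the commented-out surrounding call are illustrative, not part of the Drive API itself:

```python
# Illustrative helper for composing a Drive API "q" search expression.
# The API evaluates fullText against indexed file content server-side,
# but the response still only contains file metadata (id, name, mimeType, ...),
# never the matched text or a relevance score.
def build_query(phrase, mime_type=None):
    # Single quotes and backslashes inside the phrase must be escaped.
    escaped = phrase.replace("\\", "\\\\").replace("'", "\\'")
    q = "fullText contains '%s'" % escaped
    if mime_type:
        q += " and mimeType = '%s'" % mime_type
    return q

# Usage against an authorized service object, as in the question:
# response = DRIVE.files().list(q=build_query("what is python?", "text/plain"),
#                               spaces="drive",
#                               fields="files(id, name, mimeType)").execute()
```

To see *where* a phrase matched, the file contents would have to be downloaded (e.g. `files().get_media()`) and searched client-side.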
python|google-api|google-drive-api|google-api-python-client
0
563
59,411,587
Python - find items with multiple occurences and replace with mean
<p>For df:</p> <pre><code>sample type count sample1 red 5 sample1 red 7 sample1 green 3 sample2 red 2 sample2 green 8 sample2 green 8 sample2 green 2 sample3 red 4 sample3 blue 5 </code></pre> <p>I would like to find items in "type" with multiple occurences and replace the "count" for each of those with the mean count. So expected output: </p> <pre><code>sample type count sample1 red 6 sample1 green 3 sample2 red 2 sample2 green 6 sample3 red 4 sample3 blue 5 </code></pre> <p>So</p> <pre><code>non_uniq = df.groupby("sample")["type"].value_counts() non_uniq = non_uniq.where(non_uniq &gt; 1).dropna() </code></pre> <p>finds the "type" with multiple occurences but I don't know how to match it in df</p>
<p>I believe you can simplify solution to <code>mean</code> per all groups, because mean by value is same like this value:</p> <pre><code>df = df.groupby(["sample","type"], as_index=False, sort=False)["count"].mean() print (df) sample type count 0 sample1 red 6 1 sample1 green 3 2 sample2 red 2 3 sample2 green 6 4 sample3 red 4 5 sample3 blue 5 </code></pre> <p>Your solution is possible change by:</p> <pre><code>m = df.groupby(["sample", "type"])['type'].transform('size') &gt; 1 df1 = df[m].groupby(["sample","type"], as_index=False, sort=False)["count"].mean() df = pd.concat([df1, df[~m]], ignore_index=True) print (df) sample type count 0 sample1 red 6 1 sample2 green 6 2 sample1 green 3 3 sample2 red 2 4 sample3 red 4 5 sample3 blue 5 </code></pre>
python|pandas
1
564
35,619,831
Iterating to produce a unique list
<p>This is the initial code:</p> <pre><code>word_list = ['cat','dog','rabbit'] letter_list = [ ] for a_word in word_list: for a_letter in a_word: letter_list.append(a_letter) print(letter_list) </code></pre> <p>I need to modify it to produce a list of unique letters.</p> <p>Could somebody please advise how to do this <strong>without using set()</strong></p> <p>The result should be like this</p> <pre><code>&gt; ['c', 'a', 't', 'd', 'o', 'g', 'r', 'b', 'i'] </code></pre>
<p>The only problem that I can see is that you have not checked whether the letter is already present in the list. Try this:</p> <pre><code>&gt;&gt;&gt; word_list= ['cat', 'dog', 'rabbit'] &gt;&gt;&gt; letter_list= [] &gt;&gt;&gt; for a_word in word_list: for a_letter in a_word: if a_letter not in letter_list: letter_list.append(a_letter) &gt;&gt;&gt; print letter_list ['c', 'a', 't', 'd', 'o', 'g', 'r', 'b', 'i'] </code></pre>
python|list|for-loop|char|unique
2
565
73,484,500
useEffect fires and print statements run but no actual axios.post call runs reactjs
<p>I have a useEffect function that is firing due to <code>yearsBackSettings</code> changing and the console.log statements inside useEffect fire too:</p> <pre><code>useEffect(() =&gt; { console.log(&quot;something changed&quot;) console.log(yearsBackSettings) if (userId) { const user_profile_api_url = BASE_URL + '/users/' + userId const request_data = { searches: recentSearches, display_settings: displaySettings, years_back_settings: yearsBackSettings } console.log(&quot;running user POST&quot;) console.log(request_data) axios.post(user_profile_api_url, request_data) .then(response =&gt; { console.log(&quot;user POST response&quot;) console.log(response) }) } }, [recentSearches, displaySettings, yearsBackSettings]) </code></pre> <p>As the image shows, changing yearsBackSettings causes this to run, which SHOULD make a post request with all the new settings:</p> <p><a href="https://i.stack.imgur.com/AtNpC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AtNpC.png" alt="enter image description here" /></a></p> <p>However, for some reason there is nothing happening on the server except the stock search running:</p> <pre><code>the last updated time for stock ibm before save: 08/25/2022 08:13:30 stock was updated within the last 5 minutes...no need to make an api call the last updated time for stock ibm after save: 08/25/2022 08:13:30 [25/Aug/2022 08:17:25] &quot;POST /users/114260670592402026255 HTTP/1.1&quot; 200 9 [25/Aug/2022 08:17:25] &quot;GET /dividends/ibm/3/5 HTTP/1.1&quot; 200 4055 the last updated time for stock ibm before save: 08/25/2022 08:13:30 stock was updated within the last 5 minutes...no need to make an api call the last updated time for stock ibm after save: 08/25/2022 08:13:30 [25/Aug/2022 08:17:26] &quot;GET /dividends/ibm/27/5 HTTP/1.1&quot; 200 8271 the last updated time for stock ibm before save: 08/25/2022 08:13:30 stock was updated within the last 5 minutes...no need to make an api call the last updated time for stock ibm 
after save: 08/25/2022 08:13:30 [25/Aug/2022 08:18:11] &quot;GET /dividends/ibm/27/70 HTTP/1.1&quot; 200 14734 </code></pre> <p>The post to users there was an initial one when users loaded. If I sign in and sign out I lose the 70 years in the second component:</p> <p><a href="https://i.stack.imgur.com/6WyFj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6WyFj.png" alt="enter image description here" /></a></p> <p>When I log out and log in it shows 27 years and 5 years, I lose the 70 because the /users POST didn't run</p> <p>I have the following React main component</p> <pre><code>import React, {useState, useEffect} from 'react'; import { connect } from 'react-redux'; import axios from 'axios'; const SearchPage = ({userId}) =&gt; { const [recentSearches, setRecentSearches] = useState([DEFAULT_STOCK]); const [dividendsYearsBack, setDividendsYearsBack] = useState('3'); const [debouncedDividendYearsBack, setDebouncedDividendYearsBack] = useState('3'); const [earningsYearsBack, setEarningsYearsBack] = useState('5'); const [debouncedEarningsYearsBack, setDebouncedEarningsYearsBack] = useState('5'); const [errorMessage, setErrorMessage] = useState(''); ) const [displaySettings, setDisplaySettings] = useState([ {setting_name: 'showYieldChange', visible: true}, {setting_name: 'showAllDividends', visible: true}, {setting_name: 'showAllEarnings', visible: true}, ]) const [yearsBackSettings, setYearsBackSettings] = useState([ {section: 'dividendsYearsBack', years_back: 3}, {section: 'earningsYearsBack', years_back: 5} ]) const onTermUpdate = (term) =&gt; { const trimmed = term.trim() setTerm(trimmed); } debounceTerm(setDebouncedTerm, term, 1500); debounceTerm(setDebouncedDividendYearsBack, dividendsYearsBack, 1500); debounceTerm(setDebouncedEarningsYearsBack, earningsYearsBack, 1500); useEffect(() =&gt; {runSearch()}, [debouncedTerm]); useEffect(() =&gt; { // alert(dividendsYearsBack) if (dividendsYearsBack !== '' &amp;&amp; earningsYearsBack !== '') { 
runSearch(); } }, [debouncedDividendYearsBack, debouncedEarningsYearsBack]) useEffect(() =&gt; { const yearsSettingsCopy = Object.assign(yearsBackSettings); const dividendsYearsBackSetting = yearsSettingsCopy.find((dict) =&gt; dict.section == 'dividendsYearsBack'); dividendsYearsBackSetting.years_back = dividendsYearsBack; const earningsYearsBackSetting = yearsSettingsCopy.find((dict) =&gt; dict.section == 'earningsYearsBack'); earningsYearsBackSetting.years_back = earningsYearsBack; setYearsBackSettings(yearsSettingsCopy); }, [dividendsYearsBack, earningsYearsBack]) useEffect(() =&gt; { const dividendsYearsBackSetting = yearsBackSettings.find((dict) =&gt; dict.section == 'dividendsYearsBack'); setDividendsYearsBack(dividendsYearsBackSetting.years_back); const earningsYearsBackSetting = yearsBackSettings.find((dict) =&gt; dict.section == 'earningsYearsBack'); setEarningsYearsBack(earningsYearsBackSetting.years_back); }, [yearsBackSettings]) useEffect(() =&gt; { console.log(&quot;user id changed&quot;) if (userId) { const user_profile_api_url = BASE_URL + '/users/' + userId axios.get(user_profile_api_url, {}) .then(response =&gt; { // console.log(response) const recent_searches_response = response.data.searches; const new_recent_searches = []; recent_searches_response.map(dict =&gt; { new_recent_searches.push(dict.search_term) }) setRecentSearches(new_recent_searches); setDisplaySettings(response.data.display_settings); setYearsBackSettings(response.data.years_back_settings); }) .catch((error) =&gt; { console.log(&quot;error in getting user profile: &quot;, error.message) }) } }, [userId]) useEffect(() =&gt; { console.log(&quot;something changed&quot;) console.log(yearsBackSettings) if (userId) { const user_profile_api_url = BASE_URL + '/users/' + userId const request_data = { searches: recentSearches, display_settings: displaySettings, years_back_settings: yearsBackSettings } console.log(&quot;running user POST&quot;) console.log(request_data) 
axios.post(user_profile_api_url, request_data) .then(response =&gt; { console.log(&quot;user POST response&quot;) console.log(response) }) } }, [recentSearches, displaySettings, yearsBackSettings]) return ( &lt;div className=&quot;ui segment&quot;&gt; {renderMainContent()} &lt;/div&gt; &lt;/div&gt; ) } const mapStateToProps = state =&gt; { return { userId: state.auth.userId }; }; export default connect( mapStateToProps )(SearchPage); // export default SearchPage; </code></pre> <p>The yearsBackSettings is showing up changed to 27 and 70 (from the picture) but the POST request doesn't fire. How can I get these settings to save when the settings change?</p> <p>The issue is that the post doesnt run when I update the years back settings:</p> <pre><code> useEffect(() =&gt; { console.log(&quot;running user profile post&quot;); const user_profile_api_url = BASE_URL + '/users/' + userId const request_data = { searches: recentSearches, display_settings: displaySettings, years_back_settings: yearsBackSettings } axios.post(user_profile_api_url, request_data) .then(response =&gt; { console.log(response) }) }, [recentSearches, displaySettings, yearsBackSettings]) </code></pre> <p>this doesnt run when yearsBackSettings changes. I am logging yearsBackSettings to console and it is certainly changed, but the post request to user profile doesnt fire</p> <p>I think the issue is here:</p> <pre><code> useEffect(( ) =&gt; { const dividendsYearsBackSetting = yearsBackSettings.find((dict) =&gt; dict.section == 'dividendsYearsBack'); dividendsYearsBackSetting.years_back = dividendsYearsBack; const earningsYearsBackSetting = yearsBackSettings.find((dict) =&gt; dict.section == 'earningsYearsBack'); earningsYearsBackSetting.years_back = earningsYearsBack; setYearsBackSettings(yearsBackSettings); }, [dividendsYearsBack, earningsYearsBack]) </code></pre> <p>as an example, I tried doing any useEffect with yearsBackSettings, and it never works. 
I have changed the settings a few times and the alert does not fire:</p> <pre><code>useEffect(() =&gt; { alert(&quot;years back settings changed&quot;) }, [yearsBackSettings]) </code></pre>
<p>The issue is somewhere in your Python server code: in your console you can see that you are actually logging a response object with a 200 response code, meaning your server doesn't crash during the actual request.</p> <p>There might be a problem in your server-side logging causing the request to not show up; I would look at that first.</p>
javascript|python|reactjs|django
5
566
71,090,728
Optimization variables of a neural network model with simulated annealing
<p>I implemented an MLP neural network model on the data. To optimize 4 variables, a function based on the MLP model is defined, and simulated annealing is run on this function. I don't know why I get this error (attached below).</p> <p>Neural network code:</p> <pre><code># mlp for regression from numpy import sqrt from pandas import read_csv from sklearn.model_selection import train_test_split from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense import tensorflow from tensorflow import keras from matplotlib import pyplot from keras.layers import Dropout from tensorflow.keras import regularizers # determine the number of input features n_features = X_train.shape[1] # define model model = Sequential() model.add(Dense(150, activation='tanh', kernel_initializer='zero',kernel_regularizer=regularizers.l2(0.001), input_shape=(n_features,))) #relu/softmax/tanh model.add(Dense(100, activation='tanh', kernel_initializer='zero',kernel_regularizer=regularizers.l2(0.001))) model.add(Dense(50, activation='tanh', kernel_initializer='zero',kernel_regularizer=regularizers.l2(0.001))) model.add(Dropout(0.0)) model.add(Dense(1)) # compile the model opt= keras.optimizers.Adam(learning_rate=0.001) #opt = tensorflow.keras.optimizers.RMSprop(learning_rate=0.001,rho=0.9,momentum=0.0,epsilon=1e-07,centered=False,name=&quot;RMSprop&quot;) model.compile(optimizer=opt, loss='mse') # fit the model history=model.fit(X_train, y_train, validation_data = (X_test,y_test), epochs=100, batch_size=10, verbose=0,validation_split=0.3) # evaluate the model error = model.evaluate(X_test, y_test, verbose=0) print('MSE: %.3f, RMSE: %.3f' % (error, sqrt(error))) # plot learning curves pyplot.title('Learning Curves') pyplot.xlabel('Epoch') pyplot.ylabel('Cross Entropy') pyplot.plot(history.history['loss'], label='train') pyplot.plot(history.history['val_loss'], label='val') pyplot.legend() pyplot.show() </code></pre> <p>function code:</p> <pre><code>def objective_function(X): wob = 
X[0] torque= X[1] RPM = X[2] pump = X[3] input=[wob,torque,RPM, 0.00017,0.027,pump,0,0.5,0.386,0.026,0.0119,0.33,0.83,0.48] input = pd.DataFrame(input) obj= model.predict(input) return obj </code></pre> <p>simulated annealing for optimization:</p> <pre><code>import time import random import math import numpy as np ## custom section initial_temperature = 100 cooling = 0.8 # cooling coef. number_variables = 4 upper_bounds = [1,1,1,1] lower_bounds = [0,0,0,0] computing_time = 1 # seconds ## simulated Annealing algorithm ## 1. Genertate an initial solution randomly initial_solution = np.zeros((number_variables)) for v in range(number_variables): initial_solution[v] = random.uniform(lower_bounds[v], upper_bounds[v]) current_solution = initial_solution best_solution = initial_solution n=1 # no of solutions accepted best_fitness = objective_function(best_solution) current_temperature = initial_temperature # current temperature start = time.time() no_attemps = 100 # number of attemps in each level of temperature record_best_fitness = [] for i in range(9999999): for j in range(no_attemps): for k in range(number_variables): ## 2. generate a candidate solution y randomly based on solution x current_solution[k] = best_solution[k] + 0.1*(random.uniform(lower_bounds[k], upper_bounds[k])) current_solution[k] = max(min(current_solution[k], upper_bounds[k]), lower_bounds[k]) # repaire the solution respecting the bounds ## 3. check if y is better than x current_fitness = objective_function(current_solution) E = abs(current_fitness - best_solution) if i==0 and j==0: EA = E if current_fitness &lt; best_fitness: p = math.exp(-E/(EA*current_temperature)) # make a decision to accept the worse solution or not ## 4. make a decision whether r &lt; p if random.random()&lt;p: accept = True # this worse solution is not accepted else: accept = False # this worse solution is not accepted else: accept = True # accept better solution ## 5. 
make a decision whether step comdition of inner loop is met if accept == True: best_solution = current_solution # update the best solution best_fitness = objective_function(best_solution) n = n + 1 #count the solutions accepted EA = (EA*(n-1)+E)/n # accept EA print('interation : {}, best_solution:{}, best_fitness:{}'. format(i, best_solution, best_fitness)) record_best_fitness.append(best_fitness) ## 6. decrease the temperature current_temperature = current_temperature * cooling ## 7. stop condition of outer loop is met end = time.time() if end-start &gt;= computing_time: break </code></pre> <p>The error picture: <a href="https://i.stack.imgur.com/2yaX9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2yaX9.png" alt="enter image description here" /></a></p>
<p>It's your input shape: in the MLP neural network the input shape is [None, 14], but your function's input is [14, 1], so you need to transpose it.</p> <pre><code>def objective_function(X): wob = X[0] torque= X[1] RPM = X[2] pump = X[3] input=[wob,torque,RPM, 0.00017,0.027,pump,0,0.5,0.386,0.026,0.0119,0.33,0.83,0.48] input = pd.DataFrame(input) input=input.T obj = model.predict(input) return obj </code></pre>
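To make the shape mismatch concrete without the Keras model: building a DataFrame from a flat list of 14 values gives shape (14, 1), while a network declared with input_shape=(n_features,) expects (n_samples, 14), i.e. (1, 14) for one sample. A minimal sketch, pandas only; the first four feature values are hypothetical stand-ins for wob, torque, RPM and pump:

```python
import pandas as pd

# 14 feature values for one sample; the first four stand in for
# wob, torque, RPM and pump from the question
features = [0.5, 0.3, 0.7, 0.9,
            0.00017, 0.027, 0, 0.5, 0.386, 0.026, 0.0119, 0.33, 0.83, 0.48]

col = pd.DataFrame(features)  # shape (14, 1): 14 rows, 1 column
row = col.T                   # shape (1, 14): 1 sample, 14 features

print(col.shape)  # (14, 1)
print(row.shape)  # (1, 14)
```

model.predict(row) then receives a single sample whose width matches the n_features the model was built with.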
python|function|optimization|mlp|simulated-annealing
0
567
70,984,483
Issue with load_img- Error- FileNotFoundError: [Errno 2] No such file or directory:
<pre><code>for i in os.listdir(&quot;D:/Deep Learning/vgg16_images&quot;): print(i) image=[] for i in os.listdir(r'D:\Deep Learning\vgg16_images'): img = load_img(i,target_size=(224, 224)) img=img_to_array(img) img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2])) # prepare the image for the VGG model img = preprocess_input(img) image.append(img) </code></pre> <blockquote> <p>The for loop at the top prints 4 images: 1) bus.jpg 2) mug.jpg 3) schoolbus.jpg 4) traffic.jpg</p> </blockquote> <blockquote> <p>The next section of the code, at load_img, throws the error FileNotFoundError: [Errno 2] No such file or directory: 'bus.jpg'</p> </blockquote> <p><strong>The path and image name extension are all correct; the same code works if I remove the image &quot;bus&quot;, and the issue is not with that particular image: if I add any other image it throws the error.</strong></p> <p><strong>The pattern I saw was that once I run the code on x number of images and then rerun the code after adding new images, it throws the error; I tried resolving it by restarting the kernel and by closing and refreshing the folders as well.</strong></p>
<p>My apologies, this question should not have been there in the first place; I realized it later. Those days when the brain stops working completely.</p> <p>The mentioned directory and the load_img paths are different. load_img was working for all images other than bus.jpg because those images were present in both folder paths.</p> <p>Not deleting the question though; someday someone else might have the same bad day as well.</p>
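For anyone who lands here with the same symptom: os.listdir returns bare filenames, not full paths, so load_img('bus.jpg', ...) only works when the current working directory happens to be the image folder. Joining the directory onto each name removes that dependence. A sketch using a temporary directory in place of the real image folder (load_img itself is left commented out):

```python
import os
import tempfile

# stand-in for D:\Deep Learning\vgg16_images
folder = tempfile.mkdtemp()
for name in ("bus.jpg", "mug.jpg"):
    open(os.path.join(folder, name), "w").close()

for name in os.listdir(folder):             # bare names like 'bus.jpg'
    full_path = os.path.join(folder, name)  # full path that always resolves
    assert os.path.isfile(full_path)
    # img = load_img(full_path, target_size=(224, 224))
```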
python|image
0
568
70,840,179
pandas pivot data Cols to rows and rows to cols
<p>I am using Python and pandas and have tried a variety of attempts to pivot the following (switch the rows and columns)</p> <p>Example: A is unique</p> <pre><code> A B C D E... (and so on) [0] apple 2 22 222 [1] peach 3 33 333 [N] ... and so on </code></pre> <p>And I would like to see</p> <pre><code> ? ? ? ? ... and so on A apple peach B 2 3 C 22 33 D 222 333 E ... and so on </code></pre> <p>I am OK if the columns are named after the col &quot;A&quot;, and if the first column needs a name, let's call it &quot;name&quot;</p> <pre><code> name apple peach ... B 2 3 C 22 33 D 222 333 E ... and so on </code></pre>
<p>Think you're wanting <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.transpose.html" rel="nofollow noreferrer">transpose</a> here.</p> <pre><code>df = pd.DataFrame({'A': {0: 'apple', 1: 'peach'}, 'B': {0: 2, 1: 3}, 'C': {0: 22, 1: 33}}) df = df.T print(df) 0 1 A apple peach B 2 3 C 22 33 </code></pre> <p>Edit for comment. I would probably reset the index and then use the df.columns to update the column names with a list. You may want to reset the index again at the end as needed.</p> <pre><code>df.reset_index(inplace=True) df.columns = ['name', 'apple', 'peach'] df = df.iloc[1:, :] print(df) name apple peach 1 B 2 3 2 C 22 33 </code></pre>
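To get exactly the layout asked for (columns named after column A, with the first column called name), the transpose can be combined with set_index so no manual column list is needed; a sketch of one possible chain:

```python
import pandas as pd

df = pd.DataFrame({'A': ['apple', 'peach'],
                   'B': [2, 3],
                   'C': [22, 33]})

# use A as the index, flip, then turn the index back into a 'name' column
out = df.set_index('A').T.rename_axis('name').reset_index()
out.columns.name = None  # drop the leftover 'A' label on the columns

print(out)
```

The result has columns ['name', 'apple', 'peach'] with rows B and C, matching the third layout in the question without hard-coding the fruit names.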
python|pandas|pivot
1
569
60,030,104
how to convert a pandas dataframe to a list of dictionaries in python?
<p>I have a dataframe like this:</p> <pre><code>data = {'id': [1,1,2,2,2,3], 'value': ['a','b','c','d','e','f'] } df = pd.DataFrame (data, columns = ['id','value']) </code></pre> <p>I want to convert it to a list of dictionaries like:</p> <pre><code>df_dict = [ { 'id': 1, 'value':['a','b'] }, { 'id': 2, 'value':['c','d','e'] }, { 'id': 3, 'value':['f'] } ] </code></pre> <p>And then eventually insert this list <code>df_dict</code> into another dictionary:</p> <pre><code>{ &quot;products&quot;: [ { &quot;productID&quot;: 1234, &quot;tag&quot;: df_dict } ] } </code></pre> <p>We don't need to worry about what the other dictionary looks like. We can simply use the example I gave above. </p> <p>How do I do that? Many thanks!</p>
<p>You can groupby and then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_dict.html" rel="nofollow noreferrer">to_dict</a> to convert it to a list of dictionaries.</p> <pre><code>&gt;&gt;&gt; df.groupby(df['id'], as_index=False).agg(list).to_dict(orient=&quot;records&quot;) [{'id': 1, 'value': ['a', 'b']}, {'id': 2, 'value': ['c', 'd', 'e']}, {'id': 3, 'value': ['f']}] </code></pre>
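Continuing from that result, dropping the grouped records into the outer structure from the question is then plain dictionary building (productID 1234 is the question's placeholder value):

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 2, 2, 2, 3],
                   'value': ['a', 'b', 'c', 'd', 'e', 'f']})

# group ids and collect each group's values into a list
df_dict = (df.groupby('id', as_index=False)
             .agg(list)
             .to_dict(orient='records'))

# nest the records under the outer structure from the question
payload = {"products": [{"productID": 1234, "tag": df_dict}]}

assert payload["products"][0]["tag"][0] == {'id': 1, 'value': ['a', 'b']}
```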
python|pandas|dataframe|dictionary
3
570
60,235,937
How do I extract the information from the website?
<p>I am trying to gather information of all the vessels from this website: <a href="https://www.marinetraffic.com/en/data/?asset_type=vessels&amp;columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&amp;ship_type_in|in|Cargo%20Vessels|ship_type_in=7" rel="nofollow noreferrer">https://www.marinetraffic.com/en/data/?asset_type=vessels&amp;columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&amp;ship_type_in|in|Cargo%20Vessels|ship_type_in=7</a> </p> <p>This is my code right now: </p> <pre><code>import selenium.webdriver as webdriver url = "https://www.marinetraffic.com/en/data/?asset_type=vessels&amp;columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&amp;ship_type_in|in|Cargo%20Vessels|ship_type_in=7" browser = webdriver.Chrome(executable_path=r"C:\Users\CSA\OneDrive - College Sainte-Anne\Programming\PYTHON\Learning\WS\chromedriver_win32 (1)\chromedriver.exe") browser.get(url) browser.implicitly_wait(100) Vessel_link = browser.find_element_by_class_name("ag-cell-content-link") Vessel_link.click() browser.implicitly_wait(30) imo = browser.find_element_by_xpath('//*[@id="imo"]') print(imo) </code></pre> <p><a href="https://i.stack.imgur.com/hNaUc.png" rel="nofollow noreferrer">My output</a></p> <p>I am using selenium, which isn't going to work because. I have several thousands of ships to extract data from and it just isn't going to be efficient. (Also, I only need to extract information from Cargo Vessels (U can find that using the filter or by looking at green signs on the vessel type column.) 
and I need to extract the country name (flag), the IMO and the vessel's name.</p> <p>What should I use? Selenium, BeautifulSoup + requests, or other libraries? And how? I just started web scraping... </p> <p>I can't get the IMO or anything! The HTML structure is very weird.</p> <p>I would appreciate any help. Thank you! :)</p>
<p>Instead of clicking each vessel to open up the details, you can get the information you're searching for from the results page. This will get each vessel, pull the info you wanted and click to the next page if there are more vessels:</p> <pre><code>import selenium.webdriver as webdriver url = "https://www.marinetraffic.com/en/data/?asset_type=vessels&amp;columns=flag,shipname,photo,recognized_next_port,reported_eta,reported_destination,current_port,imo,ship_type,show_on_live_map,time_of_latest_position,lat_of_latest_position,lon_of_latest_position&amp;ship_type_in|in|Cargo%20Vessels|ship_type_in=7" browser = webdriver.Chrome('C:\Users\CSA\OneDrive - College Sainte-Anne\Programming\PYTHON\Learning\WS\chromedriver_win32 (1)\') browser.get(url) browser.implicitly_wait(5) checking_for_vessels = True vessel_count = 0 while checking_for_vessels: vessel_left_container = browser.find_element_by_class_name('ag-pinned-left-cols-container') vessels_left = vessel_left_container.find_elements_by_css_selector('div[role="row"]') vessel_right_container = browser.find_element_by_class_name("ag-body-container") vessels_right = vessel_right_container.find_elements_by_css_selector('div[role="row"]') for i in range(len(vessels_left)): vessel_count += 1 vessel_country_list = vessels_left[i].find_elements_by_class_name('flag-icon') if len(vessel_country_list) == 0: vessel_country = 'Unknown' else: vessel_country = vessel_country_list[0].get_attribute('title') vessel_name = vessels_left[i].find_element_by_class_name('ag-cell-content-link').text vessel_imo = vessels_right[i].find_element_by_css_selector('[col-id="imo"] .ag-cell-content div').text print('Vessel #' + str(vessel_count) + ': ' + vessel_name + ', ' + vessel_country + ', ' + vessel_imo) pagination_container = browser.find_element_by_class_name('MuiTablePagination-actions') page_number = pagination_container.find_element_by_css_selector('input').get_attribute('value') max_page_number = 
pagination_container.find_element_by_class_name('MuiFormControl-root').get_attribute('max') if page_number == max_page_number: checking_for_vessels = False else: next_page_button = pagination_container.find_element_by_css_selector('button[title="Next page"]') next_page_button.click() </code></pre> <p>There was one vessel that did not display a flag, so there's a check for that and the country is replaced with 'Unknown' if no flag found. The same kind of check can be done for the vessel name and imo.</p> <p>The implicit wait was reduced to 5 because of the known issue of missing a flag on one vessel and waiting 100 seconds for this to be figured out was excessive. This number can be adjusted higher if you find there's issues waiting long enough to find elements.</p> <p>It appears you are using a windows machine. You can place the path of your chromedriver in the PATH variable on your machine and then you don't have to use the path when you instantiate your browser driver. Obviously, your path to your chromedriver is different than mine, so hopefully what you provided is correct or else this won't work.</p>
python|selenium|web-scraping|beautifulsoup
1
571
2,814,450
How to make Django work with MySQL Connector/Python?
<p>Has anyone made Django work with myconnpy?</p> <p>I've checked out <a href="http://github.com/rtyler/connector-django-mysql" rel="nofollow noreferrer">http://github.com/rtyler/connector-django-mysql</a> but the author said it's very outdated and not supported.</p> <p>If you've managed to make Django work with myconnpy, please share your experience.</p> <p>Thanks. </p>
<p>I needed something similar, so I forked the project you linked to and updated it to work (for small values of &quot;work&quot;) with Django 1.2's newer database backend API.</p> <p>It should be noted that my use case is very simple (read access to a single table on a single database) and I have not tested it with anything more than that.</p> <p>You can find it at <a href="http://github.com/jerith/connector-django-mysql" rel="nofollow noreferrer">http://github.com/jerith/connector-django-mysql</a></p>
python|mysql|django|mysql-connector|django-database
1
572
6,016,937
Writing a string to the last line in a file, Python
<p>I'm attempting to make a function that writes a string to the last line in a file. However, what I currently have (below) only writes to the first line. So if I call the function more than once, it simply overwrites the first line. I'd like it to instead write the string to a new line; how would I go about this? </p> <p>Snippet :</p> <pre><code>def write (self, string) : # self.Dir is a txt file self.File = file(self.Dir, 'w') self.File.write(string) </code></pre>
<p>Open the file in append mode (<code>'a'</code> instead of <code>'w'</code>). Opening in <code>'w'</code> mode truncates your file (you're now writing into an empty file)</p>
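A small self-contained sketch of the difference, using a temporary file (the original write method would only need its mode string changed from 'w' to 'a'):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "out.txt")

# 'w' truncates on every open, so only the last write survives
for line in ("first", "second"):
    with open(path, "w") as f:
        f.write(line + "\n")
assert open(path).read() == "second\n"

# 'a' appends, so each call adds a new line at the end
os.remove(path)
for line in ("first", "second"):
    with open(path, "a") as f:
        f.write(line + "\n")
assert open(path).read() == "first\nsecond\n"
```

Note that 'a' also creates the file if it does not exist yet, so no extra existence check is needed.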
python|string|file|io
16
573
5,660,549
Google AppEngine tells me that my int is not an int
<p>The relevant part of the code:</p> <pre><code>pk = int(pk) logging.info('pk: %r :: %s', pk, type(pk)) instance = models.Model.get_by_id(int(pk)) </code></pre> <p>The output from the log message above</p> <pre><code>pk: 757347 :: &lt;type 'int'&gt; </code></pre> <p>The stacktrace:</p> <pre><code>Traceback (most recent call last): File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 634, in __call__ handler.get(*groups) File "/base/data/home/apps/&lt;myapp&gt;/&lt;version&gt;/scrape.py", line 61, in get instance = models.Model.get_by_id(int(pk)) File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1212, in get_by_id return get(keys[0], config=config) File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1434, in get model = cls1.from_entity(entity) File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 1350, in from_entity instance = cls(None, _from_entity=True, **entity_values) File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 890, in __init__ prop.__set__(self, value) File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 593, in __set__ value = self.validate(value) File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/db/__init__.py", line 2967, in validate % (self.name, type(value).__name__)) BadValueError: Property pk must be an int or long, not a unicode </code></pre> <p>Anyone has an idea if I'm doing something wrong here?</p> <p><strong>Note:</strong> removing the <code>int</code> from the last line of the code makes no difference (that was the first version).</p> <p>Also, the code works without a problem on <code>dev_appserver.py</code>.</p>
<p>Does your model have a property 'pk', which is now an IntegerProperty(), but was previously a StringProperty(), and the entity with id 757347 was saved with the old version of the model?</p>
python|google-app-engine|google-cloud-datastore
5
574
67,803,117
How to edit "view on site" url on django admin?
<p>How, in a modern version of Django, can I edit the &quot;view on site&quot; URL in the Django admin?</p>
<p>In your model, implement the <code>get_absolute_url</code> method like this:</p> <pre><code> def get_absolute_url(self): return reverse('model_record_view',args=[self.id]) </code></pre> <p>where model_record_view is the name of the view and it takes the id as a parameter</p>
python|django
0
575
67,986,807
Create Pandas DataFrame from a list and list of lists
<p>I have two Python lists</p> <pre><code>messages = ['message1', 'message2', 'message3'] labels = [[1,0,1,3,1], [1,1,2,0,3], [0,0,2,1,0]] </code></pre> <p>I am creating a DataFrame which will take <strong>messages</strong> as the first column and <strong>labels</strong> as <strong>cat_1, cat_2, cat_3, cat_4, cat_5</strong>, i.e. 6 columns in total</p> <p>I tried</p> <pre><code>msgs_labels = pd.DataFrame( {'message': messages, 'cat': labels, }) </code></pre> <p>but it returns only two columns: <strong>messages</strong> and <strong>cat</strong>.</p>
<p>If starting from <code>0</code> for the new column names is not a problem, use the <code>DataFrame</code> constructor with <code>join</code>:</p> <pre><code>df = pd.DataFrame({'message': messages}).join(pd.DataFrame(labels).add_prefix('cat_')) print (df) message cat_0 cat_1 cat_2 cat_3 cat_4 0 message1 1 0 1 3 1 1 message2 1 1 2 0 3 2 message3 0 0 2 1 0 </code></pre> <hr /> <pre><code>f = lambda x: f'cat_{x + 1}' df = pd.DataFrame({'message': messages}).join(pd.DataFrame(labels).rename(columns=f)) print (df) message cat_1 cat_2 cat_3 cat_4 cat_5 0 message1 1 0 1 3 1 1 message2 1 1 2 0 3 2 message3 0 0 2 1 0 </code></pre> <p>Some other ideas:</p> <pre><code>f = lambda x: f'cat_{x + 1}' df = (pd.DataFrame(labels,index=messages) .rename(columns=f) .rename_axis('messages') .reset_index()) print (df) messages cat_1 cat_2 cat_3 cat_4 cat_5 0 message1 1 0 1 3 1 1 message2 1 1 2 0 3 2 message3 0 0 2 1 0 </code></pre> <p>Or a bit crazy:</p> <pre><code>f = lambda x: f'cat_{x + 1}' df = (pd.DataFrame(labels,index=pd.Series(messages, name='messages')) .rename(columns=f) .reset_index()) </code></pre> <p>Or a solution that processes the nested lists first:</p> <pre><code>d = {f'cat_{i + 1}': x for i, x in enumerate(map(list, zip(*labels)))} d = {**{'message': messages}, **d} df = pd.DataFrame(d) print (df) message cat_1 cat_2 cat_3 cat_4 cat_5 0 message1 1 0 1 3 1 1 message2 1 1 2 0 3 2 message3 0 0 2 1 0 </code></pre>
python|python-3.x|pandas|list|dataframe
4
576
67,686,642
RedisCluster MGET with pipeline
<p>I am trying to perform an MGET operation on my Redis with the pipeline to increase performance. I have tried doing MGET in one go as well as in a batch flow</p> <pre><code>from rediscluster import RedisCluster ru = RedisCluster(startup_nodes=[{&quot;host&quot;: &quot;somecache.aws.com&quot;, &quot;port&quot;: &quot;7845&quot;}], decode_responses=True, skip_full_coverage_check=True) pipe = ru.pipeline() # pipe.mget(keys) for i in range(0, len(keys), batch_size): temp_list = keys[i:i + batch_size] pipe.mget(temp_list) resp = pipe.execute() </code></pre> <p>So far I am getting the error</p> <pre><code>raise RedisClusterException(&quot;ERROR: Calling pipelined function {0} is blocked when running redis in cluster mode...&quot;.format(func.__name__)) rediscluster.exceptions.RedisClusterException: ERROR: Calling pipelined function mget is blocked when running redis in cluster mode... </code></pre> <p>What I want to know is:</p> <ol> <li>Does RedisCluster support pipelined MGET?</li> <li>If not, is there any other library that I can use to achieve this?</li> </ol>
<p>Turns out we cannot use MGET with the pipeline; below is my final solution</p> <pre><code>from rediscluster import RedisCluster def redis_multi_get(rc: RedisCluster, keys: list): pipe = rc.pipeline() [pipe.get(k) for k in keys] return pipe.execute() if __name__ == '__main__': rc = RedisCluster(startup_nodes=[{&quot;host&quot;: host, &quot;port&quot;: port}], decode_responses=True, skip_full_coverage_check=True) keys = rc.keys(PREFIX + '*') cache_hit = redis_multi_get(rc, keys) </code></pre>
python|redis|pipeline|redis-cluster
0
577
67,639,397
Combine two rows in csv using python
<p>I need to combine two rows removing the space between them. What I need is:</p> <p>My csv with single column:</p> <pre><code>&quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;| &quot;imperfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; </code></pre> <p>My output needs to be :</p> <pre><code>&quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; </code></pre> <p>But what I got is:</p> <pre><code>&quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot;,&quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot;,&quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; </code></pre> <p>My code is:</p> <pre><code>fIn = open(&quot;01new.csv&quot;, &quot;r&quot;) fOut = open(&quot;output.csv&quot;, &quot;w&quot;) fOut.write(&quot;,&quot;.join([line for line in fIn]).replace(&quot;\n&quot;,&quot;&quot;)) fIn.close() fOut.close() </code></pre> <p>How can I get the output I need?</p> <hr /> <p>When I run <a href="https://stackoverflow.com/a/67639860/843953">the code from Pranav's answer</a>, I get this output:</p> <pre><code>&quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;imperfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; </code></pre> <p>And in addition i had empty delimiter that too get vanished For eg:</p> <p>My Actual File is</p> <pre><code>&quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;|&quot;perfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;| &quot;imperfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;|&quot;perfect line&quot; </code></pre> <p>I need Like :</p> <pre><code>&quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;|&quot;perfect line&quot; 
&quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;|&quot;imperfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;|&quot;perfect line&quot; </code></pre>
<p>You can use a <a href="https://regex101.com/r/se0wbY/2" rel="nofollow noreferrer">regex</a> to reconstruct the line in the proper format:</p> <pre><code>import re with open(your_file, 'r') as f: s=re.sub(r'^([^|]*\|)([^|]*\|)\n\s*([^|\n]*\n)',r'\1\2\3', f.read(), flags=re.M) print(s) </code></pre> <p>Prints:</p> <pre><code>&quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;imperfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;perfect line&quot; </code></pre> <p>To use a string with csv, feed the string to the <a href="https://docs.python.org/3/library/io.html#io.StringIO" rel="nofollow noreferrer">StringIO</a> library:</p> <pre><code>import csv import re from io import StringIO with open(fn, 'r') as f: s=re.sub(r'^([^|]*\|)([^|]*\|)\n\s*([^|\n]*\n)',r'\1\2\3', f.read(), flags=re.M) for row in csv.reader(StringIO(s), delimiter='|'): print(row) </code></pre> <p>Prints:</p> <pre><code>['2021-05-13', 'test', 'perfect line'] ['2021-05-13', 'test', 'imperfect line'] ['2021-05-13', 'test', 'perfect line'] </code></pre> <hr /> <hr /> <p>Another way is to recognize that with a <code>\n</code> inserted unquoted into the CSV file, you have a broken csv file.</p> <p>You can reconstruct the record structure by reading the csv one field at a time then reconstituting into the 3 fields per record (4 fields if you insert the blank field) like so:</p> <pre><code>import csv def next_field(f): for line in f: for field in line.strip().split('|'): if field: yield field.strip('&quot;') with open(fn, 'r') as f, open(fn_out, 'w') as fo: w=csv.writer(fo,delimiter='|', quotechar='&quot;', quoting=csv.QUOTE_ALL) for r in (t[:2]+('',)+t[2:] for t in zip(*[iter(next_field(f))]*3)): w.writerow(r) </code></pre> <p>Your <code>fn_out</code> file is now:</p> <pre><code>&quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;|&quot;perfect line&quot; &quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;|&quot;imperfect line&quot; 
&quot;2021-05-13&quot;|&quot;test&quot;|&quot;&quot;|&quot;perfect line&quot; </code></pre>
python|csv
0
578
30,345,832
osm file, parsing, memory error even with clearing elements.
<p>I want to take an OSM file, clean it, and then save it as a JSON file. The XML file is about 1 GB.</p> <pre><code>def audit(): osm_file = open('c:\Users\Stephan\Downloads\los-angeles_california.osm', "r") with open('lala.txt', 'w') as outfile: for event, elem in ET.iterparse(osm_file, events=("start",)): if elem.tag == "node" or elem.tag == "way": json.dump(shape_element(elem),outfile) elem.clear() audit() </code></pre> <p>Even though I use elem.clear() I still get a memory error. Does anyone know why?</p>
<pre><code>osm_file = open('c:\Users\Stephan\Downloads\los-angeles_california.osm', "r+") </code></pre> <p>If you want to clean it in place, the file should be opened writable as well; note that &quot;wr&quot; is not a valid mode, use &quot;r+&quot; for reading and writing.</p>
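Whatever the file mode, the memory growth seen with iterparse usually comes from two things: clearing on "start" events (before an element's children even exist) and cleared elements still being referenced by the root. The commonly used pattern keeps a handle on the root and clears after each element ends; a minimal sketch on a tiny stand-in file (the tag names mirror the question, and the temporary path stands in for the real .osm file):

```python
import os
import tempfile
import xml.etree.ElementTree as ET

# tiny stand-in for the 1 GB .osm file
path = os.path.join(tempfile.mkdtemp(), "sample.osm")
with open(path, "w") as f:
    f.write("<osm>" + "<node/><way/>" * 3 + "</osm>")

count = 0
context = ET.iterparse(path, events=("start", "end"))
_, root = next(context)          # first event: start of the root element
for event, elem in context:
    if event == "end" and elem.tag in ("node", "way"):
        count += 1               # shape_element(elem) / json.dump would go here
        root.clear()             # release processed children from the root
print(count)  # 6
```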
python|json|memory|openstreetmap
0
579
30,344,045
Ordering a string by its substring numerical value in python
<p>I have a list of strings that need to be sorted in numerical order using two substrings as an int key. Obviously using the <code>sort()</code> function orders my strings alphabetically, so I get 1, 10, 2... which is obviously not what I'm looking for.</p> <p>Searching around I found that a key parameter can be passed to the <code>sort()</code> function, and using <code>sort(key=int)</code> should do the trick, but since my key is a substring and not the whole string, that would lead to a cast error.</p> <p>Supposing my strings are something like:</p> <pre><code>test1txtfgf10 test1txtfgg2 test2txffdt3 test2txtsdsd1 </code></pre> <p>I want my list to be ordered numerically on the basis of the first integer and then the second, so I would have:</p> <pre><code>test1txtfgg2 test1txtfgf10 test2txtsdsd1 test2txffdt3 </code></pre> <p>I think I could extract the integer values, sort only them while keeping track of which string they belong to, and then order the strings, but I was wondering if there's a way to do this in a more efficient and elegant way.</p> <p>Thanks in advance</p>
<p>Try the following</p> <pre><code>In [26]: import re In [27]: f = lambda x: [int(x) for x in re.findall(r'\d+', x)] In [28]: sorted(strings, key=f) Out[28]: ['test1txtfgg2', 'test1txtfgf10', 'test2txtsdsd1', 'test2txffdt3'] </code></pre> <p>This uses regex (the <a href="https://docs.python.org/3/library/re.html" rel="nofollow"><code>re</code> module</a>) to find all integers in each string, then <a href="https://docs.python.org/3/tutorial/datastructures.html#comparing-sequences-and-other-types" rel="nofollow">compares the resulting lists</a>. For example, <code>f('test1txtfgg2')</code> returns <code>[1, 2]</code>, which is then compared against other lists.</p>
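To see why the list key sorts numerically: each string maps to the list of its integers, and Python compares those lists element by element, so [1, 2] sorts before [1, 10] even though '10' < '2' as text. A runnable version with the question's data:

```python
import re

strings = ['test1txtfgf10', 'test1txtfgg2', 'test2txffdt3', 'test2txtsdsd1']

# pull every run of digits out of the string and compare as integers
key = lambda s: [int(n) for n in re.findall(r'\d+', s)]

assert key('test1txtfgf10') == [1, 10]
assert [1, 2] < [1, 10]   # numeric comparison, unlike '2' > '10' for strings

result = sorted(strings, key=key)
print(result)
# ['test1txtfgg2', 'test1txtfgf10', 'test2txtsdsd1', 'test2txffdt3']
```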
python|string|sorting
4
580
66,788,463
How to move specific data from one column to a new column on Pandas?
<p>I have a set of data with 2 columns: Column1 = Hex Code and Column2 = Current (A).</p> <p>The data in Column1 is hex codes, 27 different codes which repeat, and each hex code has a Current (A) value in Column2.</p> <p>I want to pick a set of 27 data points from Column1 &amp; Column2 and place them into Column3 &amp; Column4.</p> <p>Can someone help me to achieve this?</p> <p><a href="https://i.stack.imgur.com/yr9Lo.png" rel="nofollow noreferrer">This is how the initial data looks</a></p> <p><a href="https://i.stack.imgur.com/msSxn.png" rel="nofollow noreferrer">This is how I would like the data to be arranged</a></p>
<p>I am going to show you my code. But I want to note that you cannot have repeated column names. Suppose data is the name of your original dataset:</p> <pre><code>import pandas as pd col_name1=data.columns.values[0] col_name2=data.columns.values[1] two_columns = data[[col_name1,col_name2]][0:27].values two_columns = pd.DataFrame(two_columns,columns=[col_name1+'_1',col_name2+'_2']) df = data.iloc[0:27,:] df = df.join(two_columns) print(df) </code></pre>
python|excel|pandas|dataframe
0
581
43,010,622
Unable to click element on page Selenium python
<p>I am trying to move to page 2 and beyond of this page (pagination) with Python Selenium and have spent a few hours on this. I am getting this error and would be thankful for any help. Error from chromedriver:</p> <pre><code>is not clickable at point(). Other element would receive the click </code></pre> <p>My code so far:</p> <pre><code>class Chezacash: t1 = time.time() driver = webdriver.Chrome(chromedriver) def controller(self): self.driver.get("https://www.chezacash.com/#/home/") element = WebDriverWait(self.driver, 10).until( EC.presence_of_element_located((By.CSS_SELECTOR, "div.panel-heading"))) soup = BeautifulSoup(self.driver.page_source.encode('utf-8'),"html.parser") self.parser(soup) self.driver.find_element(By.XPATH, "//li[@class='paginate_button active']/following-sibling::li").click() time.sleep(2) soup = BeautifulSoup(self.driver.page_source.encode('utf-8'),"html.parser") self.parser(soup) def parser(self, soup): for i in soup.find("table", {"id":"DataTables_Table_1"}).tbody.contents: date = i.findAll("td")[0].get_text().strip() time = i.findAll("td")[1].get_text().strip() home = i.findAll("td")[4].div.span.get_text().strip().encode("utf-8") home_odds = i.findAll("td")[4].div.findAll("span")[1].get_text().strip() draw_odds = i.findAll("td")[5].div.findAll("span")[1].get_text().strip() away = i.findAll("td")[6].div.span.get_text().strip().encode("utf-8") away_odds = i.findAll("td")[6].div.findAll("span")[1].get_text().strip() print home cheza = Chezacash() try: cheza.controller() except: cheza.driver.service.process.send_signal(signal.SIGTERM) # kill the specific phantomjs child proc # quit the node proc cheza.driver.quit() traceback.print_exc() </code></pre>
<p>What if instead you would locate the "Next" button <em>by link text</em>, scroll it into view and then click:</p> <pre><code>next_button = self.driver.find_element_by_link_text("Next") self.driver.execute_script("arguments[0].scrollIntoView();", next_button) next_button.click() </code></pre> <p>I would also maximize the browser window before navigating to the page:</p> <pre><code>self.driver.maximize_window() self.driver.get("https://www.chezacash.com/#/home/") </code></pre>
python|selenium|web-scraping
2
582
42,590,529
How can I track all SQL query timings and counts in Django?
<p>I'd like to have a Django application record how much time each SQL query took.</p> <p>The first problem is that SQL queries differ, even when they originate from the same code. That can be solved by normalizing them, so that</p> <pre><code>SELECT first_name, last_name FROM people WHERE NOW() - birth_date &lt; interval '20' years; </code></pre> <p>would become something like</p> <pre><code>SELECT $ FROM people WHERE $ - birth_date &lt; $; </code></pre> <p>After getting that done, we could just log the normalized query and the query timing to a file, syslog or statsd (for statsd, I'd probably also use a hash of the query as a key, and keep an index of hash->query relations elsewhere).</p> <p>The bigger problem, however, is figuring out where that action can be performed. The best place for that I could find is this: <a href="https://github.com/django/django/blob/b5bacdea00c8ca980ff5885e15f7cd7b26b4dbb9/django/db/backends/util.py#L46" rel="nofollow noreferrer">https://github.com/django/django/blob/b5bacdea00c8ca980ff5885e15f7cd7b26b4dbb9/django/db/backends/util.py#L46</a> (note: we do use that ancient version of Django, but I'm fine with suggestions that are relevant only to newer versions).</p> <p>Ideally, I'd like to make this a Django extension, rather than modifying Django source code. Sounds like I can make another backend, inheriting from the one we currently use, and make its <code>CursorWrapper</code>'s class <code>execute</code> method record the timing and counter.</p> <p>Is that the right approach, or should I be using some other primitives, like <code>QuerySet</code> or something?</p>
<p>Django debug toolbar has a panel that shows "SQL queries including time to execute and links to EXPLAIN each query" <a href="http://django-debug-toolbar.readthedocs.io/en/stable/panels.html#sql" rel="nofollow noreferrer">http://django-debug-toolbar.readthedocs.io/en/stable/panels.html#sql</a></p>
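The custom-backend idea from the question (wrap the cursor, normalize each statement, record count and total time) can also be sketched in plain Python. This is a framework-agnostic sketch, not Django's actual `CursorWrapper` API: the normalization regexes are deliberately crude, and sqlite3 stands in for the real backend so it runs standalone:

```python
import re
import sqlite3
import time
from collections import defaultdict

# Aggregated stats: normalized query -> [call_count, total_seconds]
query_stats = defaultdict(lambda: [0, 0.0])

def normalize(sql):
    # Crude normalization: collapse quoted strings and numbers into '$'
    sql = re.sub(r"'[^']*'", "$", sql)
    sql = re.sub(r"\b\d+(\.\d+)?\b", "$", sql)
    return re.sub(r"\s+", " ", sql).strip()

class TimingCursor:
    """Wraps a DB-API cursor and records execution time per query shape."""
    def __init__(self, cursor):
        self._cursor = cursor

    def execute(self, sql, params=()):
        start = time.perf_counter()
        try:
            return self._cursor.execute(sql, params)
        finally:
            stats = query_stats[normalize(sql)]
            stats[0] += 1
            stats[1] += time.perf_counter() - start

    def __getattr__(self, name):
        # Delegate everything else (fetchone, close, ...) to the real cursor
        return getattr(self._cursor, name)

conn = sqlite3.connect(":memory:")
cur = TimingCursor(conn.cursor())
cur.execute("CREATE TABLE people (first_name TEXT, birth_year INTEGER)")
cur.execute("INSERT INTO people VALUES ('Ada', 1815)")
cur.execute("INSERT INTO people VALUES ('Alan', 1912)")
print(query_stats["INSERT INTO people VALUES ($, $)"][0])  # 2
```

From here, shipping the `query_stats` dict to statsd or syslog is a separate, periodic task, which keeps the hot path down to one `time.perf_counter()` pair per query.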
python|django
0
583
42,936,110
None value in python numerical integration function
<p>I'm trying to write a code that calculates integrals using the rectangular rule and also allows the user to input the integral limits and number of divions(rectangles). I've written the function, but for certain values it just returns "None". Any idea why?</p> <p>Here's my code so far:</p> <pre><code>def integral(f, a, b, N): h = int((b-a)/N) result = 0 result += h * f(a) for i in range(1, N-1): result += h * f(a + i*h) return result def f(x): return x**3 string_input1 = input("Please enter value for a: ") a = int(string_input1) string_input2 = input("Please enter value for b: ") b = int(string_input2) while True: string_input3 = input("Please enter integer positive value for N: ") N = int(string_input3) if N&gt;0: break print(integral(f, a, b, N)) </code></pre> <p>an example of values that return "None" is a=0 b=1 N=2</p>
<pre><code>for i in range(1, N-1): result += h * f(a + i*h) return result </code></pre> <p>If <code>N = 2</code> then <code>for i in range(1, 1)</code> is not going to execute, thus <code>integral</code> returns <code>None</code>.</p> <p>But even if <code>N &gt; 2</code>, having <code>return</code> inside the <code>for</code> loop doesn't make any sense since it will only run the first iteration and then return whatever <code>result</code> is.</p>
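Beyond moving the `return` out of the loop, two further details in the posted code are worth fixing: `int((b-a)/N)` truncates the step width to 0 whenever `b - a < N` (which is exactly the a=0, b=1, N=2 case), and `range(1, N-1)` stops one rectangle short of a full left-rectangle rule. A corrected version:

```python
def integral(f, a, b, N):
    """Left-rectangle rule with N rectangles on [a, b]."""
    h = (b - a) / N          # keep h as a float; int() truncated it to 0
    result = h * f(a)        # rectangle at i = 0
    for i in range(1, N):    # i = 1 .. N-1; the return stays OUTSIDE the loop
        result += h * f(a + i * h)
    return result

def f(x):
    return x ** 3

print(integral(f, 0, 1, 2))  # 0.0625, instead of None
```

With larger N the result converges to the exact integral of x^3 on [0, 1], which is 0.25.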
python|numerical-integration
2
584
72,454,208
How to pass a variable as a column name with pyodbc?
<p>I have a list that has two phone numbers and I'd like to put each phone number into its own column in an Access database. The column names are Phone_Number1 and Phone_Number2. How do I pass that to the INSERT statement?</p> <pre class="lang-py prettyprint-override"><code>phone_numbers = ['###.218.####', '###.746.####'] driver = '{Microsoft Access Driver (*.mdb, *.accdb)}' filepath = 'C:/Users/Notebook/Documents/master.accdb' myDataSources = pyodbc.dataSources() access_driver = myDataSources['MS Access Database'] conn = pyodbc.connect(driver=driver, dbq=filepath) cursor = conn.cursor() phone_number_count = 1 for phone_number in phone_numbers: column_name = &quot;Phone_Number&quot; + str(phone_number_count) cursor.execute(&quot;INSERT INTO Business_Cards (column_name) VALUES (?)&quot;, (phone_number)) conn.commit() print(&quot;Your database has been updated.&quot;) </code></pre> <p>This is what I have so far.</p> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File &quot;C:/Users/Notebook/PycharmProjects/Jarvis/BusinessCard.py&quot;, line 55, in &lt;module&gt; database_entry(phone_numbers, emails, name, title) File &quot;C:/Users/Notebook/PycharmProjects/Jarvis/BusinessCard.py&quot;, line 47, in database_entry cursor.execute(&quot;INSERT INTO Business_Cards (column_name) VALUES (?)&quot;, (phone_number)) pyodbc.Error: ('HYS22', &quot;[HYS22] [Microsoft][ODBC Microsoft Access Driver] The INSERT INTO statement contains the following unknown field name: 'column_name'. Make sure you have typed the name correctly, and try the operation again. (-1507) (SQLExecDirectW)&quot;) </code></pre>
<p>If you want to insert both numbers in the same row, remove the for loop and adjust the <code>INSERT</code> to cover the two columns (note that the generated names must match the table's actual column names, <code>Phone_Number1</code> and <code>Phone_Number2</code>):</p> <pre class="lang-py prettyprint-override"><code>phone_numbers = ['###.218.####', '###.746.####'] # ... column_names = [f&quot;Phone_Number{i}&quot; for i in range(1, len(phone_numbers) + 1)] placeholders = ['?'] * len(phone_numbers) cursor.execute(f&quot;INSERT INTO Business_Cards ({', '.join(column_names)}) VALUES ({', '.join(placeholders)})&quot;, tuple(phone_numbers)) conn.commit() # ... </code></pre>
python|pyodbc
1
585
65,828,379
How to blit from the x and y coordinates of an image in Pygame?
<p>I'm trying to reduce the number of files I need for my pygame project: instead of having a folder with, for example, 8 boot images, I can make one bigger image that has all 8 pictures next to each other, and depending on the animation tick, that specific part of the image gets blitted.</p> <p>Currently, I utilise lists.</p> <pre><code>right = [&quot;playerdesigns/playerright0.png&quot;,&quot;playerdesigns/playerright1.png&quot;,&quot;playerdesigns/playerright2.png&quot;,&quot;playerdesigns/playerright3.png&quot;] </code></pre> <p>My code then, depending on the animation tick, takes one of those files and blits it,</p> <p>but I wish to make it into one <strong>playerright.png</strong> image file where the 0-100 X pixels of the picture contain <strong>playerright1.png</strong>, the 101-200 X pixels contain <strong>playerright2.png</strong> etc., and then, depending on need, I can blit a 100-wide image from any point.</p>
<p>You can define a subsurface that is directly linked to the source surface with the method <a href="https://www.pygame.org/docs/ref/surface.html#pygame.Surface.subsurface" rel="nofollow noreferrer"><code>subsurface</code></a>:</p> <blockquote> <p><code>subsurface(Rect) -&gt; Surface</code></p> <p>Returns a new Surface that shares its pixels with its new parent. The new Surface is considered a child of the original. Modifications to either Surface pixels will effect each other.</p> </blockquote> <p>The <code>Rect</code> argument of <code>subsurface</code> specifies the rectangular area for the sub-image. It can either be a <a href="https://www.pygame.org/docs/ref/rect.html" rel="nofollow noreferrer"><code>pygame.Rect</code></a> object or a tuple with 4 components (<em>x</em>, <em>y</em>, <em>width</em>, <em>height</em>).</p> <p>For example, if you have an image that contains 3 100x100 size sub-images:</p> <pre class="lang-py prettyprint-override"><code>right_surf = pygame.image.load(&quot;playerdesigns/playerright.png&quot;) right_surf_list = [right_surf.subsurface((i*100, 0, 100, 100)) for i in range(3)] </code></pre>
python|pygame
2
586
65,623,839
Why I am getting None in place of int from if block inside a method called by another method in the class
<p>Here is the code. I am trying to get a binary search result using a method inside the class. The class has more functions, but only this function is giving the wrong output (<code>None</code> in place of an integer). The <code>if</code> part from line number 10 to 15 is causing the problem.</p> <pre><code>class Solution: def getNum(self, nums, x): L_nums = len(nums) j = self.binary_search( nums, 0, L_nums-1, x) print(&quot;j=&quot;,j) def binary_search(self, nums, start, end, x): print(&quot;BS called start=&gt;&quot;, start,&quot;end=&gt;&quot;, end,&quot;x=&gt;&quot;, x) if end==start: if nums[end]==x: return end else: print(&quot;called else and returing -1&quot;) return -1 i = (end-start)//2 + start print(&quot;i=&quot;,i) if x==nums[i]: return i elif x&gt;nums[i]: self.binary_search(nums, i+1, end, x) else: self.binary_search(nums, start, i-1, x) sol = Solution() sol.getNum([1,3,5,7,9],1) </code></pre> <p><strong>Output:</strong> here <code>j</code> should be 0, but instead it is returning None</p> <pre><code>BS called start=&gt; 0 end=&gt; 4 x=&gt; 1 i= 2 BS called start=&gt; 0 end=&gt; 1 x=&gt; 1 i= 0 j= None </code></pre>
<p>Short answer to your question: add the missing &quot;return&quot; on lines 21 and 23.</p> <ol start="20"> <li> <pre><code> elif x&gt;nums[i]: </code></pre> </li> <li> <pre><code> return self.binary_search(nums, i+1, end, x) </code></pre> </li> <li> <pre><code> else: </code></pre> </li> <li> <pre><code> return self.binary_search(nums, start, i-1, x) </code></pre> </li> </ol>
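With the two missing `return`s added, the recursive result propagates back up instead of being discarded. A compact sketch for reference; it keeps the posted search logic as-is (including its corner cases) and only fixes the return propagation, with `getNum` returning instead of printing so the result is usable:

```python
class Solution:
    def getNum(self, nums, x):
        return self.binary_search(nums, 0, len(nums) - 1, x)

    def binary_search(self, nums, start, end, x):
        if end == start:
            return end if nums[end] == x else -1
        i = (end - start) // 2 + start
        if x == nums[i]:
            return i
        elif x > nums[i]:
            return self.binary_search(nums, i + 1, end, x)    # return added
        else:
            return self.binary_search(nums, start, i - 1, x)  # return added

sol = Solution()
print(sol.getNum([1, 3, 5, 7, 9], 1))  # 0, not None
```

A Python function without an explicit `return` on the taken path always returns `None`, which is exactly what happened in the two recursive branches.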
python|python-3.x
0
587
51,019,885
Using Rasa NLU model with python API instead of HTTP server
<p>Is there a way to use <a href="https://nlu.rasa.com" rel="nofollow noreferrer">https://nlu.rasa.com</a> model without the HTTP server ? I want to use it as a python library/module. </p>
<p>Yes, and this is documented in their docs at nlu.rasa.com, specifically <a href="https://nlu.rasa.com/python.html" rel="nofollow noreferrer">this section</a>.</p> <p>As of version 0.12.3:</p> <p><strong>Training</strong></p> <pre><code>from rasa_nlu.training_data import load_data from rasa_nlu.config import RasaNLUModelConfig from rasa_nlu.model import Trainer from rasa_nlu import config training_data = load_data('data/examples/rasa/demo-rasa.json') trainer = Trainer(config.load("sample_configs/config_spacy.yml")) trainer.train(training_data) model_directory = trainer.persist('./projects/default/') # Returns the directory the model is stored in </code></pre> <p><strong>Parsing</strong></p> <pre><code>from rasa_nlu.model import Metadata, Interpreter # where `model_directory` points to the folder the model is persisted in interpreter = Interpreter.load(model_directory) interpreter.parse(u"The text I want to understand") </code></pre>
python|rasa-nlu
4
588
50,444,618
Python - MySQL "Column count doesn't match value count at row 1"
<pre><code>name = form.name.data email = form.email.data username = form.username.data password = sha256_crypt.encrypt(form.password.data) cursor = mysql.connection.cursor() cursor.execute("Insert into users(name,email.username,password) values(%s,%s,%s,%s)",(name,email,username,password)) mysql.connection.commit() cursor.close() </code></pre> <p>I am using Python with MySQL to insert the data entered in the form into a table in the database, but I am getting this error. Can you help me?</p>
<pre><code>cursor.execute("Insert into users(name,email.username,password) </code></pre> <p>You have a "." instead of a "," between email and username. It should be</p> <pre><code>cursor.execute("Insert into users(name,email,username,password) </code></pre>
python|mysql|database
4
589
50,411,346
Update an Excel sheet in real time using Python
<p>Is there a way to update a spreadsheet in real time while it is open in Excel? I have a workbook called Example.xlsx which is open in Excel, and I have the following Python code which tries to update cell B1 with the string 'ID':</p> <pre><code>import openpyxl wb = openpyxl.load_workbook('Example.xlsx') sheet = wb['Sheet'] sheet['B1'] = 'ID' wb.save('Example.xlsx') </code></pre> <p>On running the script I get this error:</p> <p><code>PermissionError: [Errno 13] Permission denied: 'Example.xlsx'</code></p> <p>I know it's because the file is currently open in Excel, but I was wondering if there is another way or module I can use to update a sheet while it's open.</p>
<p>I have actually figured this out, and it's quite simple using xlwings. The following code opens an existing Excel file called Example.xlsx and updates it in real time; in this case it puts the value 45 in cell B2 as soon as you run the script.</p> <pre><code>import xlwings as xw wb = xw.Book('Example.xlsx') sht1 = wb.sheets['Sheet'] sht1.range('B2').value = 45 </code></pre>
python|excel|openpyxl
21
590
26,644,810
In Python how to strip dollar signs and commas from dollar related fields only
<p>I'm reading in a large text file with lots of columns, dollar related and not, and I'm trying to figure out how to strip the dollar fields ONLY of $ and , characters.</p> <p>so say I have:</p> <pre><code>a|b|c $1,000|hi,you|$45.43 $300.03|$MS2|$55,000 </code></pre> <p>where a and c are dollar-fields and b is not. The output needs to be:</p> <pre><code>a|b|c 1000|hi,you|45.43 300.03|$MS2|55000 </code></pre> <p>I was thinking that regex would be the way to go, but I can't figure out how to express the replacement:</p> <pre><code>f=open('sample1_fixed.txt','wb') for line in open('sample1.txt', 'rb'): new_line = re.sub(r'(\$\d+([,\.]\d+)?k?)',????, line) f.write(new_line) f.close() </code></pre> <p>Anyone have an idea?</p> <p>Thanks in advance.</p>
<p>Unless you are really tied to the idea of using a regex, I would suggest doing something simple, straightforward, and generally easy to read:</p> <pre><code>def convert_money(inval): if inval[0] == '$': test_val = inval[1:].replace(",", "") try: _ = float(test_val) except ValueError: pass else: inval = test_val return inval def convert_string(s): return "|".join(map(convert_money, s.split("|"))) a = '$1,000|hi,you|$45.43' b = '$300.03|$MS2|$55,000' print(convert_string(a)) print(convert_string(b)) </code></pre> <p><strong>OUTPUT</strong></p> <pre><code>1000|hi,you|45.43 300.03|$MS2|55000 </code></pre>
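Since the question did ask specifically about `re.sub`, it is worth knowing that the replacement argument can be a function instead of a string: match only `$` followed by digits and return the digits with commas removed. A sketch, under the assumption that money values always start with a digit right after the `$` (which is what keeps `$MS2` and `hi,you` untouched):

```python
import re

# $ followed by a digit, then more digits/commas, then an optional decimal part
money = re.compile(r"\$(\d[\d,]*(?:\.\d+)?)")

def strip_dollars(line):
    # The replacement callable receives each match object and
    # returns the substitute text for that match.
    return money.sub(lambda m: m.group(1).replace(",", ""), line)

print(strip_dollars("$1,000|hi,you|$45.43"))   # 1000|hi,you|45.43
print(strip_dollars("$300.03|$MS2|$55,000"))   # 300.03|$MS2|55000
```

Because the pattern anchors on the `$` sign, the commas inside non-dollar fields like `hi,you` are never touched even when the regex is applied to the whole line.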
python|regex
4
591
61,342,267
Removing the rows that columns don't match with the same values
<p>I have a data frame that looks like this.</p> <p>This is what I have:</p> <pre><code> V1 V2 V3 hello 0 0 nice 0 1 meeting 1 1 you 1 0 </code></pre> <p>I want to make it look like this:</p> <pre><code> V1 V2 V3 hello 0 0 meeting 1 1 </code></pre> <p>So basically, I want to remove the rows where the V2 and V3 columns do not match. I only want to keep rows where the V2 and V3 columns share the same value, either 0 or 1. How can I do this? Please help me. Thank you very much in advance.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with inverted logic - get all rows with same values in both columns:</p> <pre><code>df = df[df.V2 == df.V3] </code></pre> <p>Alternative with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a> for compare:</p> <pre><code>df = df[df.V2.eq(df.V3)] </code></pre> <p>Next alternative with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>DataFrame.query</code></a>:</p> <pre><code>df = df.query("V2 == V3") </code></pre>
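A quick round trip with the sample data confirms that all three spellings agree (the frame here is rebuilt from the question's example, with the words as the index):

```python
import pandas as pd

df = pd.DataFrame(
    {"V2": [0, 0, 1, 1], "V3": [0, 1, 1, 0]},
    index=["hello", "nice", "meeting", "you"],
)

kept = df[df.V2 == df.V3]
print(kept.index.tolist())  # ['hello', 'meeting']

# All three forms produce the same result
assert kept.equals(df[df.V2.eq(df.V3)])
assert kept.equals(df.query("V2 == V3"))
```

The comparison `df.V2 == df.V3` yields a boolean Series aligned on the index, and indexing with it keeps exactly the rows where it is `True`.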
python|pandas|merge
3
592
61,196,837
Azure Blob Bindings with Azure Function (Python)
<p>I currently have a process of reading from sql, using pandas and pd.Excelwriter to format the data and email it out. I want my function to read from sql (no problem) and write to a blob, then from that blob (using SendGrid binding) attach that file from the blob and send it out. </p> <p>My question is do I need both an in (attaching for email) and an out (archiving to the blob) binding for that blob? Additionally, is this the simplest way to do this? It's be nice to send it and write to the blob as two unconnected operations instead of sequentially. </p> <p>It also appears that with the binding, I have to hard code the name of the file in the blob-path? That seems a little ridiculous, does anyone know a workaround, or perhaps I have misunderstood. </p>
<blockquote> <p>do I need both an in (attaching for email) and an out (archiving to the blob) binding for that blob?</p> </blockquote> <p>First, I don't think you can bind the same blob for input and output simultaneously if the blob does not exist yet. If you try, you will find it returns an error. Also, I suppose you could send the mail directly with the content from SQL and write to the blob at the same time, so you don't need to read the content back from the blob again.</p> <blockquote> <p>I have to hard code the name of the file in the blob-path?</p> </blockquote> <p>If a GUID or datetime blob name is acceptable, you can bind the path with <code>{rand-guid}</code> or <code>{DateTime}</code> (you can format the time).</p> <p>If that does not fit, you can pass the blob path from the trigger body with JSON data, as in the picture below. If you use another trigger, such as a queue trigger, you can also pass the JSON data with the path value.</p> <p><a href="https://i.stack.imgur.com/2bapD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2bapD.png" alt="enter image description here"></a></p>
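On the hard-coded blob-path point, the binding path accepts patterns, so the file name need not be fixed. A sketch of what an output-blob entry in `function.json` might look like; the binding name, container, and extension here are assumptions for illustration, not taken from the question:

```json
{
  "name": "outputBlob",
  "type": "blob",
  "direction": "out",
  "path": "reports/{rand-guid}.xlsx",
  "connection": "AzureWebJobsStorage"
}
```

A `{DateTime}` token works the same way when a timestamped archive name is preferred over a random one.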
python|azure|azure-functions|azure-blob-storage
0
593
58,112,337
Python: Mysql Escape function generates corrupted query
<p>Python's MySQL default escape function corrupts the query. The original query string is the following; it works fine and adds records to the database as desired:</p> <pre><code>INSERT IGNORE INTO state (`name`, `search_query`, `business_status`, `business_type`, `name_type`, `link`) VALUES ("test_name1", "test", "test_status", "test_b_typ", "test_n_typ", "test_link"), ("test_name2", "test", "test_status", "test_b_typ", "test_n_typ", "test_link") </code></pre> <p>But after escaping it to make it SQL-injection secure using the function <code>safe_sql = self.conn.escape_string(original_sql)</code>, the safe_sql being generated is the following:</p> <pre><code>b'INSERT IGNORE INTO state (`name`, `search_query`, `business_status`, `business_type`, `name_type`, `link`) VALUES (\\"test_name1\\", \\"test\\", \\"test_status\\", \\"test_b_typ\\", \\"test_n_typ\\", \\"test_link\\"), (\\"test_name2\\", \\"test\\", \\"test_status\\", \\"test_b_typ\\", \\"test_n_typ\\", \\"test_link\\")' </code></pre> <p>Now if I try to execute the safe_sql I get the syntax error below:</p> <pre><code>MySQLdb._exceptions.ProgrammingError: (1064, 'You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near \'\\"test_name1\\", \\"test\\", \\"test_status\\", \\"test_b_typ\\", \\"test_n_typ\\", \\"tes\' at line 1') </code></pre> <p>Which makes me wonder if the escape function I am using is either broken/incompatible or I am not using it the right way. Also, I am entering hundreds of records at one time, and because of the faster processing (which I purely assume) of a single query compared to prepared statements running hundreds of times, I am creating a large query. </p>
<p>You can't escape <em>the entire query!</em> You can't construct a query by randomly concatenating strings and then wave a magic wand over it and make it "injection secure". You need to escape every individual value <strong>before</strong> you put it into the query. E.g.:</p> <pre><code>"INSERT ... VALUES ('%s', ...)" % self.conn.escape_string(foo) </code></pre> <p>But really, your MySQL API probably offers <em>prepared statements</em>, which are much easier to use and less error prone. Something like:</p> <pre><code>self.conn.execute('INSERT ... VALUES (%s, %s, %s, ...)', (foo, bar, baz)) </code></pre>
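The prepared-statement route also answers the bulk-insert concern: `executemany` runs one parameterized statement over many rows, so values are never spliced into the SQL text and nothing needs escaping. A sketch using sqlite3 for illustration (the MySQLdb cursor API has the same shape, but its placeholder is `%s` rather than `?`, and the table here is a cut-down stand-in for the question's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE state (name TEXT, search_query TEXT)")

rows = [("test_name%d" % i, "test") for i in range(100)]
# One prepared statement executed per row; the driver passes values
# out-of-band, so no string concatenation and no escaping is involved.
cur.executemany("INSERT INTO state VALUES (?, ?)", rows)
conn.commit()

cur.execute("SELECT COUNT(*) FROM state")
print(cur.fetchone()[0])  # 100
```

Whether a multi-row `VALUES` list is actually faster than `executemany` is driver-dependent, so it is worth measuring before committing to hand-built query strings.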
python|mysql|python-3.x
1
594
56,265,046
error attributing items from scrapy into a database
<p>am trying to insert items scraped through scrapy into a MySQL database (create a new database if none is present before), I followed an online tutorial since I have no idea how to do this but an error keeps happening.</p> <p>am trying to store an item that contains 5 text fields into a database</p> <p>here's my pipeline</p> <pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*- # Define your item pipelines here # # Don't forget to add your pipeline to the ITEM_PIPELINES setting # See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html import mysql.connector class LinkPipeline(object): def _init_(self): self.create_connection() self.create_table() def create_connection(self): self.conn = mysql.connector.connect( host = 'localhost', user = 'root', passwd = 'facebook123', database = 'link' ) self.curr = self.conn.cursor() def create_table(self): self.curr.execute("""DROP TABLE IF EXISTS link_tb""") self.curr.execute("""create table link_tb( profile text, post_url text, action text, url text, date text )""") def process_item(self,item, spider): self.store_db(item) return(item) def store_db(self, item): self.curr.execute("""insert into link_tb values (%s,%s,%s,%s,%s)""", ( item['profile'][0], item['post_url'][0], item['action'][0], item['url'][0], item['date'][0] )) self.conn.commit() </code></pre> <p>here's a part of my spider </p> <pre class="lang-py prettyprint-override"><code> if response.meta['flag'] == 'init': #parse root comment for root in response.xpath('//div[contains(@id,"root")]/div/div/div[count(@id)!=1 and contains("0123456789", substring(@id,1,1))]'): new = ItemLoader(item=LinkItem(),selector=root) new.context['lang'] = self.lang new.add_xpath('profile', "substring-before(.//h3/a/@href, concat(substring('&amp;', 1 div contains(.//h3/a/@href, 'profile.php')), substring('?', 1 div not(contains(.//h3/a/@href, 'profile.php')))))") new.add_xpath('action','.//div[1]//text()') new.add_xpath('date','.//abbr/text()') 
new.add_value('post_url',response.meta['link_url']) new.add_value('url',response.url) yield new.load_item() </code></pre> <p>I expect the item to be stored in my "link" database but I keep running into this error " self.cursor.execute("""insert into link_tb values (%s,%s,%s,%s,%s)""", ( AttributeError: 'LinkPipeline' object has no attribute 'cursor'"</p>
<p>You defined the constructor as <code>_init_</code> instead of <code>__init__</code></p>
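The name matters because Python only calls the double-underscore `__init__` automatically on instantiation; a single-underscore `_init_` is just an ordinary method that nobody invokes, which is why `self.curr` (and `self.conn`) never exist when `process_item` runs. A minimal reproduction:

```python
class Broken:
    def _init_(self):          # typo: never called on instantiation
        self.curr = "cursor"

class Fixed:
    def __init__(self):        # real constructor, called automatically
        self.curr = "cursor"

print(hasattr(Broken(), "curr"))  # False
print(hasattr(Fixed(), "curr"))   # True
```

The same single- vs double-underscore trap applies to every dunder method (`__str__`, `__enter__`, and so on), and it fails silently because defining an extra method is perfectly legal.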
python|mysql|scrapy
1
595
69,568,033
How to plot lines from a dataframe with column headers as the x-axis
<p>I figure I need to do some sort of data sorting/display it differently in order to plot the graph but I'm not sure how. I have tried transposing the data set but that doesn't seem to do the trick either.</p> <p>This is my data after slicing and I need to plot W values as x axis vs the R values as y1, y2, y3, y4 and y5</p> <p><img src="https://i.stack.imgur.com/ntCHc.png" alt="" /></p> <pre class="lang-py prettyprint-override"><code>import pandas as pd data = {'observations': [15, 28, 10, 6, 25], 'biomass': [94.67, 56.56, 81.33, 26.00, 65.78], 380: [0.013918, 0.012229, 0.013622, 0.015602, 0.011784], 390: [0.015578, 0.012762, 0.014548, 0.017856, 0.013304], 400: [0.016338, 0.014434, 0.014872, 0.019132, 0.014054]} data1 = pd.DataFrame(data, index=[14, 17, 9, 5, 24]) data1.plot() </code></pre> <p><a href="https://i.stack.imgur.com/t434y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t434y.png" alt="enter image description here" /></a></p>
<p>For each graph you need two arrays or lists, x and y.</p> <p>Since the x values are the same for every graph, you can reuse them. You can get them from the keys of your DataFrame (assuming they are integers) like this:</p> <pre><code>x = [key for key in data1.keys() if type(key) == int] </code></pre> <p>Next you need the y values for each graph. You can iterate the rows of a DataFrame with <code>iterrows()</code>:</p> <pre><code>import matplotlib.pyplot as plt fig, ax = plt.subplots() # create figure and axes for index, row in data1[x].iterrows(): ax.plot(x, row) plt.show() </code></pre> <p><code>data1[x]</code> returns the columns that are in x.</p> <p><code>iterrows()</code> returns a tuple of index and row. Row is of type <code>pandas.Series</code>.</p>
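An alternative to the explicit loop, under the same assumption that the wavelength columns have integer keys: select those columns, transpose so the wavelengths become the index, and let pandas draw one line per observation row. The `Agg` backend line is only there so the sketch runs headless:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; drop this when viewing plots
import pandas as pd

data = {"observations": [15, 28, 10, 6, 25],
        "biomass": [94.67, 56.56, 81.33, 26.00, 65.78],
        380: [0.013918, 0.012229, 0.013622, 0.015602, 0.011784],
        390: [0.015578, 0.012762, 0.014548, 0.017856, 0.013304],
        400: [0.016338, 0.014434, 0.014872, 0.019132, 0.014054]}
data1 = pd.DataFrame(data, index=[14, 17, 9, 5, 24])

wavelengths = [key for key in data1.keys() if type(key) == int]
sub = data1[wavelengths]
sub.columns = sub.columns.astype(int)  # mixed-type keys give an object index

# Transpose: wavelengths become the index (x-axis), one line per original row
ax = sub.T.plot()
print(len(ax.lines))  # 5
```

The legend then labels each line with its original row index, which is usually what you want when comparing observations.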
python|pandas|matplotlib|plot
2
596
55,292,876
Autostart a python program in RaspberryPi
<p>I am making a project related to the Raspberry Pi and XBee, where it is essential that the Python program starts when I power on the Raspberry Pi.</p> <p>I saw a technique in a Udemy lecture, where it was said: run <code>sudo crontab -e</code>. A file will open. Go to the end of the file and then type <code>@reboot sudo python3 /home/pi/mycode.py</code>. Reboot the Raspberry Pi. Even after doing this, I am not getting any success. Please suggest where I am going wrong. This is an easy problem but I am stuck here. Please help.</p>
<pre><code>sudo nano /home/pi/.bashrc </code></pre> <p>Go to the last line of the script and add:</p> <pre><code>echo Running at boot sudo python /home/pi/sample.py </code></pre> <p>There are various other ways in this blog <a href="https://www.dexterindustries.com/howto/run-a-program-on-your-raspberry-pi-at-startup/" rel="nofollow noreferrer">https://www.dexterindustries.com/howto/run-a-program-on-your-raspberry-pi-at-startup/</a></p>
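If you stay with the `@reboot` crontab route from the question instead, three things commonly make it fail silently: a relative interpreter path, the script starting before the system is ready, and errors vanishing because cron has no terminal. A variant of the entry that addresses all three (paths and the delay are examples to adjust to your setup):

```
@reboot sleep 10 && /usr/bin/python3 /home/pi/mycode.py >> /home/pi/mycode.log 2>&1
```

After a reboot, checking `/home/pi/mycode.log` usually shows why the script died, for example a missing module or the XBee serial port not being ready yet.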
python|raspbian
1
597
57,364,349
Optionally passing parameters onto another function with jit
<p>I am attempting to jit compile a python function, and use a optional argument to change the arguments of another function call. </p> <p>I think where jit might be tripping up is that the default value of the optional argument is None, and jit doesn't know how to handle that, or at least doesn't know how to handle it when it changes to a numpy array. See below for a rough overview:</p> <pre><code>@jit(nopython=True) def foo(otherFunc,arg1, optionalArg=None): if optionalArg is not None: out=otherFunc(arg1,optionalArg) else: out=otherFunc(arg1) return out </code></pre> <p>Where optionalArg is either None, or a numpy array</p> <p>One solution would be to turn this into three functions as shown below, but this feels kinda janky and I don't like it, especially because speed is very important for this task.</p> <pre><code>def foo(otherFunc,arg1,optionalArg=None): if optionalArg is not None: out=func1(otherFunc,arg1,optionalArg) else: out=func2(otherFunc,arg1) return out @jit(nopython=True) def func1(otherFunc,arg1,optionalArg): out=otherFunc(arg1,optionalArg) return out @jit(nopython=True) def func2(otherFunc,arg1): out=otherFunc(arg1) return out </code></pre> <p>Note that other stuff is happening besides just calling otherFunc that makes using jit worth it, but I'm almost certain that is not where the problem is since this was working before without the optionalArg portion, so I have decided not to include it. </p> <p>For those of you that are curious its runge-kutta order 4 implementation with optional extra parameters to pass to the differential equation. 
If you want to see the whole thing just ask.</p> <p>The traceback is rather long but here is some of it:</p> <pre><code>inte.rk4(de2,y0,0.001,200,vals=np.ones(4)) Traceback (most recent call last): File "&lt;ipython-input-38-478197aa6a1a&gt;", line 1, in &lt;module&gt; inte.rk4(de2,y0,0.001,200,vals=np.ones(4)) File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\dispatcher.py", line 350, in _compile_for_args error_rewrite(e, 'typing') File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\dispatcher.py", line 317, in error_rewrite reraise(type(e), e, None) File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\six.py", line 658, in reraise raise value.with_traceback(tb) TypingError: Internal error at &lt;numba.typeinfer.CallConstraint object at 0x00000258E168C358&gt;: This continues... </code></pre> <p>inte.rk4 is the equiavlent of foo, de2 is otherFunc, y0, 0.001 and 200 are just values, that I swaped out for arg1 in my problem description above, and vals is optionalArg.</p> <p>A similar thing happens when I try to run this with the vals parameter omitted:</p> <pre><code>ysExp=inte.rk4(deExp,y0,0.001,200) Traceback (most recent call last): File "&lt;ipython-input-39-7dde4bcbdc2f&gt;", line 1, in &lt;module&gt; ysExp=inte.rk4(deExp,y0,0.001,200) File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\dispatcher.py", line 350, in _compile_for_args error_rewrite(e, 'typing') File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\dispatcher.py", line 317, in error_rewrite reraise(type(e), e, None) File "C:\Users\Alex\Anaconda3\lib\site-packages\numba\six.py", line 658, in reraise raise value.with_traceback(tb) TypingError: Internal error at &lt;numba.typeinfer.CallConstraint object at 0x00000258E048EA90&gt;: This continues... </code></pre>
<p>If you see the documentation <a href="https://numba.pydata.org/numba-doc/dev/reference/types.html#optional-types" rel="nofollow noreferrer">here</a>, you can specify the <code>optional</code> type arguments explicitly in Numba. For example (this is the same example from documentation):</p> <pre><code>&gt;&gt;&gt; @jit((optional(intp),)) ... def f(x): ... return x is not None ... &gt;&gt;&gt; f(0) True &gt;&gt;&gt; f(None) False </code></pre> <p>Additionally, based on the conversation going on <a href="https://github.com/numba/numba/issues/2749" rel="nofollow noreferrer">this Github issue</a> you can use the following workaround to implement optional keyword. I have modified the code from the solution provided in the github issue to suit your example:</p> <pre><code>from numba import jitclass, int32, njit from collections import OrderedDict import numpy as np np_arr = np.asarray([1,2]) spec = OrderedDict() spec['x'] = int32 @jitclass(spec) class Foo(object): def __init__(self, x): self.x = x def otherFunc(self, optionalArg): if optionalArg is None: return self.x + 10 else: return len(optionalArg) @njit def useOtherFunc(arg1, optArg): foo = Foo(arg1) print(foo.otherFunc(optArg)) arg1 = 5 useOtherFunc(arg1, np_arr) # Output: 2 useOtherFunc(arg1, None) # Output : 15 </code></pre> <p>See <a href="https://colab.research.google.com/drive/1dG3F9xfSAzEGLDw1u6hwGHNg19POgk6T" rel="nofollow noreferrer">this colab notebook</a> for the example shown above. </p>
python-3.x|jit|numba
3
598
57,419,527
Why flask session didn't store the user info when making different posts to it from react?
<p>I wrote several APIs using Flask-RESTful and several react modules for testing purposes. Ideally, if I stored some info in session through a request, python should be able to detect whether there is such session even in other API entries with code, like</p> <pre class="lang-py prettyprint-override"><code>if session: return jsonify({'user': session['username'], 'status': 2000}) return jsonify({'user': None, 'status': 3000}) </code></pre> <p>However, the problem I met was within a single request, say login request, session was indeed properly used and <code>username</code> was also stored in the session —— for example,</p> <pre class="lang-py prettyprint-override"><code>from flask import session ... # login API class UserLoginResource(Resource): @staticmethod def post(): ... ... # a user object (model) is defined session['username'] = user.username return jsonify({'status': 2000, 'user': session['username']}) </code></pre> <p>with this code, it returned the exact username from session, which meant info was stored. However, when I made another get request from react side to the index API, like</p> <pre class="lang-py prettyprint-override"><code>from flask import session ... 
# index API (without any practical use) class IndexResource(Resource): @staticmethod def get(): if session: return jsonify({'username': session['username']}) </code></pre> <p>In this case, the response was None, cuz the API didn't detect any session.</p> <pre class="lang-js prettyprint-override"><code>// makePostRequest Function makePostRequest = (e: any) =&gt; { e.preventDefault() const payload = { 'email': this.state.email, 'password': this.state.password } fetch('http://127.0.0.1:5000/api/login', { method: 'POST', headers: { 'Access-Control-Allow-Origin': '*', 'Content-Type': 'application/json' }, body: JSON.stringify(payload) }).then(res =&gt; res.json()) .then(res =&gt; {this.setState({ status: res['status'], username: res['user'] })}) .catch(err =&gt; console.log(err)) } </code></pre> <p>This is the way I make login post request. If login successful, it returns status code 2000; and if the status code is 2000, it means the program has gone through the code <code>session['username']=_the_username_</code>. And I should be able to extract username data from session storage when accessing Index page.</p> <pre class="lang-js prettyprint-override"><code>componentDidMount = () =&gt; { fetch('http://127.0.0.1:5000/api') .then(res =&gt; res.json()) .then(res =&gt; this.setState({ user: res['user'], status: res['status'] })) } </code></pre> <p>This is how I make a get request on the homepage module. However, the <code>user</code> is always <em>None</em> and <code>status</code> is always <em>3000</em></p> <p>This may be just improper use of session, but I don't know how to actually correctly use the session in flask. So, what was the mistake here?</p> <hr> <p>Update: So, I added a GET request within <code>class UserLoginResource(Resource)</code> like this</p> <pre class="lang-py prettyprint-override"><code>class UserLoginResource(Resource): @staticmethod def post(): ... 
        # identical to the previous code

    @staticmethod
    def get():
        # url: http://127.0.0.1:5000/api/login
        session['username'] = 'user_a'
        return jsonify({'message': 'session set'})
</code></pre> <p>And I made a GET request from the React side to <code>http://127.0.0.1:5000/api/login</code> and got the <code>message: session set</code>. However, when React then accessed <code>http://127.0.0.1:5000/api</code>, the result remained <code>status</code> 3000 and a None username. Then I directly accessed the URL <code>http://127.0.0.1:5000/api/login</code> in the browser and then accessed <code>http://127.0.0.1:5000/api</code>, and there we had the username <code>user_a</code> and status <code>2000</code>. So I think the problem might be that the backend didn't recognize that the requests were coming from the same browser, or it might be something else. Also, I checked whether something was wrong with <code>componentDidMount</code>, but unfortunately it wasn't the source of the error: after I turned it into a normal function triggered by <code>onClick</code>, it still didn't work. How can I fix this?</p>
<p><code>fetch</code> does not send cookies by default; you need to enable them with <code>credentials: 'include'</code>. (The <code>'Access-Control-Allow-Origin'</code> entry has also been dropped below: it is a <em>response</em> header, so it has no effect when sent by the client.)</p> <pre><code>makePostRequest = (e: any) =&gt; {
    e.preventDefault()
    const payload = {
        'email': this.state.email,
        'password': this.state.password
    }
    fetch('http://127.0.0.1:5000/api/login', {
        method: 'POST',
        credentials: 'include',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(payload)
    }).then(res =&gt; res.json())
        .then(res =&gt; {this.setState({ status: res['status'], username: res['user'] })})
        .catch(err =&gt; console.log(err))
}
</code></pre> <p>Enable CORS on the server using <code>pip install flask-cors</code>.</p> <p>Then add this to <code>app.py</code>, where you initialize your app. <code>supports_credentials=True</code> is required: with credentialed requests the browser rejects a wildcard <code>Access-Control-Allow-Origin</code> and needs <code>Access-Control-Allow-Credentials: true</code>, both of which flask-cors handles for you with this flag:</p> <pre><code>from flask_cors import CORS

app = Flask(__name__)
CORS(app, supports_credentials=True)
</code></pre>
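<p>To see the whole picture end to end, here is a minimal sketch of the server side (the routes and values are made up for illustration, not the asker's actual app). It shows the two things Flask needs besides CORS: a secret key, because sessions are signed cookies, and a client that round-trips that cookie, which is what <code>credentials: 'include'</code> gives the browser's <code>fetch</code>. Flask's test client keeps cookies between requests, so it stands in for the browser here. The flask-cors import is wrapped in a try/except only so the sketch still runs without it installed:</p>

```python
# Hypothetical minimal app demonstrating cross-origin session requirements.
from flask import Flask, session, jsonify

app = Flask(__name__)
app.secret_key = 'replace-with-a-real-secret'  # without this, session[...] raises

try:
    # Sets Access-Control-Allow-Credentials so the browser accepts the cookie.
    from flask_cors import CORS
    CORS(app, supports_credentials=True)
except ImportError:
    pass  # optional here, so the sketch runs without flask-cors installed

@app.route('/api/login', methods=['POST'])
def login():
    session['username'] = 'user_a'  # placeholder; a real app validates credentials
    return jsonify({'status': 2000, 'user': session['username']})

@app.route('/api')
def index():
    if 'username' in session:
        return jsonify({'user': session['username'], 'status': 2000})
    return jsonify({'user': None, 'status': 3000})

# The test client preserves cookies across requests, like fetch with
# credentials: 'include' does in the browser.
client = app.test_client()
client.post('/api/login')
resp = client.get('/api')
print(resp.get_json()['user'])  # prints user_a: the session survived the second request
```

<p>If the second request still came back with status 3000, the cookie was not making the round trip, which is exactly the asker's symptom.</p>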
python-3.x|session|web|flask|flask-restful
0
599
57,710,512
How to store .format() print output to a var for reuse
<p>I have a list of dictionaries which I want to display using Tkinter. So far I have only managed to print the desired result.</p> <p>Example code:</p> <pre><code>for x in list:
    for key, value in x.items():
        print("{}: {}".format(key, value))

&gt;&gt;&gt;key: value
key: value
key: value
</code></pre> <p>The way it's printed is the exact way I want to display it in the application. How do I store this output as text?</p>
<p>Looks like you need:</p> <pre><code>out = ""
for x in list:
    for key, value in x.items():
        out += "{}: {}\n".format(key, value)
print(out)
</code></pre>
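<p>An equivalent way to build the same string, avoiding repeated concatenation, is a generator expression with <code>str.join</code>. The sample dictionaries below are placeholders standing in for the asker's data:</p>

```python
# Build the "key: value" lines with str.join; sample data is made up.
records = [{'name': 'Alice', 'age': 30}, {'name': 'Bob', 'age': 25}]

out = "\n".join(
    "{}: {}".format(key, value)
    for d in records
    for key, value in d.items()
)
print(out)
# name: Alice
# age: 30
# name: Bob
# age: 25
```

<p>One small difference: unlike the <code>+=</code> loop, <code>join</code> leaves no trailing newline, which is usually what you want when handing the text to a Tkinter widget.</p>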
python|dictionary|format
2