Dataset columns (name, dtype, observed min to max):
- Unnamed: 0 (int64): 0 to 1.91M
- id (int64): 337 to 73.8M
- title (string): lengths 10 to 150
- question (string): lengths 21 to 64.2k
- answer (string): lengths 19 to 59.4k
- tags (string): lengths 5 to 112
- score (int64): -10 to 17.3k
1,902,400
56,809,050
Dynamically filter list and remove item in loop
<p>I have the following data (represented as a list in my code):</p> <pre class="lang-py prettyprint-override"><code>word_list = [{'bottom': Decimal('58.650'), 'text': 'Contact'},
             {'bottom': Decimal('77.280'), 'text': 'email@domain.com'},
             {'bottom': Decimal('101.833'), 'text': 'www.domain.com'},
             {'bottom': Decimal('116.233'), 'text': '(Acme INC)'},
             {'bottom': Decimal('74.101'), 'text': 'Oliver'},
             {'bottom': Decimal('90.662'), 'text': 'CEO'}]
</code></pre> <p>The above data comes from a PDF text extraction. I am trying to parse it and keep the layout formatting, based on the <code>bottom</code> values.</p> <p>The idea is to check the <code>bottom</code> value of the current word and then find <strong>all</strong> matching words that are <em>within</em> a specific range, with a tolerance of <code>threshold=</code>.</p> <p>This is my code:</p> <pre class="lang-py prettyprint-override"><code>threshold = float('10')
current_row = [word_list[0], ]
row_list = [current_row, ]

for word in word_list[1:]:
    if abs(current_row[-1]['bottom'] - word['bottom']) &lt;= threshold:
        # distance is small, use same row
        current_row.append(word)
    else:
        # distance is big, create new row
        current_row = [word, ]
        row_list.append(current_row)
</code></pre> <p>So this will return a list of the words <em>within</em> the approved threshold.</p> <p>I am a bit stuck here, since it may happen when iterating the list that words have <code>bottom</code> values very close to each other, and thus the same close words get selected in multiple iterations.</p> <p>For example, if a word has a bottom value that is close to a word that has already been added to the <code>row_list</code>, it will simply be added to the list again.</p> <p>I was wondering if it is maybe possible to delete the words that have already been iterated/added? Something like:</p> <pre class="lang-py prettyprint-override"><code>    if abs(current_row[-1]['bottom'] - word['bottom']) &lt;= threshold:
        [...]
    else:
        [...]

    del word from word_list
</code></pre> <p>However, I am not sure how to implement this, as I cannot modify the <code>word_list</code> within the loop.</p>
<p>You can use a <code>while</code> loop instead of a <code>for</code> loop and remove each word from the list once it has been used:</p> <pre><code>while len(word_list) &gt; 1:
    # once an item is popped, the next item moves to the front of the remaining list
    word = word_list.pop(1)
    if abs(current_row[-1]['bottom'] - word['bottom']) &lt;= threshold:
        [...]
    else:
        [...]
</code></pre>
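<p>For completeness, a runnable sketch of the same idea (using the <code>word_list</code> from the question): sorting by <code>bottom</code> first makes the grouping independent of the original word order, and popping each consumed word guarantees nothing is matched twice:</p> <pre><code>threshold = 10

# sort once so words on the same visual line end up adjacent
words = sorted(word_list, key=lambda w: w['bottom'])

row_list = []
while words:
    word = words.pop(0)  # consume the word so it cannot match again
    if row_list and abs(row_list[-1][-1]['bottom'] - word['bottom']) &lt;= threshold:
        row_list[-1].append(word)  # close enough: same row
    else:
        row_list.append([word])    # too far: start a new row
</code></pre>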
python|python-3.x
1
1,902,401
56,602,636
Creating tensor of dynamic shape from python lists to feed tensorflow RNN
<p>I'm creating an end-to-end speech recognition architecture, in which my data is a list of segmented spectrograms. My data has shape <code>(batch_size, timesteps, 8, 65, 1)</code> in which <code>batch_size</code> is fixed but <code>timesteps</code> is varying. I can't figure out, how to put this data into a tensor with the appropriate shape to feed my model. Here is a piece of code that shows my problem:</p> <pre><code>import numpy as np import tensorflow as tf import tensorflow.keras as keras from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Dropout, Flatten, TimeDistributed from tensorflow.keras.layers import SimpleRNN, LSTM from tensorflow.keras import Input, layers from tensorflow.keras import backend as K segment_width = 8 segment_height = 65 segment_channels = 1 batch_size = 4 segment_lengths = [28, 33, 67, 43] label_lengths = [16, 18, 42, 32] TARGET_LABELS = np.arange(35) # Generating data X = [np.random.uniform(0,1, size=(segment_lengths[k], segment_width, segment_height, segment_channels)) for k in range(batch_size)] y = [np.random.choice(TARGET_LABELS, size=label_lengths[k]) for k in range(batch_size)] # Model definition input_segments_data = tf.keras.Input(name='input_segments_data', shape=(None, segment_width, segment_height, segment_channels), dtype='float32') input_segment_lengths = tf.keras.Input(name='input_segment_lengths', shape=[1], dtype='int64') input_label_lengths = tf.keras.Input(name='input_label_lengths', shape=[1], dtype='int64') # More complex architecture comes here outputs = Flatten()(input_segments_data) model = tf.keras.Model(inputs=[input_segments_data, input_segment_lengths, input_label_lengths], outputs = outputs) def dummy_loss(y_true, y_pred): return y_pred model.compile(optimizer="Adam", loss=dummy_loss) model.summary() </code></pre> <p>output:</p> <pre><code>Model: "model" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_segments_data (InputLayer [(None, None, 8, 65, 0 __________________________________________________________________________________________________ input_segment_lengths (InputLay [(None, 1)] 0 __________________________________________________________________________________________________ input_label_lengths (InputLayer [(None, 1)] 0 __________________________________________________________________________________________________ flatten (Flatten) (None, None) 0 input_segments_data[0][0] ================================================================================================== Total params: 0 Trainable params: 0 Non-trainable params: 0 __________________________________________________________________________________________________ </code></pre> <p>Now when I try to predict from my random data:</p> <pre><code>model.predict([X, segment_lengths, segment_lengths]) </code></pre> <p>I get this error:</p> <pre><code>ValueError: Error when checking input: expected input_segments_data to have 5 dimensions, but got array with shape (4, 1) </code></pre> <p>How can I convert <code>X</code> (which is a list of arrays) to a tensor of shape <code>(None, None, 8, 65, 1)</code> and feed it to my model? I don't want to use zero padding!</p>
<p>A Keras model takes NumPy arrays (tensors) as input, and you cannot build a single tensor with a variable number of timesteps. What you can do instead is pad all the data to the same shape, using e.g. <a href="https://keras.io/preprocessing/sequence/#pad_sequences" rel="nofollow noreferrer">pad_sequences</a>, and then add a <a href="https://keras.io/layers/core/#masking" rel="nofollow noreferrer">Masking layer</a> to your model to ignore the padded values.</p>
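<p>A minimal sketch of that combination, shown on simpler <code>(timesteps, features)</code> data with made-up shapes (the real case would pad the 4-D segments the same way):</p> <pre><code>import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Masking, LSTM
from tensorflow.keras.preprocessing.sequence import pad_sequences

# hypothetical ragged batch: 4 sequences with different timestep counts
X = [np.random.uniform(0.1, 1, size=(t, 16)) for t in (28, 33, 67, 43)]

X_pad = pad_sequences(X, padding='post', dtype='float32')  # shape (4, 67, 16)

model = Sequential([
    Masking(mask_value=0.0, input_shape=(None, 16)),  # skip the all-zero padded steps
    LSTM(32),
])
print(model(X_pad).shape)  # (4, 32)
</code></pre>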
python|numpy|tensorflow|keras
2
1,902,402
18,207,276
Internal Server error. Flask
<p>I was running this script only a while back and suddenly it seems to give me an error.</p> <pre><code>@app.route('/') def hello(): return ''' &lt;form method="POST" action="/people"&gt; &lt;font size = "4"&gt;I am looking for?&lt;/font&gt;&lt;br&gt; &lt;font size = 2&gt;Please enter atleast two words. &lt;/font&gt;&lt;br&gt;&lt;br&gt; &lt;input name="search" type="text" width=1000px&gt; &lt;br&gt; &lt;input type="submit" value="People Search"/&gt; &lt;br&gt; &lt;/form&gt; &lt;form method="POST" action="/science"&gt; &lt;input name="search" type="text" width=1000px&gt; &lt;input type="submit" value="Science Search"/&gt; &lt;/form&gt;''' @app.route('/people', methods=['POST']) def PeopleSearch(): name = request.form.get('search') print (name) </code></pre> <p>it gives an internal server error on clicking peoplesearch. This was working only a while back.</p>
<p>You do not return a response from the <code>PeopleSearch</code> endpoint. The following code should work:</p> <pre><code>@app.route('/people', methods=['POST'])
def people_search():
    name = request.form.get('search')
    return name
</code></pre>
python|flask
2
1,902,403
17,784,322
How to click links one by one with Selenium webdriver and Python
<p>The site has a top menu with 6 links. I can get a list of these links like this:</p> <pre><code>links = browser.find_elements_by_css_selector(MENU_LINKS_CSS_SELECTOR)
</code></pre> <p>After this I need to click these links one by one. If I do it like this:</p> <pre><code>for link in links:
    link.click()
</code></pre> <p>I get the following error: <code>selenium.common.exceptions.StaleElementReferenceException: Message: u'Element not found in the cache - perhaps the page has changed since it was looked up'</code>. As I understand it, this error is raised because the connection between the <code>WebElement</code> instances and the DOM of the web page is broken after the page reloads (i.e. after clicking a link).</p> <p>I should note that the top menu is the same on all pages.</p> <p>So, what am I doing wrong? How can I fix this? TIA!</p>
<p>I don't know Selenium well, but you should re-select the links after every click, since each page load invalidates the old references:</p> <pre><code>for i in range(0, 6):
    links = browser.find_elements_by_css_selector(MENU_LINKS_CSS_SELECTOR)
    links[i].click()
</code></pre>
python|selenium|selenium-webdriver
5
1,902,404
66,277,912
The validation accuracy gets lower when the number of workers increases in Federated Learning with non-IID dataset
<p>I use a human activity recognition (HAR) dataset with 6 classes in a federated learning (FL) setup. I create the non-IID dataset by assigning (1) one class to each of 6 workers, (2) two classes to each of 3 workers, and (3) three classes to each of 2 workers.</p> <p>When I run the FL process, the validation accuracy for scenario (3) &gt; (2) &gt; (1). I expected all scenarios to reach almost the same validation accuracy. For each scenario, I use the same hyperparameter settings, including batch size, shuffle buffer, and model configuration.</p> <p>Is this common in FL with non-IID datasets, or is there a problem with my result?</p>
<p>The scenario where each worker has only one label (and all examples of that label) can be considered the &quot;pathologically bad&quot; non-IID case for Federated Averaging.</p> <p>In this scenario, it is possible that each worker learns to predict only the label it has. The model does not need to discriminate on any features: if a worker only has class 1, it can always predict class 1 and obtain 100% local accuracy. Each round, when all of the model updates are averaged, the global model is back to a model that predicts each class with 1/6 probability.</p> <p>The closer each worker's distribution of examples is to the global distribution (or to each other's, i.e. the more IID the client datasets are), the closer its local training comes to producing an update in the same direction as the averaged update, leading to better training results.</p>
tensorflow-federated|federated-learning
1
1,902,405
69,253,183
How do I match samples with their predictions when doing inference with PyTorch's DistributedSampler?
<p>I have trained a torch model for NLP tasks and would like to perform some inference using a multi GPU machine (in this case with two GPUs). Inside the processing code, I use this</p> <pre><code>dataset = TensorDataset(encoded_dict['input_ids'], encoded_dict['attention_mask']) sampler = DistributedSampler( dataset, num_replicas=args.nodes * args.gpus, rank=args.node_rank * args.gpus + gpu_number, shuffle=False ) dataloader = DataLoader(dataset, batch_size=batch_size, sampler=sampler) </code></pre> <p>For those familiar with NLP, <code>encoded_dict</code> is the output from the <code>tokenizer.batch_encode_plus</code> function where the tokenizer is an instance of <code>transformers.BertTokenizer</code>.</p> <p>The issue I’m having is that when I call the code through the <code>torch.multiprocessing.spawn</code> function, each GPU is doing predictions (i.e. inference) on a subset of the full dataset, and saving the predictions separately; for example, if I have a dataset with 1000 samples to predict, each GPU is predicting 500 of them. As a result, I have no way of knowing which samples out of the 1000 were predicted by which GPU, as their order is not preserved, therefore the model predictions are meaningless as I cannot trace each of them back to their input sample.</p> <p>I have tried to save the dataloader instance (as a pickle) together with the predictions and then extracting the input_ids by using <code>dataloader.dataset.tensors</code>, however this requires a tokeniser decoding step which I rather avoid, as the tokenizer will have slightly changed the text (for example double whitespaces would be removed, words with dashes will have been split and so on). What is the cleanest way to save the input text samples together with their predictions when doing inference in distributed mode, or alternatively to keep track of which prediction refers to which sample?</p>
<p>As I understand it, your dataset returns, for an index <code>idx</code>, <code>[data, label]</code> during training and <code>[data]</code> during inference. The issue with this is that <code>idx</code> is not preserved by the <code>dataloader</code> object, so there is no way to recover the <code>idx</code> values for a minibatch after the fact.</p> <p>One way to handle this issue is to define a very simple custom dataset object that returns <code>[data, id]</code> instead of only <code>data</code> during inference. Probably the easiest way to do this is to make the dataset return a dictionary object with keys <code>id</code> and <code>data</code>. The dictionary return type is convenient because Pytorch <em>collates</em> (converts data structures to batches) this type automatically; otherwise you'd have to define a custom <code>collate_fn</code> and pass it to the dataloader object, which is itself not very hard but is an extra step.</p> <p>In any case, here's how I would define a new dataset object, which should be almost a one-to-one substitute for your current dataset (I believe):</p> <pre><code>import torch

class TensorDictDataset(torch.utils.data.Dataset):
    def __init__(self, ids, attention_mask):
        self.ids = ids
        self.mask = attention_mask

    def __len__(self):
        return len(self.ids)

    def __getitem__(self, idx):
        # return a dict so the default collate_fn batches it automatically
        return {
            &quot;mask&quot;: self.mask[idx],
            &quot;id&quot;: self.ids[idx],
        }
</code></pre> <p>The only change you'll then have to make is that rather than returning <code>mask</code>, your dataset will now return the dict <code>{&quot;mask&quot;: mask, &quot;id&quot;: id}</code>, so you'll have to parse that appropriately.</p>
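<p>A quick usage sketch with dummy tensors (assuming the class above): the default collation turns the per-sample dicts into one dict of batched tensors, so every batch carries the ids needed to map predictions back to their input samples:</p> <pre><code>import torch
from torch.utils.data import DataLoader

ids = torch.arange(10)    # stand-ins for the real input_ids
mask = torch.rand(10, 4)  # stand-ins for the real attention masks

loader = DataLoader(TensorDictDataset(ids, mask), batch_size=4)
for batch in loader:
    # batch['id'] identifies exactly which samples produced these outputs
    print(batch['id'], batch['mask'].shape)
</code></pre>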
machine-learning|deep-learning|nlp|pytorch|multi-gpu
1
1,902,406
68,189,767
Python how to Enumerate k-mers Lexicographically
<p>I am having trouble with this exercise: from a string of k letters and a number n, I want to get all length-n arrangements of those letters, ordered lexicographically. This is my code:</p> <pre><code>string = 'A C G T'
n = int(2)
lista = []

for element in string:
    if element != ' ':
        lista.append(element)

perm = combinations_with_replacement(lista, n)
for i in list(perm):
    print(i)
</code></pre> <p>This is my output: ('A', 'A') ('A', 'C') ('A', 'G') ('A', 'T') ('C', 'C') ('C', 'G') ('C', 'T') ('G', 'G') ('G', 'T') ('T', 'T'). It's not bad, but I have GT and not TG, AG but not GA, and I don't know how to include those.</p> <p>Thanks in advance, really appreciate it :)</p>
<p>Use <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow noreferrer"><code>itertools.product</code></a> instead:</p> <pre><code>from itertools import product pool = &quot;A C G T&quot;.split() n = 2 [*map(&quot;&quot;.join, product(pool, repeat=n))] ['AA', 'AC', 'AG', 'AT', 'CA', 'CC', 'CG', 'CT', 'GA', 'GC', 'GG', 'GT', 'TA', 'TC', 'TG', 'TT'] </code></pre>
python|permutation|bioinformatics
1
1,902,407
68,405,285
'task() takes 0 positional arguments but 1 was given' for python asyncio function
<p>Within my python code, I am trying to design a piece of client code that connects to a WebSockets Server every second and then prints the timestamp and the obtained value from the server in a .csv file. This is given below:</p> <pre><code>import asyncio import websockets import logging import datetime import time starttime = time.time() # start value for timed data acquisition logger = logging.getLogger(&quot;websockets&quot;) logger.setLevel(logging.INFO) # Switch to DEBUG for full error information logger.addHandler(logging.StreamHandler()) class Timer: # class for asynchronous (non-blocking) counter def __init__(self, interval, first_immediately, callback): self._interval = interval self._first_immediately = first_immediately self._callback = callback self._is_first_call = True self._ok = True self._task = asyncio.ensure_future(self._job()) print(&quot;init timer done&quot;) async def _job(self): try: while self._ok: if not self._is_first_call or not self._first_immediately: await asyncio.sleep(self._interval) await self._callback(self) self._is_first_call = False except Exception as ex: print(ex) def cancel(self): self._ok = False self._task.cancel() async def test(): async with websockets.connect( &quot;ws://198.162.1.177:80/&quot;, ping_interval=None ) as websocket: await websocket.send( str(1.001) ) # send a message to the websocket server response = ( await websocket.recv() ) # wait to get a response from the server print(response) dataline_pv1 = ( datetime.datetime.today().isoformat() + &quot;,&quot; + str(response) + &quot;,&quot; + str(0) + &quot;\n&quot; ) # format and assemble data line file_name_pv1 = ( &quot;{:%Y%m%d}&quot;.format(datetime.datetime.today()) + &quot;_flow.csv&quot; ) # generate file name with open( file_name_pv1, &quot;a&quot; ) as etherm_file1: # append dataline to file etherm_file1.write(dataline_pv1) # asyncio.get_event_loop().run_forever(test()) # run until test() is finished while True: timer = Timer(interval=1, first_immediately=True, callback=test) loop = asyncio.get_event_loop() try: asyncio.ensure_future(test()) loop.run_forever() except KeyboardInterrupt: timer.cancel() pass finally: print(&quot;Closing Loop&quot;) loop.close() </code></pre> <p>When this runs, I obtain the following message of my terminal (however the code does not crash):</p> <pre><code>test() takes 0 positional arguments but 1 was given </code></pre> <p>I have seen from this question (<a href="https://stackoverflow.com/questions/60461651/typeerror-takes-0-positional-arguments-but-1-was-given">TypeError: takes 0 positional arguments but 1 was given</a>) that this error occurs when a Class object is not defined properly, but my error seems to be occurring outside of a class framework. In addition, the desired .csv file is produced, however only one line is printed to the file, and does not repeat every second as desired.</p> <p>What am I missing here? (also I am a complete novice with asyncio programming)</p> <p>UPDATE: After changing the definition of <code>test()</code> to <code>async def test(timer=None)</code>, my code now runs as expected and outputs the values to a .csv file every second (roughly), but still throws up an error. 
Specifically:</p> <pre><code>Task exception was never retrieved future: &lt;Task finished coro=&lt;test() done, defined at flowmeterclient_v2.py:36&gt; exception=ConnectionRefusedError(111, &quot;Connect call failed ('198.162.1.177', 80)&quot;)&gt; Traceback (most recent call last): File &quot;flowmeterclient_v2.py&quot;, line 37, in test async with websockets.connect(&quot;ws://198.162.1.177:80/&quot;, ping_interval=None) as websocket: File &quot;/usr/lib64/python3.6/site-packages/websockets/legacy/client.py&quot;, line 604, in __aenter__ return await self File &quot;/usr/lib64/python3.6/site-packages/websockets/legacy/client.py&quot;, line 622, in __await_impl__ transport, protocol = await self._create_connection() File &quot;/usr/lib64/python3.6/asyncio/base_events.py&quot;, line 798, in create_connection raise exceptions[0] File &quot;/usr/lib64/python3.6/asyncio/base_events.py&quot;, line 785, in create_connection yield from self.sock_connect(sock, address) File &quot;/usr/lib64/python3.6/asyncio/selector_events.py&quot;, line 439, in sock_connect return (yield from fut) File &quot;/usr/lib64/python3.6/asyncio/selector_events.py&quot;, line 469, in _sock_connect_cb raise OSError(err, 'Connect call failed %s' % (address,)) ConnectionRefusedError: [Errno 111] Connect call failed ('198.162.1.177', 80) </code></pre>
<p>I think you don't really need the Timer here at all. Simply have an asyncio task that loops forever and has an <code>asyncio.sleep()</code> internally.</p> <p>This also doesn't reconnect to the websocket server for each request, like your previous code did.</p> <pre><code>import asyncio import websockets import logging import datetime logger = logging.getLogger(&quot;websockets&quot;) logger.setLevel(logging.INFO) # Switch to DEBUG for full error information logger.addHandler(logging.StreamHandler()) async def test(): async with websockets.connect( &quot;ws://198.162.1.177:80/&quot;, ping_interval=None, ) as websocket: while True: await websocket.send(str(1.001)) response = await websocket.recv() print(response) now = datetime.datetime.now() dataline_pv1 = f&quot;{now.isoformat()},{response},0\n&quot; file_name_pv1 = f&quot;{now:%Y%m%d}_flow.csv&quot; with open(file_name_pv1, &quot;a&quot;) as etherm_file1: etherm_file1.write(dataline_pv1) await asyncio.sleep(1) asyncio.run(test()) </code></pre> <p>Following up on comments, if you actually do need to reconnect for each request, you can refactor this like so:</p> <pre><code>import asyncio import websockets import logging import datetime logger = logging.getLogger(&quot;websockets&quot;) logger.setLevel( logging.INFO ) # Switch to DEBUG for full error information logger.addHandler(logging.StreamHandler()) async def get_data(): async with websockets.connect( &quot;ws://198.162.1.177:80/&quot;, ping_interval=None, ) as websocket: await websocket.send(str(1.001)) response = await websocket.recv() return response def save_data(response): now = datetime.datetime.now() dataline_pv1 = f&quot;{now.isoformat()},{response},0\n&quot; file_name_pv1 = f&quot;{now:%Y%m%d}_flow.csv&quot; with open(file_name_pv1, &quot;a&quot;) as etherm_file1: etherm_file1.write(dataline_pv1) async def test(): while True: response = await get_data() save_data(response) await asyncio.sleep(1) asyncio.run(test()) </code></pre>
python|asynchronous|websocket|python-asyncio
0
1,902,408
68,203,609
How to mock a custom exception in python3?
<p>I'm doing some unit-testing work in Python using the unittest module. When I try to unit-test a custom exception, it doesn't seem to work. Below is my code:</p> <pre><code># src.py
from exceptions import ClusterException, IndexingException
from utils import create_index, index_to_es

def method(bucket, key, file):
    try:
        s3_obj = get_object(bucket, key)
        ....
        ....
        create_index(index_name, index_mapping)
        index_to_es(df)
    except ClusterException as e:
        raise ClusterException(e)
    except Exception:
        raise IndexingException(e)
</code></pre> <p>Here I need to test the <code>ClusterException</code> block, so I'm mocking the create_index() method to raise a ClusterException error. My testing code is:</p> <pre><code># test_src.py
with mock.patch('src.ClusterException') as mocked_cluster_exception:
    mocked_cluster_exception.side_effect = ClusterException(&quot;Bad Cluster Error&quot;)
    with mock.patch('src.create_index') as mocked_create_index:
        mocked_create_index.side_effect = ClusterException(&quot;Index creation error&quot;)
        self.assertRaises(ClusterException, method, 'bucket', 'key', 'file')
</code></pre> <p>And my exceptions file is:</p> <pre><code># exceptions.py
class ClusterException(Exception):
    pass

class IndexingException(Exception):
    pass
</code></pre> <p>But when I run this, the test fails with the message below. What am I missing here?</p> <pre><code>TypeError: catching classes that do not inherit from BaseException is not allowed
</code></pre>
<p>You don't need to patch the <code>src.ClusterException</code>. You should patch the <code>create_index()</code> function to raise a <code>ClusterException</code>.</p> <p>E.g.</p> <p><code>src.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from exceptions import ClusterException, IndexingException from utils import create_index def method(bucket, key, file): try: index_name = 'index_name' index_mapping = 'index_mapping' create_index(index_name, index_mapping) except ClusterException as e: print(e) raise ClusterException(e) except Exception as e: raise IndexingException(e) </code></pre> <p><code>utils.py</code>:</p> <pre class="lang-py prettyprint-override"><code>def create_index(name, map): pass </code></pre> <p><code>exceptions.py</code>:</p> <pre class="lang-py prettyprint-override"><code>class ClusterException(Exception): pass class IndexingException(Exception): pass </code></pre> <p><code>test_src.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from unittest import mock, main, TestCase from exceptions import ClusterException from src import method class TestSrc(TestCase): def test_method_to_raise_cluster_exception(self): with mock.patch('src.create_index') as mocked_create_index: mocked_create_index.side_effect = ClusterException(&quot;Index creation error&quot;) self.assertRaises(ClusterException, method, 'bucket', 'key', 'file') mocked_create_index.assert_called_once_with('index_name', 'index_mapping') if __name__ == '__main__': main() </code></pre> <p>unit test result:</p> <pre><code>Index creation error . ---------------------------------------------------------------------- Ran 1 test in 0.001s OK Name Stmts Miss Cover Missing ------------------------------------------------------------------------ src/stackoverflow/68203609/exceptions.py 4 0 100% src/stackoverflow/68203609/src.py 12 2 83% 13-14 src/stackoverflow/68203609/test_src.py 11 0 100% src/stackoverflow/68203609/utils.py 2 1 50% 2 ------------------------------------------------------------------------ TOTAL 29 3 90% </code></pre>
unit-testing|python-unittest|python-unittest.mock
2
1,902,409
59,470,364
How to use Pandas vector methods based on rolling custom function that involves entire row and prior data
<p>While it's easy to use the pandas rolling method to apply standard formulas, I find it hard when the calculation involves multiple columns and a limited number of past rows. The following code elaborates:</p> <pre><code>import numpy as np
import pandas as pd

# create dummy pandas
df = pd.DataFrame({'col1': np.arange(0, 25), 'col2': np.arange(100, 125), 'col3': np.nan})

def func1(shortdf):
    # dummy formula:
    # use last row of col1 multiplied by sum of col2
    return (shortdf.col1.tail(1).values[0] + shortdf.col2.sum()) * 3.14

for idx, i in df.iterrows():
    if idx &gt; 3:
        # only interested in the last 3 rows from the current position in the dataframe
        df.loc[idx, 'col3'] = func1(df.iloc[idx-3:idx])
</code></pre> <p>I currently use this iterrows method, which needless to say is extremely slow. Does anyone have a better suggestion?</p>
<h2>Option 1</h2> <p>So <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.shift.html" rel="nofollow noreferrer">shift</a> is the solution here. You do have to use rolling for the summation, and then shift that series after the addition and multiplication. </p> <pre><code>df = pd.DataFrame({'col1':np.arange(0,25),'col2':np.arange(100,125),'col3':np.nan}) ans = ((df['col1'] + df['col2'].rolling(3).sum()) * 3.14).shift(1) </code></pre> <p>You can check to see that <code>ans</code> is the same as <code>df['col3']</code> by using <code>ans.eq(df['col3'])</code>. Once you see that all but the first few are the same, just change <code>ans</code> to <code>df['col3']</code> and you should be all set.</p> <h2>Option 2</h2> <p>Without additional information about the customized weight function, it is hard to help. However, this option may be a solution as it separates the rolling calculation at the cost of using more memory. </p> <pre><code># df['col3'] = ((df['col1'] + df['col2'].rolling(3).sum()) * 3.14).shift(1) s = df['col2'] stride = pd.DataFrame([s.shift(x).values[::-1][:3] for x in range(len(s))[::-1]]) res = pd.concat([df, stride], axis=1) # here you can perform your custom weight function res['final'] = ((res[0] + res[1] + res[2] + res['col1']) * 3.14).shift(1) </code></pre> <p><code>stride</code> is adapted from <a href="https://stackoverflow.com/questions/36701099/creating-a-pandas-rolling-window-series-of-arrays">this</a> question and the calculation is concatenated row-wise to the original dataframe. In this way each column has the value needed to compute whatever it is you may need.</p> <p><code>res['final']</code> is identical to option 1's <code>ans</code></p>
pandas
1
1,902,410
59,237,648
Correct orientation of an image based on 4 Qpoints
<p>I'm trying to correct orientation (rotating) of an image based on four Qpoints which was taken from the user. I found a similar code to which I work on and it was posted as a solution in this <a href="https://stackoverflow.com/questions/59176013/cropping-an-image-based-on-its-angles-with-pyqt">link</a>.</p> <p><strong>The code:</strong></p> <pre><code>import os import sys from PyQt5 import QtCore, QtGui, QtWidgets current_dir = os.path.dirname(os.path.realpath(__file__)) point_filename = os.path.join(current_dir, "Lastout.png") class GraphicsView(QtWidgets.QGraphicsView): def __init__(self, parent=None): super().__init__(QtWidgets.QGraphicsScene(), parent) self.pixmap_item = self.scene().addPixmap(QtGui.QPixmap()) self.pixmap_item.setShapeMode(QtWidgets.QGraphicsPixmapItem.BoundingRectShape) self.setAlignment(QtCore.Qt.AlignCenter) self.setHorizontalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) self.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOff) def set_image(self, pixmap): self.pixmap_item.setPixmap(pixmap) #The pixmap is scaled to a rectangle as small as possible outside size, preserving the aspect ratio. self.fitInView(self.pixmap_item, QtCore.Qt.KeepAspectRatio) class CropView(GraphicsView): Changed_view = QtCore.pyqtSignal(QtGui.QPixmap) def __init__(self, parent=None): super().__init__(parent) self.point_items = [] def mousePressEvent(self, event): if not self.pixmap_item.pixmap().isNull(): sp = self.mapToScene(event.pos()) #print("Event position = " +str(sp)) lp = self.pixmap_item.mapFromScene(sp) #print("Event position FromScene = " +str(lp)) if self.pixmap_item.contains(lp): size = QtCore.QSize(30, 30) height = ( self.mapToScene(QtCore.QRect(QtCore.QPoint(), size)) .boundingRect() .size() .height() ) pixmap = QtGui.QPixmap(point_filename) point_item = QtWidgets.QGraphicsPixmapItem(pixmap, self.pixmap_item) point_item.setOffset( -QtCore.QRect(QtCore.QPoint(), pixmap.size()).center() ) point_item.setPos(lp) scale = height / point_item.boundingRect().size().height() # print ("Scale: "+str(scale)) point_item.setScale(scale) self.point_items.append(point_item) if len(self.point_items) == 4: points = [] for it in self.point_items: points.append(it.pos().toPoint()) print ("points: " + str (it.pos().toPoint())) print (" x " + str(it.x()) +" y "+ str( it.y()) ) self.crop(points) elif len(self.point_items) == 5: for it in self.point_items[:-1]: self.scene().removeItem(it) self.point_items = [self.point_items[-1]] else: print("outside") super().mousePressEvent(event) def crop(self, points): # https://stackoverflow.com/a/55714969/6622587 polygon = QtGui.QPolygonF(points) path = QtGui.QPainterPath() path.addPolygon(polygon) source = self.pixmap_item.pixmap() r = path.boundingRect().toRect().intersected(source.rect()) print (str(r)) #t = QtGui.QTransform() #added pixmap = QtGui.QPixmap(source.size()) #t.translate (pixmap._center.x() -pixmap.width() / 2, pixmap._center.y() -pixmap.height() / 2) #t.translate(pixmap.width() / 2, pixmap.height() / 2) # t.rotate(45.0) #t.translate(-pixmap.width() / 2, -pixmap.height() / 2) pixmap.fill(QtCore.Qt.transparent) painter = QtGui.QPainter(pixmap) painter.setClipPath(path) painter.drawPixmap(QtCore.QPoint(), source, source.rect()) painter.end() result = pixmap.copy(r) self.Changed_view.emit(result) class MainWindow(QtWidgets.QMainWindow): def __init__(self, parent=None): super().__init__(parent) self.setFixedSize(1200, 700) self.left_view = CropView() self.rigth_view = GraphicsView() 
self.left_view.Changed_view.connect(self.rigth_view.set_image) button = QtWidgets.QPushButton(self.tr("Select Image")) button.setStyleSheet("background-color: rgb(0, 100, 100);") button.setFixedSize(230, 60) font = QtGui.QFont() font.setFamily("Microsoft YaHei UI") font.setPointSize(11) font.setBold(True) font.setWeight(75) button.setFont(font) button.setCursor(QtGui.QCursor(QtCore.Qt.PointingHandCursor)) button.clicked.connect(self.load_image) central_widget = QtWidgets.QWidget() self.setCentralWidget(central_widget) lay = QtWidgets.QGridLayout(central_widget) lay.addWidget(self.left_view, 0, 0) lay.addWidget(self.rigth_view, 0, 1) lay.addWidget(button, 1, 0, 1, 2, alignment=QtCore.Qt.AlignHCenter) @QtCore.pyqtSlot() def load_image(self): fileName, _ = QtWidgets.QFileDialog.getOpenFileName( None, "Select Image", "", "Image Files (*.png *.jpg *jpeg *.bmp)" ) if fileName: pixmap = QtGui.QPixmap(fileName) self.left_view.set_image(pixmap) if __name__ == "__main__": app = QtWidgets.QApplication(sys.argv) w = MainWindow() w.show() sys.exit(app.exec_()) </code></pre> <p><strong>Current Output:</strong></p> <p><a href="https://i.stack.imgur.com/rjr9W.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rjr9W.png" alt="enter image description here"></a></p> <p><strong>Expected output:</strong> A corrected orientation of user input image after cropping</p> <p><a href="https://i.stack.imgur.com/O0PxT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/O0PxT.png" alt="A corrected orientation of user input image after cropping"></a></p> <p>Can anyone guide me on how to do this? </p> <p>Thank you.</p>
<p>Selecting four arbitrary points will not give you a rectangle, but a quadrilateral, which might not have all corners with 90° angles. How would you decide <em>which</em> line is to take as a reference for the rotation?<br> Also, a simple rotation will not compensate for perspective distorsion.</p> <p>Instead of simply rotating a rectangle, you should probably use a transformation.</p> <p>I took the liberty to change your logic behind the creation of the points (making it a bit simpler): in this way they're not children of the pixmap item, but of the scene; they can also be moved, showing immediately the result image.</p> <p><a href="https://i.stack.imgur.com/VtuK1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VtuK1.png" alt="crop and transform example"></a></p> <p>In the following image the perspective distorsion is better explained: I'm using a source with visible perspective, and with the transformation I'm able to make a quadrilateral into a rectangle.</p> <p><a href="https://i.stack.imgur.com/UtbUc.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UtbUc.jpg" alt="perspective transformation"></a></p> <p>In this example I'm assuming that the order of the points is always top-left, top-right, bottom-right, bottom-left.<br> If the user follows another order the result will obviously wrong, so you'll probably need to find a way to better check the positioning of the points.</p> <pre><code>class CropView(GraphicsView): Changed_view = QtCore.pyqtSignal(QtGui.QPixmap) def __init__(self, parent=None): super().__init__(parent) self.point_items = [] self.crosshair = QtGui.QPixmap(point_filename) def mousePressEvent(self, event): if not self.pixmap_item.pixmap().isNull(): if not self.itemAt(event.pos()) in self.point_items: scenePos = self.mapToScene(event.pos()) if len(self.point_items) == 4: while self.point_items: self.scene().removeItem(self.point_items.pop()) if self.pixmap_item.sceneBoundingRect().contains(scenePos): point_item = self.scene().addPixmap(self.crosshair) point_item.setPos(scenePos) point_item.setFlag(point_item.ItemIgnoresTransformations) point_item.setFlag(point_item.ItemIsMovable) point_item.setOffset(-self.crosshair.rect().center()) self.point_items.append(point_item) if len(self.point_items) == 4: self.crop() super().mousePressEvent(event) def mouseMoveEvent(self, event): super().mouseMoveEvent(event) if len(self.point_items) == 4 and self.itemAt(event.pos()) in self.point_items: # update the rectangle if the points have been moved self.crop() def crop(self): points = [] for point_item in self.point_items: points.append(self.pixmap_item.mapFromScene(point_item.pos())) # get the width and height based on the 4 points: # I'm assuming that the points are always in this order: # top-left, top-right, bottom-right, bottom-left # so we get the width from the longest two top and bottom lines # and the height from the longest left and right lines width = max(QtCore.QLineF(points[0], points[1]).length(), QtCore.QLineF(points[2], points[3]).length()) height = max(QtCore.QLineF(points[1], points[2]).length(), QtCore.QLineF(points[3], points[0]).length()) sourcePolygon = QtGui.QPolygonF(points) source = self.pixmap_item.pixmap() pixmap = QtGui.QPixmap(width, height) transform = QtGui.QTransform() rect = pixmap.rect() # this is the target used for the transformation targetPolygon = QtGui.QPolygonF([rect.topLeft(), rect.topRight(), rect.bottomRight(), rect.bottomLeft()]) # quadToQuad is a static that sets the matrix of a transform based on two # 
four-sided polygons QtGui.QTransform.quadToQuad(sourcePolygon, targetPolygon, transform) painter = QtGui.QPainter(pixmap) # smooth pixmap transform is required for better results painter.setRenderHints(painter.SmoothPixmapTransform) painter.setTransform(transform) painter.drawPixmap(QtCore.QPoint(), source) painter.end() self.Changed_view.emit(pixmap) </code></pre> <p>Note that I also added a line to the <code>set_image</code> function:</p> <pre><code> self.setSceneRect(self.scene().sceneRect()) </code></pre> <p>This ensures that the view's sceneRect is always adapted to the actual scene rect.<br> Also, you should remember to remove all point items as soon as a new image is loaded.</p>
python|python-3.x|pyqt|rotation|pyqt5
2
1,902,411
63,185,303
python binary tree traversal with search and delete functions
<p>I am creating a binary tree traversal project. Unfortunately, I have only a basic knowledge of Python. I wrote &quot;preorder&quot;, &quot;inorder&quot; and &quot;postorder&quot; correctly, but I cannot create the find and delete-node functions. Please check the code below and help me create those 2 functions. Thank you.</p> <pre><code>class Node(object):
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def find(self, value):
        if self.value == value:
            return True
        elif value &lt; self.value and self.left:
            return self.left.find(value)
        elif value &gt; self.value and self.right:
            return self.right.find(value)
        return False

class BinaryTree(object):
    def __init__(self, root):
        self.root = Node(root)

    def print_tree(self, traversal_type):
        if traversal_type == &quot;preorder&quot;:
            return self.preorder_print(tree.root, &quot;&quot;)
        elif traversal_type == &quot;inorder&quot;:
            return self.inorder_print(tree.root, &quot;&quot;)
        elif traversal_type == &quot;postorder&quot;:
            return self.postorder_print(tree.root, &quot;&quot;)
        else:
            print(&quot;Traversal type &quot;+str(traversal_type)+&quot; is not supported.&quot;)

    def preorder_print(self, start, traversal):  # Root-&gt;Left-&gt;Right
        if start:
            traversal += (str(start.value)+&quot;-&quot;)
            traversal = self.preorder_print(start.left, traversal)
            traversal = self.preorder_print(start.right, traversal)
        return traversal

    def inorder_print(self, start, traversal):  # Left-&gt;Root-&gt;Right
        if start:
            traversal = self.inorder_print(start.left, traversal)
            traversal += (str(start.value) + &quot;-&quot;)
            traversal = self.inorder_print(start.right, traversal)
        return traversal

    def postorder_print(self, start, traversal):  # Left-&gt;Right-&gt;Root
        if start:
            traversal = self.postorder_print(start.left, traversal)
            traversal = self.postorder_print(start.right, traversal)
            traversal += (str(start.value) + &quot;-&quot;)
        return traversal

# Set up tree order
tree = BinaryTree(1)
tree.root.left = Node(2)
tree.root.right = Node(3)
tree.root.left.left = Node(4)
tree.root.left.right = Node(5)
tree.root.right.left = Node(6)
tree.root.right.right = Node(7)

print(&quot;Preorder: &quot; + tree.print_tree(&quot;preorder&quot;))    # 1-2-4-5-3-6-7
print(&quot;Inorder: &quot; + tree.print_tree(&quot;inorder&quot;))      # 4-2-5-1-6-3-7
print(&quot;Postorder: &quot; + tree.print_tree(&quot;postorder&quot;))  # 4-2-5-6-3-7-1

print(tree.root.find(1))
print(tree.root.find(2))
print(tree.root.find(3))
print(tree.root.find(4))
print(tree.root.find(5))
print(tree.root.find(6))
print(tree.root.find(7))
print(tree.root.find(8))
</code></pre>
<p>The reason find is not working is that the tree you have set up is not a binary search tree. In a BST, all nodes to the left have values lower than the root and all nodes to the right have higher values. Check the tree you have constructed.</p> <p>Here is an implementation of node deletion:</p> <p><a href="https://www.geeksforgeeks.org/binary-search-tree-set-2-delete/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/binary-search-tree-set-2-delete/</a></p>
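<p>For reference, here is a sketch of the standard delete operation on a proper BST, written against the <code>Node</code> class from the question. The three cases are: leaf, one child, and two children (where the in-order successor's value replaces the deleted one):</p> <pre><code>def min_value_node(node):
    # the leftmost node of a subtree holds its smallest value
    while node.left:
        node = node.left
    return node

def delete(root, value):
    if root is None:
        return None
    if value &lt; root.value:
        root.left = delete(root.left, value)
    elif value &gt; root.value:
        root.right = delete(root.right, value)
    else:
        # zero or one child: splice the node out
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        # two children: copy the in-order successor, then delete it
        successor = min_value_node(root.right)
        root.value = successor.value
        root.right = delete(root.right, successor.value)
    return root
</code></pre>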
python|binary-tree
0
1,902,412
63,091,343
New line for output in python
<p>I am a beginner and I want to make a simple program that would take an input and print it back in new line. This is how my code looks.</p> <pre><code>a=input(&quot;enter: &quot;) print(a) </code></pre> <p>output what i get:</p> <pre><code>enter: python\ncode python\ncode </code></pre> <p>expected output:</p> <pre><code>enter: python\ncode python code </code></pre>
<p>This answer is based on another Stack Overflow thread (<a href="https://stackoverflow.com/questions/4020539/process-escape-sequences-in-a-string-in-python">Process escape sequences in a string in Python</a>): decode the string before printing, like this:</p> <pre><code>&gt;&gt;&gt; a = input('Enter: ')
Enter: python\ncode
&gt;&gt;&gt; a
'python\\ncode'
&gt;&gt;&gt; print(a)
python\ncode
&gt;&gt;&gt; print(bytes(a, &quot;utf-8&quot;).decode(&quot;unicode_escape&quot;))
python
code
&gt;&gt;&gt;
</code></pre>
python-3.x
0
1,902,413
62,320,107
Difference between aws lambda duration and my measures duration
<p>I have an AWS Lambda running a Python 3 service. I am measuring my service duration (a simple end-time minus start-time, from the first line of the Lambda invocation to the last line). I'm getting results that are dramatically different from the AWS-reported duration and billed duration: my measurements indicate an average of 730.9 ms, while AWS reports Duration: 1058.36 ms and Billed Duration: 1100 ms. Where can the difference come from?</p>
<p>Prior to function invocation, the instance must be spun up. I believe AWS Lambda charges for the setup time of the function as part of the execution time.</p> <p>Any imports or other assets that must be loaded before your function is invoked count against the total execution time, and your timer doesn't start until after that initial loading time.</p>
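<p>A quick way to observe this yourself (a sketch, not AWS's own accounting): record a timestamp at module import time, since module-level code runs during the setup phase, before your handler is invoked:</p> <pre><code>import time

_import_started = time.time()  # runs once, during the cold start; reused on warm invocations

def handler(event, context):
    handler_started = time.time()
    # setup time that an in-handler timer never sees
    print('init took %.1f ms' % ((handler_started - _import_started) * 1000))
</code></pre>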
python|python-3.x|aws-lambda|duration
1
1,902,414
58,849,994
Running tests at circleci in conda environment
<p>I use conda, and and trying to figure out how to get things running at circleci. I have a very simple project in an environment called <code>calculator</code> with two functions (one <code>addition</code>, one <code>subtraction</code> and one test for each). I am using <code>pylint8</code> to check the formatting, and <code>pytest</code>/<code>pytest-cov</code> for testing/coverage.</p> <p>My configuration file is as follows, which seems to be working until I reach the test-running stage:</p> <pre><code># Python CircleCI 2.0 configuration file version: 2 jobs: build: docker: - image: continuumio/miniconda3 working_directory: ~/repo steps: # Step 1: obtain repo from GitHub - checkout # Step 2: create virtual env and install dependencies - run: name: install dependencies command: | conda env create -f environment.yml # Step 3: run linter and tests - run: name: run tests command: | conda init bash conda activate calculator flake8 --statistics pytest -v --cov </code></pre> <p>Steps 1 and 2 work ok, but Step 3 gives a fail with the following message:</p> <pre><code>#!/bin/bash -eo pipefail conda init bash conda activate calculator flake8 --statistics pytest -v --cov no change /opt/conda/condabin/conda no change /opt/conda/bin/conda no change /opt/conda/bin/conda-env no change /opt/conda/bin/activate no change /opt/conda/bin/deactivate no change /opt/conda/etc/profile.d/conda.sh no change /opt/conda/etc/fish/conf.d/conda.fish no change /opt/conda/shell/condabin/Conda.psm1 no change /opt/conda/shell/condabin/conda-hook.ps1 no change /opt/conda/lib/python3.7/site-packages/xontrib/conda.xsh no change /opt/conda/etc/profile.d/conda.csh modified /root/.bashrc ==&gt; For changes to take effect, close and re-open your current shell. &lt;== CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'. To initialize your shell, run $ conda init &lt;SHELL_NAME&gt; Currently supported shells are: - bash - fish - tcsh - xonsh - zsh - powershell </code></pre> <p>I am in Ubuntu 18. I previously wasn't running <code>conda init bash</code>, but based on the error, I put it in there, but it is still suggesting I initialize my shell even though I already did this.</p>
<p><code>conda init bash</code> changes your <code>.bashrc</code> which then would have to be reloaded.</p> <p>You could try it in this order</p> <pre class="lang-sh prettyprint-override"><code>conda init bash source ~/.bashrc conda activate calculator </code></pre> <p>or simply try the old fashioned way of <code>source activate calculator</code> (without running <code>conda init bash</code> at all).</p>
python|anaconda|circleci
2
1,902,415
58,741,490
Convert 2-D matrix into Dataframe in python
<pre><code>a = np.array([[5, 6, 7, 8],[5, 6, 7, 8]]) df = pd.DataFrame(a, columns=['a']) </code></pre> <p>ValueError: Shape of passed values is (2, 4), indices imply (2, 1)</p> <p>I hope that the final result is:</p> <pre><code>a ----------- [5, 6, 7, 8] [5, 6, 7, 8] </code></pre> <p>Edit:</p> <pre><code> df = pd.DataFrame({"a": [a]}) a ---------------------------------- 0 [[5, 6, 7, 8], [5, 6, 7, 8]] </code></pre> <p>why?</p>
<p>According to <a href="https://stackoverflow.com/a/18646275/5405298">https://stackoverflow.com/a/18646275/5405298</a>, you have to turn the array into a list. In your case, you can use</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np a = np.array([[5, 6, 7, 8], [5, 6, 7, 8]]) df = pd.DataFrame({"a": a.tolist()}) print(df) </code></pre> <p>this returns:</p> <pre><code> a --------------- 0 [5, 6, 7, 8] 1 [5, 6, 7, 8] </code></pre>
python
2
1,902,416
48,924,491
I have a dictionary of lists, how to make a dictionary of the ith elements?
<p>Working in Python 3.</p> <p>So I have a dictionary like this: <code>{"stripes": [1,0,5,3], "legs": [4,4,2,3], "colour": ['red', 'grey', 'blue', 'green']}</code>. I know all the lists in the dictionary have the same length, but they may not contain the same type of element. Some of them may even be lists of lists.</p> <p>I want to return a dictionary like this:</p> <p><code>$&gt;&gt;&gt; Get_element(2) {"stripes": 5, "legs": 2, "colour": 'blue'}</code></p> <p>I know that dictionary comprehension is a thing, but I'm a bit confused about how to use it. I'm not sure it's the most elegant way to achieve my goal either; can I slice a dictionary?</p>
<p>If you want to create a <code>Get_element()</code> function, this is the way to go:</p> <pre><code>def Get_element(d, i):
    return {k: v[i] for k, v in d.items()}
</code></pre>
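<p>With the dictionary from the question, for example:</p> <pre><code>d = {"stripes": [1, 0, 5, 3], "legs": [4, 4, 2, 3],
     "colour": ['red', 'grey', 'blue', 'green']}

print(Get_element(d, 2))
# {'stripes': 5, 'legs': 2, 'colour': 'blue'}
</code></pre>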
python|python-3.x|dictionary
1
1,902,417
25,314,832
Is it possible to "refresh" a connection created with urllib2.urlopen?
<p>I am fetching data from a URL using <code>urllib2.urlopen</code>:</p> <pre><code>from urllib2 import urlopen ... conn = urlopen(url) data = conn.read() conn.close() </code></pre> <p>Suppose the data did not "come out" as I had expected.</p> <p>What would be the best method for me to read it again?</p> <p>I am currently repeating the whole process (open, read, close).</p> <p>Is there a better way (some sort of connection-refresh perhaps)?</p>
<p>No, <a href="https://mail.python.org/pipermail/python-list/2010-February/566702.html" rel="nofollow">repeating the process</a> is the only way to get new data. </p>
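<p>If that happens often, a small helper keeps the repetition tidy (a sketch; <code>looks_valid</code> stands in for whatever check you apply to the data):</p> <pre><code>from urllib2 import urlopen

def fetch(url):
    conn = urlopen(url)
    try:
        return conn.read()  # a response body can only be read once
    finally:
        conn.close()

data = fetch(url)
if not looks_valid(data):  # hypothetical validation of your choosing
    data = fetch(url)      # just open and read again
</code></pre>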
python|urllib2|urlopen
2
1,902,418
60,247,035
PyInstaller with Python-VLC: No Attribute "media_player_new" Error
<p>I'm using Python-VLC to create a video player and PyInstaller to generate the executable file on my Windows 10 machine. Initially, it gave me the error:</p> <pre><code>Import Error Failed to load dynlib/dll 'libvlc.dll'. Most probably this dynlib/dll was not found when the application was frozen. </code></pre> <p>To solve this issue, I added the missing dlls in the binaries of the .spec file in the following manner:</p> <pre><code>a = Analysis(['video_player.py'], pathex=['C:\\Users\\harsh\\Desktop\\demo\\Video'], binaries=[("C:\\Program Files\\VideoLAN\\VLC\\libvlc.dll","."),("C:\\Program Files\\VideoLAN\\VLC\\libvlccore.dll",".")], datas=[], hiddenimports=[], hookspath=[], runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=block_cipher, noarchive=False) </code></pre> <p>After doing this, I'm no more getting the above error. However, now I'm getting the following error:</p> <pre><code>Exception in thread Thread-3: Traceback (most recent call last): File "threading.py", line 914, in _bootstrap_inner File "threading.py", line 862, in run File "video_player.py", line 100, in vlc_player AttributeError: 'NoneType' object has no attribute 'media_player_new' </code></pre> <p>The code which leads to this error is:</p> <pre><code>i=vlc.Instance('--fullscreen') p=i.media_player_new() </code></pre> <p>I've made sure that I've installed Python-VLC. Am I missing anything else here ? Any suggestions on how to deal with this issue ? </p>
<p>One common answer to this problem is to make sure you have the main VLC program installed, as vlc.py relies on it. Also, Python and libvlc.dll must either both be 32-bit or both be 64-bit.</p>
python|dll|windows-10|pyinstaller|python-vlc
0
1,902,419
60,274,707
How to remove comma from column values of each csv and then merge?
<p>I have multiple CSVs which I want to merge on certain columns. But before that I need to ensure the column values don't contain any commas, so the commas should be replaced by a white space (<code>" "</code>).</p> <p>I have a folder that contains the CSVs. I am able to load them and merge them on the columns <code>town</code>, <code>city</code> and <code>state</code>. This is what I do:</p> <pre><code>os.chdir('/Users/cho/Downloads/census/')
dfs = [pd.read_csv(f) for f in os.listdir(os.getcwd()) if f.endswith('csv')]
df = reduce(lambda left,right: pd.merge(left,right,on=['town', 'city', 'state']), dfs)
df.to_csv('multicsv.csv', sep=',', encoding='utf-8', index=False)
</code></pre> <p>But I also want to include the additional operation of replacing commas with a space in the column values. I know I can do it separately by doing something like:</p> <pre><code># I get the list of columns for each dataframe
cols = ['col1', 'col2', ..., 'colN']

# pass them to df.replace(), specifying each char and its replacement:
df[cols] = df[cols].replace({'\$': '', ',': ''}, regex=True)
</code></pre> <p>But how do I include this step as part of the merging operation?</p>
<p>I think it is better to replace the values inside the list comprehension that creates the list of DataFrames <code>dfs</code>:</p> <pre><code>os.chdir('/Users/cho/Downloads/census/')

cols = ['col1', 'col2', ..., 'colN']

dfs = [pd.read_csv(f).replace({'\$': '', ',': ''}, regex=True)
       for f in os.listdir(os.getcwd()) if f.endswith('csv')]

df = reduce(lambda left,right: pd.merge(left,right,on=['town', 'city', 'state']), dfs)
df.to_csv('multicsv.csv', sep=',', encoding='utf-8', index=False)
</code></pre>
python|pandas|merge
2
1,902,420
60,005,632
How do I solve this broyden1 np.ndarray calling problem?
<p>First-time poster here, currently working on a project for uni, and I'm a little stuck.</p> <p>This part of the task is to use the Monte Carlo method to estimate the value of an integral (in this case, the function we are integrating is f(r)). Here's my current code:</p> <pre><code>import numpy as np  # import numpy
from scipy.optimize import broyden1  # import broyden1

def U(r, ep, sig):
    return 4*ep*((sig/r)**(12)-(sig/r)**(6))  # return U(r)

# Function to calculate f(r)
def f(r, ep, sig):
    Ur = 4*ep*((sig/r)**(12)-(sig/r)**(6))  # puts U(r) into this function
    return (1-np.exp(-Ur/(k*T)))*(r**2)  # return f(r)

data = np.array([[2.56,3.75,3.4,4.07],[1.41,1.32,1.66,3.04]])  # He, N2, Ar, Xe
k = 1.38*10**-23  # Boltzmann constant
T = 300  # temperature value T
R = np.linspace(1*10**-10, 10, 1000)  # r defined with 1000 values between 1e-10 and 10

U = U(R, data[1,0], data[0,0])  # calculate U(r)
F = f(R, data[1,0], data[0,0])  # calculate f(r)

cutoff = broyden1(F,6)
</code></pre> <p>I get the error message 'TypeError: 'numpy.ndarray' object is not callable'. I know this is a very common error message, but I can't figure out from other posts on here what my problem is.</p> <p>Any help would be much appreciated, thank you!</p>
<p>I think you are mixing Joules with eV. </p> <p>This part in your force function is blowing up</p> <p><code>np.exp(-Ur/(k*T))</code></p> <p>Your script is not reaching the last function call (i.e., <code>cutoff = broyden1(F,6)</code>)</p> <p>I think because your Boltzmann constants should be expressed in eVK-1 rather than JK-1. So <code>k = 8.617333262145*10**-5</code></p> <p><strong>EDIT</strong></p> <p>Possibly your problem is related to this <a href="https://github.com/scipy/scipy/issues/3562" rel="nofollow noreferrer">issue</a>. I did some modifications to your code and now is running. However it's taking time to find a solution. </p> <pre><code>from functools import partial import numpy as np # import numpy from scipy.optimize import broyden1 def U(r, ep, sig): return 4*ep*((sig/r)**(12)-(sig/r)**(6)) # return U(r) # Function to calculate f(r) def F(r): ep = 1.41 sig = 2.56 Ur = 4*ep*((sig/r)**(12)-(sig/r)**(6)) # puts U(r) into this function return (1-np.exp(-Ur/(k*T)))*(r**2) # return f(r) data = np.array([[2.56,3.75,3.4,4.07],[1.41,1.32,1.66,3.04]]) #He, N2, Ar, Xe #k = 1.38*10**-23 # Boltzmann constant k = 8.617333262145*10**-5 T = 300 # temperature value T R = np.linspace(1*10**-10, 10, 1000) # r defined with 1000 values between 2.5 and 10 U = U(R, data[1,0], data[0,0])# calculate U((r) #F = f(R, data[1,0], data[0,0])# calculate f(r) cutoff = broyden1(F,R,f_tol=1e-2) </code></pre>
python|numpy
0
1,902,421
3,199,343
Regex to match Domain.CCTLD
<p>Does anyone know a regular expression to match Domain.CCTLD? I don't want subdomains, only the "atomic domain". For example, <code>docs.google.com</code> doesn't get matched, but <code>google.com</code> does. However, this gets complicated with stuff like <code>.co.uk</code>, CCTLDs. Does anyone know a solution? Thanks in advance.</p> <p><strong>EDIT:</strong> I've realized I also have to deal with multiple subdomains, like <code>john.doe.google.co.uk</code>. Need a solution now more than ever :P.</p>
<p>It sounds like you are looking for the information available through the <a href="http://publicsuffix.org/" rel="noreferrer">Public Suffix List</a> project. </p> <blockquote> <p>A "public suffix" is one under which Internet users can directly register names. Some examples of public suffixes are ".com", ".co.uk" and "pvt.k12.wy.us". The Public Suffix List is a list of all known public suffixes. </p> </blockquote> <p>There is no single regular expression that will reasonably match the list of public suffixes. You will need to implement code to use the public suffix list, or find an existing library that already does so.</p>
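<p>For example, the third-party <code>tldextract</code> library bundles the Public Suffix List and handles the multi-subdomain cases from the question (a sketch, assuming <code>pip install tldextract</code>):</p> <pre><code>import tldextract

for host in ('docs.google.com', 'google.com', 'john.doe.google.co.uk'):
    ext = tldextract.extract(host)
    print(ext.registered_domain)  # google.com, google.com, google.co.uk
</code></pre>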
python|regex|subdomain|dns|tld
8
1,902,422
2,562,757
Is there a multithreaded map() function?
<p>I have a function that is side-effect free. I would like to run it for every element in an array and return an array with all of the results.</p> <p>Does Python have something to generate all of the values?</p>
<p>Try the Pool.map function from multiprocessing:</p> <p><a href="http://docs.python.org/library/multiprocessing.html#using-a-pool-of-workers" rel="noreferrer">http://docs.python.org/library/multiprocessing.html#using-a-pool-of-workers</a></p> <p>It's not multithreaded per se, but that's actually good, since multithreading is severely crippled in Python by the GIL.</p>
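<p>A minimal sketch (the squaring function is just a placeholder): <code>Pool.map</code> behaves like the built-in <code>map</code> but fans the calls out over worker processes. If you really do want threads behind the same interface, <code>multiprocessing.dummy.Pool</code> is a thread-backed drop-in variant:</p> <pre><code>from multiprocessing import Pool

def f(x):  # must be defined at module level so it can be pickled
    return x * x

if __name__ == '__main__':
    pool = Pool(4)  # four worker processes
    print(pool.map(f, range(10)))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
    pool.close()
    pool.join()
</code></pre>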
python|multithreading
18
1,902,423
6,031,954
inverted reindent.py (spaces to tabs)
<p>afaik <code>reindent.py</code> (available in the standard python examples) has a tokenizer allowing it to do smart reindenting based on the indentation level rather than on the number of spaces used per level (which can vary in bad code)</p> <p>unfortunately it enforces 4-space indentation, but i want tabs, because 1 tab == 1 indentation level is more logical than x spaces.</p> <p><a href="https://stackoverflow.com/questions/338767/tool-to-convert-python-indentation-from-spaces-to-tabs">this</a> question has no suitable answer:</p> <ul> <li>i don’t care about pep-8 (i know how to write my code)</li> <li>vim is installed, but <code>:retab!</code> doesn’t handle inconsistent indentation</li> <li>all tools convert spaces used for alignment (!= indentation) to tabs, too.</li> </ul> <p>one way would be to use reindent.py and afterwards do sth. like:</p> <pre><code>#!/usr/bin/env python3
from re import compile
from sys import argv

spaces = compile("^ +")

multistr = False
for line in open(argv[1]):
    num = 0
    if not multistr:
        try:
            num = len(spaces.search(line).group(0)) // 4
        except AttributeError:
            pass
    print("\t"*num + line[num*4:-1])
    if line.count('"""') % 2 == 1:
        multistr = not multistr
</code></pre> <p>but that’s rather hacky. is there no non-zealot version of reindent.py?</p> <p>PS: why does the highlighting suggest that <code>// 4</code> is a comment instead of a truncating division?</p> <hr> <p>The following script should do the trick, but either i missed sth., or tokenize is buggy (or the example in the python documentation)</p> <pre><code>#!/usr/bin/env python3
from tokenize import *
from sys import argv

f = open(argv[1])
def readline():
    return bytes(f.readline(), "utf-8")

tokens = []
ilvl = 0
for token in tokenize(readline):
    if token.type == INDENT:
        ilvl += 1
        tokens.append((INDENT, "\t"*ilvl))
    else:
        if token.type == DEDENT:
            ilvl -= 1
        tokens.append(token)

print(untokenize(tokens).decode('utf-8'))
</code></pre>
<p>Using <code>sed</code> in unix you could get it with one line:</p> <pre><code>sed -r ':f; s|^(\t*)\s{4}|\1\t|g; t f' file </code></pre> <p>edit: this will work for spaces at beginning of the line only.</p>
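<p>For reference, a rough Python equivalent of the same idea (it repeatedly converts the first run of four spaces that follows the leading tabs, so spaces used for alignment further into the line are left alone):</p> <pre><code>import re
import sys

pat = re.compile(r'^(\t*) {4}')

for line in open(sys.argv[1]):
    # keep substituting until the leading indentation is all tabs
    while pat.search(line):
        line = pat.sub(r'\1\t', line)
    sys.stdout.write(line)
</code></pre>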
python|indentation
3
1,902,424
68,026,866
Split a DataFrame column applying a function Python
<p>I have a data frame (see example df) and I need to split the column into 2 (see example df_exp).</p> <pre><code>import pandas as pd
import numpy as np

#given df
df = pd.DataFrame(np.array([[&quot;Joe&quot;, 25, &quot;40 RF&quot;], [&quot;Sam&quot;, 5, &quot;RM&quot;],
                            [&quot;Roy&quot;, 8, &quot;50 SD&quot;]]),columns=[0, 1, 2])

#expected df
df_exp = pd.DataFrame(np.array([[&quot;Joe&quot;, 25, &quot;40 RF&quot;, 40, &quot;RF&quot;], [&quot;Sam&quot;, 5, &quot;RM&quot;, None, &quot;RM&quot;],
                                [&quot;Roy&quot;, 8, &quot;50 SD&quot;, 50, &quot;SD&quot;]]),columns=[0, 1, 2, 2.1, 2.2])
</code></pre> <p>I have the following function:</p> <pre><code>def split_string(string):
    if string[0].isnumeric()==True:
        sep = string.split(&quot; &quot;,1)
        return sep[0], sep[1]
    else:
        return None, string
</code></pre> <p>I tried to apply it, but got an error. What is the best way to split a column using a function?</p> <pre><code>df[[2.1, 2.2]] = df.apply(lambda x: split_string(df.ix[:, 2]), axis = 1)
</code></pre>
<pre><code>import re def split_string(string): return re.search('(\d+)?\s*(\w+)?', string).groups() </code></pre> <pre><code>&gt;&gt;&gt; df[2].apply(split_string).apply(pd.Series) 0 1 0 40 RF 1 None RM 2 50 SD </code></pre> <p><em>Old answer:</em><br /> You can use <code>extract</code> to accomplish what you want:</p> <pre><code>&gt;&gt;&gt; df[2].str.extract(r'(\d+)?\s*(\w+)?') 0 1 0 40 RF 1 NaN RM 2 50 SD </code></pre>
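<p>To attach the result as the two extra columns from the expected output (the 2.1 / 2.2 labels below simply mirror the question's expected frame):</p> <pre><code>df[[2.1, 2.2]] = df[2].str.extract(r'(\d+)?\s*(\w+)?')
</code></pre>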
python|pandas|dataframe
1
1,902,425
67,859,455
How to track which resource is utilised by each customer in SimPY
<p>In the official SimPY documentation <a href="https://simpy.readthedocs.io/en/latest/topical_guides/monitoring.html#resource-usage" rel="nofollow noreferrer">here</a> under the 'Resource usage' section, there is an example on how we can monitor the usage of resources. However, is there a way to track the number of times each resource was utilized? For example, I would like to know how many customers utilize Counter 1, how many customers utilize Counter 2, etc.</p>
<p>As you noted, resources do not have identity, so you need to create your own resource objects that track their own usage, and hand them out through a store instead of a resource.</p>
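<p>A minimal sketch of that idea (the counter names, service time and customer count below are made up for illustration):</p> <pre><code>import simpy

class Counter(object):
    def __init__(self, name):
        self.name = name
        self.times_used = 0

def customer(env, store):
    counter = yield store.get()  # take whichever counter is free
    counter.times_used += 1
    yield env.timeout(5)         # service time (arbitrary)
    yield store.put(counter)     # hand the counter back

env = simpy.Environment()
counters = [Counter('Counter 1'), Counter('Counter 2')]
store = simpy.Store(env, capacity=len(counters))
store.items = counters

for _ in range(10):
    env.process(customer(env, store))
env.run()

for counter in counters:
    print(counter.name, 'was used', counter.times_used, 'times')
</code></pre>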
python|simpy
0
1,902,426
67,958,533
Simultaneous feature selection and hyperparameter tuning
<p>I'm trying to conduct both hyperparameter tuning and feature selection on a sklearn SVC model.</p> <p>I tried the below code, but am getting an error which I have included.</p> <pre><code>clf = Pipeline([('anova', SelectPercentile(f_classif)), ('svc', SVC( probability = True))]) score_means = list() score_params = list() percentiles = (1, 3, 6, 10, 15, 20, 30, 40, 60, 80, 100) params = { &quot;C&quot;: np.logspace(-3, 17, 21), &quot;gamma&quot;: np.logspace(-20, 1, 21), 'class_weight' : [None, 'balanced'] } halving_search = HalvingGridSearchCV(estimator = clf, param_grid = params, scoring = 'neg_brier_score', factor = 2, verbose = 2, cv = 2) for percentile in percentiles: clf.set_params(anova__percentile=percentile) this_scores = halving_search.fit(x_train, y_train) score_means.append(this_scores.best_score_) score_params.append(this_scores.best_params) </code></pre> <p>Running the pipeline code with a cross_val_score separate from the HalvingGridSearchCV works, but I want to conduct both feature selection and hyperparameter tuning to find which combination of features and hyperparameters produces the best model.</p> <p>When I run the above code, I get the following error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;ipython-input-83-cf714445297c&gt;&quot;, line 4, in &lt;module&gt; this_scores = halving_search.fit(x_train, y_train) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\model_selection\_search_successive_halving.py&quot;, line 213, in fit super().fit(X, y=y, groups=None, **fit_params) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\utils\validation.py&quot;, line 63, in inner_f return f(*args, **kwargs) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py&quot;, line 841, in fit self._run_search(evaluate_candidates) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\model_selection\_search_successive_halving.py&quot;, line 320, in _run_search more_results=more_results) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py&quot;, line 809, in evaluate_candidates enumerate(cv.split(X, y, groups)))) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\joblib\parallel.py&quot;, line 1041, in __call__ if self.dispatch_one_batch(iterator): File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\joblib\parallel.py&quot;, line 859, in dispatch_one_batch self._dispatch(tasks) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\joblib\parallel.py&quot;, line 777, in _dispatch job = self._backend.apply_async(batch, callback=cb) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\joblib\_parallel_backends.py&quot;, line 208, in apply_async result = ImmediateResult(func) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\joblib\_parallel_backends.py&quot;, line 572, in __init__ self.results = batch() File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\joblib\parallel.py&quot;, line 263, in __call__ for func, args, kwargs in self.items] File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\joblib\parallel.py&quot;, line 263, in &lt;listcomp&gt; for func, args, kwargs in self.items] File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\utils\fixes.py&quot;, line 222, in __call__ return self.function(*args, **kwargs) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\model_selection\_validation.py&quot;, line 581, in _fit_and_score estimator = estimator.set_params(**cloned_parameters) File 
&quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\pipeline.py&quot;, line 150, in set_params self._set_params('steps', **kwargs) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\utils\metaestimators.py&quot;, line 54, in _set_params super().set_params(**params) File &quot;C:\Users\fredd\Anaconda3\lib\site-packages\sklearn\base.py&quot;, line 233, in set_params (key, self)) ValueError: Invalid parameter C for estimator Pipeline(steps=[('anova', SelectPercentile(percentile=1)), ('svc', SVC(probability=True))]). Check the list of available parameters with `estimator.get_params().keys()`. </code></pre> <p>It reads like the halvingsearch is trying to pass the pipeline as an input for C.</p>
<p>You want to perform a grid search over a <code>Pipeline</code> object. When defining the parameters for the different steps of the pipeline, you have to use the <code>&lt;step&gt;__&lt;parameter&gt;</code> syntax:</p> <pre class="lang-py prettyprint-override"><code>params = { &quot;svc__C&quot;: np.logspace(-3, 17, 21), &quot;svc__gamma&quot;: np.logspace(-20, 1, 21), &quot;svc__class_weight&quot; : [None, 'balanced'] } </code></pre> <p>See the <a href="https://scikit-learn.org/stable/modules/compose.html#nested-parameters" rel="nofollow noreferrer">user guide</a> for more information.</p>
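<p>As an aside, if the aim is to tune the feature selection and the SVC together, the percentile can live in the same grid instead of the outer <code>for</code> loop, so a single halving search covers both (percentile values copied from the question; this is just a sketch of the idea):</p> <pre><code>params = {
    'anova__percentile': [1, 3, 6, 10, 15, 20, 30, 40, 60, 80, 100],
    'svc__C': np.logspace(-3, 17, 21),
    'svc__gamma': np.logspace(-20, 1, 21),
    'svc__class_weight': [None, 'balanced'],
}
</code></pre>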
python|scikit-learn|svm|feature-selection|hyperparameters
1
1,902,427
66,358,374
Conversion from tf.gradients() to tf.GradientTape() returns None
<p>I'm migrating some TF1 code to TF2. For full code, you may check <a href="https://github.com/openai/baselines/blob/master/baselines/acer/acer.py" rel="nofollow noreferrer">here</a> lines [155-176]. There is a line in TF1 that gets gradients given a loss (float value) and a (m, n) tensor</p> <p><strong>Edit:</strong> the problem persists</p> <p><strong>Note:</strong> the TF2 code should be compatible and should work inside a <code>tf.function</code></p> <pre><code>g = tf.gradients(-loss, f) # loss being a float and f being a (m, n) tensor k = -f_pol / (f + eps) # f_pol another (m, n) tensor and eps a float k_dot_g = tf.reduce_sum(k * g, axis=-1) adj = tf.maximum( 0.0, (tf.reduce_sum(k * g, axis=-1) - delta) / (tf.reduce_sum(tf.square(k), axis=-1) + eps), ) g = g - tf.reshape(adj, [nenvs * nsteps, 1]) * k grads_f = -g / (nenvs * nsteps) grads_policy = tf.gradients(f, params, grads_f) # params being the model parameters </code></pre> <p>In TF2 code I'm trying:</p> <pre><code>with tf.GradientTape() as tape: f = calculate_f() f_pol = calculate_f_pol() others = do_further_calculations() loss = calculate_loss() g = tape.gradient(-loss, f) </code></pre> <p>However I keep getting <code>g = [None]</code> whether I use <code>tape.watch(f)</code> or create a <code>tf.Variable</code> with the value of <code>f</code> or even use <code>tf.gradients()</code> inside a <code>tf.function</code> because otherwise, it will complain.</p>
<p>It is very likely one of the cases below:</p> <ol> <li>Defining a <code>tf.Variable</code> inside a function decorated with <code>@tf.function</code>.</li> <li>Some variables are <code>numpy.array</code> instead of <code>tf.Tensor</code>.</li> <li>You alter some outside variable (i.e. a global variable) inside the decorated function.</li> </ol>
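<p>A small eager-mode sketch of case 2: the moment a value round-trips through NumPy, the tape loses the dependency and the gradient comes back as <code>None</code>:</p> <pre><code>import tensorflow as tf

x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)                     # x is a plain tensor, so watch it
    y = x * x                         # stays on the tape
    z = tf.constant(y.numpy()) * 2.0  # numpy round-trip severs the graph

print(tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)
print(tape.gradient(z, x))  # None
</code></pre>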
python|tensorflow|migration|tensorflow2.x|tensorflow1.15
1
1,902,428
72,246,967
SQLAlchemy: AttributeError: 'BaseQuery' object has no attribute 'select'
<p>I'm trying to query multiple tables by joining and selecting specific columns using SQLAlchemy. My query looks like this:</p> <pre><code>data = db.session.query(
    TableA,
    TableB,
    TableC,
).filter(
    TableA.id == TableB.x_id,
    TableA.id2 == TableB.y_id,
    TableA.id3 == TableB.z_id,
).filter(
    TableA.id == TableC.id_a,
    TableA.id3 == TableC.id_b,
    between(TableB.date, from_date, to_date)
).distinct(
    TableA.id
).select(
    TableA.id,
    TableA.contact,
    TableB.city,
    TableC.source,
    TableC.name,
).all()
</code></pre> <p>When I run this I get an error</p> <pre><code>AttributeError: 'BaseQuery' object has no attribute 'select'
</code></pre> <p>How can I select specific columns with this query, or how can it be improved? The reason for selecting specific columns is that all of these tables have tens of columns, and returning all columns will be slow as it will carry a lot of data. I tried to move the <code>select</code> func to the beginning but it didn't help.</p>
<p>The query object has no <code>select</code> method; you choose the entities up front when calling <code>query()</code> and then filter, just like this:</p> <pre><code>db.query(models.TableA, models.TableB, models.TableC).filter(models.TableA.id)
</code></pre> <p>and to fetch the resulting objects you call <code>first()</code> or <code>all()</code>.</p>
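<p>To pull only the specific columns (the question's actual goal), you can pass the column attributes straight to <code>query()</code> instead of the whole models; a sketch reusing the question's tables and filters:</p> <pre><code>data = db.session.query(
    TableA.id,
    TableA.contact,
    TableB.city,
    TableC.source,
    TableC.name,
).filter(
    TableA.id == TableB.x_id,
    TableA.id2 == TableB.y_id,
    TableA.id3 == TableB.z_id,
).filter(
    TableA.id == TableC.id_a,
    TableA.id3 == TableC.id_b,
    between(TableB.date, from_date, to_date),
).distinct(
    TableA.id
).all()
</code></pre>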
python|postgresql|sqlalchemy
1
1,902,429
65,713,325
How can I bundle my JSON API data into one dictionary?
<p>I'm trying to package my API data into one GET request using standard Python libraries.</p> <pre><code>class GetData(APIView):
    def get(self, request, *args, **kwargs):
        urls = [url_1,
                url_2,
                url_3,
                url_4
                ]
        data_bundle = []
        for x in urls:
            response = requests.get(x, headers={'Content-Type': 'application/json'}).json()
            data_bundle.append(response)
            return Response(data_bundle, status=status.HTTP_200_OK)
</code></pre> <p>The return response has to be JSON data. I'm trying to get it to work, but it seems like the response data is overriding each other? How can I properly create a JSON dictionary of dictionaries?</p> <p>I've tried switching <code>data_bundle</code> to an empty dictionary instead of a list. However that just caused an error saying:</p> <p><code>ValueError: dictionary update sequence element #0 has length 8; 2 is required</code></p> <p>Is there a simple way to accomplish this that I'm missing? Thank you for the help.</p>
<pre><code>class GetData(APIView):
    def get(self, request, *args, **kwargs):
        urls = [url_1,
                url_2,
                url_3,
                url_4
                ]
        data_bundle = []
        for x in urls:
            response = requests.get(x, headers={'Content-Type': 'application/json'}).json()
            data_bundle.append(response)
        return Response(data_bundle, status=status.HTTP_200_OK)
</code></pre> <p>Maybe this will help: don't put the <code>return</code> in the body of the <code>for</code> loop. Move it out, as above, and the responses will no longer override each other.</p>
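<p>If you would rather return a dictionary of dictionaries than a list, a small variant (keying by URL here is just one choice of label):</p> <pre><code>data_bundle = {
    url: requests.get(url, headers={'Content-Type': 'application/json'}).json()
    for url in urls
}
return Response(data_bundle, status=status.HTTP_200_OK)
</code></pre>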
python|json|django|dictionary|django-rest-framework
2
1,902,430
65,639,429
What is the difference between the 2 snippets of code below
<pre><code>np.array(df[column_name].values) </code></pre> <p>and,</p> <pre><code>df[column_name].values </code></pre> <p>I'm aware that both of them return an array, but how do they differ?</p>
<p>In many cases, <code>np.array</code> will copy the array provided to it. The actual scenarios in which it copies is in <a href="https://numpy.org/doc/stable/reference/generated/numpy.array.html" rel="nofollow noreferrer">the documentation</a>.</p> <p>By default, it should make a new copy of the provided array, as in your case. The reason why you would want to do this is because <code>df.values</code> will provide direct access to the data stored in the dataframe. Copying the values would allow manipulations of the copy while keeping the original state of the dataframe intact.</p> <h3>Here's a quick test you can try:</h3> <pre class="lang-py prettyprint-override"><code>import numpy as np a = np.random.random((10,)) # create a random array (analogous to df.values) b1 = a # direct assignment, no np.array() b2 = np.array(a) # now use np.array(), should copy print(a) # Let's now modify the original array a, and see which variables change: a[0] += 10 print(&quot;Modified:&quot;) print(a) print(b1) print(b2) # You'll see that b1 and a will reflect the change, but not b2, # since b2 was a copy </code></pre>
python|pandas|numpy
3
1,902,431
65,892,417
How to make file comparison with other files in Python
<p>I am new to python :( I have:</p> <p>Main file (tokens): beautiful 2, amazing 5, speechless 2</p> <p>Folder with 73 files.</p> <p>How can I write a script in Python that checks, for each word in the main file, how many of the source files it appears in? <strong>For example: the word beautiful appears in 55 sources, the word amazing appears in 30 sources, the word speechless appears in 73 sources.</strong></p> <pre><code>from os import listdir

with open(&quot;C:/Users/ell/Desktop/Archivess/test/rez.txt&quot;, &quot;w&quot;) as f:
    for filename in listdir(&quot;C:/Users/ell/Desktop/Archivess/test/sources/books/&quot;):
        with open('C:/Users/ell/Desktop/Archivess/test/freqs/books/' + filename) as currentFile:
            text = currentFile.read()
            if ('amazing' in text):
                f.write('The word exists in the file ' + filename[:-4] + '\n')
            else:
                f.write('The word does not exist in the file ' + filename[:-4] + '\n')
</code></pre> <p>I have written the code, but it only checks the one word I hard-coded in the loop. How can I do this for all the words? I appreciate any help.</p>
<p>Before doing a loop on every file, you should first read the file containing your tokens, parse them, and store them in a list (or a dict, or anything you want), then check whether each element of this collection is present in the file.</p> <p>Dicts are convenient because you can store the frequency of each word as the value. For instance, you can do <code>{&quot;beautiful&quot;: 0, &quot;amazing&quot;: 0}</code>, then increment each value when its key appears in a file.</p> <p>If your token file looks something like this...</p> <pre><code>amazing
beautiful
speechless
</code></pre> <p>You can do something like that (note the <code>strip()</code>, without which the trailing newlines would end up inside the keys).</p> <pre><code>with open(&quot;token_file.txt&quot;, &quot;r&quot;) as f:
    # Creates a dict with &quot;token&quot; as keys and 0 as values.
    tokens = {token.strip(): 0 for token in f.readlines()}
    # tokens = {&quot;beautiful&quot;: 0, &quot;amazing&quot;: 0, etc...}
</code></pre>
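<p>From there, a sketch of the counting loop over the folder (paths reused from the question; adjust as needed):</p> <pre><code>from os import listdir

folder = 'C:/Users/ell/Desktop/Archivess/test/freqs/books/'
for filename in listdir(folder):
    with open(folder + filename) as current_file:
        text = current_file.read()
    for token in tokens:
        if token in text:
            tokens[token] += 1  # this word appears in one more source

for token, count in tokens.items():
    print('The word', token, 'appears in', count, 'sources')
</code></pre>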
python|python-3.x|dataframe|comparison
0
1,902,432
3,702,628
Fastest way to resolve 100 million A-records in Python
<p>I have a list with 100 million domain names like www.microsoft.com and would like to resolve each of them to its IP number.</p> <p>Running a local pdns server and querying localhost using Python adns?</p>
<p>I'd probably use the <a href="http://twistedmatrix.com/documents/current/api/twisted.names.html" rel="nofollow noreferrer">Twisted DNS</a> library to do the DNS resolution against <a href="http://code.google.com/speed/public-dns/docs/using.html" rel="nofollow noreferrer">Google's Public DNS</a> (ip address: 8.8.8.8). It'd take some trial and error, but I'd guess you could have at least a couple hundred outstanding queries going at once. Google's DNS infrastructure is designed to handle a huge load, and Twisted is well suited to handling thousands of simultaneous asynchronous operations.</p>
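<p>A rough, untested sketch of what that could look like with <code>twisted.names</code> (the two hard-coded names stand in for reading batches from the real list, and production code would throttle how many lookups are in flight at once):</p> <pre><code>from twisted.internet import defer, reactor
from twisted.names import client

resolver = client.createResolver(servers=[('8.8.8.8', 53)])

def lookup(name):
    d = resolver.getHostByName(name)
    d.addCallback(lambda ip: print(name, ip))
    d.addErrback(lambda failure: print(name, 'lookup failed'))
    return d

names = ['www.microsoft.com', 'www.google.com']
done = defer.DeferredList([lookup(n) for n in names])
done.addCallback(lambda _: reactor.stop())
reactor.run()
</code></pre>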
python|dns|performance
1
1,902,433
50,385,327
Get the accuracy of individual test images in image segmentation
<p>I am using CNNs in Tensorflow for image segmentation. I know how to compute the training accuracy </p> <pre><code>#compute the accuracy
correct_prediction = tf.equal(tf.argmax(flat_logits, 1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
[train_accuracy] = sess.run([accuracy], feed_dict={x: batch_x, y:batch_y})
</code></pre> <p>Is it possible to compute the accuracy of each individual test image? </p>
<p>Yes, it is possible. You can do it by simply writing:</p> <pre><code>test_accuracy = sess.run(accuracy, feed_dict={x: x_test, y:y_test})
</code></pre> <p>where x_test is your single test image (say of size [1, width, height, depth]) and y_test is the corresponding output.</p>
tensorflow|machine-learning|image-segmentation|convolutional-neural-network
1
1,902,434
50,601,585
Multiple conditions np.extract
<p>I have an array and want to extract all entries which are in a specific range</p> <pre><code>x = np.array([1,2,3,4])
condition = x&lt;=4 and x&gt;1
x_sel = np.extract(condition,x)
</code></pre> <p>But this does not work. I'm getting </p> <pre><code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre> <p>If I do the same without the <code>and</code>, checking for example only one condition</p> <pre><code>x = np.array([1,2,3,4])
condition = x&lt;=4
x_sel = np.extract(condition,x)
</code></pre> <p>everything works... Of course I could just apply the procedure twice with one condition each, but isn't there a solution to do this in one line?</p> <p>Many thanks in advance</p>
<p>You can use either this:</p> <pre><code>import numpy as np x = np.array([1,2,3,4]) condition = (x &lt;= 4) &amp; (x &gt; 1) x_sel = np.extract(condition,x) print(x_sel) # [2 3 4] </code></pre> <p>Or this without <code>extract</code>:</p> <pre><code>x_sel = x[(x &gt; 1) &amp; (x &lt;= 4)] </code></pre>
python|numpy
7
1,902,435
50,559,525
Web scraping with Beautiful Soup (Not capturing all Information)
<p>I have used the beautiful soup package a few times, but this is the first time it doesn't have all the information I need. How do I get the full webpage? I need to extract all the publications and hyperlinks to the papers.</p> <pre><code>from bs4 import BeautifulSoup import requests url = 'https://openreview.net/group?id=ICLR.cc/2018/Conference' source = requests.get(url).text soup = BeautifulSoup(source, 'html.parser') </code></pre>
<p>There are other HTTP requests that are filling in the webpage. A good way of seeing these is using the inspector provided in a web browser. In Chrome, you can see these requests under the 'Network' tab in the inspector.</p> <p>The requests are as follows:</p> <ul> <li>GET <a href="https://openreview.net/notes?invitation=ICLR.cc%2F2018%2FConference%2F-%2FBlind_Submission&amp;details=replyCount&amp;offset=0&amp;limit=1000" rel="nofollow noreferrer">https://openreview.net/notes?invitation=ICLR.cc%2F2018%2FConference%2F-%2FBlind_Submission&amp;details=replyCount&amp;offset=0&amp;limit=1000</a></li> <li>GET <a href="https://openreview.net/notes?invitation=ICLR.cc%2F2018%2FConference%2F-%2FWithdrawn_Submission&amp;noDetails=true&amp;offset=0&amp;limit=1000" rel="nofollow noreferrer">https://openreview.net/notes?invitation=ICLR.cc%2F2018%2FConference%2F-%2FWithdrawn_Submission&amp;noDetails=true&amp;offset=0&amp;limit=1000</a></li> <li>GET <a href="https://openreview.net/notes?invitation=ICLR.cc%2F2018%2FConference%2F-%2FAcceptance_Decision&amp;noDetails=true&amp;offset=0&amp;limit=1000" rel="nofollow noreferrer">https://openreview.net/notes?invitation=ICLR.cc%2F2018%2FConference%2F-%2FAcceptance_Decision&amp;noDetails=true&amp;offset=0&amp;limit=1000</a></li> </ul> <p>It appears that each one returns JSON text with the information you are looking for (the publications and hyperlinks to the papers), so you can just create an individual request for each of these URL's and access the returned JSON in the following manner:</p> <pre><code>import json source = requests.get(new_url).text # json.loads returns a Python dictionary data = json.loads(source) for publication in data['notes']: publication_info = publication['_bibtex'] url = publication_info.split('\nurl={')[1].split('}')[0] </code></pre> <p>The element containing the URL for each publication is rather difficult to parse since it has characters not allowed in dictionary names (i.e. '@'), but this solution should work.</p> <p>Note that I have not tested this solution, so there might be some errors, but the underlying logic behind the solution should be correct.</p> <hr> <h2>Alternatively:</h2> <p>You can use <a href="https://splash.readthedocs.io/en/stable/" rel="nofollow noreferrer">Splash</a>, which is used to render Javascript-based pages. You can run Splash in Docker quite easily, and just make HTTP requests to the Splash container which will return HTML that looks just like the webpage as rendered in a web browser.</p> <p>Although this sounds overly complicated, it is actually quite simple to set up since you don't need to modify the Docker image at all, so you need no previous knowledge of docker to work. It requires just a single line to start a local Splash server: <code>docker run -p 8050:8050 -p 5023:5023 scrapinghub/splash</code></p> <p>You then just modify any existing requests you have in your Python code to route to splash instead:</p> <p>i.e. <code>http://example.com/</code> becomes<br> <code>http://localhost:8050/render.html?url=http://example.com/</code></p>
python-3.x|beautifulsoup
3
1,902,436
57,919,973
Interpret GAN loss
<p>I am currently training the standard DCGAN network on my dataset. After 40 epochs, the loss of both generator and discriminator is 45-50. Can someone please explain the reason and possible solution for this?</p>
<p>This interpretation may be added to <a href="https://en.wikipedia.org/wiki/List_of_unsolved_problems_in_mathematics" rel="nofollow noreferrer">unsolved problems</a>.</p> <p>You cannot directly interpret the generator and discriminator losses, since when one improves, things get harder for the other: when the generator improves, it is harder for the critic, and when the critic improves, it is harder for the generator.</p> <p>The values depend entirely on your loss function. At best you can expect the numbers to stay &quot;about the same&quot; over time.</p>
deep-learning|pytorch|gradient-descent|generative-adversarial-network
1
1,902,437
56,118,707
split nested list into sublists by int values
<p>I have two lists:</p> <pre class="lang-py prettyprint-override"><code>data = [[1,2,3,4], [5,6,7], [8,9,10,11,12,13,14]]
splitters = [3,7,10,13]
</code></pre> <p>I want to split the nested lists in <em>data</em> by the values in <em>splitters</em> with the following conditions:</p> <ol> <li>Don't split if it's the first/last value in the list. </li> <li>The split value from splitters should be at the end <strong>and</strong> at the beginning of the new lists.</li> <li>Should be kind of iterable, so the lists are split into as many parts as there are splitters in the list.</li> <li>No redundancy.</li> </ol> <p>Final result should be something like:</p> <pre class="lang-py prettyprint-override"><code>results = [[1,2,3],[3,4],[5,6,7],[8,9,10],[10,11,12,13],[13,14]]
</code></pre> <p>My first attempt looks like this:</p> <pre class="lang-py prettyprint-override"><code>temp = []
for route in data:
    for node in route:
        if node in splitters and ((route.index(node) !=0) and (route.index(node) != (len(route)-1))):
            #route should be split and save it for now with the splitter
            temp.append([route, node])
            #here a big part is missing
            #start a new subroute
            #maybe something like a whileloop with len(route)
            #check the same if-statement for the remaining subroute
        else:
            #no splitter in this route, so keep the original route
            temp.append([route, 0])
</code></pre> <p>temp looks like that:</p> <pre><code>[[[1, 2, 3, 4], 0], [[1, 2, 3, 4], 0], [[1, 2, 3, 4], 3], [[1, 2, 3, 4], 0],...]
</code></pre> <p>Based on that, I could remove redundant routes and split the route, but I think my approach is unnecessarily complicated, and it gets more and more confusing as I try to implement the other conditions.</p> <p>My research was not successful so far (using itertools.groupby etc.). This is kind of related: <a href="https://www.reddit.com/r/learnpython/comments/3sk1xj/splitting_a_list_in_sublists_by_values/" rel="nofollow noreferrer">https://www.reddit.com/r/learnpython/comments/3sk1xj/splitting_a_list_in_sublists_by_values/</a> </p> <p>Would appreciate some ideas/approaches how to solve this problem or subdivide it into smaller parts.</p> <p><strong>Edit for future readers:</strong> I prefer the solution from maxiotic, because it works even with data like this</p> <pre><code>data = [[1,2,3],[1,2,3,4,5,6,7]]
splitters = [1,2,3,4,7]
</code></pre> <p>where every start/end of the nested lists is in splitters. The problem in the solution from Relondom is the following if statement, which has to be changed:</p> <pre class="lang-py prettyprint-override"><code>    if inner[0] in splitters or inner[-1] in splitters: # check if first or last element in splitters
</code></pre> <p>Thanks a lot!</p>
<p>I have no idea if this is an optimal way, but I decided to write this code since no one answered yet.</p> <pre><code>res = []
for inner in data:
    if inner[0] in splitters or inner[-1] in splitters: # check if first or last element in splitters
        res.append(inner)
        continue
    else:
        temp = []
        for val in inner:
            temp.append(val)
            if val in splitters:
                res.append(temp) # list ends with value from splitters; add new list to result
                temp = [val]     # new list starts with value from splitters
        if temp not in res:
            res.append(temp)
</code></pre>
python|python-3.x|split|nested-lists
0
1,902,438
56,347,143
Comparing a datetime dataframe with a period dataframe
<p>I have been stuck with a simple <strong>pandas dataframe problem</strong> and maybe someone faced this situation before...</p> <p>Thank you in advance :)</p> <p>I have two dataframes, df1 and df2:</p> <p>df1</p> <pre><code>unique_id   timestamp
1           2019-01-21
2           2019-02-01
3           2019-04-05
4           2019-05-01
5           2019-05-12
...         ...
</code></pre> <p>df2</p> <pre><code>classification   from         to
A                2019-01-05   2019-02-02
B                2019-02-03   2019-02-28
C                2019-03-01   2019-04-05
D                2019-04-06   2019-05-03
E                2019-05-04   2019-05-31
...              ...          ...
</code></pre> <p>My goal is to compare each <strong>timestamp</strong> in df1 with each <strong>from</strong> <strong>to</strong> date interval in df2 and be able to classify every <strong>unique_id</strong> of df1 with the corresponding <strong>classification</strong> of df2</p> <p>I was trying something like this:</p> <pre><code>df1.loc[(df1['timestamp'] &gt; df2['from']) &amp; (df1['timestamp'] &lt; df2['to']), 'class'] = df2['classification']
</code></pre> <p>but always get a <strong>ValueError: Can only compare identically-labeled Series objects</strong>, even though both datetime dtypes are exactly the same, <strong>datetime64[ns]</strong>...</p> <p><strong>Expected Output</strong>:</p> <pre><code>unique_id   timestamp    classification
1           2019-01-21   A
2           2019-02-01   A
3           2019-04-05   C
4           2019-05-01   D
5           2019-05-12   E
...         ...          ...
</code></pre>
<p>What I would personally do is convert the timestamps to unix timestamps:</p> <pre><code>from time import mktime

df1['timestamp'] = df1['timestamp'].apply(lambda t: int(mktime(t.timetuple())))
</code></pre> <p>Do the same for df2's <code>from</code> and <code>to</code> columns to get your start and end timestamps; you can then compare each timestamp against each interval numerically without getting that error message.</p>
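<p>Alternatively, a sketch that stays in pandas and produces the expected output directly: build an <code>IntervalIndex</code> from the from/to columns (this assumes they are already parsed as datetimes and that every timestamp falls inside some interval) and look each timestamp up in it:</p> <pre><code>idx = pd.IntervalIndex.from_arrays(df2['from'], df2['to'], closed='both')
df1['classification'] = df2.set_index(idx).loc[df1['timestamp'], 'classification'].values
</code></pre>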
python|python-3.x|pandas|dataframe|datetime
1
1,902,439
18,515,086
Rich text formatting: Setting font overwrites other settings like underline, pointsize, but fontcolors stays
<p>So as nobody wanted to use my running example, here is just the code for changing the font. It does not work as it is, though I thought it should. If you uncomment the commented lines, it works. But why? Shouldn't this be the default behaviour?</p> <pre><code>def changeFont(self): cur=self.textedit.textCursor() if cur.hasSelection(): begin=cur.anchor() end=cur.position() if begin&gt;end: helper=end end=begin begin=helper else: cur.select(QTextCursor.Document) begin=0 plainText=self.textedit.toPlainText() end=len(plainText) for i in range(begin,end): cur.setPosition(i) cur.movePosition(QTextCursor.Right, QTextCursor.KeepAnchor) fmt=cur.charFormat() #pointSize=fmt.fontPointSize() #if fmt.fontUnderline(): # underline=True #else: # underline=False #if fmt.fontItalic(): # italic=True #else: # italic=False #if fmt.fontWeight()==75: # bold=True #else: # bold=False #if fmt.fontStrikeOut(): # strikeOut=True #else: # strikeOut=False fmt.setFont(QFont(self.font)) #if underline: # fmt.setFontUnderline(True) #if italic: # fmt.setFontItalic(True) #if bold: # fmt.setFontWeight(75) #if strikeOut: # fmt.setFontStrikeOut(True) #fmt.setFontPointSize(pointSize) cur.mergeCharFormat(fmt) </code></pre>
<p>With all due respect... please clarify the question and reduce the code to a minimum that demonstrates the problem. The question seems like 'can you fix my code' and most people don't want to read so much code.</p> <p>Having said that, the Qt model is: a TextCursor applies a Format to a portion of a TextDocument. You get the format, change it, and tell the cursor to reapply it (set it or merge it). There are several class of Format (Block, Font, etc.) that can be changed independently. The color of the font is in its format. You got the format (it had a color in it), you didn't change the color in the format, you reapplied the format, why SHOULD the color change? (I didn't read your code carefully, sorry if I am misstating something.)</p>
python|html|qt|pyqt4
0
1,902,440
18,708,172
Tkinter: How to set ttk.Radiobutton activated and get its value?
<p>1) I need to set one of my three ttk.Radiobuttons activated by default when I start my gui app.<br> How do I do it?</p> <p>2) I also need to check if one of my ttk.Radiobuttons was activated/clicked by the user.<br> How do I do it?</p> <pre><code>rb1 = ttk.Radiobutton(self.frame, text='5', variable=self.my_var, value=5) rb2 = ttk.Radiobutton(self.frame, text='10', variable=self.my_var, value=10) rb3 = ttk.Radiobutton(self.frame, text='15', variable=self.my_var, value=15) self.rb1.grid(row=0) self.rb2.grid(row=1) self.rb3.grid(row=2) </code></pre>
<p>Use <code>self.my_var.set(5)</code> to make the radiobutton with <code>text='5'</code> (whose <code>value</code> is 5) the default RadioButton.</p> <p>To get the selected one you have to call a function:</p> <pre><code>self.rb1 = ttk.Radiobutton(self.frame, text='5', variable=self.my_var, value=5, command=self.selected)
self.rb2 = ttk.Radiobutton(self.frame, text='10', variable=self.my_var, value=10, command=self.selected)
self.rb3 = ttk.Radiobutton(self.frame, text='15', variable=self.my_var, value=15, command=self.selected)

self.rb1.grid(row=0)
self.rb2.grid(row=1)
self.rb3.grid(row=2)

def selected(self):
    if self.my_var.get()==5:
        "do something"
    elif self.my_var.get()==10:
        "do something"
    else:
        "do something"
</code></pre>
python|python-2.7|tkinter|ttk
21
1,902,441
71,566,073
Flask RESTful and Angular CORS Error on POST Method
<p>I'm trying to submit a form on a Angular Web application using my Flask-RESTful api, but when i click &quot;submit&quot; i have a error with CORS.</p> <h1><strong>APP.PY</strong></h1> <pre><code>from flask import Flask from flask_restful import Api from flask_cors import CORS from flask_pymongo import PyMongo import resources.products app = Flask(__name__) CORS(app, resources={r'/*': {'origins': '*'}}) app.config['MONGO_URI'] = &quot;MONGO URI REPLACED PER SECURITY REASONS&quot; app.config['CORS_HEADERS'] = 'Content-Type' api = Api(app) mongo = PyMongo(app) api.add_resource(resources.products.Products, '/product/') api.add_resource(resources.products.Product, '/product/&lt;int:productId&gt;') if __name__ == &quot;__main__&quot;: app.run(debug=True) </code></pre> <h1><strong>products.py</strong></h1> <pre><code>from flask_restful import Resource, reqparse from flask import Response from bson import json_util from random import randint class Products(Resource): def get(self): import app productsGetAll = app.mongo.db.Products.find() resp = json_util.dumps(productsGetAll) return Response(resp, mimetype='application/json') def post(self): import app args = reqparse.RequestParser() args.add_argument('productName', type=str, required=True, help=&quot;productName cannot be empty.&quot;) args.add_argument('productSKU', type=str, required=True, help=&quot;productSKU cannot be empty.&quot;) args.add_argument('productCategory', type=str, required=True, help=&quot;productCategory cannot be empty.&quot;) args.add_argument('productEAN', type=str, required=True, help=&quot;productEAN cannot be empty.&quot;) args.add_argument('productCostPrice', type=str, required=True, help=&quot;productCostPrice cannot be empty.&quot;) args.add_argument('productSellPrice', type=str, required=True, help=&quot;productSellPrice cannot be empty.&quot;) args.add_argument('productStock', type=str, required=True, help=&quot;productStock cannot be empty.&quot;) args.add_argument('productProvider', type=str, required=True, help=&quot;productProvider cannot be empty.&quot;) args.add_argument('productDescription', type=str) data = args.parse_args() if data['productCategory'] and data['productCostPrice'] and data['productEAN'] and data['productName'] and data['productProvider'] and data['productSKU'] and data['productSellPrice'] and data['productStock'] and data['productDescription']: rId = randint(0, 9) data['productSKU'] = ''.join( filter(str.isalnum, data['productSKU'])) idTeste = app.mongo.db.Products.estimated_document_count() productsFilterCount = idTeste if idTeste == 0 else app.mongo.db.Products.find( ).sort('id', -1).limit(1)[0]['id'] productsFilterCount = int(productsFilterCount) + 1 app.mongo.db.Products.insert_one( {'id': int(productsFilterCount), 'productCategory': str(data['productCategory']), 'productCostPrice': int(data['productCostPrice']), 'productEAN': int(data['productEAN']), 'productName': data['productName'], 'productProvider': data['productProvider'], 'productSKU': data['productSKU'], 'productSellPrice': int(data['productSellPrice']), 'productStock': int(data['productStock']), 'productDescription': data['productDescription'] } ) productReturn = app.mongo.db.Products.find( {'id': int(productsFilterCount)}) resp = json_util.dumps(productReturn) return Response(resp, mimetype='application/json') else: return {'message': 'Cannot post this product. 
Try again later.'} class Product(Resource): def get(self, productId): import app product = app.mongo.db.Products.find({'id': productId}) resp = json_util.dumps(product) return Response(resp, mimetype='application/json') def delete(self, productId): import app arqDelete = app.mongo.db.Products.delete_one({'id': productId}) return {'message': 'Deletado'} def put(self, productId): import app args = reqparse.RequestParser() args.add_argument('productName', type=str, required=True, help=&quot;productName cannot be empty.&quot;) args.add_argument('productSKU', type=str, required=True, help=&quot;productSKU cannot be empty.&quot;) args.add_argument('productCategory', type=str, required=True, help=&quot;productCategory cannot be empty.&quot;) args.add_argument('productEAN', type=str, required=True, help=&quot;productEAN cannot be empty.&quot;) args.add_argument('productCostPrice', type=str, required=True, help=&quot;productCostPrice cannot be empty.&quot;) args.add_argument('productSellPrice', type=str, required=True, help=&quot;productSellPrice cannot be empty.&quot;) args.add_argument('productStock', type=str, required=True, help=&quot;productStock cannot be empty.&quot;) args.add_argument('productProvider', type=str, required=True, help=&quot;productProvider cannot be empty.&quot;) args.add_argument('productDescription', type=str) data = args.parse_args() print(data['productStock']) productEdit = app.mongo.db.Products.find_one_and_update( {'id': productId}, {'$set': {'productName': data['productName'], 'productSKU': data['productSKU'], 'productCategory': data['productCategory'], 'productEAN': int(data['productEAN']), 'productCostPrice': int(data['productCostPrice']), 'productSellPrice': int(data['productSellPrice']), 'productStock': int(data['productStock']), 'productProvider': data['productProvider'], 'productDescription': data['productDescription'] }, }) resp = json_util.dumps(productEdit) return Response(resp, mimetype='application/json') </code></pre> <p>Everything runs fine on the GET and DELETE method. but POST, PUT, PATCH still getting CORS error. I scanned all the internet for possible solutions envolving FLASK_CORS and nothing works. even seting the to any origin (using: '/*') or making a Angular Proxy file..</p>
<p>You likely are having an issue with the preflight request that's being sent, so you need to pass <code>supports_credentials=True</code> or <code>CORS_SUPPORTS_CREDENTIALS=True</code> in your config.</p> <p>Here is what a preflight request is:</p> <p><a href="https://developer.mozilla.org/en-US/docs/Glossary/Preflight_request" rel="nofollow noreferrer">https://developer.mozilla.org/en-US/docs/Glossary/Preflight_request</a></p> <p>Flask CORS discussion:</p> <p>See here: <a href="https://github.com/corydolphin/flask-cors/issues/200" rel="nofollow noreferrer">https://github.com/corydolphin/flask-cors/issues/200</a></p>
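<p>Against the question's app.py, that is a one-line change to the <code>CORS(...)</code> call (shown here as a sketch):</p> <pre><code>CORS(app, resources={r'/*': {'origins': '*'}}, supports_credentials=True)
</code></pre>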
python|mongodb|rest|flask|cors
0
1,902,442
69,400,088
Can you set a task ratio for TaskSet(s) in Locust Python
<pre><code>class UserBehavior1(TaskSet):
    @task
    def task1(self): ...

    @task
    def task2(self): ...

class UserBehavior2(TaskSet):
    @task
    def task1(self): ...

    @task
    def task2(self): ...

class User(HttpUser):
    wait_time = between(n,m)
    host = 'https://example.com'
    tasks = [UserBehavior1, UserBehavior2]
</code></pre> <p>Is there a way to specify a ratio for TaskSet(s) in Locust? I'm aware that the @task decorator takes an additional argument to add weight, but I'm not sure if it would work in this case. I want the tasks within each TaskSet to be weighted equally, but I want the TaskSet(s) to be performed with different ratios (let's say 2:5).</p>
<p>TaskSets can also be weighted with the same <code>@task</code> decorator as individual tasks, as explained in the <a href="https://docs.locust.io/en/stable/tasksets.html" rel="nofollow noreferrer">TaskSet docs</a>, and you can weight them that way. Alternatively, the docs also explain that the <a href="https://docs.locust.io/en/stable/writing-a-locustfile.html#id2" rel="nofollow noreferrer"><code>tasks attribute</code></a> can be defined as a dictionary instead of a list and you can give a weight with them, even for TaskSets. Applied to your sample code, it would look like this:</p> <pre><code>class User(HttpUser): wait_time = between(n,m) host = 'https://example.com' tasks = {UserBehavior1: 2, UserBehavior2: 5} </code></pre> <p>This should give your TaskSets a weighting of 2:5 each time a user is spawned. The individual tasks in the TaskSets then have their own weighting.</p>
python|locust
3
1,902,443
69,377,015
Is there a way to plot NLTK bigrams freqdist selected result?
<p>I'm trying to plot only the relevant pair of words from the bigram freqdist result <a href="https://i.stack.imgur.com/587u6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/587u6.png" alt="Some of the words here are irrelevant, I'm trying to plot only the pairs that make sense. any insights? Thanks" /></a></p>
<p>Just add the last line as: <code>tokenized_bg.plot(20)</code> or <code>tokenized_bg.plot(20, cumulative=True)</code></p>
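<p>If the aim is to keep only the pairs that make sense before plotting, one hedged option is to filter the bigram list first and rebuild the distribution (<code>bigrams</code> below stands for whatever list of word pairs the notebook already built, and the stopword filter is just one example criterion):</p> <pre><code>from nltk import FreqDist
from nltk.corpus import stopwords

stops = set(stopwords.words('english'))
filtered = [(w1, w2) for (w1, w2) in bigrams
            if w1 not in stops and w2 not in stops]
FreqDist(filtered).plot(20)
</code></pre>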
python|pandas|jupyter-notebook|nlp
0
1,902,444
69,450,738
I have a problem with my python Minecraft copy
<p><strong>I was working with &quot;Ursina Engine&quot;.</strong> My project is to make a copy of Minecraft, and I found a problem: every time I run the program and right-click to place a block, nothing happens.</p> <p>Thanks to anyone who can help me find the issue and tell me how to fix it. <em>Here is my code:</em></p> <pre class="lang-py prettyprint-override"><code>from ursina import *
from ursina.prefabs.first_person_controller import FirstPersonController

class Vovel(Button):
    def __init__(self, position = (0,0,0)):
        super().__init__(
            parent=scene,
            position=position,
            model='cube',
            origin_y = 0.5,
            texture= 'white_cube',
            color= color.white,
            highlight_color = color.lime,
        )

    def Input(self, key):
        if self.hovered:
            if key == 'left mouse down':
                vovel = Vovel(position= self.position + mouse.normal)
            if key == 'right mouse down':
                destroy(self)

app = Ursina()

for z in range(8):
    for x in range(8):
        vovel = Vovel(position= (x,0,z))

player = FirstPersonController()
app.run()
</code></pre> <p>End.</p>
<p>The name of the input function is wrong: <code>Input</code> should be <code>input</code>. Ursina only calls the lowercase <code>input(key)</code> hook on entities, so the capitalized method is never invoked.</p>
python|user-interface|ursina
5
1,902,445
69,556,400
Python4Delphi - Error in wrapping delphi interface with TPyDelphiWrapper.WrapInterface
<p>I am using Python 3.8 with Delphi 10.4.2.</p> <p>I am trying to use the components of <a href="https://github.com/pyscripter/python4delphi" rel="nofollow noreferrer">Python4Delphi</a> to access through a Python script some interfaces defined in Delphi.</p> <p>At design-time I added the TPythonEngine, TPythonModule and TPyDelphiWrapper components to my project's VCL form.</p> <p>So I defined 3 interfaces, implemented by 3 classes respectively, as below</p> <pre><code>type IPerson = interface (IUnknown) ['{1D21B5B6-25DE-4884-8BDB-8E2D9A239D64}'] function GetName : string; procedure SetName ( value : string ); property Name: string read GetName write SetName; function GetSurname: string; procedure SetSurname(value : string); property Surname : string read GetSurname write SetSurname; function GetInfo : string; end; ICustomer = interface (IPerson) ['{8742364C-33E8-4FF4-86FB-C19AF67A735B}'] function GetCustomerNumber : string; procedure SetCustomerNumber ( value : string ); property CustomerNumber : string read GetCustomerNumber write SetCustomerNumber; end; ISupplier = interface ( IPerson ) ['{420FFF78-92DE-4D7E-9958-FDA95748EEB7}'] function GetSupplierNumber : string; procedure SetSupplierNumber ( value : string ); property SupplierNumber : string read GetSupplierNumber write SetSupplierNumber; end; TPerson = class ( TInterfacedObject , IPerson) private FName : string; FSurname : string; function GetName : string; procedure SetName ( value : string ); function GetSurname: string; procedure SetSurname(value : string); public property Surname : string read GetSurname write SetSurname; property Name: string read GetName write SetName; function GetInfo : string; virtual; end; TCustomer = class ( TPerson , ICustomer) private FCustomerNumber : string; function GetCustomerNumber : string; procedure SetCustomerNumber ( value : string); public property CustomerNumber : string read GetCustomerNumber write SetCustomerNumber; function GetInfo: string; override; end; TSupplier = class ( TPerson , ISupplier) private FSupplierNumber : string; function GetSupplierNumber : string; procedure SetSupplierNumber ( value : string ); public property SupplierNumber : string read GetSupplierNumber write SetSupplierNumber; function GetInfo : string; override; end; </code></pre> <p>In the Create method of the form, I defined 3 variables, one for each of the 3 interfaces, and through the PyDelphiWrapper I passed them to the Python module in 3 different Python variable.</p> <pre><code>procedure TFrmTestInterface.FormCreate(Sender: TObject); var LPerson : IPerson; LCustomer : ICustomer; LSupplier : ISupplier; Py: PPyObject; begin LPerson := TPerson.Create; LCustomer := TCustomer.Create; LSupplier := TSupplier.Create; LPerson.Name := 'Pippo'; LPerson.Surname := 'Rossi'; LCustomer.Name := 'Pluto'; LCustomer.Surname := 'Verdi'; LSupplier.Name := 'Paperino'; LSupplier.Surname := 'Bianchi'; Py := PyDelphiWrapper1.WrapInterface(TValue.From(LPerson)); PythonModule1.SetVar('delphi_person', Py); GetPythonEngine.Py_DecRef(Py); Py := PyDelphiWrapper1.WrapInterface(TValue.From(LCustomer)); PythonModule1.SetVar('delphi_customer', Py); GetPythonEngine.Py_DecRef(Py); Py := PyDelphiWrapper1.WrapInterface(TValue.From(LSupplier)); PythonModule1.SetVar('delphi_supplier', Py); GetPythonEngine.Py_DecRef(Py); end; </code></pre> <p>At runtime the variables are correctly interpreted, but every time I try to access one of the properties defined in the interface I always get the same error.</p> <p>This is the Python script I try to run:</p> 
<pre><code>from delphi_module import delphi_person, delphi_customer, delphi_supplier print('type(delphi_person) = ', type(delphi_person)) print('type(delphi_customer) = ', type(delphi_customer)) print('type(delphi_supplier) = ', type(delphi_supplier)) print(delphi_person.Name) </code></pre> <p>And the error I get</p> <blockquote> <p>Traceback (most recent call last): File &quot;&quot;, line 7, in AttributeError: Error in getting property &quot;Name&quot;. Error: Unknown attribute</p> </blockquote> <p>The type(...) command runs correctly for the three variables.</p> <p>If instead of using 3 variables of the interface type, I declare each variable as a class type, using the PyDelphiWrapper.Wrap method, everything works correctly!</p> <pre><code>procedure TFrmTestInterface.FormCreate(Sender: TObject); var LPerson : IPerson; LCustomer : ICustomer; LSupplier : ISupplier; Py: PPyObject; begin LPerson := TPerson.Create; LCustomer := TCustomer.Create; LSupplier := TSupplier.Create; LPerson.Name := 'Pippo'; LPerson.Surname := 'Rossi'; LCustomer.Name := 'Pluto'; LCustomer.Surname := 'Verdi'; LSupplier.Name := 'Paperino'; LSupplier.Surname := 'Grandi'; Py := PyDelphiWrapper1.Wrap(LPerson, TObjectOwnership.soReference); PythonModule1.SetVar('delphi_person', py); GetPythonEngine.Py_DECREF(py); Py := PyDelphiWrapper1.Wrap(LCustomer, TObjectOwnership.soReference); PythonModule1.SetVar('delphi_customer', py); GetPythonEngine.Py_DECREF(py); Py := PyDelphiWrapper1.Wrap(LSupplier, TObjectOwnership.soReference); PythonModule1.SetVar('delphi_supplier', py); GetPythonEngine.Py_DECREF(py); end; </code></pre> <p>With the same Python script I get the correct output without errors</p> <p><a href="https://i.stack.imgur.com/R8Ty9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R8Ty9.png" alt="enter image description here" /></a></p> <p>Anyone have any idea what I'm doing wrong with using the TPyDelphiWrapper to wrap interface type variables for Python scripts?</p>
<p>Delphi does not add RTTI on properties defined in interfaces. Therefore property 'Name' cannot be found by the Python engine. RTTI is available for the methods when you add {$M+} before the interface declaration. Calling delphi_person.GetName() should work then.</p> <p>There is another issue with using interfaces and the Python engine, the interface is not locked when you call WrapInterface. Therefore the object will be released when it goes out of scope.</p>
python|delphi|python4delphi
0
1,902,446
69,476,814
Count the number of EC2 instances cross-account
<p>I need to create a python lambda function which checks a set of conditions. One of them is to count the number of running ec2 instances with a specific name from another aws account.</p> <p>I searched stackoverflow and found something like this, but this only counts the instances from the same account/region.</p> <pre><code>def ec2(event, context):
    ec2_resource = boto3.resource('ec2')
    instances = [instance.state['Name'] for instance in ec2_resource.instances.all()]
    ec2_running_instances = instances.count('running')
    print(ec2_running_instances)
</code></pre>
<p>You can't do this directly from your account. You must assume IAM role that is created in the second account, with permissions to describe the instances. Please check: <a href="https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html" rel="nofollow noreferrer">Delegate access across AWS accounts using IAM roles </a>.</p> <p>Once the role exists, you have to use boto3's <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role" rel="nofollow noreferrer">assume_role</a> to assume the role, get <strong>temporary aws credentials</strong>, and then create new <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html" rel="nofollow noreferrer">boto3 session</a> with that credentials.</p>
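<p>A sketch of what that looks like in code once the role exists (the account id and role name below are placeholders to substitute with your own):</p> <pre><code>import boto3

sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::222222222222:role/CrossAccountEC2Read',  # placeholder
    RoleSessionName='count-running-instances',
)['Credentials']

session = boto3.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

ec2 = session.resource('ec2')  # pass region_name=... if needed
running = sum(1 for instance in ec2.instances.all()
              if instance.state['Name'] == 'running')
print(running)
</code></pre>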
python-3.x|amazon-web-services|amazon-ec2|aws-lambda|boto3
0
1,902,447
42,301,984
How to not exceed the maximum number of fonts when generating XLS spreadsheets
<p>I am taking several comma delimited CSV files and using them to generate an XLS spreadsheet in which the names of the files become separate tabs in the spreadsheet. The code I have produces the results I want except for when opening the spreadsheet I get the following warning: "Some text formatting may have changed in this file because the maximum number of fonts was exceeded. It may help to close other documents and try again." I am pretty sure that the problem arises from the code trying to change the format of cells beyond the 65536 row limit, but I'm not sure how to limit the row changes. I need no more than a few hundred rows across four columns.</p> <pre><code>import csv, glob, xlwt, sys, os csvFiles = os.path.join(LogFileFolder, "*") wb = xlwt.Workbook() colNames = ['iNFADS_FAC','CAT','Crosswalk_FAC','FAC'] for filename in glob.glob(csvFiles): (f_path, f_name) = os.path.split(filename) (f_short_name, f_extension) = os.path.splitext(f_name) ws = wb.add_sheet(f_short_name) with open(filename, 'rb') as csvf: csvReader = csv.reader(csvf) for rowx, row in enumerate(csvReader): for colx, value in enumerate(row): if value in colNames: ws.write(rowx, colx, value, xlwt.easyxf( "border: top medium, right medium, bottom double, left medium; font: bold on; pattern: pattern solid, fore_color pale_blue; align: vert centre, horiz centre")) elif value not in colNames: ws.write(rowx, colx, float(value), xlwt.easyxf("align: vert centre, horiz centre")) ##This second "xlwt.easyxf(align...)" part is the offending section of the code, if ##I remove just that part then the problem goes away. Is there a way to keep ##it within the 65536 limit here? else: pass wb.set_active_sheet = 1 outXLS = os.path.join(LogFileFolder, "FAC-CAT Code Changes.xls") wb.save(outXLS) </code></pre>
<p>I wish to thank John Machin at Google Group 'python-excel' for answering my question. Apparently, the solution is to move the easyxf portion to a variable earlier in the script and then just call it whenever needed. So the script should read:</p> <pre><code>csvFiles = os.path.join(LogFileFolder, "*") wb = xlwt.Workbook() headerStyle = xlwt.easyxf("border: top medium, right medium, bottom double," \ "left medium; font: bold on; pattern: pattern solid, fore_color pale_blue;" \ "align: vert centre, horiz centre") valueStyle = xlwt.easyxf("align: vert centre, horiz centre") colNames = ['iNFADS_FAC','CAT','Crosswalk_FAC','FAC'] for filename in glob.glob(csvFiles): (f_path, f_name) = os.path.split(filename) (f_short_name, f_extension) = os.path.splitext(f_name) ws = wb.add_sheet(f_short_name) with open(filename, 'rb') as csvf: csvReader = csv.reader(csvf) for rowx, row in enumerate(csvReader): for colx, value in enumerate(row): if value in colNames: ws.col(colx).width = 256 * 15 ws.write(rowx, colx, value, headerStyle) elif value not in colNames: ws.write(rowx, colx, float(value), valueStyle) else: pass wb.set_active_sheet = 1 outXLS = os.path.join(LogFileFolder, "FAC-CAT Code Changes.xls") wb.save(outXLS) </code></pre>
csv|python-2.7|excel
2
1,902,448
59,083,311
CPU-limited multiprocessing on python Pool nowhere near expected speed up
<p><strong><em>UPDATED WITH SOLUTION</em></strong></p> <p>I am having a hard time understanding Pool.</p> <p>I'd like to run an analysis on 12 independent sets of data at once. The individual analyses do not dependent on each other, don't share data, so I expect a near 12x increase in speed if I can run these in parallel.</p> <p>However, using Pool.map, I get no where near such performance. To try to create a situation where I expect a near 12x sped up, I wrote a really simple function that consists of a for loop and just calculates arithmetic based on the loop variable. No results are stored and no data is loaded. I've done this because another thread on here talked of L2 cache limiting performance, so I've tried to pare down the problem to one where there's no data, just pure computation. </p> <pre><code>import multiprocessing as mp import mp_cfg as _cfg import os import time as _tm NUM_CORE = 12 # set to the number of cores you want to use NUM_COPIES_2_RUN = 12 # number of times we want to run the function print("NUM_CORE %d" % NUM_CORE) print("NUM_COPIES %d" % NUM_COPIES_2_RUN) #################################################### ############################### FUNCTION DEFINITION #################################################### def run_me(args): """ function to be run NUM_COPIES_2_RUN times (identical) """ num = args[0] tS = args[1] t1 = _tm.time() for i in range(5000000): v = ((i+i)*(i*3))/100000. t2 = _tm.time() print("work %(wn)d %(t2).3f - %(t1).3f = %(dt).3f" % {"wn" : num, "t1" : (t1-tS), "t2" : (t2-tS), "dt" : (t2-t1)}) #################################################### ################################## serial execution #################################################### print("Running %d copies of the same code in serial execution" % NUM_COPIES_2_RUN) tStart_serial = _tm.time() for i in range(NUM_COPIES_2_RUN): run_me([i, tStart_serial]) tEnd_serial = _tm.time() print("total time: %.3f" % (tEnd_serial - tStart_serial)) #################################################### ############################################## Pool #################################################### print("Running %d copies of the same code using Pool.map_async" % NUM_COPIES_2_RUN) tStart_pool = _tm.time() pool = mp.Pool(NUM_CORE) args = [] for n in range(NUM_COPIES_2_RUN): args.append([n, tStart_pool]) pool.map_async(run_me, args) pool.close() pool.join() tEnd_pool = _tm.time() print("total time: %.3f" % (tEnd_pool - tStart_pool)) </code></pre> <p>When I run this on my 16 core Linux machine, I get (param set #1)</p> <pre><code>NUM_CORE 12 NUM_COPIES 12 Running 12 copies of the same code in serial execution work 0 0.818 - 0.000 = 0.818 work 1 1.674 - 0.818 = 0.855 work 2 2.499 - 1.674 = 0.826 work 3 3.308 - 2.499 = 0.809 work 4 4.128 - 3.308 = 0.820 work 5 4.937 - 4.128 = 0.809 work 6 5.747 - 4.937 = 0.810 work 7 6.558 - 5.747 = 0.811 work 8 7.368 - 6.558 = 0.810 work 9 8.172 - 7.368 = 0.803 work 10 8.991 - 8.172 = 0.819 work 11 9.799 - 8.991 = 0.808 total time: 9.799 Running 12 copies of the same code using Pool.map work 1 0.990 - 0.018 = 0.972 work 8 0.991 - 0.019 = 0.972 work 5 0.992 - 0.019 = 0.973 work 7 0.992 - 0.019 = 0.973 work 3 1.886 - 0.019 = 1.867 work 6 1.886 - 0.019 = 1.867 work 4 2.288 - 0.019 = 2.269 work 9 2.290 - 0.019 = 2.270 work 0 2.293 - 0.018 = 2.274 work 11 2.293 - 0.023 = 2.270 work 2 2.294 - 0.019 = 2.275 work 10 2.332 - 0.019 = 2.313 total time: 2.425 </code></pre> <p>When I change parameters (param set #2) and run again, I get</p> <pre><code>NUM_CORE 12 NUM_COPIES 6 
Running 6 copies of the same code in serial execution
work 0  0.798 - 0.000 = 0.798
work 1  1.579 - 0.798 = 0.780
work 2  2.355 - 1.579 = 0.776
work 3  3.131 - 2.355 = 0.776
work 4  3.908 - 3.131 = 0.777
work 5  4.682 - 3.908 = 0.774
total time: 4.682

Running 6 copies of the same code using Pool.map_async
work 1  0.921 - 0.015 = 0.906
work 4  0.922 - 0.015 = 0.907
work 2  0.922 - 0.015 = 0.908
work 5  0.932 - 0.015 = 0.917
work 3  2.099 - 0.015 = 2.085
work 0  2.101 - 0.014 = 2.086
total time: 2.121
</code></pre> <p>Using another set of parameters (param set #3),</p> <pre><code>NUM_CORE   4
NUM_COPIES 12

Running 12 copies of the same code in serial execution
work 0  0.784 - 0.000 = 0.784
work 1  1.564 - 0.784 = 0.780
work 2  2.342 - 1.564 = 0.778
work 3  3.121 - 2.342 = 0.779
work 4  3.901 - 3.121 = 0.779
work 5  4.682 - 3.901 = 0.782
work 6  5.462 - 4.682 = 0.780
work 7  6.243 - 5.462 = 0.780
work 8  7.024 - 6.243 = 0.781
work 9  7.804 - 7.024 = 0.780
work 10  8.578 - 7.804 = 0.774
work 11  9.360 - 8.578 = 0.782
total time: 9.360

Running 12 copies of the same code using Pool.map_async
work 3  0.862 - 0.006 = 0.856
work 1  0.863 - 0.006 = 0.857
work 5  1.713 - 0.863 = 0.850
work 4  1.713 - 0.863 = 0.851
work 0  2.108 - 0.006 = 2.102
work 2  2.112 - 0.006 = 2.106
work 6  2.586 - 1.713 = 0.873
work 7  2.587 - 1.713 = 0.874
work 8  3.332 - 2.109 = 1.223
work 9  3.333 - 2.113 = 1.220
work 11  3.456 - 2.587 = 0.869
work 10  3.456 - 2.586 = 0.870
total time: 3.513
</code></pre> <p>This has me totally baffled. Especially for parameter set #2: I'm allowing the use of 12 cores for 6 independent threads of execution, yet my speed-up is only about 2x.</p> <p>What is going on? I've also tried using <code>map()</code> and <code>map_async()</code>, but there seems to be no difference in performance.</p> <hr> <p><strong><em>UPDATE</em></strong>:</p> <p>So there were several things going on here:</p> <p>1) I had fewer cores than I realized. I thought I had 16 cores, but I only had 8 physical cores and 16 logical cores, because hyper-threading was turned on.</p> <p>2) Even if I only had, say, 4 independent processes I wanted to run on these 8 physical cores, I was not getting the expected speed-up. I was expecting something like 3.5x in this case. I would get that much speed-up maybe 10% of the time when I ran the above tests a number of times. Other times, I'd get anywhere from 1.5x to 3.5x - which seemed odd, because I had more than enough cores to do the calculations, yet most of the time the parallelization seemed to work very sub-optimally. This would make sense if I also had lots of other processes on the system, but I am the only user and I had nothing computationally intensive running.</p> <p>3) It turns out that having hyper-threading turned on causes this seeming under-utilization of my hardware. If I turn off hyper-threading</p> <p><a href="https://www.golinuxhub.com/2018/01/how-to-disable-or-enable-hyper.html" rel="nofollow noreferrer">https://www.golinuxhub.com/2018/01/how-to-disable-or-enable-hyper.html</a></p> <p>I would get the expected ~3.5x speed-up every time I ran the script posted above - which is what I expect. </p> <p>PS) Now, my actual code that does my analysis is written in Python, with the numerically intensive portions written using Cython. It also uses NumPy. My NumPy is linked to the Math Kernel Library (MKL), which can take advantage of multiple cores. 
In cases like mine, where multiple independent processes need to be run in parallel, it doesn't make sense to have MKL use multiple cores and interrupt the threads running on other cores, especially since calls to things like <code>dot</code> weren't expensive enough to overcome the overhead of using multiple cores. </p> <p>I thought that perhaps this was the problem originally:</p> <p><a href="https://stackoverflow.com/questions/30791550/limit-number-of-threads-in-numpy">Limit number of threads in numpy</a></p> <p>export MKL_NUM_THREADS=1</p> <p>did improve performance somewhat, but it wasn't as much as I had hoped, prompting me to ask this question here (and for simplicity, I avoided using numpy altogether).</p>
<p>My guess is you're maxing out the CPU in the <code>for</code> loop in:</p> <pre><code>for i in range(5000000):
    v = ((i+i)*(i*3))/100000.
</code></pre> <p>It seems counter-intuitive that with 16 cores you'd max out below that, but what happens when you try a function like <code>time.sleep(1)</code> for each worker -- does it take 16s when run serially and 1s when run on each core? If so, then it would seem to come down to CPU limitations or perhaps the internals of Python's <code>Pool</code> library.</p> <p>Here's an example on my machine using 8 cores, which cuts the wall time from roughly 8s down to about 1.3s, using the most straightforward example I can think of:</p> <pre><code>import time
from multiprocessing import Pool

NUM_TIMES = 8

def func(i):
    time.sleep(1)

# serial
t0=time.time(); [func(i) for i in range(NUM_TIMES)]; print (time.time() - t0)
# 8.020868062973022

# pool.map
t0=time.time(); Pool(NUM_TIMES).map(func, range(NUM_TIMES)); print (time.time() - t0)
# 1.2892770767211914
</code></pre>
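<p>To mimic the question's CPU-bound workload instead of sleeping, one could swap in a busy loop -- a minimal sketch (the <code>busy</code> function is just for illustration, reusing the question's arithmetic and the names from the snippet above):</p> <pre><code>def busy(i):
    # CPU-bound work, similar to the question's run_me()
    total = 0
    for n in range(5000000):
        total += ((n + n) * (n * 3)) / 100000.
    return total

t0 = time.time(); [busy(i) for i in range(NUM_TIMES)]; print(time.time() - t0)          # serial
t0 = time.time(); Pool(NUM_TIMES).map(busy, range(NUM_TIMES)); print(time.time() - t0)  # parallel
</code></pre> <p>If the busy version scales much worse than the sleep version, that points at hardware effects (hyper-threading, turbo clocks) rather than at <code>Pool</code> itself -- which matches the asker's update above.</p>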
python|python-multiprocessing
2
1,902,449
53,876,746
python for loop print in same line in multiple column
<p>I have a simple loop:</p> <pre><code>import string

for i in range(0, 5):
    for a in range(1, 10):
        print(string.ascii_uppercase[i] + str(a) + " test")
</code></pre> <p>but it outputs:</p> <pre><code>a1 test
a2 test
a3 test
</code></pre> <p>what I want is:</p> <pre><code>a1 test
b1 test
a2 test
b2 test
a3 test
b3 test
</code></pre> <p>Can anyone shed some light on how to accomplish this?</p>
<p>Just swap the order of your two for loops:</p> <pre><code>import string

for a in range(1, 10):
    for i in range(0, 5):
        print(string.ascii_uppercase[i] + str(a) + " test")
</code></pre>
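<p>If you instead want the letters for each number printed on one physical line (as the title suggests), Python 3's <code>print</code> takes an <code>end</code> parameter -- a small sketch:</p> <pre><code>import string

for a in range(1, 10):
    for i in range(0, 5):
        print(string.ascii_uppercase[i] + str(a) + " test", end=" ")
    print()  # move to the next line after each row
</code></pre>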
python-3.x|shell
0
1,902,450
54,110,499
Convert an enum to a list in Python
<p>I have an enum that I define like this: </p> <pre><code>def make_enum(**enums):
    return type('Enum', (), enums)

an_enum = make_enum(first=1, second=2)
</code></pre> <p>At some later point I would like to check if a value that I took as a parameter in a function is part of <code>an_enum</code>. Usually I would do it like this </p> <p><code>assert 1 in to_list(an_enum)</code></p> <p>How can I convert the enum object <code>an_enum</code> to a list? If that is not possible, how can I check if a value "is part of the enum"?</p>
<p>Python's Enum class has built-in <code>name</code> and <code>value</code> attributes for each member of an Enum.</p> <pre><code>from enum import Enum

an_enum = Enum('AnEnum', {'first': 1, 'second': 2})

[el.value for el in an_enum]
# returns: [1, 2]

[el.name for el in an_enum]
# returns: ['first', 'second']
</code></pre> <p><em>Side note: be careful with <code>assert</code>. If someone runs your script with <code>python -O</code>, asserts are stripped out and will never fail.</em></p> <p>To check if a value is part of an enum:</p> <pre><code>if 1 in [el.value for el in an_enum]:
    pass
</code></pre>
python|enums|assert
8
1,902,451
53,977,425
VirtualEnv apache2 server No module named 'django'
<p>I am running a virtual environment for my apache2 server from inside <code>/home/myname/myproject/venv</code></p> <p>I activate my virtual environment with </p> <pre><code>source venv/bin/activate </code></pre> <p>Running </p> <pre><code>which django-admin </code></pre> <p>Returns the correct file from inside my virtual environment.</p> <p>Running </p> <pre><code> import django django.__file__ </code></pre> <p>Returns </p> <pre><code>/home/myname/myproject/venv/lib/python3.6/site-packages/django/__init__.py </code></pre> <p>Running </p> <pre><code>pip freeze </code></pre> <p>Returns all of my needed packages.</p> <p>I also have my apache2 config file pointing to the venv directory with the python-path argument</p> <p>However, after restarting the server I'm still getting a ModuleNotFoundError for django.</p> <p>What's the issue here?</p> <p>EDIT: apache2 config file</p> <pre><code> Alias /static /home/myname/myproject/static &lt;Directory /home/myname/myproject/static&gt; Require all granted &lt;/Directory&gt; Alias /media /home/myname/myproject/media &lt;Directory /home/myname/myproject/media&gt; Require all granted &lt;/Directory&gt; &lt;Directory /home/myname/myproject/myproj&gt; &lt;Files wsgi.py&gt; Require all granted &lt;/Files&gt; &lt;/Directory&gt; WSGIScriptAlias / /home/myname/myproject/myproj/wsgi.py WSGIDaemonProcess myproject_app python-path=/home/myname/myproject python-home=/home/myname/myproject/venv/ WSGIProcessGroup myproject_app WSGIApplicationGroup %{GLOBAL} </code></pre> <p>wsgi.py</p> <pre><code>import os from django.core.wsgi import get_wsgi_application #os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproj.settings') application = get_wsgi_application() </code></pre>
<p>You need to add activation of the virtual environment to the <code>wsgi.py</code> file.</p> <pre><code>import os
import sys

PROJECT_DIR = '/home/myname/myproject'
sys.path.insert(0, PROJECT_DIR)

def execfile(filename):
    globals = dict( __file__ = filename )
    exec( open(filename).read(), globals )

activate_this = os.path.join( PROJECT_DIR, 'venv/bin', 'activate_this.py' )
execfile( activate_this )
</code></pre> <p>This code should come before the <code>django</code> import, so that the virtual environment is active before Django is loaded.</p> <p><strong>NOTE:</strong> make sure that you have all requirements installed in the virtual environment.</p>
python|django|python-3.x|apache|mod-wsgi
0
1,902,452
65,074,784
Oversampling after splitting the dataset - Text classification
<p>I am having some issues with the steps to follow for over-sampling a dataset. What I have done is the following:</p> <pre><code># Separate input features and target
y_up = df.Label
X_up = df.drop(columns=['Date','Links', 'Paths'], axis=1)

# setting up testing and training sets
X_train_up, X_test_up, y_train_up, y_test_up = train_test_split(X_up, y_up, test_size=0.30, random_state=27)

class_0 = X_train_up[X_train_up.Label==0]
class_1 = X_train_up[X_train_up.Label==1]

# upsample minority
class_1_upsampled = resample(class_1,
                          replace=True,
                          n_samples=len(class_0),
                          random_state=27)

# combine majority and upsampled minority
upsampled = pd.concat([class_0, class_1_upsampled])
</code></pre> <p>Since my dataset looks like:</p> <pre><code>Label     Text
1         bla bla bla
0         once upon a time
1         some other sentences
1         a few sentences more
1         this is my dataset!
</code></pre> <p>I applied a vectorizer to transform the strings into numbers:</p> <pre><code>X_train_up=upsampled[['Text']]
y_train_up=upsampled[['Label']]

X_train_up = pd.DataFrame(vectorizer.fit_transform(X_train_up['Text'].replace(np.NaN, &quot;&quot;)).todense(), index=X_train_up.index)
</code></pre> <p>Then I applied the logistic regression function:</p> <pre><code>upsampled_log = LogisticRegression(solver='liblinear').fit(X_train_up, y_train_up)
</code></pre> <p>However, I got the following error at this step:</p> <pre><code>X_test_up = pd.DataFrame(vectorizer.fit_transform(X_test_up['Text'].replace(np.NaN, &quot;&quot;)).todense(), index=X_test_up.index)

pred_up_log = upsampled_log.predict(X_test_up)
</code></pre> <blockquote> <p>ValueError: X has 3021 features per sample; expecting 5542</p> </blockquote> <p>Since I was told that I should apply the oversampling after splitting my dataset into train and test, I have not vectorised the test set. My doubts are then the following:</p> <ul> <li>is it right to vectorise the test set afterwards: <code>X_test_up = pd.DataFrame(vectorizer.fit_transform(X_test_up['Text'].replace(np.NaN, &quot;&quot;)).todense(), index=X_test_up.index)</code></li> <li>is it right to apply the over-sampling after splitting the dataset into training and test sets?</li> </ul> <p>Alternatively, I tried the SMOTE function. The code below works, but I would prefer plain oversampling, if possible, rather than SMOTE.</p> <pre><code>from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline

X_train_up, X_test_up, y_train_up, y_test_up=train_test_split(df['Text'],df['Label'], test_size=0.2,random_state=42)

count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train_up)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)

sm = SMOTE(random_state=2)
X_train_res, y_train_res = sm.fit_sample(X_train_tfidf, y_train_up)
print(&quot;Shape after smote is:&quot;,X_train_res.shape,y_train_res.shape)

nb = Pipeline([('clf', LogisticRegression())])
nb.fit(X_train_res, y_train_res)
y_pred = nb.predict(count_vect.transform(X_test_up))
print(accuracy_score(y_test_up,y_pred))
</code></pre> <p>Any comments and suggestions will be appreciated. Thanks</p>
<p>It is better to do the count-vectorizing and transformation on the whole dataset, split into test and train, and keep it as a sparse matrix without converting back into a DataFrame.</p> <p>For example this is a dataset:</p> <pre><code>from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

df = pd.DataFrame({'Text':['This is bill','This is mac','here’s an old saying',
                           'at least old','data scientist years','data science is data wrangling',
                           'This rings particularly','true for data science leaders',
                           'who watch their data','scientists spend days',
                           'painstakingly picking apart','ossified corporate datasets',
                           'arcane Excel spreadsheets','Does data science really',
                           'they just delegate the job','Data Is More Than Just Numbers',
                           'The reason that', 'data wrangling is so difficult',
                           'data is more than text and numbers'],
                   'Label':[0,1,1,0,1,0,0,0,0,0,0,0,0,1,0,0,0,1,0]})
</code></pre> <p>We perform the vectorization and transformation, followed by the split:</p> <pre><code>count_vect = CountVectorizer()
df_counts = count_vect.fit_transform(df['Text'])
tfidf_transformer = TfidfTransformer()
df_tfidf = tfidf_transformer.fit_transform(df_counts)

X_train_up, X_test_up, y_train_up, y_test_up=train_test_split(df_tfidf,df['Label'].values,
                                                              test_size=0.2,random_state=42)
</code></pre> <p>Upsampling can be done by resampling the indices of the minority class:</p> <pre><code>class_0 = np.where(y_train_up==0)[0]
class_1 = np.where(y_train_up==1)[0]
up_idx = np.concatenate((class_0,
                         np.random.choice(class_1,len(class_0),replace=True)
                        ))

upsampled_log = LogisticRegression(solver='liblinear').fit(X_train_up[up_idx,:], y_train_up[up_idx])
</code></pre> <p>And the prediction will work:</p> <pre><code>upsampled_log.predict(X_test_up)
array([0, 1, 0, 0])
</code></pre> <p>If you have concerns about data leakage, that is, some of the information from the test set going into the training through the use of <code>TfidfTransformer()</code>: I have honestly yet to see concrete proof or a demonstration of this, but below is an alternative where you apply the tf-idf transform separately:</p> <pre><code>count_vect = CountVectorizer()
df_counts = count_vect.fit_transform(df['Text'])

X_train_up, X_test_up, y_train_up, y_test_up=train_test_split(df_counts,df['Label'].values,
                                                              test_size=0.2,random_state=42)

class_0 = np.where(y_train_up==0)[0]
class_1 = np.where(y_train_up==1)[0]
up_idx = np.concatenate((class_0,
                         np.random.choice(class_1,len(class_0),replace=True)
                        ))

tfidf_transformer = TfidfTransformer()
upsample_Xtrain = tfidf_transformer.fit_transform(X_train_up[up_idx,:])
upsample_y = y_train_up[up_idx]

upsampled_log = LogisticRegression(solver='liblinear').fit(upsample_Xtrain,upsample_y)

X_test_up = tfidf_transformer.transform(X_test_up)
upsampled_log.predict(X_test_up)
</code></pre>
python|scikit-learn|vectorization|logistic-regression|text-classification
2
1,902,453
22,775,599
How do you programmatically close a wxPython frame?
<p>I am trying to close a wxPython frame in the tearDown method of python's unittest framework. This is the code I am currently attempting to use to setUp and tearDown the frame.</p> <pre><code>class ValidInputTest4(unittest.TestCase): def setUp(self): total_food_calories = wx.App() self.one = FoodCalories(None) total_food_calories.MainLoop() def tearDown(self): self.one.Close() </code></pre> <p>This code properly displays the application, but it fails to completely close the application as if a user had manually clicked the "X" button in the top right corner.</p>
<p>Try the <code>Destroy()</code> method instead. <code>Close()</code> only generates a close event, which a handler can veto, while <code>Destroy()</code> actually destroys the window.</p>
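<p>A minimal sketch of the <code>tearDown</code>, assuming <code>self.one</code> is the frame as in the question:</p> <pre><code>def tearDown(self):
    # Destroy() frees the frame immediately instead of just
    # sending a (vetoable) close event like Close() does
    self.one.Destroy()
</code></pre>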
python-2.7|wxpython
0
1,902,454
22,732,449
Categorical variable in Rpy2 (factor function)
<p>How can I do this in rpy2? mydata is a dataframe and rank is a variable from 1 to 4</p> <pre><code>mydata$rank &lt;- factor(mydata$rank)
</code></pre>
<p>According to this <a href="http://rpy.sourceforge.net/rpy2/doc-2.3/html/vector.html#dataframe" rel="nofollow">http://rpy.sourceforge.net/rpy2/doc-2.3/html/vector.html#dataframe</a>, you have to do it in an indirect way:</p> <pre><code>In [51]: import pandas as pd import rpy2.robjects as ro import pandas.rpy.common as py2r In [52]: DF=pd.DataFrame({'val':[1,1,1,2,2,3,3]}) In [53]: r_DF = py2r.convert_to_r_dataframe(DF) In [54]: print r_DF val 0 1 1 1 2 1 3 2 4 2 5 3 6 3 In [55]: from rpy2.robjects.vectors import DataFrame r_DF2=DataFrame({'factor': ro.r['factor'](r_DF.rx2('val')), 'val': r_DF.rx2('val')}) In [56]: print r_DF2 val factor 1 1 1 2 1 1 3 1 1 4 2 2 5 2 2 6 3 3 7 3 3 </code></pre>
python|r|rpy2|logistic-regression
0
1,902,455
22,592,764
Numpy Detection of region borders
<p>Given a 1 dimensional array of values:</p> <p>A = [x,..,x,0,..,0,x,..,x,0,..,0,x,..,x,........]</p> <p>where:</p> <p>x,..,x stands for an arbitrary number of arbitrary values</p> <p>and</p> <p>0,..,0 stands for an arbitrary number of zeros</p> <p>I need to find a fast algorithm to find the indices of the borders, i.e.: ..,x,0,.. and ..,0,x,..</p> <p>This problem seems to lend itself to parallelization, but that is beyond my experience. Simple looping over the array is too slow, as the data is too big.</p> <p>THX Martin</p>
<p>@chthonicdaemon's answer gets you 90% of the way there, but if you actually want to use the indices to chop up your array, you need some additional information. </p> <p>Presumably, you want to use the indices to extract the regions of the array that aren't 0. You've found the indices where the array changes, but you don't know if the change was from <code>True</code> to <code>False</code> or the opposite way around. Therefore, you need to check the first and last values and adjust accordingly. Otherwise, you'll wind up extracting the segment of zeros instead of data in some cases.</p> <p>For example:</p> <pre><code>import numpy as np

def contiguous_regions(condition):
    """Finds contiguous True regions of the 1D boolean array "condition".
    Returns a 2D array where the first column is the start index of the region
    and the second column is the end index."""

    # Find the indices of changes in "condition"
    idx = np.flatnonzero(np.diff(condition)) + 1

    # Prepend or append the start or end indices to "idx"
    # if there's a block of "True"'s at the start or end...
    if condition[0]:
        idx = np.append(0, idx)
    if condition[-1]:
        idx = np.append(idx, len(condition))

    return idx.reshape(-1, 2)

# Generate an example dataset...
t = np.linspace(0, 4*np.pi, 20)
x = np.abs(np.sin(t)) + 0.1
x[np.sin(t) &lt; 0.5] = 0

print x

# Get the contiguous regions where x is not 0
for start, stop in contiguous_regions(x != 0):
    print x[start:stop]
</code></pre> <p>So in this case, our example dataset looks like:</p> <pre><code>array([ 0.        ,  0.71421271,  1.06940027,  1.01577333,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ,
        0.        ,  0.93716648,  1.09658449,  0.83572391,  0.        ,
        0.        ,  0.        ,  0.        ,  0.        ,  0.        ])
</code></pre> <p>And by doing:</p> <pre><code>for start, stop in contiguous_regions(x != 0):
    print x[start:stop]
</code></pre> <p>We'll get:</p> <pre><code>[ 0.71421271  1.06940027  1.01577333]
[ 0.93716648  1.09658449  0.83572391]
</code></pre>
python|numpy
2
1,902,456
28,620,313
Django Python Validation
<p>we aren't having a common validation issue. The issue we are having is that 0 validation is being checked. As long as the username has input form will submit as it should. We would like to not have the form submit and just display an error message above the field. Here is the code:</p> <h2>forms.py</h2> <pre><code>class register_form(forms.ModelForm): username = forms.CharField(max_length=200, help_text="Username: ") fname = forms.CharField(max_length=200, help_text="First Name: ") lname = forms.CharField(max_length=200, help_text="Last Name: ") email = forms.CharField(max_length=255, help_text="Email: ") remail = forms.CharField(max_length=255, help_text="Re-Type Email: ") passwd = forms.CharField(widget=forms.PasswordInput(), max_length=100, help_text="Password: ") rpasswd = forms.CharField(widget=forms.PasswordInput(), max_length=100, help_text="Re-Type Password: ") class Meta: model = User fields = () def clean_email(self): cd = self.cleaned_data email = cd.get('email') if validate_email(email): raise forms.ValidationError("Please enter a proper email address") </code></pre> <h2>views.py</h2> <pre><code>def addUser(request): context = RequestContext(request) if request.method == 'POST': form = register_form(request.POST) password = request.POST['passwd'] username = request.POST['username'] email = request.POST['email'] objects = UserManager() user = User.objects.create_user(username, email, password) if form.is_valid(): user.set_password(password) user.save() else: print form.errors else: form = register_form() return render_to_response('testapp/register.html', {'register_form': register_form}, context) </code></pre> <h2>register.html</h2> <pre><code>&lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Registration&lt;/title&gt; &lt;/head&gt; &lt;body&gt; &lt;h1&gt;Register with us&lt;/h1&gt; {% if registered %} &lt;a href="/rango/"&gt;Return to the homepage.&lt;/a&gt;&lt;br /&gt; {% else %} &lt;form id="register_form" method="post" action="/testapp/register/" enctype="multipart/form-data"&gt; {% csrf_token %} {% for hidden in register_form.hidden_fields %} {{ hidden }} {% endfor %} {% for field in register_form.visible_fields %} {{ field.errors }} {{ field.help_text }} {{ field }} &lt;br&gt; {% endfor %} &lt;input type="submit" name="submit" value="Register" /&gt; &lt;/form&gt; {% endif %} &lt;/body&gt; &lt;/html&gt; </code></pre>
<p>You are using <code>request.POST</code> before form validation, which is not good. Move the processing after <code>form</code> validation and use <code>cleaned_data</code>, like:</p> <pre><code>if request.method == 'POST':
    form = register_form(request.POST)
    if form.is_valid():
        username = form.cleaned_data['username']
        password = form.cleaned_data['passwd']
        email = form.cleaned_data['email']
        objects = UserManager()
        user = User.objects.create_user(username, email, password)
        user.set_password(password)
        user.save()
else:
    form = register_form()
</code></pre>
python|django|validation
2
1,902,457
68,729,232
How to add the first four digits in to another list
<p>I want to take the first four digits from the already shuffled list and add them to a new list. I thought of append but all it does is return the whole list 4 times.</p> <pre><code>base = [1, 2, 3, 4, 5, 6] random.shuffle(base) correct = [] for i in range(4): correct.append(base) print(correct) </code></pre>
<p>You can do it like this:</p> <pre><code>import random base = [1, 2, 3, 4, 5, 6] random.shuffle(base) correct = base[:4] print(correct) </code></pre> <p>This works because of <a href="https://www.geeksforgeeks.org/python-list-slicing/" rel="nofollow noreferrer">list slicing</a></p> <p>If for whatever reason you want to avoid using list slicing here is how you would do it in a way you originally tried.</p> <pre><code>base = [1, 2, 3, 4, 5, 6] random.shuffle(base) correct = [] for i in range(4): correct.append(base[i]) print(correct) </code></pre>
python|list
2
1,902,458
41,411,330
Getting metadata from links using BeautifulSoup
<p>I'm trying to scrape links to get the title, description, and image to give a small overview of the article or webpage. Currently I have og:title by getting the meta property through BeautifulSoup. This works fine for news articles. </p> <pre><code>if tag.get("property", None) == "og:title": scraper.title = tag.get("content", None) </code></pre> <p>However, <a href="https://rads.stackoverflow.com/amzn/click/com/B01K9KW9A4" rel="nofollow noreferrer" rel="nofollow noreferrer">links for an Amazon Echo for example</a>, don't pull any images or product title. How can I go about doing this using BeautifulSoup and Python and pulling the first image found and the title from any website -- maybe not just one supported by opengraph? </p>
<p><a href="https://github.com/hboisgibault/unicontent" rel="nofollow noreferrer">unicontent</a> is a library trying to achieve that. It will get the opengraph tags or the HTML tags, or other types of tags. I don't think it can get the first image inside the page though.</p>
python|django|amazon-web-services|beautifulsoup|facebook-opengraph
1
1,902,459
56,985,814
Python 3 intermittent ssl.SSLEOFError
<p>I'm doing a fetch from google sheets using <code>pygsheets</code> python module every 90 secs.</p> <p>During early hours of morning (usually between 2-3 AM) this operation fails, and I get this error logged:</p> <pre><code>Traceback (most recent call last): File "/etc/naemon/naemon-automation/exec/pull-GSheets-CSV.py", line 42, in &lt;module&gt; wks.export(pygsheets.ExportType.CSV, path=outputDir + '/', filename=outputFileName) File "/usr/local/lib/python3.5/dist-packages/pygsheets/worksheet.py", line 1306, in export self.client.drive.export(self, file_format=file_format, filename=filename, path=path) File "/usr/local/lib/python3.5/dist-packages/pygsheets/drive.py", line 210, in export status, done = downloader.next_chunk() File "/usr/local/lib/python3.5/dist-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper return wrapped(*args, **kwargs) File "/usr/local/lib/python3.5/dist-packages/googleapiclient/http.py", line 686, in next_chunk 'GET', headers=headers) File "/usr/local/lib/python3.5/dist-packages/googleapiclient/http.py", line 183, in _retry_request raise exception File "/usr/local/lib/python3.5/dist-packages/googleapiclient/http.py", line 164, in _retry_request resp, content = http.request(uri, method, *args, **kwargs) File "/usr/local/lib/python3.5/dist-packages/google_auth_httplib2.py", line 198, in request uri, method, body=body, headers=request_headers, **kwargs) File "/usr/local/lib/python3.5/dist-packages/httplib2/__init__.py", line 1926, in request cachekey, File "/usr/local/lib/python3.5/dist-packages/httplib2/__init__.py", line 1595, in _request conn, request_uri, method, body, headers File "/usr/local/lib/python3.5/dist-packages/httplib2/__init__.py", line 1501, in _conn_request conn.connect() File "/usr/local/lib/python3.5/dist-packages/httplib2/__init__.py", line 1291, in connect self.sock = self._context.wrap_socket(sock, server_hostname=self.host) File "/usr/lib/python3.5/ssl.py", line 377, in wrap_socket _context=self) File "/usr/lib/python3.5/ssl.py", line 752, in __init__ self.do_handshake() File "/usr/lib/python3.5/ssl.py", line 988, in do_handshake self._sslobj.do_handshake() File "/usr/lib/python3.5/ssl.py", line 633, in do_handshake self._sslobj.do_handshake() ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:645) </code></pre> <p>My code:</p> <pre><code>import pygsheets import sys from pathlib import Path # Import Arguments required for generating Configuration. 
If one is not set - Do not continue # str(sys.argv[1]) # Argument denoting the Google Service Account API Authentication File # str(sys.argv[2]) # Argument denoting the Google Sheets URL Key found when logging into the Google Sheets Manually via browser # str(sys.argv[3]) # Argument denoting the Google Worksheet Name, for unique identification within the Google Spreadsheet # str(sys.argv[4]) # Argument denoting the File Name and output destination # Variable Definitions try: gServiceAccAuthFile = str(sys.argv[1]) gSheetKey = str(sys.argv[2]) gWorksheetName = str(sys.argv[3]) outputFile = str(sys.argv[4]) outputDir = str(Path(outputFile).parents[0]) outputFileName = str(Path(outputFile).stem) except IndexError: print('Not enough Arguments Specified, Required Arguments:\nPOS 1.)\tGoogle Service Account API Authentication File\nPOS 2.)\tGoogle Sheets URL Key\nPOS 3.)\tGoogle Worksheet Name\nPOS 4.)\tOutput File Path &amp; Name') sys.exit() # Authorize Spreadsheet Access gc = pygsheets.authorize(service_file=gServiceAccAuthFile, retries=1) # Open spreadsheet sh = gc.open_by_key(gSheetKey) # Open Worksheet wks = sh.worksheet_by_title(gWorksheetName) # Export as CSV wks.export(pygsheets.ExportType.CSV, path=outputDir + '/', filename=outputFileName) </code></pre> <p>Proposed solutions:</p> <ul> <li>Issue with SSL Module: Update to newer binary?</li> <li>Try / except: What would be the except statement? <code>except ssl.SSLEOFError</code>?</li> <li>Pygsheets: Is there a <code>wks.export()</code> <code>retry</code> function?</li> </ul>
<blockquote> <p>Try / except: What would be the except statement? except ssl.SSLEOFError?<br> Pygsheets: Is there a wks.export() retry function?</p> </blockquote> <p>Combining these – I'm using <code>logging</code> for logging, but adapt as you like. Note that <code>ssl.SSLEOFError</code> is a subclass of <code>ssl.SSLError</code>, so catching the latter covers it.</p> <pre class="lang-py prettyprint-override"><code>import ssl
import logging

log = logging.getLogger(__name__)

for attempt in range(1, 6):  # Try at most 5 times
    try:
        wks.export(pygsheets.ExportType.CSV, path=outputDir + '/', filename=outputFileName)
    except ssl.SSLError as e:
        log.warning('Attempt %d to export sheet failed: %s' % (attempt, e), exc_info=True)
    else:
        break  # success!
else:  # executed if we didn't `break` out
    raise RuntimeError('All attempts to export the sheet failed!')
</code></pre>
python|python-3.x|python-3.5|pygsheets
1
1,902,460
44,659,851
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte, while reading csv file in pandas
<p>I know similar questions has been asked already I have seen all of them and tried but of little help. I am using OSX 10.11 El Capitan, python3.6., virtual environment, tried without that also. I am using jupyter notebook and spyder3.</p> <p>I am new to python, but know basic ML and following a post to learn how to solve Kaggle challenges: <a href="https://www.dataquest.io/blog/kaggle-tutorial/" rel="noreferrer">Link to Blog</a>, <a href="https://www.kaggle.com/c/expedia-hotel-recommendations/data" rel="noreferrer">Link to Data Set</a></p> <p>.I am stuck at the first few lines of code `</p> <pre><code>import pandas as pd destinations = pd.read_csv("destinations.csv") test = pd.read_csv("test.csv") train = pd.read_csv("train.csv") </code></pre> <p>and it is giving me error</p> <pre><code>UnicodeDecodeError Traceback (most recent call last) &lt;ipython-input-19-a928a98eb1ff&gt; in &lt;module&gt;() 1 import pandas as pd ----&gt; 2 df = pd.read_csv('destinations.csv', compression='infer',date_parser=True, usecols=([0,1,3])) 3 df.head() /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision) 653 skip_blank_lines=skip_blank_lines) 654 --&gt; 655 return _read(filepath_or_buffer, kwds) 656 657 parser_f.__name__ = name /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds) 403 404 # Create the parser. 
--&gt; 405 parser = TextFileReader(filepath_or_buffer, **kwds) 406 407 if chunksize or iterator: /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in __init__(self, f, engine, **kwds) 762 self.options['has_index_names'] = kwds['has_index_names'] 763 --&gt; 764 self._make_engine(self.engine) 765 766 def close(self): /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in _make_engine(self, engine) 983 def _make_engine(self, engine='c'): 984 if engine == 'c': --&gt; 985 self._engine = CParserWrapper(self.f, **self.options) 986 else: 987 if engine == 'python': /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in __init__(self, src, **kwds) 1603 kwds['allow_leading_cols'] = self.index_col is not False 1604 -&gt; 1605 self._reader = parsers.TextReader(src, **kwds) 1606 1607 # XXX pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.__cinit__ (pandas/_libs/parsers.c:6175)() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._get_header (pandas/_libs/parsers.c:9691)() UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte </code></pre> <p>Some answers on stakoverflow suggested that it is because it is gzipped, but Chrome downloaded the .csv file and .csv.gz was nowhere to be seen and returned file not found error.</p> <p>I then read somewhere to use <code>encoding='latin1'</code>, but after doing this I am getting parser error:</p> <pre><code>--------------------------------------------------------------------------- ParserError Traceback (most recent call last) &lt;ipython-input-21-f9c451f864a2&gt; in &lt;module&gt;() 1 import pandas as pd 2 ----&gt; 3 destinations = pd.read_csv("destinations.csv",encoding='latin1') 4 test = pd.read_csv("test.csv") 5 train = pd.read_csv("train.csv") /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision) 653 skip_blank_lines=skip_blank_lines) 654 --&gt; 655 return _read(filepath_or_buffer, kwds) 656 657 parser_f.__name__ = name /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in _read(filepath_or_buffer, kwds) 409 410 try: --&gt; 411 data = parser.read(nrows) 412 finally: 413 parser.close() /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in read(self, nrows) 1003 raise ValueError('skipfooter not supported for iteration') 1004 -&gt; 1005 ret = self._engine.read(nrows) 1006 1007 if self.options.get('as_recarray'): /usr/local/lib/python3.6/site-packages/pandas/io/parsers.py in read(self, nrows) 1746 def read(self, nrows=None): 1747 try: -&gt; 1748 data = self._reader.read(nrows) 1749 except StopIteration: 1750 if self._first_chunk: pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader.read (pandas/_libs/parsers.c:10862)() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._read_low_memory (pandas/_libs/parsers.c:11138)() pandas/_libs/parsers.pyx in 
pandas._libs.parsers.TextReader._read_rows (pandas/_libs/parsers.c:11884)() pandas/_libs/parsers.pyx in pandas._libs.parsers.TextReader._tokenize_rows (pandas/_libs/parsers.c:11755)() pandas/_libs/parsers.pyx in pandas._libs.parsers.raise_parser_error (pandas/_libs/parsers.c:28765)() ParserError: Error tokenizing data. C error: Expected 2 fields in line 11, saw 3 </code></pre> <p>I have spent hours to debug this, tried to open the csv files on Atom( no other app could open it), online web-apps(some crashed) but of no help.I have tried using the kernels of other people who have solved the problem, but of no help.</p>
<p>It's still most likely gzipped data. gzip's magic number is <code>0x1f 0x8b</code>, which is consistent with the <code>UnicodeDecodeError</code> you get.</p> <p>You could try decompressing the data on the fly:</p> <pre class="lang-python prettyprint-override"><code>with open('destinations.csv', 'rb') as fd: gzip_fd = gzip.GzipFile(fileobj=fd) destinations = pd.read_csv(gzip_fd) </code></pre> <p>Or use pandas' built-in gzip support:</p> <pre class="lang-python prettyprint-override"><code>destinations = pd.read_csv('destinations.csv', compression='gzip') </code></pre>
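<p>If you want to confirm that guess before changing any code, you can check the file's first two bytes yourself:</p> <pre class="lang-python prettyprint-override"><code>with open('destinations.csv', 'rb') as fd:
    print(fd.read(2) == b'\x1f\x8b')  # True means the file is really gzip data
</code></pre>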
python|python-3.x|csv|pandas|kaggle
57
1,902,461
61,630,221
Low FPS using OpenCv with PiCamera (python)
<p>I am trying to interface my OpenCV program with my Raspberry Pi PiCamera. Every time I use OpenCV to capture video, it drastically drops the FPS. When I capture video using PiCamera's Library, everything is fine and smooth.</p> <ol> <li>Why is this happening?</li> <li>Is there a way to fix it?</li> </ol> <p><strong>This is my code:</strong></p> <pre class="lang-py prettyprint-override"><code>import time import RPi.GPIO as GPIO from PCA9685 import PCA9685 import numpy as np import cv2 try: cap = cv2.VideoCapture(0) cap.set(cv2.CAP_PROP_FPS, 90) cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800) cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 700) while(True): ret, frame = cap.read() cv2.imshow('frame',frame) if cv2.waitKey(1) &amp; 0xFF == ord('q'): break # When everything is done, release the capture except: pwm.exit_PCA9685() print ("\nProgram end") exit() cap.release() cv2.destroyAllWindows() </code></pre> <p><strong>This is the error I'm getting:</strong></p> <p><a href="https://i.stack.imgur.com/lVbSa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lVbSa.png" alt="enter image description here"></a></p>
<ol> <li><p>First of all, those are warnings, not errors.</p></li> <li><p>Reduce the video dimensions, and specify them explicitly.</p></li> <li><p><code>cv2.VideoCapture</code> buffers frames in a queue, so if your per-frame processing is slower than the capture rate, frames pile up and the video appears slowed down.</p></li> </ol> <p>So, here is a bufferless <code>VideoCapture</code>.</p> <p><strong>video_capture_Q_buf.py</strong></p> <pre><code>import cv2, queue as Queue, threading, time

is_frame = True
# bufferless VideoCapture
class VideoCaptureQ:

    def __init__(self, name):
        self.cap = cv2.VideoCapture(name)
        self.q = Queue.Queue()
        t = threading.Thread(target=self._reader)
        t.daemon = True
        t.start()

    # read frames as soon as they are available, keeping only most recent one
    def _reader(self):
        while True:
            ret, frame = self.cap.read()
            if not ret:
                global is_frame
                is_frame = False
                break
            if not self.q.empty():
                try:
                    self.q.get_nowait()  # discard previous (unprocessed) frame
                except Queue.Empty:
                    pass
            self.q.put(frame)

    def read(self):
        return self.q.get()
</code></pre> <p>Using it:</p> <p><strong>test.py</strong></p> <pre><code>import video_capture_Q_buf as vid_cap_q  # import as alias
from video_capture_Q_buf import VideoCaptureQ  # class import
import time

vid_path = 0  # 0 selects the first camera; use a file path for a video

cap = VideoCaptureQ(vid_path)

while True:
    t1 = time.time()

    if vid_cap_q.is_frame == False:
        print('no more frames left')
        break

    try:
        ori_frame = cap.read()
        # do your stuff
    except Exception as e:
        print(e)
        break

    t2 = time.time()
    print(f'FPS: {1/(t2-t1)}')
</code></pre>
python|python-3.x|opencv|cv2
1
1,902,462
23,857,512
Openerp says an existing model does not exist in a many2many relation
<p>I'm trying to establish a many2many relation between my model and account.tax.</p> <p>I'm using the following column definition: </p> <pre><code>'tax_id': fields.many2many('account.tax', 'account_contract_line_tax', 'contract_line_id', 'tax_id', 'Taxes', domain=[('parent_id','=',False)]), </code></pre> <p>And I'm getting the following erorr:</p> <pre><code>2014-05-25 16:18:55,456 31937 ERROR ***_dev openerp.netsvc: Programming Error Many2Many destination model does not exist: `account.tax` Traceback (most recent call last): File "/usr/lib/pymodules/python2.7/openerp/netsvc.py", line 292, in dispatch_rpc result = ExportService.getService(service_name).dispatch(method, params) File "/usr/lib/pymodules/python2.7/openerp/service/web_services.py", line 622, in dispatch security.check(db,uid,passwd) File "/usr/lib/pymodules/python2.7/openerp/service/security.py", line 40, in check pool = pooler.get_pool(db) File "/usr/lib/pymodules/python2.7/openerp/pooler.py", line 49, in get_pool return get_db_and_pool(db_name, force_demo, status, update_module)[1] File "/usr/lib/pymodules/python2.7/openerp/pooler.py", line 33, in get_db_and_pool registry = RegistryManager.get(db_name, force_demo, status, update_module) File "/usr/lib/pymodules/python2.7/openerp/modules/registry.py", line 203, in get update_module) File "/usr/lib/pymodules/python2.7/openerp/modules/registry.py", line 233, in new openerp.modules.load_modules(registry.db, force_demo, status, update_module) File "/usr/lib/pymodules/python2.7/openerp/modules/loading.py", line 350, in load_modules force, status, report, loaded_modules, update_module) File "/usr/lib/pymodules/python2.7/openerp/modules/loading.py", line 256, in load_marked_modules loaded, processed = load_module_graph(cr, graph, progressdict, report=report, skip_modules=loaded_modules, perform_checks=perform_checks) File "/usr/lib/pymodules/python2.7/openerp/modules/loading.py", line 165, in load_module_graph init_module_models(cr, package.name, models) File "/usr/lib/pymodules/python2.7/openerp/modules/module.py", line 374, in init_module_models result = obj._auto_init(cr, {'module': module_name}) File "/usr/lib/pymodules/python2.7/openerp/osv/orm.py", line 3028, in _auto_init self._m2m_raise_or_create_relation(cr, f) File "/usr/lib/pymodules/python2.7/openerp/osv/orm.py", line 3338, in _m2m_raise_or_create_relation raise except_orm('Programming Error', 'Many2Many destination model does not exist: `%s`' % (f._obj,)) except_orm: ('Programming Error', 'Many2Many destination model does not exist: `account.tax`') </code></pre> <p>Of course, account.tax is existing since I'm using the ERP to established invoice with account module. Furthermore, I can see the model in configuration/database structure/model</p> <p>I nearly copied the line from account module...</p> <p>Any ideas ?</p> <p>A.</p>
<p>Ok...</p> <p>It seems the reason was the lack of an explicit dependency in </p> <pre><code>__openerp__.py
</code></pre> <p>After adding "account" to the dependencies list, it started working...</p> <p>Best regards,</p> <p>A.</p>
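<p>For reference, a minimal sketch of what the manifest change looks like (keys other than <code>depends</code> are placeholders):</p> <pre><code># __openerp__.py
{
    'name': 'My contract module',
    'version': '1.0',
    'depends': ['base', 'account'],  # 'account' makes account.tax available
}
</code></pre>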
python|orm|openerp
2
1,902,463
20,407,936
matplotlib not displaying intersection of 3D planes correctly
<p>I want to plot two planes and find their intersection line, but I get this result, where it's impossible to tell where they intersect, because one plane overlays the other.</p> <p>A 3D projection should hide the non-visible part of the plane; how do I attain this result using <strong>matplotlib</strong>?</p> <p><img src="https://i.stack.imgur.com/bHPMl.png" alt="planes"></p> <p>You can clearly see that these two planes <em>should</em> intersect.</p> <p><img src="https://i.stack.imgur.com/atZk8.png" alt="plane intersect"></p> <p>Here's the code I've used to get this result</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

values = range(-10, 11)

def plotPlane(plot, normal, d, values, colorName):
    # x, y, z
    x, y = np.meshgrid(values, values)
    z = (-normal[0] * x - normal[1] * y - d) * 1. / normal[2]
    # draw plot
    plot.plot_surface(x, y, z, color=colorName)

image = plt.figure().gca(projection='3d')
plotPlane(image, [3, 2, -4], 1, values, "red")
plotPlane(image, [5, -1, 2], 4, values, "gray")
plt.show()
</code></pre>
<p>See <a href="https://stackoverflow.com/questions/14824893/how-to-draw-diagrams-like-this/14825951#14825951">How to draw intersecting planes?</a> for a long explanation + a possible work-around.</p> <p>The short answer is that matplotlib's 3D support is a clever use of projections to generate a 2D view of the 3D object, which is then rendered to the canvas. Due to the way that matplotlib renders (one artist at a time), one artist is either fully above or fully below another. If you need real 3D support, look into <code>mayavi</code>.</p>
python|3d|matplotlib
7
1,902,464
72,115,564
What do i need to change in my code to avoid sheets by sheetname.startswith in Python?
<p>My current code is able to do as desired and make changes in all sheets based on filename that starts with a certain string. But, i just realized that some of the sheets within the file may have slightly different names in the months going forward. My code-</p> <pre><code>import pandas as pd from openpyxl import load_workbook import os cols_to_drop = ['PSI ID','PSIvet Region','PSIvet region num'] column_name_update_map = {'Account name': 'Company Name','Billing address':'Address','Billing city':'City'} for file in os.listdir(&quot;C:/Users/hhh/Desktop/gu/python/PartMatching&quot;): if file.startswith(&quot;PSI&quot;): dfs = pd.read_excel(file, sheet_name=None) output = dict() for ws, df in dfs.items(): if ws in [&quot;Added&quot;]: continue if ws in [&quot;New Members 03.22&quot;, &quot;PVCC&quot;]: #sheets to avoid temp = df temp['Status'] = &quot;Active&quot; if ws == &quot;All Members&quot; else &quot;Cancelled&quot; #drop unneeded columns temp = df.drop(cols_to_drop, errors=&quot;ignore&quot;, axis=1) #rename columns temp = temp.rename(columns=column_name_update_map) #drop empty columns temp = temp.dropna(how=&quot;all&quot;, axis=1) temp['Partner'] = &quot;PSI&quot; output[ws] = temp writer = pd.ExcelWriter(f'{file.replace(&quot;.xlsx&quot;,&quot;&quot;)} (updated headers).xlsx') for ws, df in output.items(): df.to_excel(writer, index=None, sheet_name=ws) writer.save() writer.close() </code></pre> <p>My goal is to make my current code avoid the sheet whose name starts with &quot;New Members&quot;. But as you can see in my code I have to specifically mention New Members 03.22. This sheet next month will be named New Members 04.22 and so wont be compatible with my code to run on a scheduled task. I tried if ws.startswith in [&quot;New Members 03.22&quot;, &quot;PVCC&quot;]: but nothing happened.</p>
<p><code>startswith</code> accepts a single string or a tuple of prefixes, not a list, so you can either break up the test:</p> <pre><code>if any(ws.startswith(x) for x in [&quot;New Members&quot;, &quot;PVCC&quot;]):
</code></pre>
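<p>or pass the prefixes as a tuple directly, which <code>str.startswith</code> supports out of the box:</p> <pre><code>if ws.startswith((&quot;New Members&quot;, &quot;PVCC&quot;)):
</code></pre>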
python|pandas|loops|startswith
0
1,902,465
36,150,061
Strange thing when Python __setitem__ use multiple key
<p>I want to test the type of the key when <code>__setitem__</code> is used. But strangely, I found that part of the code seems to be skipped when I use multiple keys. Here is my test class:</p> <pre><code>class foo():
    def __init__(self):
        self.data=[[1,2],[3,4],[5,6]]

    def __getitem__(self, key):
        return self.data[key]

    def __setitem__(self, key, value):
        print('Key is {0}, type of key is {1}'.format(key,type(key)))
        self.data[key] = value

f = foo()
</code></pre> <p>When using one key it's ok: </p> <pre><code>&gt;&gt;&gt;f[1] = [0,0]
Key is 1, type of key is &lt;class 'int'&gt;
&gt;&gt;&gt;f[1]
[0, 0]
</code></pre> <p>When using two keys the result is correct, but <strong>why is nothing printed out</strong>?</p> <pre><code>&gt;&gt;&gt;f[1][1] = 100
&gt;&gt;&gt;f[1][1]
100
</code></pre> <p>I'm new to Python, any suggestion will be appreciated!</p>
<p><code>f[1][1] = 0</code> is equivalent to</p> <pre><code>f.__getitem__(1).__setitem__(1, 0) </code></pre> <p>It calls <strong><code>__getitem__</code></strong> on your custom class; and this returns <code>[0, 0]</code> or <code>[3, 4]</code> or whatever was stored in <code>f[1]</code>; in any case this value is a plain Python <code>list</code>; then calls the <code>__setitem__</code> on this <code>list</code>. <code>list.__setitem__</code> does not print anything.</p>
python|class|magic-methods
7
1,902,466
35,880,779
Python 301 POST
<p>So basically I'm trying to make a request to this website - <a href="https://panel.talonro.com/login/" rel="nofollow">https://panel.talonro.com/login/</a> which is supposed to respond with a <code>301 redirect</code>.</p> <p>I send the data as I should, but in the end there is no Location header in the response and the status code is <code>200</code> instead of <code>301</code>.</p> <p>I can't figure out what I am doing wrong. Please help</p> <pre><code>def do_request():
    req = requests.get('https://panel.talonro.com/login/').text

    soup = BeautifulSoup(req, 'html.parser')

    csrf = soup.find('input', {'name':'csrfKey'}).get('value')
    ref = soup.find('input', {'name':'ref'}).get('value')

    post_data = {
        'auth':'mylogin',
        'password':'mypassword',
        'login__standard_submitted':'1',
        'csrfKey':csrf,
        'ref':ref,
        'submit':'Go'
    }

    post = requests.post(url = 'https://forum.talonro.com/login/', data = post_data, headers = {'referer':'https://panel.talonro.com/login/'})
</code></pre>
<p>Right now <code>post_data</code> is local to <code>do_request()</code>, so you cannot access it outside of that function. </p> <p>Instead, try this, where you return that info and then pass it in:</p> <pre><code>import requests
from bs4 import BeautifulSoup

def do_request():
    req = requests.get('https://panel.talonro.com/login/').text

    soup = BeautifulSoup(req, 'html.parser')

    csrf = soup.find('input', {'name':'csrfKey'}).get('value')
    ref = soup.find('input', {'name':'ref'}).get('value')

    post_data = {
        'auth':'mylogin',
        'password':'mypassword',
        'login__standard_submitted':'1',
        'csrfKey':csrf,
        'ref':ref,
        'submit':'Go'
    }

    return post_data

post = requests.post(url = 'https://forum.talonro.com/login/', data = do_request(), headers = {'referer':'https://panel.talonro.com/login/'})
</code></pre>
python|redirect|post|python-requests|csrf
0
1,902,467
15,040,584
Getting error in specifying STATIC_ROOT STATIC_URL in Django settings.py file
<p>I am new to django and web application development. I am trying to add css to my demo site (using django v1.4 on windows7) and getting the following error. I have tried all, and wasted almost 3-4 hrs. still not been able to resolve it.</p> <p>I think I am doing some silly mistake, can anyone please let me know where I am making the blunder.</p> <p>either when I do</p> <pre><code>http://localhost:8000/static/css.css </code></pre> <p>or </p> <pre><code>python manage.py collectstatic </code></pre> <p>I get:</p> <blockquote> <p>TypeError: coercing to Unicode: need string or buffer, tuple found</p> </blockquote> <p><strong>setting.py file is:</strong></p> <pre><code>import os PROJECT_PATH = os.path.abspath(os.curdir).replace('\\', '/') DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = ( # ('Your Name', 'your_email@example.com'), ) MANAGERS = ADMINS DATABASES = { 'default': { 'ENGINE' : 'django.db.backends.mysql', # Add 'postgresql_psycopg2', 'mysql', 'sqlite3' or 'oracle'. 'NAME' : 'ecomstore', # Or path to database file if using sqlite3. 'USER' : 'vivek', # Not used with sqlite3. 'PASSWORD' : 'linux', # Not used with sqlite3. 'HOST' : '', # Set to empty string for localhost. Not used with sqlite3. 'PORT' : '', # Set to empty string for default. Not used with sqlite3. } } TIME_ZONE = 'America/Chicago' LANGUAGE_CODE = 'en-us' SITE_ID = 1 USE_I18N = True USE_L10N = True USE_TZ = True MEDIA_ROOT = '' MEDIA_URL = '' STATIC_ROOT = os.path.join(PROJECT_PATH,'static').replace('\\', '/'), STATIC_URL = '/static/' STATICFILES_DIRS = ( os.path.join(PROJECT_PATH, 'static').replace('\\', '/'), ) STATICFILES_FINDERS = ( 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', 'django.contrib.staticfiles.finders.DefaultStorageFinder', ) SECRET_KEY = 'dyi(9i9e*oy#7o#-z$vcnk%d$2)n!)t=3(cqo5=prtp$7e2(*h' TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.Loader', 'django.template.loaders.app_directories.Loader', ) MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', ) ROOT_URLCONF = 'ecomstore.urls' WSGI_APPLICATION = 'ecomstore.wsgi.application' TEMPLATE_DIRS = ( os.path.join(PROJECT_PATH, 'templates').replace('\\', '/'), ) INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.messages', 'django.contrib.staticfiles', ) LOGGING = { 'version': 1, 'disable_existing_loggers': False, 'filters': { 'require_debug_false': { '()': 'django.utils.log.RequireDebugFalse' } }, 'handlers': { 'mail_admins': { 'level': 'ERROR', 'filters': ['require_debug_false'], 'class': 'django.utils.log.AdminEmailHandler' } }, 'loggers': { 'django.request': { 'handlers': ['mail_admins'], 'level': 'ERROR', 'propagate': True, }, } } </code></pre>
<pre><code>STATIC_ROOT = os.path.join(PROJECT_PATH,'static').replace('\\', '/'),
</code></pre> <p>Remove the <code>,</code> at the end of the line. The trailing comma turns the value into a tuple, which is exactly what the &quot;coercing to Unicode: need string or buffer, tuple found&quot; error is complaining about.</p>
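<p>The corrected line:</p> <pre><code>STATIC_ROOT = os.path.join(PROJECT_PATH, 'static').replace('\\', '/')
</code></pre>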
python|django
4
1,902,468
29,623,543
How can i make ipythons RichIPythonWidget use pyside in a pyinstaller environment?
<p>I have an application that i build using pyinstaller, and it uses PySide for its Qt Gui. I included an interactive prompt by embedding an ipython qtconsole. This breaks the builds created by pyinstaller.</p> <p>Here is a minimal (non-)working example:</p> <pre><code>from PySide.QtGui import * from IPython.qt.console.rich_ipython_widget import RichIPythonWidget from IPython.qt.inprocess import QtInProcessKernelManager from IPython.lib import guisupport class IPythonWidget(RichIPythonWidget): def __init__(self, parent=None, **kwargs): super(self.__class__, self).__init__(parent) self.app = app = guisupport.get_app_qt4() self.kernel_manager = kernel_manager = QtInProcessKernelManager() kernel_manager.start_kernel() self.kernel = kernel = kernel_manager.kernel kernel.gui = 'qt4' self.kernel_client = kernel_client = kernel_manager.client() kernel_client.start_channels() if __name__ == '__main__': app = QApplication([]) i = IPythonWidget() i.show() app.exec_() </code></pre> <p>When run directly from source (python mwe.py), it pops up an ipython qt console window. When i bundle this with pyinstaller in one directory and run the exe, i get this:</p> <pre><code>Traceback (most recent call last): File "&lt;string&gt;", line 3, in &lt;module&gt; File "C:\Python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 270, in load_module exec(bytecode, module.__dict__) File "H:\Home\pydd2swid\build\mwe\out00-PYZ.pyz\IPython.qt.console.rich_ipython_widget", line 8, in &lt;module&gt; File "C:\Python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 270, in load_module exec(bytecode, module.__dict__) File "H:\Home\pydd2swid\build\mwe\out00-PYZ.pyz\IPython.external.qt", line 23, in &lt;module&gt; File "H:\Home\pydd2swid\build\mwe\out00-PYZ.pyz\IPython.external.qt_loaders", line 296, in load_qt ImportError: Could not load requested Qt binding. Please ensure that PyQt4 &gt;= 4.7, PyQt5 or PySide &gt;= 1.0.3 is available, and only one is imported per session. Currently-imported Qt library: 'pyqtv1' PyQt4 installed: False PyQt5 installed: False PySide &gt;= 1.0.3 installed: False Tried to load: ['pyside', 'pyqt', 'pyqt5'] </code></pre> <p>and when i build a single executable (pyinstaller -F mwe.py) and run it, i get this:</p> <pre><code>WARNING: file already exists but should not: C:\Users\SARNOW4\AppData\Local\Temp\_MEI62362\Include\pyconfig.h Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; File "C:\Python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 270, in load_module exec(bytecode, module.__dict__) File "H:\Home\pydd2swid\build\mwe\out00-PYZ.pyz\PySide", line 41, in &lt;module&gt; File "H:\Home\pydd2swid\build\mwe\out00-PYZ.pyz\PySide", line 11, in _setupQtDirectories File "H:\Home\pydd2swid\build\mwe\out00-PYZ.pyz\PySide._utils", line 93, in get_pyside_dir File "C:\Python27\Lib\site-packages\PyInstaller\loader\pyi_importers.py", line 409, in load_module module = imp.load_module(fullname, fp, filename, self._c_ext_tuple) RuntimeError: the sip module has already registered a module called PyQt4.QtCore </code></pre> <p>It seems that the way pyinstaller hooks the import mechanism does not work with ipythons qt_loaders. How can i fix this?<br> I am using pyinstaller 2.1, ipython 3.0, python 2.7 (32-bit) on Windows 7.</p>
<p>You can fix it in two ways:</p> <p>1) Overwrite the function <strong>load_qt</strong> included in <em>IPython.external.qt_loaders</em>, e.g.:</p> <pre><code>def load_qt(api_options):
    from PySide import QtCore, QtGui, QtSvg
    return QtCore, QtGui, QtSvg, 'pyside'
</code></pre> <p>so you force it to choose the PySide module.</p> <p>2) Another solution, which does not overwrite the installed IPython module, is to patch the function by reference before importing the IPython widget, e.g.:</p> <pre><code>def new_load_qt(api_options):
    from PySide import QtCore, QtGui, QtSvg
    return QtCore, QtGui, QtSvg, 'pyside'

from IPython.external import qt_loaders
qt_loaders.load_qt = new_load_qt

from IPython.qt.console.rich_ipython_widget import RichIPythonWidget
</code></pre> <p>Now it should work. </p> <p>This was tested using the <em>PyInstaller development version</em>.</p>
python|ipython|pyside|pyinstaller
5
1,902,469
29,676,754
Encoding a JSON file
<p>I get this string from the web</p> <p><code>'Probabilità'</code></p> <p>and I save it in a variable called <code>temp</code>. Then I store it in a dictionary</p> <pre><code>dict["key"]=temp
</code></pre> <p>Then I need to write the whole dictionary to a JSON file, and I use this function</p> <pre><code>json_data = json.dumps(dict)
</code></pre> <p>But when I look at the JSON file written by my code I see this</p> <pre><code>'Probabilit\u00e0'
</code></pre> <p>How can I solve this encoding problem?</p>
<p>Specify the <code>ensure_ascii</code> argument in the <code>json.dumps</code> call:</p> <pre><code>mydict = {} temp = "Probabilità" mydict["key"] = temp json_data = json.dumps(mydict, encoding="utf-8", ensure_ascii=False) </code></pre>
python|json|encoding
1
1,902,470
46,560,930
Change df.columns.names for multiindex columns
<p>I would like to rename one of my levels in a multiindexed columns dataframe in pandas.</p> <pre><code>df.columns.names </code></pre> <p>gives me </p> <pre><code>FrozenList(['level0', 'level1']) </code></pre> <p>I want to rename 'level0' to 'main'.</p> <p>I have tried different approaches, none works:</p> <pre><code> df.columns.set_names('findingkey', level=0, inplace=True) </code></pre> <p>gives me <code>TypeError: 'list' object is not callable</code></p> <p>I also tried to do it directly:</p> <pre><code>df.columns.names[0]='main' </code></pre> <p>with output: <code>TypeError: 'FrozenList' does not support mutable operations.</code></p>
<p>Use:</p> <pre><code>df.columns.names = ['main', 'level1'] </code></pre> <p>Or</p> <pre><code>df = df.rename_axis(['main', 'level1'], axis=1) </code></pre>
python|pandas|multi-index
3
1,902,471
46,470,359
Highest number in dictionary values
<p>How can I retrieve the key in the dictionary which contains the highest number in its list of values?</p> <pre><code>l = {
 '1': [1, 2, 3, 4, 5, 6, 8, 9, 10, 11],
 '3': [1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
 '5': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21, 17, 18, 19],
 '4': [4, 1, 2, 3, 5, 6],
 '7': [1, 2, 8, 3, 4, 5, 6, 7]
}
</code></pre> <p>In this example key 5 contains the highest value (21), so '5' should be returned.</p>
<p>you can apply <code>max</code> on the dictionary keys, using a key function which returns the maximum value of each key's <code>list</code>:</p> <pre><code>l = {'1': [1, 2, 3, 4, 5, 6, 8, 9, 10, 11],
 '3': [1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15],
 '5': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21, 17, 18, 19],
 '4': [4, 1, 2, 3, 5, 6],
 '7': [1, 2, 8, 3, 4, 5, 6, 7]}

print(max(l.keys(), key=lambda k: max(l[k])))
</code></pre> <p>result: <code>5</code></p> <p>EDIT: that works, but it unnecessarily looks the values up through the dict inside the key function. Better to get the key+value pairs and work on the list of <code>tuple</code>s directly; in the end, take the first element of the winning tuple (the key):</p> <pre><code>max(l.items(), key=lambda kv: max(kv[1]))[0]
</code></pre>
python|python-2.7|list|dictionary|max
3
1,902,472
61,073,113
Error when converting TIF images into jpg with Python
<p>I have a folder that contains 2000 TIF images and I want to convert them to jpg images. I wrote two scripts and both work well until they have converted 370 images, and then they raise an error.</p> <p>Here is my first code:</p> <pre><code>DestPath='/media/jack/Elements/ToJPG95/'
from PIL import Image
import os

def change(path, row):
    filename1=path+row
    filename=row.split('.')[0] + '.jpg'
    im = Image.open(filename1)
    img= im.convert('RGB')
    Dest=os.path.join(DestPath,filename)
    img.save(Dest, format='JPEG',quality=95)

import csv
sourcePath='/media/jack/Elements/TifImages/'
with open("TIFFnames.csv") as f:
    filtered = (line.replace('\n', '') for line in f)
    reader = csv.reader(filtered)
    for row in filtered:
        change(sourcePath , row)
</code></pre> <p>And here is my second code, which I ran inside the folder that has the images:</p> <pre><code>from PIL import Image  # Python Image Library - Image Processing
import glob

DestPath='/media/jack/Elements/ToJPG95/'
print(glob.glob("*.TIF"))
for file in glob.glob("*.TIF"):
    im = Image.open(file)
    rgb_im = im.convert('RGB')
    rgb_im.save(DestPath+file.replace("TIF", "jpg"), quality=95)
# based on SO Answer: https://stackoverflow.com/a/43258974/5086335
</code></pre> <p>They convert up to 370 images and then give an error. Here is the error I am getting:</p> <pre><code>Traceback (most recent call last):
  File "conmg.py", line 7, in &lt;module&gt;
    rgb_im = im.convert('RGB')
  File "/home/jack/.local/lib/python3.6/site-packages/PIL/Image.py", line 873, in convert
    self.load()
  File "/home/jack/.local/lib/python3.6/site-packages/PIL/TiffImagePlugin.py", line 1070, in load
    return self._load_libtiff()
  File "/home/jack/.local/lib/python3.6/site-packages/PIL/TiffImagePlugin.py", line 1182, in _load_libtiff
    raise OSError(err)
OSError: -2
</code></pre> <p>I have tried the imagemagick approach mentioned in the solution <a href="https://askubuntu.com/questions/60401/batch-processing-tif-images-converting-tif-to-jpeg">Here</a>, but this is what I am getting when I press enter to run the command:</p> <pre><code>jack@jack-dell:/media/jack/Elements/TifImages$ for f in *.tif; do echo "Converting $f"; convert "$f" "$(basename "$f" .tif).jpg"
&gt;
&gt;
&gt;
&gt;
</code></pre> <p>As you can see, it does nothing. I think the codes work well, but for some reason they fail after converting 370 images. I am running on a 6 TB external hard drive. Can anyone please tell me what's wrong?</p>
<p>As @fmw42 says, you likely have a problem with the 370th file (corrupt, or some ill-supported TIFF variant). Your bash code would convert all the files that can be read; it doesn't work because you are missing a closing <code>done</code>:</p> <pre><code> for f in *.tif; do echo "Converting $f"; convert "$f" "$(basename "$f" .tif).jpg" ; done
</code></pre> <p>Your Python would also convert all the readable files if you use try/except to catch errors and continue with the next file:</p> <pre><code>for file in glob.glob("*.TIF"):
    try:
        im = Image.open(file)
        rgb_im = im.convert('RGB')
        rgb_im.save(DestPath+file.replace("TIF", "jpg"), quality=95)
    except:
        print('File not converted:',file)
</code></pre>
python|image|imagemagick|python-imaging-library
1
1,902,473
49,393,185
Pythonnet System.Object[,] to Pandas DataFrame or Numpy Array
<p>I am using Pythonnet to call a C# function which returns a clr Object ( an <strong>n</strong> x <strong>m</strong> matrix). In python the type is <strong>System.Object[,]</strong>. How can I convert this variable to a Pandas DataFrame or something more manageable?</p> <p>Thank you.</p>
<p>Iterate over both dimensions of the .NET array and feed the rows to the <code>DataFrame</code> constructor. Note the index order: the first index into the array must run over <code>GetLength(0)</code>:</p> <pre><code>pd.DataFrame([[obj[i, j] for j in range(obj.GetLength(1))] for i in range(obj.GetLength(0))])
</code></pre>
python|.net|pandas|python.net|pythonnet
2
1,902,474
21,332,823
Django model without primary and unique field
<p>I have a DB table:</p> <pre><code> url_id( INT(11) )  |  monitor_id( INT(11) )
--------------------+--------------------------
 1                  |  1
 1                  |  2
 1                  |  3
 2                  |  2
</code></pre> <p>And so on. Neither <code>url_id</code> nor <code>monitor_id</code> is unique; more than that, they are foreign keys to other tables (table <code>urls</code> and table <code>monitors</code>), so I can't change the DB structure. In my Django models file I created a model class for this table:</p> <pre><code>class MonitorForUrl(models.Model):
    url = models.ForeignKey(Url, primary_key=True)
    monitor = models.ForeignKey(Monitor)

    class Meta:
        db_table = 'monitors_for_url'
</code></pre> <p>I set the <code>primary_key</code> parameter to True, because Django creates a default <code>*model_name*_id</code> field for the primary key if I don't set my own primary key field. There is no such field in the DB, so I set <code>primary_key</code> to True. But this way I can't create several rows with the same <code>url_id</code> value, because it's the primary key. Can I tell Django not to create the default primary key field without setting the <code>primary_key</code> option, or can you advise me some other way to solve my problem?</p>
<p>Don't create a model for this. Instead, add a <a href="https://docs.djangoproject.com/en/dev/topics/db/examples/many_to_many/" rel="nofollow">many-to-many relation</a> in either the Url and Monitor model. </p> <p>Django will then maintain this intermediate table for you.</p>
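<p>A minimal sketch of what that could look like, assuming your existing <code>Url</code> and <code>Monitor</code> models (the field name <code>monitors</code> is just illustrative):</p> <pre><code>class Url(models.Model):
    # ... your existing fields ...
    monitors = models.ManyToManyField(
        'Monitor',
        db_table='monitors_for_url',  # reuse the existing join table
    )
</code></pre> <p>With this, Django manages the join table through the relation, e.g. <code>some_url.monitors.add(some_monitor)</code>.</p>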
python|django
1
1,902,475
20,945,223
Not all tags are showing in Python
<p>I am parsing an HTML page using the following code:</p> <pre><code>request = urllib2.Request(urllink, None, {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_4) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11'})
urlfile = urllib2.urlopen(request)
page = urlfile.read()
soup = BeautifulSoup(page)
</code></pre> <p>I am generating <code>urllink</code> manually. The problem is that I am not getting the entire webpage from</p> <pre><code>page = urlfile.read()
</code></pre> <p>I can see much more HTML content if I save the page using the "save page as" option. Later I realized that the webpage was internally sending many requests. How can I get the entire page, or can I get those request URLs?</p> <p>Please help me.</p>
<p>When you request a URL, the server returns the source code of that page. The page usually references images, CSS and JS files (we call these static files), and your browser, while rendering the HTML, requests those resources by their URLs. For example, given <code>&lt;img src="/static/a.png" /&gt;</code>, the browser requests <code>/static/a.png</code> to get the image, and the same happens for CSS and JS files.</p> <p>What's more, most websites today are web 2.0, which means they can also request resources asynchronously with ajax, e.g. <code>$.ajax({url:'/xxx' ...})</code> (jQuery). The JS may also modify the DOM tree, for example by adding new tags.</p> <p>So if you want to get all the content a browser would show, you need to parse the HTML for those resource URLs, or replay the ajax requests the JavaScript makes. Alternatively, if you have a browser kernel such as WebKit, you can do the same things a browser does; see <a href="http://jeanphix.me/Ghost.py/" rel="nofollow">ghost.py</a>, <a href="http://docs.seleniumhq.org/" rel="nofollow">selenium</a>, <a href="http://casperjs.org/" rel="nofollow">casperjs</a>, <a href="http://phantomjs.org/" rel="nofollow">phantomjs</a>.</p>
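<p>As a rough illustration of the first approach (parsing the HTML for the resource URLs a browser would fetch), building on the <code>soup</code> object from the question:</p> <pre><code># collect the URLs of resources the page references; these are the
# extra requests a browser would make after loading the HTML
resource_urls = []
for tag in soup.find_all(['img', 'script', 'link', 'iframe']):
    url = tag.get('src') or tag.get('href')
    if url:
        resource_urls.append(url)
print(resource_urls)
</code></pre> <p>Note this only finds statically referenced resources; requests made from JavaScript/ajax will not show up this way.</p>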
python
1
1,902,476
21,202,804
How to write basic webpage in python without using framework?
<p>Could you please tell me how to write a basic webpage in Python without using a framework like <code>Django</code>, <code>Web2Py</code> or other third-party frameworks? Also, I don't want to use <code>CGI</code> for it. I just need a basic <code>MVC</code> structure with a hello-world web page.</p>
<p>I suppose you mean an HTTP server, because for a webpage only you'd use HTML, not Python.</p> <p>I suggest you start with some reading on <a href="http://docs.python.org/3.3/library/http.server.html" rel="nofollow">this</a> page. It's the HTTP server module for Python. As you want to keep things easy, you probably just want to subclass the BaseHTTPRequestHandler or SimpleHTTPRequestHandler classes, especially overriding the do_GET and do_POST methods.</p> <p>Note that this won't force you to use MVC; that's your own responsibility. You'll need an actual framework if you want to enforce MVC.</p>
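<p>A minimal hello-world sketch with Python 3's <code>http.server</code> (the names and port are illustrative):</p> <pre><code>from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # "controller": route on the request path
        if self.path == '/':
            # the "view": a hard-coded hello-world page
            body = b'&lt;html&gt;&lt;body&gt;&lt;h1&gt;Hello, world!&lt;/h1&gt;&lt;/body&gt;&lt;/html&gt;'
            self.send_response(200)
            self.send_header('Content-Type', 'text/html')
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == '__main__':
    HTTPServer(('localhost', 8000), HelloHandler).serve_forever()
</code></pre>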
python|web
2
1,902,477
21,117,205
Why does my button only work once in Python?
<p>I created a button to call my main_menu function when it is clicked, but it can only be clicked once. When it is clicked, it brings the user back to the main menu, but if the user leaves the main menu and clicks on it again, it does nothing. </p> <pre><code>def __init__(self): self.main = main self.main.grid() &lt;Really long tuple declared here&gt; self.main_menu() def main_menu(self): self.main.grid_remove() main = Frame(root) self.main = main self.main.grid() self.sort_button = Tkinter.Button(main, text = "Sort the list using the bubble sort method", command = self.sort_choice) &lt;Some more buttons coded here&gt; self.sort_button.pack() def sort_choice(self): self.main.grid_remove() main = Frame(root) self.main = main self.main.grid() &lt;Some other buttons and messages coded here&gt; self.main_menu.pack() </code></pre> <p>How can I make a button work more than once?</p>
<p>As stated in the comments, the posted code does not express what the intended behavior is, and probably the whole structure of the code should be changed (with proper naming used). I'll try to clear up some concepts, but this answer does not make the posted code valid (it might not even be executable).</p> <p>If you are using global variables (I'm assuming <em>main</em> should be a global variable), then when you want to change their values inside a function you need to declare them with the <em>global</em> statement. Otherwise, assigning to the name inside a function creates a local variable with the same name and won't affect the global variable.</p> <p>Here is a simple code snippet to demonstrate this:</p> <pre><code>val = "x"

def use_global():
    print "Global value is: %s" % val

def change_global_wrong():
    print "changing global in change_global_wrong"
    val = "y"
    print "global in change_global_wrong is: %s" % val

def change_global_correct():
    global val
    print "changing global in change_global_correct"
    val = "y"
    print "global in change_global_correct is: %s" % val

use_global()
change_global_wrong()
use_global()
change_global_correct()
use_global()
</code></pre> <p>So the assignments to <em>main</em> inside the functions are actually creating local variables that go out of scope when each function returns. This means the only reference you have to the <em>main</em> Frame is <em>self.main</em>, so I suggest registering that as the parent for the button:</p> <pre><code>self.sort_button = Tkinter.Button(self.main, text = "Sort the list using the bubble sort method", command = self.sort_choice)
</code></pre> <p>Even when doing this, calling self.main.grid_remove() removes the Frame (and its child widgets) from the layout. Here is a code sample for a Tk app (and, as mentioned in the comments, it includes a way to break it):</p> <pre><code>#!/usr/bin/env python
import datetime
from Tkinter import *

class MyApp(object):
    def __init__(self):
        self.root = Tk()
        self.time_var = StringVar()
        self.time_var.set('...')
        self._init_widgets()

    def _init_widgets(self):
        self.label = Label(self.root, textvariable=self.time_var)
        frame = Frame(self.root)
        self.frame = frame
        self.button = Button(frame, text = "update time", command = self._on_button_click)
        self.frame.grid()
        self.button.grid()
        self.label.grid()

    def _on_button_click(self):
        self.time_var.set(str(datetime.datetime.now()))
        # uncomment these lines to get a broken code
        #self.frame.grid_remove()
        #self.frame = Frame(self.root)
        #self.frame.grid()

    def run(self):
        self.root.mainloop()

if __name__ == '__main__':
    app = MyApp()
    app.run()
</code></pre> <p>Now if you uncomment the bad lines, you will see that calling grid_remove() removes the Frame, and creating a new Frame object (and assigning it to the same reference) does not help us recover: the old frame that had a nice button on it is gone, and the nice button that would update the time is gone with its parent.</p> <p>I'm not sure if the code in this question actually runs (since it is missing the context and other lines), but if it does, I would expect that since clicking the button removes the <em>main</em> frame widget, the button itself should also disappear (and creating a new Frame should not bring it back).</p> <p>Since this is not happening (as you say, the button is showing but runs no action), I conclude that the posted lines of code do not express your situation correctly. Yet I hope these code samples help you understand a bit more about your application.</p>
python|button|tkinter
1
1,902,478
53,779,968
why does Keras fit_generator() load data before actually "training"
<p>Sorry to bother:</p> <p>I am confused by the Keras function <code>fit_generator</code>.</p> <p>I use a custom generator to generate (image, seg_image) pairs for training.</p> <p>Look carefully and you can see that inside the <code>get_seg()</code> function I put a <code>print(path)</code> (the path is just the path of the image read from the data) to track when this function is called. My intention is to find out how <code>fit_generator()</code> gets the data from the generator.</p> <pre><code>#import all the stuff
def get_seg(#parameters ):
    print(path) #to track when this function is called
    return seg_image #for training

#pre-processing image
def getimage(#parameters):
    #do something to image
    return the image #for training

def data_generator():
    #load all the data for training
    zipped =cycle(zip(images,segmentations))
    while True:
        X = []
        Y = []
        for _ in range(batch_size) :
            im , seg = next(zipped)
            X.append(getimage(#parameters))
            Y.append(get_seg(#parameters))
        yield np.array(X) , np.array(Y)

#create a generator
G = data_generator(#parameters)

#start training
for ep in range( epochs ):
    m.fit_generator( G , steps_per_epoch=512, epochs=1,workers=1)
</code></pre> <p>When I start training, I get a really unexpected result. As it goes through training, the terminal first prints out 24 sets of paths, which is the data it takes from the custom <code>data_generator</code>:</p> <pre><code>data/train/0000_mask.png
data/train/0001_mask.png
data/train/0002_mask.png
data/train/0003_mask.png
data/train/0004_mask.png
data/train/0005_mask.png
data/train/0006_mask.png
data/train/0007_mask.png
data/train/0008_mask.png
data/train/0009_mask.png
data/train/0010_mask.png
data/train/0011_mask.png
data/train/0012_mask.png
data/train/0013_mask.png
data/train/0014_mask.png
data/train/0015_mask.png
data/train/0016_mask.png
data/train/0017_mask.png
data/train/0018_mask.png
data/train/0019_mask.png
data/train/0020_mask.png
data/train/0021_mask.png
data/train/0022_mask.png
data/train/0023_mask.png
</code></pre> <p>And then, I do believe, the training starts here:</p> <pre><code>1/512 [..............................] - ETA: 2:14:34 - loss: 2.5879 - acc: 0.1697
</code></pre> <p>Then it loads the data (images) again:</p> <pre><code>data/train/0024_mask.png
data/train/0025_mask.png
</code></pre> <hr> <p>After 512 steps (<code>steps_per_epoch</code>), when the next round of training begins, it again just prints the next 24 paths before training...</p> <p>I would like to know why this is happening. Is this how Keras works, loading data before actually passing it through the network? Or am I misunderstanding or missing some basic knowledge?</p>
<p>Yes, this is how Keras works.</p> <p>Training and loading are two parallel actions; one does not see how the other is going.</p> <p>The <code>fit_generator</code> method has a <code>max_queue_size</code> argument, equal to 10 by default. This means the generator will load data at full speed until the queue is full, so you're loading many batches in advance (this is good: it avoids the model being slowed down waiting for data to load).</p> <p>And the training just checks: are there items in the queue? Good, train.</p> <p>You're printing more paths than one batch at a time because you call <code>get_seg</code> inside a loop but only call <code>yield</code> outside this loop, so a whole batch's worth of paths is printed every time the generator produces one batch.</p>
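<p>For illustration, you could shrink the prefetch queue to see loading and training interleave more tightly (this only changes buffering, not the result):</p> <pre><code># at most 1 batch is loaded ahead of training instead of the default 10
m.fit_generator(G, steps_per_epoch=512, epochs=1, workers=1, max_queue_size=1)
</code></pre>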
python|tensorflow|keras|deep-learning
2
1,902,479
53,400,968
wget with subprocess.call()
<p>I'm working on a domain fronting project. Basically, I'm trying to use the <code>subprocess.call()</code> function to run the following command: <code>wget -O - https://fronteddomain.example --header 'Host: targetdomain.example'</code></p> <p>With the proper domains, I know how to domain front; that is not the problem. I just need some help with writing the <code>subprocess.call()</code> invocation of wget in Python.</p>
<p>I figured it out using curl:</p> <p><code>call(["curl", "-s", "-H", "Host: targetdomain.example", "-H", "Connection: close", "frontdomain.example"])</code></p> <p>(Note that each option and its value must be separate list elements.)</p>
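<p>For reference, the same thing with <code>wget</code> itself, taken directly from the command in the question:</p> <pre><code>from subprocess import call

call(["wget", "-O", "-", "https://fronteddomain.example",
      "--header", "Host: targetdomain.example"])
</code></pre>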
python|subprocess|wget
1
1,902,480
33,260,887
Accept request to microphone in Chrome using selenium
<p>I'm writing selenium tests python using webdriver and my test cases require access to user media (microphone). I tried different variations of <a href="https://stackoverflow.com/questions/21628904/accept-permission-request-in-chrome-using-selenium">--disable-user-media-security</a> flag but couldn't achieve what I want. I'm doing something this:</p> <pre><code>chrome_options = Options() chrome_options.add_argument("--disable-user-media-security") self.driver = webdriver.Chrome('/Users/xxx/Develop/WebME/chromedriver', chrome_options=chrome_options) driver = self.driver </code></pre> <p>Any ideas on how to handle this? There must be some way to do get access to mic :/</p>
<p>For microphone, add this option to chrome:</p> <p><code>chrome_options.add_argument("--use-fake-ui-for-media-stream")</code></p>
python|google-chrome|selenium|selenium-webdriver|selenium-chromedriver
0
1,902,481
33,504,026
Error with openpyxl after converting a script to Python 3.x that utilizes pandas and numpy
<p>About a year ago I wrote a script that took a single column of datetime values and ran a window through the series to determine the greatest "lumping" of values based on an adjustable dimension of time. For example, given a million date time values what is the maximum value of entries that exist within 1 second, or 1 minute, or 1 hour of each other.</p> <p>The problem is that I had a machine blow up on me and lost some of the documentation, specifically the versions of packages that I was working with. I think I've updated the code to execute within 3.x but am now getting errors that seem to suggest that pandas no longer supports the packages I'm trying to use. I've tried just installing a few random versions, updating pip, etc., but am not having much luck.</p> <p>The exact error states, 'UserWarning: Installed openpyxl is not supported at this time. Use >=1.61 and &lt;2.0.0' -- I'm not seeing a version history in their repository. Might just try installing older versions of Python and trying to bash this into place.</p> <p>Here is the code: </p> <pre><code>import numpy as np import pandas as pd # Your original code was correct here. I assumed there will be a data column along with the timestamps. df = pd.read_csv("ET.txt", parse_dates=["dt"]) # Construct a univariate `timeseries` instead of a single column dataframe as output by `read_csv`. # You can think of a dataframe as a matrix with labelled columns and rows. A timeseries is more like # an associative array, or labelled vector. Since we don't need a labelled column, we can use a simpler # representation. data = pd.Series(0, df.dt) print(data) window_size = 1 buckets_sec = data.resample("1S", how="count").fillna(0) # We have to shift the data back by the same number of samples as the window size. This is because `rolling_apply` # uses the timestamp of the end of the period instead of the beginning. I assume you want to know when the most # active period started, not when it ended. Finally, `dropna` will remove any NaN entries appearing in the warmup # period of the sliding window (ie. it will output NaN for the first window_size-1 observations). rolling_count = pd.rolling_apply(buckets_sec, window=window_size, func=np.nansum).shift(-window_size).dropna() print(rolling_count.describe()) # Some interesting data massaging # E.g. See how the maximum hit count over the specified sliding window evolves on an hourly # basis: seconds_max_hits = rolling_count.resample("S", how="max").dropna() # Plot the frequency of various hit counts. This gives you an idea how frequently various # hit counts occur. seconds_max_hits.hist() # Same on a daily basis daily_max_hits = rolling_count.resample("S", how="max").dropna() </code></pre> <p>Screen cap of the error: <a href="http://i.imgur.com/uSv29I5.png" rel="nofollow">http://i.imgur.com/uSv29I5.png</a></p>
<p>I'm not sure why you're seeing an openpyxl-related error, but if you are, it seems like you should update your version of Pandas. There were some significant changes in openpyxl that affected exporting to Excel from Pandas, but these have since been resolved in newer Pandas releases.</p>
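<p>For example (assuming pip manages your packages), something like:</p> <pre><code>pip install --upgrade pandas openpyxl
</code></pre>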
python-3.x|pandas|openpyxl
0
1,902,482
73,646,283
I want to display all videoId via multiple channel id in youtube data API in python
<pre><code>channel_ids=['UCiT9RITQ9PW6BhXK0y2jaeg', 'UC7cs8q-gJRlGwj4A8OmCmXg', 'UC2UXDak6o7rBm23k3Vv5dww']

request = youtube.search().list(
        part='id',
        channelId="UCiT9RITQ9PW6BhXK0y2jaeg",
        type='video',
        order='date',
        maxResults=10
)
</code></pre> <p>I am able to see all video ids if I choose one channel id. The keyword parameter (<code>channelId</code>) takes a string, not a list, so I did this:</p> <pre><code>request = youtube.search().list(
        part='id',
        channelId=','.join(channel_ids),
        type='video',
        order='date',
        maxResults=10
)

{'kind': 'youtube#searchListResponse', 'etag': 'O3-nwicpOKLtnp0e6OBdRVSWFTA', 'regionCode': 'IN', 'pageInfo': {'totalResults': 0, 'resultsPerPage': 0}, 'items': []}
</code></pre> <p>Can you please check what the issue is?</p>
<p>The YouTube Data API v3 <a href="https://developers.google.com/youtube/v3/docs/search/list" rel="nofollow noreferrer">Search: list</a> endpoint <strong>can only filter on a single given YouTube channel id</strong>.</p> <blockquote> <p><strong>channelId</strong><br/> <strong>string</strong><br/> The <strong>channelId</strong> parameter indicates that the API response should only contain resources created by the channel.</p> </blockquote> <p>Source: <a href="https://developers.google.com/youtube/v3/docs/search/list#channelId" rel="nofollow noreferrer">Search: list#channelId</a> documentation.</p> <p>So you are obliged to list video ids channel by channel and then regroup them.</p> <p>Instead of the 100-quota-cost <code>Search: list</code> endpoint, you can, for a quota cost of 1, use <a href="https://developers.google.com/youtube/v3/docs/playlists/list" rel="nofollow noreferrer">Playlists: list</a> to retrieve the <code>uploads</code> playlist id and then pass it to <a href="https://developers.google.com/youtube/v3/docs/playlistItems/list" rel="nofollow noreferrer">PlaylistItems: list</a>, as described more precisely <a href="https://stackoverflow.com/a/27872244/7123660">here</a>.</p>
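<p>A rough sketch of that flow, assuming <code>youtube</code> is an authorized API client as in the question (here the uploads playlist id is fetched via the closely related <a href="https://developers.google.com/youtube/v3/docs/channels/list" rel="nofollow noreferrer">Channels: list</a> endpoint, also 1 quota unit):</p> <pre><code>video_ids = []
for channel_id in channel_ids:
    # get the channel's "uploads" playlist id (1 quota unit)
    ch = youtube.channels().list(part='contentDetails', id=channel_id).execute()
    uploads_id = ch['items'][0]['contentDetails']['relatedPlaylists']['uploads']

    # list the most recent videos in that playlist (1 quota unit per page)
    items = youtube.playlistItems().list(
        part='contentDetails', playlistId=uploads_id, maxResults=10).execute()
    video_ids += [it['contentDetails']['videoId'] for it in items['items']]

print(video_ids)
</code></pre>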
python|web-scraping|youtube-api
0
1,902,483
41,154,656
URLconf not matching URL patterns for loading a template
<p>Python/Django beginner here. I'm running into this error when I am trying to load my topic template:</p> <blockquote> <p>Using the URLconf defined in learning_log.urls, Django tried these URL patterns, in this order: </p> <ol> <li><p>^admin/</p></li> <li><p>^$ [name='index'] </p></li> <li><p>^topics/$ [name='topics'] </p></li> <li><p>^topics/(?P&lt;topic_id&gt;\d+)/$ [name='topic'] </p></li> </ol> <p>The current URL, topics/% url 'learning_logs:topic' topic.id %}, didn't match any of these.</p> </blockquote> <p>Here is my template:</p> <pre><code>{% extends 'learning_logs/base.html' %}

{% block content %}

  &lt;p&gt;Topic: {{ topic }}&lt;/p&gt;

  &lt;p&gt;Entries:&lt;/p&gt;
  &lt;ul&gt;
  {% for entry in entries %}
    &lt;li&gt;
      &lt;p&gt;{{ entry.date_added|date:'M d, Y H:i' }} &lt;/p&gt;
      &lt;p&gt;{{ entry.text|linebreaks }}&lt;/p&gt;
    &lt;/li&gt;
  {% empty %}
    &lt;li&gt;
      There are no entries for this topic yet.
    &lt;/li&gt;
  {% endfor %}
  &lt;/ul&gt;

{% endblock content %}
</code></pre> <p>This is my views.py:</p> <pre><code>from django.shortcuts import render
from .models import Topic

def index(request):
    '''The home page for Learning Log'''
    return render(request, 'learning_logs/index.html')

def topics(request):
    '''Show all topics.'''
    topics = Topic.objects.order_by('date_added')
    context = {'topics': topics}
    return render(request, 'learning_logs/topics.html', context)

def topic(request, topic_id):
    '''Show a single topic and all its entries.'''
    topic = Topic.objects.get(id=topic_id)
    entries = topic.entry_set.order_by('-date_added')
    context = {'topic': topic, 'entries': entries}
    return render(request, 'learning_logs/topic.html', context)
</code></pre> <p>And this is my urls.py code:</p> <pre><code>'''Defines URL patterns for learning_logs.'''

from django.conf.urls import url

from . import views

urlpatterns = [
    # Home page
    url(r'^$', views.index, name='index'),

    # Show all topics.
    url(r'^topics/$', views.topics, name='topics'),

    # Detail page for a single topic
    url(r'^topics/(?P&lt;topic_id&gt;\d+)/$', views.topics, name='topic')
]
</code></pre> <p>I am using Python Crash Course: A Hands-On, Project-Based Introduction to Programming for my tutorials.</p> <p>Any help would be much appreciated.</p>
<p>In</p> <pre><code>urlpatterns = [
    # Home page
    url(r'^$', views.index, name='index'),

    # Show all topics.
    url(r'^topics/$', views.topics, name='topics'),

    # Detail page for a single topic
    url(r'^topics/(?P&lt;topic_id&gt;\d+)/$', views.topics, name='topic')
]
</code></pre> <p>the last url pattern, the detail page for a single topic, tells Django to call the <code>topics()</code> view again when a URL matches that pattern. It should call <code>topic()</code> instead, so it should be:</p> <pre><code># Detail page for a single topic
url(r'^topics/(?P&lt;topic_id&gt;\d+)/$', views.topic, name='topic')
</code></pre>
python|django|url|url-pattern
0
1,902,484
38,487,913
How to put an opencv function to play videos in django views?
<p>I have some python code that uses opencv to play a video from a certain path, and I've been reading about how to incorporate that python code with Django. I saw that the python code can be put into the django views.py file, but my question is: what am I supposed to put as a parameter for the piece of code that renders it, like <code>return render(request, [what do I put here?])</code>? Usually the location of the html file goes after <code>request</code>, but if I want that video to play, do I just specify the html page I want the video to play on? Will that work, or do I have to do something more? Also, if you know any good tutorials that deal with this type of stuff, I would appreciate any links. Thanks in advance.</p> <p>Here's the python code that just plays a video:</p> <pre><code>filename = 'C:/Desktop/Videos/traffic2.mp4'
vidcap = cv2.VideoCapture(filename)

while(vidcap.isOpened()):
    success, frame_org = vidcap.read()
    cv2.imshow('frame',frame_org)
    if cv2.waitKey(1) &amp; 0xFF == ord('q'):
        break

vidcap.release()
cv2.destroyAllWindows()
</code></pre>
<p>Quick answer: Don't bother with templates and <code>render()</code>, just use an <a href="https://docs.djangoproject.com/en/1.9/ref/request-response/#django.http.HttpResponse" rel="nofollow noreferrer"><code>HttpResponse</code></a>. The video will play, then the response will be returned so it all works out in the end.</p> <pre><code>from django.http import HttpResponse def index(request): play_video() return HttpResponse("OK") </code></pre> <hr> <p>Opinions answer:</p> <p>So I've actually done <a href="https://github.com/simon-andrews/mpvrc" rel="nofollow noreferrer">something <em>kinda</em> similar to this</a>.</p> <p>I'd recommend having a main view with the button on it, that when clicked calls a JavaScript function that <a href="https://stackoverflow.com/questions/247483/http-get-request-in-javascript">sends a GET request</a> to another <em>hidden</em> view that actually plays the video on the server.</p> <p>This hidden view would basically be the code snippet I posted above.</p> <p>You may also want to consider putting your video playing code in a <a href="https://stackoverflow.com/questions/2046603/is-it-possible-to-run-function-in-a-subprocess-without-threading-or-writing-a-se">subprocess</a> because Django or the webbrowser might time out or something.</p>
python|html|django|opencv
1
1,902,485
31,197,073
itertools.groupby returning wrong result (this is not about sorting)
<p>I wanted to break a string into words, but keeping the index where the word started. E.g., I want to transform <code>'aaa bbb ccc'</code> into <code>[(0, 'aaa'), (4, 'bbb'), (8, 'ccc')]</code>. <em>This is just the background, not the question</em>.</p> <p>The problem is that I tried to use <code>itertools.groupby</code> with <code>str.isalpha</code> as key, but it's giving me weird results.</p> <p>This code shows what I'm talking about (please ignore the <code>list</code> everywhere. I just wanted to be sure I was dealing with iterables, not iterators):</p> <pre><code>from itertools import groupby text = 'aaa bbb ccc' chars = list(groupby(list(enumerate(text)), lambda x: x[1].isalpha())) result = [list(v) for k, v in chars if k] print result assert result == [ [(0, 'a'), (1, 'a'), (2, 'a')], [(4, 'b'), (5, 'b'), (6, 'b')], [(8, 'c'), (9, 'c'), (10, 'c')]] </code></pre> <p>The variable <code>result</code> is ending up as <code>[[(10, 'c')], [], []]</code> and I don't know why. Maybe I'm missing something really simple here, but I just can't see it.</p>
<p>Correct the code to:</p> <pre><code>chars = groupby(l, lambda x: x[1].isalpha())
result = [list(v) for k, v in chars if k]
</code></pre> <p>To figure out the weird output:</p> <pre><code>&gt;&gt;&gt; l = list(enumerate(text))
&gt;&gt;&gt; chars = groupby(l, lambda x: x[1].isalpha())
&gt;&gt;&gt; list(chars.next()[1])
[(0, 'a'), (1, 'a'), (2, 'a')]
&gt;&gt;&gt; for k,v in list(chars):
        print list(v)

[]
[(10, 'c')]
[]
[]
</code></pre> <p><code>groupby</code> returns sub-iterators that all share the underlying iterator; calling <code>list</code> on the <code>groupby</code> object advances that iterator and consumes the sub-iterators, which is why only leftovers remain.</p>
python|python-2.7|iterator|itertools
0
1,902,486
40,009,384
acos getting error math domain error
<p>Trying to figure out why I am getting an error. My numbers are between -1 and 1, but still errors. </p> <blockquote> <p>ValueError: math domain error</p> </blockquote> <p>Any ideas?</p> <p>Thanks</p> <pre><code>from math import sqrt, acos, pi from decimal import Decimal, getcontext getcontext().prec = 30 class Vector(object): CANNOT_NORMALIZE_ZERO_VECTOR_MSG = 'Cannot normalize the zero vector' def __init__(self, coordinates): try: if not coordinates: raise ValueError self.coordinates = tuple([Decimal(x) for x in coordinates]) self.dimension = len(self.coordinates) except ValueError: raise ValueError('The coordinates must be nonempty') except TypeError: raise TypeError('The coordinates must be an iterable') def __str__(self): return 'Vector: {}'.format(self.coordinates) def __eq__(self, v): return self.coordinates == v.coordinates def magnitude(self): coordinates_squared = [x ** 2 for x in self.coordinates] return sqrt(sum(coordinates_squared)) def normalized(self): try: magnitude = self.magnitude() return self.times_scalar(Decimal(1.0 / magnitude)) except ZeroDivisionError: raise Exception('Cannot normalize the zero vector') def plus(self, v): new_coordinates = [x + y for x, y in zip(self.coordinates, v.coordinates)] return Vector(new_coordinates) def minus(self, v): new_coordinates = [x - y for x, y in zip(self.coordinates, v.coordinates)] return Vector(new_coordinates) def times_scalar(self, c): new_coordinates = [Decimal(c) * x for x in self.coordinates] return Vector(new_coordinates) def dot(self, v): return sum([x * y for x, y in zip(self.coordinates, v.coordinates)]) def angle_with(self, v, in_degrees=False): try: u1 = self.normalized() u2 = v.normalized() angle_in_radians = acos(u1.dot(u2)) if in_degrees: degrees_per_radian = 180. / pi return angle_in_radians * degrees_per_radian else: return angle_in_radians except Exception as e: if str(e) == self.CANNOT_NORMALIZE_ZERO_VECTOR_MSG: raise Exception('Cannot comput an angle with a zero vector') else: raise e def is_orthogonal_to(self, v, tolerance=1e-10): return abs(self.dot(v)) &lt; tolerance def is_parallel_to(self, v): return self.is_zero() or v.is_zero() or self.angle_with(v) == 0 or self.angle_with(v) == pi def is_zero(self, tolerance=1e-10): return self.magnitude() &lt; tolerance print('first pair...') v = Vector(['-7.579', '-7.88']) w = Vector(['22.737', '23.64']) print('is parallel:', v.is_parallel_to(w)) print('is orthogonal:', v.is_orthogonal_to(w)) print('second pair...') v = Vector(['-2.029', '9.97', '4.172']) w = Vector(['-9.231', '-6.639', '-7.245']) print('is parallel:', v.is_parallel_to(w)) print('is orthogonal:', v.is_orthogonal_to(w)) print('third pair...') v = Vector(['-2.328', '-7.284', '-1.214']) w = Vector(['-1.821', '1.072', '-2.94']) print('is parallel:', v.is_parallel_to(w)) print('is orthogonal:', v.is_orthogonal_to(w)) print('fourth pair...') v = Vector(['2.118', '4.827']) w = Vector(['0', '0']) print('is parallel:', v.is_parallel_to(w)) print('is orthogonal:', v.is_orthogonal_to(w)) </code></pre>
<p>Could it be that <code>u1.dot(u2)</code> equals <code>-1.00000000000000018058942747512</code></p> <pre><code>print(u2) print(u1.dot(u2)) angle_in_radians = acos(u1.dot(u2)) </code></pre> <p>This is around line 60</p> <p>Update, with further tests:</p> <pre><code>getcontext().prec = 16 ...... def dot(self, v): print(self.coordinates, v.coordinates) print("asf") result = 0 for x, y in zip(self.coordinates, v.coordinates): print("=================") print("x: ", x) print("y: ", y) print("x*y: ", x*y) result += (x*y) print("=================") print("Result: ", result) print(sum([x * y for x, y in zip(self.coordinates, v.coordinates)])) return sum([x * y for x, y in zip(self.coordinates, v.coordinates)]) </code></pre> <p>Results in:</p> <pre><code>================= x: -0.6932074151971374 y: 0.6932074151971375 x*y: -0.4805365204842965 ================= ================= x: -0.7207381490636552 y: 0.7207381490636553 x*y: -0.5194634795157037 ================= Result: -1.000000000000000 -1.000000000000000 </code></pre> <p>But with:</p> <pre><code>getcontext().prec = 30 </code></pre> <p>The decimal begins to drift.</p> <pre><code>================= x: -0.693207415197137377521618972764 y: 0.693207415197137482701372768190 x*y: -0.480536520484296481693529594664 ================= ================= x: -0.720738149063655170190045851086 y: 0.720738149063655279547013776664 x*y: -0.519463479515703698895897880460 ================= Result: -1.00000000000000018058942747512 </code></pre> <p>Which leaves the result less than -1 breaking the <code>acos()</code> function.</p> <p>After finding the floats were out, I looked through your code I noticed a couple of functions that return floats. The culprit is the <code>sqrt()</code> function which doesn't have a high enough accuracy.</p> <pre><code>def magnitude(self): coordinates_squared = [x ** 2 for x in self.coordinates] return Decimal(sum(coordinates_squared)).sqrt() def normalized(self): try: magnitude = self.magnitude() return self.times_scalar(Decimal(1.0) / magnitude) </code></pre> <p>Using the <code>Decimal(x).sqrt()</code> function will fix your issue. You'll then need to update the <code>normalized()</code> function a bit too.</p>
python|math|vector
2
1,902,487
29,192,624
How to execute a program at the powershell console from a ps1 script
<p>I am new to python and powershell. To automate my testing as much as possible I am trying to trigger the nosetests executable to run at the console when I make a change to a specific .py file. </p> <p>So far I have the code below in file filemonitor.ps1. When I run this at the console and make a change to file lexicon.py "Yippee" is echoed in the console. Good start. However, I have tried various commands to invoke the nosetests script in the action block. My research suggests that something like this should work -> Invoke-Command -ScriptBlock { &amp; $program }<br> However, nothing happens at the console. I think it could be something to do with the fact that nosetests needs to run from the project folder ie ex48 in this example. But I am not sure. </p> <h2>Appreciate any guidance.</h2> <h2>filemonitor.ps1</h2> <pre><code>$watcher = New-Object System.IO.FileSystemWatcher $watcher.Path = "C:\envs\projects\ex48\ex48" $watcher.Filter = "lexicon.py" $watcher.IncludeSubdirectories = $false $watcher.EnableRaisingEvents = $true $program = "C:\envs\acme\Scripts\nosetests.exe" $changed = Register-ObjectEvent $watcher "Changed" -Action { Write-Host "Yippee" } </code></pre>
<p>First of all, for security PowerShell will not run scripts until you allow it, e.g. with</p> <pre><code>Set-ExecutionPolicy bypass
</code></pre> <p>Second, to run a program you can use</p> <pre><code>Start-Process C:\Windows\System32\PING.EXE
</code></pre> <p>or something like</p> <pre><code>.\winrar.exe
</code></pre> <p>Third, if you want to run a PowerShell script from run.exe or a batch file, you can use this syntax:</p> <pre><code>powershell -noexit "&amp; ""c:\test-webssl.ps1"""
</code></pre> <p>As for your script, System.IO.FileSystemWatcher just watches your path, and when a file is created, changed or deleted it notifies you. You can see that with</p> <pre><code>$watcher = New-Object System.IO.FileSystemWatcher
$watcher.Path = "S:\soheil"
$watcher.IncludeSubdirectories = $false
$watcher.EnableRaisingEvents = $true

$changed = Register-ObjectEvent $watcher "Created" -Action {
Write-Host "Created: $($event.Args.Fullpath)"}
</code></pre> <p>Then, from another PowerShell session, if you make a txt file with <code>add-content</code> or make a directory with <code>mkdir</code> in the watched path, the session running the watcher prints something like "Created: ..."; at the end of the script you can also inspect <code>$changed</code>.</p>
python|powershell
0
1,902,488
29,064,182
Using Python 2.7 logging in Python 2.6
<p>Is there a way to use the logging package of python 2.7 in python 2.6?</p> <p>There are a few things that I need like not <a href="https://docs.python.org/2/library/logging.config.html#module-logging.config" rel="nofollow">disabling existing loggers or the dictConfig function</a>.</p>
<p>The <a href="https://pypi.python.org/pypi/logutils" rel="nofollow"><code>logutils</code></a> package might serve your needs - it backports certain functions - including <code>dictConfig()</code> - to earlier Python versions.</p>
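<p>A minimal sketch, assuming <code>logutils</code>' backported module path:</p> <pre><code>from logutils.dictconfig import dictConfig

dictConfig({
    'version': 1,
    'disable_existing_loggers': False,  # the behaviour you wanted
    'handlers': {'console': {'class': 'logging.StreamHandler'}},
    'root': {'handlers': ['console'], 'level': 'INFO'},
})
</code></pre>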
python|python-2.7|logging|python-2.6
1
1,902,489
29,241,639
python variable not defined
<p>Hi, I am following <a href="https://electrosome.com/hc-sr04-ultrasonic-sensor-raspberry-pi/" rel="nofollow">this</a> tutorial, but in the python code I always get a</p> <pre><code>NameError: name 'pulse_start' is not defined
</code></pre> <p>error. What is wrong? What do you suggest? Thanks.</p>
<p>You've made a reference to a variable called <code>pulse_start</code>, but you never defined it, so the interpreter isn't sure what you're talking about. Define it before using it.</p>
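<p>Assuming you are following the linked HC-SR04 tutorial, the usual culprit is that the <code>while</code> loop which assigns <code>pulse_start</code> never runs its body, so one defensive fix is to give the names an initial value first (a sketch, not the exact tutorial code):</p> <pre><code>import time

pulse_start = time.time()  # ensure the name exists even if the loop body never runs
pulse_end = time.time()

while GPIO.input(ECHO) == 0:
    pulse_start = time.time()
while GPIO.input(ECHO) == 1:
    pulse_end = time.time()

pulse_duration = pulse_end - pulse_start
</code></pre>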
python|raspberry-pi
0
1,902,490
8,776,238
'module' object has no attribute 'HTTPSConncetion'
<p>Hi I have the following code but am getting an error that the <code>module object has no attribute HTTPSConnection</code>:</p> <pre><code>from ecomstore import settings import httplib import urllib def do_auth_capture(amount='0.00', card_num=None, exp_date=None, card_cvv=None): delimiter = '|' raw_params = { 'x_login':settings.AUTHNET_LOGIN, 'x_tran_key':settings.AUTHNET_KEY, 'x_type':'AUTH_CAPTURE', 'x_amount':amount, 'x_version':'3.1', 'x_card_num':card_num, 'x_exp_date':exp_date, 'x_delim_char':delimiter, 'x_relay_response':'FALSE', 'x_delim_data':'TRUE', 'x_card_code':card_cvv } params = urllib.urlencode(raw_params) headers = {'content-type':'application/x-www-form-urlencoded', 'content-length':len(params)} post_url = settings.AUTHNET_POST_URL post_path = settings.AUTHNET_POST_PATH cn = httplib.HTTPSConncetion(post_url,httplib.HTTPS_PORT) cn.request('POST',post_path, params, headers) return cn.getresponse().read().split(delimiter) </code></pre> <p>Is there a reason this is happening?</p>
<pre><code> cn = httplib.HTTPSConncetion(post_url,httplib.HTTPS_PORT) </code></pre> <p>You've misspelled "Connection".</p>
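<p>The fixed line:</p> <pre><code>cn = httplib.HTTPSConnection(post_url, httplib.HTTPS_PORT)
</code></pre>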
python|authorize.net
3
1,902,491
8,425,720
reassigning row/list in a 2d list in python
<p>I was trying to create a 2d list in python that was (x + 1) by (y + 1) in size and had some initial values. Those initial values are that the first row contains the numbers 0 to x and the first column contains the numbers 0 to y (both inclusive).</p> <p>Let's say x and y were 3 and 4.</p> <p>So I went: <code>listName = [range(0, x + 1)] * (y + 1);</code></p> <p>This gives me a 2d list that has 5 rows, and each row is a list with the numbers 0 to 3, giving 4 indexes on each row (4 columns):</p> <pre><code>[[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]]
</code></pre> <p>I understand that at this point every row is the same instance, so if I changed any value in one row, all the rows would reflect that change. So to fix that I decided to set each row to a new unique list:</p> <pre><code>for row in listName:
    row = range(0, x + 1);
</code></pre> <p>But I noticed that this seems to have no effect on my original list. Even if I went:</p> <pre><code>for row in listName:
    row = ["A", "B", "C", "D"];
</code></pre> <p>printing before and after the assignment shows 'row' is getting changed, but outside the loop I get my original list when I print it. Even though I've found another way to do what I want, I can't seem to figure out why this happens. Any ideas?</p>
<p>Slice-assign in order to modify the existing list, instead of just rebinding the name.</p> <pre><code>row[:] = ... </code></pre> <p>Also, you're constructing it incorrectly.</p> <pre><code>listName = [range(0, x + 1) for z in range(y + 1)] </code></pre>
python|list-comprehension
1
1,902,492
52,048,854
How to copy a preloaded Pandas Dataframe in Python from an online kernel system, to my main IDE on my Ubuntu System
<p>I'm using an online website that has its own kernel for practising manipulating dataframes, but occasionally the dataframe is preloaded into the kernel, and all I can do is use commands like .head(), .info() and all the other dataframe-oriented commands to manipulate it. However, I am unable to find a way to copy this dataframe in a way that allows me to use its contents in jupyter notebooks or other IDEs on my system.</p> <p>I thought maybe I could save the dataframe as a CSV, but alas I am unable to download files through the online kernel.</p> <p>Any ideas as to how I can bypass this problem?</p>
<p>***Answering my own question after finding an answer.</p> <p>The best way to do this, if you can't download and upload files, is to convert the columns of the dataframe into lists and then copy the output of those lists onto a separate system, where you can then re-zip the columns with each other and thereby recreate the dataframe for use there.</p> <pre><code>Temp = [79.0, 77.4, 76.4, 75.7]
</code></pre> <p>(this list could theoretically have any number of values, no matter how high) This was converted from the dataframe within the kernel by doing</p> <pre><code>print(df['Temperature (deg F)'].values.tolist())
</code></pre> <p>and the lists can be copied elsewhere and then converted back into a dataframe by using</p> <pre><code>dataFrame = pd.DataFrame({'Temperature (deg F)': Temp})
</code></pre>
python|pandas|dataframe|kernel
1
1,902,493
51,801,801
How to get (and append) content of a text file on Google Drive
<p>This should be super easy, but somehow I cannot figure it out by myself... I want to get content of a txt file from Google Drive Api v3, using (for example) python. According to docs (<a href="https://developers.google.com/drive/api/v3/reference/files/get" rel="nofollow noreferrer">https://developers.google.com/drive/api/v3/reference/files/get</a>) <strong>get</strong> method "Gets a file's metadata or <strong>content</strong> by ID." Here is what I have:</p> <pre><code>body = service.files().get(fileId="rGCalhPNeL9HejmmHCJhyt2aBRG40hDhb") print(body) </code></pre> <p>but this prints: <strong>googleapiclient.http.HttpRequest object at 0x1ed5990</strong> instead of file content. What I am doing wrong?</p> <p>Second question is: can I append a new line to existing google drive file? I know how to create a new file and update a file (but this overwrites everything what is inside a file). Is there a way I can just add another line to existing text file?</p> <p>Thanks! </p>
<h3>Answer for question 1 :</h3> <p>How about this modification?</p> <p>From:</p> <pre><code>body = service.files().get(fileId="rGCalhPNeL9HejmmHCJhyt2aBRG40hDhb")
</code></pre> <p>To:</p> <pre><code>body = service.files().get_media(fileId="rGCalhPNeL9HejmmHCJhyt2aBRG40hDhb").execute()
</code></pre> <h3>Answer for question 2 :</h3> <p>For example, if the file is a spreadsheet or slides, there are APIs for adding content. But in the case of a text file, because there are no such specific APIs, the following flow is used when a new line is added:</p> <ol> <li>Download the contents of the file.</li> <li>Add the new line to the contents.</li> <li>Upload the new contents as an update to the text file.</li> </ol> <p>If this answer was not what you want, I'm sorry.</p>
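<p>A minimal sketch of that 3-step flow, assuming <code>service</code> is an authorized Drive v3 client and the file is plain text:</p> <pre><code>import io
from googleapiclient.http import MediaIoBaseUpload

file_id = "rGCalhPNeL9HejmmHCJhyt2aBRG40hDhb"

# 1. Download the current contents (bytes).
content = service.files().get_media(fileId=file_id).execute()

# 2. Append the new line.
content += b"\na new line"

# 3. Upload the new contents as an update to the same file.
media = MediaIoBaseUpload(io.BytesIO(content), mimetype="text/plain")
service.files().update(fileId=file_id, media_body=media).execute()
</code></pre>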
python|google-drive-api|google-api-python-client
3
1,902,494
51,851,367
Translate variables from a Textfile
<p>Let's say I have a textfile which is built like this:</p> <pre><code>text "car" translation "" text "tree" translation "" </code></pre> <p>Now I'm trying to translate this file with Python and I have figured out how to translate strings with googletrans and how to read a file but I can't figure out how to actually read what's within the quotation marks only.</p> <p>This is how far I have gotten</p> <pre><code>from googletrans import Translator f = open('file.txt','r') translator = Translator() text=f.read() </code></pre>
<p>You can get just the values inside the quotes by using the very helpful <a href="https://docs.python.org/3/library/shlex.html" rel="nofollow noreferrer"><code>shlex</code></a> module, which provides utilities for parsing shell-like syntaxes:</p> <pre><code>&gt;&gt;&gt; import shlex &gt;&gt;&gt; shlex.split('text "car"') ['text', 'car'] </code></pre> <p>Note that in addition to stripping the quotation marks, this supports spaces, and escaped quotes, etc:</p> <pre><code>&gt;&gt;&gt; shlex.split(r'text "a thing with spaces and literal \"s in it"') ['text', 'a thing with spaces and literal "s in it'] </code></pre> <p>You can hook this up to your file with something like:</p> <pre><code>with open('file.txt','r') as file_: for line in file_: parsed = shlex.split(line) if parsed[0] == "text": # do translation with with parsed[1] else: # do something else </code></pre>
python
1
1,902,495
51,921,547
anaconda navigator getting stuck at loading applications
<p>I have installed Anaconda on my desktop running Ubuntu 18 MATE LTS. The output of</p> <pre><code>python --version
</code></pre> <p>is</p> <pre><code>Python 3.6.5 :: Anaconda, Inc.
</code></pre> <p>First of all, when I try to launch Anaconda Navigator using</p> <pre><code>anaconda-navigator
</code></pre> <p>the application gets stuck at the "Loading applications" stage. I have to eventually kill this using <kbd>CTRL</kbd>+<kbd>C</kbd>/<kbd>CTRL</kbd>+<kbd>Z</kbd>. The Anaconda Navigator never launches.</p> <p>Following this, I tried</p> <pre><code>conda update anaconda-navigator
</code></pre> <p>and</p> <pre><code>conda update conda
</code></pre> <p>None of these commands work.</p> <p>I tried to look for the solution online, and at one site I was guided to use the following set of commands:</p> <pre><code>source ~/anaconda*/bin/activate root
anaconda-navigator
</code></pre> <p>Even this did not work. It was showing an SSL Verification Failed message. The message was as follows:</p> <blockquote> <pre><code>CondaHTTPError: HTTP 000 CONNECTION FAILED for url &lt;https://repo.anaconda.com/pkgs/main/noarch/repodata.json.bz2&gt;
Elapsed: -

An HTTP error occurred when trying to retrieve this URL. HTTP errors are often intermittent, and a simple retry will get you on your way. If your current network has https://www.anaconda.com blocked, please file a support request with your network engineering team.
</code></pre> <p>SSLError(MaxRetryError('HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/main/noarch/repodata.json.bz2 (Caused by SSLError(SSLError(&quot;bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)&quot;,),))',),)</p> </blockquote> <p>Following this, I googled this problem, and at one of the sites the developers suggested using</p> <pre><code>conda config --set ssl_verify False
</code></pre> <p>I did this. Afterwards, I no longer see the error message (obviously, because SSL verification has been turned off). But instead of any error message, I keep getting the following report at my terminal (no matter what conda command I use). The report looks something like this:</p> <pre><code>environment variables:
               CIO_TEST=&lt;not set&gt;
      CONDA_BACKUP_HOST=x86_64-conda_cos6-linux-gnu
      CONDA_DEFAULT_ENV=base
              CONDA_EXE=/home/upendra/anaconda3/bin/conda
           CONDA_PREFIX=/home/upendra/anaconda3
  CONDA_PROMPT_MODIFIER=(base)
       CONDA_PYTHON_EXE=/home/upendra/anaconda3/bin/python
             CONDA_ROOT=/home/upendra/anaconda3
            CONDA_SHLVL=1
                   PATH=/home/upendra/anaconda3/bin:/home/upendra/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
     REQUESTS_CA_BUNDLE=&lt;not set&gt;
          SSL_CERT_FILE=&lt;not set&gt;
        UBUNTU_MENUPROXY=&lt;set&gt;
          XDG_SEAT_PATH=/org/freedesktop/DisplayManager/Seat0
       XDG_SESSION_PATH=/org/freedesktop/DisplayManager/Session0
              ftp_proxy=&lt;set&gt;
             http_proxy=&lt;set&gt;
            https_proxy=&lt;set&gt;

     active environment : base
    active env location : /home/upendra/anaconda3
            shell level : 1
       user config file : /home/upendra/.condarc
 populated config files : /home/upendra/.condarc
          conda version : 4.5.9
    conda-build version : 3.10.5
         python version : 3.6.5.final.0
       base environment : /home/upendra/anaconda3  (writable)
           channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/free/linux-64
                          https://repo.anaconda.com/pkgs/free/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
                          https://repo.anaconda.com/pkgs/pro/linux-64
                          https://repo.anaconda.com/pkgs/pro/noarch
          package cache : /home/upendra/anaconda3/pkgs
                          /home/upendra/.conda/pkgs
       envs directories : /home/upendra/anaconda3/envs
                          /home/upendra/.conda/envs
               platform : linux-64
             user-agent : conda/4.5.9 requests/2.18.4 CPython/3.6.5 Linux/4.15.0-30-generic ubuntu/18.04 glibc/2.27
                UID:GID : 1000:1000
             netrc file : None
           offline mode : False
</code></pre> <p>The interesting thing is that the first time I launched anaconda-navigator after installing it, the navigator window did get launched. After that, I shut down my system after working, and from then onward the anaconda-navigator window would not launch. I am also not sure whether this is due to the internet connection or the anaconda installation/configuration.</p> <p>Any suggestions?</p>
<p>There was an issue with the proxy settings; it was solved. However, there are some issues with the Anaconda Navigator software: its elements such as Spyder or IPython often fail to communicate with the proxy properly, particularly when there are credentials involved. As a result, downloading datasets can often fail.</p> <p>I used this trick: I installed Anaconda and configured the environment. Then I closed the Anaconda Navigator window and launched Spyder/IPython from the terminal, and things always work for me. In case Spyder gives a segmentation fault upon launch, simply downgrade the mkl package. The above tricks work like magic for me.</p>
python|anaconda
0
1,902,496
51,958,471
Forecasting (finding the right model)
<p>Using Python, I am trying to predict the future sales count of a product, using historical sales data. I am also trying to predict these counts for various groups of products. </p> <p>For example, my columns looks like this:</p> <pre><code>Date Sales_count Department Item Color 8/1/2018, 50, Homegoods, Hats, Red_hat </code></pre> <p>If I want to build a model that predicts the sales_count for each Department/Item/Color combo using historical data (time), what is the best model to use?</p> <p>If I do Linear regression on time against sales, how do I account for various categories? Can I group them?</p> <p>Would I instead use multilinear regression, treating the various categories as independent variables? </p>
<p>The best way I have come across for forecasting in python is the SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous variables) model in the statsmodels library. Here is the link for a very good tutorial on <a href="https://www.digitalocean.com/community/tutorials/a-guide-to-time-series-forecasting-with-arima-in-python-3" rel="nofollow noreferrer">SARIMAX using python</a>.</p> <p>Also, if you are able to group the data frame according to your Department/Item/Color combo, you can put the groups in a loop and apply the same model. Maybe you can create a key for each unique combination, and for each key you can forecast the sales. For example:</p> <pre><code>df=pd.read_csv('your_file.csv')
df['key']=df['Department']+'_'+df['Item']+'_'+df['Color']

for key in df['key'].unique():
    temp=df.loc[df['key']==key]  #filtering only the specific group
    temp=temp.groupby('Date')['Sales_count'].sum().reset_index()
    #aggregating the sum of sales on each date. Ignore if not required.
    #write the forecasting code here from the tutorial
</code></pre>
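<p>As a hedged sketch of what could go inside that loop (the SARIMAX orders below are placeholders; choose them from the tutorial's diagnostics for your own data):</p> <pre><code>import statsmodels.api as sm

# fit a seasonal ARIMA on the aggregated series for this key
model = sm.tsa.statespace.SARIMAX(temp.set_index('Date')['Sales_count'],
                                  order=(1, 1, 1),
                                  seasonal_order=(1, 1, 1, 12))
results = model.fit(disp=False)

# forecast the next 12 periods
forecast = results.get_forecast(steps=12).predicted_mean
print(key, forecast.head())
</code></pre>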
python|pandas|machine-learning|regression|forecasting
1
1,902,497
51,735,106
pandas: assign random numbers in given range to equal column values
<p>I am working with a large dataset, and one of the columns has very long integers, like below:</p> <pre><code>  Column_1     Column_2
1        A  12345123451
2        B  12345123451
3        C  12345123451
4        D  23456789234
5        E  23456789234
6        F  34567893456
</code></pre> <p>What is important here is not the actual number in Column_2, but which rows share the same Column_2 value while having different Column_1 values. I would like to reassign the values of Column_2 randomly from a range of smaller numbers, say (1, 999).</p> <pre><code>  Column_1  Column_2
1        A       120
2        B       120
3        C       120
4        D        54
5        E        54
6        F       567
</code></pre> <p>My issue is figuring out how to express, in a lambda function or otherwise, that each equal value in Column_2 must receive the same random number.</p>
<p>Took a cue from sacul on <code>replace=False</code> (updated answer).</p> <h2>Using <code>pandas.factorize</code> and <code>numpy.random</code></h2> <pre><code>import numpy as np
import pandas as pd

# i holds the integer code of each row's value, r the array of unique values
i, r = pd.factorize(df.Column_2)

# pool of candidate numbers, widened if there happen to be more than 999 uniques
choices = np.arange(max(999, r.size))

# one random number per unique value, drawn without replacement
c = np.random.choice(choices, r.shape, replace=False)

df.assign(Column_2=c[i])

  Column_1  Column_2
1        A       812
2        B       812
3        C       812
4        D       751
5        E       751
6        F       574
</code></pre>
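<p>If the factorize/indexing trick reads as too dense, an equivalent way to write it with an explicit mapping. This sketch assumes there are at most 999 distinct values, so a draw without replacement from 1 to 999 is possible (<code>default_rng</code> requires NumPy 1.17+):</p> <pre><code>import numpy as np

rng = np.random.default_rng()
uniques = df['Column_2'].unique()
codes = rng.choice(np.arange(1, 1000), size=len(uniques), replace=False)

# every occurrence of the same original value gets the same random number
df['Column_2'] = df['Column_2'].map(dict(zip(uniques, codes)))
</code></pre>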
python|pandas|dataframe|random
3
1,902,498
69,061,409
How do I find start and end indices in a Python list for all the rows?
<p>My code -</p> <pre><code>df=pd.read_csv(&quot;file&quot;) l1=[] l2=[] for i in range(0,len(df['unions']),len(df['district'])): l1.append(' '.join((df['unions'][i], df['district'][i]))) l2.append(({&quot;entities&quot;: [[(ele.start(), ele.end() - 1) for ele in re.finditer(r'\S+', df['unions'][i])] ,df['subdistrict'][i]],})) TRAIN_DATA=list(zip(l1,l2)) print(TRAIN_DATA) </code></pre> <p>Result - <code>[('Dhansagar Bagerhat', {'entities': [[(0, 8)], 'Sarankhola']})]</code></p> <p>My expected output - <code>[('Dhansagar Bagerhat', {'entities': [[(0, 8)], 'Sarankhola'],[[(10, 17)], 'AnyLabel']})]</code> How do I get this output for all the rows? I am getting the result for only one row. It seems like my loop is not working. Can anyone please point out my mistake?</p> <p>My csv file looks like this. &quot;AnyLabel&quot; is another column. I have around 500 rows -</p> <pre><code>unions subdistrict district Dhansagar Sarankhola Bagerhat Daibagnyahati Morrelganj Bagerhat Ramchandrapur Morrelganj Bagerhat Kodalia Mollahat Bagerhat </code></pre>
<p>Try iterating over the rows with <code>DataFrame.iterrows</code> and joining the strings with <code>str.join</code>:</p> <pre><code>import re

import pandas as pd

df = pd.read_csv(&quot;file&quot;)
l1 = []
l2 = []
for idx, row in df.iterrows():
    # text: unions + district
    l1.append(' '.join((row['unions'], row['district'])))
    # annotations: inclusive (start, end - 1) offsets of each token in unions + subdistrict
    l2.append({&quot;entities&quot;: [[[ele.start(), ele.end() - 1], ele.group(0)]
                                for ele in re.finditer(r'\S+', ' '.join([row['unions'], row['subdistrict']]))]})
TRAIN_DATA = list(zip(l1, l2))
print(TRAIN_DATA)
</code></pre> <p>Output:</p> <pre><code>[('Dhansagar Bagerhat', {'entities': [[[0, 8], 'Dhansagar'], [[10, 19], 'Sarankhola']]}),
 ('Daibagnyahati Bagerhat', {'entities': [[[0, 12], 'Daibagnyahati'], [[14, 23], 'Morrelganj']]}),
 ('Ramchandrapur Bagerhat', {'entities': [[[0, 12], 'Ramchandrapur'], [[14, 23], 'Morrelganj']]}),
 ('Kodalia Bagerhat', {'entities': [[[0, 6], 'Kodalia'], [[8, 15], 'Mollahat']]})]
</code></pre>
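<p>A slightly more readable version of the same idea, pulled into a helper function. <code>make_example</code> is a hypothetical name, and the offsets stay at the inclusive <code>(start, end - 1)</code> convention of the snippet above; note that spaCy-style training data normally uses exclusive end offsets instead:</p> <pre><code>import re

import pandas as pd

def make_example(row):
    # text whose token offsets get annotated: unions + subdistrict
    annotated = ' '.join([row['unions'], row['subdistrict']])
    entities = [[[m.start(), m.end() - 1], m.group(0)]
                for m in re.finditer(r'\S+', annotated)]
    # text paired with the annotations: unions + district, as above
    return (' '.join([row['unions'], row['district']]), {&quot;entities&quot;: entities})

df = pd.read_csv(&quot;file&quot;)
TRAIN_DATA = [make_example(row) for _, row in df.iterrows()]
print(TRAIN_DATA)
</code></pre>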
python|pandas|list|dataframe|loops
1
1,902,499
68,955,757
How do I sort objects in a python dictionary when the value is an object?
<p>I have a python dict that looks like this:</p> <pre><code>file_dict = {'a.txt' : &lt;#:text_object&gt;, 'b.txt': &lt;#:text_object&gt;, 'c.txt': &lt;#:text_object&gt;}
</code></pre> <p>Each value is a text_object instance that contains a bunch of analytic data about the file. It's an object like this...</p> <pre><code>class text_object:
    #...a bunch of methods and setters...
    def get_word_count(self):
        return self.word_count  # integer value
</code></pre> <p>These objects are in a dict so that I can find specific files and their corresponding data quickly via their filename. I want to sort the file_dict in word_count order so I can output the objects with the smallest word counts, to help find anomalies in the data collection process.</p> <p>How do I sort the file_dict based on the value returned by text_object.get_word_count() for the objects stored within it?</p>
<p>I believe this'll do the trick, but I can't test it without your actual dictionary and the objects. I suggest providing a sample of your data.</p> <pre class="lang-py prettyprint-override"><code>sorted_keys = sorted(file_dict, key=lambda k: file_dict[k].get_word_count()) sorted_file_dict = {k: file_dict[k] for k in sorted_keys} </code></pre> <p>Also, &quot;sorting&quot; a dictionary is a bit unnecessary. The whole point is that objects are able to be looked up via a hashtable, negating the need for any sort of ordering. If you want some kind of ordering while iterating, then you can iterate over the sorted keys. But a dictionary itself doesn't really need to be sorted, in most cases.</p> <p>Closure edit: I disagree with closing this as a duplicate, since implementing <code>__lt__</code> for the <code>text_object</code> class still won't help you sort a dictionary of <code>text_object</code> instances. Also, heads up, class names are conventionally CamelCase, consider renaming <code>text_object</code> to <code>TextObject</code>.</p> <h2>Addressing <code>__lt__()</code></h2> <p>Since this question may not be reopened, I'll address how OP might use the information in the duplicate. To be honest, there isn't <em>much</em> of a difference in the end product, BUT it may not be the worst idea to implement it anyway:</p> <pre class="lang-py prettyprint-override"><code>class TextObject: # attributes and methods, etc def get_word_count(self): return self.word_count def __lt__(self, other): return self.get_word_count() &lt; other.get_word_count() </code></pre> <p>Then, in order to &quot;sort&quot; your dictionary:</p> <pre class="lang-py prettyprint-override"><code>sorted_keys = sorted(file_dict, key=lambda k: file_dict[k]) sorted_file_dict = {k: file_dict[k] for k in sorted_keys} </code></pre> <p>Notice that the only difference is that the <code>sorted()</code> key function is no longer directly calling <code>TextObject.get_word_count()</code>.</p> <p>As others have mentioned, this method may be ideal for you if you're planning to do things like <code>some_text_object &lt; other_text_object</code>.</p> <p>PS - you may want to look into the <code>@property</code> decorator.</p>
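<p>For completeness, a self-contained demo of the second approach. The <code>TextObject</code> stub below is invented just to make the snippet runnable (the asker's real class has more methods), and the result relies on dicts preserving insertion order, which holds on Python 3.7+:</p> <pre><code>class TextObject:
    &quot;&quot;&quot;Minimal stand-in for the asker's class.&quot;&quot;&quot;
    def __init__(self, word_count):
        self.word_count = word_count

    def get_word_count(self):
        return self.word_count

    def __lt__(self, other):
        return self.get_word_count() &lt; other.get_word_count()

file_dict = {
    'a.txt': TextObject(120),
    'b.txt': TextObject(45),
    'c.txt': TextObject(300),
}

# __lt__ makes the objects directly comparable, so they can serve as sort keys.
sorted_file_dict = dict(sorted(file_dict.items(), key=lambda kv: kv[1]))
print(list(sorted_file_dict))  # ['b.txt', 'a.txt', 'c.txt']
</code></pre>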
python|python-3.x|sorting
3