Dataset schema: Unnamed: 0 (int64, 0 to 1.91M); id (int64, 337 to 73.8M); title (string, 10 to 150 chars); question (string, 21 to 64.2k chars); answer (string, 19 to 59.4k chars); tags (string, 5 to 112 chars); score (int64, -10 to 17.3k)
1,904,500
57,936,218
select last rows based on a condition and store value into the variables
<p>I have the following dataframe in pandas in Python 3.7, read from Excel. An example dataframe is</p> <pre><code>data = {'s':['a','a','a','a','b','b'], 'cp':['C','P','C','C','C','P'], 'st':[300,300,300,300,310,310], 'qty':[3000,3000,3000,6000,9000,3000], 'p':[16,15,14,10,8,12]} df=pd.DataFrame(data) df s cp st qty p 0 a C 300 3000 16 1 a P 300 3000 15 2 a C 300 3000 14 3 a C 300 6000 10 4 b C 310 9000 8 5 b P 310 3000 12 </code></pre> <p>For each group in column "s", I want to store the last value of column "p" where "cp" == "C" into one variable and the last value where "cp" == "P" into another. For "s" == "a", the last "C" row has p = 10 and the last "P" row has p = 15, so variable "a" should be 10 and variable "b" should be 15.</p> <p>For "s" == "b", the values are a = 8 and b = 12.</p> <p>I am reading the s values from another excel/csv file using pandas.</p> <p>Can you please help me out?</p> <p>Thanks</p>
<p>If at least one <code>C</code> and one <code>P</code> always exist, convert the <code>p</code> column to the index with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a>, compare with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a> (the <code>==</code> equivalent), reverse the order with slicing <code>[::-1]</code>, and take the last match for <code>C</code> or <code>P</code> with <code>idxmax</code>:</p> <pre><code>a = df.set_index('p')['cp'].eq('C')[::-1].idxmax() print (a) 8 b = df.set_index('p')['cp'].eq('P')[::-1].idxmax() print (b) 12 </code></pre> <p>EDIT:</p> <pre><code>df1 = df.drop_duplicates(['s','cp'], keep='last')[['s','cp','p']] print (df1) s cp p 1 a P 15 3 a C 10 4 b C 8 5 b P 12 </code></pre> <p>A general solution that specifies values for <code>s</code> and <code>cp</code>:</p> <pre><code>a = next(iter(df.loc[df['cp'].eq('C') &amp; df['s'].eq('a'), 'p'].values[::-1]), 'no exist') print (a) 10 b = next(iter(df.loc[df['cp'].eq('P') &amp; df['s'].eq('a'), 'p'].values[::-1]), 'no exist') print (b) 15 a = next(iter(df.loc[df['cp'].eq('C') &amp; df['s'].eq('b'), 'p'].values[::-1]), 'no exist') print (a) 8 b = next(iter(df.loc[df['cp'].eq('P') &amp; df['s'].eq('b'), 'p'].values[::-1]), 'no exist') print (b) 12 </code></pre> <p><strong>Details</strong>:</p> <p>First filter on both conditions with bitwise <code>AND</code> (<code>&amp;</code>) and use <code>loc</code> to select column <code>p</code>:</p> <pre><code>print (df.loc[df['cp'].eq('C') &amp; df['s'].eq('a'), 'p']) 0 16 2 14 3 10 Name: p, dtype: int64 </code></pre> <p>Then convert to a NumPy array and reverse it with <code>[::-1]</code>:</p> <pre><code>print (df.loc[df['cp'].eq('C') &amp; df['s'].eq('a'), 'p'].values[::-1]) [10 14 16] </code></pre> <p>Finally take the first value of the array, with a default for when it is empty:</p> <pre><code>print (next(iter(df.loc[df['cp'].eq('C') &amp; df['s'].eq('a'), 'p'].values[::-1]), 'no exist')) 10 </code></pre> <hr> <p>If a tested value does not exist, here <code>AAA</code>:</p> <pre><code>print (df.loc[df['cp'].eq('AAA') &amp; df['s'].eq('a'), 'p']) Series([], Name: p, dtype: int64) print (df.loc[df['cp'].eq('AAA') &amp; df['s'].eq('a'), 'p'].values[::-1]) [] print (next(iter(df.loc[df['cp'].eq('AAA') &amp; df['s'].eq('a'), 'p'].values[::-1]), 'no exist')) no exist </code></pre>
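A compact way to get all four values at once is to turn the `drop_duplicates` idea above into a lookup: group by the `(s, cp)` pair and take the last `p` per pair. A sketch assuming pandas is installed (the name `pairs` is illustrative):

```python
import pandas as pd

data = {'s': ['a', 'a', 'a', 'a', 'b', 'b'],
        'cp': ['C', 'P', 'C', 'C', 'C', 'P'],
        'st': [300, 300, 300, 300, 310, 310],
        'qty': [3000, 3000, 3000, 6000, 9000, 3000],
        'p': [16, 15, 14, 10, 8, 12]}
df = pd.DataFrame(data)

# last p per (s, cp) pair, as a plain dict keyed by the pair
pairs = df.groupby(['s', 'cp'])['p'].last().to_dict()

a = pairs.get(('a', 'C'), 'no exist')   # 10
b = pairs.get(('a', 'P'), 'no exist')   # 15
print(a, b)
```

`dict.get` gives the same 'no exist' fallback behaviour as the `next(iter(...))` pattern when a pair is missing.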
python|python-3.x|pandas
2
1,904,501
45,381,688
Python: Can't Convert lists from SQL
<p>Hello I am trying to learn Python, and right now I want to import some data from SQL. Unfortunately I have a problem that I wasn't capable of solving. I am getting the following error message: </p> <blockquote> <p>cursor.execute("select * from matchinfo WHERE matchid = '%i'" % Matchid) TypeError: %i format: a number is required, not pyodbc.Row"</p> </blockquote> <p>I guess the issue is that the list data isn't formatted as integers, however when I try to do stuff like "Matchid[0] = int(Matchid[0])" (and then print out Matchid[0]) it's not working either. So I am not really sure how to fix the problem. The code is below and thanks in advance.</p> <pre><code>import pyodbc cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=DESKTOP- FCDHA0J\SQLEXPRESS;DATABASE=Data') cursor = cnxn.cursor() cursor.execute("select matchid from matchinfo where matchid &gt; 1 order by dato asc") Matchinfo = cursor.fetchall() for Matchid in Matchinfo: print(Matchinfo[0]) cursor.execute("select * from matchinfo WHERE matchid = '%i'" % Matchid) </code></pre>
<p>You are trying to format a <code>pyodbc.Row</code> as an integer. pyodbc uses the <code>qmark</code> paramstyle, so pass the value as a parameter with a <code>?</code> placeholder instead of string formatting:</p> <pre><code>cursor.execute("select * from matchinfo WHERE matchid = ?", Matchid[0]) </code></pre> <p>Do you really need <code>fetchall</code> here? If you want to avoid loading every row into memory, iterate the cursor directly, but use a second cursor for the inner query, because executing a new statement on the cursor you are iterating discards its pending results:</p> <pre><code>cursor.execute("select matchid from matchinfo where matchid &gt; 1 order by dato asc") detail_cursor = cnxn.cursor() for row in cursor: print(row[0]) detail_cursor.execute("select * from matchinfo WHERE matchid = ?", row[0]) </code></pre> <p>This also gains some performance when the table is large, because rows are retrieved one at a time instead of fetching the whole result set into memory first.</p>
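The qmark placeholder style can be tried without a SQL Server: the stdlib sqlite3 module uses the same `?` paramstyle as pyodbc, so this self-contained sketch (an in-memory table standing in for `matchinfo`) shows the pattern:

```python
import sqlite3

cnxn = sqlite3.connect(":memory:")
cur = cnxn.cursor()
cur.execute("CREATE TABLE matchinfo (matchid INTEGER, dato TEXT)")
cur.executemany("INSERT INTO matchinfo VALUES (?, ?)",
                [(2, "2017-01-01"), (3, "2017-01-02")])

# outer query: collect the ids first, then run the detail queries
cur.execute("SELECT matchid FROM matchinfo WHERE matchid > 1 ORDER BY dato ASC")
ids = [row[0] for row in cur.fetchall()]

rows = []
for match_id in ids:
    # the value travels separately through the ? placeholder:
    # no string formatting, so no type errors and no SQL injection
    cur.execute("SELECT * FROM matchinfo WHERE matchid = ?", (match_id,))
    rows.append(cur.fetchone())
print(rows)  # [(2, '2017-01-01'), (3, '2017-01-02')]
```

The same two-step shape maps directly onto the pyodbc code in the answer.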
python
0
1,904,502
28,816,060
python :- can't start new thread
<p>i need to know why when i run the below code it gives me this error</p> <pre><code>Traceback (most recent call last): File "C:\Users\moksh\Desktop\moksh.py", line 29, in &lt;module&gt; server_B_thread.start() error: can't start new thread </code></pre> <p>Code:</p> <pre><code>#!/usr/bin/python import threading import time import SocketServer import socket class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler): def handle(self): self.allow_reuse_address = True self.data = self.request.recv(1024).strip() print "%s wrote: " % self.client_address[0] print 'Connection from',self.client_address[0] print self.data self.request.send(self.data.upper()) class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer): pass if __name__ == "__main__": HOST = '0.0.0.0' PORT = 1000 while PORT &lt;1900: server_B = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler) server_B_thread = threading.Thread(target=server_B.serve_forever) server_B_thread.setDaemon(True) server_B_thread.start() PORT +=1 while 1: time.sleep(1) </code></pre>
<p>You're trying to start 900 threads, and probably hitting a limit on the OS. I don't know what you're trying to do, but I would look into <a href="https://twistedmatrix.com/trac/" rel="nofollow">Twisted</a>, which will probably have more options for running a high volume TCP server.</p>
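If the goal is simply many listening ports rather than many threads, a single-threaded event loop handles it. Here is a minimal asyncio sketch (Python 3) that opens several echo servers from one thread; it uses port 0 so the OS picks free ports, where the question would use the 1000-1899 range:

```python
import asyncio

async def handle(reader, writer):
    # same behaviour as the question's handler: echo the data upper-cased
    data = await reader.read(1024)
    writer.write(data.upper())
    await writer.drain()
    writer.close()

async def main(n_ports=3):
    # one server per port, all driven by a single event loop (no threads)
    servers = [await asyncio.start_server(handle, "127.0.0.1", 0)
               for _ in range(n_ports)]
    ports = [s.sockets[0].getsockname()[1] for s in servers]

    # round-trip against the first server to show it works
    reader, writer = await asyncio.open_connection("127.0.0.1", ports[0])
    writer.write(b"hello")
    await writer.drain()
    reply = await reader.read(1024)
    writer.close()

    for s in servers:
        s.close()
        await s.wait_closed()
    return ports, reply

ports, reply = asyncio.run(main())
print(len(ports), reply.decode())
```

Scaling `n_ports` up to 900 adds sockets, not threads, so the OS thread limit never comes into play.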
python
1
1,904,503
57,011,591
Efficiently create categorical data frame with nulls
<p>I want to create a categorical data frame with nulls and set the categories before expanding the index. The index is very large and I want to avoid the memory spike and I cannot seem to do this.</p> <p>Example:</p> <pre><code># memory spike df = pd.DataFrame(index=list(range(0, 1000)), columns=['a', 'b']) df.info(memory_usage='deep') </code></pre> <p>Output:</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 1000 entries, 0 to 999 Data columns (total 2 columns): a 0 non-null object b 0 non-null object dtypes: object(2) memory usage: 70.3 KB </code></pre> <p>Convert to Categorical:</p> <pre><code>for _ in df.columns: df[_] = df[_].astype('category') # set categories for columns df['a'] = df['a'].cat.add_categories(['d', 'e', 'f']) df['b'] = df['b'].cat.add_categories(['g', 'h', 'i']) # check memory usage df.info(memory_usage='deep') </code></pre> <p>Output:</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 1000 entries, 0 to 999 Data columns (total 2 columns): a 0 non-null category b 0 non-null category dtypes: category(2) memory usage: 9.9 KB </code></pre> <p>Is there a way to do this while avoiding the memory spike?</p>
<p>If the data frame is created by the <code>DataFrame</code> constructor, the columns can be initialized as category types.</p> <pre><code>import numpy as np import pandas as pd from pandas.api.types import CategoricalDtype cat_type1 = CategoricalDtype(["d", "e", "f"]) cat_type2 = CategoricalDtype(["g", "h", "i"]) index = pd.Index(range(1000)) df = pd.DataFrame({"a": pd.Series([np.nan] * len(index), dtype=cat_type1, index=index), "b": pd.Series([np.nan] * len(index), dtype=cat_type2, index=index)}, index=index) </code></pre> <p>Another alternative solution is the following.</p> <pre><code>cols = ["a", "b"] index = pd.Index(range(1000)) df = pd.DataFrame({k: [np.nan] * len(index) for k in cols}, index=index, dtype="category") df["a"].cat.set_categories(["d", "e", "f"], inplace=True) df["b"].cat.set_categories(["g", "h", "i"], inplace=True) </code></pre> <p>If the data frame is created via methods such as <code>read_csv</code>, the <code>dtype</code> keyword parameter can be used to make sure the output columns have desired data types rather than making conversions after the data frame is created -- which leads to more memory consumption.</p> <pre><code>df = pd.read_csv("file.csv", dtype={"a": cat_type1, "b": cat_type2}) </code></pre> <p>Here, the category values can also be directly inferred from the data by passing in <code>dtype={"a": "category"}</code>. Specifying the categories beforehand can save the inference overhead and also let the parser check the data values match the specified category values. It is also necessary if some category values do not occur in the data.</p>
python-3.x|pandas
1
1,904,504
49,517,016
Not able to access new method in derived class
<p>I have written the following piece of code for crawling web pages and then storing them in the <code>Solr</code> index.</p> <pre><code>crawledLinks = [] solr = pysolr.Solr('some url', timeout=10) class MySpider(Spider): name = "tutsplus" start_urls = ["some url"] allowed_domains = ["some domain"] custom_settings = { 'CONCURRENT_REQUESTS': 100, 'CONCURRENT_REQUESTS_PER_DOMAIN': 100, 'DEPTH_LIMIT': 100, 'LOG_ENABLED': True, } def parse(self, response): links = response.xpath('//a/@href').extract() current_url = response.url asyncio.ensure_future(add_to_index(response.body, current_url)) for link in links: # If it is a proper link and is not checked yet, yield it to the Spider internal_link = urljoin(current_url, link) result = urlparse(internal_link) if result.scheme and result.netloc and result.path and not internal_link in crawledLinks: crawledLinks.append(internal_link) yield Request(internal_link, self.parse) item = TutsplusItem() item["url"] = current_url yield item async def add_to_index(body, current_url): soup = BeautifulSoup(body) texts = soup.find_all(text=True) visible_texts = [] for text in texts: if text.parent.name not in ['style', 'script', 'meta', '[document]'] and not isinstance(text, Comment): visible_texts.append(text) fetched_text = u" ".join(t.strip() for t in visible_texts) words = nltk.word_tokenize(fetched_text) stop = set(stopwords.words('english')) stopwordsfree_words = [word for word in words if word not in stop] detokenizer = MosesDetokenizer() doc = detokenizer.detokenize(stopwordsfree_words, return_str=True) doc = doc.encode('utf-8') url = "some url" try: res = requests.post(url, data=doc) except Exception as e: print(e) if not doc: doc = soup.title.string if res.status_code == 200: words = json.loads(res.text) doc = detokenizer.detokenize(words, return_str=True) solr.add([{"doc": doc, "url": str(current_url)}]) </code></pre> <p>I want to call the function <code>add_to_index()</code> in a "fire and forget" manner. 
But the problem I'm facing is I'm getting the error</p> <blockquote> <p>undefined name 'add_to_index'</p> </blockquote> <p>in the parse method. So function is not being recognized. I'm new to python. Could you help me with this issue?</p> <p>Thanks,</p> <p>Nilesh.</p>
<p>Since <code>add_to_index</code> is defined inside the class, it is a method, so the bare name is undefined inside <code>parse</code>. Call it through the instance: <code>self.add_to_index(response.body, current_url)</code>. You will also need to give it a <code>self</code> parameter: <code>async def add_to_index(self, body, current_url)</code>.</p>
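A minimal sketch of why the bare name fails and the `self` call works (plain Python, no Scrapy; the class and method bodies are illustrative):

```python
class MySpider:
    def parse(self, body):
        # add_to_index is an attribute of the class, so a bare
        # add_to_index(...) here raises NameError; go through self instead
        return self.add_to_index(body)

    def add_to_index(self, body):  # note the added self parameter
        return body.upper()

print(MySpider().parse("hello"))  # HELLO
```

Inside a method body, Python resolves bare names through local/global scope only, never through the enclosing class, which is why `self.` (or `MySpider.`) is required.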
python|scrapy|web-crawler
0
1,904,505
54,827,924
How can I use the scapy module to send a request and accept three answers?
<p>I am trying to use the scapy module to ask the 4.2.2.4 server for some url IP addresses.</p> <p>Most of the requests for inquiries will only get one answer, which may have one or more IP addresses. But &quot;facebook.com&quot; is different. The server will give me three answers, and each answer has an IP address.</p> <p>Why is this happening? How can I get all three answers in my python program? I tried the sr() and sr1() functions, but they all only get one answer.</p> <p>My code:</p> <pre><code>from scapy.all import * url = 'facebook.com' server = '4.2.2.4' result1, unanswer = sr(IP(dst=server) / UDP() / DNS(qd=DNSQR(qname=url, qtype='A', qclass='IN'))) result2 = sr1(IP(dst=server) / UDP() / DNS(qd=DNSQR(qname=url, qtype='A', qclass='IN'))) </code></pre> <p>The results I got (part of the answer):</p> <blockquote> <p>\an \</p> <p>|###[ DNS Resource Record ]###</p> <p>| rrname = 'facebook.com.'</p> <p>......</p> <p>| rdata = '173.252.103.64'</p> </blockquote> <p>The result I got with wireshark: <img src="https://i.stack.imgur.com/DQxlA.png" alt="The result I got with wireshark" /></p>
<p>You can use the <code>multi</code> keyword argument in <code>sr()</code>, together with a <code>timeout</code> so it stops listening; with <code>multi</code>, Scapy keeps accepting further answers to the same probe instead of stopping at the first one:</p> <pre><code>sr([...], multi=True, timeout=2) </code></pre> <p>Note also that a single DNS reply can carry several answer records: check <code>ancount</code> on the response and, in recent Scapy versions, index the <code>an</code> field (e.g. <code>ans[DNS].an[i].rdata</code>) to read each <code>DNSRR</code>.</p>
python|python-3.x|ip|scapy
1
1,904,506
22,398,841
Pandas DataFrame formatting to get expected output
<p>In the following Pandas <code>DataFrame</code>,</p> <pre><code>df = pd.DataFrame({'session' : ["1","1","2","2","3","3"], 'path' : ["p1","p2","p1","p2","p2","p3"], 'seconds' : ["20","21","132","10","24","45"]}) </code></pre> <p>I need to get an output like the following. (Pages as columns, sessions as rows and seconds in each cell.)</p> <pre><code>session,p1,p2,p3 1,20,21,0 2,132,10,0 3,0,24,45 </code></pre> <p>What I have done so far.</p> <pre><code>In [76]: wordlist = ['p1', 'p2', 'p3'] In [77]: df2 = pd.DataFrame(df.groupby('session').apply(lambda x: ','.join(x.path))) In [78]: df2 #I have renamed the columns Out[78]: path session 1 p1,p2 2 p1,p2 3 p2,p3 In [79]: df3 = pd.DataFrame(df.groupby('session').apply(lambda x: ','.join(x.seconds.astype(str)))) In [80]: df3 #I have renamed the columns Out[80]: path session 1 20,21 2 132,10 3 24,45 </code></pre> <p>The following just gives the boolean result. I need to get my expected output. Any help on this?</p> <pre><code>In [84]: pd.DataFrame({name : df2["path"].str.contains(name) for name in wordlist}) Out[84]: p1 p2 p3 session 1 True True False 2 True True False 3 False True True </code></pre>
<p>Use a pivot:</p> <pre><code>df1 = df.pivot(index='session', columns='path') </code></pre> <p>Then replace all the NaNs with zeros:</p> <pre><code>df1 = df1.fillna(0) </code></pre> <p>This gives you the following output:</p> <pre><code> seconds path p1 p2 p3 session 1 20 21 0 2 132 10 0 3 0 24 45 </code></pre> <p>Then you might want to drop the multiindex column level:</p> <pre><code>df1.columns = df1.columns.droplevel(0) </code></pre> <p>Yielding your desired solution (sans commas):</p> <pre><code>path p1 p2 p3 session 1 20 21 0 2 132 10 0 3 0 24 45 </code></pre> <p>Finally you can convert it to a comma-separated string using <code>StringIO</code> (<code>io.StringIO</code> in Python 3), or simply <code>df1.to_csv()</code>, which returns a string when no path is given:</p> <pre><code>import StringIO s = StringIO.StringIO() df1.to_csv(s) print s.getvalue() </code></pre> <p>With the following output:</p> <pre><code>session,p1,p2,p3 1,20,21,0 2,132,10,0 3,0,24,45 </code></pre>
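One caveat: `DataFrame.pivot` raises a ValueError if a `(session, path)` pair occurs more than once. In that case `pivot_table` with an aggregation is the usual workaround. A sketch assuming pandas is installed, summing duplicate seconds (the seconds column is cast to int first, since the question builds it from strings):

```python
import pandas as pd

df = pd.DataFrame({'session': ["1", "1", "2", "2", "3", "3"],
                   'path': ["p1", "p2", "p1", "p2", "p2", "p3"],
                   'seconds': ["20", "21", "132", "10", "24", "45"]})
df['seconds'] = df['seconds'].astype(int)

# pivot_table aggregates duplicate (session, path) pairs instead of
# raising; fill_value writes 0 into the missing cells directly
out = pd.pivot_table(df, index='session', columns='path',
                     values='seconds', aggfunc='sum', fill_value=0)
print(out.loc['3', 'p3'])  # 45
```

Passing `values='seconds'` also avoids the MultiIndex column level, so no `droplevel` step is needed.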
python|pandas|dataframe
2
1,904,507
54,574,823
How can I access another class while I'm entering the information for the first class from the shell?
<p>I'm trying to fill in the models from the shell, and one of the inputs is a <code>ForeignKey</code> that references another class. This is my code in PyCharm:</p> <pre><code>class Team(models.Model): name = models.CharField(max_length=256, unique=True) details = models.TextField() def __str__(self): return self.name class Player(models.Model): name = models.CharField(max_length=256) number = models.IntegerField() age = models.IntegerField() position_in_field = models.CharField(max_length=256, choices=(('1', 'حارس'), ('2', 'دفاع'), ('3', 'وسط'), ('4', 'هجوم'))) is_captain = models.BooleanField(default=False) team = models.ForeignKey(Team) def __str__(self): return '{} - {}'.format(self.name, self.team) </code></pre> <p>and this is the result:</p> <pre><code>python manage.py shell Python 3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. (InteractiveConsole) &gt;&gt;&gt; from teams.models import Player &gt;&gt;&gt; from teams.models import Team &gt;&gt;&gt; Player.objects.create(name='محمد إبراهيم', number='25', age='27', position_in_field='هجوم', is_captain=False, team='فريق الزمالك') Traceback (most recent call last): File "&lt;console&gt;", line 1, in &lt;module&gt; File "D:\cj\projects\django\teammanager_env\lib\site-packages\django\db\models\manager.py", line 85, in manager_method return getattr(self.get_queryset(), name)(*args, **kwargs) File "D:\cj\projects\django\teammanager_env\lib\site-packages\django\db\models\query.py", line 392, in create obj = self.model(**kwargs) File "D:\cj\projects\django\teammanager_env\lib\site-packages\django\db\models\base.py", line 555, in __init__ _setattr(self, field.name, rel_obj) File "D:\cj\projects\django\teammanager_env\lib\site-packages\django\db\models\fields\related_descriptors.py", line 216, in __set__ self.field.remote_field.model._meta.object_name, ValueError: Cannot assign "'فريق الزمالك'": "Player.team" must be a "Team" instance. </code></pre>
<p>When you create a record with a foreign key, you have to indicate the referenced record through its primary key.</p> <p>In the case of model <strong>Team</strong>, you didn't set a primary key explicitly, so <strong>Django</strong> adds the default auto-incrementing <code>id</code> field (an <code>AutoField</code>), and the foreign key column on <strong>Player</strong> is exposed as <code>team_id</code>, which is what you have to set in the referencing record:</p> <pre><code>Player.objects.create(name='محمد إبراهيم', number='25', age='27', position_in_field='هجوم', is_captain=False, team_id=1) </code></pre> <p>If you have a model instance of the referenced team (for example from <code>Team.objects.get(name='فريق الزمالك')</code>), you can use it too:</p> <pre><code>Player.objects.create(name='محمد إبراهيم', number='25', age='27', position_in_field='هجوم', is_captain=False, team=team_instance) </code></pre>
django-models|python-3.7
0
1,904,508
47,900,237
Why does (1 == 2 != 3) evaluate to False in Python?
<p>Why does <code>(1 == 2 != 3)</code> evaluate to <code>False</code> in Python, while both <code>((1 == 2) != 3)</code> and <code>(1 == (2 != 3))</code> evaluate to <code>True</code>? </p> <p>What operator precedence is used here?</p>
<p>This is due to <a href="https://docs.python.org/3/reference/expressions.html#comparisons" rel="noreferrer">operator chaining</a>. The Python docs explain it as:</p> <blockquote> <p>Comparisons can be chained arbitrarily, e.g., <strong>x &lt; y &lt;= z</strong> is equivalent to <strong>x &lt; y and y &lt;= z</strong>, except that y is evaluated only once (but in both cases z is not evaluated at all when x &lt; y is found to be false).</p> </blockquote> <p>And if you look at the <a href="https://docs.python.org/3/reference/expressions.html#operator-precedence" rel="noreferrer">precedence table</a>, you will notice that <code>==</code> and <code>!=</code> have the <strong>same precedence</strong>, so chaining applies to them as well.</p> <p>So basically what happens:</p> <pre><code>&gt;&gt;&gt; 1==2 =&gt; False &gt;&gt;&gt; 2!=3 =&gt; True &gt;&gt;&gt; (1==2) and (2!=3) # False and True =&gt; False </code></pre>
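The three expressions from the question can be verified directly; the parenthesized variants are True for different reasons:

```python
# chained: evaluated as (1 == 2) and (2 != 3)
print(1 == 2 != 3)      # False

# (1 == 2) is False, and False != 3 is True
print((1 == 2) != 3)    # True

# (2 != 3) is True, and 1 == True holds because bool is a subclass of int
print(1 == (2 != 3))    # True
```

So only the chained form short-circuits on the False middle comparison; the parenthesized forms compare a boolean against an integer.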
python|operator-precedence
19
1,904,509
31,906,884
Pandoc "--filter" option is not working with "comments.py" script
<p>I'm trying to use the 'comments.py' script (from repository <a href="https://github.com/aaren/pandocfilters/blob/master/examples/comments.py" rel="nofollow">github.com/aaren/pandocfilters</a>) with the following command:</p> <pre><code>Pandoc -o myOutput.tex myInput.html --filter ./comments.py </code></pre> <p>but it is not working.</p> <p>Pandoc always converts text that is between the tags <code>&lt;!-- BEGIN COMMENT --&gt;</code> and <code>&lt;!-- END COMMENT --&gt;</code> and remove the comments.</p> <p>Could someone help me figure out what is the problem?</p> <p>PS: I tested 'caps.py' script (from the same repo) and it worked fine, but 'comments.py' doesn't. </p> <p>I need to ignore the text between specified tag.</p> <p>This is my HTML input:</p> <pre><code>&lt;i&gt; normal text &lt;/i&gt; &lt;!-- BEGIN COMMENT --&gt; &lt;i&gt; ignore this line &lt;/i&gt; &lt;!-- END COMMENT --&gt; &lt;b&gt; normal text 2 &lt;/b&gt; </code></pre> <p>And this is my LaTeX output:</p> <pre><code>\emph{normal text} \emph{ignore this line} \textbf{normal text 2} </code></pre> <p>Thanks in advance!</p>
<p>The filter doesn't work because Pandoc's HTML reader discards comments while parsing. The Markdown reader, on the other hand, keeps them in the AST as <code>RawBlock 'html'</code> elements, which is what the filter matches on.</p> <p>So you need to use the filter with Markdown input, like:</p> <pre><code>normal text &lt;!-- BEGIN COMMENT --&gt; ignore this line &lt;!-- END COMMENT --&gt; normal text 2 </code></pre> <pre><code>pandoc -o myOutput.tex myInput.md --filter ./comments.py </code></pre> <p>Or just use plain HTML comments instead:</p> <pre><code>&lt;i&gt; normal text &lt;/i&gt; &lt;!-- &lt;i&gt; ignore this line &lt;/i&gt; --&gt; &lt;b&gt; normal text 2 &lt;/b&gt; </code></pre>
python|pandoc
1
1,904,510
29,851,583
Django - Build a model filter from string
<p>Is it possible to filter models using string arguments?</p> <p>Consider the following filter: </p> <pre><code>some_model.filter(parameter__gte = x) </code></pre> <p>I want to build that filter argument using strings.</p> <p>eg. </p> <pre><code>if equality == "&gt;" and argument == x: query = "{0}__gte".format(parameter) </code></pre> <p>Then filter using that built argument.</p> <pre><code>some_model.filter(query = x) </code></pre> <p>Is something along those lines possible without using raw sql?</p>
<p>Yes. Use your strings as the keys and values of a dictionary, then pass that dictionary to <code>filter</code> using the <code>**</code> operator to use them as keyword argument pairs. Using your above example:</p> <pre><code>filter_arguments = {query: x} some_model.filter(**filter_arguments) </code></pre>
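A small sketch of the whole round trip, with a hypothetical `LOOKUPS` table mapping the question's comparison strings to Django lookup suffixes (the `">"` to `gte` mapping follows the question's own example):

```python
# hypothetical mapping from comparison strings to Django lookup suffixes
LOOKUPS = {">": "gte", "<": "lte", "==": "exact"}

def build_filter(field, equality, value):
    # e.g. ("score", ">", 10) -> {"score__gte": 10}
    key = "{0}__{1}".format(field, LOOKUPS[equality])
    return {key: value}

kwargs = build_filter("score", ">", 10)
print(kwargs)  # {'score__gte': 10}
# then: some_model.filter(**kwargs)
```

Only the dict construction is shown here; the final `filter(**kwargs)` call needs a real Django queryset.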
python|django|database|postgresql
10
1,904,511
48,742,736
Using SSL with SQLAlchemy
<p>I've recently changed my project to use SQLAlchemy and my project runs fine, it used an external MySQL server.</p> <p>Now I'm trying to work with a different MySQL server with SSL CA, and it doesn't connect.</p> <p>(It did connect using MySQL Workbench, so the certificate should be fine)</p> <p>I'm using the following code:</p> <pre><code>ssl_args = {'ssl': {'ca': ca_path}} engine = create_engine("mysql+pymysql://&lt;user&gt;:&lt;pass&gt;@&lt;addr&gt;/&lt;schema&gt;", connect_args=ssl_args) </code></pre> <p>and I get the following error:</p> <blockquote> <p>Can't connect to MySQL server on '\addr\' ([WinError 10054] An existing connection was forcibly closed by the remote host)</p> </blockquote> <p>Any suggestions?</p>
<p>I changed the DBAPI to MySQL-Connector, and used the following code:</p> <pre><code>ssl_args = {'ssl_ca': ca_path} engine = create_engine("mysql+mysqlconnector://&lt;user&gt;:&lt;pass&gt;@&lt;addr&gt;/&lt;schema&gt;", connect_args=ssl_args) </code></pre> <p>And now it works.</p>
python|mysql|sqlalchemy
20
1,904,512
48,676,128
How to create a dictionary from two list in python
<p>Hello guys, I have two lists in my code like this:</p> <pre><code>other_concords = ['a','b','c'] leamanyi_concords = ['fruit','drink','snack'] temp_dic = { 'a':['fruit','drink','snack'], 'b':['fruit','drink','snack'], 'c':['fruit','drink','snack'] } </code></pre> <p>Is it possible to insert items into my temp_dic using a loop so that it appears like this when I output temp_dic?</p>
<pre><code>temp_dic = {v: list(leamanyi_concords) for v in other_concords} </code></pre>
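The `list(leamanyi_concords)` copy in that comprehension is deliberate: it gives every key its own list. Reusing a single list object (as `dict.fromkeys` does) makes all values share state:

```python
other_concords = ['a', 'b', 'c']
leamanyi_concords = ['fruit', 'drink', 'snack']

shared = dict.fromkeys(other_concords, leamanyi_concords)       # one shared list
copied = {v: list(leamanyi_concords) for v in other_concords}   # independent copies

shared['a'].append('soup')   # mutates the one list seen under every key
print(shared['b'])  # ['fruit', 'drink', 'snack', 'soup']
print(copied['b'])  # ['fruit', 'drink', 'snack']
```

So prefer the comprehension with `list(...)` whenever the per-key lists will be modified independently.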
python|python-3.x|list|dictionary
3
1,904,513
73,333,597
All values discarded from spark dataframe while filtering blank values using pyspark
<p>I have some spark code:</p> <pre><code>import pyspark.sql.functions as f df=spark.read.parquet(&quot;D:\\source\\202204121920-seller_central_opportunity_explorer_category.parquet&quot;) </code></pre> <p><a href="https://i.stack.imgur.com/ybvSw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ybvSw.png" alt="enter image description here" /></a></p> <p>I have a <code>parent_ids</code> field which is blank. I need only the records whose <code>parent_ids</code> are <code>Blank</code>. I searched SO and found these answers:</p> <pre><code>df1=df.where(df[&quot;parent_ids&quot;].isNull()) df1.toPandas() df1=df.filter(&quot;parent_ids is NULL&quot;) df1.toPandas() df.filter(f.isnull(&quot;parent_ids&quot;)).show() df.where(f.isnull(f.col(&quot;parent_ids&quot;))).show() </code></pre> <p>Even though the <code>parent_ids</code> clearly look <code>Null</code>, when I look at the result I am getting <code>0 record counts.</code> <a href="https://i.stack.imgur.com/VYu3L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VYu3L.png" alt="enter image description here" /></a> Why is my result showing <code>zero counts</code> though there are <code>parent_ids</code> which are blank? None of the options I tried worked.</p>
<p>I think your data is not null; it is an empty string:</p> <pre class="lang-py prettyprint-override"><code>df.where(&quot;parent_ids = ''&quot;).show() </code></pre> <p>If the field may contain only whitespace, you can also try <code>df.where(&quot;trim(parent_ids) = ''&quot;)</code>.</p>
python-3.x|pandas|apache-spark|pyspark
1
1,904,514
66,652,861
Accessing variable from outside the function in GUI (Tkinter) Python
<p>I have a simple GUI in python. I want to print the value entered by the user from outside the function. The code I made is given below</p> <pre><code> def save_BasePath(): Base_Path_info = Base_Path.get() #print(Base_Path_info) file_basepath = open(&quot;basepath.txt&quot;, &quot;w&quot;) file_basepath.write(str(Base_Path_info)) file_basepath.close() app = Tk() app.geometry(&quot;500x500&quot;) app.title(&quot;Python File Handling in Forms&quot;) heading = Label(text=&quot;Input Print&quot;, fg=&quot;black&quot;, bg=&quot;yellow&quot;, width=&quot;500&quot;, height=&quot;3&quot;, font=&quot;10&quot;) heading.pack() Base_Path_text = Label(text=&quot;Base_Path :&quot;) Base_Path_text.place(x=155, y=70) Base_Path = StringVar() Base_Path_entry = Entry(textvariable=Base_Path, width=&quot;30&quot;) Base_Path_entry.place(x=155, y=100) button_basepath = Button(app, text=&quot;Enter Base Path&quot;, command=save_BasePath, width=&quot;15&quot;, height=&quot;2&quot;, bg=&quot;grey&quot;) button_basepath.place(x=175, y=125) #I need the user input from the function here so that I can use it further mainloop() </code></pre> <p>On Pressing the button, I get the user input. I am able to print from within the save_basepath function. But I want to access the user input from outside so that I can work on it. Any help is appreciated</p>
<p>You can use the <code>global</code> keyword:</p> <pre><code>from tkinter import * def save_BasePath(): global Base_Path_info Base_Path_info = Base_Path.get() #print(Base_Path_info) file_basepath = open(&quot;basepath.txt&quot;, &quot;w&quot;) file_basepath.write(str(Base_Path_info)) file_basepath.close() print_it() def print_it(): print(Base_Path_info) app = Tk() app.geometry(&quot;500x500&quot;) app.title(&quot;Python File Handling in Forms&quot;) heading = Label(text=&quot;Input Print&quot;, fg=&quot;black&quot;, bg=&quot;yellow&quot;, width=&quot;500&quot;, height=&quot;3&quot;, font=&quot;10&quot;) heading.pack() Base_Path_text = Label(text=&quot;Base_Path :&quot;) Base_Path_text.place(x=155, y=70) Base_Path = StringVar() Base_Path_entry = Entry(textvariable=Base_Path, width=&quot;30&quot;) Base_Path_entry.place(x=155, y=100) button_basepath = Button(app, text=&quot;Enter Base Path&quot;, command=save_BasePath, width=&quot;15&quot;, height=&quot;2&quot;, bg=&quot;grey&quot;) button_basepath.place(x=175, y=125) #I need the user input from the function here so that I can use it further mainloop() </code></pre> <p>But keep in mind that we want to avoid this, and the best way to do that is by using a class.</p> <p>Here is how I would do it:</p> <pre><code>from tkinter import * class myProgram: def __init__(self): heading = Label(text=&quot;Input Print&quot;, fg=&quot;black&quot;, bg=&quot;yellow&quot;, width=&quot;500&quot;, height=&quot;3&quot;, font=&quot;10&quot;) heading.pack() Base_Path_text = Label(text=&quot;Base_Path :&quot;) Base_Path_text.place(x=155, y=70) self.Base_Path = StringVar() Base_Path_entry = Entry(textvariable=self.Base_Path, width=&quot;30&quot;) Base_Path_entry.place(x=155, y=100) button_basepath = Button(app, text=&quot;Enter Base Path&quot;, command=self.save_BasePath, width=&quot;15&quot;, height=&quot;2&quot;, bg=&quot;grey&quot;) button_basepath.place(x=175, y=125) def save_BasePath(self): self.Base_Path_info = self.Base_Path.get() file_basepath = open(&quot;basepath.txt&quot;, &quot;w&quot;) file_basepath.write(str(self.Base_Path_info)) file_basepath.close() self.print_it() def print_it(self): print(self.Base_Path_info) # Continue your code here... app = Tk() app.geometry(&quot;500x500&quot;) app.title(&quot;Python File Handling in Forms&quot;) myProgram() mainloop() </code></pre>
python|tkinter|tkinter-entry
1
1,904,515
66,571,161
Groupby one column and show distinct size on the other (Pandas)
<p>I have data where I would like to groupby one column and take the distinct size of the another column. I would like to keep both column titles and add a count column</p> <p>data</p> <pre><code>Id location A.1 ny A.2 ny A ny A ny B ca B ca </code></pre> <p>desired</p> <pre><code>Id location count A.1 ny 3 A.2 ny A ny B ca 2 B ca </code></pre> <p>doing</p> <pre><code>df.groupby('location')['Id'].size() </code></pre> <p>However, the output is not showing the titles and I am not sure how to add the Id column. Any suggestion is appreciated</p>
<pre><code>df.assign(count=df.groupby('location')['Id'].transform('nunique')) </code></pre> <p>Output:</p> <pre><code> Id location count 0 A.1 ny 3 1 A.2 ny 3 2 A ny 3 3 A ny 3 4 B ca 1 5 B ca 1 </code></pre> <p>Note that the distinct count for <code>ca</code> is 1, since <code>B</code> is the only distinct <code>Id</code> there.</p>
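The same distinct count can be written in plain Python, which makes the logic of the pandas one-liner explicit: collect the distinct Ids per location in a set, then take the sizes:

```python
from collections import defaultdict

rows = [("A.1", "ny"), ("A.2", "ny"), ("A", "ny"),
        ("A", "ny"), ("B", "ca"), ("B", "ca")]

# set membership de-duplicates repeated Ids within a location
distinct = defaultdict(set)
for id_, location in rows:
    distinct[location].add(id_)

counts = {loc: len(ids) for loc, ids in distinct.items()}
print(counts)  # {'ny': 3, 'ca': 1}
```

`transform('nunique')` then broadcasts each group's count back onto every row of that group, which is what fills the `count` column.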
python|pandas|numpy
1
1,904,516
64,077,109
execute block of code independently in the function in Python2.7
<p>I got stuck in a unique scenario. I have a function <code>hello()</code> that <strong>should</strong> execute every 10 seconds. Now I have a requirement that, within the same function, a piece of code should run every <strong>60</strong> seconds, independent of the 10-second execution. What would be the best way to achieve it? Thanks.</p> <p>Sample code:</p> <pre><code>import time def hello(): while True: print &quot;Hello World&quot; time.sleep(10) #Do something after every 60 seconds #Do something after every 86400 seconds </code></pre>
<p>As I read it, you have three independent periodic tasks. One approach is to define three independent workers and have each run every X seconds. (By the way, you should move to Python 3; Python 2.7 is at end of life.)</p> <pre><code>from multiprocessing import Pool, current_process import time def f(seconds): while True: print(&quot;%s runs every %d seconds&quot; % (current_process().name, seconds)) time.sleep(seconds) if __name__ == '__main__': p = Pool(3) print([w.name for w in p._pool]) p.map(f, [10, 60, 86400]) </code></pre>
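If spawning processes is overkill, the question's single loop can do it, as its comments suggest: keep a tick counter and run the slower blocks every 6th tick (60 s) and every 8640th tick (86400 s). A runnable sketch with the sleep shortened so it finishes quickly:

```python
import time

def hello(ticks, base_sleep=0.0):
    # one loop, one counter; every tick stands for one 10-second period
    fired_60s = []
    for i in range(1, ticks + 1):
        # the "Hello World" work goes here, on every tick
        if i % 6 == 0:            # 6 ticks * 10 s = every 60 seconds
            fired_60s.append(i)
        if i % 8640 == 0:         # 8640 ticks * 10 s = every 86400 seconds
            pass                  # daily work would go here
        time.sleep(base_sleep)    # stands in for time.sleep(10)
    return fired_60s

print(hello(13))  # [6, 12]
```

Note this drifts slightly over time because `sleep` ignores how long the work itself takes; for exact scheduling, compute the next deadline from `time.time()` instead.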
python|python-2.7
1
1,904,517
53,129,602
Collect a single variable in the list
<pre><code>list = [ "test1", "test2", "test3"] </code></pre> <p>I would like to assign the variables in the list as a string to x, and every new variable will be put together, but not put at the end. I would like <code>"test, test2, test3"</code></p> <p>What should I do for this?</p>
<p>You can use <code>join</code> like this to get desired result if all the list items are <code>str</code></p> <pre><code>data = [ "test1", "test2", "test3"] print(', '.join(data)) # Output : test1, test2, test3 </code></pre> <p>Also, keep in mind it is not a good practice to use Python keywords as variable names.</p> <p>If you want to do the same with a list of other data items that are not string you can use <code>map</code> over the list followed by <code>join</code> like shown below.</p> <pre><code>data = [ "test1", "test2", "test3"] print(', '.join(map(str, data))) # Output : test1, test2, test3 </code></pre> <p>Alternatively, if you are not comfortable in using <code>map</code>, you can use a list comprehension inside join like this</p> <pre><code>data = [ "test1", "test2", "test3"] print(', '.join(str(x) for x in data)) </code></pre>
python|python-2.7
2
1,904,518
53,021,168
given x by x grid of characters, find word and its location
<p>looking for suggestion on algorithm/script to look for word within the list of list below, A word can be read :</p> <ol> <li>horizontally from left to right</li> <li>vertically from top to bottom</li> <li>diagonally from top left to bottom right</li> </ol> <p>return </p> <pre><code>L = [['N', 'D', 'A', 'O', 'E', 'L', 'D', 'L', 'O', 'G', 'B', 'M', 'N', 'E'], ['I', 'T', 'D', 'C', 'M', 'E', 'A', 'I', 'N', 'R', 'U', 'T', 'S', 'L'], ['C', 'L', 'U', 'U', 'E', 'I', 'C', 'G', 'G', 'G', 'O', 'L', 'I', 'I'], ['K', 'M', 'U', 'I', 'M', 'U', 'I', 'D', 'I', 'R', 'I', 'A', 'L', 'T'], ['E', 'U', 'R', 'T', 'U', 'N', 'G', 'S', 'T', 'E', 'N', 'B', 'V', 'H'], ['L', 'I', 'L', 'S', 'L', 'T', 'T', 'U', 'L', 'R', 'U', 'O', 'E', 'I'], ['C', 'M', 'A', 'T', 'E', 'T', 'I', 'U', 'R', 'D', 'R', 'C', 'R', 'U'], ['I', 'D', 'S', 'C', 'A', 'M', 'A', 'G', 'N', 'E', 'S', 'I', 'U', 'M'], ['M', 'A', 'M', 'P', 'D', 'M', 'U', 'I', 'N', 'A', 'T', 'I', 'T', 'I'], ['P', 'C', 'N', 'P', 'L', 'A', 'T', 'I', 'N', 'U', 'M', 'D', 'L', 'L'], ['H', 'Z', 'E', 'M', 'A', 'N', 'G', 'A', 'N', 'E', 'S', 'E', 'I', 'G'], ['M', 'G', 'I', 'T', 'I', 'N', 'R', 'U', 'N', 'O', 'R', 'I', 'T', 'C'], ['R', 'I', 'A', 'N', 'N', 'A', 'M', 'E', 'R', 'C', 'U', 'R', 'Y', 'N'], ['U', 'O', 'T', 'C', 'C', 'R', 'E', 'P', 'P', 'O', 'C', 'E', 'E', 'R']] </code></pre> <p>I am thinking of format like</p> <pre><code>def find_word(filename, word): location = find_word_horizontally(grid, word) found = False if location: found = True print(word, 'was found horizontally (left to right) at position', location) location = find_word_vertically(grid, word) if location: found = True print(word, 'was found vertically (top to bottom) at position', location) location = find_word_diagonally(grid, word) if location: found = True print(word, 'was found diagonally (top left to bottom right) at position', location) if not found: print(word, 'was not found') def find_word_horizontally(grid, word): def find_word_vertically(grid, word): def 
find_word_diagonally(grid, word): </code></pre> <p>expected output:</p> <pre><code>find_word('word_search_1.txt', 'PLATINUM') PLATINUM was found horizontally (left to right) at position (10, 4) find_word('word_search_1.txt', 'LITHIUM') LITHIUM was found vertically (top to bottom) at position (2, 14) find_word('word_search_1.txt', 'MISS') LITHIUM was found vertically (top to bottom) at position (2, 5) </code></pre>
<p>You can do a DFS to find the word like this:</p> <pre><code>class Find: def __init__(self): self.dx = [1, 1, 0] #go down, go diag, go right self.dy = [0, 1, 1] def FindWord(self, grid, word): if len(word) == 0: return False m = len(grid) if m == 0: return False n = len(grid[0]) if n == 0: return False for i in range(m): for j in range(n): for d in range(3): #try all 3 directions if self.Helper(grid, word, i, j, 0, m, n, d): print("Found word at " + str(i) + "," + str(j)) return True return False def Helper(self, grid, word, x, y, k, rows, cols, direction): if k == len(word): return True if grid[x][y] != word[k]: return False new_x = x + self.dx[direction] new_y = y + self.dy[direction] return (self.InBound(new_x, new_y, rows, cols) and self.Helper(grid, word, new_x, new_y, k + 1, rows, cols, direction)) def InBound(self, i, j, rows,cols): return (i &gt;= 0 and i &lt; rows and j &gt;= 0 and j &lt; cols) L = [['N', 'D', 'A', 'O', 'E', 'L', 'D', 'L', 'O', 'G', 'B', 'M', 'N', 'E'], ['I', 'T', 'D', 'C', 'M', 'E', 'A', 'I', 'N', 'R', 'U', 'T', 'S', 'L'], ['C', 'L', 'U', 'U', 'E', 'I', 'C', 'G', 'G', 'G', 'O', 'L', 'I', 'I'], ['K', 'M', 'U', 'I', 'M', 'U', 'I', 'D', 'I', 'R', 'I', 'A', 'L', 'T'], ['E', 'U', 'R', 'T', 'U', 'N', 'G', 'S', 'T', 'E', 'N', 'B', 'V', 'H'], ['L', 'I', 'L', 'S', 'L', 'T', 'T', 'U', 'L', 'R', 'U', 'O', 'E', 'I'], ['C', 'M', 'A', 'T', 'E', 'T', 'I', 'U', 'R', 'D', 'R', 'C', 'R', 'U'], ['I', 'D', 'S', 'C', 'A', 'M', 'A', 'G', 'N', 'E', 'S', 'I', 'U', 'M'], ['M', 'A', 'M', 'P', 'D', 'M', 'U', 'I', 'N', 'A', 'T', 'I', 'T', 'I'], ['P', 'C', 'N', 'P', 'L', 'A', 'T', 'I', 'N', 'U', 'M', 'D', 'L', 'L'], ['H', 'Z', 'E', 'M', 'A', 'N', 'G', 'A', 'N', 'E', 'S', 'E', 'I', 'G'], ['M', 'G', 'I', 'T', 'I', 'N', 'R', 'U', 'N', 'O', 'R', 'I', 'T', 'C'], ['R', 'I', 'A', 'N', 'N', 'A', 'M', 'E', 'R', 'C', 'U', 'R', 'Y', 'N'], ['U', 'O', 'T', 'C', 'C', 'R', 'E', 'P', 'P', 'O', 'C', 'E', 'E', 'R']] inst = Find() inst.FindWord(L, "TUI") </code></pre> <p>The output 
in this case will be "Found Word at 1,1"</p>
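One caveat with the recursion above: after matching the last letter it still requires one more in-bounds cell, so a word whose final letter sits on the grid border is reported as missing. A compact variant that bounds-checks the end coordinate up front avoids that (the function and demo grid here are my own, not from the question):

```python
def find_word(grid, word):
    """Return (row, col) of the word's first letter, or None if absent.

    Scans left-to-right, top-to-bottom, and top-left-to-bottom-right,
    the three directions allowed in the question.
    """
    rows, cols = len(grid), len(grid[0])
    directions = [(0, 1), (1, 0), (1, 1)]  # right, down, diagonal
    for r in range(rows):
        for c in range(cols):
            for dr, dc in directions:
                # Reject starts whose final letter would fall off the grid.
                if r + dr * (len(word) - 1) >= rows or c + dc * (len(word) - 1) >= cols:
                    continue
                if all(grid[r + dr * k][c + dc * k] == word[k] for k in range(len(word))):
                    return (r, c)
    return None

# Tiny demo grid: CAT across the top, COW down the left, CAG on the diagonal.
demo = [list("CAT"), list("OAO"), list("WTG")]
```

Words ending exactly on the border ("CAT", "COW" here) are found, which the extra-cell recursion would reject.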
python
0
1,904,519
65,218,650
Create a barplot using vispy Rectangle
<p>I'm trying to create a bar chart using VisPy, since I want to use OpenGL because of a large data set.</p> <p>From the VisPy docs I found the visuals.Rectangle class, which provides a &quot;bar&quot;. Unfortunately the class takes only one set of points, so I tried to create multiple rectangles in a loop. The plot already slows down with only a few rectangles; 100-1000 is already really bad. Obviously that's the wrong way to build the chart, which brings me here. How can I create multiple rectangles without adding every rectangle separately?</p> <p>Here's a snippet:</p> <pre><code>import sys import numpy as np from vispy import scene, app canvas = scene.SceneCanvas(keys='interactive') canvas.size = 600, 600 canvas.show() grid = canvas.central_widget.add_grid() view = grid.add_view(row=0, col=0) view.camera = scene.PanZoomCamera(rect=(-1,-1,10,10)) for i in range(10): rec = scene.visuals.Rectangle(center=(i, i), height=1, width=0.5, color='r') view.add(rec) gl = scene.visuals.GridLines(parent=view.scene) if __name__ == '__main__' and sys.flags.interactive == 0: app.run() </code></pre>
<p>As you've discovered, there is no built-in bar chart for VisPy (it would be a welcome contribution to the plotting API). I think the easiest way to do it would be to make a MeshVisual that contains multiple rectangles. I don't have the code to do this, but it would be the starting point.</p> <p>Generally, it is not recommended to create a lot of Visuals, as it will hurt performance. VisPy (and OpenGL) will draw each visual serially, one at a time, and having many of them can really hurt your frame rate.</p> <p>Edit: you could probably base this on the HistogramVisual: <a href="https://github.com/vispy/vispy/blob/master/vispy/visuals/histogram.py" rel="nofollow noreferrer">https://github.com/vispy/vispy/blob/master/vispy/visuals/histogram.py</a></p>
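A sketch of the geometry side of that suggestion: build two triangles per bar and hand the whole batch to a single mesh visual. The `bar_mesh` helper is hypothetical (not part of VisPy), and the exact `Mesh` constructor arguments should be checked against the VisPy docs for your version; some versions want an `(N, 3)` vertex array, in which case append a zero z-column:

```python
import numpy as np

def bar_mesh(centers_x, heights, width=0.5):
    """Build one quad (two triangles) per bar for a single mesh draw call."""
    verts, faces = [], []
    for i, (cx, h) in enumerate(zip(centers_x, heights)):
        x0, x1 = cx - width / 2.0, cx + width / 2.0
        base = 4 * i  # four vertices per bar
        verts += [(x0, 0.0), (x1, 0.0), (x1, h), (x0, h)]
        faces += [(base, base + 1, base + 2), (base, base + 2, base + 3)]
    return np.asarray(verts, dtype=np.float32), np.asarray(faces, dtype=np.uint32)

verts, faces = bar_mesh(centers_x=range(10), heights=range(1, 11))
# Then, roughly (argument names to be verified against the VisPy docs):
# scene.visuals.Mesh(vertices=verts, faces=faces, color='r', parent=view.scene)
```

One draw call for all bars is what recovers the performance lost to hundreds of individual RectangleVisuals.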
python|opengl|vispy
0
1,904,520
65,437,995
Python, exit While loop with User Input using multithreading (cntrl+c wont work)
<p>I have a system utilization function, monitor(), that I want to run until the user stops it, by typing the number Zero, exit_collecting(). I don't want to abruptly end the program because that would negate the subsequent code.<br /> I tried running the two functions using multithreading, exit_collecting() and monitor(), letting monitor() run until the user stops it by typing Zero, exit_collecting().</p> <p>My code below threw a pile of tracebacks. I am a newbie at this, any ideas, any help would be great. Thank you. BTW I originally attempted using &quot;try with an except for KeyboardInterrupt&quot;, but when using the IDLE (Spyder), the ctrl-c combo doesn't work (its assigned to &quot;copy&quot;), ctrl-c didn't work when I ran it from the console in Linux either.</p> <pre><code>def exit_collecting(): try: val = int(input(&quot;Type 0 to exit data collection mode&quot;)) if val == 0: flag = 0 except: print(val,&quot;typo, enter 0 to exit data collection mode&quot;) def monitor(): import time import psutil flag = 1 while flag &gt; 0: time.sleep(1) print(&quot;cpu usuage:&quot;,psutil.cpu_percent()) from multiprocessing import Process p1 = Process(target=exit_collecting) p1.start() p2 = Process(target=monitor) p2.start() p1.join() p2.join() </code></pre>
<p>Your version with multiprocessing is doomed to failure since each process has its own memory space and does not share the same variable <code>flag</code>. It can be done with multiprocessing, but you must use an implementation of <code>flag</code> that uses shared memory.</p> <p>A solution using threading is far simpler, and your code should work if you make just one change: you neglected to declare <code>flag</code> as global. This is required to ensure that functions <code>exit_collecting</code> and <code>monitor</code> are modifying the same variable. Without these declarations, each function is modifying a <em>local</em> <code>flag</code> variable:</p> <pre><code>def exit_collecting(): global flag try: val = int(input(&quot;Type 0 to exit data collection mode&quot;)) if val == 0: flag = 0 except: print(val,&quot;typo, enter 0 to exit data collection mode&quot;) def monitor(): global flag import time import psutil flag = 1 while flag &gt; 0: time.sleep(1) print(&quot;cpu usage:&quot;,psutil.cpu_percent()) from threading import Thread p1 = Thread(target=exit_collecting) p1.start() p2 = Thread(target=monitor) p2.start() p1.join() p2.join() </code></pre> <p>But perhaps the above code can be simplified by making the <code>monitor</code> thread a <em>daemon</em> thread, that is, a thread that will automatically terminate when all non-daemon threads terminate (as currently written, it appears that function <code>monitor</code> can be terminated at any point in processing). In any case, the main thread can perform the function that <code>exit_collecting</code> was performing.
And there is no reason why you now can't use a keyboard interrupt (as long as you are waiting on an <code>input</code> statement in the main thread):</p> <pre><code>def monitor(): import time import psutil while True: time.sleep(1) print(&quot;cpu usage:&quot;,psutil.cpu_percent()) from threading import Thread p = Thread(target=monitor) p.daemon = True p.start() try: input(&quot;Hit enter to halt (or ctrl-C) ...&quot;) except KeyboardInterrupt: pass &quot;&quot;&quot; When the main thread finishes, all non-daemon threads have completed and therefore the daemon monitor thread will terminate. &quot;&quot;&quot; </code></pre> <p><strong>Update: Allow the <code>monitor</code> thread to terminate gracefully and allow keyboard interrupt</strong></p> <p>I have simplified the logic a bit to use a simple global flag, <code>terminate_flag</code>, initially <code>False</code>, which is only read by the <code>monitor</code> thread and therefore needs no explicit <code>global</code> declaration there:</p> <pre><code>terminate_flag = False def monitor(): import time import psutil while not terminate_flag: time.sleep(1) print(&quot;cpu usage:&quot;, psutil.cpu_percent()) from threading import Thread p = Thread(target=monitor) p.start() try: input('Hit enter or ctrl-c to terminate ...') except KeyboardInterrupt: pass terminate_flag = True # tell monitor it is time to terminate p.join() # wait for monitor to gracefully terminate </code></pre>
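A `threading.Event` is the idiomatic stand-in for such a flag: it needs no `global` statement, and its `wait()` doubles as an interruptible sleep. A minimal sketch (the short intervals are only to keep the demo fast, and the psutil call is replaced by a counter):

```python
import threading
import time

stop = threading.Event()
samples = []

def monitor(interval=0.01):
    # Event.wait() sleeps like time.sleep() but returns True, and wakes
    # early, as soon as the event is set.
    while not stop.wait(interval):
        samples.append("tick")  # stand-in for psutil.cpu_percent()

t = threading.Thread(target=monitor, daemon=True)
t.start()
time.sleep(0.05)   # main thread does other work (e.g. waits on input())
stop.set()         # replaces `terminate_flag = True`
t.join(timeout=1)
```

With `Event`, stopping the monitor takes effect within one `interval` rather than up to a full second, since the wait is cut short when the event is set.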
python|while-loop|parallel-processing|python-multithreading|keyboardinterrupt
1
1,904,521
72,036,671
How to get the path or the image object from a .jpg uploaded to a server using FastAPI's UploadFile?
<p>I'm importing an image using the following code:</p> <pre><code>files = { 'file': open(r'C:/Users/jared/Deblur Project/curl requests/test.jpg', 'rb'), } response = requests.post('http://localhost:5000/net/image/evaluate_local', files=files) print(response) </code></pre> <p>This sends 'test.jpg' over to the following route:</p> <pre><code>@app.post(&quot;/net/image/evaluate_local&quot;) async def get_net_image_evaluate_local(file: UploadFile = File(...)): image_path = file.read() threshold = 0.75 model_path = &quot;model.tflite&quot; prediction = analyze_images(model_path, image_path, threshold) return prediction </code></pre> <p>Obviously, <code>image_path = file.read()</code> is not working, but it's showcasing what I'm trying to do. I need to provide an image path to the <code>analyze_images()</code> function, but I'm not exactly sure how to do so.</p> <p>If I cannot provide it as a path, I am also trying to provide it as raw bytes array for the model to use.</p> <pre><code>image_path = file.read() </code></pre> <p>returns</p> <pre><code>b'\xff\xd8\xff\xe0\x00\x10JFIF\x00\x01\x01\x00\x00\x01\x00\x01\x00\x00\xff\xe2\x02(ICC_PROFILE\x00\x01\x01\x00\x00\x02\x18\x00\x00\x00\x00\x02\x10\x00\x00mntrRGB XYZ \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00acsp\x00\x00\x00\x00\x00\x00... </code></pre> <p>which I am also unsure how to work with.</p> <p>Anyone have any advice on how to proceed?</p>
<p>There is no immediate file path because when you use FastAPI's/Starlette's <a href="https://fastapi.tiangolo.com/tutorial/request-files/#file-parameters-with-uploadfile" rel="nofollow noreferrer"><code>UploadFile</code></a>,</p> <blockquote> <p>It uses a &quot;spooled&quot; file:</p> <ul> <li>A file stored in memory up to a maximum size limit, and after passing this limit it will be stored in disk.</li> </ul> </blockquote> <p>The underlying implementation is actually from Python's standard <a href="https://docs.python.org/3/library/tempfile.html#module-tempfile" rel="nofollow noreferrer"><code>tempfile</code> module</a> for generating temporary files and folders. See the <a href="https://docs.python.org/3/library/tempfile.html#module-tempfile" rel="nofollow noreferrer"><code>tempfile.SpooledTemporaryFile</code></a> section:</p> <blockquote> <p>This class operates exactly as <a href="https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryFile" rel="nofollow noreferrer"><code>TemporaryFile()</code></a> does, except that data is spooled in memory until the file size exceeds <code>max_size</code>, or until the file’s <code>fileno()</code> method is called, at which point the contents are written to disk and operation proceeds as with <a href="https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryFile" rel="nofollow noreferrer"><code>TemporaryFile()</code></a>.</p> </blockquote> <p>If you can work instead with the raw bytes of the image (<a href="https://stackoverflow.com/a/72037114/2745495">as what you did using <code>.read()</code> in your other answer</a>), then I think that's the better approach, as most image processing starts with the images bytes anyway. 
(Make sure to use <code>await</code> appropriately if you are calling the async methods!)</p> <p>An alternative is, if a function expects a &quot;file-like&quot; object, then you can pass in the <code>UploadFile.file</code> object itself, which is the <code>SpooledTemporaryFile</code> object, which is the</p> <blockquote> <p>... actual Python file that you can pass directly to other functions or libraries <strong>that expect a &quot;file-like&quot; object</strong>.</p> </blockquote> <p>If you <em>really</em> need a file <em>on disk</em> and get a path to it, you can write the contents to a <a href="https://docs.python.org/3/library/tempfile.html#tempfile.NamedTemporaryFile" rel="nofollow noreferrer"><code>NamedTemporaryFile</code>'</a> which</p> <blockquote> <p>... is guaranteed to have a visible name in the file system (on Unix, the directory entry is not unlinked). That name can be retrieved from the name attribute of the returned file-like object.<br /> ...<br /> The returned object is always a file-like object whose file attribute is the underlying true file object. This file-like object can be used in a with statement, just like a normal file.</p> </blockquote> <pre><code>@app.post(&quot;/net/image/evaluate_local&quot;) async def get_net_image_evaluate_local(file: UploadFile = File(...)): file_suffix = &quot;&quot;.join(file.filename.partition(&quot;.&quot;)[1:]) with NamedTemporaryFile(mode=&quot;w+b&quot;, suffix=file_suffix) as file_on_disk: file_contents = await file.read() file_on_disk.write(file_contents) image_path = file_on_disk.name print(image_path) threshold = 0.75 model_path = &quot;model.tflite&quot; prediction = analyze_images(model_path, image_path, threshold) return prediction </code></pre> <p>On my macOS, <code>image_path</code> prints out something like</p> <pre><code>/var/folders/3h/pdjwtnlx4p13chnw21rvwbtw0000gp/T/tmp1v52fm95.png </code></pre> <p>and that file would be available as long as the file is not yet closed.</p>
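The lifetime caveat in that last sentence is easy to verify with the standard library alone; the named file exists only while the context manager is open:

```python
import os
import tempfile

with tempfile.NamedTemporaryFile(mode="w+b", suffix=".jpg") as tmp:
    tmp.write(b"\xff\xd8\xff\xe0")  # JPEG magic bytes as stand-in content
    tmp.flush()                     # make sure the bytes hit the disk
    path = tmp.name                 # what you would hand to analyze_images
    exists_while_open = os.path.exists(path)

exists_after_close = os.path.exists(path)  # gone: closing deletes the file
```

One platform note from the `tempfile` docs: on Unix the name can be used to open the file a second time while it is still open, but on Windows it cannot, so any function that reopens the path must run inside the `with` block on Unix and may need `delete=False` handling on Windows.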
python|fastapi
0
1,904,522
68,713,976
How to input a mix feature into a LSTM model?
<p>Assume, I have two features: x1 and x2. Here, <strong>x1 is a vector of word index</strong> and <strong>x2 is a vector of numerical values</strong>. The length of x1 and x2 are equal to 50. There are 6000 rows for each x1 and x2. I combine these two into one such as</p> <pre><code>X = np.array([np.row_stack((x1[i], x2[i])) for i in range(x1.shape[0])]) </code></pre> <p>My initial LSTM model is</p> <pre><code>X_input = Input(shape = (50, 2), name = &quot;X_seq&quot;) X_hidden1 = LSTM(units = 256, dropout = 0.25, return_sequences = True)(X_input) X_hidden2 = LSTM(units = 256, dropout = 0.25, return_sequences = True)(X_hidden1) X_hidden3 = LSTM(units = 128, dropout = 0.25)(X_hidden2) X_dense = Dense(units = 128, activation = 'relu')(X_hidden3) X_dense_dropout = Dropout(0.25)(X_dense) concat = tf.keras.layers.concatenate(inputs = [X_dense_dropout]) output = Dense(units = num_category, activation = 'softmax', name = &quot;output&quot;)(concat) model = tf.keras.Model(inputs = [X_input], outputs = [output]) model.compile(optimizer = 'adam', loss = &quot;sparse_categorical_crossentropy&quot;, metrics = [&quot;accuracy&quot;]) </code></pre> <p>However, I know I need to have an embedding layer to take care of <code>X[0,:]</code> right below the Input layer. 
Thus, I modified my above code to</p> <pre><code>X_input = Input(shape = (50, 2), name = &quot;X_seq&quot;) x1_embedding = Embedding(input_dim = max_pages, output_dim = embedding_dim, input_length = max_length)(X_input[0,:]) X_concat = tf.keras.layers.concatenate(inputs = [x1_embedding, X_input[1,:]]) X_hidden1 = LSTM(units = 256, dropout = 0.25, return_sequences = True)(X_concat) X_hidden2 = LSTM(units = 256, dropout = 0.25, return_sequences = True)(X_hidden1) X_hidden3 = LSTM(units = 128, dropout = 0.25)(X_hidden2) X_dense = Dense(units = 128, activation = 'relu')(X_hidden3) X_dense_dropout = Dropout(0.25)(X_dense) concat = tf.keras.layers.concatenate(inputs = [X_dense_dropout]) output = Dense(units = num_category, activation = 'softmax', name = &quot;output&quot;)(concat) model = tf.keras.Model(inputs = [X_input], outputs = [output]) model.compile(optimizer = 'adam', loss = &quot;sparse_categorical_crossentropy&quot;, metrics = [&quot;accuracy&quot;]) </code></pre> <p>Python shows an error</p> <pre><code>ValueError: A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 2, 15), (None, 2)] </code></pre> <p>any suggestion? many thanks</p>
<p>The problem is that the input to the <code>concat</code> layer has different dimensions and hence we can't <code>concat</code> them. To overcome this issue we could just reshape the input to the <code>concat</code> layer using <code>tf.keras.layers.Reshape</code> like below and the rest will be same.</p> <pre><code>reshaped_input = tf.keras.layers.Reshape((-1,1))(X_input[:, 1]) X_concat = tf.keras.layers.concatenate(inputs = [x1_embedding, reshaped_input]) </code></pre>
python|tensorflow|machine-learning|keras|deep-learning
0
1,904,523
62,613,851
Pandas dataframe unstack data and create new columns
<p>I have two sets of stacked data as follows:</p> <pre><code> set n value_1 value_2 0 1 1024 25942.6 25807.8 ----&gt; first set starts here 1 1 2048 72000.5 71507.9 2 1 4096 161095.0 160303.0 3 1 8192 356419.0 354928.0 4 1 16384 793562.0 788666.0 5 1 32768 1914250.0 1889850.0 6 1 65536 3490860.0 3479040.0 7 1 131072 8096130.0 8036290.0 8 1 262144 16616500.0 16525400.0 11 2 1024 35116.3 35032.5 ----&gt; second set starts here 12 2 2048 98783.8 98507.0 13 2 4096 230813.0 230206.0 14 2 8192 521754.0 518052.0 15 2 16384 1046870.0 1040990.0 16 2 32768 2118340.0 2112680.0 17 2 65536 4693000.0 4673130.0 18 2 131072 9960240.0 9892870.0 19 2 262144 21230600.0 21068700.0 </code></pre> <p>How can I unstack them so that I get two new columns <code>value_1_2</code>, and <code>value_2_2</code>, which correspond to the second set of data and match to the first one based on the value of <code>n</code>?</p> <p>This is what I want:</p> <pre><code>n value_1 value_2 value_1_2 value_2_2 1024 25942.6 25807.8 35116.3 35032.5 2048 72000.5 71507.9 98783.8 98507 4096 161095 160303 230813 230206 8192 356419 354928 521754 518052 16384 793562 788666 1046870 1040990 32768 1914250 1889850 2118340 2112680 65536 3490860 3479040 4693000 4673130 131072 8096130 8036290 9960240 9892870 262144 16616500 16525400 21230600 21068700 </code></pre>
<p>First we create a list of <code>dfs</code> by using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a> to group the dataframe on the column <code>Set</code>, then for each group in the dataframe we use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.add_suffix.html" rel="nofollow noreferrer"><code>DataFrame.add_suffix</code></a> to add the group identifier to each of the columns:</p> <p>Finally, we use <a href="https://docs.python.org/3/library/functools.html" rel="nofollow noreferrer"><code>functools.reduce</code></a> to reduce the list of dataframes <code>dfs</code> to the single <em>unstacked dataframe</em> by applying <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.merge.html" rel="nofollow noreferrer"><code>pd.merge</code></a> operation on the consecutive dataframe on the column <code>n</code>.</p> <pre><code>from functools import reduce dfs = [ g.drop('set', 1).add_suffix(f'_{k}').rename({f'n_{k}': 'n'}, axis=1) for k, g in df.groupby('set') ] df1 = reduce(lambda x, y: pd.merge(x, y, on='n'), dfs) </code></pre> <p>Result:</p> <pre><code># print(df1) n value_1_1 value_2_1 value_1_2 value_2_2 0 1024 25942.6 25807.8 35116.3 35032.5 1 2048 72000.5 71507.9 98783.8 98507.0 2 4096 161095.0 160303.0 230813.0 230206.0 3 8192 356419.0 354928.0 521754.0 518052.0 4 16384 793562.0 788666.0 1046870.0 1040990.0 5 32768 1914250.0 1889850.0 2118340.0 2112680.0 6 65536 3490860.0 3479040.0 4693000.0 4673130.0 7 131072 8096130.0 8036290.0 9960240.0 9892870.0 8 262144 16616500.0 16525400.0 21230600.0 21068700.0 </code></pre>
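Since every group shares the same value columns, the merge-reduce can also be expressed as a single `set_index` + `unstack`, which pivots the `set` level into column suffixes in one shot. A sketch with a small stand-in frame:

```python
import pandas as pd

df = pd.DataFrame({
    "set":     [1, 1, 2, 2],
    "n":       [1024, 2048, 1024, 2048],
    "value_1": [25942.6, 72000.5, 35116.3, 98783.8],
    "value_2": [25807.8, 71507.9, 35032.5, 98507.0],
})

wide = df.set_index(["n", "set"]).unstack("set")   # columns: (value, set)
wide.columns = [f"{value}_{s}" for value, s in wide.columns]
wide = wide.reset_index()
```

The column order differs slightly from the merge version (values are grouped first, then sets), but the content is the same and no `reduce` loop is needed.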
python|pandas|dataframe|group-by
2
1,904,524
67,227,088
Ignore a condition if the value is not present in pandas
<p>Hi i have following code :</p> <pre><code>def label_exception (row): if row['Dividend with no set frequency'] &gt;= 1 : return 'ExceptionCase' if (row['count'] != 12) &amp; (row['Monthly dividend'] &gt; 1) : return 'ExceptionCase ' if (row['count'] != 4) &amp; (row['Quarterly dividend'] &gt; 1) : return 'ExceptionCase' if (row['count'] != 2) &amp; (row['Semi-annual dividend'] &gt; 1): return 'ExceptionCase' if (row['count'] != 1) &amp; (row['Annual dividend'] &gt; 1): return 'ExceptionCase' if row['Semi-annual dividend'] is not in : else: return 'NotanExceptionCase' Final_Output['exception'] = Final_Output.apply (lambda row: label_exception(row), axis=1) </code></pre> <p>Now if any of the above row value is not present in the data frame its giving an Key error What i am trying to do is if the value is not present it should ignore and continue to next condition instead of giving an key error</p> <p>How can this be put in place</p>
<p>You should use try/except, that is:</p> <pre><code>try: if row['something'] &gt; something: return 'something' except KeyError: pass </code></pre> <p>You should also look into raising exceptions, rather than simply returning them as strings.</p>
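An alternative that avoids raising at all: a pandas `Series` (like a `dict`) has a `.get` method that returns a default when the label is missing, so each condition can supply a harmless fallback. A sketch with two of the question's conditions, tested here on plain dicts, which share the same `.get` behaviour:

```python
def label_exception(row):
    # .get(key, 0) yields 0 when the column is absent, so the
    # comparison is simply False instead of raising KeyError.
    if row.get("Dividend with no set frequency", 0) >= 1:
        return "ExceptionCase"
    if row.get("count") != 12 and row.get("Monthly dividend", 0) > 1:
        return "ExceptionCase"
    return "NotanExceptionCase"
```

The remaining conditions extend the same way; missing columns simply fail their comparison and fall through to the next check.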
python|exception|key
0
1,904,525
67,208,806
Pivot dataframe using 2 columns
<p>Thanks for taking the time to read this :)</p> <p>I have a data frame that contains 3 columns:</p> <pre><code>account opportunity product c1 o1 p1 c1 o1 p2 c1 o1 p3 c2 o2 p2 c2 o2 p3 c2 o4 p1 </code></pre> <p>I want to pivot it to something like this:</p> <pre><code>account opportunity product c1 o1 [p1,p2,p3] c2 o2 [p3,p3] c2 o4 [p1] </code></pre> <p>so that I am able to one-hot encode the <code>product</code> field like so</p> <pre><code>df= df.join(pd.DataFrame(mlb.fit_transform(issues.pop('product')), columns=mlb.classes_, index=df.index)) </code></pre> <p>The final output will look like this:</p> <pre><code> account opportunity p1 p2 p3 c1 o1 1 1 1 c2 o2 0 1 1 c2 o4 1 0 0 </code></pre> <p>I have not been able to find out the appropriate way to do the first transformation... Could anyone please help me in this regard? Is it possible through df.pivot?</p>
<p>This is crosstab with multiple columns:</p> <pre><code>pd.crosstab( index=[df[&quot;account&quot;], df[&quot;opportunity&quot;]], columns=df[&quot;product&quot;] ).reset_index() </code></pre> <pre><code>product account opportunity p1 p2 p3 0 c1 o1 1 1 1 1 c2 o2 0 1 1 2 c2 o4 1 0 0 </code></pre>
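The same table can be produced with a `groupby().size().unstack()` chain, which some readers find easier to extend (the `fill_value=0` argument handles the missing account/opportunity/product combinations). A sketch on the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    "account":     ["c1", "c1", "c1", "c2", "c2", "c2"],
    "opportunity": ["o1", "o1", "o1", "o2", "o2", "o4"],
    "product":     ["p1", "p2", "p3", "p2", "p3", "p1"],
})

out = (
    df.groupby(["account", "opportunity", "product"]).size()
      .unstack("product", fill_value=0)   # one 0/1 column per product
      .reset_index()
)
```

Both routes skip the intermediate list-of-products step entirely, which is usually preferable to building lists and then one-hot encoding them.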
python|pandas|dataframe
1
1,904,526
10,917,245
Numpy: ImportError: cannot import name TestCase
<p>I installed numpy from </p> <pre><code>sudo apt-get install numpy </code></pre> <p>Then in python2.7 on importing numpy with</p> <pre><code>import numpy </code></pre> <p>I get this error</p> <pre><code>Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 137, in &lt;module&gt; import add_newdocs File "/usr/local/lib/python2.7/dist-packages/numpy/add_newdocs.py", line 9, in &lt;module&gt; from numpy.lib import add_newdoc File "/usr/local/lib/python2.7/dist-packages/numpy/lib/__init__.py", line 4, in &lt;module&gt; from type_check import * File "/usr/local/lib/python2.7/dist-packages/numpy/lib/type_check.py", line 8, in &lt;module&gt; import numpy.core.numeric as _nx File "/usr/local/lib/python2.7/dist-packages/numpy/core/__init__.py", line 45, in &lt;module&gt; from numpy.testing import Tester File "/usr/local/lib/python2.7/dist-packages/numpy/testing/__init__.py", line 8, in &lt;module&gt; from unittest import TestCase ImportError: cannot import name TestCase </code></pre> <p>I then removed Numpy and Scipy. Then again installed from the github repo. But I still get the same error. Please help.</p> <p>Thank You.</p>
<p>I suspect that you have a local file called <code>unittest.py</code> that is getting imported instead of the standard module.</p>
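A quick way to confirm the diagnosis is to ask Python where the module actually comes from; the stdlib copy lives under the interpreter's `lib` directory, while a shadowing file resolves to your working directory:

```python
import unittest

# If this prints something like /usr/lib/python2.7/unittest/__init__.py the
# stdlib won; if it prints ./unittest.py, rename that file (and remove any
# stale unittest.pyc sitting next to it).
print(unittest.__file__)
```

The same check works for any suspected shadowing, e.g. a local `numpy.py` hiding the installed package.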
python|numpy|scipy
20
1,904,527
63,469,889
decision trees to pandas
<p>I have spent all day trying to implement decision trees on the Titanic dataset using TensorFlow.</p> <pre><code>fc = tf.feature_column CATEGORICAL_COLUMNS = ['Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked'] NUMERIC_COLUMNS = ['Age', 'Fare'] def one_hot_cat_column(feature_name, vocab): return tf.feature_column.indicator_column(tf.feature_column.\ categorical_column_with_vocabulary_list(feature_name,vocab)) feature_columns = [] for feature_name in CATEGORICAL_COLUMNS: # Need to one-hot encode categorical features. vocabulary = train[feature_name].unique() feature_columns.append(one_hot_cat_column(feature_name, vocabulary)) for feature_name in NUMERIC_COLUMNS: feature_columns.append(tf.feature_column.\ numeric_column(feature_name,dtype=tf.float32)) </code></pre> <p>Everywhere I look, only this method of implementation is shown. This block implements the &quot;one-hot encoder&quot;. How can I convert the result of these manipulations into a pandas table? I need it for easy viewing and a better understanding of how these lines work.</p>
<p>Suppose y_test holds your predictions and y is the column specified for submission; then execute <code>df = pd.DataFrame(data=y_test, columns=['y'])</code>, where columns gives the column name, and simply call <code>df.to_csv</code> to write the predictions dataframe to a CSV file and submit it.</p>
python|pandas|dataframe|tensorflow
1
1,904,528
56,707,661
Manually Assign Dropout Layer in Keras
<p>I'm trying to learn the inner workings of dropout regularization in NN. I'm largely working from "Deep Learning with Python" by Francois Chollet.</p> <p>Say I'm using the IMDB movie review sentiment data and building a simple model like below:</p> <pre><code># download IMDB movie review data # keeping only the first 10000 most freq. occurring words to ensure manageble sized vectors from keras.datasets import imdb (train_data, train_labels), (test_data, test_labels) = imdb.load_data( num_words=10000) # prepare the data import numpy as np # create an all 0 matrix of shape (len(sequences), dimension) def vectorize_sequences(sequences, dimension=10000): results = np.zeros((len(sequences), dimension)) for i, sequence in enumerate(sequences): # set specific indices of results[i] = 1 results[i, sequence] = 1. return results # vectorize training data x_train = vectorize_sequences(train_data) # vectorize test data x_test = vectorize_sequences(test_data) # vectorize response labels y_train = np.asarray(train_labels).astype('float32') y_test = np.asarray(test_labels).astype('float32') # build a model with L2 regularization from keras import regularizers from keras import models from keras import layers model = models.Sequential() model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu', input_shape=(10000,))) model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) </code></pre> <p>The book gives an example of manually setting random dropout weights using the line below:</p> <pre><code># at training time, zero out a random fraction of the values in the matrix layer_output *= np.random.randint(0, high=2, size=layer_output.shape) </code></pre> <p>How would I 1) actually integrate that into my model and 2) how would I remove the dropout at test time?</p> <p>EDIT: I'm aware of the integrated method of using dropout like the line below, I'm actually looking for a way 
to implement the above manually</p> <pre><code>model.add(layers.Dropout(0.5)) </code></pre>
<p>This can be implemented using a Lambda layer. </p> <pre><code>from keras import backend as K def dropout(input): training = K.learning_phase() if training == 1: input *= K.cast(K.random_uniform(K.shape(input), minval=0, maxval=2, dtype='int32'), dtype='float32') input /= 0.5 return input def get_model(): model = models.Sequential() model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu', input_shape=(10000,))) model.add(layers.Dense(16, kernel_regularizer=regularizers.l2(0.001), activation='relu')) model.add(layers.Lambda(dropout)) # add dropout using Lambda layer model.add(layers.Dense(1, activation='sigmoid')) print(model.summary()) return model K.set_learning_phase(1) model = get_model() model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=5) weights = model.get_weights() K.set_learning_phase(0) model = get_model() model.set_weights(weights) print('model prediction is {}, label is {} '.format(model.predict(x_test[0][None]), y_test[0])) </code></pre> <p>model prediction is [[0.1484453]], label is 0.0</p>
python|tensorflow|keras
2
1,904,529
17,752,111
How can I limit my sprite update rate?
<p>I'm creating a small RPG world with pygame. So far so good, I load my maps with Tiled TMX entity manager, and everything is great. Except one thing. When my main character moves, the sprite animation is going too damn fast and I don't know what to do to avoid that. Here is my update() code :</p> <pre><code>def update(self): self.index += 1 if self.index &gt;= 2: self.index = 0 if self.direction == DIRECTIONS['down']: self.image = self.down_anim[self.index] elif self.direction == DIRECTIONS['up']: self.image = self.up_anim[self.index] elif self.direction == DIRECTIONS['left']: self.image = self.left_anim[self.index] elif self.direction == DIRECTIONS['right']: self.image = self.right_anim[self.index] </code></pre> <p>And my keyboard event management :</p> <pre><code>key=pygame.key.get_pressed() try: event = pygame.event.wait() if event.type == KEYDOWN: if (event.key == K_LEFT): if angus.direction != DIRECTIONS['left']: angus.direction = DIRECTIONS['left'] angus.update() angus.position.x -= 1 elif (event.key == K_RIGHT): if angus.direction != DIRECTIONS['right']: angus.direction = DIRECTIONS['right'] angus.update() angus.position.x += 1 elif (event.key == K_UP): if angus.direction != DIRECTIONS['up']: angus.direction = DIRECTIONS['up'] angus.update() angus.position.y -= 1 elif (event.key == K_DOWN): if angus.direction != DIRECTIONS['down']: angus.direction = DIRECTIONS['down'] angus.update() angus.position.y += 1 </code></pre> <p>I'm using a clock to force 60fps. 
Is there a way I can tell pygame for example : update the sprite but only if it has been more than 1 second since the last update ?<br> Thanks</p> <p>EDIT : Solution :</p> <pre><code>def update(self): self.timer += 1 if self.timer &gt;= self.UPDATE_TIME: self.index += 1 self.timer = 0 if self.index &gt;= 2: self.index = 0 if self.direction == DIRECTIONS['down']: self.image = self.down_anim[self.index] elif self.direction == DIRECTIONS['up']: self.image = self.up_anim[self.index] elif self.direction == DIRECTIONS['left']: self.image = self.left_anim[self.index] elif self.direction == DIRECTIONS['right']: self.image = self.right_anim[self.index] </code></pre>
<p>Keep a separate variable in charge of timing whether or not to increase <code>self.index</code>.</p> <pre><code>## within __init__ self.UPDATE_TIME = 60 # or whatever works self.timer = 0 def update(self): self.timer += 1 if self.timer &gt;= self.UPDATE_TIME: self.timer = 0 self.index += 1 # unmodified below here if self.index &gt;= 2: self.index = 0 if self.direction == DIRECTIONS['down']: self.image = self.down_anim[self.index] elif self.direction == DIRECTIONS['up']: self.image = self.up_anim[self.index] elif self.direction == DIRECTIONS['left']: self.image = self.left_anim[self.index] elif self.direction == DIRECTIONS['right']: self.image = self.right_anim[self.index] </code></pre>
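A time-based variant of the same idea, closer to the "only update after some elapsed time" behaviour the question asks for. This is a pygame-free sketch so the timing logic is runnable on its own; the 250 ms interval and the class name are arbitrary choices. In pygame, `dt_ms` would be the millisecond value returned by `clock.tick(60)`.

```python
class AnimTimer:
    """Advance an animation frame only after interval_ms have accumulated."""

    def __init__(self, n_frames, interval_ms=250):
        self.n_frames = n_frames
        self.interval_ms = interval_ms
        self.elapsed = 0
        self.index = 0

    def update(self, dt_ms):
        # dt_ms is the time since the last call (clock.tick(60) in pygame)
        self.elapsed += dt_ms
        while self.elapsed >= self.interval_ms:
            self.elapsed -= self.interval_ms
            self.index = (self.index + 1) % self.n_frames
        return self.index


timer = AnimTimer(n_frames=2)
frames = [timer.update(16) for _ in range(32)]  # ~60 fps for about half a second
print(frames[-1])  # 0 (the frame flipped at 250 ms and again at 500 ms)
```

The advantage over counting frames is that the animation speed stays constant even if the frame rate drops below 60 fps.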
python|pygame
1
1,904,530
61,070,025
Python minesweeper count adjacent mines bug
<p>I am in the process of coding a minesweeper game in Python. I am having trouble with my countAdjacentMines function. I am trying to count the number of mines around a given cell in a 2d list like this:</p> <pre><code>L = [ ['','',''], ['x','',''], ['x','',''] ] </code></pre> <p>When I try to run my code for row 1 column 2, I should get 1 as there is only 1 mine in the 3x3 around row 1 column 2, but my function returns 2. How do I work around the fact that when a Python 2d list is indexed with a value like L[-1][1], the -1 silently wraps around to the len(lis)-1 position instead of raising an error?</p> <p>Here's my code:</p> <pre><code>def countAdjacentMines(lis,row,col): total = 0 try: if lis[row-1][col-1] == 'x': total+=1 except IndexError: pass try: if lis[row-1][col] == 'x': total+=1 except: pass try: if lis[row-1][col+1] == 'x': total+=1 except: pass try: if lis[row][col-1] == 'x': total+=1 except: pass try: if lis[row][col+1] == 'x': total+=1 except: pass try: if lis[row+1][col-1] == 'x': total+=1 except: pass try: if lis[row+1][col] == 'x': total+=1 except: pass try: if lis[row+1][col+1] == 'x': total+=1 except: pass return total L = [ ['','',''], ['x','',''], ['x','',''] ] print(countAdjacentMines(L,0,1)) </code></pre>
<p>I am not 100% sure I understand the question, but if I understood correctly, you want Python not to go to the last row because of index -1 in the list?</p> <p>If so, try using a max function like this!</p> <pre><code>L[max(0,row-1)][1] </code></pre> <p>EDIT: My old solution was bogus and counted the cells you wanted to ignore so I removed it. Instead, what you could do is add an and to your if where appropriate like so</p> <pre><code>def countAdjacentMines(lis,row,col): total = 0 if ((row &gt; 0 and col &gt; 0) and lis[row-1][col-1] == 'x'): total+=1 if (row &gt; 0 and lis[row-1][col] == 'x'): total+=1 if ((row &gt; 0 and col &lt; len(lis)-1) and lis[row-1][col+1] == 'x'): total+=1 if (col &gt; 0 and lis[row][col-1] == 'x'): total+=1 if (col &lt; len(lis)-1 and lis[row][col+1] == 'x'): total+=1 if ((row &lt; len(lis)-1 and col &gt; 0) and lis[row+1][col-1] == 'x'): total+=1 if (row &lt; len(lis)-1 and lis[row+1][col] == 'x'): total+=1 if ((row &lt; len(lis)-1 and col &lt; len(lis)-1) and lis[row+1][col+1] == 'x'): total+=1 return total L = [ ['','',''], ['x','',''], ['x','',''] ] </code></pre> <p>This will work so long as you have a square matrix, otherwise, you can use len(lis[row]) or something similar.</p> <p>Also, I suggest finding another way than try except pass to handle errors.</p> <p>Fourth time's the charm?</p> <p>LAST EDIT: if you do countAdjacentMines(L,2,0) it outputs 1 because there's no case to count the current cell we're on as it is now.</p>
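A more compact alternative sketch of the same boundary handling: clamping the neighbor window with `range()` so negative indices, and the wrap-around they cause, never occur, and the current cell is skipped explicitly.

```python
def count_adjacent_mines(grid, row, col):
    """Count 'x' cells in the 3x3 window around (row, col), excluding that cell."""
    total = 0
    # clamp the window to valid rows/columns, so no negative index can wrap around
    for r in range(max(0, row - 1), min(len(grid), row + 2)):
        for c in range(max(0, col - 1), min(len(grid[r]), col + 2)):
            if (r, c) != (row, col) and grid[r][c] == 'x':
                total += 1
    return total

L = [['', '', ''],
     ['x', '', ''],
     ['x', '', '']]
print(count_adjacent_mines(L, 0, 1))  # 1
```

Using `len(grid[r])` for the column bound also handles ragged (non-square) grids.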
python|list|2d
3
1,904,531
60,778,749
Why does simplejson think this string is invalid json?
<p>I'm trying to convert a simple JSON string <code>{\n 100: {"a": "b"}\n}</code> to a python object but it's giving me this error: <code>Expecting property name enclosed in double quotes</code></p> <p>Why is it insisting that the name of the attribute be a string?</p> <pre><code>&gt;&gt;&gt; import simplejson &gt;&gt;&gt; my_json = simplejson.loads('{\n 100: {"a": "b"}\n}') Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/Users/myuser/myenv/lib/python2.7/site-packages/simplejson/__init__.py", line 525, in loads return _default_decoder.decode(s) File "/Users/myuser/myenv/lib/python2.7/site-packages/simplejson/decoder.py", line 370, in decode obj, end = self.raw_decode(s) File "/Users/myuser/myenv/lib/python2.7/site-packages/simplejson/decoder.py", line 400, in raw_decode return self.scan_once(s, idx=_w(s, idx).end()) simplejson.errors.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 5 (char 6) </code></pre>
<p>This value is invalid <code>{100: {"a": "b"}}</code>, you need <code>{"100": {"a": "b"}}</code>.</p> <p>The property name there being <code>100</code> needs to be enclosed in double quotes so <code>"100"</code>.</p> <blockquote> <p>Why is it insisting that the name of the attribute be a string?</p> </blockquote> <p>That's how JSON is.</p> <p>You may have been used to be able to write <code>{100: {"a": "b"}}</code> in Javascript or another language, (without double quoting the property name), but you'll still get a parsing error if you try to parse it as JSON in Javascript, e.g.:</p> <pre><code>JSON.parse('{100: {"a": "b"}}') SyntaxError: JSON.parse: expected property name or '}' at line 1 column 2 of the JSON data </code></pre>
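The same rule can be checked against the standard-library `json` module, whose parser behaves like simplejson's here:

```python
import json

# Unquoted property names are rejected, exactly as in the question
try:
    json.loads('{\n 100: {"a": "b"}\n}')
    parsed = None
except json.JSONDecodeError:
    parsed = "rejected"

# Double-quoting the key makes it valid JSON
data = json.loads('{\n "100": {"a": "b"}\n}')
print(parsed, data)  # rejected {'100': {'a': 'b'}}
```

Note the key comes back as the string `"100"`; if you need it as an integer you have to convert it after parsing.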
python|simplejson
2
1,904,532
72,664,301
How to make Tensorflow-lite available for the entire system in Ubuntu
<p>My tflite directory is as follows:</p> <pre><code>/home/me/tensorflow_src/tensorflow/lite/ </code></pre> <p>However, I fail to import it in my C++ project:</p> <pre><code>#include &quot;tensorflow/lite/interpreter.h&quot; // getting a not found error </code></pre> <p>How can I add resolve this error? My assumption is that I'd need to add the tflite to my bash to make it available for all of my projects. How can I add tflite to the bash file?</p> <p>This is my CMAKE file:</p> <pre><code>cmake_minimum_required(VERSION 3.22) project(mediafile_device_crossverification) set(CMAKE_CXX_STANDARD 17) set(OpenCV FOUND 1) find_package(OpenCV REQUIRED) include_directories( ${OpenCV_INCLUDE_DIRS} ) add_executable(mediafile_device_crossverification main.cpp src/VideoProcessing.cpp src/VideoProcessing.h) </code></pre>
<p>There are various options:</p> <p>First option: Install/copy the tensorflow header files to e.g. <code>/usr/local/include</code>; that folder is usually in the system include path by default.</p> <p>Second option: GCC has some environment variables that can be used to modify the system include path, <code>C_INCLUDE_PATH</code> and <code>CPLUS_INCLUDE_PATH</code>. You can add them to <code>.bashrc</code> to set them when you log in. See: <a href="https://gcc.gnu.org/onlinedocs/cpp/Environment-Variables.html" rel="nofollow noreferrer">https://gcc.gnu.org/onlinedocs/cpp/Environment-Variables.html</a></p> <p>The third option is to add <code>/home/me/tensorflow_src</code> to the include path in the CMakeLists.txt file.</p> <p>When the header is found via the system include path, <code>#include &lt;tensorflow/lite/interpreter.h&gt;</code> (angle brackets) should be used.</p>
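For the third option, a minimal sketch of the CMake change, placed after the `add_executable()` call in the question's CMakeLists.txt. The `/home/me/tensorflow_src` path is taken from the question; note this only makes the headers findable, linking the built TFLite library is a separate step not covered here.

```cmake
# add the TensorFlow source checkout to the header search path for this target
target_include_directories(mediafile_device_crossverification
    PRIVATE /home/me/tensorflow_src)
```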
c++|bash|ubuntu|cmake|tensorflow-lite
1
1,904,533
68,053,633
Error:- too many values to unpack (expected 2) python function
<p>My Code:-</p> <pre><code>Videos10k=[{'title': '', 'titleWords': ['...','...'], 'titleLength': 10, 'likes': 86, 'disLikes': 5, 'views': 2202, 'creator': '...', 'description': '...'}] def getavg(number, array=[]): views_avg = 0 for idx, Video in array: views = Video[&quot;views&quot;] views_avg = views_avg + views views_avg = views_avg / len(array) print(&quot;Average views for &quot; + number + &quot; &quot; + views_avg) getavg(&quot;10k&quot;, Videos10k) </code></pre> <p>I am getting this error. Error:-</p> <blockquote> <p>in getavg for idx, Video in array: ValueError: too many values to unpack (expected 2)</p> </blockquote>
<p>You need to iterate with just <code>Video</code>, not <code>idx, Video</code>: a plain <code>for</code> loop over a list yields one item per iteration, so there is nothing to unpack into two names.</p> <pre><code>Videos10k=[{'title': '', 'titleWords': ['...','...'], 'titleLength': 10, 'likes': 86, 'disLikes': 5, 'views': 2202, 'creator': '...', 'description': '...'}] def getavg(number, array=[]): views_avg = 0 for Video in array: views = Video[&quot;views&quot;] views_avg = views_avg + float(views) views_avg = views_avg / len(array) print(&quot;Average views for &quot; + str(number) + &quot; &quot; + str(views_avg)) </code></pre> <p>Or you can keep the index by using <code>enumerate</code>, which yields <code>(index, item)</code> pairs:</p> <pre><code>for idex, Video in enumerate(array): </code></pre>
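A minimal runnable sketch of the difference (the sample view counts are made up):

```python
videos = [{"views": 2202}, {"views": 1000}]

# A plain for loop over a list yields one dict per iteration
total = 0
for video in videos:
    total += video["views"]
avg = total / len(videos)
print(avg)  # 1601.0

# enumerate() yields (index, item) pairs, which CAN be unpacked into two names
for idx, video in enumerate(videos):
    print(idx, video["views"])
```

The `too many values to unpack` error appeared because `for idx, Video in array:` tried to unpack each dict itself into two variables.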
python
6
1,904,534
68,407,578
How to correctly use clf.predict_proba?
<p>My goal is to have the three most accurate predicted label.</p> <p>By using this solution</p> <pre class="lang-py prettyprint-override"><code>clf = svm.SVC( kernel='rbf', C=51, gamma=1, probability=True ).fit(X,y) predictions=[] with open('model.pkl', 'rb') as f: clf = pickle.load(f) for line in X: output=clf.predict(X) #predictions.append(output) df['prediction'] = output # you add the list to the dataframe, then save the datframe to new csv print(df) </code></pre> <p>I'm able to retrive the predicted label. However, when I add the <code>clf.predict_proba(X)</code> as follows</p> <pre class="lang-py prettyprint-override"><code>clf = svm.SVC( kernel='rbf', C=51, gamma=1, probability=True ).fit(X,y) predictions=[] with open('model.pkl', 'rb') as f: clf = pickle.load(f) for line in X: output=clf.predict(X) output_prob=clf.predict_proba(X) #predictions.append(output) df['prediction'] = output # you add the list to the dataframe, then save the datframe to new csv print(df) </code></pre> <p>I'm having the following error:</p> <pre><code>AttributeError: predict_proba is not available when probability=False </code></pre> <p>According to the Scikit <a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html" rel="nofollow noreferrer">documentation</a> the probability as True should be defined explicitly as I did in</p> <pre class="lang-py prettyprint-override"><code>clf = svm.SVC( kernel='rbf', C=51, gamma=1, probability=True ).fit(X,y) </code></pre> <p>How to fix this issue?</p> <p>Thanks</p>
<p>The code below overwrites your <code>clf</code> variable with the model loaded from the pickle file, which was probably trained with <code>probability=False</code>.</p> <pre><code>with open('model.pkl', 'rb') as f: clf = pickle.load(f) </code></pre> <p>So you are not using the SVC instance you created in the first part of your code.</p> <p>According to <a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html" rel="nofollow noreferrer">this</a>, probability must be enabled prior to calling fit, so you can't change it on the model loaded from the pickle file.</p> <p>You have to either use your own trained model (the one you created and called fit on) or use another pretrained model (loaded from a pickle file) with the probability attribute enabled.</p>
python|scikit-learn
1
1,904,535
35,692,221
python 3.5.1 syntax error on an if else statement
<p>I'm working on a homework assignment, it looks like this:</p> <pre><code>def main(): keep_going = 'y' number_of_salespeople = 0 while keep_going == 'y' or keep_going == 'Y': process_sales() number_of_salespeople += 1 keep_going = input('Are there more salespeople? (enter y or Y for yes) ') print(' ') print('There were', number_of_salespeople, 'salespeople today.') def process_sales(): print(' ') name = input('What is the salesperson\'s name? ') first_sale_amount = float(input('What is', name, '\'s first sale amount? ')) while 0 &lt;= first_sale_amount &lt;= 25000: print('Error: that is not a valid sales amount. The amount must be greater than 0') first_sale_amount = float(input('Please enter a correct sale amount: ')) highest_sale = first_sale_amount lowest_sale = first_sale_amount average_sale = first_sale_amount number_of_sales = float(input('How many sales did', name, 'make this month? ')) for number in range(2, number_of_sales + 1): sale_amount = float(input('Enter', name, '\'s time for sale #' + str(number) + ': ')) while 0 &lt;= sale_amount &lt;= 25000: print('Error: that is not a valid sales amount. 
The amount must be greater than 0') sale_amount = float(input('Please enter a correct sale amount: ')) if sale_amount &gt; highest_sale: highest_sale = sale_amount else sale_amount &lt; lowest_sale: lowest_sale = sale_amount total_sales += sale_amount average_sale = (first_sale_amount + total_sales) / number_of_sales print('The highest sale for', name, 'was', \ format(highest_sale, ',.2f'), \ sep='') print('The lowest sale for', name, 'was', \ format(lowest_sale, ',.2f'), \ sep='') print('The average sale for', name, 'was', \ format(average_sale, ',.2f'), \ sep='') main() </code></pre> <p>the error I am having is in the if else statement towards the bottom, </p> <pre><code>if sale_amount &gt; highest_sale: highest_sale = sale_amount else sale_amount &lt; lowest_sale: lowest_sale = sale_amount </code></pre> <p>the error looks like this:</p> <blockquote> <p>Syntax Error: else sale_amount &lt; lowest_sale:: , line 58, pos 24</p> </blockquote> <p>I cannot see what the issue is, can anyone help me figure out where the error is coming from. Thank you for any assistance.</p>
<p><code>else</code> can't have a condition. Change to <code>elif</code> ("else if").</p>
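Applied to the snippet from the question, a minimal runnable sketch (the sample amounts are made up):

```python
sale_amount = 5000
highest_sale = 4000
lowest_sale = 6000

if sale_amount > highest_sale:
    highest_sale = sale_amount
elif sale_amount < lowest_sale:  # elif takes a condition; a bare else cannot
    lowest_sale = sale_amount

print(highest_sale, lowest_sale)  # 5000 6000
```

A plain `else:` (with no condition) would also be valid syntax, but here a second condition is clearly intended, so `elif` is the fix.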
python|python-3.x
2
1,904,536
35,725,521
Python IF AND operators
<p>I currently have two if statements that look for text in a string</p> <pre><code>if "element1" in htmlText: print("Element 1 Is Present") return if "element2" in htmlText: print("Element 2 Is Present") return </code></pre> <p>These both work great, what I would now like to do is add an if statement that checks if <code>element3</code> is present, but neither <code>element1</code> or <code>element2</code> are present</p> <p>How do I chain these 3 checks together, is there an AND operator like in PHP?</p>
<p>Since the function <code>return</code>s as soon as <code>element1</code> or <code>element2</code> is found, control only reaches the next check when neither is present, so it's enough to append this code:</p> <pre><code>if "element3" in htmlText: print("Element 3 Is Present") return </code></pre>
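For completeness, Python does have boolean operators; they are spelled `and`, `or` and `not` rather than using PHP's symbolic forms. Without the early returns, the combined check would look like this (sample string made up):

```python
html_text = "... element3 ..."

# Explicit form, only needed if the earlier checks did not return early
if "element3" in html_text and "element1" not in html_text and "element2" not in html_text:
    result = "Element 3 Is Present"
else:
    result = None
print(result)  # Element 3 Is Present
```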
python
4
1,904,537
58,964,344
Capturing webcam snapshot with python silently
<p>How is it possible to capture a snapshot of webcam or stream webcam in python without popping up windows?</p>
<p>Using the opencv-python package, the following piece of code momentarily captures the webcam information and saves the image in the directory specified. Note: The webcam indicator does light up for a fraction of a second. I am not sure how that can be avoided as of now.</p> <pre><code>import cv2 webcam = cv2.VideoCapture(1) # index of the webcam on my machine # try either VideoCapture(0) or (1) based on your camera availability # on my desktop it works with (1) check, frame = webcam.read() cv2.imwrite(filename=r'&lt;Your Directory&gt;\saved_img.jpg', img=frame) webcam.release() </code></pre> <p>This piece of code can be triggered from a bat file for whatever event you are looking for (like startup security)</p>
python|python-3.x|opencv|webcam
1
1,904,538
73,431,700
is there a way to combine lists without copying the content?
<p>I basically want to combine the references to the two lists in a way that doesn't copy the content.</p> <p>for example:</p> <pre class="lang-py prettyprint-override"><code>list1 = [1, 2, 3] list2 = [4, 5, 6] combined_list = combine(list1, list2) # combined_list = [1, 2, 3, 4, 5, 6] list1.append(3) # combined_list = [1, 2, 3, 3, 4, 5, 6] </code></pre>
<p>You can refer to <code>collections.ChainMap</code> to implement a <code>ChainSeq</code>. Here is a print-only version:</p> <pre><code>from itertools import chain class ChainSeq: def __init__(self, *seqs): self.seqs = list(seqs) if seqs else [[]] def __repr__(self): return f'[{&quot;, &quot;.join(map(repr, chain.from_iterable(self.seqs)))}]' </code></pre> <p>Test:</p> <pre><code>&gt;&gt;&gt; list1 = [1, 2, 3] &gt;&gt;&gt; list2 = [4, 5, 6] &gt;&gt;&gt; combined_list = ChainSeq(list1, list2) &gt;&gt;&gt; combined_list [1, 2, 3, 4, 5, 6] &gt;&gt;&gt; list1.append(3) &gt;&gt;&gt; combined_list [1, 2, 3, 3, 4, 5, 6] </code></pre>
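If you only need to iterate rather than keep a live printable object, `itertools.chain` by itself reproduces the question's example, as long as you re-chain after each mutation:

```python
from itertools import chain

list1 = [1, 2, 3]
list2 = [4, 5, 6]

# chain() walks the underlying lists lazily at iteration time, so later
# mutations are visible; note each chain object can only be iterated once
combined = list(chain(list1, list2))
print(combined)  # [1, 2, 3, 4, 5, 6]

list1.append(3)
combined = list(chain(list1, list2))
print(combined)  # [1, 2, 3, 3, 4, 5, 6]
```

Nothing is copied until `list()` materializes the chain, so the references to `list1` and `list2` are shared throughout.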
python|list
0
1,904,539
31,536,266
Create new Plone package Error
<p>When i am trying to create new package for my Plone site,i got following trace. Command that i used:</p> <pre><code>paster create -t plone myorg.mypackage Traceback (most recent call last): File "/usr/bin/paster", line 4, in &lt;module&gt; command.run() File "/usr/lib/python2.7/dist-packages/paste/script/command.py", line 104, in run invoke(command, command_name, options, args[1:]) File "/usr/lib/python2.7/dist-packages/paste/script/command.py", line 143, in invoke exit_code = runner.run(args) File "/usr/lib/python2.7/dist-packages/paste/script/command.py", line 238, in run result = self.command() File "/usr/lib/python2.7/dist-packages/paste/script/create_distro.py", line 73, in command self.extend_templates(templates, tmpl_name) File "/usr/lib/python2.7/dist-packages/paste/script/create_distro.py", line 267, in extend_templates 'Template by name %r not found' % tmpl_name) LookupError: Template by name 'plone' not found </code></pre>
<p>Nifty ways to generate a Plone-addon-boilerplate, four of them explained:</p> <h2>1.) <a href="https://pypi.python.org/pypi/paster" rel="nofollow">paster</a></h2> <p>The error-message tells you that there is no template named 'plone' available, it should be:</p> <pre><code>$ paster plone_basic myorg.mypackage </code></pre> <h2>2.) <a href="https://pypi.python.org/pypi/ZopeSkel" rel="nofollow">zopeskel</a></h2> <p>You probably mixed up zopeskel's corresponding template-name with paster's, in this case, they actually do the same, as zopeskel is a project derived from paster and builds upon it. Here a template named 'plone' is available:</p> <pre><code>$ zopeskel plone myorg.mypackage </code></pre> <p>To see all available template-names of zopeskel, do:</p> <pre><code>$ zopeskel --list </code></pre> <h2>3.) <a href="https://pypi.python.org/pypi/mr.bob" rel="nofollow">mr.bob</a></h2> <p>I haven't used it, but it's propagated massively these days, so you might want to have a look, the command would be:</p> <pre><code>$ mrbob -O myorg.mypackage bobtemplates:plone_addon </code></pre> <h2>4.) <a href="https://github.com/ida/adi.devgen/" rel="nofollow">adi.devgen</a></h2> <p>Disclaimer: I'm the author. The motivation was to not depend on other libraries and thereby avoid conflicts, that is: It doesn't have any dependencies, just some Python-methods. The corresponding command is:</p> <pre><code>$ devgen addProfileSkel myorg.mypackage </code></pre>
python|plone-4.x|plone-3.x
0
1,904,540
15,765,699
Python 2.7 "import hashlib" segmentation fault
<p>Whenever I try to import hashlib in Python 2.7 I get a segmentation fault. I've installed openssl version 1.0.0, pyOpenssl version .10, and recompiled python with the ssl lines uncommented in Modules/Setup, pointing to the correct path for the libraries and include files for openssl.</p> <p>I've run ldd on all the libraries I can find that might use libssl or libcrypto, and they're all pointing to the same versions of the files.</p> <p>gdb returns: <code>0x0000003d1d0f670 in EVP_PKEY_CTX_dup () from /usr/lib64/libcrypto.so.1.0.0</code></p> <p>Any ideas what might be going on, and how to repair it?</p>
<p><code>hashlib</code> uses libcrypto for hash algorithms if it can find libcrypto while building python. </p> <p>I suspect somehow it's ending up using a different libcrypto at runtime vs. build time.</p>
linux|python-2.7|segmentation-fault|hashlib
2
1,904,541
49,161,174
Tensorflow : logits and labels must have the same first dimension
<p>I am new in tensoflow and I want to adapt the MNIST tutorial <a href="https://www.tensorflow.org/tutorials/layers" rel="noreferrer">https://www.tensorflow.org/tutorials/layers</a> with my own data (images of 40x40). This is my model function : </p> <pre><code>def cnn_model_fn(features, labels, mode): # Input Layer input_layer = tf.reshape(features, [-1, 40, 40, 1]) # Convolutional Layer #1 conv1 = tf.layers.conv2d( inputs=input_layer, filters=32, kernel_size=[5, 5], # To specify that the output tensor should have the same width and height values as the input tensor # value can be "same" ou "valid" padding="same", activation=tf.nn.relu) # Pooling Layer #1 pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2) # Convolutional Layer #2 and Pooling Layer #2 conv2 = tf.layers.conv2d( inputs=pool1, filters=64, kernel_size=[5, 5], padding="same", activation=tf.nn.relu) pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2) # Dense Layer pool2_flat = tf.reshape(pool2, [-1, 10 * 10 * 64]) dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu) dropout = tf.layers.dropout( inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN) # Logits Layer logits = tf.layers.dense(inputs=dropout, units=2) predictions = { # Generate predictions (for PREDICT and EVAL mode) "classes": tf.argmax(input=logits, axis=1), # Add `softmax_tensor` to the graph. It is used for PREDICT and by the # `logging_hook`. 
"probabilities": tf.nn.softmax(logits, name="softmax_tensor") } if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions) # Calculate Loss (for both TRAIN and EVAL modes) loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits) # Configure the Training Op (for TRAIN mode) if mode == tf.estimator.ModeKeys.TRAIN: optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001) train_op = optimizer.minimize( loss=loss, global_step=tf.train.get_global_step()) return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) # Add evaluation metrics (for EVAL mode) eval_metric_ops = { "accuracy": tf.metrics.accuracy( labels=labels, predictions=predictions["classes"])} return tf.estimator.EstimatorSpec( mode=mode, loss=loss, eval_metric_ops=eval_metric_ops) </code></pre> <p>I have a shape size error between labels and logits : </p> <p><strong>InvalidArgumentError (see above for traceback): logits and labels must have the same first dimension, got logits shape [3,2] and labels shape [1]</strong> </p> <p>filenames_array is an array of 16 string </p> <pre><code>["file1.png", "file2.png", "file3.png", ...] </code></pre> <p>and labels_array is an array of 16 integer </p> <pre><code>[0,0,1,1,0,1,0,0,0,...] 
</code></pre> <p>The main function is :</p> <pre><code># Create the Estimator mnist_classifier = tf.estimator.Estimator(model_fn=cnn_model_fn, model_dir="/tmp/test_convnet_model") # Train the model cust_train_input_fn = lambda: train_input_fn_custom( filenames_array=filenames, labels_array=labels, batch_size=1) mnist_classifier.train( input_fn=cust_train_input_fn, steps=20000, hooks=[logging_hook]) </code></pre> <p>I tried to reshape logits without success :</p> <p>logits = tf.reshape(logits, [1, 2])</p> <p>I need your help, thanks</p> <hr> <p><strong>EDIT</strong></p> <p>After more time to search, in the first line of my model function</p> <pre><code>input_layer = tf.reshape(features, [-1, 40, 40, 1]) </code></pre> <p>the "-1" that signifies that the batch_size dimension will be dynamically calculated have here the value "3". The same "3" as in my error : <strong>logits and labels must have the same first dimension, got logits shape [3,2] and labels shape [1]</strong></p> <p>If I force the value to "1" I have this new error :</p> <p><strong>Input to reshape is a tensor with 4800 values, but the requested shape has 1600</strong></p> <p>Maybe a problem with my features ?</p> <hr> <p><strong>EDIT2 :</strong> </p> <p>the complete code is here : <a href="https://gist.github.com/geoffreyp/cc8e97aab1bff4d39e10001118c6322e" rel="noreferrer">https://gist.github.com/geoffreyp/cc8e97aab1bff4d39e10001118c6322e</a></p> <hr> <p><strong>EDIT3</strong></p> <p>I updated the gist with </p> <pre><code>logits = tf.layers.dense(inputs=dropout, units=1) </code></pre> <p><a href="https://gist.github.com/geoffreyp/cc8e97aab1bff4d39e10001118c6322e" rel="noreferrer">https://gist.github.com/geoffreyp/cc8e97aab1bff4d39e10001118c6322e</a></p> <p>But I don't completely understand your answer about the batch size, how the batch size can be 3 here whereas I choose a batch size of 1 ? 
</p> <p>If I choose a batch_size = 3 I have this error : <strong>logits and labels must have the same first dimension, got logits shape [9,1] and labels shape [3]</strong></p> <p>I tried to reshape labels : </p> <pre><code>labels = tf.reshape(labels, [3, 1]) </code></pre> <p>and I updated features and labels structure : </p> <pre><code> filenames_train = [['blackcorner-data/1.png', 'blackcorner-data/2.png', 'blackcorner-data/3.png', 'blackcorner-data/4.png', 'blackcorner-data/n1.png'], ['blackcorner-data/n2.png', 'blackcorner-data/n3.png', 'blackcorner-data/n4.png', 'blackcorner-data/11.png', 'blackcorner-data/21.png'], ['blackcorner-data/31.png', 'blackcorner-data/41.png', 'blackcorner-data/n11.png', 'blackcorner-data/n21.png', 'blackcorner-data/n31.png'] ] labels = [[0, 0, 0, 0, 1], [1, 1, 1, 0, 0], [0, 0, 1, 1, 1]] </code></pre> <p>but without success...</p>
<p>The problem is in your target shape and is related to the correct choice of an appropriate loss function. You have two possibilities:</p> <p><strong>First possibility</strong>: if you have a 1D integer-encoded target, you can use <code>sparse_categorical_crossentropy</code> as the loss function</p> <pre><code>import numpy as np from keras.layers import Input, Dense from keras.models import Model n_class = 3 n_features = 100 n_sample = 1000 X = np.random.randint(0,10, (n_sample,n_features)) y = np.random.randint(0,n_class, n_sample) inp = Input((n_features,)) x = Dense(128, activation='relu')(inp) out = Dense(n_class, activation='softmax')(x) model = Model(inp, out) model.compile(loss='sparse_categorical_crossentropy',optimizer='adam',metrics=['accuracy']) history = model.fit(X, y, epochs=3) </code></pre> <p><strong>Second possibility</strong>: if you have one-hot encoded your target in order to have a 2D shape (n_samples, n_class), you can use <code>categorical_crossentropy</code></p> <pre><code>import numpy as np import pandas as pd from keras.layers import Input, Dense from keras.models import Model n_class = 3 n_features = 100 n_sample = 1000 X = np.random.randint(0,10, (n_sample,n_features)) y = pd.get_dummies(np.random.randint(0,n_class, n_sample)).values inp = Input((n_features,)) x = Dense(128, activation='relu')(inp) out = Dense(n_class, activation='softmax')(x) model = Model(inp, out) model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy']) history = model.fit(X, y, epochs=3) </code></pre>
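The shape difference between the two target encodings can be illustrated without Keras (a minimal sketch with made-up labels):

```python
n_class = 3

# 1D integer-encoded targets, shape (n_samples,): use sparse_categorical_crossentropy
labels = [0, 2, 1, 2]

# One-hot encoded targets, shape (n_samples, n_class): use categorical_crossentropy
one_hot = [[1 if c == y else 0 for c in range(n_class)] for y in labels]
print(one_hot)  # [[1, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 1]]
```

Either way, the first dimension of the targets must match the first dimension of the logits, which is why a batch-size mismatch between features and labels triggers the error in the question.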
python|tensorflow|keras|tensorflow-datasets|tensorflow-estimator
71
1,904,542
49,155,546
Properly render text with a given font in Python and accurately detect its boundaries
<p>This might strike as something very simple, and I too thought it'd be, but it apparently isn't. I must've spent a week trying to make this work, but I for the love of me can't manage to do so.</p> <p><strong>What I need</strong></p> <p>I need to render any given string (only containing standard characters) with any given font (handwritten-like) in Python. The font must be loaded from a TTF file. I also need to be able to accurately detect its borders (get the exact start and end position of the text, vertically and horizontally), preferably before drawing it. Lastly, it'd really make my life easier if the output is an array which I can then keep processing, and not an image file written to disc.</p> <p><strong>What I've tried</strong></p> <p>Imagemagick bindings (namely Wand): Couldn't figure out how to get the text metrics before setting the image size and rendering the text on it.</p> <p>Pango via Pycairo bindings: nearly inexistent documentation, couldn't figure out how to load a TrueType font from a file.</p> <p>PIL (Pillow): The most promising option. I've managed to accurately calculate the height for any text (which surprisingly is not the height <code>getsize</code> returns), but the width seems buggy for some fonts. Not only that, but those fonts with buggy width also get rendered incorrectly. 
Even when making the image large enough, they get cut off.</p> <p>Here are some examples, with the text "Puzzling":</p> <p>Font: <a href="https://fonts.google.com/specimen/Lovers+Quarrel" rel="noreferrer">Lovers Quarrel</a></p> <p>Result:</p> <p><a href="https://i.stack.imgur.com/06f1Q.png" rel="noreferrer"><img src="https://i.stack.imgur.com/06f1Q.png" alt="Lovers Quarrel Render"></a></p> <p>Font: <a href="https://fonts.google.com/specimen/Miss+Fajardose" rel="noreferrer">Miss Fajardose</a></p> <p>Result:</p> <p><a href="https://i.stack.imgur.com/zIvNk.png" rel="noreferrer"><img src="https://i.stack.imgur.com/zIvNk.png" alt="Miss Fajardose Render"></a></p> <p>This is the code I'm using to generate the images:</p> <pre><code>from PIL import Image, ImageDraw, ImageFont import cv2 import numpy as np import glob import os font_size = 75 font_paths = sorted(glob.glob('./fonts/*.ttf')) text = "Puzzling" background_color = 180 text_color = 50 color_variance = 60 cv2.namedWindow('display', 0) for font_path in font_paths: font = ImageFont.truetype(font_path, font_size) text_width, text_height = font.getsize(text) ascent, descent = font.getmetrics() (width, baseline), (offset_x, offset_y) = font.font.getsize(text) # +100 added to see that text gets cut off PIL_image = Image.new('RGB', (text_width-offset_x+100, text_height-offset_y), color=0x888888) draw = ImageDraw.Draw(PIL_image) draw.text((-offset_x, -offset_y), text, font=font, fill=0) cv2.imshow('display', np.array(PIL_image)) k = cv2.waitKey() if chr(k &amp; 255) == 'q': break </code></pre> <p><strong>Some questions</strong></p> <p>Are the fonts the problem? I've been told by some colleagues that might be it, but I don't think so, since they get rendered correctly by the Imagemagick via command line.</p> <p>Is my code the problem? Am I doing something wrong which is causing the text to get cut off?</p> <p>Lastly, is it a bug in PIL? In that case, which library do you recommend I use to solve my problem? 
Should I give Pango and Wand another try?</p>
<p><a href="https://pypi.python.org/pypi/pyvips" rel="nofollow noreferrer">pyvips</a> seems to do this correctly. I tried this:</p> <pre><code>$ python3 Python 3.7.3 (default, Apr 3 2019, 05:39:12) [GCC 8.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import pyvips &gt;&gt;&gt; x = pyvips.Image.text("Puzzling", dpi=300, font="Miss Fajardose", fontfile="/home/john/pics/MissFajardose-Regular.ttf") &gt;&gt;&gt; x.write_to_file("x.png") </code></pre> <p>To make:</p> <p><a href="https://i.stack.imgur.com/Tquk7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tquk7.png" alt="enter image description here"></a></p> <p>The pyvips docs have a quick intro to the options:</p> <p><a href="https://libvips.github.io/pyvips/vimage.html#pyvips.Image.text" rel="nofollow noreferrer">https://libvips.github.io/pyvips/vimage.html#pyvips.Image.text</a></p> <p>Or the C library docs have a lot more detail:</p> <p><a href="http://libvips.github.io/libvips/API/current/libvips-create.html#vips-text" rel="nofollow noreferrer">http://libvips.github.io/libvips/API/current/libvips-create.html#vips-text</a></p> <p>It makes a one-band 8-bit image of the antialiased text which you can use for further processing, pass to NumPy or PIL, etc etc. There's a section in the intro on how to convert libvips images into arrays:</p> <p><a href="https://libvips.github.io/pyvips/intro.html#numpy-and-pil" rel="nofollow noreferrer">https://libvips.github.io/pyvips/intro.html#numpy-and-pil</a></p>
python|fonts|python-imaging-library|text-rendering
5
1,904,543
70,924,480
"TFlite" to Caffe model
<p>I'm trying to use Opencv &quot;cv::Net&quot; with a Caffe model (.caffemodel and .prototxt) but the the model I have is a &quot;.tflite&quot;.</p> <p>Is it possible to convert &quot;.tflite&quot; to a caffer model so I can use it in opencv c++?</p> <p>Thank you.</p>
<p>OpenCV has an open issue for supporting this natively: <a href="https://github.com/opencv/opencv/wiki/OE-35.-TFLite-support" rel="nofollow noreferrer">https://github.com/opencv/opencv/wiki/OE-35.-TFLite-support</a>. Alternatively, you can convert TFLite models to frozen TensorFlow graphs with toco:</p> <pre><code>bazel run --config=opt //tensorflow/lite/toco:toco -- --input_file=model.tflite --output_file=graph.pb --input_format=TFLITE --output_format=TENSORFLOW_GRAPHDEF </code></pre>
c++|tensorflow|opencv|caffe
0
1,904,544
60,029,808
rpy2 code snippet returns an empty object
<p>I am using rpy2 to use a R library in Python. The library has a function prebas() that returns an array in which the item with index [8] is the output. When I write this output to CSV within the R code snippet, everything works as expected (the output is a CSV over 200kB). However, when I return the same object (PREBASout[8]), it returns an empty object. So, obviously, when I write that object to CSV, the file is empty.</p> <pre><code>run_prebasso = robjects.r(''' weather &lt;- read.csv("/home/example_inputs/weather.csv",header = T) PAR = c(weather$PAR,weather$PAR,weather$PAR) TAir = c(weather$TAir,weather$TAir,weather$TAir) Precip = c(weather$Precip,weather$Precip,weather$Precip) VPD = c(weather$VPD,weather$VPD,weather$VPD) CO2 = c(weather$CO2,weather$CO2,weather$CO2) DOY = c(weather$DOY,weather$DOY,weather$DOY) library(Rprebasso) PREBASout = prebas(nYears = 100, PAR=PAR,TAir=TAir,VPD=VPD,Precip=Precip,CO2=CO2) write.csv(PREBASout[8],"/home/outputs/written_in_r.csv",row.names = F) PREBASout[8] ''') r_write_csv = robjects.r['write.csv'] r_write_csv(run_prebasso, "/home/outputs/written_in_py.csv") </code></pre> <p>This is what the code snippet returns:</p> <pre><code>(Pdb) run_prebasso &lt;rpy2.rinterface.NULLType object at 0x7fc1b31e6b48&gt; [RTYPES.NILSXP] </code></pre> <p><strong>Question</strong>: Why aren't <code>written_in_py.csv</code> and <code>written_in_r.csv</code> the same?</p>
<p>I have just found the bug. The problem was in the line</p> <pre><code>write.csv(PREBASout[8],"/home/outputs/written_in_r.csv",row.names = F) </code></pre> <p>This statement was returned instead of what I wanted (PREBASout[8]). When I removed it or assigned it to a variable, everything worked as expected.</p>
python|r|rpy2
0
1,904,545
67,745,141
Facing this error : container_linux.go:367: starting container process caused: exec: "python": executable file not found in $PATH: unknown
<p>So I'm a beginner in docker and containers and I've been getting this error for days now. I get this error when my lambda function runs a sagemaker processing job. My core python file resides in an s3 bucket. My docker image resides in ECR. But I dont understand why I dont get a similar error when I run the same processing job with a python docker image. PFB the python docker file that didnt throw any errors.</p> <pre class="lang-sh prettyprint-override"><code>FROM python:latest #installing dependencies RUN pip3 install argparse RUN pip3 install boto3 RUN pip3 install numpy RUN pip3 install scipy RUN pip3 install pandas RUN pip3 install scikit-learn RUN pip3 install matplotlib </code></pre> <p>I only get this error when i run this with a an ubunutu docker image with python3 installed. PFB the dockerfile which throws the error mentioned.</p> <pre class="lang-sh prettyprint-override"><code>FROM ubuntu:20.04 RUN apt-get update -y RUN apt-get install -y python3 RUN apt-get install -y python3-pip RUN pip3 install argparse RUN pip3 install boto3 RUN pip3 install numpy==1.19.1 RUN pip3 install scipy RUN pip3 install pandas RUN pip3 install scikit-learn ENTRYPOINT [ &quot;python3&quot; ] </code></pre> <p>How do I fix this?</p>
<p>Fixed this error by changing the entry point to</p> <p><strong>ENTRYPOINT [ &quot;/usr/bin/python3.8&quot;]</strong></p>
python|amazon-web-services|docker|aws-lambda|amazon-sagemaker
1
1,904,546
68,018,891
Add the values of several columns when the number of columns exceeds 3 - Pandas
<p>I have a pandas dataframe with several columns of dates, numbers and bill amounts. I would like to add the amounts of the other invoices with the 3rd one and change the invoice number by &quot;1111&quot;. Here is an example:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID customer</th> <th>Bill1</th> <th>Date 1</th> <th>ID Bill 1</th> <th>Bill2</th> <th>Date 2</th> <th>ID Bill 2</th> <th>Bill3</th> <th>Date3</th> <th>ID Bill 3</th> <th>Bill4</th> <th>Date 4</th> <th>ID Bill 4</th> <th>Bill5</th> <th>Date 5</th> <th>ID Bill 5</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>6</td> <td>2000-10-04</td> <td>1</td> <td>45</td> <td>2000-11-05</td> <td>2</td> <td>51</td> <td>1999-12-05</td> <td>3</td> <td>23</td> <td>2001-11-23</td> <td>6</td> <td>76</td> <td>2011-08-19</td> <td>12</td> </tr> <tr> <td>6</td> <td>8</td> <td>2016-05-03</td> <td>7</td> <td>39</td> <td>2017-08-09</td> <td>8</td> <td>38</td> <td>2018-07-14</td> <td>17</td> <td>21</td> <td>2009-05-04</td> <td>9</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> </tr> <tr> <td>12</td> <td>14</td> <td>2016-11-16</td> <td>10</td> <td>73</td> <td>2017-05-04</td> <td>15</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> </tr> </tbody> </table> </div> <p>And I would like to get this :</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ID customer</th> <th>Bill1</th> <th>Date 1</th> <th>ID Bill 1</th> <th>Bill2</th> <th>Date 2</th> <th>ID Bill 2</th> <th>Bill3</th> <th>Date3</th> <th>ID Bill 3</th> </tr> </thead> <tbody> <tr> <td>4</td> <td>6</td> <td>2000-10-04</td> <td>1</td> <td>45</td> <td>2000-11-05</td> <td>2</td> <td>150</td> <td>1999-12-05</td> <td>1111</td> </tr> <tr> <td>6</td> <td>8</td> <td>2016-05-03</td> <td>7</td> <td>39</td> <td>2017-08-09</td> <td>8</td> <td>59</td> <td>2018-07-14</td> <td>1111</td> </tr> <tr> <td>12</td> <td>14</td> <td>2016-11-16</td> <td>10</td> <td>73</td> 
<td>2017-05-04</td> <td>15</td> <td>Nan</td> <td>Nan</td> <td>Nan</td> </tr> </tbody> </table> </div> <p>This example is a sample of my data, I may have many more than 5 columns. Thanks for your help</p>
<p>With a little data manipulation, you should be able to do it as:</p> <pre><code>import numpy as np

df = df.replace('Nan', np.nan)

idx_col_bill3 = 7      # position of the "Bill3" column
step = 3               # each bill spans 3 columns (amount, date, id)
idx_col_bill3_id = 10  # keep columns up to and including "ID Bill 3"

cols = df.columns
# row-wise sum of Bill3, Bill4, Bill5, ... (every `step`-th column)
bills = df[cols[range(idx_col_bill3, len(cols), step)]].sum(axis=1)
bills.replace(0, np.nan, inplace=True)

df = df[cols[range(idx_col_bill3_id)]]
df['Bill3'] = bills
df.loc[df['ID Bill 3'].notna(), 'ID Bill 3'] = 1111
</code></pre>
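<p>To make the column arithmetic concrete, here is a self-contained sketch on a made-up two-customer frame (column names mirror the question; the data values are invented for illustration):</p>

```python
import pandas as pd

# Hypothetical mini-frame: two customers, four bills each.
df = pd.DataFrame({
    'ID customer': [4, 6],
    'Bill1': [6, 8], 'Date 1': ['2000-10-04', '2016-05-03'], 'ID Bill 1': [1, 7],
    'Bill2': [45, 39], 'Date 2': ['2000-11-05', '2017-08-09'], 'ID Bill 2': [2, 8],
    'Bill3': [51, 38], 'Date3': ['1999-12-05', '2018-07-14'], 'ID Bill 3': [3, 17],
    'Bill4': [23, 21], 'Date 4': ['2001-11-23', '2009-05-04'], 'ID Bill 4': [6, 9],
})

idx_col_bill3 = 7   # position of 'Bill3'
step = 3            # amount/date/id triplets
cols = df.columns

# Row-wise sum of Bill3, Bill4, ... (every `step`-th column from idx_col_bill3).
bills = df[cols[range(idx_col_bill3, len(cols), step)]].sum(axis=1)

df = df[cols[range(10)]].copy()     # keep everything up to 'ID Bill 3'
df['Bill3'] = bills
df.loc[df['ID Bill 3'].notna(), 'ID Bill 3'] = 1111
```

With four bills per customer, 'Bill3' ends up holding Bill3 + Bill4 for each row.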
python|pandas
1
1,904,547
30,446,236
Tkinter: How can I fix a top-level window not displaying correctly
<p>I'm doing a voting system to try to implement it in my school.</p> <p>So the program is supposed to go like this:</p> <ol> <li><p>a function that is not yet implemented will generate some passwords according to the number of students that is going to vote.</p></li> <li><p>The main window displays all the candidates.</p></li> <li><p>When you click on the button, a pop up window (toplevel) appears prompting you to enter your password</p></li> <li><p>If the password is correct, another window pops up (another Toplevel, and here is the problem) showing you some buttons that you just have to click on to choose your candidate.</p></li> <li><p>Repeat until complete</p></li> </ol> <p><strong>Original Link :</strong> <a href="http://pastebin.com/579ybmPD" rel="nofollow">http://pastebin.com/579ybmPD</a></p> <p><strong>Code</strong></p> <pre><code>from Tkinter import * import tkMessageBox class app(object): def __init__(self, parent): self.availableCodes = [1,2, 3, 4, 5] top = self.top = Toplevel(parent) self.label1 = Label(top, text = "Ingrese su contrasena") self.label1.pack() self.entry1 = Entry(top) self.entry1.pack() self.button1 = Button(top, text = "Ingrese", command = self.ok) self.button1.pack(pady= 5) self.button1val = 0 self.button2val = 0 self.button3val = 0 self.button4val = 0 self.button5val = 0 self.button6val = 0 self.button7val = 0 self.button8val = 0 self.button9val = 0 self.button10val = 0 self.button11val = 0 self.button12val = 0 def ok(self): self.code = int(self.entry1.get()) self.voteWindow(self.code, self.availableCodes) self.top.destroy() def voteWindow(self, code, listOfCodes): if code in listOfCodes: print "True" self.optionsWindows(self.top) listOfCodes.remove(code) else: print "False" def optionsWindows(self, parent): new = self.new = Toplevel(parent) self.topframe = Frame(new) self.button1 = Button(self.topFrame, text = "Proyecto 1", command = self.close(self.button1val))#( self.button1val)) self.button1.pack(side = LEFT) self.button2 = 
Button(self.topFrame, text = "Proyecto 2", command = self.close(self.button2val))#(options, self.button2val)) self.button2.pack(side = LEFT) self.button3 = Button(self.topFrame, text = "Proyecto 3", command = self.close(self.button3val))#(options, self.button3val)) self.button3.pack(side = LEFT) self.button4 = Button(self.topFrame, text = "Proyecto 4", command = self.close(self.button4val))#(options, self.button4val)) self.button4.pack(side = LEFT) self.button5 = Button(self.topFrame, text = "Proyecto 5", command = self.close(self.button5val))#(options, self.button5val)) self.button5.pack(side = LEFT) self.button6 = Button(self.topFrame, text = "Proyecto 6", command = self.close(self.button6val))#(options, self.button6val)) self.button6.pack(side = LEFT) self.button7 = Button(self.topFrame, text = "Proyecto 7", command = self.close(self.button7val))#(options, self.button7val)) self.button7.pack(side = LEFT) self.button8 = Button(self.topFrame, text = "Proyecto 8", command = self.close(self.button8val))#(options, self.button8val)) self.button8.pack(side = LEFT) self.button9 = Button(self.topFrame, text = "Proyecto 9", command = self.close(self.button9val))#(options, self.button9val)) self.button9.pack(side = LEFT) self.button10 = Button(self.topFrame, text = "Proyecto 10", command = self.close(self.button10val))#(options, self.button10val)) self.button10.pack(side = LEFT) self.button11 = Button(self.topFrame, text = "Proyecto 11", command = self.close(self.button11val))#(options, self.button11val)) self.button11.pack(side = LEFT) self.button12 = Button(self.topFrame, text = "Proyecto 12", command = self.close(self.button12val))#(options, self.button12val)) self.button12.pack(side = LEFT) def close(self, variable): variable +=1 self.top.destroy() def onClick(): run = app(root) root.wait_window(run.top) root = Tk() root.configure(bg = "white") mainButton = Button(root, text='Click aqui para votar', command=onClick) mainButton.pack() root.mainloop() </code></pre> <p>So 
the problem is that the last toplevel window is appearing on my main window instead of displaying on a new pop up, and one time I got it to work, the options were not displaying properly. Please help.</p>
<p>You misspelled topframe in optionsWindows(). Also you don't pack topf(F)rame so it doesn't appear. Finally you pass a variable to command= incorrectly for the buttons in optionsWindows.</p> <pre><code>from Tkinter import *
import tkMessageBox
from functools import partial

class app(object):
    def __init__(self, parent):
        self.availableCodes = [1, 2, 3, 4, 5]
        top = self.top = Toplevel(parent)
        self.label1 = Label(top, text = "Ingrese su contrasena")
        self.label1.pack()
        self.entry1 = Entry(top)
        self.entry1.pack()
        self.button1 = Button(top, text = "Ingrese", command = self.ok)
        self.button1.pack(pady= 5)

    def ok(self):
        self.code = int(self.entry1.get())
        self.voteWindow()
        self.top.destroy()

    def voteWindow(self):
        print "vote window"
        if self.code in self.availableCodes:
            print "True"
            self.optionsWindows(self.top)
            self.availableCodes.remove(self.code)
        else:
            print "False"

    def optionsWindows(self, parent):
        new = self.new = Toplevel(parent)
        self.topFrame = Frame(new)
        self.topFrame.pack()
        for num in range(12):
            button = Button(self.topFrame, text = "Proyecto %s" % (num+1),
                            command = partial(self.close, num))
            button.pack(side="left")

    def close(self, variable):
        variable += 1
        print variable, "passed to self.close()"
        self.top.destroy()

def onClick():
    run = app(root)
    root.wait_window(run.top)

root = Tk()
root.configure(bg = "white")
mainButton = Button(root, text='Click aqui para votar', command=onClick)
mainButton.pack()
Button(root, text="Exit", command=root.quit).pack(side="bottom")
root.mainloop()
</code></pre>
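<p>The <code>command=</code> gotcha is independent of Tkinter, so it can be demonstrated with plain functions: <code>command = self.close(val)</code> calls the function immediately at widget-creation time and stores its return value (<code>None</code>), while <code>partial</code> builds a deferred callable. A minimal sketch:</p>

```python
from functools import partial

calls = []

def close(variable):
    # Stand-in for the button handler; records each invocation.
    calls.append(variable)
    return None

# The buggy pattern: close(1) runs immediately, and the "button"
# ends up with command=None.
broken_command = close(1)

# The fixed pattern: partial builds a deferred callable; close runs
# only when the "button" is actually clicked.
fixed_command = partial(close, 2)
fixed_command()  # simulate the click
```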
python|tkinter|toplevel
0
1,904,548
67,076,852
reading multiple files into dask dataframe
<p>I want to read multiple csv files into one single dask dataframe. Due to some reasons some portion of my original data get lost (no clue why?!). I am wondering whats the best method to read them all into dask? I used a for loop though not sure if its correct.</p> <blockquote> <pre><code> for file in os.listdir(dds_glob): if file.endswith('issued_processed.txt'): ddf = dd.read_fwf(os.path.join(dds_glob,file), colspecs=cols, header=None, dtype=object, names=names) </code></pre> </blockquote> <p>or should I use something like this:</p> <blockquote> <pre><code>dfs = delayed(pd.read_fwf)('/data/input/*issued_processed.txt', colspecs=cols, header=None, dtype=object, names=names) ddf = dd.from_delayed(dfs) </code></pre> </blockquote>
<p>There are at least two approaches:</p> <ol> <li>provide <code>dask.dataframe</code> with a list of files, so using your first snippet it would look like:</li> </ol> <pre class="lang-py prettyprint-override"><code>file_list = [
    os.path.join(dds_glob, file)
    for file in os.listdir(dds_glob)
    if file.endswith('issued_processed.txt')
]

# other options are skipped for convenience
ddf = dd.read_fwf(file_list)
</code></pre> <ol start="2"> <li>construct dataframe from <code>delayed</code> objects, which using your second snippet would look like:</li> </ol> <pre class="lang-py prettyprint-override"><code># other options are skipped, but can be included after the `file`
dfs = [delayed(pd.read_fwf)(file) for file in file_list]
ddf = dd.from_delayed(dfs)
</code></pre> <p>The first approach is something that will solve about 82% of the use-cases, but for the other cases you might need to try the second approach or something more involved.</p>
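<p>The file-list construction in the first approach is plain Python; a quick stdlib-only sketch (using a throwaway temp directory as a stand-in for <code>dds_glob</code>) shows the filter picking up only the matching files:</p>

```python
import os
import tempfile

# Stand-in for dds_glob: a temp dir with two matching files and one non-match.
dds_glob = tempfile.mkdtemp()
for name in ('a_issued_processed.txt', 'b_issued_processed.txt', 'ignore.csv'):
    open(os.path.join(dds_glob, name), 'w').close()

file_list = sorted(
    os.path.join(dds_glob, fname)
    for fname in os.listdir(dds_glob)
    if fname.endswith('issued_processed.txt')
)
```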
python|dask|dask-dataframe
1
1,904,549
66,874,448
Multiple loops/Lines at the same time
<p>is it possible to run a while loop and other lines of code at the same time? I've tried to run multiple while loops but what I made is a balance. <strong>Here is my try</strong></p> <pre><code>parity = 0 while True: while parity%2 == 0: print('running') parity += 1 while parity &amp; 1 : print('True') parity +=1 </code></pre> <p>Any other ideas?</p>
<pre><code>for parity in range(ANY_NUMBER):
    if parity % 2 == 0:
        print('running')
    if parity &amp; 1:
        print('True')
    if not final_condition:
        break
</code></pre> <p>I don't think while loops are needed here. You can replace <code>final_condition</code> with any condition: as soon as it becomes False, the loop stops.</p>
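<p>A runnable sketch of the same loop, with concrete stand-ins for <code>ANY_NUMBER</code> and <code>final_condition</code> (both are placeholders above), collecting the output instead of printing:</p>

```python
out = []
ANY_NUMBER = 4                      # stand-in value; the answer leaves this open
for parity in range(ANY_NUMBER):
    if parity % 2 == 0:
        out.append('running')
    if parity & 1:
        out.append('True')
    final_condition = parity < 2    # stand-in stop condition
    if not final_condition:
        break
```

Both branches interleave within one loop body, and the break fires at parity == 2.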
python|loops|python-3.9|parity|balance
0
1,904,550
43,020,047
By Using python pil, I want to make image centered with text from url and download. How to do it?
<p>I input a text at url, and want to get downloaded image that the text is centered.</p> <p>for example, when I input, <a href="http://127.0.0.1:8000/app/hello" rel="nofollow noreferrer">http://127.0.0.1:8000/app/hello</a>, then an image is downloaded. That image centers the text 'hello'</p> <pre><code>def homework(request, name): text = name img = Image.new('RGB',(256,256),(0,0,0)) draw = ImageDraw.Draw(img) font = ImageFont.load_default().font draw.text((128,128), text, font = font, fill="black") with open(img, 'rb') as f: response = HttpResponse(f, content_type='image/jpeg') response['Content-Disposition'] = 'attachment; filename="textimage"' return response </code></pre> <p>By using python pil.</p> <p>How to do it?</p>
<pre><code>#install PIL (Pillow) via Pip from PIL import Image from PIL import ImageFont from PIL import ImageDraw #position of Text x = 50 y = 50 #font color as tuple color = (255, 255, 255) #download the image to the local system and open it img = Image.open("1.jpg") #draw image and prepare font draw = ImageDraw.Draw(img) font = ImageFont.truetype('arial.ttf', size=30) #draw text on image draw.text((x, y),"Sample Text", color,font=font) #save the new image, that contains the text img.save('sample-out.jpg') </code></pre>
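<p>Note the snippet above draws at a fixed <code>(x, y)</code>; to actually center the text as the question asks, compute the top-left offset from the image and text sizes. The arithmetic is independent of PIL (with Pillow you would typically get the text size from the draw object, e.g. <code>draw.textsize(text, font=font)</code> in older versions); the sizes below are invented for illustration:</p>

```python
def centered_position(image_size, text_size):
    """Top-left coordinate that centers a text box inside an image."""
    img_w, img_h = image_size
    txt_w, txt_h = text_size
    return ((img_w - txt_w) // 2, (img_h - txt_h) // 2)

# Hypothetical 256x256 image and a 120x40 rendered text box.
x, y = centered_position((256, 256), (120, 40))
```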
python|django
0
1,904,551
66,743,042
Matrix vs. loop for discretization of PDE
<p>I am experimenting with using diffeqpy to solve a PDE by discretization of the spacial dimension, while I treat the time dimension as a set of ordinary differential equations. I managed to solve a very simple problem using a for loop. However, when I try to use matrixes, to scale the problem up, the solver provides incorrect answers.</p> <p>The following piece of code works:</p> <pre><code>from diffeqpy import de import numpy as np def f(du,u,p,t): #define shape of matrix s = (6,7) cc = np.matrix((np.zeros(s))) for j in range(0,6): for i in range(0,6): if (j == i): cc[j,i] = -1.0 cc[j,i+1] = 1.0 for j in range(0,6): du[j] = cc[j,0]*u[0] + cc[j,1]*u[1] + cc[j,2]*u[2] + cc[j,3]*u[3] + cc[j,4]*u[4] + cc[j,5]*u[5] + cc[j,6]*u[6] u0 = [0.1,0.0,0.0,0.0,0.0,0.0,1.0] tspan = (0., 20.) prob = de.ODEProblem(f, u0, tspan) sol = de.solve(prob) </code></pre> <p>This codes is similar to the following piece of code that also works:</p> <pre><code>from diffeqpy import de def f(du,u,p,t): du[0] = -u[0]+u[1] du[1] = -u[1]+u[2] du[2] = -u[2]+u[3] du[3] = -u[3]+u[4] du[4] = -u[4]+u[5] du[5] = -u[5]+u[6] u0 = [0.1,0.0,0.0,0.0,0.0,0.0,1.0] tspan = (0., 20.) prob = de.ODEProblem(f, u0, tspan) sol = de.solve(prob) </code></pre> <p>However, when I try and use a matrix operation, the problem just does not solve correctly. I don't have a background in computer science. However, I would like to learn more. Why is the following piece of code not working? Has it got to do with mutable vs. immutable object? How can I utilize a matrix to make this problem scale to bigger discretisation steps?</p> <pre><code>from diffeqpy import de import numpy as np def f(du,u,p,t): #define shape of matrix s = (6,7) cc = np.matrix((np.zeros(s))) for j in range(0,6): for i in range(0,6): if (j == i): cc[j,i] = -1.0 cc[j,i+1] = 1.0 x = np.matrix(u).T du = (cc*x).T u0 = [0.1,0.0,0.0,0.0,0.0,0.0,1.0] tspan = (0., 20.) 
prob = de.ODEProblem(f, u0, tspan) sol = de.solve(prob) </code></pre> <p>I would appreciate any guidance on this problem.</p>
<p>If you're not doing in-place modification, use the 3-argument form and return the derivative:</p> <pre><code>from diffeqpy import de
import numpy as np

def f(u, p, t):
    # define shape of matrix
    s = (6, 7)
    cc = np.matrix(np.zeros(s))
    for j in range(0, 6):
        for i in range(0, 6):
            if j == i:
                cc[j, i] = -1.0
                cc[j, i+1] = 1.0
    x = np.matrix(u).T
    du = (cc*x).T
    return du

u0 = [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
tspan = (0., 20.)
prob = de.ODEProblem(f, u0, tspan)
sol = de.solve(prob)
</code></pre>
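<p>As a side note, the double loop builds a first-difference stencil that can also be written with shifted identity matrices, which scales more easily to finer discretizations. A quick numpy-only check that the two constructions agree (this is an alternative formulation, not something diffeqpy requires):</p>

```python
import numpy as np

n = 6  # number of equations; n + 1 states
cc_loop = np.zeros((n, n + 1))
for j in range(n):
    cc_loop[j, j] = -1.0
    cc_loop[j, j + 1] = 1.0

# Equivalent vectorized construction: -I plus an identity shifted right by one.
cc_vec = -np.eye(n, n + 1) + np.eye(n, n + 1, k=1)
```

Applied to a linearly increasing state vector, every row of the product is the constant first difference.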
python|julia|sparse-matrix
1
1,904,552
66,697,488
set item at multiple indexes in a list
<p>I am trying to find a way to use a list of indexes to set values at multiple places in a list (as is possible with numpy arrays).</p> <p>I found that I can map <code>__getitem__</code> and a list of indexes to return the values at those indexes:</p> <pre><code># something like a_list = ['a', 'b', 'c'] idxs = [0, 1] get_map = map(a_list.__getitem__, idxs) print(list(get_map)) # yields ['a', 'b'] </code></pre> <p>However, when applying this same line of thought to <code>__setitem__</code>, the setting fails. This probably has something to do with pass-by-reference vs pass-by-value, which I have never fully understood no matter how many times I've read about it.</p> <p><strong>Is there a way to do this?</strong></p> <pre><code>b_list = ['a', 'b', 'c'] idxs = [0, 1] put_map = map(b_list.__setitem__, idx, ['YAY', 'YAY']) print(b_list) # yields ['YAY', 'YAY', 'c'] </code></pre> <p>For my use case, I only want to set one value at multiple locations. Not multiple values at multiple locations.</p> <p><strong>EDIT</strong>: I know how to use list comprehension. I am trying to mimic <code>numpy</code>'s capability to accept a list of indexes for both getting and setting items in an array, except for lists.</p>
<p>The difference between the <code>get</code> and <code>set</code> case is that in the <code>get</code> case you are interested in the result of <code>map</code> itself, but in the <code>set</code> case you want a side effect. Thus, you never consume the <code>map</code> generator and the instructions are never actually executed. Once you do, <code>b_list</code> gets changed as expected.</p> <pre><code>&gt;&gt;&gt; put_map = map(b_list.__setitem__, idxs, ['YAY', 'YAY']) &gt;&gt;&gt; b_list ['a', 'b', 'c'] &gt;&gt;&gt; list(put_map) [None, None] &gt;&gt;&gt; b_list ['YAY', 'YAY', 'c'] </code></pre> <p>Having said that, the proper way for <code>get</code> would be a list comprehension and for <code>set</code> a simple <code>for</code> loop. That also has the advantage that you do not have to repeat the value to put in place n times.</p> <pre><code>&gt;&gt;&gt; for i in idxs: b_list[i] = &quot;YAY&quot; &gt;&gt;&gt; [b_list[i] for i in idxs] ['YAY', 'YAY'] </code></pre>
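<p>To see the laziness directly: building the <code>map</code> object performs no assignments, consuming it does, and a plain loop is the idiomatic way to set one value at several indexes. A minimal sketch:</p>

```python
b_list = ['a', 'b', 'c']
idxs = [0, 1]

# Lazy: constructing the map object performs no assignments yet.
put_map = map(b_list.__setitem__, idxs, ['YAY', 'YAY'])
before = list(b_list)      # still untouched

list(put_map)              # consuming the generator triggers the side effects
after_map = list(b_list)

# Idiomatic: a plain loop, setting one value at several indexes.
c_list = ['a', 'b', 'c']
for i in idxs:
    c_list[i] = 'YAY'
```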
python|map-function
1
1,904,553
72,261,124
How to iterate and run AsyncGenerator concurrently in Python
<p>A bit new to Python, not sure if this question is too naive. Trying to grasp the concurrency model.</p> <p>Third party function (from the library) connects to multiple hosts through ssh and perform some bash command. This function returns AsyncGenerator.</p> <p>Then my code iterates through this AsyncGenerator using <code>async for</code>:</p> <pre class="lang-py prettyprint-override"><code> # some code before asyncgenerator_var = # ... async for (host, exit_code, stdout, stderr) in asyncgenerator_var: if exit_code != 0: self.err(f&quot;[{host}]: {''.join(decode_strings(stderr))}&quot;) continue self.out(f&quot;[{host}]: {''.join(decode_strings(stdout))}&quot;) self.err(f&quot;[{host}]: {''.join(decode_strings(stderr))}&quot;) # some code after </code></pre> <p>And then code calls await on this function. But if runs one after another. Not concurrently.</p> <p>Code someone explain why is that? And what should be done to make it run concurrently.</p>
<p>In the async model, no two functions run simultaneously. The event loop may switch functions when the current function is <code>await</code>-ing other coroutines / futures.</p> <p>The <code>async for</code> statement essentially means the event loop may run other scheduled callbacks/tasks between iterations.</p> <p>The <code>async for</code> body still runs in the order yielded by the async generator.</p> <p>To run the body concurrently, wrap it inside an async function. Create a separate task for each input, and finally <code>gather</code> the results of all tasks.</p> <pre class="lang-py prettyprint-override"><code>import asyncio

# some code before
asyncgenerator_var = # ...

async def task(host, exit_code, stdout, stderr):
    if exit_code != 0:
        self.err(f&quot;[{host}]: {''.join(decode_strings(stderr))}&quot;)
        return
    self.out(f&quot;[{host}]: {''.join(decode_strings(stdout))}&quot;)
    self.err(f&quot;[{host}]: {''.join(decode_strings(stderr))}&quot;)

tasks = []
async for (host, exit_code, stdout, stderr) in asyncgenerator_var:
    tasks.append(asyncio.create_task(task(host, exit_code, stdout, stderr)))
await asyncio.gather(*tasks)
# some code after
</code></pre>
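<p>A self-contained toy version of the pattern (a fake async generator standing in for the ssh library, and a plain coroutine in place of the <code>self.err</code>/<code>self.out</code> calls) shows per-item tasks completing under <code>gather</code> while preserving result order:</p>

```python
import asyncio

async def fake_results():
    # Stand-in for the ssh library's AsyncGenerator.
    for host in ('host-a', 'host-b', 'host-c'):
        yield host, 0

async def handle(host, exit_code):
    await asyncio.sleep(0.05)   # simulate slow per-host work; these overlap
    return f'{host}:{exit_code}'

async def main():
    tasks = []
    async for host, exit_code in fake_results():
        tasks.append(asyncio.create_task(handle(host, exit_code)))
    # gather returns results in task-creation order, not completion order.
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```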
python|python-3.x|python-asyncio
1
1,904,554
72,177,472
Flappy Bird - bird.draw(win) AttributeError: type object 'Bird' has no attribute 'draw'
<p>I would like to make a flappy bird game. After defining the <code>Bird</code> class and after running the program I get following prompt from PyCharm, and when I run <code>bird.draw(win)</code> I get a pygame window with the background.</p> <p>Error:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\marci\PycharmProjects\pythonProject\main.py&quot;, line 109, in &lt;module&gt; main() File &quot;C:\Users\marci\PycharmProjects\pythonProject\main.py&quot;, line 104, in main draw_window(win,bird) File &quot;C:\Users\marci\PycharmProjects\pythonProject\main.py&quot;, line 92, in draw_window bird.draw(win) AttributeError: type object 'Bird' has no attribute 'draw' </code></pre> <p>Code:</p> <pre><code>import pygame import neat import time import os import random WIN_WIDTH = 600 WIN_HEIGHT = 800 BIRD_IMGS = [pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;bird1.png&quot;))), pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;bird2.png&quot;))), pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;bird3.png&quot;)))] PIPE_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;pipe.png&quot;))) BASE_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;base.png&quot;))) BG_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;bg.png&quot;))) class Bird: IMGS = BIRD_IMGS MAX_ROTATION = 25 ROT_VEL = 20 ANIMATION_TIME = 5 def __int__(self, x, y): self.x = x # &quot;Starting Position x&quot; self.y = y # &quot;Starting Position y&quot; self.tilt = 0 #&quot;How much the image is tilted&quot; self.tick_count = 0 #&quot;Physics (if we fall down or move up)&quot; self.vel = 0 #&quot;is not moving, that's the reason for 0&quot; self.height = self.y self.img_count = 0 #which image is shown - animation self.img = self.IMGS[0] #reference to image 0 def jump(self): #jump of the bird 
self.vel = -10.5 #random number to be confirmed later, because trainer # told this works with other settings. 0,0 - a left top corner y up -, y down +, x left -, x right +) self.tick_count = 0 #keeping track where we last jumped, 0 is needed fot the next method to work self.height = self.y #where we start, from which point bird starts to move def move(self): self.tick_count += 1 #tick happen frame go by, so we moved since the last jump d = self.vel*self.tick_count + 0.5*(3)*(self.tick_count)** 2 #displacement, how many pixels we are moving up or down # time #we are moving tick_count to 0 after each jump, # additionally we set velocity to 10.5, and we update our y position if d &gt;= 16: d =16 #maxium movements if d &lt; 0: d -= 2 #jump's size self.y = self.y + d if d &lt; 0 or self.y &lt; self.height + 50: # to keep track from where w jump exactly and not to tilt too much if self.tilt &lt; self.MAX_ROTATION: self.tilt = self.MAX_ROTATION else: if self.tilt &gt; -90: #Bird can go directly down, but not up let see above lines of code self.tilt -= self.ROT_VEL def draw(self, win): self.img_count += 1 if self.img_count &lt; self.ANIMATION_TIME: # is the image count is less than 5 (animation time) then image 0 self.img = self.IMGS[0] elif self.img_count &lt; self.ANIMATION_TIME*2: # is the image count is less than 10 (2x animation time) then image 1 self.img = self.IMGS[1] elif self.img_count &lt; self.ANIMATION_TIME * 3: self.img = self.IMGS[2] elif self.img_count &lt; self.ANIMATION_TIME * 4: self.img = self.IMGS[1] elif self.img_count &lt; self.ANIMATION_TIME * 4 + 1: self.img = self.IMGS[0] self.img_count = 0 #all this if and elif function is: in a nutshell an animation of the movement of the bird if self.tilt &lt;= -80: self.img = self.IMGS[1] #when the bird is going down no movement of the wings self.img_count = self.ANIMATION_TIME*2 rotated_image = pygame.transform.rotate(self.img, self.tilt) new_rect = rotated_image.get_rect(center=self.image.get_rect(topleft= 
(self.x, self.y).center)) win.blit(rotated_image, new_rect.topleft) def get_mask(self): return pygame.mask.from_surface(self.img) def draw_window (win, bird): win.blit(BG_IMG, [0,0]) #blit ~ draw bird.draw(win) pygame.display.update() def main(): bird = Bird win = pygame.display.set_mode((WIN_WIDTH, WIN_HEIGHT)) run = True while run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False draw_window(win,bird) pygame.quit() quit() main() </code></pre>
<p>It looks to me like the <code>__init__</code> function and following belong to the <code>Bird</code> object, which is why I think you miss the right indentation level.</p> <p>Furthermore, <code>bird</code> should be an instance of the class, meaning all birds ever, like so: <code>bird = Bird(0, 0)</code> This is from the constructor you can see in the method <code>__init__</code>.</p> <p>Here is a modified version taking into account the above, would that solve your issue?</p> <pre class="lang-py prettyprint-override"><code>import pygame import neat import time import os import random WIN_WIDTH = 600 WIN_HEIGHT = 800 BIRD_IMGS = [ pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;bird1.png&quot;))), pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;bird2.png&quot;))), pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;bird3.png&quot;))), ] PIPE_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;pipe.png&quot;))) BASE_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;base.png&quot;))) BG_IMG = pygame.transform.scale2x(pygame.image.load(os.path.join(&quot;imgs&quot;, &quot;bg.png&quot;))) class Bird: IMGS = BIRD_IMGS MAX_ROTATION = 25 ROT_VEL = 20 ANIMATION_TIME = 5 def __init__(self, x, y): self.x = x # &quot;Starting Position x&quot; self.y = y # &quot;Starting Position y&quot; self.tilt = 0 # &quot;How much the image is tilted&quot; self.tick_count = 0 # &quot;Physics (if we fall down or move up)&quot; self.vel = 0 # &quot;is not moving, that's the reason for 0&quot; self.height = self.y self.img_count = 0 # which image is shown - animation self.img = self.IMGS[0] # reference to image 0 def jump(self): # jump of the bird self.vel = -10.5 # random number to be confirmed later, because trainer # told this works with other settings. 
0,0 - a left top corner y up -, y down +, x left -, x right +) self.tick_count = 0 # keeping track where we last jumped, 0 is needed for the next method to work self.height = self.y # where we start, from which point bird starts to move def move(self): self.tick_count += 1 # tick happen frame go by, so we moved since the last jump d = self.vel * self.tick_count + 0.5 * (3) * (self.tick_count) ** 2 # displacement, how many pixels we are moving up or down # time # we are moving tick_count to 0 after each jump, # additionally we set velocity to 10.5, and we update our y position if d &gt;= 16: d = 16 # maximum movements if d &lt; 0: d -= 2 # jump's size self.y = self.y + d if ( d &lt; 0 or self.y &lt; self.height + 50 ): # to keep track from where we jump exactly and not to tilt too much if self.tilt &lt; self.MAX_ROTATION: self.tilt = self.MAX_ROTATION else: if ( self.tilt &gt; -90 ): # Bird can go directly down, but not up let see above lines of code self.tilt -= self.ROT_VEL def draw(self, win): self.img_count += 1 if ( self.img_count &lt; self.ANIMATION_TIME ): # is the image count is less than 5 (animation time) then image 0 self.img = self.IMGS[0] elif self.img_count &lt; self.ANIMATION_TIME * 2: # is the image count is less than 10 (2x animation time) then image 1 self.img = self.IMGS[1] elif self.img_count &lt; self.ANIMATION_TIME * 3: self.img = self.IMGS[2] elif self.img_count &lt; self.ANIMATION_TIME * 4: self.img = self.IMGS[1] elif self.img_count &lt; self.ANIMATION_TIME * 4 + 1: self.img = self.IMGS[0] self.img_count = 0 # all this if and elif function is: in a nutshell an animation of the movement of the bird if self.tilt &lt;= -80: self.img = self.IMGS[ 1 ] # when the bird is going down no movement of the wings self.img_count = self.ANIMATION_TIME * 2 rotated_image = pygame.transform.rotate(self.img, self.tilt) new_rect = rotated_image.get_rect( center=self.img.get_rect(topleft=(self.x, self.y)).center ) win.blit(rotated_image, new_rect.topleft) def
get_mask(self): return pygame.mask.from_surface(self.img) def draw_window(win, bird): win.blit(BG_IMG, [0, 0]) # blit ~ draw bird.draw(win) pygame.display.update() def main(): bird = Bird(0, 0) # Default start coordinates of the bird. win = pygame.display.set_mode((WIN_WIDTH, WIN_HEIGHT)) run = True while run: for event in pygame.event.get(): if event.type == pygame.QUIT: run = False draw_window(win, bird) pygame.quit() quit() main() </code></pre>
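To isolate the indentation point without pygame, here is a minimal, hypothetical sketch: methods defined inside the class body belong to every instance, and `Bird(0, 0)` calls `__init__` with those arguments.

```python
# Minimal sketch (no pygame): the methods are indented inside the
# class body, so they become part of every Bird instance.
class Bird:
    MAX_ROTATION = 25  # class attribute, shared by all birds

    def __init__(self, x, y):  # constructor: runs on Bird(0, 0)
        self.x = x
        self.y = y
        self.tilt = 0

    def jump(self):  # instance method: needs a concrete bird
        self.tilt = self.MAX_ROTATION


bird = Bird(0, 0)  # one concrete bird, not the class itself
bird.jump()
print(bird.x, bird.y, bird.tilt)  # -> 0 0 25
```

If the `def` lines are dedented out of the class body, `Bird(0, 0)` fails with "Bird() takes no arguments", which is the usual symptom of the indentation problem described above.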
python|pygame
0
1,904,555
65,545,949
Sklearn datasets default data structure is pandas or numPy?
<p>I'm working through an exercise in <a href="https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/" rel="nofollow noreferrer">https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/</a> and am finding unexpected behavior on my computer when I fetch a dataset. The following code returns</p> <pre><code>numpy.ndarray
</code></pre> <p>on the author's Google Colab page, but returns</p> <pre><code>pandas.core.frame.DataFrame
</code></pre> <p>on my local Jupyter notebook. As far as I know, my environment is using the exact same versions of libraries as the author. I can easily convert the data to a NumPy array, but since I'm using this book as a guide for novices, I'd like to know what could be causing this discrepancy.</p> <pre><code>from sklearn.datasets import fetch_openml

mnist = fetch_openml('mnist_784', version=1)
mnist.keys()
type(mnist['data'])
</code></pre> <p>The author's Google Colab is at the following link, scrolling down to the &quot;MNIST&quot; heading. Thanks! <a href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/03_classification.ipynb#scrollTo=LjZxzwOs2Q2P" rel="nofollow noreferrer">https://colab.research.google.com/github/ageron/handson-ml2/blob/master/03_classification.ipynb#scrollTo=LjZxzwOs2Q2P</a>.</p>
<p>Just to close off this question, the <a href="https://stackoverflow.com/questions/65545949/sklearn-datasets-default-data-structure-is-pandas-or-numpy#comment115885287_65545949">comment</a> by Ben Reiniger, namely to add <code>as_frame=False</code>, is correct. For example:</p> <pre><code>mnist = fetch_openml('mnist_784', version=1, as_frame=False) </code></pre> <p>The OP has already made this change to the Colab code in the link.</p>
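If a notebook already has the DataFrame version loaded, it can also be converted after the fact with `DataFrame.to_numpy()`; a sketch with a small stand-in frame (downloading MNIST is slow, so the real data is not fetched here):

```python
import numpy as np
import pandas as pd

# Small stand-in for mnist['data']; fetch_openml returns a DataFrame
# by default in recent scikit-learn unless as_frame=False is passed.
data = pd.DataFrame({'pixel1': [0, 255], 'pixel2': [128, 64]})

X = data.to_numpy()  # explicit DataFrame -> ndarray conversion
print(type(data).__name__, '->', type(X).__name__)  # DataFrame -> ndarray
```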
pandas|numpy|scikit-learn|dataset|mnist
0
1,904,556
50,391,703
Reshaping Pytorch tensor
<p>I have a tensor of size <code>(24, 2, 224, 224)</code> in Pytorch. <br></p> <ul> <li>24 = batch size</li> <li>2 = matrixes representing foreground and background</li> <li>224 = image height dimension</li> <li>224 = image width dimension</li> </ul> <p>This is the output of a CNN that performs binary segmentation. In each cell of the 2 matrixes is stored the probability for that pixel to be foreground or background: <code>[n][0][h][w] + [n][1][h][w] = 1</code> for every coordinate</p> <p>I want to reshape it into a tensor of size <code>(24, 1, 224, 224)</code>. The values in the new layer should be <code>0</code> or <code>1</code> according to the matrix in which the probability was higher.</p> <p>How can I do that? Which function should I use?</p>
<p>Using <a href="https://pytorch.org/docs/master/torch.html#torch.argmax" rel="nofollow noreferrer"><code>torch.argmax()</code></a> (for PyTorch 0.4+):</p> <pre><code>prediction = torch.argmax(tensor, dim=1)  # with 'dim' the considered dimension
prediction = prediction.unsqueeze(1)      # to reshape from (24, 224, 224) to (24, 1, 224, 224)
</code></pre> <hr> <p>If the PyTorch version is below 0.4.0, one can use <a href="https://pytorch.org/docs/master/tensors.html#torch.Tensor.max" rel="nofollow noreferrer"><code>tensor.max()</code></a> which returns both the max values and their indices (but which isn't differentiable over the index values):</p> <pre><code>_, prediction = tensor.max(dim=1)
prediction = prediction.unsqueeze(1)  # to reshape from (24, 224, 224) to (24, 1, 224, 224)
</code></pre>
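For readers without PyTorch at hand, the same reduction can be sketched in NumPy, using a toy `(1, 2, 2, 2)` array standing in for the `(24, 2, 224, 224)` network output:

```python
import numpy as np

# Toy stand-in for the (N, 2, H, W) foreground/background probabilities.
probs = np.array([[[[0.9, 0.2],
                    [0.4, 0.1]],
                   [[0.1, 0.8],
                    [0.6, 0.9]]]])

prediction = np.argmax(probs, axis=1)            # -> shape (1, 2, 2)
prediction = np.expand_dims(prediction, axis=1)  # -> shape (1, 1, 2, 2)
print(prediction.shape)
print(prediction[0, 0])  # 0 where channel 0 wins, 1 where channel 1 wins
```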
python|arrays|numpy|matrix|pytorch
2
1,904,557
50,637,089
How to Install aws-sam-cli in ubuntu 14?
<p>I want to update aws-sam-cli on my ubuntu 14.04. I have sam 0.2.11 version. I want to update in 0.3.0. When I run </p> <pre><code>pip install --user aws-sam-cli </code></pre> <p>or </p> <pre><code>pip install --user --upgrade aws-sam-cli </code></pre> <p>I got</p> <blockquote> <p>Downloading/unpacking aws-sam-cli Downloading aws-sam-cli-0.3.0.tar.gz (128kB): 128kB downloaded Running setup.py (path:/tmp/pip_build_amber/aws-sam-cli/setup.py) egg_info for package aws-sam-cli /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'python_requires' warnings.warn(msg) error in aws-sam-cli setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers Complete output from command python setup.py egg_info: /usr/lib/python2.7/distutils/dist.py:267: UserWarning: Unknown distribution option: 'python_requires' warnings.warn(msg) error in aws-sam-cli setup command: 'install_requires' must be a string or list of strings containing valid project/version requirement specifiers</p> <p>Cleaning up... Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_amber/aws-sam-cli Storing debug log for failure in /home/amber/.pip/pip.log**</p> </blockquote>
<p>I had the same issue and here's how I installed aws-sam-cli.</p> <p>Make sure you uninstall aws-sam-local if you have an older version with</p> <pre><code>npm uninstall -g aws-sam-local
</code></pre> <p>Then run</p> <pre><code>pip install --user --upgrade setuptools
pip install ez_setup
pip install --user --upgrade aws-sam-cli
</code></pre>
python|amazon-web-services|pip
8
1,904,558
50,523,786
migrate error in python mysql django
<p>on running python manage.py migrate arises this error, if I delete already existing table this command still show this type of error</p> <pre><code>Apply all migrations: admin, auth, bridge, contenttypes, sessions Running migrations: Applying contenttypes.0001_initial...Traceback (most recent call last): File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/utils.py", line 83, in _execute return self.cursor.execute(sql) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/mysql/base.py", line 71, in execute return self.cursor.execute(query, args) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/cursors.py", line 250, in execute self.errorhandler(self, exc, value) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler raise errorvalue File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/cursors.py", line 247, in execute res = self._query(query) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/cursors.py", line 411, in _query rowcount = self._do_query(q) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/cursors.py", line 374, in _do_query db.query(q) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/connections.py", line 277, in query _mysql.connection.query(self, query) _mysql_exceptions.OperationalError: (1050, "Table 'django_content_type' already exists") </code></pre> <p>The above exception was the direct cause of the following exception:</p> <pre><code>Traceback (most recent call last): File "manage.py", line 15, in &lt;module&gt; execute_from_command_line(sys.argv) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/core/management/__init__.py", line 371, in execute_from_command_line utility.execute() File 
"/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/core/management/__init__.py", line 365, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/core/management/base.py", line 288, in run_from_argv self.execute(*args, **cmd_options) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/core/management/base.py", line 335, in execute output = self.handle(*args, **options) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/core/management/commands/migrate.py", line 200, in handle fake_initial=fake_initial, File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/migrations/executor.py", line 117, in migrate state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/migrations/executor.py", line 147, in _migrate_all_forwards state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/migrations/executor.py", line 244, in apply_migration state = migration.apply(state, schema_editor) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/migrations/migration.py", line 122, in apply operation.database_forwards(self.app_label, schema_editor, old_state, project_state) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/migrations/operations/models.py", line 92, in database_forwards schema_editor.create_model(model) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/base/schema.py", line 314, in create_model self.execute(sql, params or None) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/base/schema.py", line 133, in execute cursor.execute(sql, params) File 
"/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/utils.py", line 100, in execute return super().execute(sql, params) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/utils.py", line 68, in execute return self._execute_with_wrappers(sql, params, many=False, executor=self._execute) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/utils.py", line 77, in _execute_with_wrappers return executor(sql, params, many, context) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/utils.py", line 85, in _execute return self.cursor.execute(sql, params) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/utils.py", line 89, in __exit__ raise dj_exc_value.with_traceback(traceback) from exc_value File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/utils.py", line 83, in _execute return self.cursor.execute(sql) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/django/db/backends/mysql/base.py", line 71, in execute return self.cursor.execute(query, args) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/cursors.py", line 250, in execute self.errorhandler(self, exc, value) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/connections.py", line 50, in defaulterrorhandler raise errorvalue File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/cursors.py", line 247, in execute res = self._query(query) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/cursors.py", line 411, in _query rowcount = self._do_query(q) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/cursors.py", line 374, in _do_query db.query(q) File "/usr/share/nginx/html/django_env/lib/python3.4/site-packages/MySQLdb/connections.py", line 277, in query _mysql.connection.query(self, query) 
django.db.utils.OperationalError: (1050, "Table 'django_content_type' already exists") </code></pre>
<p>For some reason, Django believes that the table still exists. Here are several things to do:</p> <ol> <li>Make sure you remove from the migrations table in your database any rows for migrations that create this table.</li> <li>Make sure that your table is actually deleted, and not simply cleared.</li> <li>Rerun <code>python manage.py makemigrations</code> and <code>python manage.py migrate</code>.</li> <li>In general, since you're using Django's ORM, try not to edit or delete tables from your database manually, as this can throw Django off. Rather, delete the related model (or mark the model as an abstract one that doesn't use the database).</li> </ol>
python|django
1
1,904,559
35,079,503
Django Oracle connection as SYS should be as SYSDBA or SYSOPER
<p>I'm trying to use Oracle Database for Django. Oracle DB is active and I can connect with SQLDeveloper. But cannot connect from Django to Oracle DB. I got this error:</p> <pre><code>django.db.utils.DatabaseError: ORA-28009: connection as SYS should be as SYSDBA or SYSOPER </code></pre> <p>I use Conntection Type = Basic, Role = SYSDBA in SQLDeveloper. Where to give this parameters in Django?</p> <p>My Current setting.py parameters:</p> <pre><code>DATABASES = { 'default': { 'ENGINE': 'django.db.backends.oracle', 'NAME': 'payman', 'USER': 'sys', 'PASSWORD': 'ssys', 'HOST': '172.55.79.9', 'PORT': '1521', } } </code></pre>
<p>You need to provide this in the 'OPTIONS' to cx_oracle. I set the following while using the full DSN mode:</p> <pre><code>'OPTIONS': {
    'purity': cx_Oracle.ATTR_PURITY_SELF,
    'mode': cx_Oracle.SYSDBA
},
</code></pre>
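Putting the question's settings together with this option, the whole `DATABASES` entry would look roughly like the sketch below. This is a config sketch, not a tested setup: it assumes `settings.py` can import `cx_Oracle`, and connecting as SYS from an application is generally discouraged.

```python
import cx_Oracle

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.oracle',
        'NAME': 'payman',
        'USER': 'sys',
        'PASSWORD': 'ssys',
        'HOST': '172.55.79.9',
        'PORT': '1521',
        'OPTIONS': {
            # Passed through to cx_Oracle.connect(); SYSDBA matches the
            # Role = SYSDBA setting used in SQLDeveloper.
            'purity': cx_Oracle.ATTR_PURITY_SELF,
            'mode': cx_Oracle.SYSDBA,
        },
    }
}
```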
python|django|database|oracle|sysdba
1
1,904,560
26,721,068
Signal handler accepts (*args), how do I provide them?
<p>I'm using a library called BACpypes to communicate over network with a PLC. The short version here is that I need to start a BACpypes application in its own thread and then perform read/write to the plc in this separate thread. </p> <p>For multiple PLC's, there is a processing loop that creates an application (providing the plc ip address), performs read writes on plc using application, kills application by calling BACpypes <strong>stop(*args)</strong> from the Core module, calls join on the thread, and then moves on to next ip address in the list until we start over again. This works for as many ip addresses (PLCs) as we have, but as soon as we are back at the first ip address (PLC) again, I get the error: </p> <pre><code>socket.error: [Errno 98] Address already in use </code></pre> <p>Here is the short code for my thread class, which uses the stop() and run() functions from BACpypes core.</p> <pre><code>class BACpypeThread(Thread): def __init__(self, name): Thread.__init__(self) Thread.name = name def run(self): run() def stop(self): stop() </code></pre> <p>It seems like I'm not correctly killing the application. So, I know stop(*args) is registered as a signal handler according to BACpypes docs. Here is a snippet I pulled from this link <a href="http://bacpypes.sourceforge.net/modules/core.html" rel="nofollow">http://bacpypes.sourceforge.net/modules/core.html</a> </p> <pre><code>core.stop(*args) Parameters: args – optional signal handler arguments This function is called to stop a BACpypes application. It resets the running boolean value. This function also installed as a signal handler responding to the TERM signal so you can stop a background (deamon) process: $ kill -TERM 12345 </code></pre> <p>I feel like I need to provide a kill -term signal to make the ip address available again. I don't know how to do that. Here's my question...</p> <p>1) In this example, 12345 is the process number I believe. 
How do I figure out that number for my thread?</p> <p>2) Once I have the number, how do I actually pass the kill -TERM signal to the stop function? I just don't know how to actually write this line of code. So if someone could explain this that would be great.</p> <p>Thanks for the help!</p>
<p>Before stopping the core, you need to free the socket.</p> <p>I use:</p> <pre><code>try:
    self.this_application.mux.directPort.handle_close()
except:
    self.this_application.mux.broadcastPort.handle_close()
</code></pre> <p>After that I call <code>stop()</code>, then <code>thread.join()</code>.</p>
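The question's `kill -TERM` part can be sketched in pure Python (POSIX only): `os.getpid()` gives the process number (the "12345" in the docs), and `os.kill` delivers the signal to a handler installed the same way BACpypes installs `core.stop`. The `stop` function here is a stand-in, not the BACpypes one.

```python
import os
import signal

stopped = []

def stop(*args):  # stand-in for BACpypes' core.stop(signum, frame)
    stopped.append(args[0] if args else None)

# Install the handler (signal handlers must be set from the main thread),
# then send ourselves the TERM signal.
signal.signal(signal.SIGTERM, stop)
os.kill(os.getpid(), signal.SIGTERM)

print('handler ran:', bool(stopped))  # handler ran: True
```

Sending `kill -TERM <pid>` from a shell reaches the same handler; signals are per-process, so there is no separate number for the thread.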
python|signal-handling
0
1,904,561
45,240,106
Pymongo Exception handling not working right in python 3
<p>I'm running a script to check if pymongo exceptions are successfully caught, but so far the only errors I get are Pycharm IDE errors. I turned the mongo daemon off to test this out. The relevant portion of the script is as follows, with IDE errors following after it:</p> <pre><code>import pymongo from pymongo import MongoClient from pymongo import errors import os from os.path import basename def make_collection_name(path): filename = os.path.splitext(basename(path))[0] collection_name = filename if collection_name in db.collection_names(): db[collection_name].drop() return collection_name if __name__ == '__main__': try: client = MongoClient() except pymongo.errors.ConnectionFailure as e: print("Could not connect to MongoDB: %s") % e except pymongo.errors.ServerSelectionTimeoutError as e: print("Could not connect to MongoDB: %s") % e filepath = **hidden filepath** db = client.TESTDB collection_name = make_collection_name(filepath) </code></pre> <p>Instead of having the exceptions handled, I rather get the following errors from the IDE: </p> <pre><code>Traceback (most recent call last): File "**hidden path**", line 278, in &lt;module&gt; collection_name = make_collection_name(filepath) File "**hidden path**", line 192, in make_collection_name if collection_name in db.collection_names(): File "C:\Python34\lib\site-packages\pymongo\database.py", line 530, in collection_names ReadPreference.PRIMARY) as (sock_info, slave_okay): File "C:\Python34\lib\contextlib.py", line 59, in __enter__ return next(self.gen) File "C:\Python34\lib\site-packages\pymongo\mongo_client.py", line 859, in _socket_for_reads with self._get_socket(read_preference) as sock_info: File "C:\Python34\lib\contextlib.py", line 59, in __enter__ return next(self.gen) File "C:\Python34\lib\site-packages\pymongo\mongo_client.py", line 823, in _get_socket server = self._get_topology().select_server(selector) File "C:\Python34\lib\site-packages\pymongo\topology.py", line 214, in select_server address)) 
File "C:\Python34\lib\site-packages\pymongo\topology.py", line 189, in select_servers self._error_message(selector)) pymongo.errors.ServerSelectionTimeoutError: localhost:27017: [WinError 10061] No connection could be made because the target machine actively refused it Process finished with exit code 1 </code></pre>
<p>Beginning in PyMongo 3 (not Python 3, PyMongo 3!), the <code>MongoClient</code> constructor no longer blocks trying to connect to the MongoDB server. Instead, the first actual operation you do will wait until the connection completes, and then throw an exception if connection fails.</p> <p><a href="http://api.mongodb.com/python/current/migrate-to-pymongo3.html#mongoclient-connects-asynchronously" rel="nofollow noreferrer">http://api.mongodb.com/python/current/migrate-to-pymongo3.html#mongoclient-connects-asynchronously</a></p> <p>As you can see from your stack trace, the exception is thrown from <code>db.collection_names()</code>, not from <code>MongoClient()</code>. So, wrap your call to <code>make_collection_name</code> in try / except, not the <code>MongoClient</code> call.</p>
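The pattern is easy to mimic with a toy class (a stand-in, not pymongo): the constructor stores settings without touching the network, and the first real operation is what raises.

```python
class LazyClient:
    """Toy stand-in for MongoClient's connect-on-first-use behaviour."""
    def __init__(self, host='localhost', reachable=False):
        self.host = host          # the constructor never raises
        self.reachable = reachable

    def collection_names(self):   # first real operation
        if not self.reachable:
            raise ConnectionError('could not reach %s' % self.host)
        return ['collection_a']


client = LazyClient()             # succeeds even with no server running
try:
    client.collection_names()     # this is where the error surfaces
except ConnectionError as e:
    outcome = 'caught: %s' % e

print(outcome)  # caught: could not reach localhost
```

This is why the try/except must wrap `make_collection_name` (which calls `collection_names()`), not the `MongoClient()` call.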
python|mongodb|exception|exception-handling|pymongo
3
1,904,562
44,986,741
python Tkinter: how to disallow a window growing bigger than screen size
<p>My routine adds buttons to a window in tkinter. At some point the window becomes bigger (in height) than my actual screen and I cannot access the buttons at the bottom anymore. </p> <p>Demo code:</p> <pre><code>from Tkinter import * n_buttons = 20 # &lt;- change this as you wish for testing the code def generate_buttons(n): for i in range(n): button = Button(myframe, text="Button Clone", height=4) button.grid(row=i) root = Tk() myframe = Frame(root) myframe.pack() generate_buttons(n=n_buttons) root.mainloop() </code></pre> <p>As a first attempt, I placed the following code into my loop:</p> <pre><code>if root.winfo_height() &gt; root.winfo_screenheight(): print("I'm full!") break </code></pre> <p>But 1) it does not work and 2) I'd strongly prefer a solution that checks if the window can take another button <strong>before</strong> it is created, so that I do not have to destroy it afterwards. </p> <p>In my real code the buttons a placed as a 2D-grid and my first reaction to an overload of my window would be to increase the number of columns in my frame and then to reduce the height of all buttons. Again, it would be bad if I had to re-generate all buttons until all requirements are met. It would be great if you could help me with a solution that checks if the geometry is possible, before the first button is created. Thanks!</p>
<p>It's difficult to know if a widget will fit since there are so many variables. For example, padding, margins, other widgets on the screen, etc.</p> <p>That being said, since you know your own application you can account for that. The only thing that's missing is knowing how tall a button is. You can get that from tkinter with the <code>winfo_reqheight</code> and <code>winfo_reqwidth</code> methods which return the height and width requested by the button. Then, it's just a matter of math. </p>
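The math can be sketched up front, before creating any widgets. The padding value below is an assumption; read the real one from your own grid options, and `winfo_reqheight` supplies the button height at run time.

```python
def max_buttons(screen_h, button_req_h, pad=2):
    """How many buttons of the requested height fit in one column."""
    per_button = button_req_h + 2 * pad  # assume pad applies above and below
    return max(screen_h // per_button, 0)

# Hypothetical numbers: a 1080-px screen and a button requesting 90 px.
print(max_buttons(1080, 90))  # -> 11
```

Once the count per column is known, the number of columns needed is just `ceil(n_buttons / max_buttons(...))`, so the whole grid can be planned before any button is created.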
python|tkinter|widget
1
1,904,563
44,996,688
Frame Switch to click on a Box (Python Selenium)
<p>I have a problem trying to automate my browser with Selenium in Python. I have been stuck for several hours, and I'm a beginner .. :(</p> <p>My problem: I have to click on a Recaptcha checkbox. To do this, my bot must click a button on the site, which then displays the recaptcha that I have to validate. Here are the source page screenshots:</p> <p><a href="https://i.stack.imgur.com/8s5Ub.png" rel="nofollow noreferrer">The popup of the recaptcha, in which the checkbox is located</a></p> <p><a href="https://i.stack.imgur.com/tBVIU.png" rel="nofollow noreferrer">The location of the checkbox that I have to click</a></p> <p>I tried this code:</p> <pre><code>time.sleep(5)
browser.switch_to_frame(browser.find_element_by_tag_name("CaptchaPopup"))
browser.switch_to_frame(browser.find_element_by_tag_name("iframe"))
CheckBox = WebDriverWait(browser, 10).until(
    browser.find_element_by_id('recaptcha-anchor').click())
time.sleep(0.7)
CheckBox.click()
</code></pre> <p>But it returns an error :(</p> <pre><code>selenium.common.exceptions.NoSuchFrameException: Message: no such frame
</code></pre> <p>I use Python 2.7. Do you have a solution? Thank you very much in advance!</p>
<p>Try to use the code below to handle the required check-box:</p> <pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait as wait
from selenium.webdriver.support import expected_conditions as EC

wait(browser, 10).until(EC.frame_to_be_available_and_switch_to_it(
    browser.find_element_by_xpath('//iframe[contains(@src, "google.com/recaptcha")]')))
wait(browser, 10).until(EC.element_to_be_clickable((By.ID, 'recaptcha-anchor'))).click()
</code></pre>
python|selenium|click|box
0
1,904,564
44,933,738
Convert a Large JSON file into multiple JSON files with maximum 1000 lines
<p>I'm trying to parse a large JSON file with ever increasing data into smaller files with a maximum of 1000 lines using Python.</p> <p>So far I've managed to print up to a thousand lines, but now I'm stuck on where to go next:</p> <pre><code>with open(input_file) as f:
    count = 0
    data = (lines for lines in f if count &lt; 1000)
    for x in data:
        count += 1
        print(x + str(count))
</code></pre> <p>Since this needs to be a scalable solution, any other ideas on how I could do this better?</p> <p>EDIT: The inner structure of the JSON is similar to the following: {"newsletter_optin": 1, "language": "gv", "country": "UY", "username": "xy32", "email": "xyz@gm.com", "user_id": 138123918}</p> <p>I am working on a project where my mentor wants me to split a large file with millions of JSON lines into mini files of 1000 lines each.</p>
<p>JSON files have an inner structure, so you can't just break it on any line, because the result would not be a valid JSON. Since JSON files are a combination of dictionaries and list nested within one another it makes the most sense to break the JSON separating elements of the same list.</p> <p>here is an example:</p> <pre><code>{'Big JSON':[{'little JSON1':values},{'little JSON2':values}]} </code></pre> <p>This could be broken up to </p> <pre><code>{'Big JSON':[{'little JSON1':values}]} </code></pre> <p>and</p> <pre><code>{'Big JSON':[{'little JSON2':values}]} </code></pre> <p>The exact code for breaking the JSON up depends on the inner structure of your JSON file. But it is important that each of your files be a stand alone valid JSON file</p>
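If, as the sample record in the question suggests, the input is JSON Lines (one complete JSON object per line), each line is already a valid document on its own, so the file can be split safely on line boundaries. A sketch:

```python
import json
import os
import tempfile


def _flush(batch, out_dir, idx):
    path = os.path.join(out_dir, 'part_%d.json' % idx)
    with open(path, 'w') as out:
        out.writelines(batch)
    return path


def split_jsonl(input_file, out_dir, max_lines=1000):
    """Stream input_file into part_0.json, part_1.json, ... of <= max_lines lines."""
    paths = []
    with open(input_file) as f:
        batch = []
        for line in f:  # streaming: never holds the whole file in memory
            batch.append(line)
            if len(batch) == max_lines:
                paths.append(_flush(batch, out_dir, len(paths)))
                batch = []
        if batch:  # final partial chunk
            paths.append(_flush(batch, out_dir, len(paths)))
    return paths


# Tiny demo: 5 records split into chunks of 2 -> 3 output files.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, 'big.json')
with open(src, 'w') as f:
    for i in range(5):
        f.write(json.dumps({'user_id': i}) + '\n')

parts = split_jsonl(src, tmp, max_lines=2)
print(len(parts))  # -> 3
```

If the file were instead one big nested JSON document, this line-based approach would not apply, and you would need to split on elements of the top-level list as described above.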
python|json|pandas
0
1,904,565
64,701,137
Failing to scrape dynamic webpage using selenium in python
<p>Im trying to scrape all 5000 companies from this page. its dynamic page and companies are loaded when i scroll down. But i can only scrape <strong>5 companies, So how can i scrape all 5000?</strong> <strong>URL is changing as I scroll down the page</strong>. I tried selenium but not working. <a href="https://www.inc.com/profile/onetrust" rel="nofollow noreferrer">https://www.inc.com/profile/onetrust</a> Note: I want to scrape all info of companies but just now selected two.</p> <pre><code>import time from urllib.request import urlopen as uReq from bs4 import BeautifulSoup as soup from selenium import webdriver from selenium.webdriver.chrome.options import Options my_url = 'https://www.inc.com/profile/onetrust' options = Options() driver = webdriver.Chrome(chrome_options=options) driver.get(my_url) time.sleep(3) page = driver.page_source driver.quit() uClient = uReq(my_url) page_html = uClient.read() uClient.close() page_soup = soup(page_html, &quot;html.parser&quot;) containers = page_soup.find_all(&quot;div&quot;, class_=&quot;sc-prOVx cTseUq company-profile&quot;) container = containers[0] for container in containers: rank = container.h2.get_text() company_name_1 = container.find_all(&quot;h2&quot;, class_=&quot;sc-AxgMl LXebc h2&quot;) Company_name = company_name_1[0].get_text() print(&quot;rank :&quot; + rank) print(&quot;Company_name :&quot; + Company_name) </code></pre> <p><strong>UPDATED CODE BUT PAGE IS NOT SCROLLING AT ALL. Corrected some mistake in BeautifulSoup codes</strong></p> <pre><code>import time from bs4 import BeautifulSoup as soup from selenium import webdriver my_url = 'https://www.inc.com/profile/onetrust' driver = webdriver.Chrome() driver.get(my_url) def scroll_down(self): &quot;&quot;&quot;A method for scrolling the page.&quot;&quot;&quot; # Get scroll height. last_height = self.driver.execute_script(&quot;return document.body.scrollHeight&quot;) while True: # Scroll down to the bottom. 
self.driver.execute_script(&quot;window.scrollTo(0, document.body.scrollHeight);&quot;) # Wait to load the page. time.sleep(2) # Calculate new scroll height and compare with last scroll height. new_height = self.driver.execute_script(&quot;return document.body.scrollHeight&quot;) if new_height == last_height: break last_height = new_height page_soup = soup(driver.page_source, &quot;html.parser&quot;) containers = page_soup.find_all(&quot;div&quot;, class_=&quot;sc-prOVx cTseUq company-profile&quot;) container = containers[0] for container in containers: rank = container.h2.get_text() company_name_1 = container.find_all(&quot;h2&quot;, class_=&quot;sc-AxgMl LXebc h2&quot;) Company_name = company_name_1[0].get_text() print(&quot;rank :&quot; + rank) print(&quot;Company_name :&quot; + Company_name) </code></pre> <p>Thank you for reading!</p>
<p>Try the below approach using Python <a href="https://requests.readthedocs.io" rel="nofollow noreferrer">requests</a>: it is simple, straightforward, reliable, fast, and needs less code. I fetched the API URL from the website itself after inspecting the network section of the Google Chrome browser.</p> <p>What exactly the script below is doing:</p> <ol> <li><p>First it takes the API URL and does a GET request.</p> </li> <li><p>After getting the data, the script parses the JSON data using json.loads.</p> </li> <li><p>Finally it iterates over the list of companies and prints them, for example: rank, company name, social account links, CEO name, etc.</p> <pre><code>import json
import requests
from urllib3.exceptions import InsecureRequestWarning

requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

def scrap_inc_5000():
    URL = 'https://www.inc.com/rest/companyprofile/nuleaf-naturals/withlist'
    response = requests.get(URL, verify=False)
    result = json.loads(response.text)  # parse result using json.loads
    extracted_data = result['fullList']['listCompanies']
    for data in extracted_data:
        print('-' * 100)
        print('Rank : ', data['rank'])
        print('Company : ', data['company'])
        print('Icon : ', data['icon'])
        print('CEO Name : ', data['ifc_ceo_name'])
        print('Facebook Address : ', data['ifc_facebook_address'])
        print('File Location : ', data['ifc_filelocation'])
        print('Linkedin Address : ', data['ifc_linkedin_address'])
        print('Twitter Handle : ', data['ifc_twitter_handle'])
        print('Secondary Link : ', data['secondary_link'])
        print('-' * 100)

scrap_inc_5000()
</code></pre> </li> </ol>
python|selenium|web-scraping
0
1,904,566
57,969,679
Is there a way to not make a copy when a numpy array is sliced?
<p>I need to handle some large <code>numpy</code> arrays in my project. After such an array is loaded from disk, over half of my computer's memory is consumed.</p> <p>After the array is loaded, I make several slices of it (almost half of the array will be selected), and then I receive an error telling me the memory is insufficient.</p> <p>By doing a little experiment I understand that I receive the error because when a <code>numpy</code> array is sliced this way, a copy is created:</p> <pre><code>import numpy as np

tmp = np.linspace(1, 100, 100)
inds = list(range(100))
tmp_slice = tmp[inds]
assert id(tmp) == id(tmp_slice)
</code></pre> <p>This raises an <code>AssertionError</code>.</p> <p>Is there a way for a slice of a <code>numpy</code> array to refer only to the memory of the original array, so the data entries are not copied?</p>
<p>In Python <code>slice</code> is a well defined class, with <code>start</code>, <code>stop</code>, <code>step</code> values. It is used when we index a list with <code>alist[1: 10: 2]</code>. This makes a new list with copies of the pointers from the original. In <code>numpy</code> these are used in <code>basic indexing</code>, e.g. <code>arr[:3, -3:]</code>. This creates a <code>view</code> of the original. The <code>view</code> shares the data buffer, but has its own <code>shape</code> and <code>strides</code>.</p> <p>But when we index arrays with lists, arrays or boolean arrays (mask), it has to make a copy, an array with its own data buffer. The selection of elements is too complex or irregular to express in terms of the <code>shape</code> and <code>strides</code> attributes.</p> <p>In some cases the index array is small (compared to the original) and copy is also small. But if we are permuting the whole array, then the index array, and copy will both be as large as the original.</p>
python|arrays|numpy
2
1,904,567
58,000,663
Using Python Ray With CPLEX Model Object
<p>I am trying to parallelize an interaction with a Python object that is computationally expensive. I would like to use Ray to do this but so far my best efforts have failed.</p> <p>The object is a CPLEX model object and I'm trying to add a set of constraints for a list of conditions.</p> <p>Here's my setup:</p> <pre><code>import numpy as np import docplex.mp.model as cpx import ray m = cpx.Model(name="mymodel") def mask_array(arr, mask_val): array_mask = np.argwhere(arr == mask_val) arg_slice = [i[0] for i in array_mask] return arg_slice weeks = [1,3,7,8,9] const = 1.5 r = rate = np.array(df['r'].tolist(), dtype=np.float) x1 = m.integer_var_list(data_indices, lb=lower_bound, ub=upper_bound) x2 = m.dot(x1, r) @ray.remote def add_model_constraint(m, x2, x2sum, const): m.add_constraint(x2sum &lt;= x2*const) return m x2sums = [] for w in weeks: arg_slice = mask_array(x2, w) x2sum = m.dot([x2[i] for i in arg_slice], r[arg_slice]) x2sums.append(x2sum) #: this is the expensive part for x2sum in x2sums: add_model_constraint.remote(m, x2, x2sum, const) </code></pre> <p>In a nutshell, what I'm doing is creating a model object, some variables, and then looping over a set of weeks in order to build a constraint. I subset my variable, compute some dot products and apply the constraint. I would like to be able to create the constraint in parallel because it takes a while but so far my code just hangs and I'm not sure why.</p> <p>I don't know if I should return the model object in my function because by default the m.add_constraint method modifies the object in place. But at the same time I know Ray returns references to the remote value so yea, not sure what's supposed to happen there.</p> <p>Is this at all a valid use of ray? It it reasonable to expect to be able to modify a CPLEX object in this way (or any other arbitrary python object)? 
</p> <p>I am new to Ray so I may be structuring this all wrong, or maybe this will never work for X, Y, and Z reason which would also be good to know.</p>
<p>The <code>Model</code> object is not designed to be used in parallel. You cannot add constraints from multiple threads at the same time. This will result in undefined behavior. You would need at least a lock to make sure only one thread at a time adds constraints.</p> <p>Note that parallel model building may not be a good idea at all: the order of constraints will be more or less random. On the other hand, the behavior of the solver may depend on the order of constraints (this is called performance variability). So you may have a hard time reproducing certain results/behavior.</p>
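If you do end up sharing one model object across threads, the minimal safeguard is a lock so additions are serialized. A generic sketch (a plain list stands in for the CPLEX model, which is an assumption; note that serializing the adds also removes most of the parallel speed-up, which reinforces the point that parallel model building rarely pays off):

```python
import threading

constraints = []          # stand-in for the (not thread-safe) model object
lock = threading.Lock()

def add_constraint(c):
    # Only one thread at a time may touch the shared object
    with lock:
        constraints.append(c)

threads = [threading.Thread(target=add_constraint, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(constraints))   # 8
```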
python|cplex|ray|docplex
0
1,904,568
69,618,420
How to deactivate or cease script execution from a specific directory
<p>I have a directory having user uploaded scripts, Python, Js and so on, but due to the fear of security breach I want to deactivate those script in such directory to restrict them from execution.</p> <p>I wanna use python flask to do so (Python 3).</p>
<p>If I understand the problem correctly, you would like to allow the user to upload arbitrary files to your sever. A good idea here would be create a secure file name before saving it in the server. The official documentation suggests using <code>secure_filename</code> to achieve this, however also notes that you may need to disallow certain extension if the server executes the file ('php' for example).</p> <p>(Assuming you are using SQLAlchemy)</p> <p>You can create a model that will store the custom filename and meta data about the file. Below is a sample app using some of the code from the <a href="https://flask.palletsprojects.com/en/2.0.x/patterns/fileuploads/" rel="nofollow noreferrer">official documentation</a>.</p> <pre><code>from flask import (Flask, request, send_from_directory, abort, redirect) from werkzeug.utils import secure_filename from flask_sqlalchemy import SQLAlchemy from uuid import uuid4 import os app = Flask(__name__) app.config['SECRET_KEY'] = 'change' app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db' app.config['UPLOAD_FOLDER'] = './uploads' app.config['MAX_CONTENT_LENGTH'] = 16 * 1000 * 1000 db = SQLAlchemy() db.init_app(app) class User(db.Model): id = db.Column(db.Integer, primary_key=True) uploaded_files = db.relationship('UserFile', backref='user') class UserFile(db.Model): id = db.Column(db.Integer, primary_key=True) filename = db.Column(db.String(50)) client_filename = db.Column(db.String(100)) ... #other file info user_id = db.Column(db.Integer, db.ForeignKey('user.id')) @app.before_first_request def create_tables(): db.create_all() @app.route('/', methods=['GET', 'POST']) def upload(): if request.method == 'POST': # check if the post request has the file part if 'file' not in request.files: return redirect(request.url) file = request.files['file'] # If the user does not select a file, the browser submits an # empty file without a filename. 
if file.filename == '': return redirect(request.url) if file: filename = f&quot;{str(uuid4())}.txt&quot; db.session.add(UserFile( filename=filename, client_filename = secure_filename(file.filename) #minimize security issues in the future #user_id = however you determine who the user is )) db.session.commit() file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename)) return ''' &lt;!doctype html&gt; &lt;title&gt;Upload new File&lt;/title&gt; &lt;h1&gt;Upload new File&lt;/h1&gt; &lt;form method=post enctype=multipart/form-data&gt; &lt;input type=file name=file&gt; &lt;input type=submit value=Upload&gt; &lt;/form&gt; ''' @app.route('/files/&lt;int:id_&gt;/download') def file_download(id_): user_file = UserFile.query.get(id_) if not user_file: abort(404) return send_from_directory(app.config['UPLOAD_FOLDER'], user_file.filename, attachment_filename = user_file.client_filename, as_attachment=True) if __name__ == '__main__': app.run('localhost', port=5000, debug=True) </code></pre>
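As a side note, the sanitizing step can be illustrated with a simplified stand-in. This is NOT werkzeug's actual secure_filename implementation, just a sketch of the idea it applies: normalize to ASCII, drop path separators, whitelist the remaining characters:

```python
import re
import unicodedata

def simple_secure_filename(name):
    """Simplified sketch of secure_filename-style sanitizing
    (NOT werkzeug's real implementation)."""
    # Normalize accented characters, then drop anything non-ASCII
    name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    # Path separators must never survive into a filename
    name = name.replace("/", " ").replace("\\", " ")
    # Collapse every run of disallowed characters into a single underscore
    name = re.sub(r"[^A-Za-z0-9_.-]+", "_", name).strip("._")
    return name or "unnamed"

print(simple_secure_filename("../../etc/passwd"))      # etc_passwd
print(simple_secure_filename("My cool movie.mov"))     # My_cool_movie.mov
```

The directory-traversal attempt is reduced to a harmless flat name, which is exactly why the sanitizer must run before the path is built.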
python|flask
0
1,904,569
55,561,248
XlsxWriter formatting in nested loop
<p>I have tried all imaginable combos from <a href="https://xlsxwriter.readthedocs.io/format.html" rel="nofollow noreferrer">https://xlsxwriter.readthedocs.io/format.html</a> and any Stack Overflow answer I could find that's related.</p> <p>99.99% chance it's something simple, as these are normally the reason I hit a dead end.</p> <p>I'm writing a summary to the first sheet and looping through a dict of df's to write a number of df's to separate sheets (this works fine)</p> <p>I can applied number formats to the summary but not in the loop.</p> <pre><code>writer = ExcelWriter('Players ' + str(date.datetime.now().strftime("%d-%m-%Y %H;%M;%S")) +'.xlsx', engine='xlsxwriter') summary.to_excel(writer,'Summary',index=False) workbook = writer.book worksheet = writer.sheets['Summary'] format1 = workbook.add_format({'num_format': '0.000'}) worksheet.set_column('A:A', 15) worksheet.set_column('B:B', 15) worksheet.set_column('C:C', 6, format1) worksheet.set_column('D:D', 15) worksheet.conditional_format('C2:C100', {'type': '3_color_scale', 'min_color': "green", 'mid_color': "yellow", 'max_color': "red"}) #Above works #below doesn't format #loop through summary and players individual tables #write an individual table to excel if score &gt; 0 for index, row in summary.iterrows(): for tbl in player_df_dict: if (row['Player_1'] == tbl[0]) &amp; (row['Player_2'] == tbl[1]) &amp; (row['Score']&gt;0): player_df_dict[tbl].to_excel(writer,str(tbl[0])+" and "+str(tbl[1]),index=False, columns=cols) worksheet = writer.sheets[str(tbl[0])+" and "+str(tbl[1])] #format2 = workbook.add_format({'num_format': '0.000'}) #worksheet.set_column('D:D', 20, format1) #worksheet.set_column('D:D', 20, format2) writer.save() </code></pre> <p>I've tried:</p> <ul> <li>using the original workbook object (from the summary code)</li> <li>instantiating a new workbook object in all variations of inside and outside each loop</li> <li>same for both above with sheet object</li> <li>same for both above with 
setting and applying format</li> </ul>
<p>You didn't exactly present a <a href="https://stackoverflow.com/help/mcve">mcve</a>.</p> <p>All I can tell you is that setting the column works; I am also doing it in two slightly different ways (yours + using <code>writer.sheets[tabname]</code>, i.e. without creating an explicit variable for the worksheet).</p> <p>In my example below, the formatting gets applied in both cases.</p> <p>Also, define the formatting outside of the loop - it's an attribute of the book, not of the sheet.</p> <p>The code below creates a dataframe of random floats, exports it to excel and formats columns B and C (remember that Python's column 0 is column A):</p> <pre><code>import numpy as np import pandas as pd writer = pd.ExcelWriter('xlsx_test.xlsx') format_1 = writer.book.add_format({'num_format': '0.000'}) for i in range(3): tabname = str(i) pd.DataFrame(np.random.rand(10,4) ).to_excel(writer, tabname) writer.sheets[tabname].set_column(1,2, None, format_1) for i in ["a","b"]: tabname = i pd.DataFrame(np.random.rand(10,4) ).to_excel(writer, tabname) ws = writer.sheets[i] ws.set_column(1,2, None, format_1) writer.close() </code></pre>
python|pandas|xlsxwriter
2
1,904,570
55,193,168
UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 0-1: invalid continuation byte
<p>I am trying to try pandas methods in a csv file I've made which looks like:</p> <pre><code>Location Time Number Seoul Nov.11 5 Jinju dec.22 2 wpg 3 june.6 2 </code></pre> <p>something like this. It is giving me an error message in the title. How can I fix this and what position is it referring exactly?</p>
<p>According to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html</a>, you can pass an <code>encoding</code> parameter when reading the CSV file. I suggest you try &quot;utf-8&quot; first and, if that still fails, &quot;ISO-8859-1&quot;, which accepts any byte sequence.</p> <pre class="lang-py prettyprint-override"><code>pandas.read_csv(yourfile, encoding=&quot;utf-8&quot;) </code></pre> <p>or</p> <pre class="lang-py prettyprint-override"><code>pandas.read_csv(yourfile, encoding=&quot;ISO-8859-1&quot;) </code></pre>
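The error means the file's bytes are not valid UTF-8 (the position in the message is the byte offset of the first offending byte). A stdlib-only sketch of the same idea applied manually — try encodings in order and keep the first that decodes cleanly:

```python
import os
import tempfile

# Bytes that are valid ISO-8859-1 but invalid UTF-8 (0xE9 is 'é' in Latin-1)
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write("café".encode("ISO-8859-1"))

def read_with_fallback(path, encodings=("utf-8", "ISO-8859-1")):
    """Return (text, encoding) for the first encoding that decodes cleanly."""
    for enc in encodings:
        try:
            with open(path, encoding=enc) as f:
                return f.read(), enc
        except UnicodeDecodeError:
            continue
    raise ValueError("none of the tried encodings worked")

text, used = read_with_fallback(path)
print(text, used)  # café ISO-8859-1
```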
python|pandas
13
1,904,571
55,433,024
Why aren't my matplotlib tick labels visible? Same code works for Jupyter notebooks
<p>I am creating a basic figure with 9 subplots, all with shared x and y axes. The default only shows the tick labels for the outermost subplots. I am trying to show the x and y tick labels for all of the inner subplots as well, but nothing I am trying is working on my machine (windows), despite the same code working when I use Jupyter notebooks. </p> <p>I am using two for loops to loop through all 9 subplots, and then loop through all of the individual x and y labels within those axes, writing label.set_visible(True). This works when using Jupyter notebooks, but does not work when I am writing the code on my machine. See below my versions of Python and Anaconda: Python 3.7.1 (default, Dec 10 2018, 22:54:23) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32</p> <p>Current Code:</p> <pre><code>fig, ((ax1,ax2,ax3), (ax4,ax5,ax6), (ax7,ax8,ax9)) = plt.subplots(3, 3, sharex=True, sharey=True) linear_data = np.array([1,2,3,4,5,6,7,8]) ax5.plot(linear_data, '-') # set inside tick labels to visible for ax in plt.gcf().get_axes(): for label in ax.get_xticklabels() + ax.get_yticklabels(): label.set_visible(True) plt.show() </code></pre> <p>This works with Jupyter notebooks, but on my machine, the tick labels for the inner subplots continue to not be visible.</p> <p>Jupyter Notebook result: <a href="https://i.stack.imgur.com/uvjRj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uvjRj.png" alt="Jupyter Notebook Result"></a></p> <p>Local machine result: <a href="https://i.stack.imgur.com/Ut7qc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ut7qc.png" alt="Local machine Python / Anaconda Result"></a></p> <p>Any help would be greatly appreciated!</p>
<p>Use this to show the tick labels:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # Plot figure of subplots with shared axes fig, ((ax1,ax2,ax3), (ax4,ax5,ax6), (ax7,ax8,ax9)) = plt.subplots(3, 3, sharex=True, sharey=True) linear_data = np.array([1,2,3,4,5,6,7,8]) ax5.plot(linear_data, '-') # Show tick labels on all subplots for ax in plt.gcf().get_axes(): ax.xaxis.set_tick_params(labelbottom=True) ax.yaxis.set_tick_params(labelleft=True) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/3HtTX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3HtTX.png" alt="Plot with labels on subplots"></a></p> <p>Note: I am using Python 3.6.6 on my local machine.</p>
python|windows|matplotlib|visualization
0
1,904,572
57,333,103
AWS Lambda Python3.7 Function - numpy: cannot import name 'WinDLL'
<p>I have a function set up in <em>lambda</em> that runs a python script from a <em>.zip</em> file. I have created a <code>virtualenv</code> and included all of the necessary packages in the <em>.zip</em> file (from the <code>Lib\site-packages</code> folder).</p> <p>Below are the import statements for the packages used in the script:</p> <pre><code>import requests import boto3 import logging import os from botocore.exceptions import ClientError from pprint import pprint import pandas as pd from datetime import datetime import s3fs </code></pre> <p>When I attempt to run the <em>lambda function</em> I am receiving the following error:</p> <pre><code>START RequestId: e302cee0-3c51-453a-84c1-6eb1f9c123a0 Version: $LATEST [ERROR] Runtime.ImportModuleError: Unable to import module 'export-dev': Unable to import required dependencies: numpy: cannot import name 'WinDLL' from 'ctypes' (/var/lang/lib/python3.7/ctypes/__init__.py) END RequestId: e302cee0-3c51-453a-84c1-6eb1f9c123a0 REPORT RequestId: e302cee0-3c51-453a-84c1-6eb1f9c123a0 Duration: 1.65 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 70 MB </code></pre> <p>I do not use the <code>ctypes</code>, <code>WinDLL</code> or any related packages explicitly in my code.</p>
<p>AWS Lambda will throw an error if you don't have the correct version of your dependencies packaged with your code, which may depend on the OS (Lambda runs on Linux) and the Python version.</p> <p>Based on your requirements, it's pandas throwing you the error. To run pandas on Lambda, you need to include the following packages:</p> <p><strong>pandas</strong> - code compiled for Linux, which is what Lambda runs on. You can find it here <a href="https://pypi.org/project/pandas/#files" rel="noreferrer">https://pypi.org/project/pandas/#files</a>; download the 'manylinux' version of the .whl file that matches your Lambda Python version. </p> <ul> <li><p>e.g. if you are running py3.7, then get pandas-0.25.3-cp37-cp37m-manylinux1_x86_64.whl</p></li> <li><p>Unzip the contents of the .whl file into the root folder of your Lambda package. This is the library version that Lambda needs</p></li> <li><p>Note that for pandas 0.25+ you also need to include the pytz package; see the note below on requests</p></li> </ul> <p><strong>numpy</strong> - You can now get numpy in Lambda (tested for py3.7) by installing a 'layer' through the Lambda console, see screenshots below.
</p> <ul> <li>There doesn't seem to be a layer for py3.8 yet, so you'll need to download the correct .whl file from PyPI, just like pandas: <a href="https://pypi.org/project/numpy/#files" rel="noreferrer">https://pypi.org/project/numpy/#files</a></li> </ul> <p><strong>Side note on requests</strong></p> <ul> <li><p>Notice that the package here <a href="https://pypi.org/project/requests/#files" rel="noreferrer">https://pypi.org/project/requests/#files</a> only has a 'none-any' version, which means the source doesn't need to be compiled, so you can safely include the version you got from pip</p></li> <li><p>This applies to the pytz dependency of pandas as well</p></li> </ul> <p><strong>Screenshots installing layers in the AWS console</strong></p> <p><a href="https://i.stack.imgur.com/INapE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/INapE.png" alt="adding a layer in lambda"></a> <a href="https://i.stack.imgur.com/FBIU9.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FBIU9.png" alt="select layer"></a></p>
python|python-3.x|numpy|lambda|boto3
5
1,904,573
42,340,224
Tensorflow classify_image
<p>This shiny app is built from: <a href="https://github.com/yukiegosapporo/2016-01-12-mini-ai-app-using-tensorflow-and-shiny" rel="nofollow noreferrer">here</a> It basically uses tensorflow python in R shiny. My main question is getting the py code to run in R. </p> <p>Edit: I managed to make it run by making some changes. Everything runs. However, there is no wordcloud, nor can i get the output printed on shiny. After uploading the image, the output will be in Rstudio's console. </p> <pre><code>library(wordcloud) shinyServer(function(input, output) { PYTHONPATH &lt;- "C:/Program Files/Anaconda3" #should look like /Users/yourname/anaconda/bin if you use anaconda python distribution in OS X CLASSIFYIMAGEPATH &lt;- "C:/Program Files/Anaconda3/Lib/site-packages/tensorflow/models/image/imagenet" #should look like ~/anaconda/lib/python2.7/site-packages/tensorflow/models/image/imagenet outputtext &lt;- reactive({ ###This is to compose image recognition template### inFile &lt;- input$file1 #This creates input button that enables image upload template &lt;- paste0("python"," ", "classify_image.py") #Template to run image recognition using Python if (is.null(inFile)) {system(paste0(template," --image_file /tmp/imagenet/cropped_panda.jpg"))} else { #Initially the app classifies cropped_panda.jpg, if you download the model data to a different directory, you should change /tmp/imagenet to the location you use. 
system(paste0(template," --image_file ",inFile$datapath)) #Uploaded image will be used for classification } }) output$answer &lt;- renderPrint({outputtext()}) output$plot &lt;- renderPlot({ ###This is to create wordcloud based on image recognition results### df &lt;- data.frame(gsub(" *\\(.*?\\) *", "", outputtext()),gsub("[^0-9.]", "", outputtext())) #Make a dataframe using detected objects and scores names(df) &lt;- c("Object","Score") #Set column names df$Object &lt;- as.character(df$Object) #Convert df$Object to character df$Score &lt;- as.numeric(as.character(df$Score)) #Convert df$Score to numeric s &lt;- strsplit(as.character(df$Object), ',') #Split rows by comma to separate rows df &lt;- data.frame(Object=unlist(s), Score=rep(df$Score, sapply(s, FUN=length))) #Allocate scores to split words # By separating long categories into shorter terms, we can avoid "could not be fit on page. It will not be plotted" warning as much as possible wordcloud(df$Object, df$Score, scale=c(4,2), colors=brewer.pal(6, "RdBu"),random.order=F) #Make wordcloud }) output$outputImage &lt;- renderImage({ ###This is to plot uploaded image### if (is.null(input$file1)){ outfile &lt;- "/tmp/imagenet/cropped_panda.jpg" contentType &lt;- "image/jpg" #Panda image is the default }else{ outfile &lt;- input$file1$datapath contentType &lt;- input$file1$type #Uploaded file otherwise } list(src = outfile, contentType=contentType, width=300) }, deleteFile = TRUE) }) </code></pre> <p>Example of output on Rstudio's console: </p> <blockquote> <p>pug, pug-dog (score = 0.89841) bull mastiff (score = 0.01825) Brabancon griffon (score = 0.01114) French bulldog (score = 0.00161) Pekinese, Pekingese, Peke (score = 0.00091) W c:\tf_jenkins\home\workspace\release-win\device\cpu\os\windows\tensorflow\core\framework\op_def_util.cc:332] Op BatchNormWithGlobalNormalization is deprecated. It will cease to work in GraphDef version 9. 
Use tf.nn.batch_normalization().</p> </blockquote> <p>Does anyone knows what is going on? I tried various methods, but i cant get the output printed (used renderPrint, rendertext... etc)</p>
<p>OK here are two ways - </p> <p><strong>1</strong></p> <p>a. Install RPython for windows [Linux/Mac from CRAN]. Download as zip [Clone/Download as zip option] - <a href="https://github.com/cjgb/rPython-win" rel="nofollow noreferrer">https://github.com/cjgb/rPython-win</a></p> <p>b. Unzip it, rename the folder to rPython, Change configure.win to point to your python installation [it must be the Anaconda I guess]</p> <p>c. Execute the code in R - </p> <pre><code>library(devtools) # devtools needs to be installed - install.packages("devtools") install("C:/Users/username/Downloads/rPython") # location where you have downloaded rPython </code></pre> <p>My output - </p> <pre><code>Installing rPython "C:/PROGRA~1/R/R-33~1.2/bin/x64/R" --no-site-file --no-environ --no-save --no-restore --quiet CMD \ INSTALL "C:/Users/vk046010/Downloads/rPython" \ --library="C:/Users/vk046010/Documents/R/win-library/3.3" --install-tests * installing *source* package 'rPython' ... ** libs Warning: this package has a non-empty 'configure.win' file, so building only the main architecture c:/Rtools/mingw_64/bin/gcc -I"C:/PROGRA~1/R/R-33~1.2/include" -DNDEBUG -I"d:/Compiler/gcc-4.9.3/local330/include" -I"C:/Anaconda2/include" -O2 -Wall -std=gnu99 -mtune=core2 -c pycall.c -o pycall.o c:/Rtools/mingw_64/bin/gcc -shared -s -static-libgcc -o rPython.dll tmp.def pycall.o -LC:/Anaconda2/libs -lpython27 -Ld:/Compiler/gcc-4.9.3/local330/lib/x64 -Ld:/Compiler/gcc-4.9.3/local330/lib -LC:/PROGRA~1/R/R-33~1.2/bin/x64 -lR installing to C:/Users/vk046010/Documents/R/win-library/3.3/rPython/libs/x64 ** R ** inst ** preparing package for lazy loading ** help *** installing help indices ** building package indices ** testing if installed package can be loaded * DONE (rPython) </code></pre> <p>d. 
Now execute this from RStudio - </p> <pre><code># tensorflow caller library("rPython") py_code &lt;- " def classify_image(image_loc): # load tensorflow and predict here return image_loc+'person' # return the result here " python.exec(py_code) python.call( "classify_image", 'path/loc/to/image/') </code></pre> <p>The output of python.call should be the thing you are looking for.</p> <p><strong>2</strong></p> <p>I haven't done this before so no personal guarantees. You can install tensorflow in R itself - <a href="https://github.com/rstudio/tensorflow" rel="nofollow noreferrer">https://github.com/rstudio/tensorflow</a></p>
python|r|shiny
0
1,904,574
54,063,406
how to remove some code to an external python file
<p>I have a Python file with a short main function and a big class. <br> The main function only creates an instance of that class and calls its <code>run()</code> function. <br> The class has 7 functions; 5 of them will not be changed, and the other 2 will be changed a lot, so I want to deal with only those 2 in this file. I want to move the 5 functions to a separate class in another file and be able to just import that class and use its functions easily.</p>
<p>Welcome to class inheritance.</p> <p>File 1 (<code>file1.py</code>):</p> <pre><code>class Foo(object): def foo2(self): return 2 def foo3(self): return 3 </code></pre> <p>Main file:</p> <pre><code>import file1 class Foo(file1.Foo): def foo1(self): return 1 print(Foo().foo3()) </code></pre> <p>(Note the methods take <code>self</code> and are called on an instance; the original <code>print Foo.foo3()</code> was Python 2 syntax and would fail.)</p>
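The same split can be sketched in a single file for illustration — in practice the stable methods would live in their own module, and the subclass would import it:

```python
# Single-file sketch: the stable methods live in a base class (which would
# sit in its own module, e.g. `from helpers import Stable`), and the
# frequently edited ones in a subclass that inherits from it.
class Stable:
    def helper_one(self):
        return 1

    def helper_two(self):
        return 2

class Worker(Stable):
    def run(self):     # the part you keep editing
        return self.helper_one() + self.helper_two()

print(Worker().run())  # 3
```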
python|class|import
1
1,904,575
53,970,696
Python - Flask: Static folder outside root directory
<p>Just for fun, I am trying to understand how I can create a website with <code>Python</code> and <code>Flask</code>. That website has to run on my own computer and I will be the only client. So far I got most things I want to do working, but now I encounter a technical problem that I cannot solve.</p> <p>On the client side I want to display images that are returned by the server. In <code>my __init__.py</code> I placed <code>app = Flask(__name__, static_url_path='/static')</code> and in my html document <code>&lt;img src="/static/images/2012035.jpg" height="150px"&gt;</code>. This works like a charm.</p> <p>But in fact, my images are in a directory <code>d:\genealogie\documenten</code>, which is outside the application directory and I do not want to copy more than 2000 files to the directory <code>static/images</code>. </p> <p>I tried:</p> <pre><code>documenten = "d:\genealogie\documenten" os.mkdir(documenten) </code></pre> <p>which gives a <code>WinError 183</code> because the directory already exists.</p> <p>I also tried:</p> <pre><code>documenten = "d:\genealogie\documenten" app = Flask(__name__, static_url_path='documenten') </code></pre> <p>which gives a <code>ValueError</code>: urls must start with a leading slash.</p> <p>I have seen quite a lot of similar questions here, but unfortunately I was unable to see how I could use the answers for my specific problem. Can I configure the website in such a way that I as a user can ask for say <code>&lt;img src="documenten/2012035.jpg" height="150px"&gt;</code>, maybe with some or other <code>localhost</code> prefix? Any help is highly appreciated.</p> <p><em>EDIT</em></p> <p>What I want to do is to give the server access to directories that are outside the directory of the server. Maybe I can illustrate that by showing how easily this can be done in <code>WAMP</code>. There we only need to add a few lines to the file <code>httpd.conf</code>. 
For example:</p> <pre><code>Include "C:/wamp64/alias/*" Alias /adressen "d:/adressen" &lt;Directory "d:/adressen"&gt; Options Indexes FollowSymLinks Multiviews AllowOverride all Require all granted &lt;/Directory&gt; Alias /genealogie "d:/genealogie" &lt;Directory "d:/genealogie"&gt; Options Indexes FollowSymLinks Multiviews AllowOverride all Require all granted &lt;/Directory&gt; </code></pre> <p>The server and all its files are in <code>c:/wamp64</code> and its subdirectories. But when we include <code>&lt;img src="http://localhost/genealogie/documenten/doc1064.jpg"&gt;</code> and <code>&lt;img src="http://localhost/adressen/doc5127.jpg"&gt;</code> in an <code>html</code> document, both images are nicely displayed, despite the fact that physically they reside far outside <code>WAMP</code>, even on different drives. So my question is: can we do this with <code>FLASK</code> as well?</p>
<p>So you need to open the folder with your Python code, extract the right picture and send it to your user?</p> <p>You can do this with</p> <pre><code>file = "d:\genealogie\documenten\your_file_name" #you may consider a path library so you can do this in windows or Linux os.open(file) #you should really use an image library here. open is just a placeholder </code></pre> <p>In other words, you want to open the image in your code (function, class, whatever) and then do whatever you need to do with the image, for example return it to the user.</p> <p>I personally would try to use something like this:</p> <pre><code>from flask import send_file file = "d:\genealogie\documenten\your_file_name" #you may consider a path library so you can do this in windows or Linux return send_file(file, mimetype='image/jpeg') #the standard MIME type for JPEG; you may opt to omit it and let Flask guess </code></pre> <p>You can't do this with</p> <pre><code>documenten = "d:\genealogie\documenten" app = Flask(__name__, static_url_path='documenten') </code></pre> <p>But why? Basically the <code>static_url_path</code> is the URL the user enters into the browser. This has nothing to do with your folder structure on the server. Heck, you do not even need to have folders on your server.</p> <p>The structure of your internet presence does not have to be related to the structure of your file system. Basically those are two completely different worlds that collide here.</p> <p>The URL is used to structure the web hierarchically and mainly copes with organisational structures (domains, subdomains). The file server, on the other hand, can be structured in very many ways. Usually you want to represent the nature of the files and/or the age of the files.</p> <p>By the way, the <code>mkdir</code> command creates folders, but you already have some.</p>
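A minimal sketch of a route that serves files from a directory outside the application folder, using Flask's send_from_directory (which also refuses filenames that try to escape the directory). A temp directory stands in for d:/genealogie/documenten so the sketch runs anywhere; the route name and file name are made up:

```python
import os
import tempfile
from flask import Flask, send_from_directory

# Hypothetical external directory; in the question this would be
# d:/genealogie/documenten. A temp dir stands in so this runs anywhere.
DOCUMENTEN = tempfile.mkdtemp()
with open(os.path.join(DOCUMENTEN, "2012035.jpg"), "wb") as f:
    f.write(b"fake image bytes")

app = Flask(__name__)

@app.route("/documenten/<path:filename>")
def documenten(filename):
    # Serves a file from the external directory; rejects path traversal
    return send_from_directory(DOCUMENTEN, filename)

client = app.test_client()
print(client.get("/documenten/2012035.jpg").status_code)  # 200
```

With this route in place, an HTML page served by the app can use &lt;img src="/documenten/2012035.jpg"&gt; much like the WAMP Alias in the question.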
python|flask|wamp
2
1,904,576
53,929,317
Adding new lines until getting a specific number line python
<p>I have large number of files with different line numbers and same column numbers(17 columns) in a directory. I want to loop through all the files and perform the following operation. </p> <ol> <li>If number of lines in file is smaller than 100 </li> <li>Add new lines with value(0.00 0.00 ...) with same column number (17) until file has 100 lines</li> <li>If line number between 100 and 200, then repeat step 2 but upto 200 lines</li> </ol> <p>the code below but I don't know what I have to write in the if statements to get actual results</p> <pre><code> os.chdir('./directory/') names={} for fn in glob.glob('*.dat'): with open(fn) as f: names[fn]=sum(1 for line in f) if names[fn] &lt; 100: ..... if names[fn]&gt;100 and names[fn]&lt;200: .... </code></pre> <p>thanks.</p>
<p>If you care for a bash-implemented solution, you can look into this <code>bash</code> script:</p> <pre><code>#!/usr/bin/env bash # Appends $1 rows of zeros (with $3 columns each) to file $2 printfn() { for (( i=0;i&lt;$1;i++ )) do for (( j=0;j&lt;$3;j++ )) do printf "0.00 " # &lt;--- change the spacing to fit your column requirements done printf "\n" done &gt;&gt; $2 } # Here columns are 17 COLS=17 # &lt;--- change the column number if you need to f=(*.dat) for file in ${f[@]}; do lines=$(wc -l &lt; $file) if [[ $lines -lt 100 ]] ; then printfn $((100-lines)) $file $COLS elif [[ $lines -lt 200 &amp;&amp; $lines -gt 100 ]]; then printfn $((200-lines)) $file $COLS fi done </code></pre> <p>This fills the file with zeros if the number of lines is less than 100, or if it is between 100 and 200 (a file with exactly 100 lines is left untouched here, matching your if statements). Put this into a file (say <code>script.sh</code>).</p> <p>Don't forget to <code>chmod +x script.sh</code></p>
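Since the question asks for Python, here is a stdlib-only sketch of the same padding logic (the demo directory and file name are made up; the boundary behavior copies the question's own if conditions, so a file with exactly 100 lines is left alone):

```python
import os
import tempfile

def pad_file(path, cols=17):
    """Pad a .dat file with rows of zeros up to the next line-count target.

    Mirrors the question's conditions: fewer than 100 lines -> pad to 100;
    strictly between 100 and 200 -> pad to 200; anything else untouched.
    """
    with open(path) as f:
        n = sum(1 for _ in f)
    if n < 100:
        target = 100
    elif 100 < n < 200:
        target = 200
    else:
        return n
    row = " ".join(["0.00"] * cols) + "\n"
    with open(path, "a") as f:
        f.write(row * (target - n))
    return target

# Demo on a throwaway 5-line file with 17 columns
folder = tempfile.mkdtemp()
path = os.path.join(folder, "sample.dat")
with open(path, "w") as f:
    for _ in range(5):
        f.write(" ".join(["1.00"] * 17) + "\n")

print(pad_file(path))  # 100
```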
python|file
0
1,904,577
65,104,279
How to check if string dateformat is sortable?
<p>Given a string that contains a datetime format, I want to tell the user whether it can be sorted as string or not.</p> <p>For instance:</p> <ul> <li><code>'%Y-%m-%d %H:%M:%S'</code> is sortable</li> <li><code>'%m-%d %H:%M'</code> is sortable</li> <li><code>'%d-%m %H:%M'</code> is not (28-05 will appear before 29-04 if we sort it in string)</li> </ul> <p>My approach would be to parse the string and check if we follow the following order:</p> <ul> <li>Y before m before d</li> <li>H before M before S</li> </ul> <p>This approach is tedious, is there an easy effective way to accomplish this ?</p>
<p>The key is selecting only the markers that are present, then checking whether they appear in the right order (note that the seconds marker must be uppercase <code>%S</code>):</p> <pre><code>def is_sortable(format_string): markers = ['%Y', '%m', '%d', '%H', '%M', '%S'] indices = [format_string.index(marker) for marker in markers if marker in format_string] return indices == sorted(indices) </code></pre> <p>If the indices are not in the right order, the string is not sortable.</p> <pre><code>is_sortable('%Y-%m-%d %H:%M:%S') True is_sortable('%d-%m') False </code></pre> <p>Cheers</p>
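Restated as runnable code against the question's three examples. The seconds marker is uppercase %S to match strftime, and str.index is case-sensitive, so %m and %M cannot collide:

```python
def is_sortable(format_string):
    # Most-significant to least-significant datetime fields
    markers = ['%Y', '%m', '%d', '%H', '%M', '%S']
    indices = [format_string.index(m) for m in markers if m in format_string]
    return indices == sorted(indices)

print(is_sortable('%Y-%m-%d %H:%M:%S'))  # True
print(is_sortable('%m-%d %H:%M'))        # True
print(is_sortable('%d-%m %H:%M'))        # False
```

One caveat the question glosses over: lexicographic sorting also relies on every field being zero-padded and fixed-width, which these standard strftime directives are.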
python|sorting|datetime
3
1,904,578
65,311,538
Extracting creation-minute of a file in Python
<p>I need to extract the creation-minute of a file. But not the date. As an example of a file which was created on:</p> <blockquote> <p>Tue Jul 31 22:48:58 2018</p> </blockquote> <p>the code should print <strong>1368</strong> (result from:22hours*60+24minutes)</p> <p>I got the following program which works perfectly, but in my eyes is ugly.</p> <pre class="lang-py prettyprint-override"><code>import os, time created=time.ctime(os.path.getctime(filename)) # example result: Tue Jul 31 22:48:58 2018 hour=int(created[11:13]) minute=int(created[14:16]) minute_created=hour*60+minute print (minute_created) </code></pre> <p>As I like to write beautiful code, here my question: Is there a more elegant way to do it in Python?</p>
<p>Using regular expressions (note the variable is named <code>created</code> so it does not shadow the <code>time</code> module):</p> <pre><code>import os, time, re created = time.ctime(os.path.getctime(filename)) hours, mins, secs = map(int, re.search(r'(\d{2}):(\d{2}):(\d{2})', created).groups()) minutes_created = hours*60 + mins minutes_created_fr = minutes_created + secs/60 # bonus: fractional minutes from the seconds </code></pre>
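Arguably the cleanest route skips string parsing entirely and lets datetime decompose the timestamp. A sketch, with a throwaway temp file standing in for the real filename:

```python
import datetime
import os
import tempfile

# Stand-in for the real file; any existing path works
fd, path = tempfile.mkstemp()
os.close(fd)

# Work on the raw timestamp instead of its ctime() string representation
created = datetime.datetime.fromtimestamp(os.path.getctime(path))
minute_created = created.hour * 60 + created.minute
print(0 <= minute_created < 24 * 60)  # True
```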
python|python-3.x|file|time|minute
2
1,904,579
65,099,815
Summarizing results of random function in Python
<p>Sorry if this has already been asked, I searched but didn't find anything.</p> <p>I'm having some fun with odds, making a function that uses a list and random weighted results to simulate a race. I was wondering how I would summarize the results of x races.</p> <p>My code looks like this:</p> <pre><code>import random Cars = [&quot;Favorite&quot;, &quot;Second&quot;, &quot;Third&quot;, &quot;Forth&quot;, &quot;Fifth&quot;, &quot;Sixth&quot;, &quot;Seventh&quot;, &quot;Eigth&quot;] def Race(): for i in range(100): Winner = random.choices(Cars, weights=(33, 28.57, 14.285, 10, 8.33, 6.66, 4, 1.9), k=1) print(Winner[0]) Race() </code></pre> <p>In this example, we have eight cars that are racing 100 times.</p> <p>This prints a list of all of the winners from each race. How would I get it to summarize how many times each outcome occurred instead of getting a large list? For example, something like &quot;Favorite won x times!&quot;, &quot;Second won x times!&quot;, and so on. I tried using for statements but had no success.</p> <p>Edit - I was able to clean up my function with help from Ruli and Ashley below. Thanks for the help!</p>
<p>Why not avoid the loop altogether and draw all the winners at once by setting the <code>k</code> parameter:</p> <pre><code>Winners = random.choices(Cars, weights=(33, 28.57, 14.285, 10, 8.33, 6.66, 4, 1.9), k=100) print(Winners.count(&quot;yourDesiredCar&quot;)) </code></pre> <p>If you want all the counts, you can use <code>collections.Counter</code>:</p> <pre><code>from collections import Counter print(Counter(Winners)) </code></pre> <p>which outputs something like:</p> <pre><code>Counter({'Second': 31, 'Favorite': 27, 'Third': 13, 'Fifth': 10, 'Forth': 9, 'Sixth': 7, 'Eigth': 2, 'Seventh': 1}) </code></pre>
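Putting it together, a sketch that prints the exact summary the question asks for (seeded only so the run is reproducible; remove the seed for genuinely random races):

```python
import random
from collections import Counter

cars = ["Favorite", "Second", "Third", "Forth", "Fifth", "Sixth", "Seventh", "Eigth"]
weights = (33, 28.57, 14.285, 10, 8.33, 6.66, 4, 1.9)

random.seed(42)  # reproducible demo only
winners = random.choices(cars, weights=weights, k=100)

counts = Counter(winners)
for car, n in counts.most_common():
    print(f"{car} won {n} times!")
print(sum(counts.values()))  # 100
```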
python|list|function|loops|random
2
1,904,580
22,703,979
getting an answer to two decimal places using the int function
<p>I am brand new to python, only a week or two into my course. Here is what I have written:</p> <pre><code>#prompt user for input purchaseAmount = eval(input("please enter Purchase Amount: ")) # compute sales tax salesTax = purchaseAmount * 0.06 # display sales tax to two decimal points print("sales tax is", int(salesTax * 100 / 100.0)) </code></pre> <p>and this is what it returns:</p> <blockquote> <blockquote> <blockquote> <p>please enter Purchase Amount: 197.55 sales tax is 11</p> </blockquote> </blockquote> </blockquote> <p>My text suggest I should get the answer 11.85. </p> <p>What should I change to get 2 decimal places in my answer?</p>
<p>You should use the <a href="http://docs.python.org/2/library/string.html#format-specification-mini-language" rel="nofollow">string formatting mini-language</a>.</p> <pre><code>&gt;&gt;&gt; salesTax = 22.2 &gt;&gt;&gt; print("sales tax is %0.2f" % salesTax) sales tax is 22.20 </code></pre> <p>Alternatively use the newer format method</p> <pre><code>&gt;&gt;&gt; salesTax = 22.256 &gt;&gt;&gt; print("sales tax is {:.2f}".format(salesTax)) sales tax is 22.26 </code></pre>
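On Python 3.6+ the same formatting is often written with an f-string, and `round` can be used when a numeric value is needed rather than a display string:

```python
salesTax = 197.55 * 0.06  # the question's example purchase

# f-string: format to two decimal places for display
print(f"sales tax is {salesTax:.2f}")  # sales tax is 11.85

# round() when you want a number back, not a string
rounded = round(salesTax, 2)
print(rounded)  # 11.85
```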
python
1
1,904,581
45,518,304
Encountered a Error When execute spark program by using python on pycharm
<p>I writed a python file named Wordcount.py on PyCharm. This is the content of Wordounct.py</p> <pre><code>import sys,os from pyspark import SparkContext sc = SparkContext() myrdd = sc.textFile("passwd") myrdd.count() </code></pre> <p>When I run it ,i encounted a ERROR that display on console </p> <p>The following is ERROR INFO</p> <pre><code>/usr/local/bin/python3 /home/plters/PycharmProjects/Spark21/Wordcount.py Traceback (most recent call last): File "/home/plters/PycharmProjects/Spark21/Wordcount.py", line 2, in &lt;module&gt; from pyspark import SparkContext File "/opt/spark2/python/pyspark/__init__.py", line 44, in &lt;module&gt; from pyspark.context import SparkContext File "/opt/spark2/python/pyspark/context.py", line 29, in &lt;module&gt; from py4j.protocol import Py4JError ImportError: No module named 'py4j </code></pre> <p>How should I do?</p>
<p>looks like py4j module is missing, just install from the terminal</p> <pre><code>pip install py4j </code></pre>
python|apache-spark|pyspark|pycharm
1
1,904,582
45,558,809
How can i add a Sprite to a sprite group in pygame
<p>this is my code. I have tried everything I can think of to get It to work, but I cant figure it out. I added the whole code, if you can find the error it would help tremoundously.</p> <pre><code>#Librarys/Modules and Stuffs import pygame, sys from gamebase import * from pygame.locals import * pygame.init() #Variables MainSprites = pygame.sprite.Group() #Classes class Player(pygame.sprite.Sprite()): def __init__(self): pygame.sprite.Sprite.__init__(self) player = Player() MainSprites.add(player) print(MainSprites) #Functions #Main def Main(): pass if __name__ == "__main__": Main() </code></pre>
<p>You just need to create an instance of a sprite and add it to a declared sprite group. So in your case it would look like this:</p> <pre><code>MainSprites = pygame.sprite.Group() #Classes class Player(pygame.sprite.Sprite): def __init__(self): pygame.sprite.Sprite.__init__(self) player = Player() MainSprites.add(player) </code></pre> <p>Edit: corrected the class code. Replaced:</p> <pre><code>class Player(pygame.sprite.Sprite()): </code></pre> <p>with:</p> <pre><code>class Player(pygame.sprite.Sprite): </code></pre> <p>and that fixes your recursion problem.</p>
python|pygame
0
1,904,583
45,687,392
Django deleting querysets with paging, not catching all parts of the set
<p>I have a bit of a strange problem that I'm not quite able to explain.</p> <p>I have a django project with some old, stale objects lying around. For example, lets say my objects look something like this:</p> <pre><code>class blog_post(models.Model): user_account = models.ForeignKey('accounts.Account') text = models.CharField(max_length=255) authors = models.ManyToManyField(author) created = models.DateTimeField(blank=True, null=True) </code></pre> <p>This is not an exact copy of my model, but is close enough. </p> <p>I've created a management command to build ordered querysets of these objects, and then delete with with a Paginator</p> <p>My command looks something like this:</p> <pre><code>all_accounts = Account.objects.all() for act in all_accounts.iterator(): stale_objects = blog_post.objects.filter(user_account=act, created=django.utils.timezone.now() - datetime.timedelta(days=7)) paginator = Paginator(stale_objects.order_by('id'), 100) for page in range(1, paginator.num_pages + 1): page_stale_objects = blog_post.objects.filter(id__in=paginator.page(page).object_list.values_list('id')) page_stale_objects.delete() </code></pre> <p>The problem I'm having is, after I delete these objects with my command, there are still objects that fit the queryset parameters but are not deleted. So, I have to run the command 3+ times to properly find and remove all the objects. </p> <p>I first figured that my date range was just weirdly on the edge of the DateTime so was not catching objects made shortly after 1 week past my command time. This is not the case, I've removed the created=... filter from the queryset, and have the same results.</p> <p>Why are my querysets not catching all objects the first time this command runs? There are not excessive objects, at the most ~30,000 rows. </p>
<p>Paging through a queryset is translated into consecutive LIMIT/OFFSET calls. So, think about the sequence:</p> <ul> <li>get items with offset 0 and limit 20</li> <li>delete those items</li> <li>get next page, i.e. 20 items from offset 20</li> </ul> <p>But wait! Once we've deleted the first set, the queryset now starts at 0 again. The items that now occupy positions 0 to 19 are skipped.</p> <p>The solution is: don't do this. Pagination is for displaying objects, not deleting them.</p>
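The skipping effect is easy to reproduce with a plain list standing in for the queryset: delete page 1, and what was page 2 shifts into the hole. A minimal sketch:

```python
items = list(range(10))  # stand-in for an ordered queryset
page_size = 3

deleted = []
page = 0
while True:
    # LIMIT/OFFSET slice against the *live* data, as Paginator does
    chunk = items[page * page_size:(page + 1) * page_size]
    if not chunk:
        break
    deleted.extend(chunk)
    items = [x for x in items if x not in chunk]  # the delete()
    page += 1  # advancing the offset now skips shifted items

print(sorted(deleted))  # some items were never visited
print(items)            # survivors left behind
```

The fix for the original problem is simply `stale_objects.delete()` on the whole queryset (or, for very large sets, deleting by primary-key chunks collected up front) rather than paginating during deletion.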
python|django|django-queryset
2
1,904,584
28,598,572
How to convert a column or row matrix to a diagonal matrix in Python?
<p>I have a row vector A, A = [a1 a2 a3 ..... an] and I would like to create a diagonal matrix, B = diag(a1, a2, a3, ....., an) with the elements of this row vector. How can this be done in Python?</p> <p><strong>UPDATE</strong></p> <p>This is the code to illustrate the problem:</p> <pre><code>import numpy as np a = np.matrix([1,2,3,4]) d = np.diag(a) print (d) </code></pre> <p>the output of this code is [1], but my desired output is:</p> <pre><code>[[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]] </code></pre>
<p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.diag.html" rel="noreferrer">diag</a> method:</p> <pre><code>import numpy as np a = np.array([1,2,3,4]) d = np.diag(a) # or simpler: d = np.diag([1,2,3,4]) print(d) </code></pre> <p>Results in:</p> <pre><code>[[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]] </code></pre> <p>If you have a row vector, you can do this:</p> <pre><code>a = np.array([[1, 2, 3, 4]]) d = np.diag(a[0]) </code></pre> <p>Results in:</p> <pre><code>[[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]] </code></pre> <p>For the given matrix in the question:</p> <pre><code>import numpy as np a = np.matrix([1,2,3,4]) d = np.diag(a.A1) print (d) </code></pre> <p>Result is again:</p> <pre><code>[[1 0 0 0] [0 2 0 0] [0 0 3 0] [0 0 0 4]] </code></pre>
python|numpy|matrix|scipy
67
1,904,585
14,606,677
Visualize a clickable graph in an HTML page
<p>I have defined a data structure using <a href="http://code.google.com/p/python-graph/" rel="noreferrer">pygraph</a>. I can display that data easily as <code>PNG</code> using <code>graphviz</code>.</p> <p><img src="https://i.stack.imgur.com/2RdMg.png" alt="Node Graph"></p> <p>I would like to display the data graphically, but making each node in the graph clickable. Each node must be linked to a different web page. How would I approach this?</p> <p>What I would like is:</p> <ol> <li>Assign an <code>href</code> for each node</li> <li>Display all the graph as image</li> <li>Make each node in the image clickable</li> <li>Tooltips for <code>hover</code> event: whenever the cursor is positioned on top of an edge / node, a tooltip should be displayed</li> </ol>
<p>I believe graphviz can already output an image as a map for use in html.</p> <p>Check <a href="http://www.graphviz.org/doc/info/output.html#d%3acmapx" rel="noreferrer">this doc</a> or <a href="http://www.graphviz.org/doc/info/output.html#d%3aimap" rel="noreferrer">this one</a> for how to tell graphviz to output a coordinate map to use. It will even append the url you specify, and there even is a <a href="http://www.graphviz.org/doc/info/output.html#d%3aimap_np" rel="noreferrer">version</a> that only uses rectangles for mapping links</p> <p>Edit:</p> <p>You can also check <a href="http://c2.com/cgi/wiki?GraphVizForInteractiveDevelopmentDiagrams" rel="noreferrer">this document</a> by LanyeThomas which outlines the basic steps:</p> <blockquote> <p>Create a GraphViz dot file with the required linking information, maybe with a script.</p> <p>Run GraphViz once to generate an image of the graph. </p> <p>Run GraphViz again to generate an html image-map of the graph.</p> <p>Create an index.html (or wiki page) with an IMG tag of the graph, followed by the image-map's html. </p> <p>Direct the image-map urls to a Wiki page with each node's name - generate the Wiki pages automatically if needed. </p> <p>Optionally, link directly to the class hierarchy image generated by DoxyGen. Have the Wiki page link to any additional documentation, including DoxyGen docs, images, etc.</p> </blockquote>
python|html|graph
7
1,904,586
68,628,378
Python: How to get double backslash from Path object
<p>I'm using the <code>pathlib Path</code> module to store the paths of a couple of my programs. The problem is, when I go to use this variable, I end up with an error because the <code>\\</code> gets turned into a <code>\</code> which the system in interpreting as a special character. My understanding was that depending on the OS, the <code>Path</code> module would handle this accordingly. (I'm using Windows)</p> <p>Here is a recreation of my code:</p> <pre><code>from pathlib import Path def get_dictionary(): path1 = Path(&quot;C:\\Programs\\program1&quot;) path2 = Path(&quot;C:\\Programs\\program2&quot;) path3 = Path(&quot;C:\\Programs\\program3&quot;) path4 = Path(&quot;C:\\Programs\\program4&quot;) info = { &quot;program1&quot; : str(path1), &quot;program2&quot; : str(path2), &quot;program3&quot; : str(path3), &quot;program4&quot; : str(path4) } return info if __name__ == &quot;__main__&quot;: theInfo = get_dictionary() print(theInfo['program1']) print(theInfo['program2']) print(theInfo['program3']) print(theInfo['program4']) print(theInfo) </code></pre> <p>And the console output is the following:</p> <pre><code>C:\Programs\program1 C:\Programs\program2 C:\Programs\program3 C:\Programs\program4 {'program1': 'C:\\Programs\\program1', 'program2': 'C:\\Programs\\program2', 'program3': 'C:\\Programs\\program3', 'program4': 'C:\\Programs\\program4'} </code></pre> <p>So my question is: Say I want to use <code>theInfo['program1']</code>. I get <code>C:\Programs\program1</code> but I need to get <code>C:\\Programs\\program1</code>. How can I go about doing this? Thank you for any help!</p> <p>Edit: The values I get from the dictionary are placed in a string that ends up being a line in a Tcl file. 
For instance I have a function where I write:</p> <pre><code>f&quot;puts {theInfo['program1']}&quot; </code></pre> <p>where I expect:</p> <pre><code>puts C:\\Programs\\program1 </code></pre> <p>but I get:</p> <pre><code>puts C:\Programs\program1 </code></pre> <p>With other characters, this interprets as a tab, newline, ect...</p>
<p>This phenomenon is caused by the escape character <code>'\'</code>. If you really want the escaped form <code>C:\\Programs\\program1</code> in the output, you can split the path on <code>\</code> and then rejoin it with <code>\\\\</code>.</p> <pre><code>In [4]: data = 'C:\\Programs\\program1' In [5]: data Out[5]: 'C:\\Programs\\program1' In [6]: print(data) C:\Programs\program1 In [7]: new_data = 'C:\\\\Programs\\\\program1' In [8]: new_data Out[8]: 'C:\\\\Programs\\\\program1' In [9]: print(new_data) C:\\Programs\\program1 In [21]: p = '\\\\'.join((&quot;C:\\Programs\\program1&quot;).split('\\')) In [22]: p Out[22]: 'C:\\\\Programs\\\\program1' In [23]: print(p) C:\\Programs\\program1 </code></pre>
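A simpler equivalent of the split/join trick is `str.replace`, which doubles every backslash in one call; it also covers the Tcl-file case from the question's edit:

```python
data = 'C:\\Programs\\program1'  # the string C:\Programs\program1

# Double each backslash so it *prints* as \\
escaped = data.replace('\\', '\\\\')
print(escaped)  # C:\\Programs\\program1

# The Tcl line from the question's edit
line = f"puts {escaped}"
print(line)     # puts C:\\Programs\\program1
```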
python|python-3.x|windows|dictionary|path
0
1,904,587
68,545,485
Is it possible to identify if we are in the middle of a loop?
<p>Is it possible to identify if we are in the middle of a loop?</p> <p>I want to make the class automatically identify if the variable came from an iterator or not.</p> <p>I checked inspect module and didn't succeed. Maybe there is some scope identifier or some brilliant magic methods manipulation that can flag when a code runs inside a loop.</p> <p>Here's an example to make myself clear. How to set inside_loop Student attribute?</p> <pre><code>class Student: def __init__(self, id): self.id = id self.inside_loop = ??? def print_grade(self): if self.inside_loop: print(&quot;you are inside a loop&quot;) else: print(&quot;checking grade...&quot;) &gt;&gt;&gt; S = Student(1) &gt;&gt;&gt; S.print_grade() &gt;&gt;&gt; checking grade... &gt;&gt;&gt; student_ids = [1, 2, 3] &gt;&gt;&gt; for student in student_ids: &gt;&gt;&gt; S = Student(student) &gt;&gt;&gt; S.print_grade() &gt;&gt;&gt; &gt;&gt;&gt; you are inside a loop &gt;&gt;&gt; you are inside a loop &gt;&gt;&gt; you are inside a loop </code></pre> <p>Thanks in advance.</p>
<p>I have advanced a little bit in this question. I used <code>ast</code> and <code>inspect</code> standard modules of python. Since we had some worrisome comments, I would appreciate if you point out possible bugs in the code below. Until now, it solves the original problem.</p> <p>With <code>inspect.currentframe().f_back</code> I get the frame where the class was instantiate. Then the frame is used to build the abstract tree. For each tree node, it is set the parent attribute. Finally, I select the node that has the same line number than the original frame class instantiation assignment and check if this node parent is a <code>ast.For, ast.While,</code> or a <code>ast.AsyncFor</code>.</p> <p><strong>It does not work in a pure python shell.</strong> See discussion in <a href="https://stackoverflow.com/questions/60797338/get-ast-from-python-object-in-interpreter">get ast from python object in interpreter</a>.</p> <pre><code>import ast import inspect class Student: def __init__(self, id): self.id = id self.previous_frame = inspect.currentframe().f_back self.inside_loop = self._set_inside_loop() def _set_inside_loop(self): loops = ast.For, ast.While, ast.AsyncFor nodes = ast.parse(inspect.getsource(self.previous_frame)) for node in ast.walk(nodes): try: for child in ast.iter_child_nodes(node): child.parent = node instantiate_class = node.lineno == self.previous_frame.f_lineno loop_parent = isinstance(node.parent, loops) if instantiate_class &amp; loop_parent: return True except Exception: continue else: return False def print_grade(self): if self.inside_loop: print(&quot;you are inside a loop&quot;) else: print(&quot;checking grade...&quot;) </code></pre> <p>Now we have:</p> <pre><code>d = Student(4) d.print_grade() &gt;&gt;&gt; checking grade... 
students = [1,2,3] for i in students: s = Student(i) s.print_grade() &gt;&gt;&gt; you are inside a loop &gt;&gt;&gt; you are inside a loop &gt;&gt;&gt; you are inside a loop i = 1 while i&lt;4: s = Student(i) s.print_grade() i +=1 &gt;&gt;&gt; you are inside a loop &gt;&gt;&gt; you are inside a loop &gt;&gt;&gt; you are inside a loop </code></pre>
python|for-loop|abstract-syntax-tree
1
1,904,588
57,186,134
CANopen Device Updates too slowly
<p>Using CANopen on a Revolution Pi I have data coming from an MLS (Magnetic Line Sensor), however the data being received is far too slow for the needs as the updates need to be instant. What do I need to do to make the data update much faster?</p> <p>The CAN is setup using:</p> <pre><code>sudo ip link set can0 type can bitrate 125000 sudo ip link set can0 up candump can0 -td </code></pre> <p>I have used the Python-can library to create a basic program to investigate whether it would poll faster:</p> <pre><code>import can can_interface = 'can0' bus = can.interface.Bus(can_interface, bustype='socketcan') while 1 &lt; 2: bus.flush_tx_buffer() message = bus.recv() print(message) </code></pre> <p>The data printed message data (similar to that of the candump) should be posting in a new message many times within a second, however I'm waiting from between &lt;1sec to >10mins between the messages coming from the sensor</p>
<p>After trying a different MLS sensor, the data was posted every 0.01 seconds, which is the ideal speed. So there must be an unknown fault in the original sensor, although that sensor did work with a CANET-2 (CAN to Ethernet) device.</p>
python|linux|raspberry-pi|raspbian|canopen
0
1,904,589
44,674,356
matplotlib mpatches.FancyArrowPatch is too short
<p>I want to use <code>mpatches.FancyArrowPatch</code> to plot a lot of paths (hundreds in a single plot). I used to use <code>plt.arrow</code>, but it makes the plot windows way slow and also takes longer than the patches approach.</p> <p>Anyway, When I started using <code>mpatches.Arrow</code> I got good results for large scales, but scaling down the arrow sizes <a href="https://stackoverflow.com/questions/43586086/matplotlib-patches-fancyarrow-has-thinned-out-tail/43588158">leads to a weird bug were the tail becomes triangular.</a>. That's why I am using <code>FancyArrowPatch</code> now, which scales very well thanks to the <code>dpi_corr</code> kwarg.</p> <p>But now take a look at the picture: The bottom arrow is a <code>mpatches.Arrow</code> and the top one is a <code>mpatches.FancyArrowPatch</code> and the red crosses mark start and end of the arrows. The top one is <em>way</em> too short! What's happening here? How can I make it the correct size?</p> <p>Additional info: In my main program I have large lists containing start and end coordinates. From those I create arrows in a <code>for</code> loop using the function(s) you see below. 
I am using <code>python 3.4</code> and <code>matplotlib 2.0.2</code>.</p> <p><img src="https://i.imgur.com/UKK05Tb.png" alt="fancyarrowpatch plot"></p> <p>Here is my MWE:</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.patches as mpatches from matplotlib.collections import PatchCollection start1 = [1, 1] end1 = [3, 3] start2 = [1, 3] end2 = [3, 5] def patchArrow(A, B): arrow = mpatches.Arrow(A[0], A[1], B[0] - A[0], B[1] - A[1]) return arrow def patchFancyArrow(A, B): arrow = mpatches.FancyArrowPatch((A[0], A[1]), (B[0], B[1])) return arrow patches = [] patches.append(patchArrow(start1, end1)) patches.append(patchFancyArrow(start2, end2)) collection = PatchCollection(patches) plt.plot([1, 3, 1, 3], [1, 3, 3, 5], "rx", markersize=15) plt.gca().add_collection(collection) plt.xlim(0, 6) plt.ylim(0, 6) plt.show() </code></pre>
<p>If you just add the arrow as a patch, everything works as expected.</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.patches as mpatches style="Simple,head_length=28,head_width=36,tail_width=20" arrow = arrow = mpatches.FancyArrowPatch((1,1), (3,3), arrowstyle=style) plt.gca().add_patch(arrow) plt.plot([1, 3], [1,3], "rx", markersize=15) plt.xlim(0, 6) plt.ylim(0, 6) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/qoUUg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qoUUg.png" alt="enter image description here"></a></p> <p>In case you need a collection to add the arrow to, it might be enough to set the <code>shrinkA</code> and <code>shrinkB</code> arguments to 0.</p> <pre><code>import matplotlib.pyplot as plt import matplotlib.patches as mpatches from matplotlib.collections import PatchCollection arrow = mpatches.FancyArrowPatch((1,1), (3,3), shrinkA=0, shrinkB=0) collection = PatchCollection([arrow]) plt.gca().add_collection(collection) plt.plot([1, 3], [1,3], "rx", markersize=15) plt.xlim(0, 6) plt.ylim(0, 6) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/G9G8H.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G9G8H.png" alt="enter image description here"></a></p>
python|matplotlib|plot
2
1,904,590
44,704,465
Pandas: df.groupby() is too slow for big data set. Any alternatives methods?
<p>I have a pandas.DataFrame with 3.8 Million rows and one column, and I'm trying to group them by index.</p> <p>The index is the customer ID. I want to group the <code>qty_liter</code> by the index:</p> <p><code>df = df.groupby(df.index).sum()</code></p> <p>But it takes forever to finish the computation. Are there any alternative ways to deal with a very large data set?</p> <p>Here is the <code>df.info()</code>:</p> <pre><code>&lt;class 'pandas.core.frame.DataFrame'&gt; Index: 3842595 entries, -2147153165 to \N Data columns (total 1 columns): qty_liter object dtypes: object(1) memory usage: 58.6+ MB </code></pre> <p>The data looks like this:</p> <p><a href="https://i.stack.imgur.com/2rqA7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2rqA7.png" alt="enter image description here" /></a></p>
<p>The problem is that your data are not numeric. Processing strings takes a lot longer than processing numbers. Try this first:</p> <pre><code>df.index = df.index.astype(int) df.qty_liter = df.qty_liter.astype(float) </code></pre> <p>Then do <code>groupby()</code> again. It should be much faster. If it is, see if you can modify your data loading step to have the proper dtypes from the beginning.</p>
python|pandas|grouping|bigdata
6
1,904,591
44,788,503
Using Python to find a global maximum from array of variables and functional values
<p>In the case of my problem, I have a dataset with a function dependent on 4 variables. The dataset is in the form of an array whose rows have 5 columns, where columns 0,1,2,3 are input values and column 4 contains the output. I want to find the location and value of the global maxima of the manifold defined by the input values. In particular, this dataset has several local maxima which are not of interest. </p> <p>I am new to using python which is making things a little difficult, especially because when using libraries like scipy it is necessary to define a functional form prior to throwing random values and initiating a search for minima/maxima. If anyone has ideas for a library or a specific technique I could use, I would greatly appreciate the help.</p>
<p>Considering <code>dataset</code> is a numpy ndarray of shape (n, 5), i.e. four input columns plus the output column,</p> <pre><code>import numpy as np global_maxima = dataset[np.argmax(dataset[:,-1])][:-1] </code></pre> <p>will give you the input values (the first four columns) of the row where the output column is largest, which if I understood well is what you are after. If you want the global minimum, replace <code>np.argmax</code> with <code>np.argmin</code>.</p>
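If pulling in numpy is not an option, the same lookup is one line of plain Python using `max` with a key function, illustrated here on a tiny made-up dataset:

```python
# Each row: four input values followed by the function output
dataset = [
    [0.1, 0.2, 0.3, 0.4, 1.5],
    [0.5, 0.6, 0.7, 0.8, 9.2],  # global maximum lives here
    [0.9, 1.0, 1.1, 1.2, 3.3],
]

# max() scans all rows once, comparing only the output column
best_row = max(dataset, key=lambda row: row[4])
best_inputs, best_value = best_row[:4], best_row[4]
print(best_inputs, best_value)
```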
python|function|scipy|mathematical-optimization|maximize
0
1,904,592
61,976,561
Python, decorator that wraps function with "with" statement
<p>In Python 3.7+, is there a way to create a decorator that takes something like that:</p> <pre class="lang-py prettyprint-override"><code>@some_dec_fun def fun(): ... </code></pre> <p>And transforms and executes something like</p> <pre class="lang-py prettyprint-override"><code>def fun(): with some_dec_fun(): ... </code></pre>
<p>You can't use a decorator to 'get inside' another function. But you can use a decorator to execute the 'whole' decorated function within a particular context. For example the following will execute the function <code>some_function</code> within the context manager <code>some_context_manager</code> (note that the decorator must return <code>wrap</code> itself, not call it):</p> <pre><code>def my_decorator(func): def wrap(*args, **kwargs): with some_context_manager(): return func(*args, **kwargs) return wrap @my_decorator def some_function(): ... </code></pre>
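A runnable sketch with a concrete (hypothetical) context manager that records enter/exit events; note `return wrap` without parentheses, and `*args`/`**kwargs` so the decorated function keeps its arguments and return value:

```python
from contextlib import contextmanager

log = []

@contextmanager
def some_context_manager():
    # Hypothetical context: just record that we entered/exited
    log.append("enter")
    try:
        yield
    finally:
        log.append("exit")

def my_decorator(func):
    def wrap(*args, **kwargs):
        with some_context_manager():
            return func(*args, **kwargs)
    return wrap  # return the function itself, not wrap()

@my_decorator
def fun(x):
    log.append(f"body({x})")
    return x * 2

result = fun(21)
print(result)  # 42
print(log)     # ['enter', 'body(21)', 'exit']
```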
python|python-3.x
4
1,904,593
61,884,066
Can both os.DirEntry.is_dir() and os.DirEntry.is_file() return False for an object in os.scandir()?
<p>I am making an event handler similar to <code>os.system("dir")</code> and it would be nice to know. I'm not sure what the object could be, but I wanted to be sure.</p>
<p>Yes it's possible for both <code>os.DirEntry.is_dir</code> and <code>os.DirEntry.is_file</code> to return False: <code>is_dir()</code> is for directories (which is obvious) but <code>is_file()</code> is only for so-called <a href="https://docs.python.org/3/library/stat.html#stat.S_ISREG" rel="nofollow noreferrer">regular files</a>.</p> <p>This means it can return <code>False</code> for something which is not a directory, like devices, pipes, ... for instance on unices most of <code>/dev</code> is neither a file nor a directory. I'm less sure about windows, but it probably has such concepts e.g. the "magic" reserved names like <code>CON</code> or <code>PRN</code> or <code>LPT1</code> (though I guess those wouldn't show up in a scandir).</p>
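A sketch of a `dir`-style handler that accounts for the third possibility by falling through to an 'other' bucket (demonstrated on a temp tree; FIFOs, devices and the like would land in 'other'):

```python
import os
import tempfile

def classify(path):
    """Map each scandir entry name to 'dir', 'file', or 'other'."""
    kinds = {}
    with os.scandir(path) as it:
        for entry in it:
            if entry.is_dir():
                kinds[entry.name] = "dir"
            elif entry.is_file():
                kinds[entry.name] = "file"
            else:
                kinds[entry.name] = "other"  # pipes, devices, ...
    return kinds

# Demonstrate on a throwaway tree: one directory, one regular file
with tempfile.TemporaryDirectory() as root:
    os.mkdir(os.path.join(root, "sub"))
    open(os.path.join(root, "data.txt"), "w").close()
    result = classify(root)
    print(result)
```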
python|python-3.x
2
1,904,594
61,632,870
RaspberryPi: TypeError: unsupported format string passed to NoneType.__format__
<p>Using DHT11 sensor and raspberrypi to collect temperature and humidity value.Please help me out with this error.Thanks in advance </p> <pre><code>import sys import Adafruit_DHT import time while True: humidity, temperature = Adafruit_DHT.read_retry(11, 4) print( 'Temp: {0:0.1f} C Humidity: {1:0.1f} %'.format(temperature, humidity)) # print() time.sleep(1) </code></pre>
<p>The error means <code>read_retry</code> returned <code>None</code> for one of the values (a failed sensor read), and <code>None</code> cannot be formatted with <code>{0:0.1f}</code>. After the <code>humidity, temperature = Adafruit_DHT.read_retry(11, 4)</code> line, guard the print with an explicit check:</p> <pre><code>humidity, temperature = Adafruit_DHT.read_retry(11, 4) if humidity is not None and temperature is not None: print('Temp: {0:0.1f} C Humidity: {1:0.1f} %'.format(temperature, humidity)) time.sleep(1) </code></pre>
python|python-3.x|raspberry-pi3
0
1,904,595
71,926,706
How return one dimension list by list comprehension from nested list in list
<pre><code>a = [ [1, 2], [1, 2] ] print([x for x in [x for x in a]]) [[1, 2], [1, 2]] </code></pre> <p>But I want to see something like that:</p> <pre><code>[1, 2, 1, 2] </code></pre> <p>How I cad do this?</p>
<p>Try this.</p> <pre class="lang-py prettyprint-override"><code>a = [ [1, 2], [1, 2] ] lst = [q for i in a for q in i] print(lst) </code></pre> <p>The above does not always work; it raises an error if the list is something like this:</p> <pre><code>a = [[1, 2],[1, 2],2] </code></pre> <p>But this one always works:</p> <pre class="lang-py prettyprint-override"><code>a = [ [1, 2], [1, 2] ] lst = [] for i in a: if isinstance(i, list): lst.extend(i) else: lst.append(i) print(lst) </code></pre>
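For the purely-nested case, `itertools.chain.from_iterable` is the usual idiom; note that it, too, fails on mixed lists like `[[1, 2], 2]`:

```python
from itertools import chain

a = [[1, 2], [1, 2]]

# Lazily chains the sublists together, then materialize as a list
flat = list(chain.from_iterable(a))
print(flat)  # [1, 2, 1, 2]
```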
python|list-comprehension
1
1,904,596
15,254,344
db.StringProperty but property is occasionally datastore_types.Text?
<p>Are there any circumstances under which GAE datastore might change a property type from <code>StringProperty</code> to <code>TextProperty</code> (effectively ignoring the model definition)?</p> <p>Consider the following situation (simplified):</p> <pre><code>class MyModel(db.Model): user_input = db.StringProperty(default='', multiline=True) </code></pre> <p>Some entity instances of this model in my datastore have a datatype of <code>TextProperty</code> for 'user_input' (rather than simple <code>str</code>) and are therefore not indexed. I can fix this by retrieving the entity, setting <code>model_instance.user_input = str(model_instance.user_input)</code> and then putting the entity back into the datastore.</p> <p>What I don't understand is how this is happening to only some entities, when there have been no changes to this model. Is it possible that the'type' of a db.model's property can be overridden from <code>StringProperty</code> to <code>TextProperty</code>?</p>
<p>At least in <code>ndb</code> there is a special code path for StringProperties containing Unicode <strong>if the property is not indexed</strong>. In this case it is changed into a Text Property: The relevant snippet is:</p> <pre><code>if isinstance(value, str): v.set_stringvalue(value) elif isinstance(value, unicode): v.set_stringvalue(value.encode('utf8')) if not self._indexed: p.set_meaning(entity_pb.Property.TEXT) </code></pre> <p>I'm unable to observe this on the Python side (see <a href="https://stackoverflow.com/questions/15254344/db-stringproperty-but-property-is-occasionally-datastore-types-text#">db.StringProperty but property is occasionally datastore_types.Text?</a>) but I see it in the datastore and it caused headaches when loading the Datastore into BigQuery.</p> <p>So use only:</p> <ul> <li>StringProperty(indexed=True)</li> <li>TextProperty()</li> </ul> <p>Avoid <code>StringProperty(indexed=False)</code> If you might write mixed <code>unicode</code> and <code>str</code> values to the property - as it tends to happen with external data.</p>
python|google-app-engine|google-cloud-datastore
1
1,904,597
46,583,065
Python: Create a list of all four digits numbers with all different digits within it
<p>I was wondering if there is any easier way to achieve what this code is achieving. Now the code creates all 4-digits number in a list (if the number starts with a 0 is doesn't count as a 4-digit, for example 0123) and no digit is repeated within the number. So for example 1231 is not in the list. Preferable I want a code that does what this one is doing but a depending on what argument N is given to the function upon calling it creates this kind of list with all numbers with N digits. I hope this wasn't impossible to understand since Im new to programing. </p> <pre><code>def guessables(): '''creates a list of all 4 digit numbers wherest every element has no repeating digits inside of that number+ it doesn't count as a 4 digit number if it starts with a 0''' guesses=[] for a in range(1,10): for b in range(0,10): if a!=b: for c in range(0,10): if b!=c and a!=c: for d in range(0,10): if c!=d and d!=b and d!=a: guesses.append(str(a)+str(b)+str(c)+str(d)) return guesses </code></pre>
<p>This can be expressed more easily.</p> <pre><code>def all_digits_unique(number): # `set` will only record unique elements. # We convert to a string and check if the unique # elements are the same number as the original elements. return len(str(number)) == len(set(str(number))) </code></pre> <p>Edit (on Python 3, <code>filter</code> returns an iterator, so wrap it in <code>list</code>):</p> <pre><code>def guesses(N): return list(filter(all_digits_unique, range(10**(N-1), 10**N))) print(guesses(4)) </code></pre>
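An equivalent construction with `itertools.permutations`, which generates the digits already-unique instead of filtering, skipping permutations that start with '0':

```python
from itertools import permutations

def guessables(n):
    """All n-digit strings with pairwise-distinct digits, no leading zero."""
    return [''.join(p) for p in permutations('0123456789', n)
            if p[0] != '0']

four = guessables(4)
print(len(four))          # 9 * 9 * 8 * 7 = 4536
print(four[0], four[-1])  # 1023 9876
```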
python|python-3.x
2
1,904,598
21,039,786
Python Selemium2 Robotframework testing without opening browser instance
<p>I'm using Selenium2Library with RobotFramework for my framework. when I run this it always open a browser first and then start running test cases. Though I can minimize the browser by providing the keyword <code>Minimize Browser Window</code> . but problem is when some window popup it automatically maximize(foreground) the browser. </p> <p>so I'm looking some thing which can hide the browser permanently, i mean browser will run in background(taking care of popups and all GUI design) and run all test cases.</p> <p>is there any library or module for that. I heard phantomJs(<a href="http://phantomjs.org/" rel="nofollow">http://phantomjs.org/</a> ) but i dont know whether it's a good choice or not (i heard this name first time). </p>
<p>Ok! Finally I've figured it out. I used <code>Xvfb</code> with <code>PyVirtualDisplay</code>. Here is the code; <code>visible=0</code> means the browser will run in headless mode (in the background):</p> <pre><code>from pyvirtualdisplay import Display display = Display(visible=0, size=(1024, 700)) display.start() </code></pre>
python|selenium|selenium-webdriver|robotframework
0
1,904,599
53,736,724
How to load the JSON file too the PSQL database
<p>I have a JSON file. Now I need to load the JSON data to my PSQL database.</p> <p>So far I tried this one</p> <pre><code>import psycopg2 import json with open('new.json') as f: data = f.read() dd = json.loads(data) conn = psycopg2.connect(database="newdb", user = "postgres", password = "postgres",host = "127.0.0.1", port = "5432") print "Opened database successfully" cur = conn.cursor() cur.execute(''' CREATE TABLE jsontable(SUM INT NOT NULL, APP CHAR[30] NOT NULL, ID INT NOT NULL, DOMAINNAME TEXT NOT NULL, DOMAINID INT NOT NULL);''') print "Table Created successfully" cur.execute('''INSERT INTO jsontable(data) VALUES(%s) ''', (data, str(dd['sum'],str(dd['app'],str(dd['id'],str(dd['Domain_name'],str(dd['Domain_Id']))) print ("Data Entered successfully") conn.commit() conn.close() </code></pre> <p>Please provide some examples, how to pass the JSON file data to the database</p>
<p>Personally I like <a href="https://magicstack.github.io/asyncpg/current/usage.html#" rel="nofollow noreferrer">asyncpg</a> as it's fully async, especially if you're using Python 3.x; essentially all you need to do is put <code>await</code> in front of the sync commands. (Note this must run inside a coroutine, e.g. launched with <code>asyncio.run</code>.)</p> <pre><code>import asyncpg import json with open('new.json') as f: dd = json.load(f) conn = await asyncpg.connect(database="newdb", user="postgres", password="postgres", host="127.0.0.1", port="5432") print("Opened database successfully") await conn.execute('''CREATE TABLE jsontable(SUM INT NOT NULL, APP CHAR(30) NOT NULL, ID INT NOT NULL, DOMAINNAME TEXT NOT NULL, DOMAINID INT NOT NULL);''') print("Table Created successfully") await conn.execute('''INSERT INTO jsontable(SUM, APP, ID, DOMAINNAME, DOMAINID) VALUES($1, $2, $3, $4, $5)''', dd['sum'], dd['app'], dd['id'], dd['Domain_name'], dd['Domain_Id']) print("Data Entered successfully") await conn.close() </code></pre> <p>asyncpg runs each statement in its own implicit transaction unless you open one explicitly, so there is no separate <code>commit()</code> call.</p>
python|database|python-3.x|postgresql|postgresql-9.5
0