Unnamed: 0 | id | title | question | answer | tags | score |
---|---|---|---|---|---|---|
1,905,600 | 59,909,738 |
AttributeError: 'NoneType' object has no attribute 'pencolor'
|
<p><strong>Hi everyone.</strong> </p>
<p>It's my first time here and I'm new to Python.</p>
<p>When I wrote this code </p>
<pre><code>import turtle
t=turtle.Pen()
t=turtle.bgcolor("black")
sides=6
colors=("blue", "red", "green", "white", "yellow", "purple")
for x in range(360):
t.pencolor(colors[x % sides])
t.forward(x*3/sides+x)
t.left(360/sides+1)
t.width(x*sides/200)
</code></pre>
<p>and ran it, I received a message: </p>
<blockquote>
<p>"Traceback (most recent call last):<br>
File "C:/Users/emin_/PycharmProjects/firstproject/AydA.py", line 10, in
t.pencolor(colors[x % sides]) AttributeError: 'NoneType' object has no attribute 'pencolor'".</p>
</blockquote>
<p>I will be very thankful for any advice and help. </p>
<p><em>Sincerely, paDrEdadash</em></p>
|
<p>Along with the assignment of <code>None</code> in <code>t=turtle.bgcolor("black")</code> that @JohnGordon points out (calling <code>turtle.bgcolor("black")</code> on its own is fine; assigning its result to <code>t</code> is the problem), the indentation as shown is incorrect, and the code can raise an <code>IndexError</code> on <code>colors</code> if <code>sides</code> and <code>len(colors)</code> don't happen to match. I recommend an approach like the following to avoid these problems:</p>
<pre><code>from turtle import Screen, Turtle
SIDES = 6
COLORS = ("blue", "red", "green", "white", "yellow", "purple")
screen = Screen()
screen.bgcolor("black")
turtle = Turtle()
for x in range(360):
    turtle.pencolor(COLORS[(x % SIDES) % len(COLORS)])
    turtle.forward(x * 3 / SIDES + x)
    turtle.left(360 / SIDES + 1)
    turtle.width(x * SIDES / 200)
screen.exitonclick()
</code></pre>
<p><a href="https://i.stack.imgur.com/VPfZe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VPfZe.png" alt="enter image description here"></a></p>
|
python|turtle-graphics|attributeerror|nonetype
| 0 |
1,905,601 | 67,821,568 |
Move inherited item with Drag and Drop within QStandardItemModel
|
<p>I'm trying to implement a function to move nodes by dragging within a single tree (QStandardItemModel, PyQt5, Python). My node classes are created by multiple inheritance, like <code>class Node(A, QStandardItem)</code>. When I drag and drop such a node, only the properties from the QStandardItem parent class are moved; everything from class A is lost.
Here is a minimal working example:</p>
<pre><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import (Qt, QModelIndex, QMimeData, QByteArray)
from PyQt5.QtWidgets import (QApplication, QMainWindow, QAbstractItemView, QPushButton, QVBoxLayout, QWidget)
from PyQt5.QtGui import QStandardItemModel, QStandardItem
class A:
    def __init__(self, *args, **kwargs):
        self.symbol = None
        super().__init__(*args, **kwargs)  # forwards all unused arguments

class Node(A, QStandardItem):
    def __init__(self, symbol, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.symbol = symbol
        self.setText("Node " + str(self.symbol))

class DragDropTreeModel(QStandardItemModel):
    def __init__(self, parent=None):
        super(DragDropTreeModel, self).__init__(parent)

    def supportedDropActions(self):
        return Qt.MoveAction

    def flags(self, index):
        defaultFlags = QStandardItemModel.flags(self, index)
        if index.isValid():
            return Qt.ItemIsDragEnabled | Qt.ItemIsDropEnabled | defaultFlags
        else:
            return Qt.ItemIsDropEnabled | defaultFlags

class DemoDragDrop(QWidget):
    def __init__(self, parent=None):
        super(DemoDragDrop, self).__init__(parent)
        self.setWindowTitle('drag&drop in PyQt5')
        self.resize(480, 320)
        self.initUi()

    def initUi(self):
        self.vLayout = QVBoxLayout(self)
        self.TreeView = QtWidgets.QTreeView(self)
        self.TreeView.setSelectionMode(QAbstractItemView.ExtendedSelection)
        self.TreeView.setDragEnabled(True)
        self.TreeView.setAcceptDrops(True)
        self.TreeView.setDropIndicatorShown(True)
        self.ddm = DragDropTreeModel()
        self.TreeView.setDragDropMode(QAbstractItemView.InternalMove)
        self.TreeView.setDefaultDropAction(Qt.MoveAction)
        self.TreeView.setDragDropOverwriteMode(False)
        self.root_node = Node('root')
        self.ddm.appendRow(self.root_node)
        node_a = Node('a')
        self.root_node.appendRow(node_a)
        node_b = Node('b')
        self.root_node.appendRow(node_b)
        node_c = Node('c')
        self.root_node.appendRow(node_c)
        self.TreeView.setModel(self.ddm)
        self.printButton = QPushButton("Print")
        self.vLayout.addWidget(self.TreeView)
        self.vLayout.addWidget(self.printButton)
        self.printButton.clicked.connect(self.printModelProp)

    def printModelProp(self):
        cur_ind = self.TreeView.currentIndex()
        obj = self.ddm.itemFromIndex(cur_ind)
        obj: Node
        print(obj.symbol)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    app.setStyle('fusion')
    window = DemoDragDrop()
    window.show()
    sys.exit(app.exec_())
</code></pre>
<p>In this example, select a node in the tree and click the "Print" button: it prints 'a' to the console for "Node a", 'b' for "Node b", and so on. Then move a node, select it, and press "Print" again. The application will crash with <code>AttributeError: 'QStandardItem' object has no attribute 'symbol'</code>.</p>
<p>Then I tried to move a node manually by overriding the <code>mimeData</code> and <code>dropMimeData</code> methods. I saved the row and column indexes in <code>mimeData</code> and tried to retrieve the node from that index in <code>dropMimeData</code> to move it. But this doesn't work because the index has changed in the meantime.</p>
<pre><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import (Qt, QModelIndex, QMimeData, QByteArray)
from PyQt5.QtWidgets import (QApplication, QMainWindow, QAbstractItemView, QPushButton, QVBoxLayout, QWidget)
from PyQt5.QtGui import QStandardItemModel, QStandardItem
class A:
    def __init__(self, *args, **kwargs):
        self.symbol = None
        super().__init__(*args, **kwargs)  # forwards all unused arguments

class Node(A, QStandardItem):
    def __init__(self, symbol, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.symbol = symbol
        self.setText("Node " + str(self.symbol))

class DragDropTreeModel(QStandardItemModel):
    def __init__(self, parent=None):
        super(DragDropTreeModel, self).__init__(parent)

    def supportedDropActions(self):
        return Qt.MoveAction

    def flags(self, index):
        defaultFlags = QStandardItemModel.flags(self, index)
        if index.isValid():
            return Qt.ItemIsDragEnabled | Qt.ItemIsDropEnabled | defaultFlags
        else:
            return Qt.ItemIsDropEnabled | defaultFlags

    def mimeData(self, indexes) -> QtCore.QMimeData:
        m_data = super().mimeData(indexes)
        if m_data:
            r = indexes[0].row()
            c = indexes[0].column()
            obj = self.itemFromIndex(indexes[0])
            print(f"row:{r}, column:{c}, type:{type(obj)}, ind:{indexes[0]}")
            m_data.setData('row', QByteArray.number(indexes[0].row()))
            m_data.setData('col', QByteArray.number(indexes[0].column()))
        return m_data

    def dropMimeData(self, data: QtCore.QMimeData, action: QtCore.Qt.DropAction, row: int, column: int,
                     parent: QtCore.QModelIndex) -> bool:
        if data is None or action != QtCore.Qt.MoveAction:
            return False
        _row = data.data('row').toInt()[0]
        _col = data.data('col').toInt()[0]
        old_index = self.index(_row, _col)
        current_index = parent
        old_item = self.takeItem(old_index.row(), old_index.column())
        parent_item = self.itemFromIndex(parent)
        parent_item.appendRow(old_item)
        return True

class DemoDragDrop(QWidget):
    def __init__(self, parent=None):
        super(DemoDragDrop, self).__init__(parent)
        self.setWindowTitle('drag&drop in PyQt5')
        self.resize(480, 320)
        self.initUi()

    def initUi(self):
        self.vLayout = QVBoxLayout(self)
        self.TreeView = QtWidgets.QTreeView(self)
        self.TreeView.setSelectionMode(QAbstractItemView.ExtendedSelection)
        self.TreeView.setDragEnabled(True)
        self.TreeView.setAcceptDrops(True)
        self.TreeView.setDropIndicatorShown(True)
        self.ddm = DragDropTreeModel()
        self.TreeView.setDragDropMode(QAbstractItemView.InternalMove)
        self.TreeView.setDefaultDropAction(Qt.MoveAction)
        self.TreeView.setDragDropOverwriteMode(False)
        self.root_node = Node('root')
        self.ddm.appendRow(self.root_node)
        node_a = Node('a')
        self.root_node.appendRow(node_a)
        node_b = Node('b')
        self.root_node.appendRow(node_b)
        node_c = Node('c')
        self.root_node.appendRow(node_c)
        self.TreeView.setModel(self.ddm)
        self.printButton = QPushButton("Print")
        self.vLayout.addWidget(self.TreeView)
        self.vLayout.addWidget(self.printButton)
        self.printButton.clicked.connect(self.printModelProp)

    def printModelProp(self):
        cur_ind = self.TreeView.currentIndex()
        obj = self.ddm.itemFromIndex(cur_ind)
        obj: Node
        print(obj.symbol)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    app.setStyle('fusion')
    window = DemoDragDrop()
    window.show()
    sys.exit(app.exec_())
</code></pre>
<p>In this example the tree will break.</p>
<p>I wonder whether there is a way to move the node without destroying it. Recreating the object (in <code>mimeData()</code> and <code>dropMimeData()</code>) seems wrong to me when only its index needs to change.</p>
<p>So, the questions are: how can this move be implemented correctly, and is it possible without destroying the node (it could be a member of some list, for example)?</p>
|
<p>Since nobody replied, I'm posting an answer to my own question (maybe it's not perfect, but it works).
First, I found that my design doesn't fit Qt's drag-and-drop design. One option is not to subclass the item via multiple inheritance (like <code>class Node(A, QStandardItem)</code>) but to use composition instead, keeping the <code>class A</code> data inside the <code>QStandardItem</code> via roles. Then there is no need to override any methods.</p>
<p>In my case the nodes use multiple inheritance, so I overrode the dropEvent method and move the row manually (take it out of the model and insert it in the new place). It's a very simple example just for demonstration, so there are no checks on what kind of object is being dropped, and so on.</p>
<p>If somebody has a better idea, you're welcome to comment or post your own solution.</p>
<pre><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import (Qt, QModelIndex, QMimeData, QByteArray)
from PyQt5.QtWidgets import (QApplication, QMainWindow, QAbstractItemView, QPushButton, QVBoxLayout, QWidget)
from PyQt5.QtGui import QStandardItemModel, QStandardItem
class A:
    def __init__(self, *args, **kwargs):
        self.symbol = None
        super().__init__(*args, **kwargs)  # forwards all unused arguments

class Node(A, QStandardItem):
    def __init__(self, symbol, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.symbol = symbol
        self.setData(symbol, QtCore.Qt.UserRole)
        self.setText("Node " + str(self.symbol))

    def get_sym(self):
        return self.data(QtCore.Qt.UserRole)

class DragDropTreeModel(QStandardItemModel):
    def __init__(self, parent=None):
        super(DragDropTreeModel, self).__init__(parent)

    def supportedDropActions(self):
        return Qt.MoveAction

    def flags(self, index):
        defaultFlags = QStandardItemModel.flags(self, index)
        if index.isValid():
            return Qt.ItemIsDragEnabled | Qt.ItemIsDropEnabled | defaultFlags
        else:
            return Qt.ItemIsDropEnabled | defaultFlags

class MyTreeView(QtWidgets.QTreeView):
    def __init__(self, parent=None):
        super(MyTreeView, self).__init__(parent)
        self.setAcceptDrops(True)
        self.setDragEnabled(True)

    def dropEvent(self, event):
        index = self.indexAt(event.pos())
        model = self.model()
        dest_node = model.itemFromIndex(index)
        if dest_node is None:
            return
        source_index = self.currentIndex()
        source_node = model.itemFromIndex(source_index)
        source_node: Node
        source_parent = source_node.parent()
        taken_row = source_parent.takeRow(source_index.row())
        dest_parent = dest_node
        if dest_node != source_parent:
            dest_parent = dest_node.parent()
        if dest_parent is None:
            dest_parent = dest_node
        dest_parent.insertRow(index.row(), taken_row)

class DemoDragDrop(QWidget):
    def __init__(self, parent=None):
        super(DemoDragDrop, self).__init__(parent)
        self.setWindowTitle('drag&drop in PyQt5')
        self.resize(480, 320)
        self.initUi()

    def initUi(self):
        self.vLayout = QVBoxLayout(self)
        self.TreeView = MyTreeView(self)  # QtWidgets.QTreeView(self)
        self.TreeView.setSelectionMode(QAbstractItemView.ExtendedSelection)
        self.TreeView.setDragEnabled(True)
        self.TreeView.setAcceptDrops(True)
        self.TreeView.setDropIndicatorShown(True)
        self.ddm = DragDropTreeModel()
        self.TreeView.setDragDropMode(QAbstractItemView.InternalMove)
        self.TreeView.setDefaultDropAction(Qt.MoveAction)
        self.TreeView.setDragDropOverwriteMode(False)
        self.root_node = Node('root')
        self.ddm.appendRow(self.root_node)
        node_1 = Node('1')
        self.root_node.appendRow(node_1)
        node_2 = Node('2')
        self.root_node.appendRow(node_2)
        node_d = Node('d')
        node_2.appendRow(node_d)
        node_a = Node('a')
        font = QtGui.QFont()
        font.setBold(True)
        node_a.setFont(font)
        node_1.appendRow(node_a)
        node_b = Node('b')
        node_1.appendRow(node_b)
        node_c = Node('c')
        node_1.appendRow(node_c)
        self.TreeView.setModel(self.ddm)
        self.printButton = QPushButton("Print")
        self.vLayout.addWidget(self.TreeView)
        self.vLayout.addWidget(self.printButton)
        self.printButton.clicked.connect(self.printModelProp)

    def drop(self):
        print('drop')

    def printModelProp(self):
        cur_ind = self.TreeView.currentIndex()
        obj = self.ddm.itemFromIndex(cur_ind)
        obj: Node
        print(obj.symbol)

if __name__ == '__main__':
    app = QApplication(sys.argv)
    app.setStyle('fusion')
    window = DemoDragDrop()
    window.show()
    sys.exit(app.exec_())
</code></pre>
|
python|pyqt5
| 0 |
1,905,602 | 66,889,048 |
Removing empty rows from dataframe
|
<p>I have a dataframe with empty values in rows</p>
<p><a href="https://i.stack.imgur.com/jQJoW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jQJoW.png" alt="enter image description here" /></a></p>
<p>How can I remove these empty values? I have already tried <code>data.replace('', np.nan, inplace=True)</code> and <code>data.dropna()</code> but that didn't change anything.
What other ways are there to drop empty rows from a dataframe?</p>
|
<p>As you have spaces in a numeric variable, I'm assuming it was read in as a string. A robust way to solve this is the following:</p>
<pre><code>data = {'lattitude': ['', '38.895118', '', '', '', '45.5234515', '', '40.764462'],
'longitude': ['', '-77.0363658', '', '', '', '-122.6762071', '', '-11.904565']}
df = pd.DataFrame(data)
</code></pre>
<p><a href="https://i.stack.imgur.com/KgOaz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KgOaz.png" alt="enter image description here" /></a></p>
<p>Change the fields to numeric. <code>errors='coerce'</code> replaces any value that cannot be converted with <code>NaN</code>.</p>
<pre><code>df = df.apply(lambda x: pd.to_numeric(x, errors='coerce'))
</code></pre>
<p><a href="https://i.stack.imgur.com/wP8le.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wP8le.png" alt="enter image description here" /></a></p>
<p>The only thing you'll have to do now is drop the NAs:</p>
<pre><code>df.dropna(inplace=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/3stVk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3stVk.png" alt="enter image description here" /></a></p>
<p>Another possible solution is to use regular expressions. Here <code>^\S</code> requires the field to start with a non-whitespace character, so empty or whitespace-only fields fail the match and are filtered out. Of course, multiple regexes are possible here.</p>
<pre><code>mask = (df['lattitude'].str.contains(r'(^\S)') & df['longitude'].str.contains(r'(^\S)'))
df = df[mask]
</code></pre>
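<p>A one-step alternative (a sketch assuming the blanks may also contain whitespace): replace empty or whitespace-only cells with <code>NaN</code> via a regex, then drop those rows. This would also explain why the question's <code>data.replace('', np.nan)</code> changed nothing if the cells contain spaces rather than truly empty strings.</p>

```python
import numpy as np
import pandas as pd

data = {'lattitude': ['', '38.895118', '  ', '45.5234515'],
        'longitude': ['', '-77.0363658', '', '-122.6762071']}
df = pd.DataFrame(data)

# Replace cells that are empty or whitespace-only with NaN, then drop them.
cleaned = df.replace(r'^\s*$', np.nan, regex=True).dropna()
print(cleaned)
```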
|
python|pandas|dataframe
| 1 |
1,905,603 | 42,744,546 |
Scrapy Splash on Ubuntu server: got an unexpected keyword argument 'encoding'
|
<p>The Scrapy Splash setup I am using works just fine on my local machine, but it returns this error when I run it on my Ubuntu server. Why is that? Is it caused by low memory?</p>
<pre><code> File "/usr/local/lib64/python2.7/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
result = g.send(result)
File "/usr/local/lib/python2.7/site-packages/scrapy/core/downloader/middleware.py", line 53, in process_response
spider=spider)
File "/usr/local/lib/python2.7/site-packages/scrapy_splash/middleware.py", line 387, in process_response
response = self._change_response_class(request, response)
File "/usr/local/lib/python2.7/site-packages/scrapy_splash/middleware.py", line 402, in _change_response_class
response = response.replace(cls=respcls, request=request)
File "/usr/local/lib/python2.7/site-packages/scrapy/http/response/text.py", line 50, in replace
return Response.replace(self, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/scrapy/http/response/__init__.py", line 79, in replace
return cls(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/scrapy_splash/response.py", line 33, in __init__
super(_SplashResponseMixin, self).__init__(url, *args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'encoding'
</code></pre>
<h2>UPDATE</h2>
<p>It only happens when using localhost as <code>SPLASH_URL</code></p>
|
<p>I solved it by using the explicit IP instead:</p>
<pre><code>SPLASH_URL = 'http://therealip:8050'
</code></pre>
<p>No localhost-based URL works. I think it's a bug in Scrapy Splash.</p>
<p><strong>UPDATE</strong></p>
<p>It turns out the error also goes away if I turn off Crawlera, but then a different error appears. It's best not to use localhost in the configuration.</p>
|
python|web-scraping|scrapy|scrapy-splash|splash-js-render
| 0 |
1,905,604 | 43,012,415 |
Correcting dates with apply function pandas
|
<p>I have a DataFrame with dates in the following format:</p>
<p>12/31/2000 20:00 <strong><em>(month/day/year hours:minutes)</em></strong></p>
<p>The issue is that there are some dates that are wrong in the data set, for instance: </p>
<p><strong>10/12/2003 24:00</strong> should be <strong>10/13/2003 00:00</strong></p>
<p>This is what I get when I run dfUFO[wrongFormat]</p>
<p><a href="https://i.stack.imgur.com/k4Xi9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k4Xi9.png" alt="enter image description here"></a></p>
<p>So I have the following code in a pandas notebook to reformat these dates:</p>
<pre><code>def convert2400ToTimestamp(x):
    date = pd.to_datetime(x.datetime.split(" ")[0], format='%m/%d/%Y')
    return date + pd.Timedelta(days=1)
wrongFormat = dfUFO.datetime.str.endswith("24:00", na=False)
dfUFO[wrongFormat] = dfUFO[wrongFormat].apply(convert2400ToTimestamp, axis=1)
</code></pre>
<p>This code results in </p>
<pre><code>ValueError: Must have equal len keys and value when setting with an iterable
</code></pre>
<p>I don't really get what this error means. Am I missing something?</p>
<p><strong>EDIT: Changed to</strong> </p>
<pre><code>dfUFO.loc[wrongFormat, 'datetime'] = dfUFO[wrongFormat].apply(convert2400ToTimestamp, axis=1)
</code></pre>
<p>But datetime now shows values like 1160611200000000000 for the date <strong>10/11/2006</strong></p>
|
<p>You can parse your <code>datetime</code> column to "correctly named" parts and use <code>pd.to_datetime()</code>:</p>
<p>Source DF:</p>
<pre><code>In [14]: df
Out[14]:
datetime
388 10/11/2006 24:00:00
693 10/1/2001 24:00:00
111 10/1/2001 23:59:59
</code></pre>
<p>Vectorized solution:</p>
<pre><code>In [11]: pat = r'(?P<month>\d{1,2})\/(?P<day>\d{1,2})\/(?P<year>\d{4}) (?P<hour>\d{1,2})\:(?P<minute>\d{1,2})\:(?P<second>\d{1,2})'
In [12]: df.datetime.str.extract(pat, expand=True)
Out[12]:
month day year hour minute second
388 10 11 2006 24 00 00
693 10 1 2001 24 00 00
111 10 1 2001 23 59 59
In [13]: pd.to_datetime(df.datetime.str.extract(pat, expand=True))
Out[13]:
388 2006-10-12 00:00:00
693 2001-10-02 00:00:00
111 2001-10-01 23:59:59
dtype: datetime64[ns]
</code></pre>
<p>from <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Assembling a datetime from multiple columns of a DataFrame. The keys
can be common abbreviations like:</p>
<p><code>['year', 'month', 'day', 'minute', 'second','ms', 'us', 'ns']</code></p>
<p>or plurals of the same</p>
</blockquote>
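<p>For the specific "24:00" problem in the question, a shorter vectorized sketch (assuming the <code>%m/%d/%Y %H:%M</code> format shown there, without seconds) avoids <code>apply</code> entirely: rewrite the invalid hour as "00:00" and push those rows forward one day.</p>

```python
import pandas as pd

s = pd.Series(['10/11/2006 24:00', '10/12/2003 20:00'])

# Rows using the invalid "24:00" hour get rewritten to "00:00" ...
mask = s.str.endswith('24:00')
dt = pd.to_datetime(s.str.replace('24:00', '00:00', regex=False),
                    format='%m/%d/%Y %H:%M')
# ... and are then pushed forward by one day.
dt = dt + pd.to_timedelta(mask.astype(int), unit='D')
print(dt)
```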
|
python|pandas|datetime|dataframe|timestamp
| 3 |
1,905,605 | 42,815,917 |
Pandas: merge if left column matches any of right columns
|
<p>Is there a way to <em>merge</em> two data frames if one of the columns from the left data frame matches any of the columns of the right data frame:</p>
<pre><code>SELECT
t1.*, t2.*
FROM
t1
JOIN
t2 ON t1.c1 = t2.c1 OR
t1.c1 = t2.c2 OR
t1.c1 = t2.c3 OR
t1.c1 = t2.c4
</code></pre>
<p>Python <em>(something like):</em></p>
<pre><code>import pandas as pd
dataA = [(1), (2)]
pdA = pd.DataFrame(dataA)
pdA.columns = ['col']
dataB = [(1, None), (None, 2), (1, 2)]
pdB = pd.DataFrame(dataB)
pdB.columns = ['col1', 'col2']
pdA.merge(pdB, left_on='col', right_on='col1') \
.append(pdA.merge(pdB, left_on='col', right_on='col2'))
</code></pre>
<p><a href="https://i.stack.imgur.com/qjdwf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qjdwf.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/uyssS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uyssS.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/6xc6R.png" rel="noreferrer"><img src="https://i.stack.imgur.com/6xc6R.png" alt="enter image description here"></a></p>
|
<p>Looks like we're doing a row-by-row <code>isin</code> check. I like to use set logic with numpy broadcasting to help out.</p>
<pre><code>f = lambda x: set(x.dropna())
npB = pdB.apply(f, 1).values
npA = pdA.apply(f, 1).values
a = npA <= npB[:, None]
m, n = a.shape
rA = np.tile(np.arange(n), m)
rB = np.repeat(np.arange(m), n)
a_ = a.ravel()
pd.DataFrame(
    np.hstack([pdA.values[rA[a_]], pdB.values[rB[a_]]]),
    columns=pdA.columns.tolist() + pdB.columns.tolist()
)
col col1 col2
0 1.0 1.0 NaN
1 2.0 NaN 2.0
2 1.0 1.0 2.0
3 2.0 1.0 2.0
</code></pre>
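<p>An alternative sketch of the same OR-join (assuming the example frames from the question): melt <code>pdB</code> to long form so every candidate column becomes its own row, then do one ordinary merge and deduplicate the matched pairs.</p>

```python
import pandas as pd

pdA = pd.DataFrame({'col': [1, 2]})
pdB = pd.DataFrame({'col1': [1, None, 1], 'col2': [None, 2, 2]})

# Long form: one row per (pdB row, candidate column) pair, keeping the
# original pdB row index so the full rows can be recovered afterwards.
long_b = pdB.reset_index().melt(id_vars='index').dropna(subset=['value'])
long_b['value'] = long_b['value'].astype('int64')  # align dtype with pdA['col']

# Match pdA's single column against any of pdB's columns, then deduplicate
# (left row, right row) pairs and re-attach the full pdB rows.
matches = pdA.merge(long_b, left_on='col', right_on='value')
result = (matches[['col', 'index']]
          .drop_duplicates()
          .merge(pdB, left_on='index', right_index=True)
          .drop(columns='index')
          .reset_index(drop=True))
print(result)
```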
|
python|pandas
| 0 |
1,905,606 | 72,329,173 |
Python dictionary extract data from two dictionaries and insert in new dictionary
|
<p>I am a Python beginner and am struggling with dictionaries.</p>
<p>I have the dictionaries routesAndID and findOutPlane:</p>
<pre><code>routesAndID = {('Sydney', 'Dubai'): 3, ('New York', 'Los Angeles'): 2, ('Zurich', 'Singapore'): 0}
findOutPlane = {('Sydney', 'Dubai'): 'Airplane', ('New York', 'Los Angeles'): 'Helicopter', ('Zurich', 'Singapore'): 'Jet'}
</code></pre>
<p>I need to extract each aircraft and its corresponding ID by matching routes (given a route, the aircraft can be identified). I need the following output:</p>
<pre><code>newdict = { "Airplane": 3, "Helicopter": 2, "Jet": 0 }
</code></pre>
<p>Would anyone know how to do so?</p>
|
<p>Consider using a <a href="https://peps.python.org/pep-0274/" rel="nofollow noreferrer"><code>dict comprehension</code></a>:</p>
<pre><code>>>> route_to_id = {
... ('Sydney', 'Dubai'): 3,
... ('New York', 'Los Angeles'): 2,
... ('Zurich', 'Singapore'): 0
... }
>>> route_to_aircraft = {
... ('Sydney', 'Dubai'): 'Airplane',
... ('New York', 'Los Angeles'): 'Helicopter',
... ('Zurich', 'Singapore'): 'Jet'
... }
>>> aircraft_to_id = {
... aircraft: route_to_id[route]
... for route, aircraft in route_to_aircraft.items()
... }
>>> aircraft_to_id
{'Airplane': 3, 'Helicopter': 2, 'Jet': 0}
</code></pre>
<p>If there could be multiple routes with the same aircraft type you could utilize <a href="https://docs.python.org/3/library/collections.html#collections.defaultdict" rel="nofollow noreferrer"><code>collections.<b>defaultdict</b></code></a>:</p>
<pre><code>>>> from collections import defaultdict
>>> route_to_id = {
... ('Sydney', 'Dubai'): 3,
... ('New York', 'Los Angeles'): 2,
... ('Zurich', 'Singapore'): 0,
... ('Auckland', 'San Francisco'): 4
... }
>>> route_to_aircraft = {
... ('Sydney', 'Dubai'): 'Airplane',
... ('New York', 'Los Angeles'): 'Helicopter',
... ('Zurich', 'Singapore'): 'Jet',
... ('Auckland', 'San Francisco'): 'Airplane'
... }
>>> aircraft_to_ids = defaultdict(list)
>>> for route, aircraft in route_to_aircraft.items():
... aircraft_to_ids[aircraft].append(route_to_id[route])
...
>>> dict(aircraft_to_ids)
{'Airplane': [3, 4], 'Helicopter': [2], 'Jet': [0]}
</code></pre>
|
python|dictionary|set|dictionary-comprehension
| 0 |
1,905,607 | 65,555,097 |
Unexpected behavior when copying iterators using tee
|
<p>If you copy an iterator inside a for loop, the iteration resumes just fine. For example:</p>
<pre><code>from itertools import tee

ita = iter(range(5))
for a in ita:
    print(a)
    if a == 2:
        ita, itb = tee(ita)
</code></pre>
<p>prints <code>0 1 2 3 4</code>. However, if you iterate over the second copy made, the original iterator depletes as well:</p>
<pre><code>ita = iter(range(5))
for a in ita:
    print(a)
    if a == 2:
        ita, itb = tee(ita)
        for b in itb:
            pass
</code></pre>
<p>only prints <code>0 1 2</code>.</p>
<p>As far as I understand it, iterating over the copied iterator shouldn't affect the original one, so I don't know why this is happening. Any help would be appreciated</p>
|
<p>The <a href="https://docs.python.org/3/library/itertools.html#itertools.tee" rel="nofollow noreferrer">documentation for <code>tee</code> states</a>:</p>
<blockquote>
<p>Once <code>tee()</code> has made a split, the original <em>iterable</em> should not be used anywhere else; otherwise, the <em>iterable</em> could get advanced without the tee objects being informed.</p>
</blockquote>
<p>The behavior you are triggering is the opposite of what is discussed in the docs (<code>tee</code> is advancing the iterator without informing the <em>other</em> consumer of <code>ita</code> [the <code>for</code> loop]), but it causes the exact same kind of problem.</p>
<p>To put it another way, <code>tee</code> assumes it owns the passed-in iterator (the pseudo-Python in the "for learning purposes" implementation shows as much) - and since the iterator for <code>iter</code> is stateful, when <code>tee</code> consumes the remaining elements in <code>ita</code> it empties out the remaining entries in <code>ita</code> before the next turn of your <code>for</code> loop.</p>
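<p>A minimal sketch of the safe pattern: once <code>tee()</code> has split the iterator, consume only the children. (The question's <code>for</code> loop keeps pulling from the original iterator object it started with, which is exactly the usage the docs warn against.) Draining one child then does not affect the other:</p>

```python
from itertools import tee

ita = iter(range(5))
first = [next(ita) for _ in range(3)]  # consume 0, 1, 2 up front

# After the split, read only from the two children, never from `ita` itself.
itb, itc = tee(ita)
print(first, list(itb), list(itc))  # [0, 1, 2] [3, 4] [3, 4]
```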
|
python|loops|iterator
| 0 |
1,905,608 | 51,005,431 |
Parameter values for parameter (n_neighbors) need to be a sequence
|
<p>I am trying to use the sklearn module. However, my code below</p>
<pre><code>n_range = {'n_neighbors': range(1,100)}
knn_search = GridSearchCV(estimator = KNeighborsClassifier(), param_grid=n_range, scoring='f1_micro')
knn_search.fit(features_vector, train_labels)
</code></pre>
<p>results in the error:</p>
<blockquote>
<p>Parameter values for parameter (n_neighbors) need to be a sequence.</p>
</blockquote>
<p>What did I do wrong?</p>
|
<p>In Python 3.x, the function <code>range</code> returns a <code>range</code> object (which scikit-learn's parameter check does not accept), not a list. You must convert it to a list yourself:</p>
<pre><code>n_range = {'n_neighbors': list(range(1,100))}
</code></pre>
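<p>A quick check of the converted grid (a NumPy array such as <code>np.arange(1, 100)</code> is usually accepted by <code>GridSearchCV</code> as well, though that is worth verifying against your scikit-learn version):</p>

```python
# range() is lazy in Python 3; materialize it where an API expects a list.
n_range = {'n_neighbors': list(range(1, 100))}

print(type(n_range['n_neighbors']), n_range['n_neighbors'][:5])
```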
|
python|python-3.x
| 1 |
1,905,609 | 3,958,039 |
unable to convert pdf to text using python script
|
<p>I want to convert all the .pdf files in a specific directory to .txt using the <code>pdftotext</code> command, driven from a Python script.
My script contains:</p>
<pre><code>import glob
import os
fullPath = os.path.abspath("/home/eth1/Downloads")
for fileName in glob.glob(os.path.join(fullPath, '*.pdf')):
    fullFileName = os.path.join(fullPath, fileName)
    os.popen('pdftotext fullFileName')
</code></pre>
<p>but I am getting the following error:</p>
<pre><code>Error: Couldn't open file 'fullFileName': No such file or directory.
</code></pre>
|
<p>You are passing <code>fullFileName</code> literally to <code>os.popen</code>. You should do something like this instead (assuming that <code>fullFileName</code> does not have to be escaped):</p>
<pre><code>os.popen('pdftotext %s' % fullFileName)
</code></pre>
<p>Also note that <code>os.popen</code> is considered deprecated, it's better to use the <code>subprocess</code> module instead:</p>
<pre><code>import subprocess
retcode = subprocess.call(["/usr/bin/pdftotext", fullFileName])
</code></pre>
<p>It is also much safer as it handles spaces and special characters in <code>fullFileName</code> properly.</p>
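<p>Putting this together for the original directory loop, a sketch (assuming <code>pdftotext</code> is on the <code>PATH</code>; <code>build_commands</code> is a hypothetical helper, split out so the command construction is easy to test):</p>

```python
import glob
import os
import subprocess

def build_commands(directory, tool="pdftotext"):
    """Build one argument list per PDF; the list form needs no shell quoting."""
    pdfs = sorted(glob.glob(os.path.join(directory, "*.pdf")))
    return [[tool, path] for path in pdfs]

if __name__ == "__main__":
    for cmd in build_commands("/home/eth1/Downloads"):
        subprocess.call(cmd)  # pdftotext writes file.txt next to file.pdf
```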
|
python|glob
| 3 |
1,905,610 | 35,331,854 |
Downloading a file at a specified location through python and selenium using Chrome driver
|
<p>I am trying to automatically download some links through selenium's click functionality and I am using a <strong>chrome webdriver</strong> and python as the programming language. <strong>How can I select the download directory</strong> through the python program so that it does not get downloaded in the default Downloads directory. I found a solution for firefox but there the download dialog keeps popping up every time it clicks on the link which does not happen in Chrome.</p>
|
<p>I found the accepted solution didn't work, however this slight change did:</p>
<pre><code>from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
prefs = {'download.default_directory' : '/path/to/dir'}
chrome_options.add_experimental_option('prefs', prefs)
driver = webdriver.Chrome(chrome_options=chrome_options)
</code></pre>
|
python|google-chrome|selenium|download|driver
| 120 |
1,905,611 | 26,499,867 |
Error reading serial data transmission
|
<p>Using a raspberry pi I am having issues reading data that is being transmitted serially. My code was working when I tested it on a different machine but isn't working now.</p>
<p>The baud rate is 9600 w/ no parity, 8 data bits, 1 stop bit and I want the program to handle a variable length of characters (Sometimes 100K+). The reading portion of the code is as follows: </p>
<pre><code>ser = serial.Serial('/dev/ttyAMA0', 9600, parity=serial.PARITY_NONE, timeout=1)
while True:
    data = ser.read(1)
    bytesToRead = ser.inWaiting()
    if bytesToRead:
        data = data + ser.read(bytesToRead)
        encodedData = data.encode('hex')
</code></pre>
<p>With this code, shouldn't I be able to read all the characters as hex, as long as the baud/parity/etc. match the transmitting system?</p>
|
<pre><code>while True:
    data = ser.read(1)
</code></pre>
<p>This waits for a single character to be read (times out after 1s due to the timeout specified in the Serial constructor) and stores it in <code>data</code></p>
<pre><code> bytesToRead = ser.inWaiting()
if bytesToRead:
data = data + ser.read(bytesToRead)
encodedData = data.encode('hex')
</code></pre>
<p>This then immediately checks for any other characters in the buffer, which will usually be zero. Because you're running at 9600 baud, Python will usually see the characters come in one at a time, so the <code>if bytesToRead</code> test will mostly be false, as each incoming character is consumed by the <code>ser.read(1)</code> above.</p>
<p>If you just want to process each character individually, you can do:</p>
<pre><code>while True:
    data = ser.read(1)
    if data:
        encodedData = data.encode('hex')
</code></pre>
<p>Or if you want to keep adding it to a buffer, use something like:</p>
<pre><code>data = ''
while True:
    bytesToRead = ser.inWaiting()
    if bytesToRead:
        data += ser.read(bytesToRead)
        encodedData = data.encode('hex')
        if encodedData.startswith('1234deadb33f'):
            data = data[6:]  # strip 6 chars from start of data
</code></pre>
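<p>One portability note: <code>data.encode('hex')</code> only exists on Python 2 strings. On Python 3, <code>ser.read()</code> returns <code>bytes</code>, which has its own hex helpers (a sketch with a byte literal standing in for serial data):</p>

```python
import binascii

data = b"\x12\x34\xde\xad\xb3\x3f"  # stand-in for bytes read from the port

# Python 3 equivalents of Python 2's data.encode('hex'):
print(data.hex())              # '1234deadb33f'
print(binascii.hexlify(data))  # b'1234deadb33f'
```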
|
python|serial-port|raspberry-pi|pyserial
| 0 |
1,905,612 | 61,541,034 |
Is it possible to create a custom channel using Google Cloud Storage?
|
<p>I would like to <a href="https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/create-custom-channels.html" rel="nofollow noreferrer">create a custom channel</a> but instead of using the local file system, use a GCS bucket to host the packages. I have not been able to find any documentation or resources that indicate whether this is possible and/or how to do it. Does Anaconda allow the indexing of a GCS bucket? </p>
|
<p>Since Conda can write to a local file system, try <a href="https://cloud.google.com/storage/docs/gcs-fuse" rel="nofollow noreferrer">gcsFuse</a>.</p>
<p>With it, you mount a directory that represents your bucket; GCS Fuse translates file-system I/O calls into GCS API calls. Be careful: these calls aren't free, and a large number of them will cost a little!</p>
<p>In addition, don't expect the same read/write performance as local SSD storage. These are API calls, and the latency is not zero.</p>
<p>This way it's transparent to Conda, and you can use your bucket as the channel location.</p>
|
python|anaconda|google-cloud-storage
| 2 |
1,905,613 | 56,197,147 |
Move points horizontally in plot
|
<p>I have a plot of my data points and many of them are around the same value so I would like to move them a bit to the side so that each point is visible and all of them are not just a big mess.</p>
<p>I haven't found any code online that could help me.</p>
<pre><code>mpimax=[250, 300, 350, 400, 450]
mpimax2=[400, 450, 500, 550]
Fpis=np.array([ 88.15000964, 87.82604812, 85.44423898, 84.85864079, 84.41117001])
Fpis2=np.array([ 87.24004281, 85.42371568, 86.74856596, 86.42293262])
Fpis3=[80.97814175481653, 74.12625811398735, 82.44657342612943, 87.3771939549136]
Fpiserr=[1.6053918983908735,
1.1549571932661258,
1.0139484239435315,
0.8058605526698696,
0.6640766134707818]
Fpiserr2=[1.4946328563696913, 1.414439912368433, 1.370372743102621, 1.2860068512665481]
Fpiserr3=[0.7099107986265524,
0.07387064826087104,
0.1129094733109782,
0.1318016758128941]
plt.ylim(73,94)
plt.xlim(200,600)
plt.errorbar(mpimax,Fpis,yerr=Fpiserr,fmt="ro",label='NLO x',capsize=2)
plt.errorbar(mpimax2,Fpis2,yerr=Fpiserr2,fmt="r^",label='NNLO x',mfc='none',capsize=2)
plt.errorbar(mpimax2,Fpis3,yerr=Fpiserr3,fmt="g^",label='NNNLO x',capsize=2)
plt.xlabel('$M_{\pi}^{max}$[MeV]')
plt.legend(loc='lower left', fontsize='small')
plt.savefig('Fcutoffs.png')
plt.show()
</code></pre>
<p>This is the finished product but with some other points than those in the code.
<a href="https://i.stack.imgur.com/EWlRE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EWlRE.png" alt="Example of my plot"></a></p>
|
<p>You could turn your lists into np arrays and then shift them slightly, something like:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
mpimax = np.asarray([250, 300, 350, 400, 450])
mpimax2 = np.asarray([400, 450, 500, 550])
...
plt.errorbar(mpimax, Fpis, yerr=Fpiserr, fmt="ro", label='NLO x', capsize=2)
plt.errorbar(mpimax2 + 3,
Fpis2,
yerr=Fpiserr2,
fmt="r^",
label='NNLO x',
mfc='none',
capsize=2)
plt.errorbar(mpimax2 + 6,
Fpis3,
yerr=Fpiserr3,
fmt="g^",
label='NNNLO x',
capsize=2)
plt.xlabel('$M_{\pi}^{max}$[MeV]')
plt.legend(loc='lower left', fontsize='small')
plt.savefig('Fcutoffs.png')
plt.show()
</code></pre>
|
python|plot
| 0 |
1,905,614 | 56,416,041 |
How to do unit testing on Image as input and output in python?
|
<p>I am doing unit testing on object detection <a href="https://stackoverflow.com/questions/54576667/how-can-i-correctly-classify-the-number-of-positive-bright-color-circles-and-n">code</a> (accepted answer) in Python. I know that in unit testing, we basically put in test parameters to functions we have defined in our program and we enter the expected result. If the expected result is output, we get OK, otherwise, we will get an error.</p>
<p>So my problem is that my <strong>input</strong> is the <strong>Image</strong> and my <strong>output</strong> is the also an image (i.e. <strong>object detected in the image</strong>) and later on, the result is represented using the bar chart and the histogram with slider. <strong>How can I do unit testing on such data?</strong></p>
<p>So far, this is what I have tried (This <a href="https://stackoverflow.com/questions/54576667/how-can-i-correctly-classify-the-number-of-positive-bright-color-circles-and-n">code</a> is saved as cirCode)</p>
<pre><code>from unittest import TestCase
import unittest
from unittest import TestCase
import cirCode
class TestFind_circles(TestCase):
def setUp(self):
pass
def tearDown(self):
pass
#def test_circle(self):
# self.fail()
def test_find_circles(self):
Negative_circles, Positive_circles, out_filepath, circles, threshold = cirCode.find_circles('blobs.jpg')
self.assertEqual(Negative_circles, 20)
self.assertEqual(Positive_circles, 8)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>Now, I do not know how to test <strong>def circle</strong> function. Also, I am not sure if it is a correct way to test <strong>find_circles</strong> function. </p>
<p>Do you guys have any better idea to do the unit test on this <a href="https://stackoverflow.com/questions/54576667/how-can-i-correctly-classify-the-number-of-positive-bright-color-circles-and-n">code</a> and also how can I proceed with the unit test on the <em>circle</em> function?</p>
|
<p>I'm not sure exactly what you are asking; I assume you want a unit-testing method for OpenCV functions and values. </p>
<p>How about following the OpenCV Python test samples? They have already done work similar to what you are trying to do, and there are dozens of sample test cases. </p>
<p><a href="https://github.com/opencv/opencv/tree/master/modules/python/test" rel="nofollow noreferrer">https://github.com/opencv/opencv/tree/master/modules/python/test</a></p>
<p>e.g</p>
<pre><code>import cv2 as cv
import numpy as np
import sys
from numpy import pi, sin, cos
from tests_common import NewOpenCVTests
def circleApproximation(circle):
nPoints = 30
dPhi = 2*pi / nPoints
contour = []
for i in range(nPoints):
contour.append(([circle[0] + circle[2]*cos(i*dPhi),
circle[1] + circle[2]*sin(i*dPhi)]))
return np.array(contour).astype(int)
</code></pre>
|
python|unit-testing|opencv|oop|hough-transform
| 1 |
1,905,615 | 18,502,295 |
Accessing a variable from the script in the HTML code?
|
<p>I am using the Bottle framework (also using Beaker for sessions) for Python and am having trouble accessing a variable from the script in the HTML code. The following is the Python script:</p>
<pre><code>import os, MySQLdb, hashlib, random, markdown2
import beaker.middleware
import bottle
from bottle import run, route, post, get, request, abort, template, hook, app, view
@post('/submit')
def submit():
db = MySQLdb.connect(host='localhost', port=3306, user="root", passwd="blkFDF94alkf", db="_pCMS")
query = db.cursor()
user = request.forms.get('credentials.username')
username = MySQLdb.escape_string(user)
request.session['username'] = username
passw = request.forms.get('credentials.password')
pass_w = MySQLdb.escape_string(passw)
passw2 = str(pass_w)
password = hashlib.md5(passw2).hexdigest()
user_name = request.session['username']
if username >= 2 and password >= 6:
if True:
ugh = query.execute("SELECT * FROM users WHERE username = '%s' AND password = '%s'" % (username, password))
db.commit()
return me()
else:
return index()
</code></pre>
<p>I doubt the code above needs to be explained since all I need to know is how to access the variable user_name in that function of that Python script on the HTML code below. This is what I have of the HTML:</p>
<p>me.tpl:</p>
<pre><code><div class="label">Name:</div>
%if len(user_name) >= 2:
<div class="content">{{user_name}}</div>
%end
</div>
</code></pre>
<p>Is that the correct way to access the variable user_name? It isn't working that way because it's giving me the following error:</p>
<pre><code>NameError: name 'user_name' is not defined
</code></pre>
<p>What me() and index() do:</p>
<pre><code>@route('/')
@route('/index')
@view('index.tpl')
def index():
index = { 'index' : _index()}
return index
def _index():
return 't'
@get('/me')
@view('me.tpl')
def me():
me = { 'me' : _me()}
return me
def _me():
return 't'
</code></pre>
|
<p>Your <code>me</code> view should take a <code>user_name</code> parameter and inject it in the template:</p>
<pre><code>@get('/me')
@view('me.tpl')
def me(user_name):
me = {
'me': _me(),
'user_name': user_name
}
return me
</code></pre>
|
python|template-engine|bottle
| 0 |
1,905,616 | 18,448,579 |
Comparing date and time in python
|
<p>I need to compare the time (and date if that helps) that a message was sent to see if it was in the past 24 hours.</p>
<p>Does anyone know how to take said time and see if it was in the past 24 hours?</p>
|
<p>It sounds like you need to learn about the Python <code>datetime</code> module. Here is a method that solves your problem using <code>datetime</code>:</p>
<pre><code>from datetime import datetime,timedelta
def is_older_than_a_day(test_time):
one_day_ago = datetime.now() - timedelta(days=1)
if test_time > one_day_ago:
print "The test time is less than one day old!"
else:
print "The test time is older than one day."
</code></pre>
<p>(Notice, <code>test_time</code> is the timestamp of your message as a <code>datetime</code> object.) Basically, I used three helpful features from the <code>datetime</code> module:</p>
<ul>
<li><code>datetime.now()</code> will get the current time</li>
<li><code>timedelta</code> allows you to adjust/change a datetime by a given number of <code>days</code>, <code>hours</code>, <code>minutes</code>, etc.</li>
<li>Two <code>datetime</code> objects can be compared with the operators: <code>></code>, <code><</code>, <code>>=</code>, <code><=</code>, <code>==</code>, <code>!=</code></li>
</ul>
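<p>A Python 3 variant of the same idea that returns a boolean instead of printing, with hypothetical message timestamps for illustration:</p>

```python
from datetime import datetime, timedelta

def is_within_last_day(test_time):
    # True when test_time falls inside the past 24 hours
    return test_time > datetime.now() - timedelta(days=1)

# hypothetical message timestamps
recent = datetime.now() - timedelta(hours=3)
old = datetime.now() - timedelta(days=2)
print(is_within_last_day(recent))  # True
print(is_within_last_day(old))     # False
```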
|
python|string|class|date|time
| 4 |
1,905,617 | 69,344,044 |
How to play musical notes in Python?
|
<p>I wrote some code in Python which randomly picks numbers and converts them into music notes in a text file. But I want to know if there is a way to play the notes and music. If there is a package, are there tutorials or docs that I could look into?</p>
<p>Thanks in advance!</p>
|
<p>Making music is a pretty broad subject.
I think the closest thing to what you ask for is MIDI, a pretty simple protocol where you tell what note to play and for how long. These questions should help you along with how to <a href="https://stackoverflow.com/q/11059801/383793">write</a> MIDI files and then <a href="https://stackoverflow.com/q/6030087/383793">play</a> them.</p>
<p>If you actually want to create the waves yourself and synthesise the sound <a href="https://stackoverflow.com/a/33880295/383793">this</a> should help.</p>
<p>But the subject is broad and there's a long (non exhaustive) list of <a href="https://wiki.python.org/moin/PythonInMusic" rel="nofollow noreferrer">music software written in Python</a>.</p>
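<p>If you want to go the wave-synthesis route with only the standard library, here is a rough sketch that renders a single note (A4 = 440 Hz) to a WAV file; the duration, volume, and file name are my own choices for illustration:</p>

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def note_samples(freq, seconds=0.5, volume=0.5):
    # one sine-wave tone as 16-bit mono PCM frames
    n = int(SAMPLE_RATE * seconds)
    return b''.join(
        struct.pack('<h', int(volume * 32767 *
                              math.sin(2 * math.pi * freq * i / SAMPLE_RATE)))
        for i in range(n)
    )

frames = note_samples(440.0)  # A4
with wave.open('note.wav', 'wb') as f:
    f.setnchannels(1)       # mono
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(SAMPLE_RATE)
    f.writeframes(frames)
```

<p>Chaining several such tones together gives you a simple melody player without any third-party package.</p>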
|
python
| 1 |
1,905,618 | 55,232,757 |
Keras Dense layer after an LSTM with return_sequence=True
|
<p>I am trying to reimplement this paper <a href="http://aclweb.org/anthology/D18-1060" rel="nofollow noreferrer">1</a> in Keras as the authors used PyTorch <a href="https://github.com/gao-g/metaphor-in-context/" rel="nofollow noreferrer">2</a>. Here is the network architecture:
<a href="https://i.stack.imgur.com/XNiys.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XNiys.png" alt="enter image description here"></a>
What I have done so far is:</p>
<pre><code>number_of_output_classes = 1
hidden_size = 100
direc = 2
lstm_layer=Bidirectional(LSTM(hidden_size, dropout=0.2, return_sequences=True))(combined) #shape after this step (None, 200)
#weighted sum and attention should be here
attention = Dense(hidden_size*direc, activation='linear')(lstm_layer) #failed trial
drop_out_layer = Dropout(0.2)(attention)
output_layer=Dense(1,activation='sigmoid')(drop_out_layer) #shape after this step (None, 1)
</code></pre>
<p>I want to include the attention layer and the final FF layer after the LSTM but I am running into errors due to the dimensions and the return_sequence= True option.</p>
|
<p>This is a sequence classification task. Sequence classification is a many-to-one mapping, where you have input from multiple timesteps labeled with a single class. In this case, your inputs should have a shape of (batch_size, time_steps, channels) and outputs should have a shape of (batch_size, channels). If the <code>return_sequences</code> argument of the LSTM class is True, the output will have a shape of <code>(batch_size, time_steps, channels)</code>. Feeding this to dense and dropout layers wouldn't reduce the number of dimensions. To reduce the output to two dimensions, you have to set the <code>return_sequences</code> argument of the last LSTM layer to <code>False</code>. In your case:</p>
<pre class="lang-py prettyprint-override"><code>lstm_layer=Bidirectional(LSTM(hidden_size, dropout=0.2, return_sequences=False))(combined)
</code></pre>
|
python|keras|deep-learning|nlp
| 0 |
1,905,619 | 57,531,716 |
"ValueError: A non-empty list of tiles should be provided to merge." Cartopy OSM
|
<p>I'm currently using OSM in cartopy, python 3.7.</p>
<pre><code>t=OSM()
ax.add_image(t, 10)
</code></pre>
<p>OSM (open street maps) is set to the image at a zoom of 10. But somehow, I get this error
<code>ValueError: A non-empty list of tiles should be provided to merge.</code>:
What does this mean, and how can I fix it?</p>
|
<p>I also had this issue and have managed to get to some resolution.</p>
<p>Following suggestions on <a href="https://github.com/SciTools/cartopy/issues/1341" rel="nofollow noreferrer">https://github.com/SciTools/cartopy/issues/1341</a></p>
<p>You need to make some changes in the img_tiles.py script which can be found in your python library </p>
<p>/python3.x/site-packages/cartopy/io/</p>
<p><strong>The changes to be made are:</strong></p>
<p>add line </p>
<p><code>from urllib.request import Request</code></p>
<p>within the class GoogleWTS, method get_image (approx line 158)</p>
<p>remove line</p>
<p><code>fh = urlopen(url)</code></p>
<p>and replace with lines</p>
<pre><code>req = Request(url)
req.add_header('User-agent', 'your bot 0.1')
fh = urlopen(req)
</code></pre>
<p>I understand you are using OSM not GoogleWTS, as am I, so I didn't think this would work but it did.</p>
<p>I can't see all of your code but also make sure that you have this middle line too:</p>
<pre><code>t=OSM()
ax = plt.axes(projection=t.crs)
ax.add_image(t, 10)
</code></pre>
<p>assuming you are trying to use this with Matplotlib</p>
|
python|cartopy
| 0 |
1,905,620 | 57,593,843 |
How to modify all output files to conceal passwords in Robot Framework?
|
<p>I'd like to conceal passwords from output files in Robot Framework.
In particular, I'm looking for a native possibility (not multiple commands):</p>
<ul>
<li><p>to run a robot framework test retrieving one or more passwords from a vault through a custom keyword</p></li>
<li><p>and to remove in the output files (output.xml, log.html and report.html) all the strings equal to the password(s) retrieved.</p></li>
</ul>
<p>I managed to do it for output.xml through --prerebotmodifier and a simple Python script I made, but the html files (log and report) are generated after the call to the Python script and so passwords are not concealed in there.</p>
<p>It's not possible to use --removekeywords since the password could be used somewhere else in the test and with DEBUG or TRACE it would be shown in the logs.</p>
<p>Another solution would be to run the Python script in a separate command (e.g. through <code>||</code>) but this is not what I'd like to achieve.</p>
<pre class="lang-sh prettyprint-override"><code>robot --prerebotmodifier lib/password_clean.py -L TRACE testConceal.robot
</code></pre>
<pre><code>Test to get password
${password}= get password ${SOME_PARAMETERS}
Log To Console ${password}
</code></pre>
<p>The expected result would be not to see the value of <code>${password}</code> in output.xml, log.html and report.html with one Robot Framework native command.</p>
|
<p>I found a quick win which would be to use <code>--listener</code> instead of <code>--prerebotmodifier</code>. Still working on it, though.</p>
|
python|robotframework
| 0 |
1,905,621 | 57,340,205 |
read text from file to array problem with indexes
|
<p>Hi, I try to read a text file into an array but I make a mistake when I read a number with 2 digits.</p>
<p>I want to count how many odd and even numbers there are in each line.
What did I do wrong?</p>
<p>to this file1.txt: </p>
<pre><code>1 2 3 4 3 6
4 5 8 6 4 2
15 4 22 5 8 21
</code></pre>
<hr>
<pre><code>i get:
evenArray: [3, 5, 3]
oddArray: [3, 1, 2]
</code></pre>
<hr>
<pre><code>with open('file1.txt') as file:
array = file.readlines()
evenCounter = 0
oddCounter = 0
evenArray = []
oddArray = []
for x in array:
for i in range(len(x) - 1):
if(x[i] != " " and x[i + 1] != " " and x[i + 1] != '\n'):
strTemp = x[i]
strTemp += x[i+1]
temp = int(strTemp)
elif x[i] != " ":
temp = int(x[i])
if temp % 2 == 0:
evenCounter += 1
else:
oddCounter += 1
evenArray.append(evenCounter)
oddArray.append(oddCounter)
evenCounter = 0
oddCounter = 0
</code></pre>
|
<p>Try this code.<br>
Each line is an <code>str</code>. For example, the first line is <code>'1 2 3 4 3 6'</code>. This is not a list of numbers, it has to be split and then converted to <code>ints</code>.</p>
<pre class="lang-py prettyprint-override"><code>def is_even(x):
    # a number is even exactly when it is divisible by 2
    return x % 2 == 0
with open('file1.txt', 'r') as file:
lines = file.readlines()
for line in lines:
even_nos = []
odd_nos = []
numbers = map(int, line.split())
for number in numbers:
if is_even(number):
even_nos.append(number)
else:
odd_nos.append(number)
print(f'{len(even_nos)} even numbers: {even_nos}')
print(f'{len(odd_nos)} odd numbers: {odd_nos}')
print()
</code></pre>
|
python
| 0 |
1,905,622 | 57,590,142 |
How to format Django's timezone.now()
|
<p>I am trying to autofill a django datetimefield using a timezone aware date.
Using Django's <code>timezone.now()</code> outputs <code>Aug. 21, 2019, 5:57 a.m.</code>. How can I convert this to <code>2019-08-21 14:30:59</code> ?</p>
|
<p>If you want to do the transformation on the backend you can use Django's built in utility <a href="https://github.com/django/django/blob/master/django/utils/dateformat.py" rel="noreferrer"><code>dateformat</code></a>.</p>
<pre><code>from django.utils import timezone, dateformat
formatted_date = dateformat.format(timezone.now(), 'Y-m-d H:i:s')
</code></pre>
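<p>If you only need the string rather than Django's format syntax, the standard library's <code>strftime</code> produces the same layout. A sketch with a fixed naive datetime for illustration; in Django you would call it on <code>timezone.now()</code> instead:</p>

```python
from datetime import datetime

dt = datetime(2019, 8, 21, 14, 30, 59)
formatted = dt.strftime('%Y-%m-%d %H:%M:%S')
print(formatted)  # 2019-08-21 14:30:59
```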
|
python|django
| 22 |
1,905,623 | 59,180,615 |
How to convert this tuple to array python
|
<p>This one outputs tuple, but I want it to convert to array so I can access each element in the array to my main function. here is the code:</p>
<pre><code>def numbers():
db = getDB();
cur = db.cursor()
sql = "SELECT mobile_number FROM names"
cur.execute(sql)
result = cur.fetchall()
for [x] in result:
print(x)
</code></pre>
|
<p>The solution is very simple:</p>
<pre><code>result = list(cur.fetchall())
</code></pre>
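<p>Since <code>fetchall()</code> returns a list of row tuples, you usually also want to unpack the single <code>mobile_number</code> column. A sketch using an in-memory <code>sqlite3</code> database to stand in for <code>getDB()</code> (the sample numbers are made up):</p>

```python
import sqlite3

db = sqlite3.connect(':memory:')
cur = db.cursor()
cur.execute("CREATE TABLE names (mobile_number TEXT)")
cur.executemany("INSERT INTO names VALUES (?)", [('555-0100',), ('555-0101',)])

cur.execute("SELECT mobile_number FROM names")
result = cur.fetchall()               # list of 1-tuples, e.g. [('555-0100',), ...]
numbers = [row[0] for row in result]  # unpack each 1-tuple
print(sorted(numbers))
```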
|
python
| 0 |
1,905,624 | 22,743,666 |
How to omit code block based on arguments to context manager?
|
<p>Lets's see this example:</p>
<pre><code>with mycontextmanager(arg1='value', arg2=False):
print 'Executed'
</code></pre>
<p>Is there a way to <strong>not</strong> execute code block (<code>print 'Executed'</code>) within a context manager based on argument, eg: when arg2 is not False?</p>
|
<p>Another option is to use a special <code>ConditionalExecution</code> context manager whose <code>__enter__</code> method returns an action that conditionally raises a <code>SkipExecution</code> exception. The <code>__exit__</code> method suppresses only this exception.
Something like the following:</p>
<pre><code>class SkipExecution(Exception): pass
class ConditionalExecution(object):
def __init__(self, value, arg):
self.value = value
self.arg = arg
def __enter__(self):
def action():
if not self.arg:
raise SkipExecution()
return action
def __exit__(self, exc_type, exc_value, tb):
if exc_type is SkipExecution:
return True
return False
</code></pre>
<p>Used as:</p>
<pre><code>In [17]: with ConditionalExecution(1, True) as check_execution:
...: check_execution()
...: print('Hello')
...:
Hello
In [18]: with ConditionalExecution(1, False) as check_execution:
...: check_execution()
...: print('Hello')
In [19]:
</code></pre>
<p>However the problem is that you have to add a call to the value returned.</p>
<p>The problem is that <code>__exit__</code> is called <em>if and only if</em> <code>__enter__</code> returned successfully, which means you can't raise an exception in <code>__enter__</code> to block the execution of the code block.
If you want you can modify this solution so that the call to <code>check_execution()</code> can be done in the first line, like:</p>
<pre><code>In [29]: with ConditionalExecution(1, True) as check_execution, check_execution():
...: print('Hello')
Hello
In [30]: with ConditionalExecution(1, False) as check_execution, check_execution():
...: print('Hello')
</code></pre>
<p>Using a <code>Skipper</code> helper-context manager:</p>
<pre><code>class SkipExecution(Exception): pass
class Skipper(object):
def __init__(self, func):
self.func = func
def __call__(self):
return self.func() or self
def __enter__(self):
return self
def __exit__(self, *args):
pass
class ConditionalExecution(object):
def __init__(self, value, arg):
self.value = value
self.arg = arg
def __enter__(self):
def action():
if not self.arg:
raise SkipExecution()
return Skipper(action)
def __exit__(self, exc_type, exc_value, tb):
if exc_type is SkipExecution:
return True
return False
</code></pre>
<p>I don't think there is anyway to do this without at least an explicit function call as in the above example.</p>
|
python|contextmanager
| 1 |
1,905,625 | 14,677,548 |
Upvote and Downvote buttons with Django forms
|
<p>I want to make upvote and downvote buttons for comments but I want all the form inputs that django.contrib.comments.forms.CommentSecurityForm gives me to make sure the form is secure. Is that necessary? And if so, how do I make a form class that with upvote and downvote buttons? Custom checkbox styles?</p>
|
<p>I would suggest you use separate view logic for up- and down-voting.</p>
<p>Something like this: <code>/upvote/{{ comment.pk }}</code></p>
<p>Then write a view that handles this URL. With the PK, find the comment that the user is trying to up/down-vote, check that the user is authorized to perform that kind of action, and then finally execute that action.</p>
<p>I hope this helps</p>
|
python|django|django-forms
| 0 |
1,905,626 | 14,512,620 |
Is there any way to override the double-underscore (magic) methods of arbitrary objects in Python?
|
<p>I want to write a wrapper class which takes a value and behaves just like it except for adding a 'reason' attribute. I had something like this in mind:</p>
<pre><code>class ExplainedValue(object):
def __init__(self, value, reason):
self.value = value
self.reason = reason
def __getattribute__(self, name):
print '__getattribute__ with %s called' % (name,)
if name in ('__str__', '__repr__', 'reason', 'value'):
return object.__getattribute__(self, name)
value = object.__getattribute__(self, 'value')
return object.__getattribute__(value, name)
def __str__(self):
return "ExplainedValue(%s, %s)" % (
str(self.value),
self.reason)
__repr__ = __str__
</code></pre>
<p>However, the double-underscore functions don't seem to be captured with <code>__getattribute__</code>, for example:</p>
<pre><code>>>> numbers = ExplainedValue([1, 2, 3, 4], "it worked")
>>> numbers[0]
Traceback (most recent call last):
File "<pyshell#118>", line 1, in <module>
numbers[0]
TypeError: 'ExplainedValue' object does not support indexing
>>> list(numbers)
__getattribute__ with __class__ called
Traceback (most recent call last):
File "<pyshell#119>", line 1, in <module>
list(numbers)
TypeError: 'ExplainedValue' object is not iterable
</code></pre>
<p>I would think the two above should end up doing this:</p>
<pre><code>>>> numbers.value[0]
__getattribute__ with value called
1
>>> list(numbers.value)
__getattribute__ with value called
[1, 2, 3, 4]
</code></pre>
<p>Why is this not happening? How can I make it happen? (This might be a horrible idea to actually use in real code but I'm curious about the technical issue now.)</p>
|
<p>As millimoose says, an implicit <code>__foo__</code> call never goes through <code>__getattribute__</code>. The only thing you can do is actually add the appropriate functions to your wrapper class.</p>
<pre><code>class Wrapper(object):
def __init__(self, wrapped):
self.wrapped = wrapped
for dunder in ('__add__', '__sub__', '__len__', ...):
locals()[dunder] = lambda self, __f=dunder, *args, **kwargs: getattr(self.wrapped, __f)(*args, **kwargs)
obj = [1,2,3]
w = Wrapper(obj)
print len(w)
</code></pre>
<p>Class bodies are executed code like any other block (well, except <code>def</code>); you can put loops and whatever else you want inside. They're only magical in that the entire local scope is passed to <code>type()</code> at the end of the block to create the class.</p>
<p>This is, perhaps, the only case where assigning to <code>locals()</code> is even remotely useful.</p>
|
python|object|magic-methods|getattr|getattribute
| 5 |
1,905,627 | 41,262,106 |
Requests giving errors while importing Python 3.4.2
|
<p>When I am trying to import requests in python 3.4.2, it gives me the following errors:</p>
<pre><code>Python 3.4.2 (v3.4.2:ab2c023a9432, Oct 5 2014, 20:42:22)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 2218, in _find_and_load_unlocked
AttributeError: 'module' object has no attribute '__path__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/__init__.py", line 27, in <module>
from . import urllib3
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/__init__.py", line 8, in <module>
from .connectionpool import (
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/connectionpool.py", line 11, in <module>
from .exceptions import (
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/exceptions.py", line 2, in <module>
from .packages.six.moves.http_client import (
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/packages/six.py", line 203, in load_module
mod = mod._resolve()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/packages/six.py", line 115, in _resolve
return _import_module(self.mod)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/urllib3/packages/six.py", line 82, in _import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py", line 69, in <module>
import email.parser
ImportError: No module named 'email.parser'; 'email' is not a package
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 2218, in _find_and_load_unlocked
AttributeError: 'module' object has no attribute '__path__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/__init__.py", line 60, in <module>
from .packages.urllib3.exceptions import DependencyWarning
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/requests/packages/__init__.py", line 29, in <module>
import urllib3
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/urllib3/__init__.py", line 8, in <module>
from .connectionpool import (
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/urllib3/connectionpool.py", line 11, in <module>
from .exceptions import (
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/urllib3/exceptions.py", line 2, in <module>
from .packages.six.moves.http_client import (
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/urllib3/packages/six.py", line 203, in load_module
mod = mod._resolve()
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/urllib3/packages/six.py", line 115, in _resolve
return _import_module(self.mod)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/urllib3/packages/six.py", line 82, in _import_module
__import__(name)
File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/http/client.py", line 69, in <module>
import email.parser
ImportError: No module named 'email.parser'; 'email' is not a package
</code></pre>
<p>I am not trying to use any files like email.parser, yet it shows an error. How can I fix this? I have tried to install requests again, for instance, with <code>sudo pip3.4 install requests</code> and <code>sudo pip3.4 install --upgrade requests</code>. Note: I am running macOS Sierra.</p>
|
<p>You have a local file called email.py which is shadowing the standard library module. Rename your file.</p>
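<p>A quick way to confirm which file Python is actually importing (sketch): if the printed path points into your project rather than the standard library, that file is the one shadowing the module.</p>

```python
import email
# should point into the standard library, e.g. .../lib/python3.x/email/__init__.py
print(email.__file__)
```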
|
python|python-3.x|python-requests|python-3.4
| 3 |
1,905,628 | 6,519,546 |
scoped_session(sessionmaker()) or plain sessionmaker() in sqlalchemy?
|
<p>I am using SQlAlchemy in my web project. What should I use - <code>scoped_session(sessionmaker())</code> or plain <code>sessionmaker()</code> - and why? Or should I use something else? </p>
<pre><code>## model.py
from sqlalchemy import *
from sqlalchemy.orm import *
engine = create_engine('mysql://dbUser:dbPassword@dbServer:dbPort/dbName',
pool_recycle=3600, echo=False)
metadata = MetaData(engine)
Session = scoped_session(sessionmaker())
Session.configure(bind=engine)
user = Table('user', metadata, autoload=True)
class User(object):
pass
usermapper = mapper(User, user)
## some other python file called abc.py
from models import *
def getalluser():
session = Session()
session.query(User).all()
session.flush()
session.close()
## onemore file defg.py
from models import *
def updateuser():
session = Session()
session.query(User).filter(User.user_id == '4').update({User.user_lname: 'villkoo'})
session.commit()
session.flush()
session.close()
</code></pre>
<p>I create a <code>session = Session()</code> object for each request and I close it. Am I doing the right thing or is there a better way to do it?</p>
|
<p>Reading the <a href="http://www.sqlalchemy.org/docs/06/orm/session.html?highlight=scoped_session#unitofwork-contextual" rel="noreferrer">documentation</a> is recommended:</p>
<blockquote>
<p>the <code>scoped_session()</code> function is provided which produces a thread-managed registry of <code>Session</code> objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread.</p>
</blockquote>
<p>In short, use <code>scoped_session()</code> for thread safety.</p>
|
python|django|orm|sqlalchemy|flask-sqlalchemy
| 52 |
1,905,629 | 56,893,643 |
Read multiple txt and create df for each with names from original files
|
<p>Folder has five or six .csv files. I want to read all of them in at once using pd.read_csv() but then save each df as a variable in jupyter specific to the filename without any path or file type.</p>
<p>For example, say these are the two files:</p>
<pre><code>'../main/data/csv_files/file_1.csv'
'../main/data/csv_files/file_2.csv'
</code></pre>
<p>I can do this to each:</p>
<pre><code>file_1 = pd.read_csv('../main/data/csv_files/file_1.csv')
file_2 = pd.read_csv('../main/data/csv_files/file_2.csv')
</code></pre>
<p>However, my question is how could I do this all at once with a loop or something for all files with keeping the naming convention of the filenames?</p>
<p>I can use glob or other means to get a list of all the filepaths for the csv file. I can then create a dictionary to put them all into but it uses their full filepaths as the name.</p>
<pre><code>path = r'../main/data/csv_files'
files = glob.glob(path + '/*.csv')
dfs = {}
for x in files:
dfs[x] = pd.read_csv(x)
</code></pre>
<p>This works but the naming of the full path isn't ideal.</p>
|
<p>If your file names are not coming in from an untrusted source like the network, use <code>exec</code> to run a python command.</p>
<pre><code>import ntpath

for x in files:
    # /a/b/c.csv => c.csv
    file_without_path = ntpath.basename(x)
    # c.csv => c
    file_without_extension = file_without_path[:-4]
    # execute "c = pd.read_csv('a/b/c.csv')"
    exec("{} = pd.read_csv('{}')".format(file_without_extension, x))
</code></pre>
<p>Don't do this if the file names cannot be trusted since any code in the filename will be executed.</p>
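<p>If <code>exec</code> feels too risky, a dictionary keyed by the bare file name (no path, no extension) gives almost the same ergonomics: <code>dfs["file_1"]</code> instead of <code>file_1</code>. A sketch of just the key derivation (the <code>pd.read_csv</code> call is left commented out so the snippet stands alone):</p>

```python
import os

def name_key(path):
    # '../main/data/csv_files/file_1.csv' -> 'file_1'
    return os.path.splitext(os.path.basename(path))[0]

# dfs = {name_key(p): pd.read_csv(p) for p in files}
print(name_key('../main/data/csv_files/file_1.csv'))  # file_1
```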
|
python-3.x|pandas|csv
| 0 |
1,905,630 | 56,877,358 |
Should GridSearchCV score results be equal to score of cross_validate using same input?
|
<p>I am playing around with scikit-learn a bit and wanted to reproduce the cross-validation scores for one specific hyper-parameter combination of a carried out grid search. </p>
<p>For the grid search, I used the <code>GridSearchCV</code> class and to reproduce the result for one specific hyper-parameter combination I used the <code>cross_validate</code> function with the exact same split and classifier settings. </p>
<p>My problem is that I do not get the expected score results, which to my understanding should be exactly the same as the same computations are carried out to obtain the scores in both methods.</p>
<p>I made sure to exclude any randomness sources from my script by fixing the used splits on the training data. </p>
<p>In the following code snippet, an example of the stated problem is given. </p>
<pre><code>import numpy as np
from sklearn.model_selection import cross_validate, StratifiedKFold, GridSearchCV
from sklearn.svm import NuSVC

np.random.seed(2018)

# generate random training features
X = np.random.random((100, 10))
# class labels
y = np.random.randint(2, size=100)

clf = NuSVC(nu=0.4, gamma='auto')

# Compute score for one parameter combination
grid = GridSearchCV(clf,
                    cv=StratifiedKFold(n_splits=10, random_state=2018),
                    param_grid={'nu': [0.4]},
                    scoring=['f1_macro'],
                    refit=False)
grid.fit(X, y)
print(grid.cv_results_['mean_test_f1_macro'][0])

# Recompute score for exact same input
result = cross_validate(clf,
                        X,
                        y,
                        cv=StratifiedKFold(n_splits=10, random_state=2018),
                        scoring=['f1_macro'])
print(result['test_f1_macro'].mean())
</code></pre>
<p>Executing the given snippet results in the output: </p>
<pre><code>0.38414468864468865
0.3848840048840049
</code></pre>
<p>I would have expected these scores to be exactly the same, as they are computed on the same split, using the same training data with the same classifier.</p>
|
<p>It is because the <code>mean_test_f1_macro</code> is not a simple average over the folds; it is a weighted average, with the weights being the sizes of the test folds. To learn more about the actual implementation, refer to <a href="https://stackoverflow.com/a/55720287/6347629">this</a> answer.</p>
<p>Now, to replicate the <code>GridSearchCV</code> result, try this!</p>
<pre class="lang-py prettyprint-override"><code>print('grid search cv result',grid.cv_results_['mean_test_f1_macro'][0])
# grid search cv result 0.38414468864468865
print('simple mean: ', result['test_f1_macro'].mean())
# simple mean: 0.3848840048840049
weights= [len(test) for (_, test) in StratifiedKFold(n_splits=10, random_state=2018).split(X,y)]
print('weighted mean: {}'.format(np.average(result['test_f1_macro'], axis=0, weights=weights)))
# weighted mean: 0.38414468864468865
</code></pre>
|
python|machine-learning|scikit-learn|cross-validation|grid-search
| 1 |
1,905,631 | 44,538,185 |
ubuntu14.04,M2Crypto==0.25.1,openssl (1.0.1f-1ubuntu2.22) run django get undefined symbol: SSLv2_method
|
<blockquote>
<p>"docker build dockerfile when run 'python manage.py makemigrations'
get 'undefined symbol: SSLv2_method'"</p>
</blockquote>
<ol>
<li><p>Traceback (most recent call last):
File "manage.py", line 10, in
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py",
line 353, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py",
line 327, in execute
django.setup()</p>
<pre><code> apps.populate(settings.INSTALLED_APPS)
File "/usr/local/lib/python2.7/dist-packages/django/apps/registry.py",
line 115, in populate
app_config.ready()
File "/usr/local/lib/python2.7/dist-packages/django/contrib/admin/apps.py",
line 22, in ready
self.module.autodiscover()
File "/usr/local/lib/python2.7/dist-packages/django/contrib/admin/__init__.py",
line 26, in autodiscover
autodiscover_modules('admin', register_to=site)
File "/usr/local/lib/python2.7/dist-packages/django/utils/module_loading.py",
line 50, in autodiscover_modules
import_module('%s.%s' % (app_config.name, module_to_search))
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/goldbox-backend/goldbox_p2p/admin.py", line 28, in <module>
from goldbox_common.crypto import ReportCrypto
File "/goldbox-backend/goldbox_common/crypto.py", line 5, in <module>
from M2Crypto import RSA,BIO,EVP
File "/usr/local/lib/python2.7/dist-packages/M2Crypto/__init__.py", line
26, in <module>
from M2Crypto import (ASN1, AuthCookie, BIO, BN, DH, DSA, EVP, Engine, Err,
File "/usr/local/lib/python2.7/dist-packages/M2Crypto/ASN1.py", line 15,
in <module>
from M2Crypto import BIO, m2, util
File "/usr/local/lib/python2.7/dist-packages/M2Crypto/BIO.py", line 10, in <module>
from M2Crypto import m2, util
File "/usr/local/lib/python2.7/dist-packages/M2Crypto/m2.py", line 30, in <module>
from M2Crypto._m2crypto import *
File "/usr/local/lib/python2.7/dist-packages/M2Crypto/_m2crypto.py", line
26, in <module>
__m2crypto = swig_import_helper()
File "/usr/local/lib/python2.7/dist-packages/M2Crypto/_m2crypto.py", line
22, in swig_import_helper
_mod = imp.load_module('__m2crypto', fp, pathname, description)
ImportError: /usr/local/lib/python2.7/dist-packages/M2Crypto/__m2crypto.so:
undefined symbol: SSLv2_method
</code></pre></li>
</ol>
<p>How can I fix it?
Thanks!</p>
|
<p>Update to <a href="https://pypi.python.org/pypi/M2Crypto/0.26.0" rel="nofollow noreferrer">the latest upstream version</a>. If there is still problem when you follow <a href="https://gitlab.com/m2crypto/m2crypto/blob/master/INSTALL.rst" rel="nofollow noreferrer">INSTALL</a>, please file a new issue report in the upstream tracker. Thank you.</p>
|
python|openssl|dockerfile|m2crypto
| 0 |
1,905,632 | 23,886,492 |
Tree structure with multiples classes association as a successor
|
<p>I'm trying to create a Decision Tree structure with multiple classes resulting of a node but I don't know what's the best way to do this with Django.</p>
<p>To make it clear, here's what I want to do (left son is the case when the condition is valid, right son is when the condition is invalid): </p>
<pre><code> (Condition A)
|
-------------------------------
| |
(Condition B) (Condition C)
| |
------------------ |------------
| | | |
(Cond D) <Category> + <Group> <Cat>+<Gr> (Cond D)
| |
.. ...
</code></pre>
<p>The idea is to associate a couple (<code><Category></code>,<code><Group></code>) or another <code><Node></code> as a son. The problem is, "How to represent multiple classes field in Django ?"</p>
<p>Here's my model : </p>
<pre><code>class GroupDecision(models.Model):
    name = models.CharField(max_length=100)
    # Other fields that may come later


class DecisionTree(models.Model):
    name = models.CharField(max_length=100)
    start_node = models.ForeignKey('Node')
    # Other fields that may come later


class Node(models.Model):
    name = models.CharField(max_length=100)
    predecessor = models.ForeignKey('Node', null=True, blank=True, default=None)
    successor = models.ForeignKey('SuccessorAssociation')
    operation = models.ForeignKey('Filter')


class SuccessorAssociation(models.Model):
    TARGET = (('C', 'Category'), ('G', 'Group'), ('N', 'Node'))
    condition = models.BooleanField()
    target_class = models.CharField(max_length=10, choices=TARGET)
    target_pk = models.IntegerField()
</code></pre>
<p>I managed to "hack" it with the <code>SuccessorAssociation</code>, which can target either a <code><Category></code>, <code><Group></code> or <code><Node></code>, but I don't like this implementation because it doesn't preserve the recursive-delete principle without overriding the <code>delete()</code> method. </p>
<p>On top of that, I'm overriding some mechanisms that are managed by Django itself.</p>
<p>A custom field would be one way to solve this problem, but I'm not really familiar with them, and I think that would be a disproportionate approach. </p>
<p>Can someone help me to implement this ? </p>
<p>Thank you</p>
|
<p>Use the <code>contenttypes</code> framework and generic foreign keys: <a href="https://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#generic-relations" rel="nofollow">https://docs.djangoproject.com/en/dev/ref/contrib/contenttypes/#generic-relations</a></p>
|
python|django|model|decision-tree
| 1 |
1,905,633 | 24,087,435 |
Choose a non-repeating random element from a list using Python
|
<p>I have this list: </p>
<pre><code>pics = [i for i in glob.glob("*.jpg")]
choice = random.choice(pics)
</code></pre>
<p>and the code below the list was used to select a random image from a list. My problem is that it isn't unique and lots of pictures repeat.. Is there any way to overcome that? </p>
|
<p>Use <a href="https://docs.python.org/3/library/random.html#random.sample" rel="nofollow"><code>random.sample</code></a> to choose random non-repeating elements:</p>
<pre><code>>>> import random
>>> random.sample(glob.glob('*.jpg'), number_of_images_to_choose)
</code></pre>
<p><code>random.sample</code> returns a <code>list</code> object.</p>
<p><em>Side note:</em> there's no need in list comprehension, unless you're planning to filter the result of <code>glob.glob</code>.</p>
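<p>A quick self-contained demonstration (the file names here are made up, since <code>glob</code> depends on the working directory):</p>

```python
import random

pics = ["img_%d.jpg" % i for i in range(10)]  # stand-in for glob.glob('*.jpg')
chosen = random.sample(pics, 4)
print(chosen)  # four distinct file names drawn from pics
```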
|
python|list|random
| 6 |
1,905,634 | 72,886,233 |
How to extract numbers and characters from a string Python pandas?
|
<p>I have a dataset that mixes numeric and character data. I would like to extract only the numerical data and the letter "W" (I don't need '2 x HDMI | 2 x USB', etc.).</p>
<p>For example, in this case: 20 W, 30 W, and so on.
Thank you for your help.</p>
<pre><code>v=['2 x HDMI | 2 x USB', '20 W Speaker Output', '10 W Speaker Output',
'20 W Speaker Output', '20 W Speaker Output',
'20 W Speaker Output', '20 W Speaker Output', '20 Speaker Output',
'20 W Speaker Output', '20 W Speaker Output',
'30 W Speaker Output', '20 W Speaker Output',
'20 W Speaker Output', '2 x HDMI | 2 x USB', '20 W Speaker Output',
'20 Speaker Output', '24 W Speaker Output', '20 W Speaker Output']
df=pd.DataFrame({"col_1":v})
</code></pre>
|
<p>You can use regular expressions and a little bit of list comprehension trickery to get what you desire:</p>
<pre><code>import re
import pandas as pd
v=['2 x HDMI | 2 x USB', '20 W Speaker Output', '10 W Speaker Output',
'20 W Speaker Output', '20 W Speaker Output',
'20 W Speaker Output', '20 W Speaker Output', '20 Speaker Output',
'20 W Speaker Output', '20 W Speaker Output',
'30 W Speaker Output', '20 W Speaker Output',
'20 W Speaker Output', '2 x HDMI | 2 x USB', '20 W Speaker Output',
'20 Speaker Output', '24 W Speaker Output', '20 W Speaker Output']
matches = [re.search(r'\d+\s?[Ww]', s) for s in v]
df = pd.DataFrame({"col_1": [m.group(0) for m in matches if m]})
</code></pre>
<p>... results in:</p>
<pre><code>>>> df
col_1
0 20 W
1 10 W
2 20 W
3 20 W
4 20 W
5 20 W
6 20 W
7 20 W
8 30 W
9 20 W
10 20 W
11 20 W
12 24 W
13 20 W
</code></pre>
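<p>If you would rather stay inside pandas, <code>Series.str.extract</code> with the same pattern does the filtering and extraction in one step (a sketch on a shortened version of the data):</p>

```python
import pandas as pd

df = pd.DataFrame({"col_1": ["2 x HDMI | 2 x USB",
                             "20 W Speaker Output",
                             "30 W Speaker Output"]})
# one capturing group + expand=False -> a Series; rows without a match become NaN
watts = df["col_1"].str.extract(r"(\d+\s?[Ww])", expand=False).dropna()
print(watts.tolist())  # ['20 W', '30 W']
```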
|
python|pandas|extract
| 0 |
1,905,635 | 73,026,770 |
How to make a gradient line between two points in numpy array?
|
<p>I hope to generate a gradient line between two points in a numpy array. For example, if I have a numpy array like</p>
<pre><code>[[1,0,0,0]
[0,0,0,0]
[0,0,0,0]
[0,0,0,4]]
</code></pre>
<p>What I hope to get is find a line between the <code>[0,0]</code> and <code>[3,3]</code>, and have a liner gradient. So I hope to make an array like</p>
<pre><code>[[1,0,0,0]
[0,2,0,0]
[0,0,3,0]
[0,0,0,4]]
</code></pre>
<p>The tricky part is the matrix may not be a perfect <code>nxn</code>. I don't care if some lines have two non-zero elements (because we cannot get a perfect diagonal for <code>mxn</code> matrix). The element of the same line can be the same in my case.</p>
<p>I am wondering is there an efficient way to make this happen?</p>
|
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.fill_diagonal.html#numpy-fill-diagonal" rel="nofollow noreferrer">np.fill_diagonal</a>:</p>
<pre><code>np.fill_diagonal(arr, np.arange(arr[0,0], arr[-1,-1]+1))
</code></pre>
<p>Output:</p>
<pre><code>array([[1, 0, 0, 0],
[0, 2, 0, 0],
[0, 0, 3, 0],
[0, 0, 0, 4]])
</code></pre>
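<p>For the non-square <code>m x n</code> case mentioned in the question, <code>fill_diagonal</code> stops at the shorter axis, so the line never reaches the opposite corner. A hedged sketch of one way to generalize (the function name and the rounding choice are mine): walk a discrete line from <code>[0, 0]</code> to <code>[m-1, n-1]</code> and fill linearly interpolated values along it:</p>

```python
import numpy as np

def gradient_line(m, n, start, stop):
    # place max(m, n) samples on a straight line between the two corners
    arr = np.zeros((m, n))
    steps = max(m, n)
    rows = np.linspace(0, m - 1, steps).round().astype(int)
    cols = np.linspace(0, n - 1, steps).round().astype(int)
    arr[rows, cols] = np.linspace(start, stop, steps)
    return arr

print(gradient_line(4, 4, 1, 4))  # reproduces the 4x4 example above
```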
|
python|arrays|numpy
| 0 |
1,905,636 | 73,366,049 |
Extract substring from dot untill colon with Python regex
|
<p>I have a string that resembles the following string:</p>
<pre><code>'My substring1. My substring2: My substring3: My substring4'
</code></pre>
<p>Ideally, my aim is to extract 'My substring2' from this string with Python regex. However, I would also be pleased with a result that resembles '. My substring2:'</p>
<p>So far, I am able to extract</p>
<pre><code>'. My substring2: My substring3:'
</code></pre>
<p>with</p>
<pre><code>"\.\s.*:"
</code></pre>
<p>Alternatively, I have been able to extract - by using Wiktor Stribiżew's solution that deals with a somewhat similar problem posted in <a href="https://stackoverflow.com/questions/66078435/how-can-i-extract-words-from-a-string-before-colon-and-excluding-n-from-them-in">How can i extract words from a string before colon and excluding \n from them in python using regex</a> -</p>
<pre><code>'My substring1. My substring2'
</code></pre>
<p>specifically with</p>
<pre><code>r'^[^:-][^:]*'
</code></pre>
<p>However, I have been unable, after many hours of searching and trying (I am quite new to regex), to combine the two results into a single effective regex expression that will extract 'My substring2' out of my aforementioned string.</p>
<p>I would be eternally grateful if someone could help me find the correct regex expression to extract 'My substring2'. Thanks!</p>
|
<p>With your shown samples, please try the following regex; the code is written and tested in Python 3. Here is the <a href="https://regex101.com/r/WKvOGV/1" rel="nofollow noreferrer">Online demo</a> for the used regex.</p>
<pre class="lang-py prettyprint-override"><code>import re
s = "My substring1. My substring2: My substring3: My substring4"
re.findall(r'^.*?\.\s([^:]+)(?:(?::\s[^:]*)+)$',s)
['My substring2']
</code></pre>
<p><em><strong>OR:</strong></em> use the following regex with only one capturing group, a small tweak to the regex above; here is the <a href="https://regex101.com/r/luQLh0/1" rel="nofollow noreferrer">Online demo</a> for the regex below.</p>
<pre><code>^.*?\.\s([^:]+)(?::\s[^:]*)+$
</code></pre>
<p><em><strong>Explanation:</strong></em> Using <code>re</code> module of Python3 here, where I am using <code>re.findall</code> function of it. Then creating variable named <code>s</code> which has value as: <code>'My substring1. My substring2: My substring3: My substring4'</code> and used regex is: <code>^.*?\.\s([^:]+)(?:(?::\s[^:]*)+)$</code></p>
<p><em><strong>Explanation of regex:</strong></em> Following is the detailed explanation for above regex.</p>
<pre><code>^.*?\.\s ##Matching from starting of value of variable using lazy match till literal dot followed by space.
([^:]+) ##Creating one and only capturing group which has everything just before : here.
(?: ##Starting a non-capturing group here.
(?: ##Starting 2nd non-capturing group here.
:\s[^:]* ##Matching colon followed by space just before next occurrence of colon here.
)+ ##Closing 2nd non-capturing group and matching its 1 or more occurrences in variable.
)$ ##Closing first non-capturing group here at end of value.
</code></pre>
|
python|regex|substring|punctuation
| 3 |
1,905,637 | 66,596,022 |
NameError: name 'path' is not defined
|
<p>I tried to solve the problem by writing the following code but no luck.</p>
<pre class="lang-py prettyprint-override"><code>with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(path, 'r') as fid:
        serialized_graph = fid.read()
</code></pre>
<p>Then I saw an error like this</p>
<pre><code>NameError: name 'path' is not defined
</code></pre>
<p>How can I fix it?</p>
|
<pre><code>import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
</code></pre>
<p>This is just a guess, because the information you give is not enough. Try it in case your TensorFlow install has a version issue.</p>
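<p>Separately, note what the traceback itself says: <code>NameError: name 'path' is not defined</code> simply means the variable <code>path</code> was never assigned before the <code>tf.gfile.GFile(path, ...)</code> call. Defining it first makes that particular error go away (the file name below is hypothetical):</p>

```python
# assign the name before it is used; this file name is only an example
path = "frozen_inference_graph.pb"

# for illustration: referencing a name that has no prior assignment is
# exactly what raises NameError
try:
    never_assigned
except NameError as err:
    print(err)  # name 'never_assigned' is not defined
```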
|
python
| 0 |
1,905,638 | 66,538,477 |
Modify a bit at a given position in a int from left to right:
|
<p>I have been trying to come up with a function where, given an int, it would modify a bit at a given position using bitwise operations:</p>
<p>For example:</p>
<p>modify_bit(int, pos)</p>
<p>modify_bit(0b10000, 1) should return 0b11000</p>
<p>Or modify_bit(0b10000, 6) should return 0b100001</p>
<p>I have done research but have not found any functions that modify a bit at a given position in a bitboard counting from <strong>left to right</strong>; instead, all the functions I have found that might be what I am looking for modify a bit counting positions from <strong>right to left</strong>.</p>
<p>Thanks in advance!</p>
|
<p>This is a very unusual thing to want to do. Are you sure this is the spec? You don't normally want to extend a sequence of bits like this. However, this does what you ask:</p>
<pre><code>def setbit(val, pos):
    bits = len(bin(val)) - 2
    if pos >= bits:
        val <<= (pos - bits)
        bits = pos + 1
    val |= 1 << (bits - pos - 1)
    return val


def clrbit(val, pos):
    bits = len(bin(val)) - 2
    if pos >= bits:
        val <<= (pos - bits)
        bits = pos + 1
    else:
        val &= ~(1 << (bits - pos - 1))
    return val


print(bin(setbit(0b10000, 1)))
print(bin(setbit(0b10000, 6)))
</code></pre>
|
python|bitwise-operators|bit|bitboard
| 1 |
1,905,639 | 64,762,336 |
How to order a list in this fashion: [1, 2, 1, 2, 1, 2]
|
<p>Suppose I have list_A and list_B:<br />
list_A = [1, 1, 1, 1, 1]<br />
list_B = [2, 2, 2, 2, 2]</p>
<p>How can I achieve a third list mixing them like this: [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]?</p>
<p>Language: Python 3</p>
<p>Thanks</p>
|
<p>Is this what you are looking for?</p>
<pre><code>list_A = [1, 1, 1, 1, 1]
list_B = [2, 2, 2, 2, 2]
c = list(zip(list_A,list_B))
print (c)
</code></pre>
<p>Output will be as follows:</p>
<pre><code>[(1, 2), (1, 2), (1, 2), (1, 2), (1, 2)]
</code></pre>
<p>Or are you looking for :</p>
<pre><code> [1, 2, 1, 2, 1, 2, 1, 2, 1, 2]
</code></pre>
|
python-3.x
| 1 |
1,905,640 | 63,934,603 |
python Numpy arrays in array using nested for loop
|
<p>I am trying to create a numpy array that contains sub-arrays. The output that I am looking for should be like this:</p>
<pre><code>[[0. 1.5 3. 4.5 6. 7.5 9. 10.5 12. 13.5 15. 16.5 18. 19.5
21. 22.5 24. 25.5 27. 28.5 30.] [0. 1.5 3. 4.5 6. 7.5 9. 10.5 12. 13.5 15. 16.5 18. 19.5 21]......]
</code></pre>
<p>but instead I am getting just a single array, as below:</p>
<pre><code>[0. 1.5 3. 4.5 6. 7.5 9. 10.5 12. 13.5 15. 16.5 18. 19.5
21. 22.5 24. 25.5 27. 28.5 30. 0. 1.5 3. 4.5 6. 7.5 9. 10.5 12. 13.5 15. 16.5 18. 19.5 21........]
</code></pre>
<p>The background of this is that I have an array of arrays called "b". That array looks like this:
[array([421.3, 448.6, 449.32171103, 444.28498751,
449.36065693, 448.75383007, 449.25048692, 448.75383007,
448.59001326, 448.64239657, 448.64239657, 448.00558032,
448.00558032, 447.93972809, 448.44620636, 447.93972809,
447.93972809, 447.87609894, 447.64383163, 447.6985593 ,
447.21918563]), array([447.75551365, 447.75551365, 448.36146132, 447.75551365,
447.75551365, 447.75551365, 447.75551365, 448.36146132,
447.6985593 , 448.36146132, 447.6985593 , 447.59133146,
447.6985593 , 447.54105957, 447.64383163, 447.54105957,
446.87805943, 446.87805943, 446.75720475, 446.70012313,
446.70012313, 446.70012313, 446.64527312, 446.64527312,
446.14907822, 445.88002871, 445.70169396, 445.29989894]).........]
I need to plot each array, and I want to create another similar array of arrays with matching lengths but different content, using the code below.
Here is my code; can you please suggest how to fix it?</p>
<pre><code>tt = np.array([])
for i in range(len(array_size)):
    time_calc_1 = 0
    for j in range(len(b[i])):
        tt = np.append(tt, time_calc_1)
        time_calc_1 = time_calc_1 + 1.5
</code></pre>
|
<p>Do not loop and append to an array; it is a bad idea. Instead, use vectorized functions to achieve your goal:</p>
<pre><code>tt = np.repeat(np.arange(0,31,1.5)[None,:],25,0)
</code></pre>
<p>output:</p>
<pre><code>[[ 0. 1.5 3. 4.5 6. 7.5 9. 10.5 12. 13.5 15. 16.5 18. 19.5
21. 22.5 24. 25.5 27. 28.5 30. ]
[ 0. 1.5 3. 4.5 6. 7.5 9. 10.5 12. 13.5 15. 16.5 18. 19.5
21. 22.5 24. 25.5 27. 28.5 30. ]
[ 0. 1.5 3. 4.5 6. 7.5 9. 10.5 12. 13.5 15. 16.5 18. 19.5
21. 22.5 24. 25.5 27. 28.5 30. ]
...
</code></pre>
<p><strong>UPDATE</strong>: In the case of variable-length arrays (I suggest using a list of lists, but if an array of arrays is required):</p>
<p><em>list of lists:</em></p>
<pre><code>b = [10,20,30]
tt = [np.arange(0,i,1.5).tolist() for i in b]
#[[0.0, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0], [0.0, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 10.5, 12.0, 13.5, 15.0, 16.5, 18.0, 19.5], [0.0, 1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 10.5, 12.0, 13.5, 15.0, 16.5, 18.0, 19.5, 21.0, 22.5, 24.0, 25.5, 27.0, 28.5]]
</code></pre>
<p><em>array of arrays:</em></p>
<pre><code>b = [10,20,30]
tt = np.array([np.arange(0,i,1.5) for i in b])
#[array([0. , 1.5, 3. , 4.5, 6. , 7.5, 9. ])
# array([ 0. , 1.5, 3. , 4.5, 6. , 7.5, 9. , 10.5, 12. , 13.5, 15. , 16.5, 18. , 19.5])
# array([ 0. , 1.5, 3. , 4.5, 6. , 7.5, 9. , 10.5, 12. , 13.5, 15. , 16.5, 18. , 19.5, 21. , 22.5, 24. , 25.5, 27. , 28.5])]
</code></pre>
|
python|arrays|python-3.x|numpy|for-loop
| 1 |
1,905,641 | 71,819,798 |
Selenium finds element by class, but returns empty string. How to fix?
|
<p>The code tries to show the weather forecast for a city. It is able to find the class with the content, but it prints out an empty string. Why is that and how could I change my code to not get an empty string?</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
s=Service("C:\Program Files (x86)\chromedriver.exe")
browser = webdriver.Chrome(service=s)
city = str(input("Enter a city"))
url="https://www.weather-forecast.com/locations/"+city+"/forecasts/latest"
browser.get(url)
browser.maximize_window()
content = browser.find_element(By.CLASS_NAME, "b-forecast__table-description-content")
print(content.text)
</code></pre>
|
<p>You were close enough. The contents are actually within the descendant <code><span></code> of the ancestor <code><p></code> tags.</p>
<hr />
<p>To print all the desired texts you have to induce <a href="https://stackoverflow.com/questions/49775502/webdriverwait-not-working-as-expected/49775808#49775808">WebDriverWait</a> for the <a href="https://stackoverflow.com/a/64770041/7429447"><em>visibility_of_all_elements_located()</em></a> and you can use either of the following <a href="https://stackoverflow.com/questions/48369043/official-locator-strategies-for-the-webdriver/48376890#48376890">Locator Strategies</a>:</p>
<ul>
<li><p>Using <em>CSS_SELECTOR</em>:</p>
<pre><code>city = "Dallas"
driver.get("https://www.weather-forecast.com/locations/"+city+"/forecasts/latest")
print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "p.b-forecast__table-description-content > span.phrase")))])
</code></pre>
</li>
<li><p>Using <em>XPATH</em>:</p>
<pre><code>city = "Dallas"
driver.get("https://www.weather-forecast.com/locations/"+city+"/forecasts/latest")
print([my_elem.get_attribute("innerText") for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//p[@class='b-forecast__table-description-content']/span[@class='phrase']")))])
</code></pre>
</li>
<li><p>Console Output:</p>
<pre><code>['Heavy rain (total 0.8in), heaviest during Tue night. Warm (max 86°F on Mon afternoon, min 66°F on Tue night). Winds decreasing (fresh winds from the S on Sun night, light winds from the S by Mon night).', 'Light rain (total 0.3in), mostly falling on Fri morning. Warm (max 77°F on Wed afternoon, min 57°F on Wed night). Wind will be generally light.', 'Light rain (total 0.1in), mostly falling on Sat night. Warm (max 84°F on Sat afternoon, min 43°F on Sun night). Winds decreasing (fresh winds from the N on Sun afternoon, calm by Mon night).', 'Mostly dry. Warm (max 70°F on Wed afternoon, min 48°F on Tue night). Wind will be generally light.']
</code></pre>
</li>
<li><p><strong>Note</strong> : You have to add the following imports :</p>
<pre><code>from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
</code></pre>
</li>
</ul>
|
python|selenium|xpath|css-selectors|webdriverwait
| 0 |
1,905,642 | 71,812,508 |
SQLAlchemy Select from Join of two Subqueries
|
<p>Need help translating this SQL query into SQLAlchemy:</p>
<pre><code>select
COALESCE(DATE_1,DATE_2) as DATE_COMPLETE,
QUESTIONS_CNT,
ANSWERS_CNT
from (
(select DATE as DATE_1,
count(distinct QUESTIONS) as QUESTIONS_CNT
from GUEST_USERS
where LOCATION like '%TEXAS%'
and DATE = '2021-08-08'
group by DATE
) temp1
full join
(select DATE as DATE_2,
count(distinct ANSWERS) as ANSWERS_CNT
from USERS
where LOCATION like '%TEXAS%'
and DATE = '2021-08-08'
group by DATE
) temp2
on temp1.DATE_1=temp2.DATE_2
)
</code></pre>
<p>Mainly struggling with the join of the two subqueries. I've tried this (just for the join part of the SQL):</p>
<pre><code>query1 = db.session.query(
    GUEST_USERS.DATE_WEEK_START.label("DATE_1"),
    func.count(GUEST_USERS.QUESTIONS).label("QUESTIONS_CNT")
).filter(
    GUEST_USERS.LOCATION.like("%TEXAS%"),
    GUEST_USERS.DATE == "2021-08-08"
).group_by(GUEST_USERS.DATE)

query2 = db_session_stg.query(
    USERS.DATE.label("DATE_2"),
    func.count(USERS.ANSWERS).label("ANSWERS_CNT")
).filter(
    USERS.LOCATION.like("%TEXAS%"),
    USERS.DATE == "2021-08-08"
).group_by(USERS.DATE)

sq2 = query2.subquery()

query1_results = query1.join(
    sq2,
    sq2.c.DATE_2 == GUEST_USERS.DATE
).all()
</code></pre>
<p>In this output I receive only the DATE_1 column and the QUESTIONS_CNT columns. Any idea why the selected output from the subquery is not being returned in the result?</p>
|
<p>Not sure if this is the best solution but this is how I got it to work. Using 3 subqueries essentially.</p>
<pre><code>query1 = db.session.query(
    GUEST_USERS.DATE_WEEK_START.label("DATE_1"),
    func.count(GUEST_USERS.QUESTIONS).label("QUESTIONS_CNT")
).filter(
    GUEST_USERS.LOCATION.like("%TEXAS%"),
    GUEST_USERS.DATE == "2021-08-08"
).group_by(GUEST_USERS.DATE)

query2 = db_session_stg.query(
    USERS.DATE.label("DATE_2"),
    func.count(USERS.ANSWERS).label("ANSWERS_CNT")
).filter(
    USERS.LOCATION.like("%TEXAS%"),
    USERS.DATE == "2021-08-08"
).group_by(USERS.DATE)

sq1 = query1.subquery()
sq2 = query2.subquery()

query3 = db.session.query(sq1, sq2).join(
    sq2,
    sq2.c.DATE_2 == sq1.c.DATE_1
)
sq3 = query3.subquery()

query4 = db.session.query(
    func.coalesce(sq3.c.DATE_1, sq3.c.DATE_2),
    sq3.c.QUESTIONS_CNT,
    sq3.c.ANSWERS_CNT
)
results = query4.all()
</code></pre>
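<p>One caveat: <code>Query.join(...)</code> in <code>query3</code> emits an <em>inner</em> join, so this only matches the original SQL's <code>full join</code> when both subqueries return the same dates. SQLAlchemy can emit a real full outer join via <code>full=True</code>; a Core-level sketch (the table and column names are illustrative only):</p>

```python
from sqlalchemy import table, column, select

t1 = table("temp1", column("DATE_1"))
t2 = table("temp2", column("DATE_2"))

stmt = select(t1.c.DATE_1, t2.c.DATE_2).select_from(
    t1.join(t2, t1.c.DATE_1 == t2.c.DATE_2, full=True)
)
print(stmt)  # ... FROM temp1 FULL OUTER JOIN temp2 ON ...
```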
|
python|sql|sqlalchemy
| 1 |
1,905,643 | 71,818,662 |
Pandas read_csv Multiple spaces delimiter
|
<p>I have a file with 7 aligned columns, with empty cells.Example:</p>
<pre><code>SN 1995ap 0.230 40.44 0.46 0.00 silver
SN 1995ao 0.300 40.76 0.60 0.00 silver
SN 1995ae 0.067 37.54 0.34 0.00 silver
SN 1995az 0.450 42.13 0.21 gold
SN 1995ay 0.480 42.37 0.20 gold
SN 1995ax 0.615 42.85 0.23 gold
</code></pre>
<p>I want to read it using <code>pandas.read_csv()</code>, but I am having some trouble. The separator can be either 1 or 2 spaces. If I use <code>sep='\s+'</code> it works, but it ignores empty cells, so I get cells shifted to the left and empty cells in the last columns. I tried to use the regex separator <code>sep='\s{1,2}'</code>, but I get the following error:</p>
<pre><code>pandas.errors.ParserError: Expected 7 fields in line 63, saw 9. Error could possibly be due to quotes being ignored when a multi-char delimiter is used.
</code></pre>
<p>My code:</p>
<pre><code>import pandas as pd
riess_2004b=pd.read_csv('Riess_2004b.txt', skiprows=22, header=None, sep='\s{1,2}', engine='python')
</code></pre>
<p>What am I not getting right?</p>
|
<p>A fixed-width file reader (<code>read_fwf</code>) seems like a better fit for your case:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_fwf("Riess_2004b.txt", colspecs="infer", header=None)
</code></pre>
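<p>A quick check on an inline copy of the sample data (using <code>io.StringIO</code> in place of the file) shows the empty cell coming through as <code>NaN</code> instead of shifting the columns:</p>

```python
import io
import pandas as pd

data = (
    "SN 1995ap 0.230 40.44 0.46 0.00 silver\n"
    "SN 1995az 0.450 42.13 0.21      gold\n"
)
df = pd.read_fwf(io.StringIO(data), colspecs="infer", header=None)
print(df.shape)  # (2, 7) -- seven columns, with NaN where the cell was empty
```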
|
python|pandas|dataframe
| 2 |
1,905,644 | 71,926,512 |
Call a method n times by appending it to itself in a python object
|
<p>Say i have a python class <code>Sum()</code> with a method <code>add()</code> that can take a list of numbers for manipulation, say</p>
<pre><code>sum = Sum()
sum.add([5, 8, 2])
</code></pre>
<p>Instead, I want to call the <code>.add</code> method once per list item, chaining the calls onto each other. How can I achieve this?</p>
<p><code> sum.add(5).add(8).add(2)</code></p>
<p>For clarity, i have seen the two implementations in <a href="https://www.tensorflow.org/tutorials/keras/text_classification_with_hub#:%7E:text=the%20full%20model%3A-,model,-%3D%20tf.keras.Sequential" rel="nofollow noreferrer">keras</a></p>
<pre><code>model = tf.keras.Sequential([
hub_layer,
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(1)
])
</code></pre>
<p><em>Which can also be represented as</em></p>
<pre><code>model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
</code></pre>
<p>I want to achieve the second form for the above scenario, whereby I call the <code>.add</code> method <code>n</code> times, once for each item I have in a list.</p>
|
<p>In your <code>add</code> function, simply return the object itself:</p>
<pre class="lang-py prettyprint-override"><code>def add(self, number: int):
    # do your stuff
    return self
</code></pre>
<p>This works because each subsequent <code>.add</code> is executed on the return value of the previous <code>.add</code> (i.e. the object itself).</p>
<p>hope it helps :)</p>
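<p>Applied to the <code>Sum</code> example from the question, the whole class might look like this (the running-total attribute name is my own choice):</p>

```python
class Sum:
    def __init__(self):
        self.total = 0

    def add(self, number: int):
        self.total += number
        return self  # returning self is what enables the chaining

total = Sum().add(5).add(8).add(2).total
print(total)  # 15
```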
|
python|tensorflow|keras
| 1 |
1,905,645 | 71,459,969 |
Use matplotlib to graph a csv column and occurrences
|
<p>I've looked around for sources on how to use matplotlib to create graphs (line, bar, and pie) from a csv file, but they're not exactly what I'm looking for. I am wondering how, from this example file I created below, would I graph just the Favorite Color column along with the number of occurrences of each color?</p>
<pre><code>ID Name Favorite Color
1 Mary Blue
2 Bob Green
3 Simon Red
4 Lily Red
5 Gerald Blue
6 Kathy Blue
</code></pre>
<p>Hope that makes sense! Thank you in advance!</p>
|
<p>The easiest way to plot this data is probably to use Pandas to load the file into a dataframe, then get the occurrences of each color using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a> and plot the results.</p>
<pre><code>import pandas as pd
df = pd.read_csv('/path/to/file.csv')
df['Favorite Color'].value_counts().plot.bar()
</code></pre>
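<p>On the sample data, <code>value_counts</code> produces the per-color totals that the bar chart is drawn from (recreating the table inline here instead of reading the file):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Mary", "Bob", "Simon", "Lily", "Gerald", "Kathy"],
    "Favorite Color": ["Blue", "Green", "Red", "Red", "Blue", "Blue"],
})
counts = df["Favorite Color"].value_counts()
print(counts.to_dict())  # {'Blue': 3, 'Red': 2, 'Green': 1}
# counts.plot.bar() would then render the chart
```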
|
python|csv|matplotlib|plot|graph
| 1 |
1,905,646 | 10,676,050 |
bash: syntax error near unexpected token `(' - Python
|
<pre><code># from lxml import etree;
import module2dbk;
print module2dbk.xsl_transform(etree.parse('test-ccap/col10614/index.cnxml'), []);
Error: bash: syntax error near unexpected token `('
</code></pre>
|
<p>add <code>#!/usr/bin/env python</code> at the top of your script, or call your script using <code>python myscript.py</code></p>
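<p>A self-contained way to see the fix in action (the script path and contents are made up; <code>python3</code> is used in the shebang since many systems no longer ship a bare <code>python</code>):</p>

```python
import os
import stat
import subprocess
import tempfile

# write a tiny script whose FIRST line names its interpreter
script = os.path.join(tempfile.gettempdir(), "myscript.py")
with open(script, "w") as f:
    f.write("#!/usr/bin/env python3\nprint('hello')\n")
os.chmod(script, os.stat(script).st_mode | stat.S_IEXEC)

# executed directly, the kernel hands the file to python3, not to bash
out = subprocess.run([script], capture_output=True, text=True).stdout.strip()
print(out)  # hello
```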
|
python|syntax
| 70 |
1,905,647 | 62,596,882 |
Strange visual behavior when using a Tkinter OptionMenu() widget with many choices on Macintosh (10.13.6)
|
<p>I'm working with a GUI where I give the user a choice of many different colormaps.</p>
<p><strong>The issue is that when the dropdown list of the <code>OptionMenu()</code> gets near the bottom of the screen, the whole box shifts down to an odd place.</strong></p>
<p>I'm not sure if this is a bug or if there is something I am doing wrong. Example code provided below, along with an image of what happens before and after the list box gets shifted down (Left vs right have 7 other widgets above vs. 8).</p>
<p>Note that if you're trying to reproduce the issue, your resolution may require a longer list/lower dropdown.</p>
<pre><code>from tkinter import *

class GUI(Tk):
    def __init__(self):
        Tk.__init__(self)
        self.initGUI()

    def initGUI(self):
        self.cmapchoice = StringVar()
        self.cmapchoice.set('jet')
        self.cmaps = sorted(['viridis', 'plasma', 'inferno', 'magma', 'binary',
            'bone', 'spring', 'summer', 'autumn', 'winter', 'cool', 'hot', 'copper', 'Spectral',
            'coolwarm', 'bwr', 'seismic', 'twilight', 'hsv', 'Paired', 'Accent', 'prism', 'ocean',
            'terrain', 'brg', 'rainbow', 'jet'], key=lambda s: s.lower())

        for i in range(8):  # Change this to 7 to "fix" the issue
            Label(self, text='OTHER WIDGETS').grid(row=i, column=1, sticky='WE')

        OptionMenu(self, self.cmapchoice, *self.cmaps).grid(row=9, column=1, sticky='WE')

if __name__ == "__main__":
    MainWindow = GUI()
    MainWindow.mainloop()
</code></pre>
<p><a href="https://i.stack.imgur.com/APxEF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/APxEF.png" alt="enter image description here" /></a></p>
|
<p>There can be a workaround to this problem, though this is not an exact fix to the problem but solves the issue. If opening the menu above the <a href="http://effbot.org/tkinterbook/menubutton.htm" rel="nofollow noreferrer"><code>Menubutton</code></a> <em>(OptionMenu widget without any dropdown menu)</em> then it will never shift to the bottom of the screen.</p>
<p>The direction of the dropdown menu can be set with <a href="http://effbot.org/tkinterbook/menubutton.htm#Tkinter.Menubutton.config-method" rel="nofollow noreferrer"><code>direction</code></a> argument of <code>Menubutton</code>. Like so...</p>
<pre class="lang-py prettyprint-override"><code>op = OptionMenu(...)
op['direction'] = 'above'
</code></pre>
<p><em>Complete example</em></p>
<pre class="lang-py prettyprint-override"><code>from tkinter import *
class GUI(Tk):
def __init__(self):
Tk.__init__(self)
self.initGUI()
def initGUI(self):
self.cmapchoice = StringVar()
self.cmapchoice.set('jet')
self.cmaps = sorted(['viridis', 'plasma', 'inferno', 'magma','binary',
'bone','spring', 'summer', 'autumn', 'winter', 'cool','hot','copper','Spectral',
'coolwarm', 'bwr', 'seismic','twilight', 'hsv', 'Paired', 'Accent', 'prism', 'ocean',
'terrain','brg', 'rainbow', 'jet'],key=lambda s: s.lower())
for i in range(8): # Change this to 7 to "fix" the issue
Label(self,text='OTHER WIDGETS').grid(row=i, column=1, sticky='WE')
op = OptionMenu(self, self.cmapchoice, *self.cmaps)
op.grid(row=9, column=1, sticky='WE')
op['direction'] = 'above'
if __name__ == "__main__":
MainWindow = GUI()
MainWindow.mainloop()
</code></pre>
|
python|tkinter
| 0 |
1,905,648 | 67,354,252 |
Querying an array in a JSON column on MySQL with SQLAlchemy
|
<p>I have the following code:</p>
<pre><code>class CabinetList(Resource):
    def get(self):
        devices = Device.query.filter(Device.type == 'CABINET').all()
        return {'cabinets': list(x.json() for x in devices)}
</code></pre>
<p>generating this JSON which is stored in a JSON MySql column:</p>
<pre><code> {
"cabinets":[
{
"id":4,
"name":"Armario 1",
"online":true,
"setup":0.0,
"type":"CABINET",
"data":{
"lockers":[
{
"id":1,
"content":"Pala",
"enabled":true,
"busy":false
},
{
"id":2,
"content":"Azada",
"enabled":true,
"busy":false
}
]
}
}
]
}
</code></pre>
<p>With this code I can change the "busy" property of the chosen index</p>
<pre><code>def get(self, device_id, locker_id):
    if not 1 <= locker_id <= 32:
        return {'error': 'device not found'}, 404

    device = Device.query.filter(and_(Device.id == device_id, Device.type == 'CABINET')).first()
    if not device:
        return {'error': 'device not found'}, 404

    # Update the current status for the locker
    device.data['lockers'][0]['busy'] = True

    return {'cabinet': device.json()}
</code></pre>
<p>It works, but I don't want to refence the item by its index I want to change the property for the item matching its 'id'</p>
|
<p>I assume you are making reference to this line, where you would like to not use <code>0</code> as index to access the correct locker:</p>
<pre><code>device.data['lockers'][0]['busy'] = True
</code></pre>
<p>I think the most simple solution would be to filter the locker by provided <code>locker_id</code>:</p>
<pre><code>for locker in device.data['lockers']:
    if locker['id'] == locker_id:
        locker['busy'] = True
        break
else:
    raise Exception("Locker not found")
</code></pre>
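<p>If you prefer an expression over a loop, <code>next()</code> with a generator does the same lookup; a sketch with made-up locker data:</p>

```python
lockers = [
    {"id": 1, "content": "Pala", "busy": False},
    {"id": 2, "content": "Azada", "busy": False},
]
locker_id = 2

# next() returns the first match, or the default (None) if nothing matches
locker = next((l for l in lockers if l["id"] == locker_id), None)
if locker is None:
    raise LookupError("Locker not found")
locker["busy"] = True
print(lockers[1]["busy"])  # True
```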
|
python|mysql|json|sqlalchemy|flask-sqlalchemy
| 0 |
1,905,649 | 67,458,899 |
How to write a program that would keep taking inputs from the user until the sum of all the inputs reach 200
|
<p>The numbers permissible for the user input are: 10, 20 and 50. If any other number is entered, then the program should declare it as invalid.</p>
<p>I've tried the following but it doesn't seem to work out:</p>
<pre><code>count = 0
total = 0

print("Enter the values of amounts collected")

while True:
    new_number = input('> ')
    count = count + 1
    total = total + int(new_number)
    if total == 200:
        print("You have successfully collected 200")
        break
    if total > 200:
        print("Amount collected exceeds 200")
        break
</code></pre>
<p>Sample input:</p>
<pre><code>> 10
> 50
> 50
> 50
> 10
> 20
> 10
</code></pre>
<p>Sample output:</p>
<pre><code>You have successfully collected 200
</code></pre>
<p>Sample input:</p>
<pre><code>> 190
...
</code></pre>
<p>Sample output:</p>
<pre><code>Invalid input
</code></pre>
<p>Sample input:</p>
<pre><code>> 50
> 50
> 50
> 20
> 50
</code></pre>
<p>Sample output:</p>
<pre><code>Amount collected exceeds 200
</code></pre>
|
<p>You just need a nested <code>if</code> condition:</p>
<pre><code>total = 0
print("Enter the values of amounts collected")

while total < 200:                  # Loop until total reaches 200
    new_number = int(input('> '))
    if new_number in [10, 20, 50]:  # First check input number is in 10, 20, 50
        total = total + new_number  # Then add it to the total
        if total == 200:            # If total == 200, break
            print("You have successfully collected 200")
            break
        elif total > 200:           # If total > 200, break
            print("Amount collected exceeds 200")
            break
    else:                           # If number not in 10, 20, 50, print invalid input
        print("Invalid input")
</code></pre>
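<p>A variant of the same logic with the input source injected as a parameter, so the flow can be exercised without a terminal (the function name is made up; note it returns on the first invalid value rather than re-prompting):</p>

```python
def collect(amounts, target=200, allowed=(10, 20, 50)):
    """Sum amounts until the target is reached, mirroring the loop above."""
    total = 0
    for amount in amounts:
        if amount not in allowed:
            return "Invalid input"
        total += amount
        if total == target:
            return "You have successfully collected 200"
        if total > target:
            return "Amount collected exceeds 200"
    return None  # ran out of input before reaching the target

print(collect([10, 50, 50, 50, 10, 20, 10]))  # You have successfully collected 200
print(collect([190]))                         # Invalid input
print(collect([50, 50, 50, 20, 50]))          # Amount collected exceeds 200
```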
|
python|python-3.x|list
| 0 |
1,905,650 | 53,325,625 |
C++ Qt: QProcess run Python script path specifying Python version
|
<p>Qt Creator 4.7.1
Based on Qt 5.11.2 (Clang 8.0 (Apple), 64 bit)</p>
<p>I'm running this in Qt.</p>
<pre><code>QProcess p;
QStringList params;
params << "/Users/johan/Documents/testQt/hello.py";
p.start("python", params);
p.waitForFinished(-1);
qDebug() << "finished";
QString p_stdout = p.readAll();
qDebug() << p_stdout;
QString p_stderr = p.readAllStandardError();
if(!p_stderr.isEmpty())
qDebug()<<"Python error:"<<p_stderr;
</code></pre>
<p>I at first had the same error as this: <a href="https://stackoverflow.com/questions/48242102/qt-calling-python-using-qprocess">Qt calling python using QProcess</a></p>
<pre><code>Python error: "ImportError: No module named site\r\n"
</code></pre>
<p>And I added:</p>
<pre><code>QProcessEnvironment env = QProcessEnvironment::systemEnvironment();
env.insert("PYTHONPATH", "/Users/johan/anaconda3/lib/python3.7");
env.insert("PYTHONHOME", "/Users/johan/anaconda3/bin/python");
p.setProcessEnvironment(env);
</code></pre>
<p>I can directly run the python script from terminal with <code>python hello.py</code>. <code>/Users/johan/anaconda3/bin/python</code> is the output of <code>which python</code>. I suppose I have the correct path for PYTHONHOME, but I'm still getting an error.</p>
<pre><code>Python error: " File \"/Users/johan/anaconda3/lib/python3.7/site.py\", line 177\n file=sys.stderr)\n ^\nSyntaxError: invalid syntax\n"
</code></pre>
<p>now this is the same error as this: <a href="https://stackoverflow.com/questions/20555517/using-multiple-versions-of-python">Using multiple versions of Python</a></p>
<p>But adding what's suggested <code>#!python3</code> in the script didn't help. I've also tried <code>#!/Users/johan/anaconda3/bin/python</code>.</p>
<p>After searching for hours, now I really don't know how to solve this. How do I specify to run with Python 3? Any help is appreciated. </p>
<p>I guess it's probably still a path problem. Please kindly educate me what I don't understand about PATH in general. I do know PATH is where shell looks for the executable. But why are we inserting PYTHONPATH and PYTHONHOME here instead of just adding it to PATH? What are PYTHONPATH and PYTHONHOME for? (I've read <a href="https://docs.python.org/3/using/cmdline.html#envvar-PYTHONHOME" rel="nofollow noreferrer">PYTHONHOME documentation</a> but I don't understand.) </p>
<p>EDIT (hello.py for testing package imports):</p>
<pre><code>import time
import sys
import os
import tensorflow as tf
import numpy as np
import inspect
import cv2

def main():
    time.sleep(1)
    print(os.path)
    print(sys.version_info[0])
    print("hello")

if __name__ == '__main__':
    main()
</code></pre>
|
<p>In <code>PYTHONPATH</code> there must be the paths of the modules (so the minimum is <code>site-packages</code>), so the solution is:</p>
<pre><code>env.insert("PYTHONPATH", "/Users/johan/anaconda3/lib/python3.7/site-packages")
</code></pre>
<p>You must also place the path of the python binary that is used:</p>
<pre><code>p.start("/Users/johan/anaconda3/bin/python", params);
</code></pre>
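<p>The effect of <code>PYTHONPATH</code> is easy to verify from Python itself: entries in it are prepended to the interpreter's module search path, which is what <code>import</code> walks. A sketch (the directory name is made up; <code>sys.executable</code> stands in for the anaconda binary):</p>

```python
import os
import subprocess
import sys

env = dict(os.environ, PYTHONPATH="/tmp/fake-site-packages")  # hypothetical path
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.path)"],
    env=env, capture_output=True, text=True,
).stdout
print("fake-site-packages" in out)  # True
```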
|
python|c++|qt|path
| 0 |
1,905,651 | 70,090,374 |
How to clear output (not terminal) but in a gui
|
<p>Very new to python, I was scripting a lbs to kg converter, and want to know how to clear the output after receiving the answer, basically whenever I press the convert button it keeps giving output, and it will keep expanding the window.</p>
<pre><code>def to_kg():
    kg=float(entry1.get())/2.0462
    labelanswer=tk.Label(root, text=(kg), bg='teal')
    labelanswer.place(x=45,y=45)
    labelanswer.pack()

import tkinter as tk
from tkinter.constants import BOTTOM
from typing import Text

root=tk.Tk()
root.resizable(False, False)
root.title('Pounds to Kilograms')

canvas=tk.Canvas(root, height=150, width=200, bg='teal')

frame=tk.Frame(root, bg='white')
frame.place(relwidth=0.8, relheight=0.8, relx=0.1, rely=0.1)

entry1=tk.Entry(root)
canvas.create_window(200, 200, window=entry1)
entry1.place(x=40, y=50)

def to_kg():
    kg=float(entry1.get())/2.205
    labelanswer=tk.Label(root, text=(kg), bg='teal')
    labelanswer.place(x=45,y=45)
    labelanswer.pack()

labeltext=tk.Label(root, text='lbs to kg', bg='white')

button1=tk.Button(root, text="convert", command=to_kg)
button1.pack(side=BOTTOM)

canvas.pack()
frame.pack()

root.mainloop()
</code></pre>
<p><a href="https://i.stack.imgur.com/GCuRe.png" rel="nofollow noreferrer">Example1</a>
<a href="https://i.stack.imgur.com/zxJcQ.png" rel="nofollow noreferrer">Example2</a></p>
|
<p>You are creating a new label on each button click, which adds another label each time, as shown in your screenshot. Instead of creating a new label, you should modify the existing answer label, so that there is only one <code>labelanswer</code>.</p>
<p>Here is the working solution.</p>
<pre><code>import tkinter as tk
from tkinter.constants import BOTTOM
from typing import Text

root=tk.Tk()
root.resizable(False, False)
root.title('Pounds to Kilograms')

canvas=tk.Canvas(root, height=150, width=200, bg='teal')

frame=tk.Frame(root, bg='white')
frame.place(relwidth=0.8, relheight=0.8, relx=0.1, rely=0.1)

entry1=tk.Entry(root)
canvas.create_window(200, 200, window=entry1)
entry1.place(x=40, y=50)

labelanswer=tk.Label(root, bg='teal')
labelanswer.place(x=45,y=45)

def to_kg():
    kg=float(entry1.get())/2.205
    labelanswer['text'] = kg

labeltext=tk.Label(root, text='lbs to kg', bg='white')

button1=tk.Button(root, text="convert", command=to_kg)
button1.pack(side=BOTTOM)

canvas.pack()
labelanswer.pack()
frame.pack()

root.mainloop()
</code></pre>
<p>line <code>labelanswer['text'] = kg</code> is a way to change your answer label.</p>
|
python|tkinter
| 0 |
1,905,652 | 55,675,188 |
My objects are not getting filtered despite setting the get_queryset function in views.py
|
<p>I want to make an api to get the detail view of a blog from a list of published blog posts. To solve that, I am using get_queryset() filters, but it simply gives back the whole list, i.e. no filter is applied.</p>
<p>I have used the code as shown below:</p>
<p>models.py</p>
<pre><code>class BlogModel(models.Model):
    heading = models.CharField(max_length=254)
    blog = models.TextField()
    author = models.CharField(max_length=254)
</code></pre>
<p>views.py</p>
<pre><code>class BlogRetrieveView(generics.RetrieveUpdateDestroyAPIView):
    serializer_class = BlogListSerializer
    queryset = BlogModel.objects.all()
    lookup_field = 'blog_id'

    def get_queryset(self, *args, **kwargs):
        return BlogModel.objects.filter(
            blog__id=self.kwargs['blog_id']
        )
</code></pre>
<p>serializers.py</p>
<pre><code>class BlogListSerializer(serializers.ModelSerializer):
    class Meta:
        model = BlogModel
        fields = '__all__'
</code></pre>
<p>urls.py</p>
<pre><code> url(r'^blog/(?P<blog_id>\d+)/$',BlogRetrieveView.as_view()),
</code></pre>
<p>I am getting the following output:</p>
<p><a href="https://i.stack.imgur.com/EFfDn.png" rel="nofollow noreferrer">This shows 1 out of 7 blog post shown in a list.</a>
Clearly, the filter wasn't applied.</p>
<p><strong>Edit 1: With the given advice, my code on localhost worked, but the production website is still stuck in the situation mentioned in the problem above. What can be the reason behind it?</strong></p>
|
<p>I think you should delete <code>lookup_field</code> and <code>get_queryset()</code> of BlogRetrieveView and change urls to <code>url(r'^blog/(?P<pk>\d+)/$',BlogRetrieveView.as_view())</code></p>
|
python|python-2.7|filter|django-rest-framework|django-1.11
| 0 |
1,905,653 | 55,733,789 |
How to save RAM when dealing with very large python dict?
|
<p>I have billions of key-value pairs and need to create a lookup table for them. I currently use the native Python <code>dict</code>; however, it seems to be very slow when adding the pairs and consumes lots of RAM (several hundred GB). What I need is to 1) add every pair into the dict and 2) look up keys a few million times. Are there any recommended ways to meet these requirements? I have a machine with a few hundred gigabytes of memory (not sufficient to store everything in-memory) and a good number of CPU cores.</p>
|
<p>If this data is not shared between machines (and if it's in memory with a <code>dict</code> I don't think it is) then I would recommend using a local SQLite database.</p>
<p>Python has an <a href="https://docs.python.org/3/library/sqlite3.html" rel="nofollow noreferrer">internal library</a> for interacting with SQLite which is fast (written in C), stores data to disk (to save RAM) and is available almost everywhere. </p>
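<p>A minimal sketch of such a key-value table (shown here with an in-memory database; pass a filename to <code>connect()</code> to keep the data on disk and out of RAM):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # e.g. "lookup.db" for on-disk storage
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

# 1) add pairs -- executemany() batches inserts, which matters at billion scale
conn.executemany("INSERT INTO kv VALUES (?, ?)",
                 [("alpha", "1"), ("beta", "2"), ("gamma", "3")])
conn.commit()

# 2) look up a key -- the PRIMARY KEY index makes this a fast point query
row = conn.execute("SELECT value FROM kv WHERE key = ?", ("beta",)).fetchone()
print(row[0])  # 2
```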
|
python|database|dictionary
| 2 |
1,905,654 | 56,676,989 |
Why do certain classes in some modules have two copies of the same method, one starting with _?
|
<p>Sometimes I see classes that have attributes or methods starting with an underscore. Why do they do that?</p>
<p>For example: in Tensorflow, a model class has ._layers and .layers
methods.</p>
|
<p>Python has no notion of private members, so an underscore is used by convention to denote private methods or fields.</p>
<blockquote>
<p>The underscore prefix is meant as a hint to another programmer that a variable or method starting with a single underscore is intended for <strong>internal use.</strong> This convention is defined in PEP 8.</p>
</blockquote>
<p><a href="https://dbader.org/blog/meaning-of-underscores-in-python" rel="nofollow noreferrer">Link for the above quote</a></p>
|
python|tensorflow
| 1 |
1,905,655 | 60,819,023 |
Unexpected NaN value when importing CSV into numpy array using genfromtxt()
|
<p>Does anyone know why the first element in my numpy array is always nan when importing the following csv data using genfromtxt?</p>
<pre><code>92,99,86,81
58,7,16,47
57,52,4,66
71,60,72,8
79,63,90,7
40,60,88,68
41,9,93,58
52,21,28,53
1,9,72,88
61,26,33,51
</code></pre>
<p>I have attached a screenshot to this post to show the exact issue. In this, the line of code</p>
<pre><code>x = np.genfromtxt('../data/example_data.csv', delimiter=',')
</code></pre>
<p>imports a 10 by 4 array into the variable x, where the elements in the array are the values in my csv file, except for the element in position (0,0), which is nan.</p>
<p><a href="https://i.stack.imgur.com/rEcV6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rEcV6.png" alt="Issue"></a></p>
<p>Does anyone know what is going on here?</p>
<p>Cheers.</p>
|
<p><a href="https://i.stack.imgur.com/sisOW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sisOW.png" alt="enter image description here"></a></p>
<p>This is a typical CSV file; when you read the values, the top-left cell is empty, as you can see. So that might be the reason.</p>
|
python|arrays|numpy|csv
| 0 |
1,905,656 | 61,128,591 |
How to use a for loop in to find out data by condition
|
<p>I have DataFrame name <strong>expense</strong> which has a column name <strong>price</strong>. Now I want to make different DataFrame for price=50,100,500 respectively like df50 for price=50,df100 for price = 100,df 500 for price =500 from the original DataFrame expense. I have used the below code</p>
<pre><code>pr=[32,50,75,110,150,210,260]
for i in pr:
dfi = expence.loc[expence['price']==i]
</code></pre>
<p>But when I am doing <code>print(df50)</code>
it is showing </p>
<blockquote>
<p>NameError: name 'df50' is not defined.</p>
</blockquote>
<p>I know it can be done by </p>
<pre><code>df50 = expence.loc[expence['price']==50]
</code></pre>
<p>But I have to do it for so many values(almost100) in price. Because of that I want to use for loop.</p>
<p>Can anyone help me how to solve this issue or any suggestion for better method.</p>
|
<pre><code>pr = [32, 50, 75, 110, 150, 210, 260]
df = dict()
for i in pr:
    df[i] = expence.loc[expence['price'] == i]  # assign the value here

print(df[50])
</code></pre>
|
python|python-3.x|python-2.7
| 1 |
1,905,657 | 66,050,950 |
Fit a scikit-learn model in parallel?
|
<p>Is it possible to fit a scikit-learn model in parallel? Something along the lines of
<code>model.fit(X, y, n_jobs=20)</code></p>
|
<p>It really depends on the model you are trying to fit. Usually it will have an <code>n_jobs</code> parameter when you initialize the model. See <a href="https://scikit-learn.org/stable/glossary.html#term-n_jobs" rel="nofollow noreferrer">glossary on n_jobs</a>. For example random forest:</p>
<pre><code>from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_jobs=10)
</code></pre>
<p>If it is an ensemble method, it makes sense to parallelize because you can fit models separately (see <a href="https://scikit-learn.org/stable/modules/ensemble.html#parallelization" rel="nofollow noreferrer">help page for ensemble methods</a>). <a href="https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html" rel="nofollow noreferrer">LogisticRegression()</a> also has an <code>n_jobs</code> option, but I honestly don't know how much this speeds up the fitting process, if that's your bottleneck. See also this <a href="https://stackoverflow.com/questions/20894671/speeding-up-sklearn-logistic-regression">post</a>.</p>
<p>For other methods like elastic net, linear regression or SVM, I don't think there's a parallelization option.</p>
|
python|machine-learning|scikit-learn
| 1 |
1,905,658 | 68,901,706 |
Pandas group multiple columns and append value based on condition in non-grouped column
|
<p>I'd like to group several columns in my dataframe, then append a new column to the original dataframe with a non-aggregated value determined by a condition in another column that falls outside of the grouping. For example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'cat' : ['foo', 'foo', 'foo', 'foo','foo','foo',
'bar', 'bar', 'bar',' bar','bar', 'bar'],
'subcat' : ['a', 'a','a', 'b', 'b', 'b',
'c', 'c','c','d', 'd', 'd'],
'bin' : [1,0,0,0,1,0,0,0,1,0,0,1],
'value':[2,5,7,6,3,9,8,3,2,1,2,4]
})
</code></pre>
<p>I'd like to group by both 'cat' and 'subcat', and I'm hoping to append the corresponding 'value' as a new column where 'bin' == 1.</p>
<p>This is my desired output:</p>
<pre><code>df = pd.DataFrame({'cat' : ['foo', 'foo', 'foo', 'foo','foo','foo',
'bar', 'bar', 'bar',' bar','bar', 'bar'],
'subcat' : ['a', 'a','a', 'b', 'b', 'b',
'c', 'c','c','d', 'd', 'd'],
'bin' : [1,0,0,0,1,0,0,0,1,0,0,1],
'value':[2,5,7,6,3,9,8,3,2,1,2,4],
'new_value':[2,2,2,3,3,3,2,2,2,4,4,4]
})
</code></pre>
<p>I've tried various approaches including the following, but my merge yields more rows than expected so am hoping to find a different route.</p>
<pre><code>vals = df[df['bin'] == 1].loc[:,('cat', 'subcat', 'value')]
df_merged = pd.merge(left = df, right = vals, how = "left", on = ('cat','subcat'))
</code></pre>
<p>Thanks!</p>
|
<p>Try <code>loc</code> with <code>groupby</code> and <code>idxmax</code>:</p>
<pre><code>df['new_value'] = df.loc[df.groupby(['subcat'])['bin'].transform('idxmax'), 'value'].reset_index(drop=True)
print(df)
</code></pre>
<p>Output:</p>
<pre><code> cat subcat bin value new_value
0 foo a 1 2 2
1 foo a 0 5 2
2 foo a 0 7 2
3 foo b 0 6 3
4 foo b 1 3 3
5 foo b 0 9 3
6 bar c 0 8 2
7 bar c 0 3 2
8 bar c 1 2 2
9 bar d 0 1 4
10 bar d 0 2 4
11 bar d 1 4 4
</code></pre>
|
python|pandas
| 0 |
1,905,659 | 72,624,883 |
How do I dump Python's logging configuration?
|
<p>How do I dump the current configuration of the Python <code>logging</code> module? For example, if I use a module that configures logging for me, how can I see what it has done?</p>
|
<p>There does not appear to be a documented way to do so, but we can get hints by looking at how the <code>logging</code> module is implemented.</p>
<p>All <code>Logger</code>s belong to a tree, with the root <code>Logger</code> instance at <code>logging.root</code>. The <code>Logger</code> instances do not track their own children but instead have a shared <code>Manager</code> that can be used to get a list of all loggers:</p>
<pre><code>>>> print(logging.root.manager.loggerDict)
{
'rosgraph': <logging.PlaceHolder object at 0xffffa2851710>,
'rosgraph.network': <logging.Logger object at 0xffffa28517d0>,
'rosout': <rosgraph.roslogging.RospyLogger object at 0xffffa2526290>,
'rospy': <rosgraph.roslogging.RospyLogger object at 0xffffa2594250>,
...
}
</code></pre>
<p>Each <code>Logger</code> instance has <code>handlers</code> and <code>filters</code> attributes which can help understand the behavior of the logger.</p>
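<p>A self-contained sketch that creates a couple of loggers and then dumps what the shared manager knows about them:</p>

```python
import logging

logging.getLogger("myapp")
logging.getLogger("myapp.db")

for name, logger in logging.root.manager.loggerDict.items():
    if name.startswith("myapp"):
        # entries are either real Logger objects or PlaceHolder tree nodes
        print(name, type(logger).__name__)
# myapp Logger
# myapp.db Logger
```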
|
python|python-logging
| 1 |
1,905,660 | 59,353,082 |
os.makedirs wrongly catches extension-less files for directory on macOS
|
<p>I am trying to create directories with the same name as files. There is a file: <code>readme</code> and it has no extension. It gets caught in the <code>os.makedirs(directory)</code> claiming that the file exists.</p>
<pre class="lang-py prettyprint-override"><code>source = "Users/me/Desktop/parent"
dirpaths = ['readme', 'index', 'robots']
def func(directory, source=source):
    directory = os.path.join(source, directory) #
    os.makedirs(directory)

a = [func(directory) for directory in dirpaths]
>>> FileExistsError: [Errno 17] File exists: '/Users/me/Desktop/parent/readme'
</code></pre>
<p>I changed the line with # to this:</p>
<pre class="lang-py prettyprint-override"><code>directory = os.path.join(source,directory+"/")
>>> NotADirectoryError: [Errno 20] Not a directory: '/Users/me/Desktop/parent/readme/'
</code></pre>
<p>How can I make the directory when an extension-less file of the same name exists?</p>
<p>Python 3.7.3</p>
<p>Turns out, macOS treats directory and extension-less files as the same. I tried moving a folder named <code>readme</code> to <code>parent</code> but it refused.</p>
<p><a href="https://i.stack.imgur.com/WH5Wz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WH5Wz.png" alt="enter image description here" /></a></p>
<pre><code>os.path.isfile(source + "/" + "readme")
True
os.path.isfile(source + "/" + "readme/")
False
os.path.isdir(source + "/" + "readme/")
False
os.path.isdir(source + "/" + "readme")
False
</code></pre>
<p>If there is a difference here, can it be used for creating too?</p>
|
<p>Directories are only special types of files. Specifically, they are just files where the <a href="https://docs.python.org/3/library/os.html#os.stat_result.st_mode" rel="nofollow noreferrer"><code>file mode bits</code></a> indicate that the file is a directory (see the bitmask <a href="https://docs.python.org/3/library/stat.html#stat.S_ISDIR" rel="nofollow noreferrer"><code>stat.S_ISDIR</code></a>). For example, a directory's mode as an octal number might typically be <code>0o40755</code> and a regular file <code>0o100644</code>.</p>
<p>On most filesystems (including macOS), you may not have a directory and a regular file with the same name within the same directory, nor may you have filename which includes the path separator character. This is in contrast to an object store, such as <a href="https://aws.amazon.com/s3/" rel="nofollow noreferrer">s3</a>, which is <em>not</em> actually a filesystem.</p>
<p>See for yourself, that the same <a href="https://en.wikipedia.org/wiki/Inode" rel="nofollow noreferrer">inode</a> is taken whether you specify a trailing slash or not:</p>
<pre><code>>>> import os
>>> os.makedirs("./example")
>>> os.stat('./example/').st_ino == os.stat('./example').st_ino
True
</code></pre>
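<p>The mode bits themselves can be checked with the <code>stat</code> module; a sketch using a throw-away directory:</p>

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as base:
    dir_path = os.path.join(base, "example")
    file_path = os.path.join(base, "readme")   # extension-less regular file
    os.makedirs(dir_path)
    open(file_path, "w").close()
    dir_is_dir = stat.S_ISDIR(os.stat(dir_path).st_mode)
    file_is_dir = stat.S_ISDIR(os.stat(file_path).st_mode)

print(dir_is_dir, file_is_dir)  # True False
```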
|
python|macos|directory|file-extension
| 2 |
1,905,661 | 59,122,649 |
what is the Error in int object iteration?
|
<p>What causes the "'int' object is not subscriptable" error in this code?</p>
<pre><code>import math
import os
import random
import re
import sys
# Complete the hourglassSum function below.
def hourglassSum(arr):
    sum1=0
    result=0
    for i in range(4):
        for j in range(4):
            sum1=arr[i][j]+arr[i+1][j]+arr[i+2][j]+arr[i+1][j+1]+arr[i][j+2]+arr[i+1][j+2]+arr[i+2[j+2]]
            if sum1>result:
                result=sum1
    return result

if __name__ == '__main__':
    fptr = open(os.environ['OUTPUT_PATH'], 'w')

    arr = []

    for _ in range(6):
        arr.append(list(map(int, input().rstrip().split())))

    result = hourglassSum(arr)

    fptr.write(str(result) + '\n')

    fptr.close()
</code></pre>
|
<p>The very last part of this long line:</p>
<pre><code>sum1=arr[i][j]+arr[i+1][j]+arr[i+2][j]+arr[i+1][j+1]+arr[i][j+2]+arr[i+1][j+2]+arr[i+2[j+2]]
</code></pre>
<p>(this part here):</p>
<pre><code>arr[i+2[j+2]]
</code></pre>
<p>Is an error; you seem to be trying to refer to <code>2[j+2]</code>. Clearly the integer <code>2</code> is not an array, so Python complains to you that it makes no sense to index an integer.</p>
<p>You probably want that last term to be:</p>
<pre><code>arr[i+2][j+2]
</code></pre>
<p>Looking more closely at the long line, it seems like what you are trying to accomplish is obtain the sum of the elements in a 3x3 section of <code>arr</code>. But even the long line is missing some of the combinations. Rather than risk typing the list of addition problems incorrectly (because there are so many), use a set of nested loops to build up the sum of the 3x3 segment.</p>
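<p>For reference, a cleaned-up version of the loop with the last term fixed and all seven hourglass cells (top row, middle cell, bottom row) included -- a sketch, assuming the usual 6x6 grid of the exercise:</p>

```python
def hourglass_sum(arr):
    best = None
    for i in range(len(arr) - 2):
        for j in range(len(arr[0]) - 2):
            total = (arr[i][j] + arr[i][j+1] + arr[i][j+2]
                     + arr[i+1][j+1]
                     + arr[i+2][j] + arr[i+2][j+1] + arr[i+2][j+2])
            if best is None or total > best:  # None handles all-negative grids
                best = total
    return best

grid = [[1] * 6 for _ in range(6)]
print(hourglass_sum(grid))  # 7
grid[0][0] = 5
print(hourglass_sum(grid))  # 11
```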
|
python|object|int
| 0 |
1,905,662 | 62,452,864 |
For looping several thousands of Url apis and adding it to a list
|
<p>Problem: The output of this code repeats a lot of the same entries in the final list, making it grow much longer than it should.</p>
<p>The goal is to complete the query and then print the final list with every city within each region.</p>
<pre><code>[
{
"name": "Herat",
"id": "AF~HER~Herat"
}
]
[
{
"name": "Herat",
"id": "AF~HER~Herat"
},
{
"name": "Kabul",
"id": "AF~KAB~Kabul"
}
]
[
{
"name": "Herat",
"id": "AF~HER~Herat"
},
{
"name": "Kabul",
"id": "AF~KAB~Kabul"
},
{
"name": "Kandahar",
"id": "AF~KAN~Kandahar"
}
]
</code></pre>
<p>My goal is to get a list of city IDs. First, I do a GET request and parse the JSON response to collect the country IDs into a list.</p>
<p>Second: I have a for loop which makes another GET request for the region IDs, but I now need to add the country IDs to the API URL. I do that by calling .format on the GET request URL, then iterate through all the countries and their respective region IDs, parse them, and store them in a list.</p>
<p>Third: I have another for loop, which makes another GET request that loops through all cities for each region ID in the list above and collects the city IDs that I really need.</p>
<p>Code :</p>
<pre><code>from requests.auth import HTTPBasicAuth
import requests
import json
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def countries():
    data = requests.get("https://localhost/api/netim/v1/countries/", verify=False, auth=HTTPBasicAuth("admin", "admin"))
    rep = data.json()
    a = []
    for elem in rep['items']:
        a.extend([elem.get("id","")])
    print(a)
    return a

def regions():
    ids = []
    for c in countries():
        url = requests.get("https://localhost/api/netim/v1/countries/{}/regions".format(c), verify=False, auth=HTTPBasicAuth("admin", "admin"))
        response = url.json()
        for cid in response['items']:
            ids.extend([cid.get("id","")])
        data = []
        for r in ids:
            url = requests.get("https://localhost/api/netim/v1/regions/{}/cities".format(r), verify=False, auth=HTTPBasicAuth("admin", "admin"))
            response = url.json()
            data.extend([{"name":r.get("name",""),"id":r.get("id", "")} for r in response['items']])
            print(json.dumps(data, indent=4))
    return data

regions()
print(regions())
</code></pre>
<p>You will see thou output contains several copies of the same entry.</p>
<p>I'm not a programmer, so I'm not sure where I'm getting it wrong.</p>
|
<p>It looks as though the output you're concerned with might be due to the fact that you're printing <code>data</code> as you iterate through it in the <code>regions()</code> method. </p>
<p>Try to remove the line:
<code>print(json.dumps(data, indent=4))</code>?</p>
<p>Also, and more importantly - you're setting <code>data</code> to an empty list every time you iterate on an item in Countries. You should probably declare that variable before the initial loop.</p>
<p>You're already printing the final result when you call the function. So printing as you iterate only really makes sense if you're debugging & needing to review the data as you go through it. </p>
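<p>If duplicates can still slip in (for instance from the growing <code>ids</code> list), de-duplicating the final list by <code>id</code> is cheap -- a sketch with made-up entries:</p>

```python
cities = [
    {"name": "Herat", "id": "AF~HER~Herat"},
    {"name": "Kabul", "id": "AF~KAB~Kabul"},
    {"name": "Herat", "id": "AF~HER~Herat"},  # duplicate entry
]

# later dicts with the same id overwrite earlier ones, keeping one per id
unique = list({c["id"]: c for c in cities}.values())
print(len(unique))  # 2
```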
|
python|json|api|rest|brute-force
| 0 |
1,905,663 | 35,608,283 |
How to remove a lot of folders at once using Python?
|
<p>I have seen a lot of questions (<a href="https://stackoverflow.com/questions/185936/delete-folder-contents-in-python">Delete Folder Contents in Python</a>, <a href="https://stackoverflow.com/questions/6996603/how-do-i-delete-a-file-or-folder-in-python?lq=1">How to delete a file or folder?</a>, <a href="https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty-with-python?rq=1">How do I remove/delete a folder that is not empty with Python?</a>) asking how to delete a folder (empty or not) but I haven't seen any asking about how to delete a large number of folders at once.</p>
<p>I tried using <code>shutil</code> and writing something like <code>shutil.rmtree('.../run*')</code> (all the folders that I want to delete are called run0000, run0001 and so on) but this doesn't work because * is not understood.</p>
<p>I finally ended up importing subprocess and using <code>subprocess.Popen('rm -r ./run*/', shell=True)</code>, which works because of the <code>shell=True</code>, but I would like to avoid this due to the security-related hazards that discourage the use of <code>shell=True</code>.</p>
<p>What is the best way to erase a large number of folders (non-empty) at once? I think that it must be to adapt some of the answers given in one of the linked questions but I haven't been able so far. How could I do it?</p>
|
<p>You could use the <a href="https://docs.python.org/2/library/glob.html" rel="nofollow"><code>glob</code> module</a> to locate the directories, then use <code>shutil.rmtree()</code> on each:</p>
<pre><code>from glob import iglob
import shutil
for path in iglob('.../run*'):
shutil.rmtree(path)
</code></pre>
<p>Because you don't need to have a complete list of all matched directories, I used <a href="https://docs.python.org/2/library/glob.html#glob.iglob" rel="nofollow"><code>glob.iglob()</code></a> to yield matched paths one by one.</p>
|
python|python-2.7|shell
| 3 |
1,905,664 | 35,353,933 |
How to convert string into a dict escaping special characters in python
|
<p>I'm trying to convert the following string into a dictionary, where IP becomes a key, and everything else after | becomes a value:</p>
<pre><code>my_string = '''10.11.22.33|{"property1": "0", "property2": "1", "property3": "1", "property4": "1", "property5": "0"}
10.11.22.34|{"property1": "0", "property2": "0", "property3": "1", "property4": "1", "property5": "0", "property6": "0", "property7": "1", "property8": "0", "property9": "0", "property10": "1"}'''
</code></pre>
<p>This is the code I tried:</p>
<pre><code>d = dict(node.split('|') for node in my_string.split())
</code></pre>
<p>However, I get this error:</p>
<pre><code>ValueError: dictionary update sequence element #1 has length 1; 2 is required
</code></pre>
<p>So I simplified my_string to just one line:</p>
<pre><code>my_string = '10.11.22.33|{"property1": "0", "property2": "1", "property3": "1", "property4": "1", "property5": "0"}'
</code></pre>
<p>And used this code to first split the line:</p>
<pre><code>wow = my_string.split('|')
</code></pre>
<p>output:</p>
<pre><code>['10.11.22.33', '{"property1": "0", "property2": "1", "property3": "1", "property4": "1", "property5": "0"}']
</code></pre>
<p>The above is a list of two elements. However, when I try to create dictionary out of it, it fails with this error:</p>
<pre><code>d = dict(wow)
</code></pre>
<p>output:</p>
<pre><code>ValueError: dictionary update sequence element #0 has length 11; 2 is required
</code></pre>
<p>I do not want to modify the value - it needs to be preserved as is. What is the proper way to get this line into a dictionary so that it looks like this:</p>
<pre><code>{'10.11.22.33': '{"property1": "0", "property2": "1", "property3": "1", "property4": "1", "property5": "0"}'}
</code></pre>
<p>This is Python 2.6.</p>
|
<p>You need to <code>split</code> your string on <code>\n</code> first:</p>
<pre><code>dict(ip.split('|') for ip in s.split('\n'))
</code></pre>
<p>Also you can take a look into <a href="https://docs.python.org/2/library/re.html#re.findall" rel="nofollow"><strong><code>re.findall</code></strong></a>:</p>
<pre><code>dict(re.findall(r'(\d+\.\d+\.\d+\.\d+\d+).*?(\{.*?\})', s))
</code></pre>
<p>Where <code>s</code> is your string</p>
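<p>A self-contained sketch of the first approach; using <code>maxsplit=1</code> additionally guards against any <code>|</code> characters that might appear inside the value:</p>

```python
my_string = ('10.11.22.33|{"property1": "0", "property2": "1"}\n'
             '10.11.22.34|{"property1": "0"}')

# split each line on the first '|' only: IP becomes the key,
# everything after becomes the (unmodified) value
d = dict(node.split('|', 1) for node in my_string.split('\n'))
print(d['10.11.22.34'])  # '{"property1": "0"}'
```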
|
python|python-2.7|python-2.x
| 2 |
1,905,665 | 35,410,498 |
How to repeat a function every N minutes?
|
<p>In my python script I want to repeat a function every N minutes, and, of course, the main thread has to keep working as well. In the main thread I have this:</p>
<pre><code># something
# ......
while True:
# something else
sleep(1)
</code></pre>
<p>So how can I create a function (I guess, in another thread) which executes every N minutes? Should I use a timer, or Even, or just a Thread? I'm a bit confused.</p>
|
<p>use a thread</p>
<pre><code>import threading
def hello_world():
threading.Timer(60.0, hello_world).start() # called every minute
print("Hello, World!")
hello_world()
</code></pre>
|
python|multithreading|python-3.x|timer
| 35 |
1,905,666 | 58,846,993 |
Multiple context managers in a "with" statement in python
|
<p>I'm curious why the following code:</p>
<pre><code>l1= threading.Lock()
l2= threading.Lock()
l3=threading.Lock()
with l1 and l2 and l3:
print l1.locked()
print l2.locked()
print l3.locked()
</code></pre>
<p>prints this:</p>
<pre><code>False
False
True
</code></pre>
<p>I realize that the right syntax is:</p>
<pre><code>with l1, l2, l3:
</code></pre>
<p>but I'm trying to find an explanation for why only l3 was locked.</p>
|
<p>To understand this, consider how <code>and</code> works in Python.</p>
<p>The result of the expression <code>x and y</code> is equivalent to <code>y if x else x</code>, except that it only evaluates <code>x</code> once. That is, when <code>bool(x)</code> is <code>True</code>, the <code>and</code> operator results in <code>y</code>, otherwise it results in <code>x</code> (and doesn't evaluate <code>y</code>).</p>
<p>Unless an object defines its own <code>__bool__</code> dunder method, <code>bool(obj)</code> will generally be <code>True</code>. This is the case for lock objects. So <code>bool(l1)</code> is <code>True</code>, and the expression <code>l1 and l2 and l3</code> evaluates as <code>l2 and l3</code>. Then since <code>bool(l2)</code> is <code>True</code>, this expression evaluates as <code>l3</code>.</p>
<p>So the <code>with</code> statement ends up managing the lock <code>l3</code>, and therefore that's the one which is locked in the body of the <code>with</code> statement. As you note, if you want to manage multiple locks at once, you should pass them separated by commas.</p>
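<p>A short demonstration of the evaluation described above:</p>

```python
import threading

l1, l2, l3 = threading.Lock(), threading.Lock(), threading.Lock()

# every lock object is truthy, so `and` keeps discarding the left operand
assert (l1 and l2 and l3) is l3

with l1 and l2 and l3:  # manages only l3
    print(l1.locked(), l2.locked(), l3.locked())  # False False True
```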
|
python|contextmanager
| 4 |
1,905,667 | 58,776,258 |
Load Multi-dimensional numpy array from binary string
|
<p>I want to load a multi-dimensional numpy array from binary string.</p>
<pre class="lang-py prettyprint-override"><code>multi_dim_arr = convert_bin_to_npy(binary_string)
</code></pre>
<p>It is established that the <code>binary_string</code> above is a multi-dimensional numpy array. To check if function works properly, I can verify it by the following method:</p>
<pre class="lang-py prettyprint-override"><code>with open('data.npy', 'rb') as f:
binary_string = f.read()
multi_dim_arr = convert_bin_to_npy(binary_string)
</code></pre>
<p>I am aware of <code>np.fromstring()</code> method, however, the array loses its dimensionality. I am looking for a possible method through which I can obtain all information of the numpy Array through its binary string and then reconstruct the array. </p>
<p>I am using Python 3.6 </p>
|
<p>you can use <code>np.load</code> function to load array saved using <code>np.save</code> and it will preserve the shape as well</p>
<p>Here is example code</p>
<pre class="lang-py prettyprint-override"><code>arr = np.arange(200).reshape(20,10)
print(arr.shape)
np.save('arr.npy', arr)
arr2 = np.load('arr.npy')
print(arr2.shape)
</code></pre>
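<p>Since the question starts from a binary string rather than a file on disk, note that <code>np.load</code> also accepts any file-like object, so the bytes can be wrapped in <code>io.BytesIO</code>. A sketch:</p>

```python
import io
import numpy as np

arr = np.arange(200).reshape(20, 10)

buf = io.BytesIO()
np.save(buf, arr)                 # serialize to the .npy format in memory
binary_string = buf.getvalue()    # the same bytes f.read() would return

# reconstruct the array, shape included, from the raw bytes
arr2 = np.load(io.BytesIO(binary_string))
print(arr2.shape)  # (20, 10)
```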
|
python|arrays|numpy
| 0 |
1,905,668 | 58,755,658 |
how to delete in a list from x to y?
|
<p>I was wondering how to delete multiple values from index x to y.
This is currently what I'm trying:</p>
<pre><code>
first_num = None
second_num = None
while True:
first_or_last = "first" if first_num is None else "last"
text = str("Enter the {} index number to start deleting: ").format(first_or_last)
remove_ftl_input = input(text)
if first_num is None or second_num is None:
if remove_ftl_input.isdigit():
if first_num is None:
first_num = int(remove_ftl_input)
elif first_num is not None and second_num is None:
second_num = int(remove_ftl_input)
if first_num is not None and second_num is not None:
for x in range(0, first_num-second_num):
try:
# note: every loop index shifts by -1 thats why first-num i assume?
found_items_list.pop(first_num)
except IndexError:
print(str(x) + " was out of reach.")
</code></pre>
|
<blockquote>
<p>how to delete multiple values from index x</p>
</blockquote>
<p>Why not just join the ranges that you want to keep:</p>
<pre><code>>>> list = [0, 1, 2, 3, 4, 5, 6, 7]
>>> first = 3
>>> last = 4
>>> list = list[:first] + list[last+1:]
>>> list
[0, 1, 2, 5, 6, 7]
</code></pre>
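<p>An in-place alternative using <code>del</code> with a slice does the same thing (and avoids shadowing the built-in name <code>list</code>):</p>

```python
items = [0, 1, 2, 3, 4, 5, 6, 7]
first, last = 3, 4
del items[first:last + 1]   # removes indices 3..4 in place
print(items)  # [0, 1, 2, 5, 6, 7]
```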
|
python|list
| 0 |
1,905,669 | 58,639,336 |
How to filter pandas table based on multiple values from different columns?
|
<p>I have a pandas table in the following format [df], indexed by 'noc' and 'year'. How can I access a 'noc, year combination' and save the entry of 'total_medals' to a list?</p>
<pre><code> medal Bronze Gold Medal Silver total_medals
noc year
ALG 1984 2.0 NaN NaN NaN 2.0 2.000000
1992 4.0 2.0 NaN NaN 6.0 4.000000
1996 2.0 1.0 NaN 4.0 7.0 5.000000
ANZ 1984 2.0 15.0 NaN 2.0 19.0 19.000000
1992 3.0 5.0 NaN 2.0 10.0 14.500000
1996 1.0 2.0 NaN 2.0 5.0 11.333333
ARG 1984 2.0 6.0 NaN 3.0 11.0 11.000000
1992 5.0 3.0 NaN 24.0 32.0 21.500000
1996 3.0 7.0 NaN 5.0 15.0 19.333333
</code></pre>
<p>For example: I want to access the 'total_medals' of ARG in 1992 (which is 21.5) and save this to a new list.</p>
|
<p>There is a <code>MultiIndex</code> in the index values, so you can select values by tuple in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>a = df.loc[('ARG',1992), 'total_medals']
print (a)
21.5
</code></pre>
|
python|pandas|list|filter
| 1 |
1,905,670 | 31,537,379 |
Regex to match URL ending with _F<XX>_C<XX>
|
<p>In the URL, _F and C are fixed and XX are dynamic integers.</p>
<p>I have tried these patterns:</p>
<pre><code>^_F[\d+]_C[\d+]$
</code></pre>
<p>Example URLs are :</p>
<pre><code>_F23_C456
_F345_C1
</code></pre>
<p>I am trying to match this regex in urls.py of django.</p>
|
<p>You need to move <code>+</code> outside the character class and either remove the start-of-string anchor <code>^</code>, or insert <code>.*</code> after it:</p>
<pre><code>^.*_F\d+_C\d+$
</code></pre>
<p>Or</p>
<pre><code>_F\d+_C\d+$
</code></pre>
<p>See <a href="https://regex101.com/r/fC2iC6/1" rel="nofollow">demo</a></p>
<p>Inside the character class, <code>+</code> is treated literally, not as a quantifier, and loses its <em>match 1 or more occurrences</em> meaning. And your regex matches beginning of string, an underscore, <code>F</code>, one digit or <code>+</code>, an underscore, <code>C</code>, one digit or <code>+</code> and the end of string.</p>
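<p>A quick check of both behaviours with the <code>re</code> module:</p>

```python
import re

pattern = re.compile(r'_F\d+_C\d+$')

assert pattern.search('_F23_C456')
assert pattern.search('_F345_C1')

# inside the original class [\d+], a literal '+' also matched;
# with \d+ as a quantifier it no longer does:
assert pattern.search('_F+_C+') is None
print('all patterns behave as expected')
```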
|
python|regex|django|django-urls
| 1 |
1,905,671 | 15,895,840 |
Django exclude follower in query
|
<p>I have 2 models, userProfile and relationship. Users can follow each other and the relation is made through the relationship model. Here is the code :</p>
<pre><code>class UserProfile(models.Model):
slug = models.SlugField(max_length=200)
user = models.ForeignKey(User, unique =True)
relationships = models.ManyToManyField('self', through='Relationship',symmetrical=False,null=True, blank=True,related_name='related_to')
class Relationship(models.Model):
from_person = models.ForeignKey(UserProfile, related_name='from_people')
to_person = models.ForeignKey(UserProfile, related_name='to_people')
status = models.IntegerField(choices=RELATIONSHIP_STATUSES)
</code></pre>
<p>I'm trying to get the list of userProfile excluding the list of userProfile followed by a certain user.
Here is my query:</p>
<pre><code>topUsersRecommendation = UserProfile.objects.exclude(id=profile.id,relationships__to_people__from_person = profile).extra(
select={
'num_group': """
SELECT COUNT(*) FROM axiom_alto_groupuser gu1
JOIN axiom_alto_groupuser gu2 on gu1.group_id = gu2.group_id
WHERE gu2.user_id=axiom_alto_userprofile.id
AND gu1.user_id = %d """ % profile.id,
},
).order_by('-num_group')
</code></pre>
<p>But the exclude doesn't seem to work.
Thank you for your help ^^</p>
|
<p>You can add a field to <code>UserProfile</code> defined as such:</p>
<pre><code>followers = models.ManyToManyField(
'self',
blank=True,
symmetrical=False,
through='Relationship',
related_name='followees'
)
</code></pre>
<p>IIRC you can then do something like this:</p>
<pre><code>certain_user = UserProfile.objects.get(slug='Bob')
all_but_certain_user_followees = UserProfile.objects.all().exclude(
follower = certain_user
)
</code></pre>
|
python|django|subquery|django-queryset
| 0 |
1,905,672 | 25,231,407 |
Is there any way to emulate meta programming in C++?
|
<p>Is there any way to emulate metaprogramming in C++? (C++ before the C++11 standard.)
For Python, an answer on Stack Overflow (<a href="https://stackoverflow.com/questions/25221981/generate-specific-names-proeprties-using-metaclass">Generate specific names proeprties using metaclass</a>) suggested creating something like the following, and it works:</p>
<pre><code>class FooMeta(type):
def __init__(cls, name, bases, dct):
super(FooMeta, cls).__init__(name, bases, dct)
for n in range(100):
setattr(cls, 'NUMBER_%s'%n, n)
class Bar(object):
__metaclass__ = FooMeta
</code></pre>
<p>But I also need the same class in C++: a class with n static const int NUMBER_some_number fields.
How can I create this without hardcoding?</p>
|
<p>In Python naming the first few hundred integers can have a slight performance advantage, since a typical implementation only caches a few hundred integers, and there's still a look-up for them.</p>
<p>In C++ integers are not dynamic objects so there is no problem and no advantage.</p>
<p>In C++ meta programming is now typically done by using the template mechanism. Before that was introduced one used macros to generate code. However, since the Python problem that you're addressing doesn't exist in C++, there's no point.</p>
|
python|c++|metaprogramming
| 3 |
1,905,673 | 71,057,337 |
IndexError: list assignment index out of range for CNN program from scratch
|
<p>I've been getting this error even though the dimensions of my array match the limits of my loop. What should I change?</p>
<p>This program is aimed at classifying 32*32px black and white images using neural networks from scratch.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pickle
from math import exp
from random import seed
from random import random
def unpickle(file):
with open(file, 'rb') as fo:
data = pickle.load(fo, encoding='bytes')
return data
def load_data(data_dir, negatives=False):
meta_data_dict = unpickle("batches.meta")
data_label_names = meta_data_dict[b'label_names']
data_label_names = np.array(data_label_names)
# training data
train_data = None
train_filenames = []
train_labels = []
train_data_dict = unpickle("data_batch_1")
train_data = train_data_dict[b'data']
train_filenames += train_data_dict[b'filenames']
train_labels += train_data_dict[b'labels']
train_data = train_data.reshape((len(train_data), 1, 32, 32))
if negatives:
train_data = train_data.transpose(0, 2, 3, 1).astype(np.float32)
else:
train_data = np.rollaxis(train_data, 1, 4)
train_filenames = np.array(train_filenames)
train_labels = np.array(train_labels)
return train_data, train_filenames, train_labels, data_label_names
data_dir = 'data-batches-py'
x_train, x_train_filenames, y_train_labels, y_label_names = load_data(data_dir)
print (x_train.shape)
print (len(x_train))
# Initialize a network
def initialize_network(n_hidden):
network = list()
hidden_layer = [{'weights':[random() for i in range(len(x_train) + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
output_layer = [{'weights':[random() for i in range(n_hidden +1)]} for i in range(10)]
network.append(output_layer)
return network
def sum(inputs):
sum_row=[]
for i in range(len(x_train)):
for a in range(32):
for b in range(32):
sum_row[i]=0
sum_row[i]+= inputs[i][a][b][1]
return sum_row
# Calculate neuron activation for an input
def activate(weights, inputs):
flattened= sum(inputs)
activation = weights[-1]
for i in range(len(weights)-1):
activation += weights[i] * flattened[i]
return activation
# Transfer neuron activation
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
# Forward propagate input to a network output
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
network = initialize_network(1)
row = x_train
output = forward_propagate(network, row)
print(output)
</code></pre>
<p>After running this code I get the following output</p>
<pre><code>(10000, 32, 32, 1)
10000
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
/var/folders/gm/z9_jyr1s5k1232zgf4xf7cxc0000gn/T/ipykernel_15670/2752672525.py in <module>
153 network = initialize_network(1)
154 row = x_train
--> 155 output = forward_propagate(network, row)
156 print(output)
/var/folders/gm/z9_jyr1s5k1232zgf4xf7cxc0000gn/T/ipykernel_15670/2752672525.py in forward_propagate(network, row)
78 new_inputs = []
79 for neuron in layer:
---> 80 activation = activate(neuron['weights'], inputs)
81 neuron['output'] = transfer(activation)
82 new_inputs.append(neuron['output'])
/var/folders/gm/z9_jyr1s5k1232zgf4xf7cxc0000gn/T/ipykernel_15670/2752672525.py in activate(weights, inputs)
62 # Calculate neuron activation for an input
63 def activate(weights, inputs):
---> 64 flattened= sum(inputs)
65 activation = weights[-1]
66 for i in range(len(weights)-1):
/var/folders/gm/z9_jyr1s5k1232zgf4xf7cxc0000gn/T/ipykernel_15670/2752672525.py in sum(inputs)
57 for a in range(32):
58 for b in range(32):
---> 59 sum_row[i]=0
60 sum_row[i]+= inputs[i][a][b][1]
61 return sum_row
IndexError: list assignment index out of range
</code></pre>
<p>As you can see, the dimensions of the input are 10000*32*32*1, so why am I getting an error?</p>
|
<p>The error is in</p>
<pre><code>def sum(inputs):
sum_row=[]
for i in range(len(x_train)):
for a in range(32):
for b in range(32):
sum_row[i]=0
sum_row[i]+= inputs[i][a][b][1]
return sum_row
</code></pre>
<p>The reason for the error is that you are trying to access an element of <code>sum_row</code>, but <code>sum_row</code> doesn't have any elements in it (you set it to <code>[]</code> before the for loops). Inside the for loops, you try to access members it doesn't have. Instead of doing <code>sum_row[i] = 0</code> you should be doing <code>sum_row.append(0)</code>, which adds an element to that list. Then you can access it via <code>sum_row[-1] += inputs[i][a][b][1]</code></p>
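<p>A hedged sketch of the repaired helper. Note it appends just once per image, outside the inner loops, and indexes the last axis with <code>0</code> since that axis has length 1; it is demonstrated here on tiny 2x2 "images" instead of 32x32, and the name is changed so it no longer shadows the built-in <code>sum</code>:</p>

```python
def sum_images(inputs):
    """Sum the pixel values of each (H, W, 1) image in `inputs`."""
    sum_row = []
    for i in range(len(inputs)):
        sum_row.append(0)                        # one accumulator per image
        for a in range(len(inputs[i])):
            for b in range(len(inputs[i][a])):
                sum_row[-1] += inputs[i][a][b][0]
    return sum_row

imgs = [[[[1], [2]], [[3], [4]]],
        [[[5], [6]], [[7], [8]]]]
print(sum_images(imgs))  # [10, 26]
```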
|
python|python-3.x|deep-learning|conv-neural-network
| 0 |
1,905,674 | 60,023,797 |
Cannot import json files with json.load
|
<p>I am trying to import json files to python, clean them and save them as csv. My problem is really on <em>importing</em> the json files from my computer in order to manipulate them. Something goes wrong in the first lines, the rest of the code works when I import the files directly from an API.</p>
<p>This is the code with the API that works:</p>
<pre><code>import requests, json
import pandas as pd
myList = {"325413", "424430"}
for toImport in myList:
query = {"naics": toImport}
results = requests.post(
"https://www.lobbyview.org/public/api/reports", data=json.dumps(query)
)
json_response = results.json()["result"]
resulting_data = []
for data in json_response:
year = data["year"]
....do my staff....
# create a DataFrame
b.to_csv(r"path/" +toImport +".csv")
</code></pre>
<p>And this is the one with the directory that doesn't:</p>
<pre><code>import pandas as pd
import requests, json
myList = {"1", "2", "3", "4", "5", "6", "7", "8", "9", "10"}
for toImport in myList:
with open("path" + toImport + ".json") as f:
json_response = json.load(f)
resulting_data = []
for data in json_response:
year = data["year"]
....do my staff....
# create a DataFrame
b = pd.DataFrame(resulting_data)
print(b)
b.to_csv(r"path/" +toImport +".csv")
</code></pre>
|
<p>Two small fixes: <code>myList</code> is a list rather than a set, and, more importantly, the missing path separator is added (<code>"path/" + toImport + ".json"</code> instead of <code>"path" + toImport + ".json"</code>):</p>
<pre><code>import pandas as pd
import requests, json
myList = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]
for toImport in myList:
with open("path/" + toImport + ".json") as f:
json_response = json.load(f)
resulting_data = []
for data in json_response:
year = data["year"]
        ....do my staff....
# create a DataFrame
b = pd.DataFrame(resulting_data)
print(b)
b.to_csv(r"path/" +toImport +".csv")
</code></pre>
|
python|json|pandas|dataframe|python-import
| 0 |
1,905,675 | 60,062,073 |
Check for a specific sound (input: microphone)
|
<p><strong>My problem</strong>: I currently have a sound file containing a specific sound I recorded. I want to be able to recognize when that sound is played again for over like 2 seconds. The volume does not matter to me, I want to be able to recognize when that specific note is played. For example the file holds a recording of the note A (la), and if i play the note A on a piano next to the microphone, the raspberry pi will print "correct" or something. I am having trouble recognizing the note, and previous research has suggested finding the frequency / using FFT function but i have been unable to figure it out. Do you recommend any libraries I should use in order to implement this?</p>
<p>Ideally I would be able to identify the pitch of an external sound. As soon as I have the pitch I would be able to check it between a range of frequencies.</p>
|
<p>You indeed want to use something like FFT, which both <code>numpy</code> and <code>scipy</code> offer. The idea would be that you collect a buffer of your microphone input, apply the FFT on it, then try to find whether the most powerful frequency is that of the note you're looking for. There exist <a href="https://en.wikipedia.org/wiki/Piano_key_frequencies#List" rel="nofollow noreferrer">tables</a> that can tell you what the frequency of each note is.</p>
<p>You're essentially making a <a href="https://en.wikipedia.org/wiki/Spectrogram" rel="nofollow noreferrer">spectrogram</a>. </p>
<p>If you want an order of operations:</p>
<ol>
<li>Building frequency scale:
<ol>
<li>Determine frequency scale using <code>np.fft.fftfreq</code> (N being the same
length as your buffer)</li>
</ol></li>
<li>Build table of notes
<ol>
<li>Establish what frequency belongs to what note (use a reference)</li>
<li>Determine a margin of error </li>
</ol></li>
<li><p>Identifying notes (This part is in a loop)</p>
<ol>
<li><p>Collect signal in a buffer of select size</p></li>
<li><p>Apply FFT</p></li>
<li><p>Find highest value in frequency domain</p></li>
<li><p>Look for corresponding note within a range of error in lookup table</p></li>
</ol></li>
</ol>
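<p>A minimal sketch of the "Identifying notes" loop body, with a synthesized 440 Hz (A4) sine wave standing in for the microphone buffer:</p>

```python
import numpy as np

RATE = 8000                     # samples per second (assumed)
N = RATE                        # one-second buffer

# pretend microphone input: one second of A4 (440 Hz)
t = np.arange(N) / RATE
buffer = np.sin(2 * np.pi * 440.0 * t)

# FFT of the real-valued buffer, plus the matching frequency scale
spectrum = np.abs(np.fft.rfft(buffer))
freqs = np.fft.rfftfreq(N, d=1.0 / RATE)

# highest value in the frequency domain -> dominant pitch
peak = freqs[np.argmax(spectrum)]
print(peak)   # 440.0 -- compare against the note table with a margin of error
```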
<p>Useful functions:</p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fft.html#numpy.fft.fft" rel="nofollow noreferrer">Numpy FFT</a></p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftfreq.html" rel="nofollow noreferrer">Numpy FFTFREQ</a></p>
<p><a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html#numpy.argmax" rel="nofollow noreferrer">Numpy ARGMAX</a></p>
<p>Other helpful questions:</p>
<p><a href="https://stackoverflow.com/questions/47189624/maintain-a-streaming-microphone-input-in-python">Maintain a streaming microphone input in Python</a></p>
|
python|audio|fft|frequency|pyaudio
| 1 |
1,905,676 | 60,228,580 |
Request.POST.get not working for me in django, returning default value
|
<p>I am trying to get input from an HTML form in Django; Python code below:</p>
<pre><code>def add(request):
n = request.POST.get('Username', 'Did not work')
i = Item(name=n,price=0)
i.save()
return render(request,'tarkovdb/test.html')
</code></pre>
<p>Second pic is my html code:</p>
<pre><code><html>
<head>
<meta charset="UTF-8">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.4.1/css/bootstrap.min.css" integrity="sha384-Vkoo8x4CGs0OPaXtkKtu6ug5T0eNV6gBiFeWPGFN9Muh0f23Q9Ifjh" crossorigin="anonymous">
<title>Tarkov Database Web App</title>
</head>
<body>
<h1>This is the page to add items</h1>
<li><a href="{% url 'tarkovdb:index' %}">List of Items in DataBase</a></li>
<form>
<div class="form-group">
<label for='username'> Username: </label>
<input type='text' name='Username' id='username'> <br><br>
</div>
<button type="submit" class="btn btn-primary">Submit</button>
</form>
</code></pre>
|
<p>You need to set your <a href="https://www.w3schools.com/tags/att_form_method.asp" rel="nofollow noreferrer">method</a> attribute to "<strong>post</strong>" on your HTML <a href="https://www.w3schools.com/tags/tag_form.asp" rel="nofollow noreferrer">form tag</a>. Like this:</p>
<pre><code><form method="post">
<!-- your fields here -->
</form>
</code></pre>
<p>Otherwise you'll be sending a GET request, which is the default value of the <a href="https://www.w3schools.com/tags/att_form_method.asp" rel="nofollow noreferrer">method</a> attribute.</p>
<p>PD.: Please paste your code, make it easy for the community to help you. Otherwise you'll get down voted.</p>
|
python|html|django
| 0 |
1,905,677 | 60,304,017 |
Source a virtual environment using python
|
<p>On the servers I am working on, there is a virtual environment which can be activated using <code>source /bin/virtualenv-activate</code>. I need this virtual environment because of a command line tool which is accessible only there. Let's call it <code>fancytool</code>. What I would like to do is to use <code>fancytool</code> out of a python script and return its output into a python variable. The python script is not initiated in the virtual environment, so I thought of something like this:</p>
<pre><code>os.system('source /bin/virtualenv-activate')
results = os.popen(fancytool).read()
</code></pre>
<p>However, this returns:</p>
<pre><code>sh: 1: source: not found
sh: 1: fancytool: not found
</code></pre>
<p>If I enter <code>source /bin/virtualenv-activate</code> in the terminal and then <code>fancytool</code>, everything works fine. How can I achieve this also in a python script?</p>
|
<p>You should add a shebang to the top of the script to activate the env</p>
<pre><code>#!/path/to/venv/bin/python
# your code here
</code></pre>
<p>However relying on the knowledge of the venv within your scripts is considered poor practice </p>
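<p>An alternative that sidesteps sourcing entirely: activation scripts mostly just prepend the venv's <code>bin</code> directory to <code>PATH</code>, so calling the tool by its absolute path (e.g. <code>/path/to/venv/bin/fancytool</code>, adjusted for your setup) is usually equivalent, needs no <code>shell=True</code>, and captures the output directly. A sketch (Python 3.7+), with the Python interpreter standing in for the hypothetical tool:</p>

```python
import subprocess
import sys

# In practice: tool = ["/path/to/venv/bin/fancytool"]
tool = [sys.executable, "-c", "print('fancytool output')"]

result = subprocess.run(tool, capture_output=True, text=True)
output = result.stdout.strip()   # the tool's output as a Python string
print(output)
```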
|
python|subprocess|virtual-environment
| 1 |
1,905,678 | 5,611,862 |
Guess File Type Windows Similar to Linux `file`
|
<p>Is there an equivalent to the Linux <code>file</code> command on Windows? I would prefer something with Python bindings, but anything will do as long as it can be accessed through a DLL or launched as a subprocess.</p>
|
<p>There is no native function in Windows, but you can use <a href="http://mark0.net/soft-trid-e.html" rel="nofollow">TrID</a>. This tool has been around since 2003 and is still maintained.</p>
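<p>If a content-inspecting tool like TrID is overkill, Python's standard library also ships <code>mimetypes</code>. Unlike <code>file</code>, it only guesses from the filename extension and never reads the file's contents, but it needs no external dependency:</p>

```python
import mimetypes

# guesses purely from the ".pdf" extension
mime, encoding = mimetypes.guess_type("report.pdf")
print(mime)  # 'application/pdf'
```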
|
python|file|types|mime
| 2 |
1,905,679 | 67,848,099 |
how to scroll down small window on web page using python?
|
<p>I am doing this:</p>
<pre><code> html = browser.find_element_by_xpath('/html/body/div[4]/div/div/div[2]')
html.send_keys(Keys.END)
</code></pre>
<blockquote>
<p>ERROR=selenium.common.exceptions.ElementNotInteractableException:
Message: element not interactable</p>
</blockquote>
|
<p>You can try to scroll the element into Selenium's view:</p>
<pre><code>element = browser.find_element_by_xpath('/html/body/div[4]/div/div/div[2]')
driver.execute_script("arguments[0].scrollIntoView();", element)
</code></pre>
<p>Or if you want to scroll down vertically:</p>
<pre><code>driver.execute_script("window.scrollTo(0, Y)")
</code></pre>
<p>Y could be <code>100, 200, and so on...</code></p>
<p>or using <code>ActionChains</code></p>
<pre><code>ActionChains(driver).move_to_element(html).perform()
</code></pre>
|
python|selenium|xpath|webdriver
| 0 |
1,905,680 | 30,452,826 |
Python : Pandas DataFrame to CSV
|
<p>I want to simply create a csv file from the constructed DataFrame so I do not have to use the internet to access the information. The rows are the lists in the code: 'CIK' 'Ticker' 'Company' 'Sector' 'Industry'</p>
<p>My current code is as follows:</p>
<pre><code>def stockStat():
doc = pq('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies')
for heading in doc(".mw-headline:contains('S&P 500 Component Stocks')").parent("h2"):
rows = pq(heading).next("table tr")
cik = []
ticker = []
coName = []
sector = []
industry = []
for row in rows:
tds = pq(row).find("td")
cik.append(tds.eq(7).text())
ticker.append(tds.eq(0).text())
coName.append(tds.eq(1).text())
sector.append(tds.eq(3).text())
industry.append(tds.eq(4).text())
d = {'CIK':cik, 'Ticker' : ticker, 'Company':coName, 'Sector':sector, 'Industry':industry}
stockData = pd.DataFrame(d)
stockData = stockData.set_index('Ticker')
stockStat()
</code></pre>
|
<p>As EdChum already mentioned in the comments, creating a CSV out of a pandas DataFrame is done with the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html#pandas.DataFrame.to_csv" rel="nofollow">DataFrame.to_csv()</a> method.</p>
<p>The dataframe.to_csv() method takes lots of arguments, they are all covered in the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html#pandas.DataFrame.to_csv" rel="nofollow">DataFrame.to_csv()</a> method documentation. Here is a small example for you:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'mycolumn': [1,2,3,4]})
df.to_csv('~/myfile.csv')
</code></pre>
<p>After this, myfile.csv should be available in your home directory.
If you are using Windows, saving the file to 'C:\myfile.csv' should work better as a proof of concept.</p>
|
python|csv|pandas|dataframe
| 1 |
1,905,681 | 30,534,580 |
Huge file handling and sorting using python
|
<p>I'm currently working on a program that uses a file with data in the format: 6 columns and a dynamic number of rows.</p>
<p>The file I got for testing is 26 mb and following is the program that converts first 3 columns into 3 different lists.</p>
<pre><code>f = open('foo', 'r')
print('running...')
a = []
b = []
c = []
for line in f:
x = (line.split(' '))
a.append(x[0])
b.append(x[1])
c.append(x[2])
print(a,b,c,sep='\n')
</code></pre>
<p>I have rechecked this program and logic looks correct and when implemented on small file it works but when i use this program with the 26 mb file it stops responding.</p>
<p>Description of the program:
The program opens a file named 'foo' and processes the file line by line.
It splits each line into parts based on the separator passed as an argument to the .split() method. In my program I have used whitespace as the separator, since the data in the text file is separated by whitespace.</p>
<p>I'm not able to figure out why this program stops responding, and I need help with it!</p>
|
<p>If you use <code>numpy</code>, you can use <code>genfromtxt</code>:</p>
<pre><code>import numpy as np
a,b,c=np.genfromtxt('foo',usecols=[0,1,2],unpack=True)
</code></pre>
<p>Does that work with your large file?</p>
<p>EDIT:</p>
<p>OK, so I tried it on your file, and it seems to work fine. So I'm not sure what your problem is.</p>
<pre><code>In [1]: from numpy import genfromtxt
In [2]: a,b,c=genfromtxt('foo',usecols=[0,1,2],unpack=True)
In [3]: a
Out[3]:
array([ 406.954744, 406.828508, 406.906079, ..., 408.944226,
408.833872, 408.788698])
In [4]: b
Out[4]:
array([ 261.445358, 261.454366, 261.602131, ..., 260.46189 ,
260.252377, 260.650606])
In [5]: c
Out[5]:
array([ 17.451789, 17.582017, 17.388673, ..., 26.41099 , 26.481148,
26.606282])
In [6]: print len(a), len(b), len(c)
419040 419040 419040
</code></pre>
|
python|file-handling
| 1 |
1,905,682 | 42,703,723 |
Creating array of strings
|
<p>I have an array of flags for various types as:</p>
<pre><code>Data Type1 Type2 Type3
12 1 0 0
14 0 1 0
3 0 1 0
45 0 0 1
</code></pre>
<p>I want to create the following array:</p>
<pre><code>Data TypeName
12 Type1
14 Type2
3 Type2
45 Type3
</code></pre>
<p>I tried creating an empty array of type strings as:</p>
<pre><code>import numpy as np
z1 = np.empty(4, np.string_)
z1[np.where(Type1=1)] = 'Type1'
</code></pre>
<p>But this doesn't seem to give me the desired results.</p>
<p>Edit:
I can use pandas dataframe and each row has only 1 type either Type1, Type2, Type3</p>
<p>Edit2:
Data Type1 Type2 Type3 are column names as in pandas dataframe but I was using numpy array with the implicit names as I have pointed in the example above.</p>
|
<p><strong>UPDATE:</strong> here is a mixture of <a href="https://stackoverflow.com/a/42703913/5741205">a brilliant @Divakar's idea</a> to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html" rel="nofollow noreferrer">DataFrame.idxmax(1)</a> method and using <code>set_index()</code> and <code>reset_index()</code> in order to get rid of <code>pd.concat()</code>:</p>
<pre><code>In [142]: df.set_index('Data').idxmax(1).reset_index(name='TypeName')
Out[142]:
Data TypeName
0 12 Type1
1 14 Type2
2 3 Type2
3 45 Type3
</code></pre>
<p><strong>OLD answer:</strong></p>
<p>You can do it this way (Pandas solution):</p>
<pre><code>In [132]: df.set_index('Data') \
             .stack() \
             .reset_index(name='val') \
             .query("val == 1") \
             .drop('val', 1)
Out[132]:
Data level_1
0 12 Type1
4 14 Type2
7 3 Type2
11 45 Type3
</code></pre>
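<p>If you are working with a plain NumPy array of flags rather than a DataFrame (as mentioned in the question's second edit), a similar result can be obtained with <code>argmax</code> over the flag columns. This is only a sketch with made-up sample arrays:</p>

```python
import numpy as np

data = np.array([12, 14, 3, 45])
flags = np.array([[1, 0, 0],   # Type1, Type2, Type3 flag columns
                  [0, 1, 0],
                  [0, 1, 0],
                  [0, 0, 1]])
names = np.array(["Type1", "Type2", "Type3"])

# argmax returns the column index of the single 1 in each row
type_names = names[flags.argmax(axis=1)]
print(list(zip(data, type_names)))
```

<p><code>argmax</code> returns the column index of the first maximum in each row, which here is exactly the position of the single <code>1</code> flag.</p>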
|
python|pandas|numpy
| 2 |
1,905,683 | 66,749,808 |
How do I use HAL version 1.2 on Tensorflow Lite for Android?
|
<p>I have a quantized TensorflowLite model that I'm loading onto a Pixel 3 running Android 11. I built the model using Tensorflow Lite 2.5 and I'm using the nightly builds of Tensorflow for Android.</p>
<p>I'm initializing the TFLite Interpreter using the default provided NNAPI delegate.</p>
<p>However, when I load the model, I'm getting the following error from NNAPI:</p>
<pre><code>/OperationsUtils(16219): NN_RET_CHECK failed (frameworks/ml/nn/common/OperationsUtils.cpp:111): Operation QUANTIZE with inputs {TENSOR_FLOAT32} and outputs {TENSOR_QUANT8_ASYMM} is only supported since HAL version 1.2 (validating using HAL version 1.0)
E/Utils (16219): Validation failed for operation QUANTIZE
E/OperationsUtils(16219): NN_RET_CHECK failed (frameworks/ml/nn/common/OperationsUtils.cpp:111): Operation QUANTIZE with inputs {TENSOR_FLOAT32} and outputs {TENSOR_QUANT8_ASYMM} is only supported since HAL version 1.2 (validating using HAL version 1.0)
</code></pre>
<p>Android 11 should support NNAPI 1.2. Is there some parameter I'm missing to TensorFlow or Android to enable support for higher versions on NNAPI?</p>
<p>For reference, here are my dependencies from my gradle file:</p>
<pre><code>dependencies {
// snip
implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly-SNAPSHOT'
implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly-SNAPSHOT'
}
</code></pre>
|
<p>It turns out these errors are really warnings coming from NNAPI. TensorFlow Lite creates the model for all available devices, and NNAPI picks the best one based on the operations. With verbose logging enabled, the eventual result of all of this is that NNAPI decides that the only device capable of processing the model is the qti-default device. The errors come from the <em>paintbox</em> and <em>nnapi-reference</em> devices, which are then not used in the execution of the model.</p>
<p>I assumed these messages were the cause of a failure to execute the model on NNAPI, but there is something else wrong.</p>
<p>So the answer to <strong>this</strong> question is that TensorFlow Lite and NNAPI select the best-supported device where possible, despite the scary error messages.</p>
|
android|tensorflow|tensorflow-lite|nnapi
| 0 |
1,905,684 | 65,730,183 |
How to create response json from pandas csv in django?
|
<p>I get an error when I use this:</p>
<pre><code>df = pd.read_csv('filename.csv', usecols=[1,5,7])

return Response(df.to_json(), status=status.HTTP_200_OK)
</code></pre>
|
<p><code>df.to_json(r'Path where the new JSON file will be stored\New File Name.json')</code></p>
<p>You need to first save the JSON file, then send it in the response.</p>
|
python|django|django-views
| 0 |
1,905,685 | 3,687,561 |
How do I dynamically get a list of all PythonCard components in a GUI class?
|
<p>Here is a sample resource file for PythonCard:</p>
<pre><code>{ 'application':{ 'type':'Application',
'name':'Test Sounds',
'backgrounds':
[
{ 'type':'Background',
'name':'Test Sounds',
'title':'Test Sounds',
'position':( 5, 5 ),
'size':( 300, 200 ),
'components':
[
{ 'type':'TextField', 'name':'fldFilename', 'position':( 5, 4 ), 'size':( 200, -1 ), 'text':'anykey.wav' },
{ 'type':'Button', 'name':'btnFile', 'position':( 210, 5 ), 'size':( -1, -1), 'label':'Select File' },
{ 'type':'Button', 'name':'btnPlay', 'position':( 5, 40 ), 'size':( -1, -1 ), 'label':'Play' },
{ 'type':'CheckBox', 'name':'chkAsync', 'position':( 5, 70 ), 'size':( -1, -1 ), 'label':'Async I/O', 'checked':0 },
{ 'type':'CheckBox', 'name':'chkLoop', 'position':( 5, 90 ), 'size':( -1, -1 ), 'label':'Loop sound', 'checked':0 },
] } ] } }
</code></pre>
<p>With this source file:</p>
<pre><code>from PythonCard import model

class Sounds(model.Background):
    # Some irrelevant methods... #

if __name__ == '__main__':
    app = model.Application(Sounds)
    app.MainLoop()
</code></pre>
<p>How would I go about <strong>dynamically</strong> obtaining a list of all the "Button" components (for example) from within the GUI class?</p>
<p>Components are accessed in the manner <code>self.components.<component name></code> so my initial thought was <code>for x in self.components: ...</code>, but <code>self.components</code> is not iterable.</p>
|
<p>It would be much cleaner if you were able to get the list of components from elsewhere, but I think it should work if you do something like:</p>
<pre><code>for comp_name in dir(self.components):
    if comp_name.startswith('_'):  # ignore special members like __repr__
        continue
    component = getattr(self.components, comp_name)
    ...
</code></pre>
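<p>To keep only the Button components, you can combine this pattern with an <code>isinstance</code> check. Here is a runnable sketch of the same idea using hypothetical stand-in classes (PythonCard itself is not needed to illustrate the pattern):</p>

```python
class Button:
    """Stand-in for a PythonCard Button component (hypothetical)."""
    def __init__(self, label):
        self.label = label

class Components:
    """Stand-in for the object behind self.components."""
    pass

comps = Components()
comps.btnPlay = Button("Play")
comps.btnFile = Button("Select File")
comps.fldFilename = "anykey.wav"   # not a Button, should be skipped

# same dir()/getattr() pattern, keeping only Button instances
buttons = [getattr(comps, name) for name in dir(comps)
           if not name.startswith('_')
           and isinstance(getattr(comps, name), Button)]
print(sorted(b.label for b in buttons))
```

<p>In real PythonCard code you would test against the actual Button component class instead of the stand-in above.</p>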
|
python|user-interface|wxpython|pythoncard
| 0 |
1,905,686 | 3,554,763 |
Histogram Equalization
|
<p>I am a beginner in Python. I want to make a small project on histogram equalisation. Basically, I want to include options for changing contrast and colour, a crop option, etc. in my project. I am blank right now. Please suggest something. I am very keen to make this project, but how do I start? </p>
|
<p>Python's <a href="http://www.pythonware.com/products/pil/" rel="nofollow noreferrer">PIL module</a> has methods for controlling <a href="http://www.pythonware.com/library/pil/handbook/imageenhance.htm" rel="nofollow noreferrer">contrast, color</a>, and <a href="http://www.pythonware.com/library/pil/handbook/image.htm" rel="nofollow noreferrer">cropping</a>.</p>
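<p>With PIL itself, <code>ImageOps.equalize(image)</code> performs histogram equalization directly. To see what it does conceptually, here is a minimal sketch of the classic cumulative-histogram mapping applied to a small grayscale array (the sample pixel values are made up):</p>

```python
import numpy as np

def equalize(img):
    # img: 2-D uint8 array of gray levels
    hist = np.bincount(img.ravel(), minlength=256)   # per-level pixel counts
    cdf = hist.cumsum()                              # cumulative distribution
    cdf_min = cdf[cdf > 0][0]
    # classic histogram-equalization lookup table
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255).astype(np.uint8)
    return lut[img]

img = np.array([[52, 55, 61],
                [59, 79, 61],
                [85, 61, 52]], dtype=np.uint8)
out = equalize(img)
print(out.min(), out.max())  # contrast stretched to the full 0-255 range
```

<p>The mapping spreads the pixel intensities over the whole 0–255 range, which is what increases the contrast.</p>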
|
python|image-processing
| 3 |
1,905,687 | 50,484,912 |
How to execute parallel computing between several instances in Google Cloud Compute Engine?
|
<p>I've recently encountered a problem processing a pickle file of 8 gigabytes with a Python script using VMs in Google Cloud Compute Engine. The problem is that the process takes too long, and I am searching for ways to decrease the processing time. One possible solution could be splitting the processes in the script or mapping them across the CPUs of several VMs. If somebody knows how to do this, please share!</p>
|
<p>You can use <a href="https://cloud.google.com/solutions/using-clusters-for-large-scale-technical-computing" rel="nofollow noreferrer">Clusters</a> for Large-scale Technical Computing in the Google Cloud Platform (GCP). Open-source software like <a href="https://github.com/gc3-uzh-ch/elasticluster" rel="nofollow noreferrer">ElastiCluster</a> provides cluster management and supports provisioning nodes while using Google Compute Engine (GCE). </p>
<p>After the cluster is operational, a workload manager manages task execution and node allocation. There are a variety of popular commercial and open-source workload managers, such as HTCondor from the University of Wisconsin, Slurm from SchedMD, Univa Grid Engine, and LSF Symphony from IBM. </p>
<p>This <a href="https://cloudplatform.googleblog.com/2018/03/easy-HPC-clusters-on-GCP-with-Slurm.html" rel="nofollow noreferrer">article</a> is also helpful.</p>
|
python-3.x|google-compute-engine
| 2 |
1,905,688 | 26,877,141 |
Cython Partial Derivative
|
<p>I have a python script where, as part of an evolutionary optimization algorithm, I'm evaluating partial derivatives many thousands of times. I've done a line by line profile, and this partial derivative calculation is taking up the majority of the run time. I'm using <code>scipy.optimize.approx_fprime</code> to calculate the partial derivatives, and I tried to rewrite it in cython without much success.</p>
<p>The line by line profile is below. My cythonized version of <code>scipy.optimize.approx_fprime</code> is simply called <code>approx_fprime</code>.</p>
<pre><code>Line # Hits Time Per Hit % Time Line Contents
==============================================================
84 @profile
100 1500 14889652 9926.4 25.3 df1 = approx_fprime(inp_nom,evaluate1,epsilon)
101 1500 14939889 9959.9 25.4 df2 = scipy.optimize.approx_fprime(inp_upp,evaluate1,epsilon)
</code></pre>
<p>Below is my cython file.</p>
<pre><code>import numpy as np
cimport numpy as np
cimport cython

@cython.boundscheck(False)  # turn off bounds-checking for entire function
def approx_fprime(np.ndarray xk, f, double epsilon, *args):
    # From scipy.optimize.approx_fprime
    f0 = f(*((xk,) + args))
    cdef np.ndarray grad = np.zeros((len(xk),), float)
    cdef np.ndarray ei = np.zeros((len(xk),), float)
    cdef np.ndarray d
    for k in xrange(len(xk)):
        ei[k] = 1.0
        d = epsilon * ei  # recompute the step inside the loop, after setting ei[k]
        grad[k] = (f(*((xk + d,) + args)) - f0) / d[k]
        ei[k] = 0.0
    return grad
</code></pre>
<p>I've tried to put in all the relevant type declarations and ensure that it plays nicely with numpy. Ultimately, though, the proof is in the pudding, as they say. This version is just not really any faster than the scipy version. The function only has a few variables, so it's not a huge computation and there's probably only room for an incremental improvement in one iteration. However, the function gets called over and over because this is used in an evolutionary optimization algorithm, and so I'm expecting/hoping that an incremental performance gain multiplied many times over will have a big payoff.</p>
<p>Could a cython expert out there take a look at this code and help me figure out if I'm on the right track, or if this is just a fool's errand?</p>
<p>Thank you!</p>
|
<p>The first thing to notice is that optimizing code is all about finding bottlenecks in your code. There are typically few functions, loops, etc which consume most of the time. Those are the right candidates for optimization. So most important thing: <strong><em>Evaluate your code performance with a profiler</em></strong>. </p>
<p>The first thing when optimizing your python code is to go through the code line by line and check each line if new objects are created. That's because <strong>object creation is extremely expensive compared to simple arithmetic</strong>. Rule of thumb: try to avoid object creation whenever possible. But make sure you don't create any new object in your time critical loops.</p>
<p>Have a look at <code>f*((xk + d,) + args)</code>. This is perfectly fine python code - but unsuitable if you need high performance. It will create a new argument tuple in every step of the loop. Rewriting that in a way that does not create any objects will probably give you a huge performance boost.</p>
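<p>For illustration, here is a plain-Python (not Cython) sketch of one way to avoid building a new argument tuple on every step: perturb <code>xk</code> in place and restore it afterwards. The function name and the quadratic test function below are made up for this example:</p>

```python
import numpy as np

def approx_fprime_inplace(xk, f, epsilon):
    # perturb xk in place instead of allocating (xk + d,) + args each step
    grad = np.empty(xk.shape[0])
    f0 = f(xk)
    for k in range(xk.shape[0]):
        old = xk[k]
        xk[k] = old + epsilon
        grad[k] = (f(xk) - f0) / epsilon
        xk[k] = old  # restore before the next coordinate
    return grad

# forward difference of f(v) = sum(v**2); analytic gradient is 2*v
g = approx_fprime_inplace(np.array([1.0, 2.0]),
                          lambda v: float((v ** 2).sum()), 1e-6)
print(g)
```

<p>The same in-place idea carries over to the Cython version and removes the per-step tuple and array allocations.</p>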
<p>The next step is to start typing statically. Make sure that you type everything that is used in your loops. Typing <code>k</code> will probably gain you a lot.</p>
<p>Afterwards you can try to optimize even further by unsetting the <code>boundscheck</code> etc.</p>
<p>Most important of all is: Do your optimization iteratively and check your performance gain by profiling your code. Most of the time it is not easy to see what really is the bottleneck in your code. Profiling will give you hints: If the optimization did not gain you much, you probably missed the bottleneck. </p>
|
python|numpy|cython
| 0 |
1,905,689 | 56,685,967 |
Asynchronous programming for calculating hashes of files
|
<p>I'm trying to calculate hashes for files to check if any changes have been made.
I have a GUI and some other observers running in the event loop,
so I decided to calculate the hashes of the files [MD5/SHA1, whichever is faster] asynchronously. </p>
<p>Synchronous code :</p>
<pre class="lang-py prettyprint-override"><code>import hashlib
import time

chunk_size = 4 * 1024

def getHash(filename):
    md5_hash = hashlib.md5()
    with open(filename, "rb") as f:
        for byte_block in iter(lambda: f.read(chunk_size), b""):
            md5_hash.update(byte_block)
    print("getHash : " + md5_hash.hexdigest())

start = time.time()
getHash("C:\\Users\\xxx\\video1.mkv")
getHash("C:\\Users\\xxx\\video2.mkv")
getHash("C:\\Users\\xxx\\video3.mkv")
end = time.time()
print(end - start)
</code></pre>
<p>Output of synchronous code : <code>2.4000535011291504</code></p>
<p>Asynchronous code :</p>
<pre class="lang-py prettyprint-override"><code>import hashlib
import aiofiles
import asyncio
import time

chunk_size = 4 * 1024

async def get_hash_async(file_path: str):
    async with aiofiles.open(file_path, "rb") as fd:
        md5_hash = hashlib.md5()
        while True:
            chunk = await fd.read(chunk_size)
            if not chunk:
                break
            md5_hash.update(chunk)
        print("get_hash_async : " + md5_hash.hexdigest())

async def check():
    start = time.time()
    t1 = get_hash_async("C:\\Users\\xxx\\video1.mkv")
    t2 = get_hash_async("C:\\Users\\xxx\\video2.mkv")
    t3 = get_hash_async("C:\\Users\\xxx\\video3.mkv")
    await asyncio.gather(t1, t2, t3)
    end = time.time()
    print(end - start)

loop = asyncio.get_event_loop()
loop.run_until_complete(check())
</code></pre>
<p>Output of asynchronous code : <code>27.957366943359375</code> </p>
<p>Am I doing it right? Or are there any changes to be made to improve the performance of the code?</p>
<p>Thanks in advance.</p>
|
<p>In the sync case, you read the files sequentially. It's faster to read a file in chunks sequentially.</p>
<p>In the async case, your event loop blocks while it's calculating the hash. That's why only one hash can be calculated at a time. <a href="https://stackoverflow.com/questions/868568/what-do-the-terms-cpu-bound-and-i-o-bound-mean">What do the terms “CPU bound” and “I/O bound” mean?</a></p>
<p>If you want to increase the calculating speed, you need to use threads. Threads can be executed on CPU in parallel. Increasing CHUNK_SIZE should also help.</p>
<pre><code>import hashlib
import os
import time
from pathlib import Path
from multiprocessing.pool import ThreadPool

CHUNK_SIZE = 1024 * 1024

def get_hash(filename):
    md5_hash = hashlib.md5()
    with open(filename, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            md5_hash.update(chunk)
    return md5_hash

if __name__ == '__main__':
    directory = Path("your_dir")
    files = [path for path in directory.iterdir() if path.is_file()]
    number_of_workers = os.cpu_count()

    start = time.time()
    with ThreadPool(number_of_workers) as pool:
        files_hash = pool.map(get_hash, files)
    end = time.time()

    print(end - start)
</code></pre>
<p>In the case of calculating the hash for only one file: aiofiles uses a thread for each file, and the OS needs time to create a thread.</p>
|
python|hash|python-asyncio|python-aiofiles
| 1 |
1,905,690 | 56,496,137 |
how to implement an array of Boolean values, indexed by integers 2 to n
|
<p>I don't know how to get an array of Boolean values, indexed by integers 2 to n.</p>
<p>I tried the following code, and it works, but I think it is clumsy, and there must be something better. By the way, I first thought I wouldn't need to write the very first two insert calls, but it seems that in Python, even if I write exactly insert(2, True), Python will just put True in the first element of the array; in other words, a[0] = True, not a[2] = True.</p>
<pre><code>a = []
a.insert(0, 1)
a.insert(1, 1)
for index in range(2, n + 1):
    a.insert(index, True)
</code></pre>
<p>I am seeking another, easier way to implement this [an array of Boolean values, indexed by integers 2 to n].</p>
<p>Edit: I tried to write the pseudocode from <a href="https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes</a>, </p>
<pre><code>Input: an integer n > 1.

Let A be an array of Boolean values, indexed by integers 2 to n,
initially all set to true.

for i = 2, 3, 4, ..., not exceeding √n:
    if A[i] is true:
        for j = i², i²+i, i²+2i, i²+3i, ..., not exceeding n:
            A[j] := false.

Output: all i such that A[i] is true.
</code></pre>
<p>As you can see, I just need a list that starts at index 2. I know how to implement this algorithm, but I just felt that my way of creating [an array of Boolean values, indexed by integers 2 to n] was not good.</p>
|
<p>Python lists are always indexed from zero. If you want to create a list of True values at indices 2 to N, you can do something like this:</p>
<pre><code>N = 5
a = [None] * 2 + [True] * (N-2)
</code></pre>
<blockquote>
<p><code>[None, None, True, True, True]</code></p>
</blockquote>
<p>And use only indices 2 or more later in your code.</p>
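<p>For example, the sieve pseudocode from the question can be implemented with such a padded list. This is just a sketch; the function name is my own:</p>

```python
def primes_up_to(n):
    # indices 0 and 1 are padding, so a[i] corresponds to the integer i
    a = [False, False] + [True] * (n - 1)   # covers indices 0..n
    i = 2
    while i * i <= n:
        if a[i]:
            # mark multiples of i, starting at i*i, as composite
            for j in range(i * i, n + 1, i):
                a[j] = False
        i += 1
    return [i for i, is_prime in enumerate(a) if is_prime]

print(primes_up_to(20))
```

<p>Using <code>False</code> padding instead of <code>None</code> keeps all the entries Booleans, and the final comprehension skips the padding automatically.</p>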
|
python
| 0 |
1,905,691 | 45,001,850 |
Scrapy not accepting japanese characters in spider
|
<p>Here is a part of the source code of the website i am trying to scrape.</p>
<pre><code><th>会社名</th>
<td colspan="2">
<p class="realtorName">
<ruby>株式会社エリア・エステート 川崎店</ruby>
</p>
</td>
</code></pre>
<p>And this is just a test spider to see if scrapy is fetching any data</p>
<pre><code># -*- coding: utf-8 -*-
import scrapy


class TestSpider(scrapy.Spider):
    name = "test"
    allowed_domains = ["homes.co.jp"]
    start_urls = ['http://www.homes.co.jp/realtor/mid-122457hNYEJwIO7kDs/']

    def parse(self, response):
        yield {
            'FAX': response.xpath('//*[@id="anchor_realtorOutline"]/div[1]/table/tbody/tr/th[contains(text(), "FAX")]/following-sibling::td/text()').extract(),
            'Company_Name': response.xpath('//*[@id="anchor_realtorOutline"]/div[1]/table/tbody/tr/th[contains(text(), "会社名")]/following-sibling::td/p[1]/ruby/text()').extract(),
            'TEL': response.xpath('//*[@id="anchor_realtorOutline"]/div[1]/table/tbody/tr/th[contains(text(), "TEL")]/following-sibling::td/text()').extract(),
        }
</code></pre>
<p>The 'TEL' and 'FAX' fields would return data but scrapy throws an error for the field 'Company_Name'</p>
<p>Error:</p>
<pre><code>All strings must be XML compatible: Unicode or ASCII, no NULL bytes or control characters.
</code></pre>
<p>What i wanted to do was match that string in Japanese and obtain the text from the sibling tag as mentioned in the above source code.</p>
<p>And the strange fact is that it ran yesterday and scraped data. Now it's returning errors.</p>
<p>Do i need to do something to include the Japanese characterset?</p>
|
<p>Try prefixing the XPath string with <code>u</code> to make it a Unicode literal, like this:</p>
<pre><code>'Company_Name':response.xpath(u'//*[@id="anchor_realtorOutline"]/div[1]/table/tbody/tr/th[contains(text(), "会社名")]/following-sibling::td/p[1]/ruby/text()').extract(),
</code></pre>
|
python|python-2.7|scrapy
| 1 |
1,905,692 | 61,314,002 |
Python Random Number Diffrent
|
<p>Hi guys, I'm a beginner Python developer. I want to randomly generate a number between 0 and 20. The user needs to guess what the number is. If the user guesses wrong, tell them their guess is either too <strong>high or too low.</strong> I made almost everything, but I couldn't get this part working: if the user guesses wrong, tell them their guess is either too high or too low. </p>
<p>This is the code</p>
<pre><code>import random

guess_count = 0
guess_limit = 3
random.randint = [
    (1,2,3,4,5),
    (6,7,8,9,10),
    (11,12,13,14,15),
    (16,17,18,19,20)
]

while guess_count < guess_limit:
    guess = int(input("Guess: "))
    guess_count += 1
    if guess < random.randint:
        print("Low guess")
    elif guess > random.randint:
        print("High guess")
    elif guess == random.randint:
        print("You won!")
        break
else:
    print("You failed!")
</code></pre>
<p>Can you help me? :)</p>
|
<p>Your issue is specifically here:</p>
<pre class="lang-py prettyprint-override"><code>random.randint = [
    (1,2,3,4,5),
    (6,7,8,9,10),
    (11,12,13,14,15),
    (16,17,18,19,20)
]
</code></pre>
<p>You need to read a bit about <a href="https://docs.python.org/3/tutorial/modules.html" rel="nofollow noreferrer">Python modules</a> and how they work. What you intended to do was:</p>
<pre class="lang-py prettyprint-override"><code>import random

guess_count = 0
guess_limit = 3
random_int = random.randint(1, 20)
print(random_int)

while guess_count < guess_limit:
    guess = int(input("Guess: "))
    guess_count += 1
    if guess < random_int:
        print("Low guess")
    elif guess > random_int:
        print("High guess")
    elif guess == random_int:
        print("You won!")
        break
else:
    print("You failed!")
</code></pre>
|
python|random
| 1 |
1,905,693 | 61,548,347 |
How to log each SSH session packet with Paramiko?
|
<p>I am working with Paramiko 2.7.1, using a simple client implementation for running commands on remote SSH servers.</p>
<p>On most of my hosts, it works great. Input commands go out, output (if exists) comes back.</p>
<p>One specific type of host (an IBM VIOS partition to be precise) is giving me headaches in that the commands execute, but the output is always empty.
I have used PuTTY in an interactive session to log all SSH packets and check for any differences and, at least during an interactive session, no differences present between a working and a non-working host.
I have enabled Paramiko logging with:</p>
<pre><code>from logging import basicConfig, DEBUG
import logging
from paramiko.util import log_to_file

basicConfig(level=DEBUG)
logging.getLogger("paramiko").setLevel(logging.DEBUG)
log_to_file('ssh.log')
</code></pre>
<p>But the output doesn't dump each packet. I have done a search for any parameters or methods that would dump those packets but I've come up empty.
Wireshark is not an option since we are talking about an encrypted connection.
I would prefer to keep using <code>exec_command</code> instead of having to refactor everything and adapt to using an SSH shell.
So, in the end. Is there any way to dump the entire SSH session with Paramiko? I can handle either SSH packets or raw data.</p>
<hr>
<p>Edit 1: I have remembered that PuTTY's <code>plink.exe</code> does ssh exec commands, so I used it to compare both SSH server's output and stumbled onto the solution to my base problem: <a href="https://www.ibm.com/support/pages/unable-execute-commands-remotely-vio-server-padmin-user-ssh" rel="nofollow noreferrer">https://www.ibm.com/support/pages/unable-execute-commands-remotely-vio-server-padmin-user-ssh</a><br>
Still, I'd rather have captured the session with Paramiko, since I will not always be able to simulate with other tools...</p>
|
<p>In addition to enabling logging, call <a href="https://docs.paramiko.org/en/stable/api/transport.html#paramiko.transport.Transport.set_hexdump" rel="nofollow noreferrer"><code>Transport.set_hexdump()</code></a>:</p>
<pre><code>client.get_transport().set_hexdump(True)
</code></pre>
<hr>
<p>Regarding your original problem, see also:<br>
<a href="https://stackoverflow.com/q/56066517/850848">Command executed with Paramiko does not produce any output</a></p>
|
python|ssh|paramiko
| 0 |
1,905,694 | 60,747,944 |
How change text in button
|
<p>What I want to do here is have the pushButton in PyQt5 change to "Working..." and red when clicked, which it currently does. The thing is, I need it to also change back to the default "SCAN" and green when the method the button is linked to finishes running.</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
import sys
import pyautogui


class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.showMaximized()
        MainWindow.setMinimumSize(QtCore.QSize(0, 0))
        MainWindow.setMaximumSize(QtCore.QSize(3840, 2160))
        font = QtGui.QFont()
        font.setFamily("Arial Black")
        MainWindow.setFont(font)
        MainWindow.setStyleSheet("background-color: rgba(0, 85, 127, 100);")
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setGeometry(QtCore.QRect(250, 250, 400, 150))
        font = QtGui.QFont()
        font.setFamily("Tahoma")
        font.setPointSize(24)
        font.setBold(True)
        font.setWeight(75)
        self.pushButton.setFont(font)
        self.pushButton.setStyleSheet("background-color: rgb(0, 170, 0);\n"
                                      "color: rgb(255, 255, 255);")
        self.pushButton.setObjectName("pushButton")
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setGeometry(QtCore.QRect(730, 300, 701, 111))
        font = QtGui.QFont()
        font.setPointSize(18)
        font.setBold(True)
        font.setItalic(False)
        font.setWeight(75)
        self.label.setFont(font)
        self.label.setLayoutDirection(QtCore.Qt.LeftToRight)
        self.label.setObjectName("label")
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 1920, 18))
        self.menubar.setObjectName("menubar")
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.label.setStyleSheet("background-color: rgba(0, 85, 127, 0);\n"
                                 "color: rgb(255, 255, 255);")
        self.pushButton.setText(_translate("MainWindow", "SCAN"))
        self.label.setText(_translate("MainWindow", "WELCOME"))
        self.pushButton.clicked.connect(self.copy)

    def copy(self, MainWindow):
        self.pushButton.setText('WORKING...')
        self.pushButton.setStyleSheet("background-color: rgb(250, 0, 0);\n"
                                      "color: rgb(255, 255, 255);")
        testprompt = storeid = pyautogui.prompt(text='test', title='test')


class Application():
    def run():
        import sys
        app = QtWidgets.QApplication(sys.argv)
        MainWindow = QtWidgets.QMainWindow()
        ui = Ui_MainWindow()
        ui.setupUi(MainWindow)
        MainWindow.show()
        sys.exit(app.exec_())


Application.run()
</code></pre>
|
<p>Instead of <code>pyautogui.prompt</code>, I would recommend using one of the Qt classes, like QMessageBox or QInputDialog since you're already using a massive GUI toolkit with tons of widgets and classes. </p>
<p>In the button stylesheet, you can set the background color to red when the button is disabled. This way you can disable the button, call the dialog window, and then enable it afterwards. The stylesheet looks like this:</p>
<pre><code>self.pushButton.setStyleSheet('''
    QPushButton {
        background-color: rgb(0, 170, 0);
        color: rgb(255, 255, 255);
    }
    QPushButton:disabled {
        background-color: rgb(250, 0, 0);
    }''')
</code></pre>
<p>And the copy function</p>
<pre><code>def copy(self, MainWindow):
    self.pushButton.setText('WORKING...')
    self.pushButton.setDisabled(True)
    text, pressed = QtWidgets.QInputDialog.getText(None, 'Test', 'Test')
    self.pushButton.setText('SCAN')
    self.pushButton.setEnabled(True)
</code></pre>
|
python|pyqt5
| 0 |
1,905,695 | 55,270,266 |
pytorch linear regression given wrong results
|
<p>I implemented a simple linear regression and I’m getting some poor results. Just wondering if these results are normal or I’m making some mistake.</p>
<p>I tried different optimizers and learning rates, I always get bad/poor results</p>
<p>Here is my code:</p>
<pre><code>import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
from torch.autograd import Variable


class LinearRegressionPytorch(nn.Module):
    def __init__(self, input_dim=1, output_dim=1):
        super(LinearRegressionPytorch, self).__init__()
        self.linear = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        x = x.view(x.size(0), -1)
        y = self.linear(x)
        return y


input_dim = 1
output_dim = 1

if torch.cuda.is_available():
    model = LinearRegressionPytorch(input_dim, output_dim).cuda()
else:
    model = LinearRegressionPytorch(input_dim, output_dim)

criterium = nn.MSELoss()
l_rate = 0.00001
optimizer = torch.optim.SGD(model.parameters(), lr=l_rate)
#optimizer = torch.optim.Adam(model.parameters(), lr=l_rate)
epochs = 100

#create data
x = np.random.uniform(0, 10, size=100)  #np.linspace(0,10,100);
y = 6*x + 5
mu = 0
sigma = 5
noise = np.random.normal(mu, sigma, len(y))
y_noise = y + noise

#pass it to pytorch
x_data = torch.from_numpy(x).float()
y_data = torch.from_numpy(y_noise).float()

if torch.cuda.is_available():
    inputs = Variable(x_data).cuda()
    target = Variable(y_data).cuda()
else:
    inputs = Variable(x_data)
    target = Variable(y_data)

for epoch in range(epochs):
    #predict data
    pred_y = model(inputs)
    #compute loss
    loss = criterium(pred_y, target)
    #zero grad and optimization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    #if epoch % 50 == 0:
    #    print(f'epoch = {epoch}, loss = {loss.item()}')

#print params
for name, param in model.named_parameters():
    if param.requires_grad:
        print(name, param.data)
</code></pre>
<p>There are the poor results :</p>
<pre class="lang-py prettyprint-override"><code>linear.weight tensor([[1.7374]], device='cuda:0')
linear.bias tensor([0.1815], device='cuda:0')
</code></pre>
<p>The results should be weight = 6 , bias = 5</p>
|
<h1>Problem Solution</h1>
<p>Actually your <code>batch_size</code> is problematic. If you have it set to one, your <code>target</code> needs the same shape as the outputs (which you are, correctly, reshaping with <code>view(-1, 1)</code>).</p>
<p>Your loss should be defined like this:</p>
<pre><code>loss = criterium(pred_y, target.view(-1, 1))
</code></pre>
<p>With this fix, the network is correct.</p>
<h1>Results</h1>
<p>Your results <strong>will not be</strong> <code>bias=5</code> (yes, <code>weight</code> will go towards <code>6</code> indeed) as you are adding random noise to <code>target</code> (and as it's a single value for all your data points, only <code>bias</code> will be affected).</p>
<p>If you want <code>bias</code> equal to <code>5</code> remove addition of noise.</p>
<p><strong>You should increase number of your epochs as well, as your data is quite small and network (linear regression in fact) is not really powerful.</strong> <code>10000</code> say should be fine and your loss should oscillate around <code>0</code> (if you change your noise to something sensible).</p>
<h2>Noise</h2>
<p>You are creating multiple gaussian distributions with different variations, hence your loss would be higher. Linear regression is unable to fit your data and find sensible bias (as the optimal slope is still approximately <code>6</code> for your noise, you may try to increase multiplication of <code>5</code> to <code>1000</code> and see what <code>weight</code> and <code>bias</code> will be learned).</p>
<h1>Style (a little offtopic)</h1>
<p>Please read documentation about PyTorch and keep your code up to date (e.g. <code>Variable</code> is deprecated in favor of <code>Tensor</code> and rightfully so). </p>
<p>This part of code:</p>
<pre><code>x_data = torch.from_numpy(x).float()
y_data = torch.from_numpy(y_noise).float()
if torch.cuda.is_available():
    inputs = Tensor(x_data).cuda()
    target = Tensor(y_data).cuda()
else:
    inputs = Tensor(x_data)
    target = Tensor(y_data)
</code></pre>
<p>Could be written succinctly like this (without much thought):</p>
<pre><code>inputs = torch.from_numpy(x).float()
target = torch.from_numpy(y_noise).float()
if torch.cuda.is_available():
    inputs = inputs.cuda()
    target = target.cuda()
</code></pre>
<p>I know deep learning has its reputation for bad code and bad practices, but <strong>please</strong> do not help spread this approach.</p>
|
linear-regression|pytorch
| 1 |
1,905,696 | 57,495,796 |
How can I set a new value for a cell as a float in pandas dataframe (Python) - The DataFrame rounds to integer when in nested for loop
|
<p><strong>FOUND SOLUTION: I needed to change datatype for dataframe:</strong></p>
<pre><code>for p in periods:
    df['Probability{}'.format(p)] = 0
</code></pre>
<pre><code>for p in periods:
    df['Probability{}'.format(p)] = float(0)
</code></pre>
<p>Alternatively do as in approved answer below.</p>
<hr>
<p>I am assigning new values to cells as floats, but they are stored as integers, and I don't get why.
It is part of a data mining project, which contains nested loops. </p>
<p>I am using Python 3.</p>
<p>I tried different modes of writing into a cell with pandas:
<code>df.at[index, col] = float(val)</code>,
<code>df.set_value(index, col, float(val))</code>, and
<code>df[col][index] = float(val)</code>, but none of them delivered a solution. The output I got was:</p>
<pre><code>In: print(df[index][col])
Out: 0
</code></pre>
<pre><code>In: print(val)
Out: 0.4774410939826658
</code></pre>
<p>Here is a simplified version of the loop</p>
<pre><code>periods = [7,30,90,180]
for p in periods:
df['Probability{}'.format(p)] = 0
for i in range(len(df.index)):
for p in periods:
if i >= p - 1:
# Getting relevant data and computing value
vals = [df['Close'][j] for j in range(i - p, i)]
probability = (len([j for j in vals if j>0])/len(vals))
            # Assigning value to cell in pd.dataframe
df.at[df.index[i], 'Probability{}'.format(p)] = float(probability)
</code></pre>
<p>I don't get why the pandas DataFrame changes floats to integers and rounds them up or down. When I assigned values to cells directly in the console, I did not experience any problems.</p>
<p>Are there any workarounds or solutions to this problem?</p>
<p>I had no problems before I nested the for loop over periods to avoid hard-coding a lot of trivial code.</p>
<p>NB: It also seems that if I multiply, e.g. <code>100 * val = new_val</code>, only the rounded number is multiplied. So <code>100 * val = new_val = 0</code>, because the number is rounded down to <code>0</code> first.</p>
<p>I also tried to change the datatype of the dataframe: </p>
<pre><code>df = df.apply(pd.to_numeric)
</code></pre>
<p>All the best.</p>
|
<p>Seems like a problem with incorrect data types in your dataframe. Your last attempt at converting the whole <code>df</code> was probably very close. Try and use</p>
<pre><code>df['Close'] = pd.to_numeric(df['Close'], downcast="float")
</code></pre>
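A minimal sketch of the effect (the sample data here is made up):

```python
import pandas as pd

# Values read in as strings (e.g. from a CSV) behave badly in float math;
# downcasting converts the column to a compact floating-point dtype
df = pd.DataFrame({"Close": ["1.5", "-0.25", "2.0"]})
df["Close"] = pd.to_numeric(df["Close"], downcast="float")
print(df["Close"].dtype)  # float32
```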
|
python|pandas|dataframe|for-loop
| 1 |
1,905,697 | 54,187,028 |
include only *.pyc files in python wheel
|
<p>How can I include only *.pyc files in a python wheel?
When creating eggs, it used to be possible to run
<code>python setup.py bdist_egg --exclude-source-files</code></p>
<p>Given that eggs have been <a href="https://packaging.python.org/discussions/wheel-vs-egg/" rel="noreferrer">replaced by wheels</a>, how would I reproduce a similar result?</p>
|
<p>I haven't tried it, but it looks like <a href="https://pyc-wheel.readthedocs.io" rel="nofollow noreferrer">pyc-wheel</a> does precisely that.</p>
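If you prefer rolling it yourself, the core step — byte-compiling each module and dropping the source before repacking the wheel archive — can be sketched with only the standard library (the module name and contents here are invented):

```python
import pathlib
import py_compile
import tempfile

src_dir = pathlib.Path(tempfile.mkdtemp())
src = src_dir / "mod.py"
src.write_text("ANSWER = 42\n")

# Compile next to the source with the same stem, rather than into __pycache__
pyc = src.with_suffix(".pyc")
py_compile.compile(str(src), cfile=str(pyc))
src.unlink()  # remove the source, keeping only the bytecode

print(sorted(p.name for p in src_dir.iterdir()))  # ['mod.pyc']
```

You would then repeat this for every module in the unpacked wheel, update the RECORD file, and zip the result back up — which is essentially what a dedicated tool automates for you.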
|
python-3.x|python-wheel|egg
| 1 |
1,905,698 | 54,070,055 |
Get address of a global symbol from symbol name with GDB Python API
|
<p>Is there a way by which I can get the address of a global symbol in my binary if I know its name with the GDB Python API ?</p>
<p>Is <code>python print(gdb.parse_and_eval('symbol').address)</code> the correct method to obtain this value?</p>
|
<p>Well, you answered yourself already correctly. Easy enough to verify:</p>
<pre><code>(gdb) p &t
$2 = (time_t *) 0x562076476018 <t>
(gdb) python print(gdb.parse_and_eval('t').address)
0x562076476018 <t>
(gdb)
</code></pre>
|
gdb|gdb-python
| 1 |
1,905,699 | 22,937,269 |
Comparing two string in python
|
<p>Is there any built-in function in Python for comparing two strings?
I tried comparing two strings using the <code>==</code> operator, but it is not working.</p>
<pre><code>try:
if company=="GfSE-Zertifizierungen":
y=2
if x<y:
print "**************Same company***********"
x=x+1
flag=0
pass
if x==y:
flag=1
x=0
count=count+1
except Exception as e:
print e
</code></pre>
<p>This does not show any error, but it does not work either.
Can anyone tell me where I am going wrong?</p>
|
<p>In Python, to compare strings you should use the <code>==</code> operator,
e.g.:</p>
<pre><code>a = "hello"
b = "hello2"
c = "hello"
</code></pre>
<p>then</p>
<pre><code>a == b # should return False
a == c # should return True
</code></pre>
<p>Suggestion: print the content of your variable "company" to check what's inside of it. Be sure to have the same case (lower/upper letters).</p>
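For example, a trailing space or a case mismatch is enough to make the comparison fail, so it can help to normalize both sides first (the values here are made up):

```python
company = "GfSE-Zertifizierungen "   # note the trailing space
target = "gfse-zertifizierungen"

print(company == target)                          # False: whitespace and case differ
print(company.strip().lower() == target.lower())  # True after normalizing
```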
|
python|string
| 2 |