Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64) |
---|---|---|---|---|---|---|
1,907,700 | 60,252,148 |
Django and HTML : Too many values to unpack (expected 2)
|
<p>I'm new to Django and HTML, and I have to do a project for school. I tried to learn how forms work with Django, and I think I set up my Django form correctly, but I simply don't know how to make it work with my HTML.
I get the <code>too many values to unpack (expected 2)</code> error.
Here is the important part of my form:</p>
<pre><code>class ConfiguratorForm(forms.Form):
    queryOfProject = TypeOfProgram.objects.values_list('name')
    queryOfFramework = Framework.objects.values_list('name', 'version')
    listFramework = []
    listProject = []
    listFramework = [(q[0], q[1]) for q in queryOfFramework]
    listProject = [(q[0]) for q in queryOfProject]
    print("list of")
    print(listFramework)
    print(listProject)
    typeOfTheproject = forms.ChoiceField(choices=listProject)
    wantedFramework = forms.MultipleChoiceField(choices=listFramework)
</code></pre>
<p>Where <code>listProject</code> and <code>listFramework</code> are both lists containing elements.
I know it is not good code, but I'm learning how everything works and I don't have much time.</p>
<p>As for the HTML code, I honestly don't know what exactly I wrote. I've searched for so long and tried a lot of things I've seen.</p>
<pre><code>{% block content %}

<form method="post">
    {% csrf_token %}
    {{ form }}
    <input type="submit" value="Submit">
</form>
{% endblock %}
</code></pre>
<p>Could anyone tell me how to correct the HTML file, or whether my form is wrong?</p>
<p>EDIT: Here is the full trace, my model and the view:</p>
<p>Models:</p>
<pre><code>class web(models.Model):
    typeOfWeb = models.CharField(max_length=50)

    def __str__(self):
        template = '{0.typeOfWeb}'
        return template.format(self)


class TypeOfProgram(models.Model):
    name = models.CharField(max_length=200)

    def __str__(self):
        template = '{0.name}'
        return template.format(self)

    def __iter__(self):
        return iter(self.name)


class Language(models.Model):
    name = models.CharField(max_length=200)
    version = models.CharField(max_length=200)

    def __str__(self):
        template = '{0.name} {0.version}'
        return template.format(self)


class Pro(models.Model):
    advantage = models.CharField(max_length=200)

    def __str__(self):
        template = '{0.advantage}'
        return template.format(self)


class Con(models.Model):
    disadvantage = models.CharField(max_length=200)

    def __str__(self):
        template = '{0.disadvantage}'
        return template.format(self)


class Framework(models.Model):
    name = models.CharField(max_length=200)
    language = models.ManyToManyField(Language)
    version = models.CharField(max_length=20)
    typeOfFramework = models.ForeignKey(TypeOfProgram, on_delete=models.CASCADE)
    cotationOfFramework = models.IntegerField(default=5)  # need to increase from pros and cons
    pros = models.ManyToManyField(Pro)
    cons = models.ManyToManyField(Con)
    additionalInformation = models.ForeignKey(web, blank=True, null=True, default=None, on_delete=models.CASCADE)

    def __str__(self):
        template = '{0.name} {0.version} {0.typeOfFramework} {0.cotationOfFramework} {0.additionalInformation}'
        return template.format(self)

    def __iter__(self):
        return [self.name,
                self.language,
                self.version,
                self.typeOfFramework,
                self.cotationOfFramework,
                self.pros,
                self.cons,
                self.additionalInformation]
</code></pre>
<p>View:</p>
<pre><code>def index(request):
    context = {}
    context['form'] = ConfiguratorForm()
    return render(request, "forms/configurator.html", context)
    """all the configurator algorithm"""
    if request.method == 'POST':
        form = ConfiguratorForm(request.POST)
        #Check if the form is valid:
        if form.is_valid():
            typeOfProject = form.cleaned_data['typeOfTheproject']
            wantedFramework = form.cleaned_data['wantedFramework']
            listOfmember = form.cleaned_data['listOfmember']
            print(typeOfProject, wantedFramework, listOfmember)
            #if the project is a web application, then 2 kinds of framework are needed
            #need to parse instances of the form
            #get the database
            listOfProject = TypeOfProgram.objects.all()
            listOfFramework = Framework.objects.all()
            listOfLanguages = Language.objects.all()
            easiestChoice = ""
            bestChoice = ""
            otherChoices = [""]
            if typeOfproject == 'Web Application':
                typeOfproject = ["Backend", "Frontend"]
            else:
                print("hello world")
            neededFramework = []
            #take all the needed frameworks asked by the team
            for i in range(len(wantedFramework)):
                neededFramework.append(wantedFramework[i])
            #configure easiest choice
            #for all types of the project (needed for web development)
            for i in range(len(typeOfproject)):
                #for all asked frameworks
                for j in neededFramework:
                    #for all frameworks
                    for k in Framework.objects.value_list("name"):
                        #check if k is a good framework for the project
                        for l in typeOfproject:
                            currentFramework = Framework.objects.filter(name=k)
                            if l == currentFramework.typeOfFramework:
                                #verify they are the same
                                if k == i:
                                    if easiestChoice == "":
                                        easiestChoice = i
                                    #check the cotation
                                    else:
                                        check = Framework.objects.filter(name=i)
                                        check2 = Framework.objects.filter(name=easiestChoice)
                                        if check.cotationOfFramework > check2.cotationOfFramework:
                                            easiestChoice = i
            #configure best choice
            for i in range(len(typeOfproject)):
                for j in listOfFramework:
                    for l in typeOfproject:
                        currentFramework = Framework.objects.filter(name=j)
                        if l == currentFramework.typeOfFramework:
                            if bestChoice == "":
                                bestChoice = j
                            else:
                                check = Framework.objects.filter(name=bestChoice)
                                if j.cotation > check.cotation:
                                    bestChoice = j.name
            #give all other choices
            for i in range(len(typeOfproject)):
                for j in listOfFramework:
                    for l in typeOfproject:
                        currentFramework = Framework.objects.filter(name=j)
                        if l == currentFramework.typeOfFramework:
                            otherChoices.append(j.name)
            return render(request, 'forms/configurator.html')
    return render(request, 'forms/configurator.html')
</code></pre>
<p>Trace:</p>
<pre><code>Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/configurator/
Django Version: 3.0
Python Version: 3.7.2
Installed Applications:
['configurator.apps.ConfiguratorConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware']
Template error:
In template C:\Users\victo\OneDrive\Documents\BAC 3\Projet Individuel\1920_INFOB318_CPDI\code\ProjetIndividuel\CPDI\configurator\templates\forms\configurator.html, error at line 5
too many values to unpack (expected 2)
1 : {% block content %}
2 :
3 : <form method="post">
4 : {% csrf_token %}
5 : {{ form }}
6 : <input type="submit" value="Submit">
7 : </form>
8 : {% endblock %}
Traceback (most recent call last):
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\core\handlers\exception.py", line 34, in inner
response = get_response(request)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\core\handlers\base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\core\handlers\base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "C:\Users\victo\OneDrive\Documents\BAC 3\Projet Individuel\1920_INFOB318_CPDI\code\ProjetIndividuel\CPDI\configurator\views.py", line 18, in index
return render(request, "forms/configurator.html", context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\shortcuts.py", line 19, in render
content = loader.render_to_string(template_name, context, request, using=using)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\loader.py", line 62, in render_to_string
return template.render(context, request)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\backends\django.py", line 61, in render
return self.template.render(context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\base.py", line 171, in render
return self._render(context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\base.py", line 163, in _render
return self.nodelist.render(context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\base.py", line 936, in render
bit = node.render_annotated(context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\base.py", line 903, in render_annotated
return self.render(context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\loader_tags.py", line 53, in render
result = self.nodelist.render(context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\base.py", line 936, in render
bit = node.render_annotated(context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\base.py", line 903, in render_annotated
return self.render(context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\base.py", line 992, in render
return render_value_in_context(output, context)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\template\base.py", line 971, in render_value_in_context
value = str(value)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\utils\html.py", line 373, in <lambda>
klass.__str__ = lambda self: mark_safe(klass_str(self))
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\forms\forms.py", line 137, in __str__
return self.as_table()
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\forms\forms.py", line 279, in as_table
errors_on_separate_row=False,
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\forms\forms.py", line 238, in _html_output
'field_name': bf.html_name,
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\utils\html.py", line 373, in <lambda>
klass.__str__ = lambda self: mark_safe(klass_str(self))
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\forms\boundfield.py", line 33, in __str__
return self.as_widget()
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\forms\boundfield.py", line 89, in as_widget
attrs = self.build_widget_attrs(attrs, widget)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\forms\boundfield.py", line 224, in build_widget_attrs
if widget.use_required_attribute(self.initial) and self.field.required and self.form.use_required_attribute:
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\forms\widgets.py", line 702, in use_required_attribute
return use_required_attribute and first_choice is not None and self._choice_has_empty_value(first_choice)
File "C:\Users\victo\OneDrive\Documents\Django\ProjetIndividuel\env\lib\site-packages\django\forms\widgets.py", line 688, in _choice_has_empty_value
value, _ = choice
Exception Type: ValueError at /configurator/
Exception Value: too many values to unpack (expected 2)
</code></pre>
|
<p>Your code is very strange...</p>
<p>Choice fields expect a list (or tuple) of two-item pairs: <code>(value, label)</code>.</p>
<p>To fix <code>too many values to unpack (expected 2)</code>, change these lines in your form:</p>
<pre><code>typeOfTheproject = forms.ChoiceField(choices = listProject)
wantedFramework = forms.MultipleChoiceField(choices = listFramework)
</code></pre>
<p>to:</p>
<p>FIRST VERSION CODE:</p>
<pre><code>class ConfiguratorForm(forms.Form):
    queryOfProject = TypeOfProgram.objects.values_list('name')
    queryOfFramework = Framework.objects.values_list('name', 'version')
    listOfProject = []
    listOfFramework = []
    listOfFramework += queryOfFramework
    listOfProject += queryOfProject

    listFramework = []
    listProject = []
    for i in range(0, len(listOfFramework)):
        # values_list() yields tuples, so index by position
        listFramework.append(listOfFramework[i][0] + " " + listOfFramework[i][1])
    for i in range(0, len(listOfProject)):
        listProject.append(listOfProject[i][0])

    typeOfTheproject = forms.ChoiceField(choices=[(x, x) for x in listProject])  # <---
    wantedFramework = forms.MultipleChoiceField(choices=[(x, x) for x in listFramework])  # <---
</code></pre>
<p>SECOND VERSION CODE:</p>
<pre><code>class ConfiguratorForm(forms.Form):
    queryOfProject = TypeOfProgram.objects.values_list('name')
    queryOfFramework = Framework.objects.values_list('name', 'version')

    listFramework = [(q[0], q[1]) for q in queryOfFramework]  # <--- (value, label) pairs
    listProject = [(q[0], q[0]) for q in queryOfProject]      # <--- (value, label) pairs

    print("list of")
    print(listFramework)
    print(listProject)

    typeOfTheproject = forms.ChoiceField(choices=listProject)
    wantedFramework = forms.MultipleChoiceField(choices=listFramework)
</code></pre>
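<p>As a quick sanity check outside Django, the unpacking behaviour behind the error can be reproduced with plain lists (Python 3 syntax; the strings here are illustrative):</p>

```python
# ChoiceField iterates over `choices` and unpacks each entry as
# (value, label). A list of 2-tuples unpacks cleanly:
good = [('Django 3.0', 'Django 3.0'), ('Flask 1.1', 'Flask 1.1')]
for value, label in good:
    pass

# A list of bare strings (roughly what values_list('name') produced here)
# makes Python try to unpack each string into two names:
bad = ['Web Application']
try:
    for value, label in bad:
        pass
except ValueError as exc:
    print(exc)  # too many values to unpack (expected 2)
```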
|
python|html|django
| 0 |
1,907,701 | 1,458,203 |
Reading a float from string
|
<p>I have a simple string that I want to read into a float without losing any <strong>visible</strong> information as illustrated below:</p>
<pre><code>s = ' 1.0000\n'
</code></pre>
<p>When I do <code>f = float(s)</code>, I get <code>f=1.0</code></p>
<p>How to trick this to get <code>f=1.0000</code> ?</p>
<p>Thank you</p>
|
<p><strong>Direct answer</strong>: You can't. Floats are imprecise, by design. While python's floats have more than enough precision to represent 1.0000, they will <em>never</em> represent a "1-point-zero-zero-zero-zero". Chances are, this is as good as you need. You can always use string formatting, if you need to display four decimal digits.</p>
<pre><code>print '%.4f' % f
</code></pre>
<p><strong>Indirect answer</strong>: Use the <code>decimal</code> module.</p>
<pre><code>from decimal import Decimal
d = Decimal('1.0000')
</code></pre>
<p>The <code>decimal</code> package is designed to handle all these issues with arbitrary precision. A decimal "1.0000" is <em>exactly</em> 1.0000, no more, no less. Note, however, that complications with rounding means you can't convert from a <code>float</code> directly to a <code>Decimal</code>; you have to pass a string (or an integer) to the constructor. </p>
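<p>A short sketch of the difference (Python 3 syntax):</p>

```python
from decimal import Decimal

s = ' 1.0000\n'

f = float(s)
d = Decimal(s.strip())

# The float value carries no notion of significant trailing zeros;
# the Decimal remembers exactly the digits it was given.
print(f)           # 1.0
print(d)           # 1.0000
print('%.4f' % f)  # 1.0000 -- formatting recovers the display, not the value
```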
|
python|string|floating-point
| 9 |
1,907,702 | 44,352,986 |
Sample RDD element(s) according to weighted probability [Spark]
|
<p>In PySpark I have an RDD composed by (key;value) pairs where <em>key</em>s are sequential integers and <em>value</em>s are floating point numbers.</p>
<p>I'd like to sample exactly one element from this RDD with probability proportional to <em>value</em>.</p>
<p>In a naive manner, this task can be accomplished as follows:</p>
<pre><code>pairs = myRDD.collect() #now pairs is a list of (key;value) tuples
K, V = zip(*pairs) #separate keys and values
V = numpy.array(V)/sum(V) #normalise probabilities
extractedK = numpy.random.choice(K,size=1,replace=True, p=V)
</code></pre>
<p>What concerns me is the <code>collect()</code> operation which, as you might know, loads the entire list of tuples in memory, which can be quite expensive. I'm aware of <code>takeSample()</code>, which is great when elements should be extracted uniformly, but what happens if elements should be extracted according to weighted probabilities?</p>
<p>Thanks!</p>
|
<p>Here is an algorithm I worked out to do this:</p>
<blockquote>
<p><strong>EXAMPLE PROBLEM</strong></p>
<p>Assume we want to sample 10 items from an RDD on 3 partitions like this:</p>
<ul>
<li>P1: ("A", 0.10), ("B", 0.10), ("C", 0.20)</li>
<li>P2: ("D": 0.25), ("E", 0.25)</li>
<li>P3: ("F", 0.10)</li>
</ul>
</blockquote>
<p>Here is the high-level algorithm:</p>
<blockquote>
<p><strong>INPUT:</strong> <code>number of samples</code> and a <code>RDD of items (with weights)</code></p>
<p><strong>OUTPUT:</strong> <code>dataset sample</code> on driver</p>
<ol>
<li>For each partition, calculate the total probability of sampling from the partition, and aggregate those values to the driver.
<ul>
<li>This would give the probability distribution: <code>Prob(P1) = 0.40, Prob(P2) = 0.50, Prob(P3) = 0.10</code></li>
</ul></li>
<li>Generate a sample of the partitions (to determine the number of elements to select from each partition.)
<ul>
<li>A sample may look like this: <code>[P1, P1, P1, P1, P2, P2, P2, P2, P2, P3]</code></li>
<li>This would give us 4 items from P1, 5 items from P2, and 1 item from P3.</li>
</ul></li>
<li>On each separate partition, we locally generate a sample of the needed size using only the elements on that partition:
<ul>
<li>On P1, we would sample 4 items with the (re-normalized) probability distribution: <code>Prob(A) = 0.25, Prob(B) = 0.25, Prob(C) = 0.50</code>. This could yield a sample such as <code>[A, B, C, C]</code>.</li>
<li>On P2, we would sample 5 items with probability distribution: <code>Prob(D) = 0.5, Prob(E) = 0.5</code>. This could yield a sample such as <code>[D,D,E,E,E]</code></li>
<li>On P3: sample 1 item with probability distribution: <code>P(F) = 1.0</code>; this would generate the sample <code>[F]</code></li>
</ul></li>
<li><code>Collect</code> the samples to the driver to yield your dataset sample <code>[A,B,C,C,D,D,E,E,E,F]</code>.</li>
</ol>
</blockquote>
<p>Here is an implementation in Scala:</p>
<pre><code>case class Sample[T](weight: Double, obj: T)

/*
 * Obtain a sample of size `numSamples` from an RDD `ar` using a two-phase distributed sampling approach.
 */
def sampleWeightedRDD[T: ClassTag](ar: RDD[Sample[T]], numSamples: Int)(implicit sc: SparkContext): Array[T] = {
  // 1. Get the total weight on each partition
  var partitionWeights = ar.mapPartitionsWithIndex { case (partitionIndex, iter) =>
    Array((partitionIndex, iter.map(_.weight).sum)).toIterator
  }.collect()

  // Normalize to 1.0
  val Z = partitionWeights.map(_._2).sum
  partitionWeights = partitionWeights.map { case (partitionIndex, weight) => (partitionIndex, weight / Z) }

  // 2. Sample from partition indexes to determine the number of samples from each partition
  val samplesPerIndex =
    sc.broadcast(sample[Int](partitionWeights, numSamples).groupBy(x => x).mapValues(_.size).toMap).value

  // 3. On each partition, sample the number of elements needed for that partition
  ar.mapPartitionsWithIndex { case (partitionIndex, iter) =>
    val numSamplesForPartition = samplesPerIndex.getOrElse(partitionIndex, 0)
    var ar = iter.map(x => (x.obj, x.weight)).toArray
    // Normalize to 1.0
    val Z = ar.map(x => x._2).sum
    ar = ar.map { case (obj, weight) => (obj, weight / Z) }
    sample(ar, numSamplesForPartition).toIterator
  }.collect()
}
</code></pre>
<p>This code uses a simple weighted sampling function <code>sample</code>:</p>
<pre><code>// a very simple weighted sampling function
def sample[T: ClassTag](dist: Array[(T, Double)], numSamples: Int): Array[T] = {
  val probs = dist.zipWithIndex.map { case ((elem, prob), idx) => (elem, prob, idx + 1) }.sortBy(-_._2)
  val cumulativeDist = probs.map(_._2).scanLeft(0.0)(_ + _).drop(1)
  (1 to numSamples).toArray.map(x => scala.util.Random.nextDouble).map { case (p) =>
    def findElem(p: Double, cumulativeDist: Array[Double]): Int = {
      for (i <- 0 until cumulativeDist.size - 1)
        if (p <= cumulativeDist(i)) return i
      cumulativeDist.size - 1
    }
    probs(findElem(p, cumulativeDist))._1
  }
}
</code></pre>
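<p>For readers coming from PySpark, the same two-phase idea can be sketched in plain Python, with lists standing in for partitions and <code>random.choices</code> doing the weighted draws (a toy illustration, not a distributed implementation):</p>

```python
import random

# Toy "partitions" of (key, weight) pairs, mirroring the example above.
partitions = [
    [("A", 0.10), ("B", 0.10), ("C", 0.20)],
    [("D", 0.25), ("E", 0.25)],
    [("F", 0.10)],
]
num_samples = 10

# Phase 1: pick partition indexes in proportion to each partition's total weight.
totals = [sum(w for _, w in part) for part in partitions]
picked = random.choices(range(len(partitions)), weights=totals, k=num_samples)

# Phase 2: within each chosen partition, draw one element by (re-normalized) weight.
sample = []
for idx in picked:
    keys, weights = zip(*partitions[idx])
    sample.extend(random.choices(keys, weights=weights, k=1))

print(sample)  # e.g. ['C', 'D', 'E', 'C', 'A', 'D', 'E', 'E', 'B', 'F']
```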
|
python|pyspark|rdd|subsampling
| 3 |
1,907,703 | 44,189,048 |
Couldn't install TensorFlow on Windows 10 with Python 3.5.3 (64-bit)
|
<p><strong>Trying to install TensorFlow</strong> </p>
<p><strong>Installing with native pip</strong></p>
<p><em>Error:</em></p>
<pre><code>C:\Users\Sourav>pip3 install --upgrade tensorflow
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
</code></pre>
<p><strong>Installing with Anaconda</strong></p>
<p><em>Error:</em></p>
<pre><code>C:\Users\Sourav>activate tensorflow
(tensorflow) C:\Users\Sourav>pip install --ignore-installed --upgrade https://storage.googleapis.com/tensorflow/windows/cpu/tensorflow-1.1.0-cp35-cp35m-win_amd64.whl
tensorflow-1.1.0-cp35-cp35m-win_amd64.whl is not a supported wheel on this platform.
(tensorflow) C:\Users\Sourav>pip install tensorflow
Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
</code></pre>
|
<p><strong><em>Finally found a solution.</em></strong></p>
<p><strong>Installing with Anaconda</strong></p>
<pre><code>conda create --name tensorflow python=3.5
activate tensorflow
conda install jupyter
conda install scipy
pip install tensorflow
# or, for GPU support:
pip install tensorflow-gpu
</code></pre>
<p>It is important to add <code>python=3.5</code> at the end of the first line, because it makes conda create the environment with Python 3.5, for which TensorFlow wheels are available.</p>
|
python|tensorflow|pip|anaconda
| 0 |
1,907,704 | 34,662,668 |
Unpacking a C struct with Pythons struct module
|
<p>I am sending a struct in binary format from C to my Python script.</p>
<p>My C struct:</p>
<pre><code>struct EXAMPLE {
    float val1;
    float val2;
    float val3;
};
</code></pre>
<p>How I send it:</p>
<pre><code>struct EXAMPLE *ex;
ex->val1 = 5.3f;
ex->val2 = 12.5f;
ex->val3 = 15.5f;
write(fd, &ex, sizeof(struct EXAMPLE));
</code></pre>
<p>How I receive:</p>
<pre><code>buf = sock.recv(12)
buf = struct.unpack('f f f', buf)
print buf
</code></pre>
<p>But when I print it out on the Python side, all I get is random garbage. I'm pretty sure there is something wrong with the struct definition in Python, but I'm not sure what.</p>
|
<p>This line is wrong:</p>
<pre><code>write(fd, &ex, sizeof(struct EXAMPLE));
</code></pre>
<p>It should be:</p>
<pre><code>write(fd, ex, sizeof(struct EXAMPLE));
</code></pre>
<p>(Also note that <code>ex</code> is an uninitialized pointer in the code shown; it must point to allocated memory before you assign through it.)</p>
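<p>For reference, here is a self-contained round-trip of the Python side (assuming the C compiler lays the three floats out contiguously, which holds for this struct on common platforms; <code>'fff'</code> and <code>'f f f'</code> are equivalent format strings, since whitespace between format characters is ignored):</p>

```python
import struct

# Pack three 4-byte floats in native byte order, mimicking what the
# (fixed) C program writes over the socket.
payload = struct.pack('fff', 5.3, 12.5, 15.5)
assert len(payload) == 12

val1, val2, val3 = struct.unpack('f f f', payload)
print(val1, val2, val3)  # approximately 5.3 12.5 15.5
```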
|
python|c|struct
| 2 |
1,907,705 | 12,540,103 |
Python loop define limits trouble
|
<p>I am a newbie in Python...
I am dealing with an array <code>A</code> of 10 elements and trying to do this loop:</p>
<pre><code>for i in range(1, 9):
    C.append((A[i+1]-A[i-1])/2)
</code></pre>
<p>to find the average value of each point...</p>
<p>Afterwards, if I wish to print the results:</p>
<pre><code>print(i, A[i], C[i])
print(i, A[i], C[i-2])
</code></pre>
<p>Only the second print works; the first does not. Why is this?
I also need to define the first and the last values, <code>C[1]</code> and <code>C[10]</code>,
and I can't do that because the first value is <code>C[-2]</code>, which is <code>C[9]</code>...</p>
<p>If someone has any idea how to solve it...</p>
|
<p>It's not usual to iterate over Python lists (and I assume you mean a <code>list</code>) by index.
You're operating on pairs of items, so a more idiomatic way to do this would be:</p>
<pre><code>for prev, next in zip(a, a[2:]):
    # do whatever
</code></pre>
<p>Look up <code>zip</code> in the standard documentation. The two variables are assigned from the results of <code>zip</code> using <em>sequence assignment</em>, which is just where a comma-separated list of variables is bound to the elements of a sequence.</p>
<p>The second thing you're doing that is unidiomatic is to append to a list in a loop. This sort of thing is so common that python has a specific syntax for it:</p>
<pre><code>[(next - prev)/2 for prev, next in zip(a, a[2:])]
</code></pre>
<p>This is called a <em>list comprehension</em>. If you can't guess what that does after pasting it into the interpreter, read about it in the standard documentation and the many articles available via google.</p>
<p>That code in action:</p>
<pre><code>>>> import random
>>> a = range(5,15)
>>> random.shuffle(a)
>>> a
[11, 5, 12, 14, 6, 7, 8, 13, 10, 9]
>>> [(next - prev)/2 for prev, next in zip(a, a[2:])]
[0, 4, -3, -4, 1, 3, 1, -2]
>>>
</code></pre>
|
python|for-loop|range
| 2 |
1,907,706 | 12,163,852 |
GAE Datastore: Performance of querying the same StringListProperty with multiple equalities
|
<p>Assuming I have a model</p>
<pre><code>class MyModelList(db.Model):
    listed_props = db.StringListProperty(indexed=True)
</code></pre>
<p>and I query it with</p>
<pre><code>SELECT * from MyModelList where listed_props = 'a' and listed_props = 'b'
</code></pre>
<p>will it be almost as performant (latency wise) as if I had a model</p>
<pre><code>class MyModelProps(db.Model):
    property_1 = db.StringProperty(indexed=True)
    property_2 = db.StringProperty(indexed=True)
</code></pre>
<p>which I would query with:</p>
<pre><code>SELECT * from MyModelProps where property_1 = 'a' and property_2 = 'b'
</code></pre>
<p>and a composite index of</p>
<pre><code>indexes:
- kind: MyModelProps
  properties:
  - name: property_1
  - name: property_2
</code></pre>
<p>The query for the first example with MyModelList seems harder to answer because the datastore will have to merge the listed_props index with itself (I assume 2 binary searches to find the start and then merging the indexes) compared to the second example (I assume 1 binary search to find the start and then just read).</p>
<p>This will be especially complicated if the index of MyModelList.listed_props needs to be sharded across multiple bigtable tablets.</p>
<p>Can I expect about the same performance (latency wise) for the two?</p>
<p>PS: The reason I'm asking is because I'd love to go with MyModelList.listed_props as it is much cheaper to update existing entities because I could get rid of a lot of composite indexes.</p>
|
<p>Performance-wise, it is a very bad idea to run queries without composite indexes, like:</p>
<pre><code>SELECT * from MyModelList where listed_props = 'a' and listed_props = 'b'
</code></pre>
<p>It is much more performant if you do </p>
<pre><code>SELECT * from MyModelProps where property_1 = 'a' and property_2 = 'b'
</code></pre>
<p>with a composite index, even if it doesn't need one.</p>
<p>I've implemented both solutions and ran them in a live system with 2.7 million records. The one with the composite index was about 100x faster.</p>
<p>There is a fantastic article that explains it all:</p>
<p><a href="http://www.allbuttonspressed.com/blog/django/2010/01/An-App-Engine-limitation-you-didn-t-know-about" rel="nofollow">http://www.allbuttonspressed.com/blog/django/2010/01/An-App-Engine-limitation-you-didn-t-know-about</a></p>
|
python|google-app-engine|google-cloud-datastore
| 0 |
1,907,707 | 12,284,732 |
How to add an attribute that contains a hyphen to a WTForms field
|
<p>Calling a WTForms field object produces the rendered field, and any arguments are taken as attributes. For instance,</p>
<pre><code>form.field(attribute='value')
</code></pre>
<p>would return something like</p>
<pre><code><input attribute='value'>
</code></pre>
<p>How can I add HTML5 custom data attributes, such as <code>data-provide</code>, which contain hyphens and therefore can't be written as a single Python keyword argument?</p>
|
<p>Create a dictionary with the corresponding key-value pairs and use <code>**</code> to pass it to the field call:</p>
<pre><code>attrs = {'data-provide': "foo"}
form.field(**attrs)
</code></pre>
<p><em>Edit</em>: Looks like the comment by @NiklasB should be part of the answer:
For those using <a href="http://flask.pocoo.org/" rel="noreferrer">flask</a> with <a href="https://flask-wtf.readthedocs.org/en/latest/" rel="noreferrer">flask-WTF</a>, use: <code>{{ form.field( **{'data-provide': 'foo'} ) }}</code> in your template.</p>
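<p>The trick is ordinary Python keyword unpacking, which a tiny stand-in (no WTForms required; <code>render</code> here is illustrative) makes visible:</p>

```python
def render(**attrs):
    # WTForms field calls work the same way: every keyword argument
    # becomes an attribute on the rendered <input> tag.
    rendered = ' '.join('%s="%s"' % item for item in sorted(attrs.items()))
    return '<input %s>' % rendered

# 'data-provide' is not a valid identifier, so it cannot be written as a
# literal keyword argument -- but ** unpacking accepts it as a dict key.
html = render(**{'data-provide': 'foo', 'type': 'text'})
print(html)  # <input data-provide="foo" type="text">
```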
|
python|wtforms
| 24 |
1,907,708 | 23,315,136 |
Can SQL Alchemy and Pyramid handle a date effective scenario
|
<p>I am looking for a web framework (hopefully Python, Postgresql and ORM based) that can handle the following scenario:</p>
<p>You have a Person table (with one person Avril Lavigne - who had two marriages):</p>
<pre><code>PersonID  Start        End          PersonName
--------  -----------  -----------  -------------
1         27-Sep-1984  14-Jul-2006  Avril Lavigne
1         15-Jul-2006  30-Jun-2013  Avril Whibley
1         01-Jul-2013  31-Dec-9999  Avril Kroeger
</code></pre>
<p>You have an Assignment Table (which is the job assignment etc. that a person holds, where a person can hold more than one in various locations):</p>
<pre><code>AssignID  Start        End          PersonID  JobID  LocID
--------  -----------  -----------  --------  -----  -----
1         27-Sep-1984  26-Sep-1999  1         1      1
1         27-Sep-1999  31-Dec-2002  1         1      2
1         01-Jan-2003  31-Dec-9999  1         1      3
2         01-Jul-2013  31-Dec-9999  1         2      3
</code></pre>
<p>Is there a way in which Pyramid can have a user select an effective date (say today) and SQL Alchemy will be able to join the tables using the effective date and a SQL <code>between</code>, and a foreign key join on <code>PersonID</code>; to bring back:</p>
<p>Avril Kroeger has 2 assignments with <code>JobID</code> 1 and 2 and <code>LocID</code> 3.</p>
<p>Where the SQL statement in Oracle would have been:</p>
<pre><code>select p.PersonName,
       a.JobID,
       a.LocID
  from Person p,
       Assignment a
 where a.PersonID = p.PersonID
   and trunc(sysdate) between a.Start and a.End
   and trunc(sysdate) between p.Start and p.End
</code></pre>
<p>I'm just worried I'm going to learn another web framework which cannot handle composite primary keys and the above scenario which even SQL foreign keys can't handle.</p>
<p>So basically, can the ORM handle arbitrary queries? Do you lose all the ORM shorthand functions if you go for the above scenario? Is there a better way to handle it? Should I store the effective date as a session variable?</p>
<p>I don't know enough about the topic to provide my solution. I'm more trying to avoid learning yet another framework which can't solve the above scenario.</p>
|
<p>Firstly, Pyramid can absolutely handle this case, simply because Pyramid is storage-agnostic: it does not know anything about databases, SQL and other such stuff :) In other words, the question is not related to Pyramid in any way.</p>
<p>Secondly, SQLAlchemy does support composite primary keys. Off the top of my head, the declarative classes would look like this:</p>
<pre><code>Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'

    person_id = sa.Column(sa.Integer, primary_key=True)
    start = sa.Column(sa.DateTime, primary_key=True)
    end = sa.Column(sa.DateTime, primary_key=True)
    name = sa.Column(sa.String)


class Assignment(Base):
    __tablename__ = 'assignment'

    person_id = sa.Column(sa.Integer, primary_key=True)
    start = sa.Column(sa.DateTime, primary_key=True)
    end = sa.Column(sa.DateTime, primary_key=True)
    ...
</code></pre>
<p>and your query would look like this:</p>
<pre><code>(session.query(Person.name, Assignment.jobId, Assignment.LocID)
    .filter(Assignment.person_id == Person.person_id)
    .filter(sa.sql.functions.now().between(Person.start, Person.end))
    .filter(sa.sql.functions.now().between(Assignment.start, Assignment.end))
)
</code></pre>
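<p>A runnable end-to-end sketch of the same pattern against an in-memory SQLite database (table and column names are illustrative; requires SQLAlchemy 1.4+):</p>

```python
import datetime
import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Person(Base):
    __tablename__ = 'person'
    person_id = sa.Column(sa.Integer, primary_key=True)
    start = sa.Column(sa.Date, primary_key=True)
    end = sa.Column(sa.Date, primary_key=True)
    name = sa.Column(sa.String)

engine = sa.create_engine('sqlite://')
Base.metadata.create_all(engine)

today = datetime.date.today()
with Session(engine) as session:
    session.add_all([
        Person(person_id=1, start=datetime.date(1984, 9, 27),
               end=datetime.date(2006, 7, 14), name='Avril Lavigne'),
        Person(person_id=1, start=datetime.date(2013, 7, 1),
               end=datetime.date(9999, 12, 31), name='Avril Kroeger'),
    ])
    session.commit()

    # The "effective date" filter: today must fall between start and end.
    current_name = (session.query(Person.name)
                    .filter(Person.start <= today, Person.end >= today)
                    .scalar())
    print(current_name)  # Avril Kroeger
```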
|
python|sqlalchemy|pyramid
| 3 |
1,907,709 | 1,003,968 |
Is there a python ftp library for uploading whole directories (including subdirectories)?
|
<p>So I know about ftplib, but that's a bit too low-level for me: it still requires me to upload files one at a time, determine whether there are subdirectories, create the equivalent subdirectories on the server, cd into those subdirectories and then finally upload the correct files into them. It's an annoying task that I'd rather avoid if I can, what with writing tests, setting up test FTP servers, etc.</p>
<p>Any of you know of a library (<a href="http://www.codinghorror.com/blog/archives/001268.html" rel="nofollow noreferrer">or mb some code scrawled on the bathroom wall..</a>) that takes care of this for me or should I just accept my fate and roll my own?</p>
<p>Thanks</p>
|
<blockquote>
<p>The ftputil Python library is a high-level interface to the ftplib module.</p>
</blockquote>
<p>Looks like this could help. <a href="http://ftputil.sschwarzer.net/trac" rel="noreferrer">ftputil website</a></p>
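<p>If you do end up rolling your own on top of <code>ftplib</code>, the fiddly part is mapping local paths to remote ones while walking the tree. Here is a sketch of that mapping as a pure helper (the function name and the commented FTP calls are my own, not part of any library):</p>

```python
import os
import posixpath


def iter_upload_paths(local_dir, remote_dir):
    """Yield (local_path, remote_path) pairs for every file under local_dir.

    Remote paths always use '/' separators, regardless of the local OS.
    """
    for root, dirs, files in os.walk(local_dir):
        rel = os.path.relpath(root, local_dir)
        parts = [] if rel == '.' else rel.split(os.sep)
        for name in files:
            remote = posixpath.join(remote_dir, *(parts + [name]))
            yield os.path.join(root, name), remote


# Feeding the pairs to ftplib would then look roughly like:
#   for local, remote in iter_upload_paths('site', '/htdocs'):
#       ftp.storbinary('STOR ' + remote, open(local, 'rb'))
# plus ftp.mkd() calls for remote directories that do not exist yet.
```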
|
python|ftp
| 11 |
1,907,710 | 1,016,814 |
What should I do with "Unexpected indent" in Python?
|
<p>How do I rectify the error "unexpected indent" in Python?</p>
|
<p>Python uses spacing at the start of the line to determine when code blocks start and end. Errors you can get are:</p>
<p><strong>Unexpected indent.</strong> This line of code has more spaces at the start than the one before, but the one before is not the start of a subblock (e.g., the <em>if</em>, <em>while</em>, and <em>for</em> statements). All lines of code in a block must start with exactly the same string of whitespace. For instance:</p>
<pre class="lang-none prettyprint-override"><code>>>> def a():
... print "foo"
... print "bar"
IndentationError: unexpected indent
</code></pre>
<p>This one is especially common when running Python interactively: make sure you don't put any extra spaces before your commands. (Very annoying when copy-and-pasting example code!)</p>
<pre class="lang-none prettyprint-override"><code>>>> print "hello"
IndentationError: unexpected indent
</code></pre>
<p><strong>Unindent does not match any outer indentation level.</strong> This line of code has fewer spaces at the start than the one before, but equally it does not match any other block it could be part of. Python cannot decide where it goes. For instance, in the following, is the final print supposed to be part of the if clause, or not?</p>
<pre class="lang-none prettyprint-override"><code>>>> if user == "Joey":
... print "Super secret powers enabled!"
... print "Revealing super secrets"
IndentationError: unindent does not match any outer indentation level
</code></pre>
<p><strong>Expected an indented block.</strong> This line of code has the same number of spaces at the start as the one before, but the last line was expected to start a block (e.g., <em>if</em>, <em>while</em>, <em>for</em> statements, or a function definition).</p>
<pre class="lang-none prettyprint-override"><code>>>> def foo():
... print "Bar"
IndentationError: expected an indented block
</code></pre>
<p>If you want a function that doesn't do anything, use the "no-op" command <em>pass</em>:</p>
<pre><code>>>> def foo():
... pass
</code></pre>
<p>Mixing tabs and spaces is allowed (at least on my version of Python), but Python assumes tabs are 8 characters long, which may not match your editor. <strong>Just say "no" to tabs.</strong> Most editors allow them to be automatically replaced by spaces.</p>
<p>The best way to avoid these issues is to always use a consistent number of spaces when you indent a subblock, and ideally use a good IDE that solves the problem for you. This will also make your code more readable.</p>
|
python|indentation
| 166 |
1,907,711 | 41,854,374 |
I write JSON to SQLite, but err: AttributeError: 'list' object has no attribute 'itervalues'?
|
<pre><code># Description: write JSON to SQLite
# Date Modified: 10/22/13
# write JSON to SQLite
import sqlite3
import requests
import json
sqlite_name = 'poloniex_125.db'
FILE_NAME = 'historical.csv'
FILE_NAME_JSON = 'json.csv'
test_list=[]
def sqlite_get_data_from_json():
traffic = json.load(open(FILE_NAME_JSON))
db = sqlite3.connect(sqlite_name)
query = "insert into medicoes values (?,?,?,?,?,?,?)"
columns = ['globalTradeID', 'tradeID', 'date', 'type', 'rate', 'amount','total']
for timestamp, data in traffic.iteritems():
keys = (timestamp,) + tuple(data[c] for c in columns)
c = db.cursor()
c.execute(query, keys)
c.close()
def get_json_from_polo():
url = 'https://poloniex.com/public?command=returnTradeHistory&currencyPair=BTC_NXT'
r = requests.get(url, stream=True)
if r.status_code != 400:
with open(FILE_NAME_JSON, 'wb') as f:
for chunk in r:
f.write(chunk)
print('get_json_from_polo 2')
return True
sqlite_get_data_from_json()
</code></pre>
<p>But I get the error: <code>AttributeError: 'list' object has no attribute 'itervalues'</code>.</p>
<p>Why does this happen, and how can I fix it?</p>
<p>I want to get data from poloniex.com. I am a beginner with Python.
Thank you for helping me.</p>
|
<p>When you're debugging code it can often be useful to use interactive mode.</p>
<pre><code>>>> import requests
>>> url = 'https://poloniex.com/public?command=returnTradeHistory&currencyPair=BTC_NXT'
>>> r = requests.get(url, stream=True)
>>> r.status_code
200
>>> with open('pFile.json', 'wb') as f:
... for chunk in r:
... f.write(chunk)
...
62
613
965
915
<lines omitted here>
818
704
809
761
632
</code></pre>
<p>As <a href="https://stackoverflow.com/users/3929826/klaus-d">https://stackoverflow.com/users/3929826/klaus-d</a> pointed out, you can see from examining the first few hundred bytes of the output file produced by this code that it's a list of dictionaries.</p>
<pre><code>[{"globalTradeID":76456788,"tradeID":886859,"date":"2017-01-25 17:53:41","type":"sell","rate":"0.00000662","amount":"252.35289868","total":"0.00167057"},{"globalTradeID":76456784,"tradeID":886858,"date":"2017-01-25 17:53:41","type":"sell","rate":"0.00000662","amount":"95.73319613","total":"0.00063375"},{"globalTradeID":76456782,"tradeID":886857,"date":"2017-01-25
</code></pre>
<p>You can process this list in a straightforward way without the json library, as follows (though note that <code>eval</code> on untrusted data is risky, and it would also choke on JSON's <code>true</code>/<code>false</code>/<code>null</code> literals; <code>json.load(f)</code> is the safer choice).</p>
<pre><code>>>> with open('pFile.json') as f:
... allofthem = eval(f.read())
... for item in allofthem:
... print (item)
...
{'total': '0.00167057', 'amount': '252.35289868', 'date': '2017-01-25 17:53:41', 'globalTradeID': 76456788, 'tradeID': 886859, 'rate': '0.00000662', 'type': 'sell'}
{'total': '0.00063375', 'amount': '95.73319613', 'date': '2017-01-25 17:53:41', 'globalTradeID': 76456784, 'tradeID': 886858, 'rate': '0.00000662', 'type': 'sell'}
{'total': '0.00063445', 'amount': '95.83876657', 'date': '2017-01-25 17:53:41', 'globalTradeID': 76456782, 'tradeID': 886857, 'rate': '0.00000662', 'type': 'sell'}
<items omitted>
</code></pre>
<p>Now all you need to do is to add in the lines to insert these data into the database.</p>
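<p>For that last step, here is a sketch that parses the list with the <code>json</code> module instead of <code>eval</code>. The table definition below is assumed from the question's insert statement, and an in-memory database stands in for the file:</p>

```python
import json
import sqlite3

# One record in the same shape as the Poloniex response (assumed schema).
raw = ('[{"globalTradeID": 76456788, "tradeID": 886859, '
       '"date": "2017-01-25 17:53:41", "type": "sell", '
       '"rate": "0.00000662", "amount": "252.35289868", '
       '"total": "0.00167057"}]')
records = json.loads(raw)  # with a file: records = json.load(open(FILE_NAME_JSON))

columns = ['globalTradeID', 'tradeID', 'date', 'type', 'rate', 'amount', 'total']
db = sqlite3.connect(':memory:')
db.execute('create table medicoes (%s)' % ', '.join(columns))
# executemany inserts one row per list element
db.executemany('insert into medicoes values (?,?,?,?,?,?,?)',
               [tuple(item[c] for c in columns) for item in records])
db.commit()
print(db.execute('select count(*) from medicoes').fetchone()[0])  # 1
```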
|
python|json|sqlite
| 0 |
1,907,712 | 33,692,542 |
Reading data in python from a COM port
|
<pre><code>import serial
while True:
ser=serial.Serial(port='COM30',baudrate=9600)
print "try"
s=ser.read(100) #reading up to 100 bytes
print s
ser.close()
</code></pre>
<p>Device Manager:</p>
<p><a href="https://i.stack.imgur.com/EIoP6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EIoP6.jpg" alt="enter image description here"></a><br>
I am trying to read data from the port using Python,
but it shows the error:</p>
<pre><code>Traceback (most recent call last):
File "new_python.py", line 3, in <module>
ser=serial.Serial(port='COM30',baudrate=9600)
File "C:\Anaconda\lib\site-packages\serial\serialwin32.py", line 38, in __init__
SerialBase.__init__(self, *args, **kwargs)
File "C:\Anaconda\lib\site-packages\serial\serialutil.py", line 282, in __init__
self.open()
File "C:\Anaconda\lib\site-packages\serial\serialwin32.py", line 66, in open
raise SerialException("could not open port %r: %r" % (self.portstr, ctypes.WinError()))
serial.serialutil.SerialException: could not open port 'COM30': WindowsError(5, 'Access is denied.')
***Repl Closed***
</code></pre>
<p>Following a previous solution on Stack Overflow, I tried using 32-bit Python and calling it from a cmd with admin privileges, but I get the same error!</p>
<p>When I try it in MATLAB it shows me this:</p>
<pre><code>s = serial('COM30')
Serial Port Object : Serial-COM30
Communication Settings
Port: COM30
BaudRate: 9600
Terminator: 'LF'
Communication State
Status: closed
RecordStatus: off
Read/Write State
TransferStatus: idle
BytesAvailable: 0
ValuesReceived: 0
ValuesSent: 0
</code></pre>
|
<p>Passing <code>port=</code> to the <code>serial.Serial()</code> constructor opens the port immediately, so the traceback shows the open itself failing; calling <code>ser.open()</code> afterwards cannot help. On Windows, <code>WindowsError(5, 'Access is denied.')</code> usually means some other application still has COM30 open (a terminal program, or a MATLAB session holding the port), so close that first.</p>
<p>Also, move the <code>Serial()</code> call out of the loop so the port is opened only once:</p>
<pre><code>ser = serial.Serial(port='COM30', baudrate=9600)
while True:
    s = ser.read(100)  # read up to 100 bytes
    print s
</code></pre>
|
python|python-2.7|port|tcomport
| 0 |
1,907,713 | 33,564,433 |
How to apply Condition in regex
|
<p>Hello, I am a newbie currently learning about regex patterns by experimenting with various patterns. I tried to create a regex pattern for this URL but failed. It's a pagination link from Amazon.</p>
<blockquote>
<p><a href="http://www.amazon.in/s/lp_6563520031_pg_2?rh=n%3A5866078031%2Cn%3A%215866079031%2Cn%3A6563520031&page=2s&ie=UTF8&qid=1446802571" rel="nofollow">http://www.amazon.in/s/lp_6563520031_pg_2?rh=n%3A5866078031%2Cn%3A%215866079031%2Cn%3A6563520031&page=2s&ie=UTF8&qid=1446802571</a></p>
</blockquote>
<p>Or</p>
<blockquote>
<p><a href="http://www.amazon.in/Tena-Wet-Wipe-Pulls-White/dp/B001O1G242/ref=sr_1_46?s=industrial&ie=UTF8&qid=1446802608&sr=1-46" rel="nofollow">http://www.amazon.in/Tena-Wet-Wipe-Pulls-White/dp/B001O1G242/ref=sr_1_46?s=industrial&ie=UTF8&qid=1446802608&sr=1-46</a></p>
</blockquote>
<p>I just want to check the url by only these two things.</p>
<blockquote>
<ol>
<li><p>If the url has dp directory or product directory</p></li>
<li><p>If the url has query string page having any digit</p></li>
</ol>
</blockquote>
<p>I tried to create the regex pattern but failed. <strong>I want that if the first thing is not there the regex pattern should match the second (or vice versa)</strong>.</p>
<p>Here's the regex pattern I made:</p>
<pre><code>.*\/(dp|product)\/ | .*page
</code></pre>
<p>Here is my regex101 link: <a href="https://regex101.com/r/zD2gP5/1#python" rel="nofollow">https://regex101.com/r/zD2gP5/1#python</a></p>
|
<p>Since you just want to check if a string contains some pattern, you can use</p>
<pre><code>\/(?:dp|product)\/|[&?]page=
</code></pre>
<p>See <a href="https://regex101.com/r/uQ8xZ9/1" rel="nofollow">regex demo</a></p>
<p>In Python, just check with <code>re.search</code>:</p>
<pre><code>import re
p = re.compile(r'/(?:dp|product)/|[&?]page=')
test_str = "http://w...content-available-to-author-only...n.in/s/lp_6563520031_pg_2?rh=n%3A5866078031%2Cn%3A%215866079031%2Cn%3A6563520031&page=2s&ie=UTF8&qid=14468025716"
if p.search(test_str):
print ("Found!")
</code></pre>
<p>Also, in Python regex patterns, there is no need to escape <code>/</code> slashes.</p>
<p>The regex matches two alternative subpatterns (<code>\/(?:dp|product)\/</code> and <code>[&?]page=</code>):</p>
<ul>
<li><code>/</code> - a forward slash</li>
<li><code>(?:dp|product)</code> - either <code>dp</code> or <code>product</code> (without storing the capture inside the capture buffer since it is a <em>non</em>-capturing group)</li>
<li><code>/</code> - a slash</li>
<li><code>|</code> - or...</li>
<li><code>[&?]</code> - either a <code>&</code> or <code>?</code> (we check the start of a query string parameter)</li>
<li><code>page=</code> - literal sequence of symbols <code>page=</code>.</li>
</ul>
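<p>If you'd rather not reason about the regex at all, the same two checks can also be expressed with the standard library's URL parser (Python 3 shown; this is a sketch with my own function name, not a drop-in replacement for the pattern above):</p>

```python
from urllib.parse import urlparse, parse_qs


def is_wanted(url):
    parts = urlparse(url)
    segments = parts.path.split('/')
    # 1. the URL has a dp or product path segment
    in_dp_or_product = 'dp' in segments or 'product' in segments
    # 2. the query string has a page parameter
    has_page_param = 'page' in parse_qs(parts.query)
    return in_dp_or_product or has_page_param


print(is_wanted('http://www.amazon.in/s/lp_6563520031_pg_2?rh=n%3A5866078031&page=2s&ie=UTF8'))  # True
print(is_wanted('http://www.amazon.in/Tena-Wet-Wipe-Pulls-White/dp/B001O1G242/ref=sr_1_46'))     # True
print(is_wanted('http://www.amazon.in/gp/help'))                                                 # False
```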
|
python|regex
| 3 |
1,907,714 | 46,787,949 |
Multiple words concordance in a text
|
<p>I have a words file to find their concordances in a text (maximum 3 positions right and left)</p>
<p>Words file: </p>
<blockquote>
<p>buy</p>
<p>time</p>
<p>glass</p>
<p>home</p>
<p>red</p>
</blockquote>
<p>Text file:</p>
<blockquote>
<p>After almost a decade selling groceries online, Amazon has failed to make a major dent on its own as consumers have shown a stubborn urge to buy items like fruits, vegetables and meat in person.
Do you find yourself spending more time on your phone than you ... mindlessly passing time on a regular basis by staring at your phone?
The process can be fraught with anxiety, as many different glass styles are available, and points of view clash on what is proper and necessary</p>
</blockquote>
<p>Script: </p>
<pre><code>def keywordsContext(file, fileName):
#file: text file
#fileName: words file
with open(file, "r") as f, open(fileName, "r") as fi:
corpus = f.read().split()
pivot = fi.read().split()
for keywords in pivot:
if keywords in corpus:
index = pivot.index(keywords)
contexts = keywords+":", pivot[index-3:index], pivot[index+1:index+4]
print(contexts)
else:
pass
</code></pre>
<p>Output: </p>
<blockquote>
<p>('buy:', [], ['time', 'glass', 'home'])</p>
<p>('time:', [], ['glass', 'home', 'red'])</p>
<p>('glass:', [], ['home', 'red'])</p>
<p>None</p>
</blockquote>
<p>Output I wanted:</p>
<blockquote>
<p>'buy': stubborn urge to buy items like fruits</p>
<p>'time': yourself spending more time on your phone</p>
<p>'glass': as many different glass styles are available</p>
</blockquote>
<p><strong>EDIT</strong></p>
<p>And... if the same word appears more than once? I did a test (with one more sentence in the corpus to repeat the word 'glass'). I tried putting a while len (corpus) != 0 but it's a loop that repeats with the same outputs...</p>
<pre><code>def keywordsContext(file, fileName):
with open(file, "r") as f, open(fileName, "r") as fi:
corpus = f.read().split()
pivot = fi.read().split()
while len(corpus) != 0:
for keywords in pivot:
if keywords in corpus:
inde = corpus.index(keywords)
contexts = keywords+": "+ ' '.join(corpus[inde-3:inde+4])
print(contexts)
</code></pre>
<p><strong>Output:</strong></p>
<blockquote>
<p>buy: stubborn urge to buy items like fruits,</p>
<p>time: yourself spending more time on your phone</p>
<p>glass: as many different glass styles are available,</p>
<p>buy: stubborn urge to buy items like fruits,</p>
<p>time: yourself spending more time on your phone</p>
<p>glass: as many different glass styles are available,</p>
<p>buy: stubborn urge to buy items like fruits,</p>
<p>time: yourself spending more time on your phone</p>
<p>glass: as many different glass styles are available,</p>
<p>...</p>
</blockquote>
|
<pre><code>def keywordsContext(file, fileName):
#file: text file
#fileName: words file
with open(file, "r") as f, open(fileName, "r") as fi:
corpus = f.read().split()
pivot = fi.read().split()
for keywords in pivot:
if keywords in corpus:
index = corpus.index(keywords)
contexts = "'" + keywords + "' : " + " ".join(corpus[index - 3 : index + 4])
print(contexts)
else:
pass
</code></pre>
<p>Output:</p>
<pre><code>'buy' : stubborn urge to buy items like fruits,
'time' : yourself spending more time on your phone
'glass' : as many different glass styles are available,
</code></pre>
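<p>For the case raised in the <strong>EDIT</strong> (the same word appearing more than once): <code>list.index</code> always returns the <em>first</em> occurrence, which is why the <code>while</code> loop prints the same contexts forever. Scanning the corpus with <code>enumerate</code> finds every occurrence instead. A sketch (the function name and the 3-word window are my own choices):</p>

```python
def keywords_context_all(corpus_words, pivot_words):
    # enumerate yields every position of the keyword, not just the
    # first one like list.index does, so repeats are handled naturally.
    contexts = []
    for keyword in pivot_words:
        for i, word in enumerate(corpus_words):
            if word == keyword:
                window = corpus_words[max(i - 3, 0):i + 4]
                contexts.append(keyword + ": " + " ".join(window))
    return contexts


corpus = "stubborn urge to buy items like fruits and people buy glass often".split()
for line in keywords_context_all(corpus, ["buy"]):
    print(line)
# buy: stubborn urge to buy items like fruits
# buy: fruits and people buy glass often
```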
|
python|python-3.x
| 3 |
1,907,715 | 46,687,062 |
Matplotlib boxplot width in log scale
|
<p>I am trying to plot a boxplot with a logarithmic x-axis. As you can see in the example below, the width of each box decreases because of the scale. Is there any way to make the width of all boxes the same?</p>
<p><a href="https://i.stack.imgur.com/dPrHX.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dPrHX.png" alt="enter image description here"></a></p>
|
<p>You can set a specific width of the box depending on the location on the plot. The <code>boxplot</code>'s <code>width</code> argument allows to set different widths. To calculate the respective width, you need to transform the position to linear scale, add or subtract some linear width, and transform back to log scale. The difference between the two hence obtained values is the width of the bar to set.</p>
<p>The linear width <code>w</code> used here is of course somewhat arbitrary; you need to select a good value for yourself.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np; np.random.seed(42)
a = np.cumsum(np.random.rayleigh(150, size=(50,8)), axis=1)
fig, ax=plt.subplots()
positions=np.logspace(-0.1,2.6,8)
w = 0.1
width = lambda p, w: 10**(np.log10(p)+w/2.)-10**(np.log10(p)-w/2.)
ax.boxplot(a, positions=positions, widths=width(positions,w))
ax.set_xscale("log")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/nxwQQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nxwQQ.png" alt="enter image description here"></a></p>
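<p>To see why this gives equal on-screen widths: a box drawn from <code>p - width/2</code> to <code>p + width/2</code> spans <code>log10(p + width/2) - log10(p - width/2)</code> decades, and with the formula above that span works out to be the same for every position. A quick numerical check:</p>

```python
import numpy as np

positions = np.logspace(-0.1, 2.6, 8)
w = 0.1
width = lambda p, w: 10**(np.log10(p) + w/2.) - 10**(np.log10(p) - w/2.)

halves = width(positions, w) / 2.
# extent of each box in log10 units -- identical for all positions
spans = np.log10(positions + halves) - np.log10(positions - halves)
print(spans)
```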
|
python|matplotlib
| 11 |
1,907,716 | 29,904,772 |
JSON list/dictionary parsing from API
|
<p>I have developed a small library and am interested in making it easier on users to retrieve data from the JSON lists/dictionaries that are returned. I have created functions that handle the calls using <em>requests</em>. Now suppose I call this function and pass in a few parameters:</p>
<pre><code>precip = precipitation_obs(stid='kfnl', start='201504261800', end='201504271200', units='precip|in')
</code></pre>
<p>This will return the following JSON:</p>
<pre><code>{ 'STATION': [ { 'ELEVATION': '5016',
'ID': '192',
'LATITUDE': '40.45',
'LONGITUDE': '-105.01667',
'MNET_ID': '1',
'NAME': 'Fort Collins/Loveland, Fort Collins-Loveland '
'Municipal Airport',
'OBSERVATIONS': { 'count_1': 6,
'ob_end_time_1': '2015-04-27T00:55:00Z',
'ob_start_time_1': '2015-04-26T18:55:00Z',
'total_precip_value_1': 0.13,
'vids case4': ['39', '51', '40', '52']},
'STATE': 'CO',
'STATUS': 'ACTIVE',
'STID': 'KFNL',
'TIMEZONE': 'US/Mountain'}],
'SUMMARY': { 'METADATA_RESPONSE_TIME': '5.22613525391 ms',
'NUMBER_OF_OBJECTS': 1,
'RESPONSE_CODE': 1,
'RESPONSE_MESSAGE': 'OK',
'TOTAL_TIME': '57.6429367065 ms'}}
</code></pre>
<p>Now, I want the user to be able to just drill down through the dictionary, but STATION is a list and requires me to do the following:</p>
<pre><code>output = precip['STATION'][0]['OBSERVATIONS']['ob_start_time_1']
print(output)
# returns 2015-04-26T18:55:00Z
</code></pre>
<p>where I have to include the [0] to avoid:</p>
<pre><code>TypeError: list indices must be integers, not str
</code></pre>
<p>Is there anyway around this? Adding that [0] in there really jacks things up so to say. Or even having to specify ['STATION'] every time is a bit of a nuisance. Should I use the simpleJSON module to help here? Any tips on making this a bit easier would be great, thanks!</p>
|
<blockquote>
<p>Adding that <code>[0]</code> in there really jacks things up so to say. Or even having to specify <code>['STATION']</code> every time is a bit of a nuisance.</p>
</blockquote>
<p>So just store <code>precip['STATION'][0]</code> in a variable:</p>
<pre><code>>>> precip0 = precip['STATION'][0]
</code></pre>
<p>And now, you can use it repeatedly:</p>
<pre><code>>>> precip0['OBSERVATIONS']['ob_start_time_1']
2015-04-26T18:55:00Z
</code></pre>
<p>If you know that the API is always going to return exactly one station, and you're never going to want anything other than the data for that station, you can put that in your wrapper function:</p>
<pre><code>def precipitation_obs(stid, start, end, units):
# your existing code, which assigns something to result
return result['STATION'][0]
</code></pre>
<hr>
<p>If you're worried about "efficiency" here, don't worry. First, this isn't making a copy of anything, it's just making another reference to the same object that already exists—it takes under a microsecond, and wastes about 8 bytes. In fact, it <em>saves</em> you memory, because if you're not storing the whole dict, just the sub-dict, Python can garbage-collect the rest of the structure. And, more importantly, that kind of micro-optimization isn't worth worrying about in the first place until (1) your code is working and (2) you know it's a bottleneck.</p>
<hr>
<blockquote>
<p>Should I use the simpleJSON module to help here?</p>
</blockquote>
<p>Why would you think that would help? As <a href="https://pypi.python.org/pypi/simplejson/" rel="nofollow">its readme says</a>:</p>
<blockquote>
<p>simplejson is the externally maintained development version of the json library included with Python 2.6 and Python 3.0, but maintains backwards compatibility with Python 2.5.</p>
</blockquote>
<p>In other words, it's either the same code you've already got in the stdlib, or an older version of that code.</p>
<p>Sometimes a <em>different</em> library, like <a href="https://pypi.python.org/pypi/ijson/" rel="nofollow"><code>ijson</code></a>, can help—e.g., if the JSON structures are so big that you can't parse the whole thing into memory, or so complex that it's easier to describe what you want upside-down in SAX style. But that's not relevant here.</p>
|
python|json|python-3.x|python-requests|simplejson
| 2 |
1,907,717 | 61,424,081 |
Match two nodes and keep the path between the other respective nodes networkx
|
<h2>EDITED</h2>
<p>df1</p>
<pre><code>From description To priority
10 Start 20,30 1
20 Left 40 2
30 Right 40 2
40 End - 1
</code></pre>
<p>My second data frame</p>
<p>df2</p>
<pre><code>From description To priority
50 Start 60,70 1
60 Left 80 2
70 Right 80 2
80 End - 1
</code></pre>
<p>When I convert the two data frames into graph using <code>Netwokrx</code> Python library, I get the following graphs as graph1 for df1 and graph2 for df2. The color of the nodes is based on their priority.</p>
<p><a href="https://i.stack.imgur.com/ALjsH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ALjsH.jpg" alt="enter image description here" /></a></p>
<p>I would like to match (combine) two nodes which have similar color such as 10/50, 40/80, 20/60, and 30/70. Just to make it clear, 10 and 50 have <code>Start</code> attribute and 40 and 80 have <code>End</code>. In addition to these, 10, 50, 40, and 80 have <code>priority ==1</code> attribute. Node 20 and 60 have '''Left' attribute and 30 and 70 have <code>Right</code>. In addition to these, 20, 60, 30, and 70 have <code>priority ==2</code>.</p>
<p>I managed to match nodes at once for the whole nodes in the two graphs. But I couldn't manage to go step by step (using a kind of loop). Which means first match nodes with <code>blue</code> color then add one of the nodes with <code>orange</code> color and so on. I would expect something like below.</p>
<p><a href="https://i.stack.imgur.com/tNDw9.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tNDw9.jpg" alt="enter image description here" /></a></p>
<p>To achieve the above result, this how I tried.</p>
<pre><code>for n1, n2 in g1.nodes(data ='True'):
for x1, x2 in g2.nodes(data ='True'):
if ((g1.node[n1]['description']) == (g2.node[x1]['description'])&
if ((g1.node[n1]['priority']) == (g2.node[x1]['priority'])
):
name1 = str(n1) + '/' + str(x1)
mapping1 = {n1: name1, x1:name1}
mapping1 = nx.relabel_nodes(g1, mapping1, copy =True)
</code></pre>
<p>Can anyone extend the above trial or find a new solution to get what I would like to see?</p>
|
<p>With the following code you can relabel the nodes (see also the documentation of the <a href="https://docs.python.org/3/howto/sorting.html" rel="nofollow noreferrer"><code>sorted</code></a> function):</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
df1_graph = nx.DiGraph()
# add edges
df1_graph.add_edges_from([(10, 20), (10, 30), (20, 40), (30, 40)])
# add node information
nodes = [10, 20, 30, 40]
descriptions = ["Start", "Left", "Right", "End"]
priorities = [1, 2, 2, 1]
for node, (description, priority) in zip(nodes, zip(descriptions, priorities)):
df1_graph.nodes[node]["description"] = description
df1_graph.nodes[node]["priority"] = priority
df2_graph = nx.DiGraph()
# add edges
df2_graph.add_edges_from([(50, 60), (50, 70), (60, 80), (70, 80)])
nodes = [50, 60, 70, 80]
for node, (description, priority) in zip(nodes, zip(descriptions, priorities)):
df2_graph.nodes[node]["description"] = description
df2_graph.nodes[node]["priority"] = priority
# creating new graph
mappings = []
for node_1, data_1 in df1_graph.nodes(data=True):
for node_2, data_2 in df2_graph.nodes(data=True):
if data_1["description"] == data_2["description"] and data_1["priority"] == data_2["priority"]:
name = str(node_1) + '/' + str(node_2)
# add found mapping (node_1, name)
# together with information about sorting order
mappings.append((data_2["priority"], data_2["description"], node_1, name))
new_graph = df1_graph.copy()
# save the relabelled graphs in a lists
graphs_stepwise = []
# sort nodes according to priority and description (secondary key)
# we sort first by description to ensure in this example Left is replaced first
mappings = sorted(mappings, key=lambda x: x[1])
# sort by priority
mappings = sorted(mappings, key=lambda x: x[0])
# relabel one node at a time
for priority, description, node, new_label in mappings:
new_graph = nx.relabel_nodes(new_graph, {node: new_label}, copy=True)
graphs_stepwise.append(new_graph)
# print node information of saved graphs
for graph in graphs_stepwise:
print(graph.nodes(data=True))
#[(10, {'description': 'Start', 'priority': 1}), (20, {'description': 'Left', 'priority': 2}), (30, {'description': 'Right', 'priority': 2}), ('40/80', {'description': 'End', 'priority': 1})]
#[('10/50', {'description': 'Start', 'priority': 1}), (20, {'description': 'Left', 'priority': 2}), (30, {'description': 'Right', 'priority': 2}), ('40/80', {'description': 'End', 'priority': 1})]
#[('10/50', {'description': 'Start', 'priority': 1}), ('20/60', {'description': 'Left', 'priority': 2}), (30, {'description': 'Right', 'priority': 2}), ('40/80', {'description': 'End', 'priority': 1})]
#[('10/50', {'description': 'Start', 'priority': 1}), ('20/60', {'description': 'Left', 'priority': 2}), ('30/70', {'description': 'Right', 'priority': 2}), ('40/80', {'description': 'End', 'priority': 1})]
</code></pre>
|
python|graph|networkx
| 2 |
1,907,718 | 43,077,996 |
generate file name list for a given folder
|
<p>There is a set of documents saved in a folder, and I need to save each file name along with its path in a plain text file, such as:</p>
<pre><code>/document/file1.txt
/document/file2.txt
...
</code></pre>
<p>My question is how to iterate through a file folder and extract the path information for each file. Thanks.</p>
|
<p>You can try something like this:</p>
<pre><code>import os
output_file = open(filename, 'w')  # 'filename' is your output file's name
for root, dirs, files in os.walk(".", topdown=False):
    for name in files:
        path = os.path.join(root, name)
        print(path)
        output_file.write(path + '\n')
    for name in dirs:
        print(os.path.join(root, name))
output_file.close()
</code></pre>
<p>You can also use the <code>listdir()</code> method (note that it only lists the top-level directory, without recursing into subdirectories):</p>
<pre><code>file = open(filename, 'w')
for f in os.listdir(dirName):
    path = os.path.join(dirName, f)
    file.write(path + '\n')
file.close()
</code></pre>
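<p>On Python 3.4+ you can also let <code>pathlib</code> do the recursion for you: <code>rglob('*')</code> walks subdirectories. A sketch (the function name is my own):</p>

```python
from pathlib import Path


def list_files(folder):
    # rglob('*') visits the whole tree; keep only regular files, as strings.
    return [str(p) for p in sorted(Path(folder).rglob('*')) if p.is_file()]


# Writing them out, one path per line:
# Path('files.txt').write_text('\n'.join(list_files('document')))
```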
|
python|python-3.x
| 2 |
1,907,719 | 37,083,117 |
How to Change selection field automatically in odoo
|
<p>I'm working on Odoo 8. I have a view that contains a set of combo-box type fields and a selection field. I want to run a test on the combo-box fields, and if they are all checked then the selection field value should change. Here is what I have so far:</p>
<pre><code>def get_etat_dossier(self,cr,uid,ids,args,fields,context=None):
res = {}
for rec in self.browse(cr,uid,ids):
if rec.casier_judiciare==True: # test field if = true
res[rec.id]= 02 # field etat_dos type selection = Dossier Complet
else:
res[rec.id] = 01
return res
_columns= {
'casier_judiciare' : fields.boolean('Casier Judiciaire'), # field to test
'reference_pro' : fields.boolean('Réferences Professionnelles'),
'certificat_qual' : fields.boolean('Certificat de qualification'),
'extrait_role' : fields.boolean('Extrait de Role'),
'statut_entre' : fields.selection([('eurl','EURL'),('sarl','SARL')],'Statut Entreprise'),
'etat_dos': fields.selection([('complet','Dossier Complet'),('manquant','Dossier Manquant')],'Etat De Dossier'), # field ho change after test
}
</code></pre>
<p><img src="https://i.stack.imgur.com/gAhdt.jpg" alt="enter image description here"></p>
<p>Here is the code for my view</p>
<pre><code><group col='4' name="doss_grp" string="Dossier de Soumission" colspan="4" > <field name="casier_judiciare"/>
<field name="certificat_qual"/>
<field name="extrait_role"/>
<field name="reference_pro"/>
<field name="statut_entre" style="width:20%%"/>
<field name="etat_dos"/>
</group>
</code></pre>
|
<p>Add an <code>onchange</code> attribute to the <code>casier_judiciare</code> field and then pass all the other fields you want to check as arguments to the method like this</p>
<pre><code><group col='4' name="doss_grp" string="Dossier de Soumission" colspan="4" >
<field name="casier_judiciare" on_change="onchange_casier_judiciare(casier_judiciare, certificat_qual, extrait_role, reference_pro)"/>
<field name="certificat_qual"/>
<field name="extrait_role"/>
<field name="reference_pro"/>
<field name="statut_entre" style="width:20%%"/>
<field name="etat_dos"/>
</group>
</code></pre>
<p>In your model file define the method like this and use an if statement to check if they're all True (That means they have all been checked), if so then you can return a dictionary with any value you want for the selection field, in this case <code>etat_dos</code> will change to <code>Dossier Complet</code></p>
<pre><code>def onchange_casier_judiciare(self, cr, uid, ids, casier_judiciare, certificat_qual, extrait_role, reference_pro, context=None):
if casier_judiciare and certificat_qual and extrait_role and reference_pro: # if they're all True (that means they're all checked):
values = {'value': {'etat_dos': 'complet'}} #set the value of etat_dos field
return values
</code></pre>
<p>Note that the <code>onchange</code> is only triggered on the <code>casier_judiciare</code> field but you can also set <code>onchange</code> on other fields and it should work just fine</p>
|
python|openerp|odoo-8
| 0 |
1,907,720 | 19,895,429 |
Pascals triangle in python?
|
<p>Ok so I've been working on this for a long time and I cant seem to get it. Our assignment was to make pascal's triangle and center and all that good stuff... But I cant seem to figure it out.</p>
<pre><code>def factorial(n):
if (n <= 1):
return 1
else:
return n * factorial(n-1)
def combination(n, k):
return int (factorial(n) / (factorial(k) * factorial(n-k)))
def pascal_row(row):
answer = ""
for entry in range(row+1):
answer = answer + " " + str(combination(row, entry))
print answer
def pascal_triangle(rows):
for row in range(rows):
pascal_row(row)
pascal_triangle(10)
</code></pre>
<p>I know that if I do the last row which is 9 and subtract by the current row and then multiply by three it will give me the right spacing for each row. I'm just not sure how to incorporate that into the code??
If you could help me that would be fantastic!
thanks for the help in advance.</p>
|
<p>You could do the following. First change <code>pascal_row</code> to <code>return answer</code> instead of printing it; as written it returns <code>None</code>, so formatting its result would just center the word <code>None</code>. To look up more examples & documentation on string formatting, visit <a href="http://docs.python.org/2/library/string.html#format-examples" rel="nofollow">http://docs.python.org/2/library/string.html#format-examples</a></p>
<pre><code>print("{:^50}".format(pascal_row(row)))
</code></pre>
<p>In the above code, <code>^</code> centers the string data. The 50 is the field width to use; make it big enough to hold your longest (bottom) row.</p>
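<p>Putting the whole thing together (a sketch in Python 3 syntax; the key change is that the row-building function <em>returns</em> its string so the caller can center it, and the field width is taken from the bottom row rather than hard-coded):</p>

```python
from math import factorial


def combination(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))


def pascal_row_str(row):
    # Build the row as a string instead of printing it, so the caller
    # can pass it through str.format for centering.
    return " ".join(str(combination(row, k)) for k in range(row + 1))


def pascal_triangle(rows):
    width = len(pascal_row_str(rows - 1))  # the widest (bottom) row
    for row in range(rows):
        print("{:^{w}}".format(pascal_row_str(row), w=width))


pascal_triangle(5)
```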
|
python
| 1 |
1,907,721 | 66,923,272 |
Solve_IVP Precision
|
<p>Is there a way to increase the precision of the Solve_IVP parameters so that I can get a more accurate plot of the solution? Essentially, I am plotting the solutions to four different differential equations that each have a parameter k. I am finding sets of solutions for several k values (i.e. one iteration finds one set of equations, the next finds another set...) For each set I plotted the first function, and as of now I am getting the same plot (cannot tell difference between the graphs --- tried zooming in but that did not work). Is there any rule of thumb for deciding how precise we want Solve_IVP to be?</p>
|
<p>You can try numerous things. Here are 3 (try from 1->3):</p>
<ol>
<li>Increase the accuracy by passing the parameter 'max_step' and decreasing it. Try <code>max_step=0.02</code> and start lowering it to reach the desired accuracy.
solve_ivp documentation: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.solve_ivp.html" rel="nofollow noreferrer">link</a></li>
<li>Moreover, you can try different integration methods by passing the parameter <code>method=</code>, as demonstrated by this <a href="https://danielmuellerkomorowska.com/2021/02/16/differential-equations-with-scipy-odeint-or-solve_ivp/" rel="nofollow noreferrer">article</a>; in its comparison, <code>'LSODA'</code> offered the best accuracy.</li>
<li>Lastly, try <code>dense_output=True</code>, which computes a continuous interpolant so the solution can be evaluated at any point. However, this will increase the execution time.</li>
</ol>
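<p>A minimal sketch combining the three suggestions on a toy decay ODE (the ODE, tolerances, and expected error here are my own assumptions, not taken from the question):</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy problem: dy/dt = -k*y with exact solution y(t) = exp(-k*t)
k = 2.0
sol = solve_ivp(lambda t, y: -k * y, (0.0, 1.0), [1.0],
                method="LSODA",      # suggestion 2: a different integrator
                max_step=0.02,       # suggestion 1: cap the step size
                dense_output=True)   # suggestion 3: continuous solution
error = abs(sol.y[0, -1] - np.exp(-k))
print(error)
```

Comparing the numerical endpoint against the exact solution like this is also a quick way to decide whether a given <code>max_step</code> is precise enough for your plot.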
|
python|ode
| -1 |
1,907,722 | 4,416,185 |
PyGame code doesn't execute properly when event is executed
|
<p>I am trying to make the simplest Python code that will respond when a button is pressed on a joystick. I used code from several different examples and I still cannot get it to work. The following code will not dispatch the event when I press the trigger (or any button, for that matter):</p>
<pre><code>import pygame

joy = []

def handleJoyEvent(e):
    if e.type == pygame.JOYBUTTONDOWN:
        str = "Button: %d" % (e.dict['button'])
        if (e.dict['button'] == 0):
            print ("Pressed!\n")
    else:
        pass

def joystickControl():
    while True:
        e = pygame.event.wait()
        if (e.type == pygame.JOYBUTTONDOWN):
            handleJoyEvent(e)

# main method
def main():
    pygame.joystick.init()
    pygame.display.init()
    for i in range(pygame.joystick.get_count()):
        myjoy = pygame.joystick.Joystick(i)
        myjoy.init()
        joy.append(myjoy)

    # run joystick listener loop
    joystickControl()

# allow use as a module or standalone script
if __name__ == "__main__":
    main()
</code></pre>
|
<p>I assume you've tried leaving off the <code>if</code> and just printing <code>str</code>?
Your joystick might also not be working properly. Does it work in other programs?<br/></p>
<p>If you are using linux you might need to install a joystick driver. For Windows, check the device manager.</p>
|
python|pygame
| 1 |
1,907,723 | 4,549,889 |
Why is my multithreaded python script that uses Queue , threading.Thread and subprocess so flaky
|
<p>I have three shell scripts P1 , P2 and P3 which I am trying to chain. These three shell scripts need to be run in series , but at any given time multiple P1s and P2s and P3s can be running.</p>
<p>I need to run these on tens of files and quickly and hence the desire to use Threads and do things in parallel.</p>
<p>I am using the python Thread , Queue and subprocess module to achieve this. </p>
<p>My problem is when I have a thread count of greater than one, the program behaves erratically and the threads don't hand off to each other in a reproducible manner. Sometimes all five threads work perfectly and run to completion.</p>
<p>This is my first attempt at doing something using threads and I am certain this is because of the usual issues with Threads involving race conditions. But I want to know how I can go about cleaning up my code.</p>
<p>The actual code is at (https://github.com/harijay/xtaltools/blob/master/process_multi.py). Pseudocode is given below. Sorry if the code is messy.</p>
<p>My question is Why do I have erratic behavior using this design. The Threads are all accessing different files at any given time. Also subprocess.call returns only when the shell script is finished and the file it produces is written to disk.</p>
<p>What can I do differently?
I have tried to explain my design here as succinctly as possible.</p>
<p>My Basic design:</p>
<pre><code>P1_Queue = Queue()
P2_Queue = Queue()
P3_Queue = Queue()

class P1_Thread(Thread):
    def __init__(self,P1_Queue,P2_Queue):
        Thread.__init__(self)
        self.in_queue = P1_Queue
        self.out_queue = P2_Queue

    def run(self):
        while True:
            my_file_to_process = self.in_queue.get()
            if my_file_to_process is None:
                break
            P1_runner = P1_Runner(my_file_to_process)
            P1_runner.run_p1_using_subprocess()
            self.out_queue.put(my_file_to_process)
</code></pre>
<p>The class p1 Runner takes the input file handle and then calls the subprocess.call() to run a shell script that uses the file input and produces a new output file using a run_p1_using_subprocess method.</p>
<pre><code>class P1_runner(object):

    def __init__(self,inputfile):
        self.my_shell_script = """#!/usr/bin/sh
prog_name <<eof
input 1
...
eof"""
        self.my_shell_script_file = open("some_unique_p1_file_name.sh")
        os.chmod("some_unique_file_name.sh",0755)

    def run_p1_using_subprocess(self):
        subprocess.call([self.my_shell_script_file])
</code></pre>
<p>I have essentially similar classes for P2 and P3, all of which call a shell script that is custom generated.
The chaining is achieved using a series of thread pools.</p>
<pre><code>p1_worker_list = []
p2_worker_list = []
p3_worker_list = []

for i in range(THREAD_COUNT):
    p1_worker = P1_Thread(P1_Queue,P2_Queue)
    p1_worker.start()
    p1_worker_list.append(p1_worker)

for worker in p1_worker_list:
    worker.join()
</code></pre>
<p>And then again the same code block for p2 and p3:</p>
<pre><code>for i in range(THREAD_COUNT):
    p2_worker = P2_Thread(P2_Queue,P3_Queue)
    p2_worker.start()
    p2_worker_list.append(p1_worker)

for worker in p2_worker_list:
    worker.join()
</code></pre>
<p>Thanks a tonne for your help/advice</p>
|
<p>Well this is really bad:</p>
<pre><code>runner.run()
</code></pre>
<p>You shouldn't ever call a thread's run method manually. You start a thread with .start(). Your code is a HUGE mess and no one here is going to wade through it to find your error.</p>
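<p>To make the <code>.start()</code> vs <code>.run()</code> difference concrete, here is a minimal sketch (the worker function and thread names are hypothetical, not from the question):</p>

```python
import threading

results = []

def work():
    # record which thread actually executes the target
    results.append(threading.current_thread().name)

t1 = threading.Thread(target=work, name="worker-1")
t1.run()     # WRONG: runs the target synchronously in the calling (main) thread

t2 = threading.Thread(target=work, name="worker-2")
t2.start()   # RIGHT: spawns a new thread to execute the target
t2.join()

print(results)  # ['MainThread', 'worker-2']
```

Calling <code>run()</code> directly gives you no concurrency at all, which can easily look like "flaky" hand-off behavior between workers.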
|
python|multithreading|subprocess
| 2 |
1,907,724 | 69,611,344 |
Open CV imshow() - ARGB32 to OpenCV image
|
<p>I am trying to process images from Unity3D WebCamTexture graphics format(ARGB32) using OpenCV Python. But I am having trouble interpreting the image on the Open CV side. The image is all Blue (possibly due to ARGB)</p>
<pre><code>try:
    while(True):
        data = sock.recv(480 * 640 * 4)
        if(len(data) == 480 * 640 * 4):
            image = numpy.fromstring(data, numpy.uint8).reshape( 480, 640, 4 )
            #imageNoAlpha = image[:,:,0:2]
            cv2.imshow('Image', image) #further do image processing
        key = cv2.waitKey(1) & 0xFF
        if key == ord("q"):
            break
finally:
    sock.close()
</code></pre>
<p><a href="https://i.stack.imgur.com/LkOlT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LkOlT.png" alt="enter image description here" /></a></p>
|
<p>The reason is the order of the channels. The sender probably reads the image as RGB while you show it as BGR, or vice versa.
Changing the order of the R and B channels will solve the problem:</p>
<pre class="lang-py prettyprint-override"><code>image = image[..., [0,3,2,1]] # swap channels 1 and 3 (R and B)
</code></pre>
<p>You will meet this problem frequently if you work with <code>PIL.Image</code> and <code>OpenCV</code>. The <code>PIL.Image</code> will read the image as RGB and <code>cv2</code> will read as BGR, that's why all the red points in your image become blue.</p>
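<p>A small sketch of that channel swap on a single hypothetical ARGB pixel (the pixel values are made up for illustration):</p>

```python
import numpy as np

# one hypothetical ARGB pixel: A=255, R=200, G=50, B=10
image = np.array([[[255, 200, 50, 10]]], dtype=np.uint8)
swapped = image[..., [0, 3, 2, 1]]   # swap channels 1 and 3 (R and B)
print(swapped[0, 0])                 # [255  10  50 200]
```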
|
python|python-3.x|numpy|opencv|opencv-python
| 3 |
1,907,725 | 48,182,791 |
How do you lightness thresh hold with HSL on OpenCV?
|
<p>There is a project that I'm working on which requires detecting the color white. After some research I decided to convert the RGB image to an HSL image and threshold the lightness to get the color white. I'm working with OpenCV, so I wonder if there is a way to do it.
<a href="https://i.stack.imgur.com/Sd3MJ.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>You can do it with 4 easy steps:</p>
<p>Convert HLS</p>
<pre><code>img = cv2.imread("HLS.png")
imgHLS = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
</code></pre>
<p>Get the L channel</p>
<pre><code>Lchannel = imgHLS[:,:,1]
</code></pre>
<p>Create the mask</p>
<pre><code>#change 250 to lower numbers to include more values as "white"
mask = cv2.inRange(Lchannel, 250, 255)
</code></pre>
<p>Apply Mask to original image</p>
<pre><code>res = cv2.bitwise_and(img,img, mask= mask)
</code></pre>
<p>This also depends on what is white for you, and you may change the values :) I used inRange in the L channel but you can save one step and do</p>
<pre><code>mask = cv2.inRange(imgHLS, np.array([0,250,0]), np.array([255,255,255]))
</code></pre>
<p>instead of the lines:</p>
<pre><code>Lchannel = imgHLS[:,:,1]
mask = cv2.inRange(Lchannel, 250, 255)
</code></pre>
<p>It is shorter, but I did it the other way first to make it more explicit and to show what I was doing.</p>
<p>Image:</p>
<p><a href="https://i.stack.imgur.com/v5iIW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/v5iIW.png" alt="enter image description here"></a></p>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/jbo27.png" rel="noreferrer"><img src="https://i.stack.imgur.com/jbo27.png" alt="enter image description here"></a></p>
<p>The result looks almost as the mask (almost binary), but depending on your lowerbound (I chose 250) you may get even some almost white colors.</p>
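<p>If you want to see what <code>cv2.inRange</code> computes on the L channel, here is the equivalent in plain NumPy on a hypothetical 2×2 channel (a sketch of my own, not part of the original answer):</p>

```python
import numpy as np

# hypothetical 2x2 lightness (L) channel
Lchannel = np.array([[255, 120],
                     [251,  10]], dtype=np.uint8)

# what cv2.inRange(Lchannel, 250, 255) computes: 255 inside the range, 0 outside
mask = ((Lchannel >= 250) & (Lchannel <= 255)).astype(np.uint8) * 255
print(mask.tolist())  # [[255, 0], [255, 0]]
```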
|
python|opencv|image-processing|colors|hsl
| 12 |
1,907,726 | 48,102,882 |
How to implement the derivative of Leaky Relu in python?
|
<p>How would I implement the derivative of Leaky ReLU in Python without using Tensorflow?</p>
<p>Is there a better way than this? I want the function to return a numpy array</p>
<pre><code>def dlrelu(x, alpha=.01):
    # return alpha if x < 0 else 1
    return np.array([1 if i >= 0 else alpha for i in x])
</code></pre>
<p>Thanks in advance for the help</p>
|
<p>The method you use works, but strictly speaking you are computing the derivative with respect to the loss, or lower layer, so it might be wise to also pass the value from lower layer to compute the derivative (dl/dx). </p>
<p>Anyway, you can avoid using the loop which is more efficient for large <code>x</code>. This is one way to do it:</p>
<pre><code>def dlrelu(x, alpha=0.01):
    dx = np.ones_like(x)
    dx[x < 0] = alpha
    return dx
</code></pre>
<p>If you passed the error from lower layer, it looks like this:</p>
<pre><code>def dlrelu(dl, x, alpha=0.01):
    """ dl and x have same shape. """
    dx = np.ones_like(x)
    dx[x < 0] = alpha
    return dx*dl
</code></pre>
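<p>A quick usage check of the vectorised version on a sample input (the sample values are my own):</p>

```python
import numpy as np

def dlrelu(x, alpha=0.01):
    dx = np.ones_like(x)   # ones_like keeps x's float dtype
    dx[x < 0] = alpha      # boolean-mask assignment, no Python loop
    return dx

x = np.array([-2.0, 0.0, 3.0])
print(dlrelu(x))  # [0.01 1.   1.  ]
```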
|
python|neural-network|activation-function
| 11 |
1,907,727 | 48,086,938 |
My program isn't showing 2 answers in a loop that stops only when two answers are given
|
<p>I am trying to build a BlackJack game. </p>
<p>The program starts, and uses the random module to generate 2 different numbers which tells you which card you have. </p>
<ul>
<li>Number 1 is the value, e.g. 8</li>
<li>Number 2 is the suit, e.g. 4 = Spades</li>
</ul>
<p>The program gives me 2 cards sometimes, and the other half the time it gives me one card. </p>
<p><strong>Question:</strong> Why is this program not working as desired?</p>
<pre><code>types = [" of Spades", " of Diamonds", " of Clubs", " of Hearts"]
special = ["King", "Queen", "Jack", "Ace"]
tries = 0
import random
import time
print("Welcome to BlackJack.")
time.sleep(1)
print("Let's Begin.")
time.sleep(1)
while tries < 1:
    cardnumber1 = random.randint(2, 13)
    random.shuffle(types)
    random.shuffle(special)
    cardnumber2 = random.randint(2, 13)
    random.shuffle(special)
    random.shuffle(types)
    cardnumber3 = random.randint(2, 9)
    random.shuffle(special)
    random.shuffle(types)
    cardnumber4 = random.randint(2, 9)
    random.shuffle(special)
    random.shuffle(types)
    cardtype1 = random.choice(types)
    random.shuffle(special)
    random.shuffle(types)
    cardtype2 = random.choice(types)
    random.shuffle(special)
    random.shuffle(types)
    cardtype3 = random.choice(types)
    random.shuffle(special)
    random.shuffle(types)
    cardtype4 = random.choice(types)
    random.shuffle(special)
    random.shuffle(types)
    cardspecial = random.choice(special)
    random.shuffle(special)
    random.shuffle(types)
    cardspecial2 = random.choice(special)
    random.shuffle(special)
    random.shuffle(types)
    if cardnumber1 > 10:
        print(str(cardnumber3) + cardtype1)
        tries = tries + 1
    if cardnumber2 > 10:
        print(str(cardnumber4) + cardtype2)
        tries = tries + 1
    if cardnumber1 < 9:
        print(cardspecial + cardtype3)
        tries = tries + 1
    if cardnumber2 < 9:
        print(cardspecial2 + cardtype4)
        tries = tries + 1
</code></pre>
|
<p>You could declare your lists fully: </p>
<pre><code>types = [" of Spades", " of Diamonds", " of Clubs", " of Hearts"]
cards = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King", "Ace"]
</code></pre>
<p>And create a full deck of them:</p>
<pre><code>deck = [ (card, typ) for card in cards for typ in types] # create tuples
print (deck)
</code></pre>
<p>To draw a card:</p>
<pre><code>card = random.choice(deck) # will return a tuple for 1 card, 1st value is face, 2nd is typ
deck.remove(card) # remove the card from the deck so you dont draw it again
face,typ = card # deconstruct tuple
print(face)
print(typ)
</code></pre>
<p>Outputs:</p>
<pre><code> [('2', ' of Spades'), ('2', ' of Diamonds'), ('2', ' of Clubs'), ('2', ' of Hearts'),
('3', ' of Spades'), ('3', ' of Diamonds'), ('3', ' of Clubs'), ('3', ' of Hearts'),
('4', ' of Spades'), ('4', ' of Diamonds'), ('4', ' of Clubs'), ('4', ' of Hearts'),
('5', ' of Spades'), ('5', ' of Diamonds'), ('5', ' of Clubs'), ('5', ' of Hearts'),
('6', ' of Spades'), ('6', ' of Diamonds'), ('6', ' of Clubs'), ('6', ' of Hearts'),
('7', ' of Spades'), ('7', ' of Diamonds'), ('7', ' of Clubs'), ('7', ' of Hearts'),
('8', ' of Spades'), ('8', ' of Diamonds'), ('8', ' of Clubs'), ('8', ' of Hearts'),
('9', ' of Spades'), ('9', ' of Diamonds'), ('9', ' of Clubs'), ('9', ' of Hearts'),
('10', ' of Spades'), ('10', ' of Diamonds'), ('10', ' of Clubs'), ('10', ' of Hearts'),
('Jack', ' of Spades'), ('Jack', ' of Diamonds'), ('Jack', ' of Clubs'), ('Jack', ' of Hearts'),
('Queen', ' of Spades'), ('Queen', ' of Diamonds'), ('Queen', ' of Clubs'), ('Queen', ' of Hearts'),
('King', ' of Spades'), ('King', ' of Diamonds'), ('King', ' of Clubs'), ('King', ' of Hearts'),
('Ace', ' of Spades'), ('Ace', ' of Diamonds'), ('Ace', ' of Clubs'), ('Ace', ' of Hearts')]
10
of Diamonds
</code></pre>
<hr>
<pre><code>deck = [ (card, typ) for card in cards for typ in types]
</code></pre>
<p>is called a list comprehension. It is a way to construct lists from stuff - i.e. ranges, other lists, iterators, ...</p>
<p>Basic syntax : </p>
<ul>
<li><code>newList = [ str(x) for x in range(0,10)]</code> will use the inbuilt range(0,10) which creates numbers from 0 to 9 and makes them a string and built a list from it - Link to doku: <a href="https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/datastructures.html#list-comprehensions</a></li>
</ul>
<p>I build tuples <code>(card, typ)</code> for each element in <code>cards</code> (your list) and <code>types</code> your other list).</p>
<pre><code>[ (card, typ) for card in cards for typ in types]
# equivalent to
deck = [] # empty list
for card in cards:
    for typ in types:
        deck.append( (card,typ) ) # create tuples and add to list
</code></pre>
<p>You could use this deck and shuffle() it once, and then deck.pop() cards from it (faster, only one shuffling needed) or draw a random card every time random.choice(deck) and then remove it from the deck.</p>
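<p>A minimal sketch of that shuffle-once-then-pop approach (reusing the two lists defined above):</p>

```python
import random

types = [" of Spades", " of Diamonds", " of Clubs", " of Hearts"]
cards = ["2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King", "Ace"]
deck = [(card, typ) for card in cards for typ in types]

random.shuffle(deck)                    # shuffle the whole deck once
hand = [deck.pop() for _ in range(2)]   # deal two cards off the top
print(hand, len(deck))                  # 50 cards remain, no duplicates possible
```

Because drawn cards leave the deck, you can never deal the same card twice — which also removes the need for all the repeated <code>shuffle</code> calls in the question.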
<p>See <a href="https://docs.python.org/3/tutorial/datastructures.html#more-on-lists" rel="nofollow noreferrer">Lists</a> for methods on a list.</p>
<p>See <a href="https://docs.python.org/3/library/functions.html" rel="nofollow noreferrer">buildins</a> - you find the <code>range()</code> in that list and can get more infos.</p>
|
python
| 3 |
1,907,728 | 51,380,916 |
counting each value in dataframe
|
<p>So I want to create a plot or graph. I have time series data.
My dataframe looks like this:</p>
<p><a href="https://i.stack.imgur.com/lV6EB.png" rel="nofollow noreferrer">df.head()</a></p>
<p>I need to count the values in <code>df['status']</code> (there are 4 different values) and <code>df['group_name']</code> (2 different values) for each day.
So I want to have a date index and a count of how many times <strong>each</strong> value from <code>df['status']</code> appears, as well as from <code>df['group_name']</code>. It should return a Series.</p>
|
<p>I used <code>spam.groupby('date')['column'].value_counts().unstack().fillna(0).astype(int)</code> and it works as it should. Thank you all for your help.</p>
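<p>A minimal sketch of that pattern on made-up data (the column values below are assumptions, not from the question):</p>

```python
import pandas as pd

# hypothetical data mimicking the question's columns
spam = pd.DataFrame({
    "date":   ["2021-01-01", "2021-01-01", "2021-01-02"],
    "status": ["open", "closed", "open"],
})

# per-day counts of each status value; missing combinations become 0
counts = spam.groupby("date")["status"].value_counts().unstack().fillna(0).astype(int)
print(counts)
```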
|
pandas|plotly|python-3.6
| 0 |
1,907,729 | 64,411,304 |
gunicorn.errors.HaltServer while deploying to google cloud run for a python application
|
<p>Here is my dockerfile. I am trying to deploy this image to google cloud run using below two command</p>
<pre><code>1. gcloud builds submit --tag gcr.io/smartshop-292203/data_science --timeout=20h0m0s
2. gcloud run deploy --image gcr.io/smartshop-292203/data_science --platform managed
</code></pre>
<p>I am using this reference link</p>
<pre><code>https://cloud.google.com/run/docs/quickstarts/build-and-deploy#python
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM python:3.8
# Allow statements and log messages to immediately appear in the Knative logs
ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# update image and install cmake
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install cmake -y
RUN apt-get install libgl1-mesa-glx -y
# Install production dependencies.
RUN pip install -r requirements.txt
# command to run on container start
# CMD [ "python", "./app.py" ]
# CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
CMD exec gunicorn --bind :8001 --workers 1 --threads 8 --timeout 1500 main:app
# CMD exec gunicorn main:app --bind 0.0.0.0:8080 --worker-class aiohttp.GunicornWebWorker
# CMD exec gunicorn --bind 0.0.0.0:8080 wsgi:main
# CMD ["/bin/bash"]
</code></pre>
<p>Errors i am getting:</p>
<pre><code> "Traceback (most recent call last):
File "/usr/local/bin/gunicorn", line 8, in <module>
sys.exit(run())
File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 58, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 228, in run
super().run()
File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 227, in run
self.halt()
File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 342, in halt
self.stop()
File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 393, in stop
time.sleep(0.1)
File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 242, in handle_chld
self.reap_workers()
File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 525, in reap_workers
raise HaltServer(reason, self.WORKER_BOOT_ERROR)
gunicorn.errors.HaltServer: <HaltServer 'Worker failed to boot.' 3>"
</code></pre>
<p>What could be the issue?
Please have a look.</p>
|
<p>Try to run <code>gunicorn</code> with <code>--preload</code>. You will see the full stack trace. It seems like something is wrong at the app creation step.</p>
<p>You can also deploy your app using:
<code>gcloud beta run deploy</code></p>
<p>More info <a href="https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy" rel="nofollow noreferrer">https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy</a>.</p>
|
python|docker|google-cloud-platform|gunicorn|google-cloud-run
| 0 |
1,907,730 | 69,787,114 |
Creating a point system from a list of matches
|
<p>I have a list that contains x matches of a tournament, each match divided into y sections. Each number in the list tells me if the section was won, lost or drawn (1, -1, 0).
Every team has to play against every other team only once, so if I have 3 teams the list will look like this: index 0 = player 1 vs player 2, index 1 = player 1 vs player 3 and index 2 = player 2 vs player 3.
For example here's a list:</p>
<pre><code>matches = [[1,0,0], [1,1,-1], [0,0,1]]
</code></pre>
<p>So in matches[0] player one won one section and drew two (meaning that player 2 lost), in matches[1] player 1 won two sections and lost one (meaning that player 3 lost) and in matches[2] player 2 won one section and drew two (meaning that player 3 lost).
From this I should come up with points to assign to each team: 2 if the match was won, 1 for a draw and 0 for a loss.
In this situation the expected output would be</p>
<pre><code>[4, 2, 0]
</code></pre>
<p>I tried many solutions but never got to the expected output. If someone could help me out I would really appreciate it. Thanks!</p>
|
<p>Your point system is a bit complicated, but you can try something like this:</p>
<pre><code>matches = [[1,0,0], [1,1,-1], [0,0,1]]

def calculate_match(match: list, pos: bool): # pos = True if player goes first (player1 is first in p1 vs p2 match), False if player goes second
    return (2 if (sum(match) > 0 if pos else sum(match) < 0) else 1 if sum(match) == 0 else 0)

p1 = calculate_match(matches[0], True) + calculate_match(matches[1], True)
p2 = calculate_match(matches[0], False) + calculate_match(matches[2], True)
p3 = calculate_match(matches[1], False) + calculate_match(matches[2], False)
print(p1, p2, p3)
</code></pre>
<p>Output:</p>
<pre><code>>>> 4 2 0
</code></pre>
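<p>As an alternative sketch, the same scoring can be driven by an explicit list of player pairings, which generalizes more easily to more teams (the <code>pairs</code> list is an assumption matching the question's match ordering):</p>

```python
matches = [[1, 0, 0], [1, 1, -1], [0, 0, 1]]

def points(total):
    # total = sum of section results from the first player's point of view
    return 2 if total > 0 else (1 if total == 0 else 0)

# (player A, player B) for each match, following the question's ordering
pairs = [(0, 1), (0, 2), (1, 2)]

scores = [0, 0, 0]
for (a, b), match in zip(pairs, matches):
    s = sum(match)
    scores[a] += points(s)    # player A's view of the match
    scores[b] += points(-s)   # player B sees the opposite result
print(scores)  # [4, 2, 0]
```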
|
python
| 0 |
1,907,731 | 73,035,687 |
Pandas: Give (string + numbered) name to unknown number of added columns
|
<p>I have this example CSV file:</p>
<pre><code>Name,Dimensions,Color
Chair,!12:88:33!!9:10:50!!40:23:11!,Red
Table,!9:10:50!!40:23:11!,Brown
Couch,!40:23:11!!12:88:33!,Blue
</code></pre>
<p>I read it into a dataframe, then split <code>Dimensions</code> by <code>!</code> and take the first value of each <code>!..:..:..!</code>-section. I append these as new columns to the dataframe, and delete <code>Dimensions</code>. (code for this below)</p>
<pre><code>import pandas as pd
df = pd.read_csv("./data.csv")
df[["first","second","third"]] = (df['Dimensions']
                                  .str.strip('!')
                                  .str.split('!{1,}', expand=True)
                                  .apply(lambda x: x.str.split(':').str[0]))
df = df.drop("Dimensions", axis=1)
</code></pre>
<p>And I get this:</p>
<pre><code> Name Color first second third
0 Chair Red 12 9 40
1 Table Brown 9 40 None
2 Couch Blue 40 12 None
</code></pre>
<p>I named them <code>["first","second","third"]</code> manually here.<br />
But what if there are more than 3 in the future, or only 2, or I don't know how many there will be, and I want them to be named using a string + an enumerating number?<br />
Like this:</p>
<pre><code> Name Color data_0 data_1 data_2
0 Chair Red 12 9 40
1 Table Brown 9 40 None
2 Couch Blue 40 12 None
</code></pre>
<p><strong>Question:</strong> <br />
How do I make the naming automatic, based on the string "data_" so it gives each column the name "data_" + the number of the column? (So I don't have to type in names manually)</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> to extract and drop the column <code>Dimensions</code> in one step, add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.add_prefix.html" rel="nofollow noreferrer"><code>DataFrame.add_prefix</code></a> to prefix the default column names, and append to the original DataFrame by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>:</p>
<pre><code>df = (df.join(df.pop('Dimensions')
                .str.strip('!')
                .str.split('!{1,}', expand=True)
                .apply(lambda x: x.str.split(':').str[0]).add_prefix('data_')))
print (df)
print (df)
Name Color data_0 data_1 data_2
0 Chair Red 12 9 40
1 Table Brown 9 40 None
2 Couch Blue 40 12 None
</code></pre>
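<p>To see what <code>add_prefix</code> contributes in isolation, here is a small sketch on default integer column labels (the data values are hypothetical):</p>

```python
import pandas as pd

# splitting with expand=True yields default integer column labels 0, 1, ...
df = pd.DataFrame([[12, 9], [9, 40]])
print(df.add_prefix("data_").columns.tolist())  # ['data_0', 'data_1']
```

Because the prefix is applied to however many columns exist, the naming stays automatic whether the split produces 2 columns or 20.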
|
python|pandas
| 1 |
1,907,732 | 55,917,803 |
Renaming column values in Pandas in alphabetical order
|
<p>I have a large data set with a column that contains personal names; in total there are 60 names according to <code>value_counts()</code>. I don't want to show those names when I analyze the data; instead I want to rename them to <i>participant_1, ... ,participant_60</i>. </p>
<p>I also want to rename the values in alphabetical order so that I will be able to find out who is <i>participant_1</i> later. </p>
<p>I started with create a list of new names:</p>
<pre><code>newnames = [f"participant_{i}" for i in range(1,61)]
</code></pre>
<p>Then I tried to use the function <code>df.replace</code>:</p>
<pre><code>df.replace('names', 'newnames')
</code></pre>
<p>However, I don't know where to specify that I want <i>participant_1</i> to replace the name that comes first in alphabetical order. Any suggestions or better solutions?</p>
|
<p>If need replace values in column in alphabetical order use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Categorical.codes.html" rel="nofollow noreferrer"><code>Categorical.codes</code></a>:</p>
<pre><code>df = pd.DataFrame({
'names':list('bcdada'),
})
df['new'] = [f"participant_{i}" for i in pd.Categorical(df['names']).codes + 1]
#alternative solution
#df['new'] = [f"participant_{i}" for i in pd.CategoricalIndex(df['names']).codes + 1]
print (df)
names new
0 b participant_2
1 c participant_3
2 d participant_4
3 a participant_1
4 d participant_4
5 a participant_1
</code></pre>
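<p>A quick sketch of why this works: <code>Categorical.codes</code> numbers the categories in sorted (alphabetical) order, so the code is stable regardless of where a name first appears:</p>

```python
import pandas as pd

names = pd.Series(list("bcdada"))
# categories are sorted: ['a', 'b', 'c', 'd'] -> codes 0, 1, 2, 3
codes = pd.Categorical(names).codes + 1
print(codes.tolist())  # [2, 3, 4, 1, 4, 1]
```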
|
python|pandas
| 3 |
1,907,733 | 55,639,402 |
Selenium - How to adjust the mouse speed when moving a slider?
|
<p>I've got this code to bypass captcha basically:</p>
<pre><code>#!/usr/bin/python
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.action_chains import ActionChains
import time
import sys

try:
    driver = webdriver.Chrome()
    driver.get(sys.argv[1])
    time.sleep(2)
    slider = driver.find_element_by_id('nc_2_n1z')
    move = ActionChains(driver)
    move.click_and_hold(slider).move_by_offset(400, 0).release().perform()
    time.sleep(5)
    driver.close()
except:
    pass
</code></pre>
<p>Everything works but when I execute this code, it moves the slider very fast (probably less than 1 second) so I can't bypass the <code>Slide to verify</code> captcha. From start to finish moving the slider, I want it to take 3-5 seconds so it'll act more like a human when moving the slider. Is it possible to adjust the speed when moving the slider ?</p>
|
<p>You can try this by splitting the line below:</p>
<pre><code>move.click_and_hold(slider).move_by_offset(400, 0).release().perform()
</code></pre>
<p>You have to click and hold for the desired number of seconds, then release:</p>
<pre><code>move.click_and_hold(slider).perform()
time.sleep(2)
move.move_by_offset(400, 0).release().perform()
</code></pre>
<p>However, I am not sure this can bypass the captcha, as most modern captchas can figure out if you are running a script.</p>
|
python|selenium
| 0 |
1,907,734 | 73,375,258 |
How to use tf.gather with index vector that may contain out-of-range indices?
|
<p>I have an index vector that may contain negative entries. How can I use this in <code>tf.gather</code>? My approach</p>
<pre><code>params = tf.constant(range(5))
idx = tf.constant([-1, 1, 2])
tf.where(
condition = idx >= 0,
x = tf.gather(params, idx),
y = -1
)
</code></pre>
<p>throws</p>
<blockquote>
<p>InvalidArgumentError: indices[0] = -1 is not in [0, 5) [Op:GatherV2]</p>
</blockquote>
<p>because the <code>x</code> branch is evaluated for all elements. I do not want to remove the invalid indices because I need to retain the positional information, i.e. the desired output is <code>[-1, 1, 2]</code> (rather than <code>[1, 2]</code>, which I would get by discarding the invalid indices).</p>
|
<p>You can do it as follows</p>
<pre><code>tf.where(idx >= 0, tf.gather(params, tf.where(idx >= 0, idx, 0)), -1)
</code></pre>
<p>Output</p>
<pre><code><tf.Tensor: shape=(3,), dtype=int32, numpy=array([-1, 1, 2])>
</code></pre>
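<p>For readers without TensorFlow at hand, the same clamp-then-mask trick can be sketched in plain NumPy (this translation is my own, not part of the original answer):</p>

```python
import numpy as np

params = np.arange(5)
idx = np.array([-1, 1, 2])

safe = np.where(idx >= 0, idx, 0)           # clamp invalid indices to a valid one
out = np.where(idx >= 0, params[safe], -1)  # then mask the clamped entries back out
print(out)  # [-1  1  2]
```

The inner clamp exists only so the gather never sees an out-of-range index; the outer mask restores <code>-1</code> at those positions, preserving the positional information.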
|
tensorflow|indexing
| 1 |
1,907,735 | 63,796,320 |
Add N empty rows between each value
|
<p>I have a df which sums every 5 rows in 'Costs' and puts the result into 'Sum'. Now I would like to add 4 empty rows between each value in 'Sum', so that each sum sits at the start of its block of 5 rows.
What would be the most efficient and straightforward way to achieve this?</p>
<p>Input:</p>
<pre><code> Costs | Sum
-------|-------
1000 | 10000
1500 | 22500
2000 | 35000
2500 |
3000 |
3500 |
4000 |
4500 |
5000 |
5500 |
6000 |
6500 |
7000 |
7500 |
8000 |
</code></pre>
<p>Output:</p>
<pre><code> Costs | Sum
-------|-------
1000 | 10000
1500 |
2000 |
2500 |
3000 |
3500 | 22500
4000 |
4500 |
5000 |
5500 |
6000 | 35000
</code></pre>
|
<p>IIUC, you can repeat the Sum column, check for the first of the duplicated values, then assign:</p>
<pre><code>n=5
u = df['Sum'].replace('',np.nan).dropna().repeat(n)
df['New_sum'] = np.where(~u.index.duplicated(),u,'')
</code></pre>
<hr />
<pre><code>print(df)
Costs Sum New_sum
0 1000 10000 10000
1 1500 22500
2 2000 35000
3 2500
4 3000
5 3500 22500
6 4000
7 4500
8 5000
9 5500
10 6000 35000
11 6500
12 7000
13 7500
14 8000
</code></pre>
|
python|pandas
| 2 |
1,907,736 | 64,034,170 |
dataframe pyspark update rows from previous rows
|
<p>I am using pyspark, and I have a dataframe that looks like that :</p>
<pre><code>CODE | POSITION| COL1 | COL2
A | 1 | |
A | 2 | | AAA
A | 3 | INF |
A | 4 | BIC |
A | 5 | |
B | 1 | | BBB
B | 2 | MIL |
B | 3 | |
B | 4 | | CCC
B | 5 | |
B | 6 | |
</code></pre>
<p>and I want to have that :</p>
<pre><code>CODE | POSITION| COL1 | COL2
A | 1 | |
A | 2 | | AAA
A | 3 | INF | AAA
A | 4 | BIC | AAA
A | 5 | |
B | 1 | | BBB
B | 2 | MIL | BBB
B | 3 | |
B | 4 | | CCC
B | 5 | |
B | 6 | |
</code></pre>
<p>To explain: this dataframe is grouped by "CODE" and ordered by "POSITION". For a group "CODE", when "COL2" is filled (position = 2 in this example), I need to take that value ("AAA") and put it in the following positions 3 and 4 (while COL1 is filled).</p>
<p>I know it is not that easy (for me!)</p>
<p>Thank you a lot for your help</p>
|
<p>It can be done using the <code>last</code> function.<br />
<code>F.last</code> returns the last non-null value in an ordered window.</p>
<p>Your dataframe:</p>
<pre><code>from pyspark.sql.functions import col
from pyspark.sql.functions import lag
from pyspark.sql.window import Window
from pyspark.sql import functions as F
import sys
df = sc.parallelize([['A', 1, None, None], ['A', 2, None, 'AAA'], ['A', 3, 'INF', None], ['A', 4, 'BIC', None], ['A', 5, None, None], ['B', 1, None, 'BBB'], ['B', 2, 'MIL', None], ['B', 3, None, None], ['B', 4, None, 'CCC'], ['B', 5, None, None], ['B', 6, None, None]])
df = df.toDF(['code', 'position', 'col1', 'col2'])
</code></pre>
<pre><code>w = Window.partitionBy("code").orderBy("position")
df.withColumn("col3", F.last('col2', True).over(w.rowsBetween(-sys.maxsize, 0)))\
.withColumn("col3", F.when(col("col1").isNull(), col("col2"))
.otherwise(col("col3")))\
.drop("col2").withColumnRenamed("col3", "col2")\
.orderBy("code", "position").show()
</code></pre>
<p>Output:</p>
<pre><code>+----+--------+----+----+
|code|position|col1|col2|
+----+--------+----+----+
| A| 1|null|null|
| A| 2|null| AAA|
| A| 3| INF| AAA|
| A| 4| BIC| AAA|
| A| 5|null|null|
| B| 1|null| BBB|
| B| 2| MIL| BBB|
| B| 3|null|null|
| B| 4|null| CCC|
| B| 5|null|null|
| B| 6|null|null|
+----+--------+----+----+
</code></pre>
<p>If <code>col1</code> corresponding to <code>position 6</code> is filled, it will return <code>CCC</code> in <code>col2</code>.<br />
It takes the latest non-null value in <code>col2</code> as it progresses through the window.</p>
<pre><code>+----+--------+----+----+
| B| 6| XYZ| CCC|
+----+--------+----+----+
</code></pre>
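<p>For comparison, the same last-non-null-then-mask idea can be sketched in pandas (this is my own translation for illustration, not part of the original PySpark answer):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "code": list("AAAAABBBBBB"),
    "position": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6],
    "col1": [None, None, "INF", "BIC", None, None, "MIL", None, None, None, None],
    "col2": [None, "AAA", None, None, None, "BBB", None, None, "CCC", None, None],
})

# forward-fill col2 within each code group (analogue of F.last over the window),
# then keep the original col2 wherever col1 is null (analogue of the F.when mask)
last = df.groupby("code")["col2"].ffill()
df["col2_new"] = last.where(df["col1"].notna(), df["col2"])
print(df["col2_new"].fillna("").tolist())
```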
|
python|dataframe|pyspark
| 0 |
1,907,737 | 53,348,203 |
Scrapy: multiple "start_urls" yield duplicated results
|
<p>Although my simple code seems OK according to <a href="https://doc.scrapy.org/en/latest/topics/spiders.html" rel="nofollow noreferrer">the official document</a>, it generates unexpectedly duplicated results such as: </p>
<ul>
<li>9 rows/results when setting 3 URLs</li>
<li>4 rows/ results when setting 2 URLs</li>
</ul>
<p>When I set just 1 URL, my code works fine. Also, I tried <a href="https://stackoverflow.com/questions/51869266/python-scrapy-start-urls">the answer solution in this SO question</a>, but it didn't solve my issue.</p>
<p>[Scrapy command]</p>
<pre><code>$ scrapy crawl test -o test.csv
</code></pre>
<p>[Scrapy spider: test.py]</p>
<pre><code>import scrapy
from ..items import TestItem

class TestSpider(scrapy.Spider):
    name = 'test'

    start_urls = [
        'file:///Users/Name/Desktop/tutorial/test1.html',
        'file:///Users/Name/Desktop/tutorial/test2.html',
        'file:///Users/Name/Desktop/tutorial/test3.html',
    ]

    def parse(self, response):
        for url in self.start_urls:
            table_rows = response.xpath('//table/tbody/tr')
            for table_row in table_rows:
                item = TestItem()
                item['test_01'] = table_row.xpath('td[1]/text()').extract_first()
                item['test_02'] = table_row.xpath('td[2]/text()').extract_first()
                yield item
</code></pre>
<p>[Target HTML: test1.html, test2.html, test3.html]</p>
<pre><code><html>
  <head>
    <title>test2</title> <!-- Same as the file name -->
  </head>
  <body>
    <table>
      <tbody>
        <tr>
          <td>test2 A1</td> <!-- Same as the file name -->
          <td>test2 B1</td> <!-- Same as the file name -->
        </tr>
      </tbody>
    </table>
  </body>
</html>
</code></pre>
<p>[Generated CSV results for 3 URLs]</p>
<pre><code>test_01,test_02
test1 A1,test1 B1
test1 A1,test1 B1
test1 A1,test1 B1
test2 A1,test2 B1
test2 A1,test2 B1
test2 A1,test2 B1
test3 A1,test3 B1
test3 A1,test3 B1
test3 A1,test3 B1
</code></pre>
<p>[Expected results for 3 URLs]</p>
<pre><code>test_01,test_02
test1 A1,test1 B1
test2 A1,test2 B1
test3 A1,test3 B1
</code></pre>
<p>[Generated CSV results for 2 URLs]</p>
<pre><code>test_01,test_02
test1 A1,test1 B1
test1 A1,test1 B1
test2 A1,test2 B1
test2 A1,test2 B1
</code></pre>
<p>[Expected results for 2 URLs]</p>
<pre><code>test_01,test_02
test1 A1,test1 B1
test2 A1,test2 B1
</code></pre>
|
<p>You are iterating over the <code>start_urls</code> again inside <code>parse</code>. You don't need to do that: Scrapy already does it for you, so you end up looping over the <code>start_urls</code> twice.</p>
<p>Try that instead:</p>
<pre><code>import scrapy
from ..items import TestItem

class TestSpider(scrapy.Spider):
    name = 'test'

    start_urls = [
        'file:///Users/Name/Desktop/tutorial/test1.html',
        'file:///Users/Name/Desktop/tutorial/test2.html',
        'file:///Users/Name/Desktop/tutorial/test3.html',
    ]

    def parse(self, response):
        table_rows = response.xpath('//table/tbody/tr')

        for table_row in table_rows:
            item = TestItem()
            item['test_01'] = table_row.xpath('td[1]/text()').extract_first()
            item['test_02'] = table_row.xpath('td[2]/text()').extract_first()
            yield item
</code></pre>
|
python|scrapy
| 1 |
1,907,738 | 65,131,377 |
Method without arguments or parenthesis for Scipy odeint
|
<p>Help, please - I can't understand my own code! lol
I'm fairly new to Python, and after many trials and errors I got my code to work, but there is one particular part of it I don't understand.</p>
<p>In the code below, I'm solving a fairly basic ODE through scipy's odeint-function. My goal is then to build on this blue-print for more complicated systems.</p>
<p><strong>My question(s):</strong> How could I call the method <code>.reaction_rate_simple</code> without any arguments and without the closing parenthesis? What does this mean in python? Should I use a static method here somewhere?</p>
<p>If anyone has any feedback on this - maybe this is a crappy piece of code and there's a better way of solving it!</p>
<p>I am very thankful for any response and help!</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

class batch_reator:

    def __init__(self, C_init, volume_reactor, time_init, time_end):
        self.C_init = C_init
        self.volume = volume_reactor
        self.time_init = time_init
        self.time_end = time_end
        self.operation_time = (time_end - time_init)

    def reaction_rate_simple(self, concentration_t, t, stoch_factor, order, rate_constant):
        reaction_rate = stoch_factor * rate_constant * (concentration_t ** order)
        return reaction_rate

    def equations_system(self, kinetics):
        dCdt = kinetics
        return dCdt

C_init = 200
time_init, time_end = 0, 1000
rate_constant, volume_reactor, order, stoch_factor = 0.0001, 10, 1, -1
time_span = np.linspace(time_init, time_end, 100)

Batch_basic = batch_reator(C_init, volume_reactor, time_init, time_end)
kinetics = Batch_basic.reaction_rate_simple
sol = odeint(Batch_basic.equations_system(kinetics), Batch_basic.C_init, time_span, args=(stoch_factor, order, rate_constant))

plt.plot(time_span, sol)
plt.show()
</code></pre>
|
<p>I assume you are referring to the line</p>
<pre><code>kinetics = Batch_basic.reaction_rate_simple
</code></pre>
<p>You are not calling it, you are saving the method as a variable and then passing that method to <code>equations_system(...)</code>, which simply returns it. I am not familiar with odeint, but according to the documentation, it accepts a callable, which is what you are giving it.</p>
<p>In Python, functions, lambdas, and classes are all <em>callable</em>; they can be assigned to variables, passed to functions, and called as needed.</p>
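<p>A minimal illustration of that idea, independent of odeint (the class and rate law here are toy stand-ins, not the code from the question):</p>

```python
class Reactor:
    def rate(self, c, k):
        # toy rate law, stands in for reaction_rate_simple
        return -k * c

r = Reactor()
f = r.rate            # no parentheses: this stores the bound method, it does not call it

print(callable(f))    # True
print(f(2.0, 0.5))    # -1.0, same as calling r.rate(2.0, 0.5)
```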
<p>In this particular case the callback definition from the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html" rel="nofollow noreferrer">odeint docs</a> say</p>
<blockquote>
<p>func callable(y, t, …) or callable(t, y, …)
Computes the derivative of y at t. If the signature is callable(t, y, ...), then the argument tfirst must be set True.</p>
</blockquote>
<p>So the first two arguments are passed in by odeint, and the other three are coming from the arguments specified by you.</p>
|
python|numpy|math|scipy|chemistry
| 0 |
1,907,739 | 5,286,920 |
Determining Nearest Neighbour in Dijkstra
|
<p>Ok, I have changed my code a little, but I am getting confused on what variable names should be passed to my nearestNeighbour function. These two functions work ok:</p>
<pre><code>infinity = 1000000
invalid_node = -1
startNode = 0

#Values to assign to each node
class Node:
    def __init__(self):
        self.distFromSource = infinity
        self.previous = invalid_node
        self.visited = False

#read in all network nodes
#node = the distance values between nodes
def network():
    f = open ('network.txt', 'r')
    theNetwork = [[int(networkNode) for networkNode in line.split(',')] for line in f.readlines()]
    #theNetwork = [[int(node) for node in line.split(',')] for line in f.readlines()]
    #print theNetwork

    return theNetwork

#for each node assign default values
#populate table with default values
def populateNodeTable():
    nodeTable = []
    index = 0
    f = open('network.txt', 'r')
    for line in f:
        networkNode = map(int, line.split(','))
        nodeTable.append(Node())
        #print "The previous node is " ,nodeTable[index].previous
        #print "The distance from source is " ,nodeTable[index].distFromSource
        #print networkNode
        index += 1

    nodeTable[startNode].distFromSource = 0

    return nodeTable
</code></pre>
<p>So, all well and good. However, my next function is giving me an error, and despite me changing variable names in the brackets I can't work out the problem. Here is the next function code and the error message:</p>
<pre><code>def nearestNeighbour(nodeTable, theNetwork):
    listOfNeighbours = []
    nodeIndex = 0
    for networkNode in nodeTable[currentNode]:
        if networkNode != 0 and networkNode.visited == False:
            listOfNeighbours.append(nearestNode)
            nodeIndex += 1
    print listOfNeighbours
##    #print node.distFromSource, node.previous, node.visited
##
    return listOfNeighbours

    for networkNode in nodeTable[currentNode]:
TypeError: iteration over non-sequence
</code></pre>
|
<p>I think you want <code>nodeTable[node]</code>, not <code>node[nodeTable]</code>, and similarly with <code>theNetwork[node]</code>.</p>
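<p>A minimal sketch of why the error appears (names and sample values assumed from the question): <code>nodeTable[currentNode]</code> is a single <code>Node</code> instance, which is not iterable, while a row of <code>theNetwork</code> is a plain list and iterates fine:</p>

```python
class Node:
    def __init__(self):
        self.visited = False

nodeTable = [Node(), Node(), Node()]
theNetwork = [[0, 5, 3], [5, 0, 1], [3, 1, 0]]
currentNode = 0

# Iterating a single Node instance fails; this is the reported error:
try:
    for networkNode in nodeTable[currentNode]:
        pass
except TypeError as err:
    print(err)  # "... is not iterable" (worded "iteration over non-sequence" in Python 2)

# Iterating a row of the adjacency matrix works:
for distance in theNetwork[currentNode]:
    print(distance)  # 0, then 5, then 3
```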
|
python|dijkstra
| 1 |
1,907,740 | 67,228,303 |
How can we use data-table across steps?
|
<p>We have some category tiles on the home page through which the products are being filtered, say for eg:- Tile 1 belongs to clothes, once clicked the following page should show clothes, Tile 2 belongs to shoes, once clicked the following page should show shoes and so on.</p>
<p>The objective is to click tile1, verify the text and products, click tile 2 verify text and products, and so on.</p>
<p>I want to use a data table to test this scenario</p>
<p>Here is the feature file content:-</p>
<pre><code>Scenario Outline: Filtering the products on the home page
Given the user is on the homepage
When the user selects the <category> tile for filtering the products
| category |
| cat 1 |
| cat 2 |
| cat 3 |
| cat 4 |
Then only the relevant products should be displayed to the user
</code></pre>
<p>Problem is, I want to click and verify in the same step but with the above setting, I won't be able to do that. Any suggestion/inputs on how this can be done?</p>
<p>Details:- When I click the tile, a new page loads with the tile text on the left side and the relevant products. These two actions happen one after another, and hence I wanted to do this in the same step, but that doesn't look right in the context of Given/When/Then.</p>
|
<p>If you want to verify each row of the test data, the <code>category</code> table should go under <code>Examples</code>.</p>
<pre><code>Scenario Outline: Filtering the products on the home page
Given the user is on the homepage
When the user selects the <category> tile for filtering the products
Then only the relevant products should be displayed to the user
Examples:
| category |
| cat 1 |
| cat 2 |
| cat 3 |
| cat 4 |
</code></pre>
|
python|selenium|selenium-webdriver|ui-automation|python-behave
| 0 |
1,907,741 | 71,159,536 |
Change project key with bitbucket rest api
|
<p>I am trying to write a script to move repos in a project to another project but I am getting a 400 error whenever I try.</p>
<p>My python requests line looks like:</p>
<pre><code>url = 'https://bitbucketserver.com/rest/api/1.0/projects/example1/repos/repo1'
token = 'TokenString'
response = requests.put(url, headers={'Content-Type': 'application/json', 'Authorization': 'Bearer' + token}, data={'project': {'key': 'NEW_PROJECT'}}, verify=False)
</code></pre>
<p>I get a response 400 that says 'Unexpected character ('p' (code112)): expected a valid value (number, string, array, object, true, false, or null) at [Source: com.atlassian.stash.internal.web.util.web.CountingServletInputStream@7ccd7631; line 1, column 2]</p>
<p>I'm not sure where my syntax is wrong</p>
|
<p>Not Python, but this works for me via curl:</p>
<pre><code>curl -u 'USER:PASSWORD' --request PUT \
--url 'https://stash.vsegda.da/rest/api/1.0/projects/OLD_PROJECT/repos/REPO_TO_MOVE' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{
"project": {"key":"NEW_PROJECT"}
}'
</code></pre>
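<p>For the Python side, the 400 is consistent with the server receiving a form-encoded body: passing a dict to requests' <code>data=</code> parameter form-encodes it, so the body starts with <code>project=...</code>, which matches the complaint about the character <code>'p'</code>. <code>'Bearer' + token</code> is also missing a space. A hedged stdlib sketch of the equivalent request follows (the URL and token are placeholders from the question; nothing is actually sent here):</p>

```python
import json
import urllib.request

url = "https://bitbucketserver.com/rest/api/1.0/projects/example1/repos/repo1"
token = "TokenString"  # placeholder

payload = json.dumps({"project": {"key": "NEW_PROJECT"}}).encode("utf-8")
req = urllib.request.Request(
    url,
    data=payload,  # raw JSON bytes, not a form-encoded dict
    method="PUT",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer " + token,  # note the space after "Bearer"
    },
)
# urllib.request.urlopen(req) would actually send it; with the requests
# library, the equivalent fix is requests.put(url, json={...}, headers=...).
```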
<p>Maybe it helps someone.</p>
|
python-requests|bitbucket|bitbucket-api
| 0 |
1,907,742 | 71,252,052 |
Find users most frequent recommandations based on input queries
|
<p>I have a <code>input query</code> table in the following:</p>
<pre><code> query
0 orange
1 apple
2 meat
</code></pre>
<p>which I want to make against the <code>user query</code> table as following</p>
<pre><code> user query
0 a1 orange
1 a1 strawberry
2 a1 pear
3 a2 orange
4 a2 strawberry
5 a2 lemon
6 a3 orange
7 a3 banana
8 a6 meat
9 a7 beer
10 a8 juice
</code></pre>
<p>Given a query in <code>input query</code>, I want to match it to query by other user in <code>user query</code> table, and return the top 3 ranked by total number of counts.</p>
<p>For example, for
<code>orange</code> in <code>input query</code>: it matches users <code>a1</code>, <code>a2</code>, <code>a3</code> in <code>user query</code>, all of whom have queried <code>orange</code>. The other items they have queried are <code>strawberry</code> (count of 2), <code>pear</code>, <code>lemon</code>, <code>banana</code> (count of 1 each).</p>
<p>The answer will be <code>strawberry</code> (since it has the max count), then <code>pear</code>, <code>lemon</code> (since we only return the top 3).</p>
<p>Similar reasoning for <code>apple</code> (no user query therefore output 'nothing') and <code>meat</code> query.</p>
<p>So the final <code>output table</code> is</p>
<pre><code> query recommend
0 orange strawberry
1 orange pear
2 orange lemon
3 apple nothing
4 meat nothing
</code></pre>
<p>Here is the code</p>
<pre><code>import pandas as pd
import numpy as np
# Create sample dataframes
df_input = pd.DataFrame( {'query': {0: 'orange', 1: 'apple', 2: 'meat'}} )
df_user = pd.DataFrame( {'user': {0: 'a1', 1: 'a1', 2: 'a1', 3: 'a2', 4: 'a2', 5: 'a2', 6: 'a3', 7: 'a3', 8: 'a6', 9: 'a7', 10: 'a8'}, 'query': {0: 'orange', 1: 'strawberry', 2: 'pear', 3: 'orange', 4: 'strawberry', 5: 'lemon', 6: 'orange', 7: 'banana', 8: 'meat', 9: 'beer', 10: 'juice'}} )
target_users = df_user[df_user['query'].isin(df_input['query'])]['user']
mask_users=df_user['user'].isin(target_users)
mask_queries=df_user['query'].isin(df_input['query'])
df1=df_user[mask_users & mask_queries]
df2=df_user[mask_users]
df=df1.merge(df2,on='user').rename(columns={"query_x":"query", "query_y":"recommend"})
df=df[df['query']!=df['recommend']]
df=df.groupby(['query','recommend'], as_index=False).count().rename(columns={"user":"count"})
df=df.sort_values(['query','recommend'],ascending=False, ignore_index=False)
df=df.groupby('query').head(3)
df=df.drop(columns=['count'])
df=df_input.merge(df,how='left',on='query').fillna('nothing')
df
</code></pre>
<p>Where <code>df</code> is the result. Is there any way to make the code more concise?</p>
|
<p>Unless there is a particular reason to favor pears over bananas (since they both count for one), I would suggest a more idiomatic way to do it:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df_input = pd.DataFrame(...)
df_user = pd.DataFrame(...)

df_input = (
    df_input
    .assign(
        recommend=df_input["query"].map(
            lambda x: df_user[
                (df_user["user"].isin(df_user.loc[df_user["query"] == x, "user"]))
                & (df_user["query"] != x)
            ]
            .value_counts(subset="query")
            .index[0:3]
            .to_list()
            if x in df_user["query"].unique()
            else "nothing"
        )
    )
    .explode("recommend")
    .fillna("nothing")
    .reset_index(drop=True)
)

print(df_input)

# Output
    query   recommend
0  orange  strawberry
1  orange      banana
2  orange       lemon
3   apple     nothing
4    meat     nothing
</code></pre>
|
python-3.x|pandas|numpy|jupyter-notebook|jupyter-lab
| 1 |
1,907,743 | 71,417,429 |
Python Recursive enumerate(), with Start Value Incrementation
|
<p>I am attempting to create a Python function which parses through a bracket representation of a binary tree and outputs a line-by-line bipartite graph representation of it, where the partitions are separated by a "|", thus:</p>
<p><strong>Binary tree bracket representation:</strong></p>
<p>(((A, B), C), D)</p>
<p><strong>Bipartite graph relationship output:</strong></p>
<p>A B | C D</p>
<p>A B C | D</p>
<p>I approached it using recursion, maintaining each bipartite relationship line in a list, taking the original bracket notation string and the starting index of parsing as input.</p>
<pre><code>def makeBipartRecursive(treeStr, index):
    bipartStr = ""
    bipartList = []
    for ind, char in enumerate(treeStr, start=index):
        if char == '(':
            # Make recursive call to makeBipartRecursive
            indx = ind
            bipartList.append(makeBipartRecursive(treeStr, indx+1))
        elif char == ')':
            group1 = treeStr[0:index-1] + treeStr[ind+1::]
            group2 = treeStr[index:ind]
            return group1 + " | " + group2
        elif char == ',':
            bipartStr += " "
        else:
            # Begin construction of string
            bipartStr += char
</code></pre>
<p>Each time an open-parenthesis is encountered, a recursive call is made, beginning enumeration at the index immediately following the open-parenthesis, so as to prevent infinite recursion (Or so I think). If possible, try to ignore the fact that I'm not actually returning a list. <strong>The main issue is that I encounter infinite recursion, where the enumeration never progresses beyond the first character in my string.</strong> Should my recursive call with the incremented start position for the enumeration not fix this?</p>
<p>Thanks in advance.</p>
|
<p>You are misinterpreting the use of the <code>start</code> parameter of <code>enumerate</code>. It does not mean <em>start the enumeration at this position</em> but <em>start counting from this index</em>. See <code>help(enumerate)</code>:</p>
<pre><code>| The enumerate object yields pairs containing a count (from start, which
| defaults to zero) and a value yielded by the iterable argument.
</code></pre>
<p>So basically each time you perform a recursive call you start again from the beginning of your string.</p>
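<p>A quick demonstration of the difference:</p>

```python
s = "abcde"

print(list(enumerate(s, start=2)))
# [(2, 'a'), (3, 'b'), (4, 'c'), (5, 'd'), (6, 'e')]
# counting starts at 2, but iteration still begins at the first character

print(list(enumerate(s[2:], start=2)))
# [(2, 'c'), (3, 'd'), (4, 'e')]
# slicing first is what actually skips characters while keeping indices aligned
```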
|
python|recursion|graph|binary-tree|bipartite
| 1 |
1,907,744 | 71,207,541 |
How to group a pandas DataFrame by month? Trying its output to have a index with actual last days
|
<p>I have the following DataFrame, and like to group by month.</p>
<pre><code>import pandas as pd
import numpy as np
idx = pd.date_range(start='2001-01-01', end='2002-01-01', periods = 80)
df = pd.DataFrame(np.random.rand(160).reshape(80,2), index=idx.normalize(), columns=['a','b'])
</code></pre>
<p>With the following code, I can group <code>df</code> by month, but its index labels are the last days of each <em><strong>calendar</strong></em> month, not the last days of each month present in <code>df</code>.</p>
<pre><code>k = df.resample('M').apply(lambda x: x[-1])
k1 = df.groupby(pd.Grouper(freq='M')).last()
</code></pre>
<p>For example, <code>df.loc['2001-01'].index[-1]</code> is <code>Timestamp('2001-01-28 00:00:00')</code>, but not <code>Timestamp('2001-01-31 00:00:00')</code>. However, <code>k</code> and <code>k1</code> includes <code>2001-01-31</code> as below.</p>
<pre><code> a b
2001-01-31 0.521604 0.716046
2001-02-28 0.584479 0.560608
2001-03-31 0.201605 0.860491
2001-04-30 0.077426 0.711042
2001-05-31 0.544708 0.865880
2001-06-30 0.755516 0.863443
2001-07-31 0.266727 0.107859
2001-08-31 0.683754 0.098337
2001-09-30 0.586217 0.697163
2001-10-31 0.742394 0.160754
2001-11-30 0.655662 0.400128
2001-12-31 0.902192 0.580582
2002-01-31 0.878815 0.555669
</code></pre>
<p>In other words, I'd like to group <code>df</code> by month, with the grouped <code>df</code> having index labels of the last days of each month present in <code>df</code>, rather than the last dates of each <em><strong>calendar</strong></em> month.</p>
|
<p>Let us try with <code>duplicated</code> after sorting the index:</p>
<pre><code>df = df.sort_index()
out = df[~df.index.strftime('%Y-%m').duplicated(keep='last')]
Out[242]:
a b
2001-01-28 0.984408 0.923390
2001-02-25 0.108587 0.797240
2001-03-29 0.058016 0.025948
2001-04-26 0.095034 0.226460
2001-05-28 0.386954 0.419999
2001-06-30 0.535202 0.576777
2001-07-27 0.389711 0.706282
2001-08-29 0.270434 0.342087
2001-09-30 0.190336 0.872519
2001-10-28 0.333673 0.832585
2001-11-29 0.651579 0.751776
2001-12-27 0.649476 0.748410
2002-01-01 0.670143 0.389339
</code></pre>
|
python|pandas|group
| 1 |
1,907,745 | 60,797,592 |
Google-Kick Start, ROUND-A, Allocation:- Test set skipped
|
<p>I participated in Kick Start and attempted this question:</p>
<p><strong>Problem</strong></p>
<p>There are N houses for sale. The i-th house costs Ai dollars to buy. You have a budget of B dollars to spend.</p>
<p>What is the maximum number of houses you can buy?</p>
<p><strong>Input</strong></p>
<p>The first line of the input gives the number of test cases, T. T test cases follow. Each test case begins with a single line containing the two integers N and B. The second line contains N integers. The i-th integer is Ai, the cost of the i-th house.</p>
<p><strong>Output</strong></p>
<p>For each test case, output one line containing Case #x: y, where x is the test case number (starting from 1) and y is the maximum number of houses you can buy.</p>
<p>Full question:- <a href="https://codingcompetitions.withgoogle.com/kickstart/round/000000000019ffc7/00000000001d3f56" rel="nofollow noreferrer">https://codingcompetitions.withgoogle.com/kickstart/round/000000000019ffc7/00000000001d3f56</a></p>
<p>AND this was my code</p>
<pre><code>t = int(input())
if 1 <= t <= 100:
    for case in range(t):
        n, b = map(int, input().split())
        a = map(int, input().split())
        s, c = 0, 0
        for i in sorted(a):
            s += i
            if s <= b:
                c += 1
            else:
                print("Case #{0}: {1}".format(case+1, c))
                break
</code></pre>
<p>I kept on getting test set skipped, I just want to know what is wrong with my code?</p>
<p>Is there any possible test set where this solution won't work?</p>
|
<blockquote>
<p>Here is the tested solution -</p>
</blockquote>
<pre><code>tc = int(input())
for i in range(tc):
    n, budget = map(int, input().split())
    prices = list(map(int, input().split()))
    prices.sort()
    for j in range(n, 0, -1):
        if sum(prices[: j]) <= budget:
            print("Case #" + str(i+1) + ': ' + str(len(prices[: j])))
            break
    else:
        print("Case #" + str(i+1) + ': 0')
</code></pre>
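<p>One likely reason the original code was judged wrong: when the budget covers every house, the <code>break</code> branch never runs, so nothing is printed for that case. A hedged sketch of the counting logic as a standalone function (I/O handling left out), using a running sum instead of re-summing slices:</p>

```python
def max_houses(budget, prices):
    count, spent = 0, 0
    for price in sorted(prices):       # buy the cheapest houses first
        if spent + price > budget:
            break
        spent += price
        count += 1
    return count

print(max_houses(100, [20, 90, 40, 90]))   # 2
print(max_houses(50, [150, 150, 150]))     # 0  (budget covers no house)
print(max_houses(1000, [20, 40]))          # 2  (budget covers every house)
```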
|
python|python-3.x|optimization|greedy
| 0 |
1,907,746 | 72,619,062 |
is it possible to use database connection in python pyramid tween?
|
<p>I want to prepare a tracking system for my application and check every request for a special value in the query parameters. In order to do that I created a tween that checks for this special value and tries to update a specific object. Inside the tween everything looks OK, but when I check the same value in the view, the object looks not updated. Is it somehow possible to persist this update?</p>
<pre><code>def tracking_tween_factory(handler, registry):
    def tracking_tween(request):
        special_token = request.params.get('special_token')
        special_object = request.params.get('special_object')
        if special_token and special_object:
            request.dbsession.execute(
                f"update {special_object} set clicked = true where id = {special_token}")
            # only here for testing purposes, just to check if this works
            tracked_object = request.dbsession.execute(
                f"select * from {special_object} where id = {special_token}").fetchall()
            assert tracked_object[0].clicked is True  # value which should change
        return handler(request)
    return tracking_tween
</code></pre>
<p>Python 3.8
Framework Pyramid</p>
|
<p>Ok, I found a solution.</p>
<p>I had to set up my tween to run after the pyramid_tm tween.</p>
<p>so for example:</p>
<pre><code>config.add_tween('coma.tweens.tracking_tween_factory', under='pyramid_tm.tm_tween_factory')
</code></pre>
<p>in my init file for pyramid configuration.</p>
|
python|python-3.x|pyramid
| 0 |
1,907,747 | 58,896,418 |
Elasticsearch - IndicesClient.put_settings not working
|
<p>I am trying to update my original index settings.
My initial setting looks like this:</p>
<pre><code>client.create(index="movies", body={
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0,
        "analysis": {
            "filter": {
                "my_custom_stop_words": {
                    "type": "stop",
                    "stopwords": stop_words
                }
            },
            "analyzer": {
                "my_custom_analyzer": {
                    "filter": [
                        "lowercase",
                        "my_custom_stop_words"
                    ],
                    "type": "custom",
                    "tokenizer": "standard"
                }
            }
        }
    },
    "mappings": {
        "properties": {
            "body": {
                "type": "text",
                "analyzer": "my_custom_analyzer",
                "search_analyzer": "my_custom_analyzer",
                "search_quote_analyzer": "my_custom_analyzer"
            }
        }
    }
},
ignore=400
)
</code></pre>
<p>And I am trying to add the synonym filter to my existing analyzer (my_custom_analyzer) using client.put_settings:</p>
<pre><code>client.put_settings(index='movies', body={
    "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0,
        "analysis": {
            "analyzer": {
                "my_custom_analyzer": {
                    "filter": [
                        "lowercase",
                        "my_stops",
                        "my_synonyms"
                    ],
                    "type": "custom",
                    "tokenizer": "standard"
                }
            },
            "filter": {
                "my_custom_stops": {
                    "type": "stop",
                    "stopwords": stop_words
                },
                "my_custom_synonyms": {
                    "ignore_case": "true",
                    "type": "synonym",
                    "synonyms": ["Harry Potter, HP => HP", "Terminator, TM => TM"]
                }
            }
        }
    },
    "mappings": {
        "properties": {
            "body": {
                "type": "text",
                "analyzer": "my_custom_analyzer",
                "search_analyzer": "my_custom_analyzer",
                "search_quote_analyzer": "my_custom_analyzer"
            }
        }
    }
},
ignore=400
)
</code></pre>
<p>However, when I issue a search query for "HP" against the movies index, I'm trying to rank the documents so that the document containing "Harry Potter" 5 times is the top element in the list. Right now, the document with "HP" 3 times tops the list instead, so the synonym filter isn't working. I closed the movies index before calling client.put_settings and then re-opened the index.
Any help would be greatly appreciated!</p>
|
<p>You should re-index all your data in order to apply the updated settings on all your data and fields.</p>
<p>The data that had already been indexed won't be affected by the updated analyzer; only documents that have been indexed after you updated the settings will be affected.</p>
<p>Not re-indexing your data might produce incorrect results since your old data is analyzed with the old custom analyzer and not with the new one.</p>
<p>The most efficient way to resolve this issue is to create a new index, and move your data from the old one to the new one with the updated settings.</p>
<p><a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html" rel="nofollow noreferrer">Reindex Api</a> </p>
<p>Follow these steps:</p>
<pre><code>POST _reindex
{
  "source": {
    "index": "movies"
  },
  "dest": {
    "index": "new_movies"
  }
}

DELETE movies

PUT movies
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0,
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "filter": [
            "lowercase",
            "my_custom_stops",
            "my_custom_synonyms"
          ],
          "type": "custom",
          "tokenizer": "standard"
        }
      },
      "filter": {
        "my_custom_stops": {
          "type": "stop",
          "stopwords": "stop_words"
        },
        "my_custom_synonyms": {
          "ignore_case": "true",
          "type": "synonym",
          "synonyms": [
            "Harry Potter, HP => HP",
            "Terminator, TM => TM"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "body": {
        "type": "text",
        "analyzer": "my_custom_analyzer",
        "search_analyzer": "my_custom_analyzer",
        "search_quote_analyzer": "my_custom_analyzer"
      }
    }
  }
}

POST _reindex?wait_for_completion=false
{
  "source": {
    "index": "new_movies"
  },
  "dest": {
    "index": "movies"
  }
}
</code></pre>
<p>After you've verified all your data is in place, you can delete the <code>new_movies</code> index: <code>DELETE new_movies</code></p>
<p>Hope this helps</p>
|
python|python-3.x|elasticsearch
| 3 |
1,907,748 | 31,392,770 |
Why does Python refuse to execute this code in a new subprocess?
|
<p>I am trying to make a very simple application that allows for people to define their own little python scripts within the application. I want to execute the code in a new process to make it easy to kill later. Unfortunately, Python keeps giving me the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/spyderlib/widgets/externalshell/sitecustomize.py", line 540, in runfile
execfile(filename, namespace)
File "/home/skylion/Documents/python_exec test.py", line 19, in <module>
code_process = Process(target=exec_, args=(user_input_code))
File "/usr/lib/python2.7/multiprocessing/process.py", line 104, in __init__
self._args = tuple(args)
TypeError: 'code' object is not iterable
>>>
</code></pre>
<p>My code is posted below</p>
<pre><code>user_input_string = '''
import os
world_name='world'
robot_name='default_body + os.path.sep'
joint_names=['hingejoint0', 'hingejoint1', 'hingejoint2', 'hingejoint3', 'hingejoint4', 'hingejoint5', 'hingejoint6', 'hingejoint7', 'hingejoint8']
print(joint_names)
'''
def exec_(arg):
exec(arg)
user_input_code = compile(user_input_string, 'user_defined', 'exec')
from multiprocessing import Process
code_process = Process(target=exec_, args=(user_input_code))
code_process.start()
</code></pre>
<p>What am I missing? Is there something wrong with my user_input_string? With my compile options? Any help would be appreciated.</p>
|
<p>I believe <code>args</code> must be a tuple. To create a single-element tuple, add a comma like so: <code>args=(user_input_code,)</code></p>
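<p>The gotcha is that parentheses alone don't make a tuple; the trailing comma does:</p>

```python
print(type(("hello")))    # <class 'str'>: just a parenthesized string
print(type(("hello",)))   # <class 'tuple'>

# So args=(user_input_code) passes the code object itself as args, and
# multiprocessing's internal tuple(args) call then fails with
# "TypeError: 'code' object is not iterable".
```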
|
python|concurrency|compilation|subprocess
| 1 |
1,907,749 | 71,012,540 |
Running PySpark job on Kubernetes spark cluster
|
<p>I am trying to run a Spark job on a separate master Spark server hosted on kubernetes but port forwarding reports the following error:</p>
<pre class="lang-none prettyprint-override"><code>E0206 19:52:24.846137 14968 portforward.go:400] an error occurred forwarding 7077 -> 7077: error forwarding port 7077 to pod 1cf922cbe9fc820ea861077c030a323f6dffd4b33bb0c354431b4df64e0db413, uid : exit status 1: 2022/02/07 00:52:26 socat[25402] E connect(16, AF=2 127.0.0.1:7077, 16): Connection refused
</code></pre>
<p>My setup is:</p>
<ul>
<li>I am using VS Code with a dev container to manage a setup where I can run Spark applications. I can run local spark jobs when I build my context like so : <code>sc = pyspark.SparkContext(appName="Pi")</code></li>
<li>My host computer is running Docker Desktop where I have kubernetes running and used Helm to run the Spark release from Bitnami <a href="https://artifacthub.io/packages/helm/bitnami/spark" rel="nofollow noreferrer">https://artifacthub.io/packages/helm/bitnami/spark</a></li>
<li>The VS Code dev container <strong>can</strong> access the host correctly since I can do <code>curl host.docker.internal:80</code> and I get the Spark web UI status page. The port 80 is forwarded from the host using <code>kubectl port-forward --namespace default --address 0.0.0.0 svc/my-release-spark-master-svc 80:80</code></li>
<li>I am also forwarding the port <code>7077</code> using a similar command <code>kubectl port-forward --address 0.0.0.0 svc/my-release-spark-master-svc 7077:7077</code>.</li>
</ul>
<p>When I create a Spark context like this <code>sc = pyspark.SparkContext(appName="Pi", master="spark://host.docker.internal:7077")</code> I am expecting Spark to submit jobs to that master. I don't know much about Spark but I have seen a few examples creating a context like this.</p>
<p>When I run the code, I see connections attempts failing at port 7077 of kubernetes port forwarding, so the requests are going through but they are being refused somehow.</p>
<pre class="lang-none prettyprint-override"><code>Handling connection for 7077
E0206 19:52:24.846137 14968 portforward.go:400] an error occurred forwarding 7077 -> 7077: error forwarding port 7077 to pod 1cf922cbe9fc820ea861077c030a323f6dffd4b33bb0c354431b4df64e0db413, uid : exit status 1: 2022/02/07 00:52:26 socat[25402] E connect(16, AF=2 127.0.0.1:7077, 16): Connection refused
</code></pre>
<p>Now, I have no idea why the connections are being refused. I know the Spark server is accepting requests because I can see the Web UI from within the docker dev container. I know that the Spark service is exposing port 7077 because I can do:</p>
<pre class="lang-none prettyprint-override"><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28h
my-release-spark-headless ClusterIP None <none> <none> 7h40m
my-release-spark-master-svc ClusterIP 10.108.109.228 <none> 7077/TCP,80/TCP 7h40m
</code></pre>
<p>Can anyone tell why the connections are refused and how I can successfully configure the Spark master to accept jobs from external callers ?</p>
<p>Example code I am using is:</p>
<pre class="lang-py prettyprint-override"><code>import findspark
findspark.init()
import pyspark
import random

#sc = pyspark.SparkContext(appName="Pi", master="spark://host.docker.internal:7077")
sc = pyspark.SparkContext(appName="Pi")
num_samples = 100000000

def inside(p):
    x, y = random.random(), random.random()
    return x*x + y*y < 1

count = sc.parallelize(range(0, num_samples)).filter(inside).count()
pi = 4 * count / num_samples
print(pi)
sc.stop()
</code></pre>
|
<p>After tinkering with it a bit more, I noticed this output when launching the helm chart for Apache Spark <code>** IMPORTANT: When submit an application from outside the cluster service type should be set to the NodePort or LoadBalancer. **</code>.</p>
<p>This led me to research a bit more into Kubernetes networking. To submit a job, it is not sufficient to forward port 7077. Instead, the cluster itself needs to have an IP assigned. This requires the helm chart to be launched with the following commands to set Spark config values <code>helm install my-release --set service.type=LoadBalancer --set service.loadBalancerIP=192.168.2.50 bitnami/spark</code>. My host IP address is above and will be reachable by the Docker container.</p>
<p>With the LoadBalancer IP assigned, Spark will run using the example code provided.</p>
<p>Recap: Don't use port forwarding to submit jobs, a Cluster IP needs to be assigned.</p>
|
python|kubernetes|pyspark
| 1 |
1,907,750 | 70,765,103 |
What is the pythonic way to install a single python script
|
<p>I have a git repository with a single python script which I use for some task. I want to create a 'package' for this script and then install it. Previously I was using cmake to do this, but I'm wondering what the pythonic way of doing it is.</p>
<p>I tried using setuptools' <code>console_scripts</code> keyword argument, but that didn't work.</p>
<p>This script should get installed into some <code>./bin</code> directory.</p>
|
<p>The pythonic way here would be to create an actual package (<a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/" rel="nofollow noreferrer">https://packaging.python.org/en/latest/tutorials/packaging-projects/</a>) and then either build something like a .whl and install it using <code>pip</code> or <code>conda</code> (whichever you use for managing your virtual environment), or publish the package on PyPI for general use and easy installation on any machine with internet access.</p>
<p>Packaging a project is fairly standard and the linked documentation is official. Getting a package onto PyPI is fairly straightforward as well, although there are many options you can add to make the installation and the accompanying page nicer. There are many guides online on how to publish a package on PyPI, but none of them is official documentation, so I suggest just searching for one.</p>
|
python|installation
| 0 |
1,907,751 | 56,241,431 |
pandas load in excel sheets and set to different dataframes
|
<p>I have an excel workbook with sheets named A, B and C</p>
<p>I wanted to load all sheets and set the sheets to different dataframes, is this possible?</p>
<p>This is what I have so far;</p>
<pre><code>sheets=['A','B','C']
for s in sheets:
df_+s=pd.read_excel(file,sheet_name=s)
</code></pre>
<p>so the output would be 3 dataframes named, df_A, df_B and df_C</p>
|
<pre><code>dfs = pd.read_excel(file, None)
</code></pre>
<p>would return a dict of dataframes. The dataframe for sheet A is <code>dfs['A']</code>, sheet B is <code>dfs['B']</code> and sheet C is <code>dfs['C']</code>.</p>
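<p>To get the three separately named frames the question asked for, unpack the dict. A sketch (simulating the dict with in-memory frames here, since no workbook is at hand):</p>

```python
import pandas as pd

# Simulating what pd.read_excel(file, None) returns: a dict of DataFrames
# keyed by sheet name (stand-in frames here instead of a real workbook).
dfs = {s: pd.DataFrame({"sheet": [s]}) for s in ("A", "B", "C")}

# Unpack into individually named frames, as the question wanted:
df_A, df_B, df_C = (dfs[s] for s in ("A", "B", "C"))
```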
|
excel|pandas|load
| 2 |
1,907,752 | 58,569,503 |
Testing VRFY command on a SMTP
|
<p>I'm trying to test, given a list of IP addresses and connecting to port 25, whether the <code>VRFY root</code> command exists.
This is the script I made:</p>
<pre><code>import sys
import socket
socket=socket.socket(socket.AF_INET, socket.SOCK_STREAM)
with open('smtp_open.txt', 'r') as f:
for line in f:
print str.format(line)
socket.connect((line, 25))
banner=socket.recv(1024)
print banner
socket.send('VRFY' + ' root' + '\r\n')
result=socket.recv(1024)
print result
socket.close()
</code></pre>
<p>and this is the output:</p>
<pre><code>10.11.1.22
220 barry ESMTP Sendmail 8.11.6/8.11.6; Sat, 26 Oct 2019 10:56:33 +0200
250 2.1.5 root <root@barry>
10.11.1.72
Traceback (most recent call last):
File "VRFY_script.py", line 15, in <module>
socket.connect((line, 25))
File "/usr/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
File "/usr/lib/python2.7/socket.py", line 174, in _dummy
raise error(EBADF, 'Bad file descriptor')
socket.error: [Errno 9] Bad file descriptor
</code></pre>
<p>As you can see, it works for the first IP only; when the second IP is given as input, I get the error above.</p>
|
<p>You cannot use the same socket for multiple connections. Instead you have to create a new TCP socket for each new connection. Reusing an existing socket will not work even if you've explicitly closed it.</p>
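<p>A sketch of the loop with a fresh socket per address (the helper name is ours; note also that lines read from a file keep their trailing newline, so <code>strip()</code> them before connecting):</p>

```python
import socket

def vrfy_root(host, port=25, timeout=5):
    # One fresh TCP socket per connection; a closed socket cannot be
    # reused (reconnecting on it raises "Bad file descriptor").
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))
        banner = s.recv(1024)
        s.sendall(b"VRFY root\r\n")
        return banner, s.recv(1024)

# with open('smtp_open.txt') as f:
#     for line in f:
#         banner, result = vrfy_root(line.strip())  # strip the newline
#         print(banner, result)
```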
|
python|file|sockets|smtp
| 1 |
1,907,753 | 45,632,138 |
How to plot two plots with strings as x axis values
|
<p>I want to plot two figures in one image using matplotlib. Data which I want to plot is:</p>
<pre><code>x1 = ['sale','pseudo','test_mode']
y1 = [2374064, 515, 13]
x2 = ['ready','void']
y2 = [2373078, 1514]
</code></pre>
<p>I want to plot the bar plot for both the figure in one image. I used the code given below:</p>
<pre><code>f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.plot(x1, y1)
ax1.set_title('Two plots')
ax2.plot(x2, y2)
</code></pre>
<p>but it's giving the error:</p>
<pre><code>ValueError: could not convert string to float: PSEUDO
</code></pre>
<p>How I can plot them in one image using matplotlib?</p>
|
<p>Try this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x1 = ['sale','pseudo','test_mode']
y1 = [23, 51, 13]
x2 = ['ready','void']
y2 = [78, 1514]
y = y1+y2
x = x1+x2
pos = np.arange(len(y))
plt.bar(pos,y)
ticks = plt.xticks(pos, x)
</code></pre>
<p>Separate figures in one image: </p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

x1 = ['sale','pseudo','test_mode']
y1 = [23, 51, 13]
x2 = ['ready','void']
y2 = [78, 1514]
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
pos1 = np.arange(len(y1))
ax1.bar(pos1,y1)
plt.sca(ax1)
plt.xticks(pos1,x1)
pos2 = np.arange(len(y2))
ax2.bar(pos2,y2)
plt.sca(ax2)
plt.xticks(pos2,x2)
</code></pre>
|
python|matplotlib|plot
| 1 |
1,907,754 | 57,074,704 |
Fill Empty Panda Dataframe Using Loop Method
|
<p>I am currently working with some telematics data where the trip ID is missing. Trip IDs are unique; one trip ID covers multiple rows of data consisting of e.g. GPS coordinates, temperature, voltage, RPM, timestamp and engine status (on or off). The pattern of engine-status on/off times means the rows can be clustered into unique trip IDs. However, I have difficulty translating this logic into code that generates the trip IDs.</p>
<p>I tried a few pandas loop methods but they keep failing.</p>
<pre><code>import pandas as pd
inp = [{'Ignition_Status':'ON', 'tripID':''},{'Ignition_Status':'ON','tripID':''},
{'Ignition_Status':'ON', 'tripID':''},{'Ignition_Status':'OFF','tripID':''},
{'Ignition_Status':'ON', 'tripID':''},{'Ignition_Status':'ON','tripID':''},
{'Ignition_Status':'ON', 'tripID':''},{'Ignition_Status':'ON', 'tripID':''},
{'Ignition_Status':'ON', 'tripID':''},{'Ignition_Status':'OFF', 'tripID':''},
{'Ignition_Status':'ON', 'tripID':''},{'Ignition_Status':'OFF', 'tripID':''}]
test = pd.DataFrame(inp)
print (test)
</code></pre>
<p><strong>Approach Taken</strong></p>
<pre><code>n=1
for index, row in test.iterrows():
test['tripID']=np.where(test['Ignition_Status']=='ON',n,n)
n=n+1
</code></pre>
<p><strong>Expected Result</strong></p>
<p><img src="https://i.stack.imgur.com/z4aNb.png" alt="expected result"></p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>series.eq()</code></a> to check for <code>OFF</code> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.shift.html" rel="nofollow noreferrer"><code>series.shift()</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>series.cumsum()</code></a>:</p>
<pre><code>test=test.assign(tripID=test.Ignition_Status.eq('OFF')
.shift(fill_value=False).cumsum().add(1))
</code></pre>
<hr>
<pre><code> Ignition_Status tripID
0 ON 1
1 ON 1
2 ON 1
3 OFF 1
4 ON 2
5 ON 2
6 ON 2
7 ON 2
8 ON 2
9 OFF 2
10 ON 3
11 OFF 3
</code></pre>
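<p>The whole chain can be checked end-to-end against the question's sample, reproducing the frame in plain pandas:</p>

```python
import pandas as pd

# The question's Ignition_Status column
status = ["ON", "ON", "ON", "OFF", "ON", "ON",
          "ON", "ON", "ON", "OFF", "ON", "OFF"]
test = pd.DataFrame({"Ignition_Status": status})

# A trip ends at each OFF row; shifting by one keeps the OFF row itself
# inside the trip it terminates, and cumsum() numbers the trips.
test = test.assign(tripID=test.Ignition_Status.eq("OFF")
                          .shift(fill_value=False).cumsum().add(1))
```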
|
pandas|loops
| 3 |
1,907,755 | 20,732,158 |
Red and yellow triangles detection using openCV in Python
|
<p>I am trying to detect red triangles and yellow triangles differentiating them using openCV in Python. I am a beginner.</p>
<p>As a first step, I would like to detect, count (yellow and red separately) and mark with a rectangle all the triangles the camera can see. I would also like to find their centers of mass.</p>
<p>For the moment, I just detect one single triangle at a time, without finding its color.
My center-of-mass calculation does not work, giving me the error:</p>
<pre><code> centroid_x = int(M['m10']/M['m00'])
ZeroDivisionError: float division by zero
</code></pre>
<p>I have written the following code, inspired by examples from the web:</p>
<pre><code>import numpy as np
import cv2
cap = cv2.VideoCapture(0)
print cap.get(3)
print cap.get(4)
# changing display size
ret = cap.set(3,320)
ret = cap.set(4,240)
def getthresholdedimg(hsv):
yellow = cv2.inRange(hsv,np.array((10,100,100)),np.array((30,255,255)))
red = cv2.inRange(hsv,np.array((0,0,0)),np.array((190,255,255)))
both = cv2.add(yellow,red)
return both
def nothing(x):
pass
# Create a black image, a window
img = np.zeros((300,512,3), np.uint8)
cv2.namedWindow('image')
while(True):
thr1 = 50
thr2 = 110
# Capture frame-by-frame
ret, frame = cap.read()
# Our operations on the frame come here
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gaussian_blur = cv2.GaussianBlur(gray,(5,5),0)
# Our operations on the frame come here
canny = cv2.Canny(gray,thr1,thr2)
canny_blur = cv2.Canny(gaussian_blur,thr1,thr2)
# Our operations on the frame come here
contours,hier = cv2.findContours(canny,1,2)
for cnt in contours:
approx = cv2.approxPolyDP(cnt,0.02*cv2.arcLength(cnt,True),True)
if len(approx)==3:
cv2.drawContours(frame,[cnt],0,(0,255,0),2)
tri = approx
M = cv2.moments(cnt)
centroid_x = int(M['m10']/M['m00'])
centroid_y = int(M['m01']/M['m00'])
cv2.circle(img,(centroid_x,centroid_y),3,255,-1)
for vertex in tri:
cv2.circle(frame,(vertex[0][0],vertex[0][1]),3,(64,0,128),-1)
cv2.line(img,(vertex[0][0],vertex[0][1]),(centroid_x,centroid_y),(0,0,255),1)
# Display the resulting frame
cv2.imshow('normal flux',frame)
cv2.imshow('gray conversion',gray)
cv2.imshow('canny edges conversion',canny)
cv2.imshow('canny edges gaussian blur',canny_blur)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>Can you help me please?</p>
|
<p>Maybe you want to do
<code>M = cv2.moments(tri)</code>
instead of <code>M = cv2.moments(cnt)</code> ?</p>
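<p>Independently of which contour the moments are computed from, the <code>ZeroDivisionError</code> itself can be avoided by guarding on <code>m00</code> (the contour area), which is zero for degenerate contours. A sketch, with a helper name of our own rather than anything from OpenCV:</p>

```python
def safe_centroid(M):
    """Return (cx, cy) from a moments dict, or None when the area m00 is zero."""
    if M["m00"] == 0:
        return None  # degenerate contour: no meaningful centroid
    return int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
```

<p>In the loop this becomes <code>centroid = safe_centroid(cv2.moments(cnt))</code>, skipping the contour when the result is <code>None</code>.</p>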
|
python|opencv|colors
| 3 |
1,907,756 | 21,408,407 |
Python: Unicode problems
|
<p>I am getting an error at this line</p>
<pre><code>logger.info(u"Data: {}".format(data))
</code></pre>
<p>I'm getting this error:</p>
<pre><code>UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 4: ordinal not in range(128)
</code></pre>
<p>Before that line, I tried adding <code>data = data.decode('utf8')</code> and I still get the same error.
I tried <code>data = data.encode('utf8')</code> and it says <code>UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 4: ordinal not in range(128)</code></p>
<p>How do I fix this? I don't know if I should encode or decode but neither works.</p>
|
<p>Use a <em>string</em> literal:</p>
<pre><code>if isinstance(data, unicode):
data = data.encode('utf8')
logger.info("Data: {}".format(data))
</code></pre>
<p>The <code>logging</code> module needs you to pass in string values as these values are passed on unaltered to formatters and the handlers. Writing log messages to a file means that unicode values are encoded with the default (ASCII) codec otherwise. But you also need to pass in a bytestring value when formatting.</p>
<p>Passing in a <code>str</code> value into a <code>unicode</code> <code>.format()</code> template leads to <em>decoding</em> errors, passing in a unicode value into a <code>str</code> <code>.format()</code> template leads to <em>encoding</em> errors, and passing a formatted <code>unicode</code> value to <code>logger.info()</code> leads to encoding errors too.</p>
<p>Better not mix and encode explicitly beforehand.</p>
|
python|unicode|utf-8
| 1 |
1,907,757 | 53,534,735 |
give template tags inside jquery prepend function
|
<p>my jquery:</p>
<pre><code>else if(json.event == "Follow Notify"){
    console.log(json.sender)
    $("#not").prepend('<li class="media">'+
        '<a href="javascript:;">'+
        '<div class="media-left">'+
        '<i class="fa fa-bug media-object bg-silver-darker"></i>'+
        '</div>'+
        '<div class="media-body">'+
        '<h6 class="media-heading">'+json.notification+
        '<i class="fa fa-exclamation-circle text-danger"></i></h6>'+
        '<p>'+json.notification+'</p>'+
        '<a href="{% url "student:accept_follow" pk=request.user.id notify='+json.sender+' %}">Accept</a>'+
        '<a href="{% url "student:reject_follow" pk=request.user.id notify='+json.sender+' %}">Reject</a>'+
        '</div></a></li>')
}
</code></pre>
<p>I want to prepend HTML code containing Django URL tags. I'm receiving JSON and reading the sender with <code>json.sender</code>, but it seems to be treated as a plain string. How do I properly use Django template tags inside this jQuery function?</p>
|
<p>Not easily.</p>
<p>You're better off having those URLs in the JSON payload, e.g.</p>
<pre><code>return JsonResponse({
    "sender": sender,
    "accept_follow_url": resolve_url("student:accept_follow", pk=request.user.id, notify=sender),
    "..."
})
</code></pre>
|
python|jquery|django
| 1 |
1,907,758 | 53,582,464 |
How to change a variables value from a ComboBox after a button being clicked?
|
<p>I'm working on an unweighted GPA calculator and I'm kind of new to the (Py)Qt Designer application. I have run into a problem where I don't know how to take the results from the ComboBoxes and accumulate them into a variable named gpa.</p>
<p>Basically, this is what I want to happen:</p>
<p>if the letter_grade1 ComboBox is A+ then it will add 4.0 to gpa<br>
if the letter_grade2 ComboBox is B then it will add 3.0 to gpa</p>
<p>After that, the total is divided by 5 (since there are 5 ComboBoxes) and the result gets printed; this all happens after the submit_grades button is clicked.</p>
<p>Here's an image of what the Ui looks like:</p>
<p><img src="https://i.stack.imgur.com/iMYKP.png" alt=""></p>
<p>and here's what the code looks like:</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_unweight5(object):
def setupUi(self, unweight5):
unweight5.setObjectName("unweight5")
unweight5.resize(424, 228)
self.centralwidget = QtWidgets.QWidget(unweight5)
self.centralwidget.setObjectName("centralwidget")
self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
self.gridLayout.setHorizontalSpacing(10)
self.gridLayout.setVerticalSpacing(5)
self.gridLayout.setObjectName("gridLayout")
self.assessment_name = QtWidgets.QLineEdit(self.centralwidget)
self.assessment_name.setObjectName("assessment_name")
self.gridLayout.addWidget(self.assessment_name, 1, 0, 1, 1)
self.label = QtWidgets.QLabel(self.centralwidget)
self.label.setObjectName("label")
self.gridLayout.addWidget(self.label, 0, 0, 1, 1)
self.submit_grades = QtWidgets.QPushButton(self.centralwidget)
self.submit_grades.setObjectName("submit_grades")
self.gridLayout.addWidget(self.submit_grades, 6, 0, 1, 3)
self.assessment_name5 = QtWidgets.QLineEdit(self.centralwidget)
self.assessment_name5.setObjectName("assessment_name5")
self.gridLayout.addWidget(self.assessment_name5, 5, 0, 1, 1)
self.assessment_name2 = QtWidgets.QLineEdit(self.centralwidget)
self.assessment_name2.setObjectName("assessment_name2")
self.gridLayout.addWidget(self.assessment_name2, 2, 0, 1, 1)
self.assessment_name4 = QtWidgets.QLineEdit(self.centralwidget)
self.assessment_name4.setObjectName("assessment_name4")
self.gridLayout.addWidget(self.assessment_name4, 4, 0, 1, 1)
self.assessment_name3 = QtWidgets.QLineEdit(self.centralwidget)
self.assessment_name3.setObjectName("assessment_name3")
self.gridLayout.addWidget(self.assessment_name3, 3, 0, 1, 1)
self.letter_grade5 = QtWidgets.QComboBox(self.centralwidget)
self.letter_grade5.setObjectName("letter_grade5")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.letter_grade5.addItem("")
self.gridLayout.addWidget(self.letter_grade5, 5, 1, 1, 2)
self.letter_grade3 = QtWidgets.QComboBox(self.centralwidget)
self.letter_grade3.setObjectName("letter_grade3")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.letter_grade3.addItem("")
self.gridLayout.addWidget(self.letter_grade3, 3, 1, 1, 2)
self.letter_grade4 = QtWidgets.QComboBox(self.centralwidget)
self.letter_grade4.setObjectName("letter_grade4")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.letter_grade4.addItem("")
self.gridLayout.addWidget(self.letter_grade4, 4, 1, 1, 2)
self.letter_grade2 = QtWidgets.QComboBox(self.centralwidget)
self.letter_grade2.setObjectName("letter_grade2")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.letter_grade2.addItem("")
self.gridLayout.addWidget(self.letter_grade2, 2, 1, 1, 2)
self.letter_grade1 = QtWidgets.QComboBox(self.centralwidget)
self.letter_grade1.setObjectName("letter_grade1")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.letter_grade1.addItem("")
self.gridLayout.addWidget(self.letter_grade1, 1, 1, 1, 2)
self.label_3 = QtWidgets.QLabel(self.centralwidget)
self.label_3.setAlignment(QtCore.Qt.AlignCenter)
self.label_3.setObjectName("label_3")
self.gridLayout.addWidget(self.label_3, 0, 1, 1, 2)
unweight5.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(unweight5)
self.menubar.setGeometry(QtCore.QRect(0, 0, 424, 22))
self.menubar.setObjectName("menubar")
unweight5.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(unweight5)
self.statusbar.setObjectName("statusbar")
unweight5.setStatusBar(self.statusbar)
self.retranslateUi(unweight5)
QtCore.QMetaObject.connectSlotsByName(unweight5)
def retranslateUi(self, unweight5):
_translate = QtCore.QCoreApplication.translate
unweight5.setWindowTitle(_translate("unweight5", "Unweighted Calculator"))
self.label.setText(_translate("unweight5", "Course Name"))
self.submit_grades.setText(_translate("unweight5", "Submit"))
self.letter_grade5.setItemText(0, _translate("unweight5", "A+"))
self.letter_grade5.setItemText(1, _translate("unweight5", "A"))
self.letter_grade5.setItemText(2, _translate("unweight5", "A-"))
self.letter_grade5.setItemText(3, _translate("unweight5", "B+"))
self.letter_grade5.setItemText(4, _translate("unweight5", "B"))
self.letter_grade5.setItemText(5, _translate("unweight5", "B-"))
self.letter_grade5.setItemText(6, _translate("unweight5", "C+"))
self.letter_grade5.setItemText(7, _translate("unweight5", "C"))
self.letter_grade5.setItemText(8, _translate("unweight5", "C-"))
self.letter_grade5.setItemText(9, _translate("unweight5", "D+"))
self.letter_grade5.setItemText(10, _translate("unweight5", "D"))
self.letter_grade5.setItemText(11, _translate("unweight5", "F"))
self.letter_grade3.setItemText(0, _translate("unweight5", "A+"))
self.letter_grade3.setItemText(1, _translate("unweight5", "A"))
        self.letter_grade3.setItemText(2, _translate("unweight5", "A-"))
self.letter_grade3.setItemText(3, _translate("unweight5", "B+"))
self.letter_grade3.setItemText(4, _translate("unweight5", "B"))
self.letter_grade3.setItemText(5, _translate("unweight5", "B-"))
self.letter_grade3.setItemText(6, _translate("unweight5", "C+"))
self.letter_grade3.setItemText(7, _translate("unweight5", "C"))
self.letter_grade3.setItemText(8, _translate("unweight5", "C-"))
self.letter_grade3.setItemText(9, _translate("unweight5", "D+"))
self.letter_grade3.setItemText(10, _translate("unweight5", "D"))
self.letter_grade3.setItemText(11, _translate("unweight5", "F"))
self.letter_grade4.setItemText(0, _translate("unweight5", "A+"))
self.letter_grade4.setItemText(1, _translate("unweight5", "A"))
self.letter_grade4.setItemText(2, _translate("unweight5", "A-"))
self.letter_grade4.setItemText(3, _translate("unweight5", "B+"))
self.letter_grade4.setItemText(4, _translate("unweight5", "B"))
self.letter_grade4.setItemText(5, _translate("unweight5", "B-"))
self.letter_grade4.setItemText(6, _translate("unweight5", "C+"))
self.letter_grade4.setItemText(7, _translate("unweight5", "C"))
self.letter_grade4.setItemText(8, _translate("unweight5", "C-"))
self.letter_grade4.setItemText(9, _translate("unweight5", "D+"))
self.letter_grade4.setItemText(10, _translate("unweight5", "D"))
self.letter_grade4.setItemText(11, _translate("unweight5", "F"))
self.letter_grade2.setItemText(0, _translate("unweight5", "A+"))
self.letter_grade2.setItemText(1, _translate("unweight5", "A"))
self.letter_grade2.setItemText(2, _translate("unweight5", "A-"))
self.letter_grade2.setItemText(3, _translate("unweight5", "B+"))
self.letter_grade2.setItemText(4, _translate("unweight5", "B"))
self.letter_grade2.setItemText(5, _translate("unweight5", "B-"))
self.letter_grade2.setItemText(6, _translate("unweight5", "C+"))
self.letter_grade2.setItemText(7, _translate("unweight5", "C"))
self.letter_grade2.setItemText(8, _translate("unweight5", "C-"))
self.letter_grade2.setItemText(9, _translate("unweight5", "D+"))
self.letter_grade2.setItemText(10, _translate("unweight5", "D"))
self.letter_grade2.setItemText(11, _translate("unweight5", "F"))
self.letter_grade1.setItemText(0, _translate("unweight5", "A+"))
self.letter_grade1.setItemText(1, _translate("unweight5", "A"))
self.letter_grade1.setItemText(2, _translate("unweight5", "A-"))
self.letter_grade1.setItemText(3, _translate("unweight5", "B+"))
self.letter_grade1.setItemText(4, _translate("unweight5", "B"))
self.letter_grade1.setItemText(5, _translate("unweight5", "B-"))
self.letter_grade1.setItemText(6, _translate("unweight5", "C+"))
self.letter_grade1.setItemText(7, _translate("unweight5", "C"))
self.letter_grade1.setItemText(8, _translate("unweight5", "C-"))
self.letter_grade1.setItemText(9, _translate("unweight5", "D+"))
self.letter_grade1.setItemText(10, _translate("unweight5", "D"))
self.letter_grade1.setItemText(11, _translate("unweight5", "F"))
self.label_3.setText(_translate("unweight5", "Grade"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
unweight5 = QtWidgets.QMainWindow()
ui = Ui_unweight5()
ui.setupUi(unweight5)
unweight5.show()
sys.exit(app.exec_())
</code></pre>
<p>Thanks for your time!</p>
|
<p>Assuming that the .py file you posted is called design.py, you should create another .py file to hold the logic, so you avoid having many lines of code in a single file.</p>
<p>Getting to the problem: the idea is to have a lookup table, a dictionary relating the combo-box options to numeric values, and then calculate the average.</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
from design import Ui_unweight5
class Unweight5(QtWidgets.QMainWindow, Ui_unweight5):
def __init__(self, parent=None):
super(Unweight5, self).__init__(parent)
self.setupUi(self)
self.lut = {"A+": 4.0,
"A": 3.5,
"A-": 3.2,
"B+": 3.0,
"B": 2.8,
"B-": 2.7,
"C+": 2.5,
"C": 2.1,
"C-": 1.5,
"D+": 1,
"D": 0.6,
"F": 0.4}
self.submit_grades.clicked.connect(self.on_clicked)
def on_clicked(self):
combos = (self.letter_grade1, self.letter_grade2, self.letter_grade3, self.letter_grade4, self.letter_grade5)
vals = [self.lut[combo.currentText()] for combo in combos]
print("results:", sum(vals)/len(vals))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
w = Unweight5()
w.show()
sys.exit(app.exec_())
</code></pre>
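<p>The Qt-independent core of <code>on_clicked</code> can be tried on its own. As in the answer, the point values in the lookup table are illustrative, and the grade list stands in for the five combo boxes' <code>currentText()</code> values:</p>

```python
# Lookup table from letter grade to points (illustrative values)
lut = {"A+": 4.0, "A": 3.5, "A-": 3.2, "B+": 3.0, "B": 2.8, "B-": 2.7,
       "C+": 2.5, "C": 2.1, "C-": 1.5, "D+": 1.0, "D": 0.6, "F": 0.4}

# Stand-ins for the five combo boxes' current selections
grades = ["A+", "B", "C+", "A", "F"]

vals = [lut[g] for g in grades]
gpa = sum(vals) / len(vals)
```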
|
python|pyqt|pyqt5|qcombobox
| 0 |
1,907,759 | 46,119,998 |
Dealing with null in update where clause without hard-coding a statement
|
<p>I want to write a method that generates an <code>update</code> statement, without hard-coding columns and values. The statement is going to include optional <code>where</code> clause and is going to be passed to <code>executemany</code>, the clause contains only columns and values, there is no <code>select</code> in there. Example:</p>
<pre><code>update TABLE
set
Col1 = 'a',
Col2 = 'b',
Col3 = 'c'
where
Col4 = 'd'
and Col5 = 'e'
and Col1 is null;
</code></pre>
<p>What I wrote so far: </p>
<pre><code>def update(self, table_name, update_columns, values, where_columns=None, where=True):
update_columns_and_values = self.generator.generateColumnsAndPlaceholders(update_columns)
if where:
where_clause = self.generator.generateWhereClause(where_columns)
else:
where_clause = ''
query = '''
update {t}
set
{cv}
{w}
'''.format(t=table_name, cv=update_columns_and_values, w=where_clause)
self.cursor.executemany(query, values)
self.connection.commit()
def generateColumnsAndPlaceholders(columns):
if type(columns) is str:
columns = columns.split(', ')
return ', \n'.join([str(c) + ' = ' + "'%s'" for c in columns])
</code></pre>
<p>Now, how should I write a function <code>generateWhereClause</code> that takes any number of columns and returns a <code>where</code> clause with placeholders adjusted for both a not null value (indicated with <code>=</code>) and a null value (indicated with <code>is null</code>)?
Also, I think that string returned by <code>generateColumnsAndPlaceholders</code> is not prepared for <code>null</code> due to single quotes around placeholders. If so, how should I change it?</p>
<p>In general, how do I deal with <code>null</code> in update statement without hard-coding specific statement?</p>
|
<p>Here is a function that generates the query. It takes the table name, a dictionary of values for columns <code>{column: value}</code>, and a dictionary of constraints that respects None as a constraint <code>{column: constraint}</code>:</p>
<pre><code>def update_query(table_name, values, constraints):
v_list = [k + '=' + '"' + v + '"' for k, v in values.iteritems()]
v_query = ', '.join(v_list)
c_list = [k + (' IS NULL' if c is None else '=' + '"' + c + '"') for k, c in constraints.iteritems()]
c_query = ' AND '.join(c_list)
return 'UPDATE ' + table_name + ' SET ' + v_query + ' WHERE ' + c_query
</code></pre>
<p>Test code:</p>
<pre><code>tn = "table"
vl = {"Col1":"a","Col2":"b","Col3":"c"}
cn = {"Col4":"d","Col5":"e","Col6":None}
</code></pre>
<p>Result:</p>
<pre><code>UPDATE table SET Col2="b", Col3="c", Col1="a" WHERE Col6 IS NULL AND Col4="d" AND Col5="e"
</code></pre>
<p>I hope order is not an issue for you.</p>
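<p>Note that <code>dict.iteritems()</code> is Python 2 only. A Python 3 port of the same idea follows; it still interpolates values directly into the SQL string, so for real use prefer parameterized queries to avoid SQL injection:</p>

```python
def update_query(table_name, values, constraints):
    # values: {column: value}; constraints: {column: value_or_None}
    v_query = ", ".join('{}="{}"'.format(k, v) for k, v in values.items())
    c_query = " AND ".join(
        k + (" IS NULL" if c is None else '="{}"'.format(c))
        for k, c in constraints.items()
    )
    return "UPDATE " + table_name + " SET " + v_query + " WHERE " + c_query
```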
|
python|mysql|python-2.7|null|sql-update
| 1 |
1,907,760 | 29,164,630 |
Django development server seems to be using old version of python source file
|
<p>I'm re-writing the code of my website, and testing with Django's built-in web server using the <code>manage.py runserver</code> command. Now I've come across a very strange problem: The server seems to use the current version of <code>views.py</code> on the very first page load, but all subsequent refreshes give me a server error because the server is apperently using an older version of <code>views.py</code>, but the current versions of all other files, which leads to errors – specifically URL resolver errors, because I changed some code from using hard-coded paths in <code>views.py</code> to using the URL resolver, which of course doesn't work if the URL resolver receives a <em>path</em> (from the old <code>views.py</code>) when it's expecting a view name (which I put in the new <code>views.py</code>).</p>
<p>I have already deleted all the <code>.pyc</code> files in my django project directory and rebooted the machine, to no avail. The problem persists.</p>
<p>I'm using Django 1.7.6 on Python 3.4.2.</p>
<p>Here's the current <code>views.py</code> (it doesn't really make sense, it's just for testing):</p>
<pre><code>from mezgrman.utils import NavigationTemplateResponse
NAV_DATA = {
'app_root': 'index',
'app_title': "Item Manager",
'navbar': [
("Add Item", 'index'),
],
'page_title': "Item Manager",
}
def index(request):
return NavigationTemplateResponse(request, "design_test/index.html", NAV_DATA)
</code></pre>
<p>The <code>NavigationTemplateResponse</code> is a subclass of <code>TemplateResponse</code>:</p>
<pre><code>from django.template.response import TemplateResponse
from django.core.urlresolvers import resolve, reverse
class NavigationTemplateResponse(TemplateResponse):
def __init__(self, request, template, nav_data, context = None, content_type = None, status = None, current_app = None):
if context is None:
context = {}
url_name = resolve(request.path).url_name
app_name = url_name.split(".")[0]
view_prefix = app_name + ".views."
nav_data['app_root'] = reverse(view_prefix + nav_data.get('app_root', ""))
for index, entry in enumerate(nav_data.get('navbar', [])):
title, view_name = entry
nav_data['navbar'][index] = (title, reverse(view_prefix + view_name))
context.update(nav_data)
return super().__init__(request, template, context, content_type, status, current_app)
</code></pre>
<p>The Django server traceback explicitly proves that it's using an old version of <code>views.py</code>, these are the local variables (sans the <code>WSGIRequest</code>) at the time of the error, where <code>nav_data</code> is the same as in the old <code>views.py</code>:</p>
<pre><code>content_type None
template 'design_test/index.html'
url_name 'design_test.views.index'
status None
self <mezgrman.utils.NavigationTemplateResponse object at 0x7f395f8d15f8>
app_name 'design_test'
__class__ <class 'mezgrman.utils.NavigationTemplateResponse'>
context {}
view_prefix 'design_test.views.'
current_app None
nav_data {
'app_root': '/',
'app_title': 'Item Manager',
'navbar': [('Add Item', '/')],
'page_title': 'Item Manager'
}
</code></pre>
<p>This seems to me like a bug in Django, but I'd like to know if there's another reason for this strange behaviour. Any help would be appreciated.</p>
|
<p>It's not a bug, and this is not old code from <code>views.py</code>. You're simply overwriting the data in NAV_DATA inside your view. On the first request after starting the server, NAV_DATA has its initial values, but during that request you override some values with reversed URLs. That change persists between requests until the dev server is reloaded.</p>
<p><strong>Solution 1:</strong> work on copy of your dict:</p>
<pre><code>class NavigationTemplateResponse(TemplateResponse):
def __init__(self, request, template, nav_data, context = None, content_type = None, status = None, current_app = None):
nav_data = nav_data.copy()
</code></pre>
<p><strong>Solution 2:</strong> change your logic to store reversed urls in other variables</p>
<p><strong>Solution 3:</strong> change your logic to behave differently when urls are already reversed. That solution is <strong>not</strong> thread safe!</p>
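<p>The effect is easy to reproduce outside Django. A minimal sketch of why the first-request-only behaviour appears (a module-level dict mutated by the view, with hypothetical names):</p>

```python
# Module-level "constant", as in the question's views.py
NAV_DATA = {"app_root": "index"}

def view(nav_data):
    # Mutates the shared dict, as the original NavigationTemplateResponse does
    nav_data["app_root"] = "/" + nav_data["app_root"] + "/"
    return nav_data["app_root"]

first = view(NAV_DATA)   # first "request": sees the pristine value
second = view(NAV_DATA)  # later "requests": see the already-rewritten value
```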
|
python|django|wsgi
| 2 |
1,907,761 | 52,897,985 |
Extract data from pandas data frame
|
<p>I want to create a list of data frames from a bigger data frame based on column value. The column <code>"ID"</code> can repeat for example <code>1,2,3,1,2,3,4,5,1,2</code>. </p>
<p>I want to create a list of data frames by extracting rows until the ID resets back to 1. In this case the list should have 3 data frames with IDs <code>1,2,3</code>, <code>1,2,3,4,5</code> and then <code>1,2</code>. </p>
<p>Can this be done without using a for loop?</p>
|
<p>No need for loops.</p>
<pre><code>>>> list(zip(*df.groupby(df.ID.diff().ne(1).cumsum())))[1]
</code></pre>
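<p>For example, with the ID sequence from the question, this yields the three expected frames:</p>

```python
import pandas as pd

df = pd.DataFrame({"ID": [1, 2, 3, 1, 2, 3, 4, 5, 1, 2]})

# diff() != 1 marks each restart of the sequence; cumsum() labels the runs,
# and groupby splits the frame on those labels.
frames = list(zip(*df.groupby(df.ID.diff().ne(1).cumsum())))[1]
```

<p><code>frames</code> is a tuple of three DataFrames, one per run of consecutive IDs.</p>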
|
python-3.x|pandas|pandas-groupby
| 4 |
1,907,762 | 47,933,089 |
Django display a value from a dictionary within dictionary
|
<p>I'm trying to display values from a dictionary within a dictionary in a template using django.
I have a dictionary like this in my views:</p>
<pre><code>characters = {
"char1": {'name': "David",
'stars': 4,
'series': "All star"},
"char2": {'name': "Patrick",
'stars': 3,
'series': "Demi god"}
}
</code></pre>
<p>I can display the whole dictionary on the page; however, I want to display only the <code>name</code> keys and their values (e.g. <code>'David'</code>). I wrote the following in the template:</p>
<pre><code>{% for char in characters %}
    {% for key, value in char %}
        {{ key }}: {{ value }}
    {% endfor %}
{% endfor %}
</code></pre>
<p>However this doesn't show me anything. What is wrong with this double loop?</p>
<p>Thanks</p>
|
<p>You have to add <a href="https://docs.python.org/3/library/stdtypes.html#dict.items" rel="nofollow noreferrer">.items</a> when you loop through key/value pairs.
See below (Python 3):</p>
<pre><code>{% for char in characters.items %}
    {% for c in char %}
        name: {{ c.name }}
    {% endfor %}
{% endfor %}
</code></pre>
<p>In Python 2 it would be .iteritems</p>
<pre><code>{% for char in characters.iteritems %}
    {% for c in char %}
        name: {{ c.name }}
    {% endfor %}
{% endfor %}
</code></pre>
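<p>For clarity, here is what the template's <code>.items</code> lookup corresponds to in plain Python (sample data from the question):</p>

```python
characters = {
    "char1": {"name": "David", "stars": 4, "series": "All star"},
    "char2": {"name": "Patrick", "stars": 3, "series": "Demi god"},
}

# iterating .items() yields (key, inner_dict) pairs, so the nested
# values are reachable through the second element of each pair
names = [char["name"] for key, char in characters.items()]
print(names)  # ['David', 'Patrick']
```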
|
python|django|loops|dictionary
| 1 |
1,907,763 | 34,825,875 |
JSON data convert to the django model
|
<p>I need to convert JSON data to django model.</p>
<p>This is my JSON data</p>
<pre><code>{
    "data": [
        {
            "id": "20ad5d9c-b32e-4599-8866-a3aaa5ac77de",
            "name": "name_1"
        },
        {
            "id": "7b6d76cc-86cd-40f8-be90-af6ced7fec44",
            "name": "name_2"
        },
        {
            "id": "b8843b1a-9eb0-499f-ba64-25e436f04c4b",
            "name": "name_3"
        }
    ]
}
</code></pre>
<p>This is my django method</p>
<pre><code>def get_titles():
    url = 'http://localhost:8080/titles/'
    r = requests.get(url)
    titles = r.json()
    print(titles['data'])
</code></pre>
<p>What I need is convert to the model and pass to the template. Please let me know how to convert JSON to Model.</p>
|
<h2>Using JSON in Django templates</h2>
<p>You don't <em>have</em> to convert the JSON structure into a Django model just to use it in a Django template: JSON structures (Python dicts) work just fine in a Django template.</p>
<p>e.g. if you pass in <code>{'titles': titles['data']}</code> as the context to your template, you can use it as:</p>
<pre><code>{% for title in titles %}
ID is {{title.id}}, and name is {{title.name}}
{% endfor %}
</code></pre>
<p>As long as you don't need to store the data with Django, the above solution works just fine. If you want to store, read below.</p>
<h2>Make a model</h2>
<p>You can create a model to store that JSON data in. Once stored you can pass the queryset to your template</p>
<pre><code>class Title(models.Model):
    id = models.CharField(max_length=36)
    name = models.CharField(max_length=255)
</code></pre>
<p>or use an <code>UUIDField</code></p>
<pre><code>class Title(models.Model):
    id = models.UUIDField(primary_key=True)
    name = models.CharField(max_length=255)
</code></pre>
<p><strong>Store the data in a Django model</strong></p>
<pre><code># Read the JSON
titles = r.json()

# Create a Django model object for each object in the JSON
for title in titles['data']:
    Title.objects.create(id=title['id'], name=title['name'])
</code></pre>
<p><strong>Use stored data to pass as template context</strong></p>
<pre><code># Then pass this dict below as the template context
context = {'titles': Title.objects.all()}
</code></pre>
|
python|json|django|python-2.7|django-models
| 13 |
1,907,764 | 47,144,689 |
cx_freeze executable crashes with "Program Has Stopped Working" and "No Module Named Codec"
|
<p>I wrote a Python script that requires a few modules, these being:</p>
<p>PyQt5,
plotly,
pandas,
datetime,
xlsxwriter</p>
<p>I am trying to turn these into a .exe using cx_freeze. I have done this once before with a simpler program that relied primarily on PyQt5.</p>
<p>The line:</p>
<pre><code>python setup.py build
</code></pre>
<p>completes without errors on the command prompt.</p>
<p>My setup.py file looks like:</p>
<pre><code>import sys

kwargs = {"name": "x",
          "version": "1.2",
          "author": "x",
          "author_email": "x",
          "description": "x",
          "zip_safe": False
          }

try:
    if sys.argv[1] == "build":
        import os
        from setuptools import find_packages
        from cx_Freeze import setup, Executable
        kwargs["options"] = {
            "build_exe": {
                "packages": find_packages() + ["os", "numpy", "plotly", "xlsxwriter", "sys", "datetime"],
                "includes": ["numpy", "plotly", "pkg_resources", "PyQt5", "xlsxwriter", "sys", "datetime", "codecs"],
            }
        }
        kwargs["executables"] = [Executable(r"MyScript.py",
                                            base="console")]
        setup(**kwargs)
except Exception as e:
    print(e)
</code></pre>
<p>The line with [username] has my username in it.</p>
<p>When I run it, the command line reads:</p>
<pre><code>Fatal Python error: Py_Initialize: unable to load the file system codec
Traceback (most recent call last):
File "C:\Users\[Username]\AppData\Local\Programs\Python\Python36\lib\encodings\__init__.py", line 31, in <module>
ModuleNotFoundError: No module named 'codecs'
</code></pre>
<p><strong>UPDATE</strong></p>
<p>After reading more on the internet, it seemed like it might be an install issue. So I've uninstalled and re-installed Python and all the modules I needed after uninstalling Anaconda (just in case it was something to do with the Anaconda distribution). However, I'm still seeing the above error. There <em>is</em> a module named codecs, and the Python script (not the .exe) works fine. I've tried changing my path variable to make sure it points to the correct version of Python (though I've uninstalled all other versions).</p>
<p>Also, I'm running:</p>
<p>OS: Windows 7</p>
<p>Python: Python 3.6.3 64 bit</p>
|
<p>So it turned out that the issue was cx_freeze. If you use pip to install it, it doesn't install the latest version. Instead, I Googled "cx_freeze download", downloaded the latest .whl for my version of windows, then ran:</p>
<pre><code>pip install [name of file here].whl
</code></pre>
<p>After that there were other issues, but the codec issue was resolved.</p>
|
python-3.x|cx-freeze
| 1 |
1,907,765 | 47,210,958 |
Fastest way to find exchange times in Python
|
<p>I have two lists of times. Starting from each point in list1, I want to find the closest subsequent (greater) time in list2. </p>
<p>For example:</p>
<p>list1 = [280, 290]</p>
<p>list2 = [282, 295]</p>
<p>exchange(list1, list2) = [2, 5]</p>
<p>I'm having trouble doing this quickly. The only way I can think to do it is by looping through each element in list1 and taking the first hit in list y greater than that list1 element (lists are sorted). My two attempts below, one pandas, one w/o pandas:</p>
<pre><code># dictionary containing my two lists
transition_trj = {'ALA19': [270.0, 280.0, 320.0, 330.0, 440.0, 450.0,
470.0], 'ALA88': [275.0, 285.0, 325.0, 333.0, 445.0, 455.0, 478.0]}
# for example, exchange times for ('ALA19','ALA88') = [5.0, 5.0, 5.0, 3.0, 5.0, 5.0, 8.0]
#find all possible combinations
names = list(transition_trj.keys())
import itertools
name_pairs = list(itertools.combinations_with_replacement(names, 2))
# non-pandas loop, takes 1.59 s
def exchange(Xk,Yk): # for example, a = 'phiALA18', b = 'phiARG11'
Xv = transition_trj[Xk]
Yv = transition_trj[Yk]
pair = tuple([Xk,Yk])
XY_exchange = [] # one for each pair
for x in range(len(Yv)-1): # over all transitions in Y
ypoint = Yv[x] # y point
greater_xpoints = []
for mini in Xv:
if mini > ypoint:
greater_xpoints.append(mini) # first hit=minimum in sorted list
break
if len(greater_xpoints) > 0:
exchange = greater_xpoints[0] - ypoint
XY_exchange.append(exchange)
ET = sum(XY_exchange) * (1/observation_t)
return pair, ET
# pandas loop, does same thing, takes 11.58 s...I am new to pandas...
import pandas as pd
df = pd.DataFrame(data=transition_trj)
def exchange(dihx, dihy):
pair = tuple([dihx, dihy])
exchange_times = []
for x in range(df.__len__()):
xpoint = df.loc[x, dihx]
for y in range(df.__len__()):
ypoint = df.loc[y, dihy]
if ypoint > xpoint:
exchange = ypoint - xpoint
exchange_times.append(exchange)
break
ET = sum(exchange_times) * (1 / observation_t)
return pair, ET
# here's where I call the def, just for context.
exchange_times = {}
for nm in name_pairs:
pair, ET = exchange(nm[0],nm[1])
exchange_times[pair] = ET
if nm[0] != nm[1]:
pair2, ET2 = exchange(nm[1], nm[0])
exchange_times[pair2] = ET2
</code></pre>
|
<p>I propose a solution with <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.searchsorted.html" rel="nofollow noreferrer"><code>np.searchsorted</code></a> (NumPy is the skeleton underneath pandas), which finds the insertion points of one list in another. It's an <code>O(N log N)</code> solution, whereas yours is <code>O(N²)</code>, since you search for the minimum from the beginning (<code>for mini in Xv:</code>) in each loop.</p>
<p>It works on your example, but I don't know what you want if the two lists don't have the same length or are not nicely interleaved. Nevertheless, the proposed solution works if the lengths are equal.</p>
<pre><code>df=pd.DataFrame(transition_trj)
pos=np.searchsorted(df['ALA88'],df['ALA19'])
print(df['ALA88'][pos].reset_index(drop=True)-df['ALA19'])
# 0 5.0
# 1 5.0
# 2 5.0
# 3 3.0
# 4 5.0
# 5 5.0
# 6 8.0
# dtype: float64
</code></pre>
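<p>The same idea in plain NumPy, checked against the question's two lists (a sketch assuming both lists are sorted; note that <code>searchsorted</code> defaults to <code>side='left'</code>, so an exactly equal time yields a wait of 0):</p>

```python
import numpy as np

# the question's two transition lists
ala19 = np.array([270.0, 280.0, 320.0, 330.0, 440.0, 450.0, 470.0])
ala88 = np.array([275.0, 285.0, 325.0, 333.0, 445.0, 455.0, 478.0])

# index of the first ala88 element that each ala19 element would insert before
pos = np.searchsorted(ala88, ala19)
waits = ala88[pos] - ala19
print(waits)  # [5. 5. 5. 3. 5. 5. 8.]
```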
|
python|pandas
| 1 |
1,907,766 | 20,167,471 |
How to call another Object's Method from the current Object's Method in Python
|
<p>I'm trying to simulate cars moving along a road in a graph-like fashion. Each Road object has a source and destination. When a car reaches the end of the road, I want the road to send it to the beginning of the next road. My code looks as follows for the Road class:</p>
<pre><code>from collections import deque

class Road:
    length = 10
    roadCount = 0

    def __init__(self, src, dst):
        self.src = src
        self.dst = dst
        self.actualRoad = deque([0]*self.length, 10)
        Road.roadCount += 1

    def enterRoad(self, car):
        if self.actualRoad[0] == 0:
            self.actualRoad.appendleft(car)
        else:
            return False

    def iterate(self):
        if self.actualRoad[-1] == 0:
            self.actualRoad.appendleft(0)
        else:
            dst.enterRoad(actualRoad[-1])  # this is where I want to send the car in the last part of the road to the destination road!

    def printRoad(self):
        print(self.actualRoad)

testRoad = Road(1, 2)
testRoad.enterRoad("car1")
testRoad.iterate()
</code></pre>
<p>In the code above, the problem is at the else part of the method iterate(): How do I call another Object's method from the current Object's Method? Both methods are in the same class.</p>
|
<p>It seems to me that you are confusing the difference between <em>class</em> and <em>object</em>.</p>
<p>A class is the piece of code where you model an object by specifying the attributes that compose it and the methods that define its behavior. In this case, the <em>Road</em> class.</p>
<p>On the other hand, an object is nothing but an instance of the class that defines it. Therefore it has a state, which is defined by the values of its attributes. Again, in this case, <em>testRoad</em> is a variable that stores an object of the Road class.</p>
<p>In one word, while the <strong>class is an abstract model, the object is a concrete instance with a well defined state</strong>. </p>
<p>So then when you say that you want:</p>
<blockquote>
<p>call another Object's method from the current Object's Method</p>
</blockquote>
<p>what you actually want is to define in your class a method that calls a method on another object of the same class.</p>
<p>To do so, the method needs to receive as a parameter the object on which you want to invoke the other method:</p>
<pre><code>def iterate(self, destination_road):
    if self.actualRoad[-1] == 0:
        self.actualRoad.appendleft(0)
    else:
        destination_road.enterRoad(self.actualRoad[-1])
</code></pre>
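<p>A minimal runnable sketch of the whole pattern, with simplified, hypothetical names:</p>

```python
from collections import deque

class Road:
    LENGTH = 10

    def __init__(self):
        self.cells = deque([0] * self.LENGTH, self.LENGTH)

    def enter_road(self, car):
        if self.cells[0] == 0:
            self.cells.appendleft(car)
            return True
        return False

    def iterate(self, destination_road):
        last = self.cells[-1]
        if last == 0:
            self.cells.appendleft(0)           # everyone advances one cell
        else:
            destination_road.enter_road(last)  # hand the car to the next road

road_a, road_b = Road(), Road()
road_a.enter_road("car1")
for _ in range(Road.LENGTH):
    road_a.iterate(road_b)
print("car1" in road_b.cells)  # True
```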
|
python|class|oop|methods
| 1 |
1,907,767 | 64,180,557 |
Create a tuple with double digit values from input in Python
|
<p>I'm trying to create a tuple from input where the values are two digits.</p>
<pre><code>tup = tuple(input("enter a tuple"))
</code></pre>
<p>but when I enter something like <code>10 11</code>, I get this:</p>
<pre><code>('1', '0', ' ', '1', '1')
</code></pre>
<p>The intended result is this:</p>
<pre class="lang-py prettyprint-override"><code>(10, 11)
</code></pre>
|
<p>Try this:</p>
<pre class="lang-py prettyprint-override"><code>tup = tuple(int(n) for n in input("Enter a tuple: ").split(" "))
</code></pre>
<p>This will work for integers of any number of characters as well as negative integers.</p>
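<p>The same idea wrapped in a small helper so it can be tested without interactive input:</p>

```python
def parse_tuple(text):
    # split() with no argument also tolerates repeated spaces and tabs
    return tuple(int(n) for n in text.split())

print(parse_tuple("10 11"))     # (10, 11)
print(parse_tuple(" -3  42 "))  # (-3, 42)
```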
|
python|input|tuples
| 1 |
1,907,768 | 70,694,938 |
Asyncio task raises CancelledError despite being awaited
|
<p>I'm creating a program to better understand producer-consumer flow and using queues.</p>
<p>I keep getting <code>asyncio.exceptions.CancelledError</code> at the end of the coroutine even though to my understanding everything is correctly awaited and all the queues are properly used and joined.</p>
<p>I can't find the culprit and I've been testing this for quite a bit. Maybe someone else can see what I sadly cannot.</p>
<p>The flow goes like this:</p>
<ol>
<li>Open new session, create one queue for handling API requests and a second queue for API responses.</li>
<li>Put all requests on <code>request_queue</code></li>
<li>Create 5 <code>api_workers</code> to work on the requests from one queue, produce responses and put them on the other queue</li>
<li>Create a single <code>profile_worker</code> that consumes the responses</li>
<li>Wait for all requests to be processed</li>
<li>Cancel the <code>request_workers</code></li>
<li>Wait for all the responses to be processed</li>
<li>Cancel the <code>profile_worker</code></li>
<li>Wait for <code>request_workers</code> to finish (<code>gather</code> them all)</li>
<li>Wait for <code>profile_worker</code> to finish</li>
</ol>
<pre class="lang-py prettyprint-override"><code>async def get_profiles():
    async with ClientSession() as session:
        request_queue = asyncio.Queue()
        profile_queue = asyncio.Queue()
        profiles = list()
        for request in make_get_profile_requests(names, token):
            await request_queue.put(request)
        api_workers = [asyncio.create_task(
                           api_response_producer(session,
                                                 request_queue, profile_queue))
                       for _ in range(0, 5)]
        profile_worker = asyncio.create_task(
            profile_producer(profile_queue, profiles))
        await request_queue.join()
        for worker in api_workers:
            worker.cancel()
        await profile_queue.join()
        profile_worker.cancel()
        await asyncio.gather(*api_workers, return_exceptions=True)
        await profile_worker
    return profiles
</code></pre>
<p>The 5 <code>api_workers</code> take in requests from <code>request_queue</code> and put responses in <code>profile_queue</code>.</p>
<pre class="lang-py prettyprint-override"><code>async def api_response_producer(session: ClientSession,
                                queue_in: Queue, queue_out: Queue):
    while True:
        request: Request = await queue_in.get()
        print('Working on: {} -> {}'.format(
            request.params["nick"], request.params["command"]))
        if request.method == Request.GET:
            async with session.get(
                    get_api_server(), params=request.params) as response:
                response.raise_for_status()
                await queue_out.put(await response.json(
                    content_type=None))
        queue_in.task_done()
        print('Response ready: {} -> {}'.format(
            request.params["nick"], request.params["command"]))
</code></pre>
<p>The <code>profile_worker</code> takes 4 recent responses and creates a <code>Profile</code> instance for each run.</p>
<pre class="lang-py prettyprint-override"><code>async def profile_producer(queue_in: Queue, profile_list: list):
    while True:
        profile = await queue_in.get()
        print(f"Received profile JSON: {profile}")
        queue_in.task_done()
        friends = await queue_in.get()
        print(f"Received friends JSON: {friends}")
        queue_in.task_done()
        photos = await queue_in.get()
        print(f"Received photos JSON: {photos}")
        queue_in.task_done()
        videos = await queue_in.get()
        print(f"Received videos JSON: {videos}")
        queue_in.task_done()
        print(f"Creating profile: {profile}")
        #profile_list.append(Profile(profile, friends, photos, videos))
        print(f"New profile ready: {profile}")
</code></pre>
<p>And to run this I use:</p>
<pre class="lang-py prettyprint-override"><code>profiles = asyncio.run(get_profiles())
</code></pre>
<p><strong>Edit:</strong>
The culprit was that <code>asyncio.gather()</code> for <code>request_workers</code> had the parameter <code>return_exceptions=True</code>, and it handled the <code>CancelledError</code> for each of the workers.</p>
<p>The <code>profile_worker</code>'s exception on the other hand was not being handled.</p>
<p>Here's the fix - move <code>await profile_worker</code> to the <code>gather</code> list:</p>
<pre class="lang-py prettyprint-override"><code>await asyncio.gather(*api_workers, profile_worker,
return_exceptions=True)
</code></pre>
|
<p>After reading more about how <code>asyncio.gather()</code> handles task cancellation and how tasks raise the <code>CancelledError</code> that has to be handled if the task is simply awaited, I figured it out:</p>
<ol>
<li>Move <code>profile_worker</code> to the <code>gather</code> task list and add exception handling</li>
</ol>
<pre class="lang-py prettyprint-override"><code>await asyncio.gather(*api_workers, profile_worker,
return_exceptions=True)
</code></pre>
<ol start="2">
<li>Remove the separate <code>await</code> for the <code>profile_worker</code> (this line is deleted):</li>
</ol>
<pre class="lang-py prettyprint-override"><code>await profile_worker
</code></pre>
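<p>A standalone sketch (not the question's code) showing the difference: <code>gather(..., return_exceptions=True)</code> captures a child's <code>CancelledError</code> as a result, whereas a bare <code>await</code> on the task would raise it:</p>

```python
import asyncio

async def worker():
    await asyncio.Event().wait()  # parks forever until cancelled

async def main():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0)  # let the task start running
    task.cancel()
    # return_exceptions=True turns the child's CancelledError into a result
    return await asyncio.gather(task, return_exceptions=True)

results = asyncio.run(main())
print(type(results[0]).__name__)  # CancelledError
```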
|
python|python-3.x|python-asyncio|aiohttp
| 0 |
1,907,769 | 70,718,550 |
find similar records with multiple columns
|
<p>I have 100k records in a dataframe</p>
<p><a href="https://i.stack.imgur.com/Zze5F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zze5F.png" alt="enter image description here" /></a></p>
<p>and I want to find different prices for the same product, both within the same invoice and across different invoices, along with the store. A snippet of the data is shown above.</p>
<p>expected output</p>
<p><a href="https://i.stack.imgur.com/juhGo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/juhGo.png" alt="enter image description here" /></a></p>
|
<p>It is not clear how you determine that products are the same; however, you can use this code, adjusting the <code>subset</code> values, to keep only unique rows in the dataframe:</p>
<pre><code>df = df.drop_duplicates(subset=['InvoiceCo', 'pcode', 'price'], keep="last").sort_values(by=['StoreCode'])
</code></pre>
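<p>A small self-contained sketch with hypothetical data using the column names from the answer:</p>

```python
import pandas as pd

# hypothetical rows: the first two are duplicates on the chosen subset
df = pd.DataFrame({
    "InvoiceCo": [1, 1, 2],
    "pcode":     ["A", "A", "A"],
    "price":     [10, 10, 12],
    "StoreCode": [7, 5, 9],
})

deduped = df.drop_duplicates(subset=["InvoiceCo", "pcode", "price"],
                             keep="last").sort_values(by=["StoreCode"])
print(deduped.to_dict("records"))
```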
|
python|python-3.x|pandas
| 0 |
1,907,770 | 70,630,926 |
Problem with adding new instance to DB with Flask-Admin
|
<p>I have a little problem with adding new instances using flask-admin.
My model is:</p>
<pre><code>class MenuCategory(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name_category = db.Column(db.String(20), unique=True)
    slug = db.Column(db.String(255))
    order = db.Column(db.Integer)
    path = db.Column(db.Unicode(128))

    def __init__(self, **kwargs):
        if not 'slug' in kwargs:
            kwargs['slug'] = slugify(kwargs.get('name_category', ''))
        super().__init__(**kwargs)

    def __repr__(self):
        return f'<Category {self.name_category}>'
</code></pre>
<p>And also ModelView is:</p>
<pre><code>class MenuCategoryView(ModelView):
    column_labels = dict(name_category='Наименование категории', order='Приоритет', path='Изображение')
    column_editable_list = ('name_category', 'order')
    column_default_sort = 'order'
    form_excluded_columns = ('slug')
</code></pre>
<p>When I create a new instance, it should also set the "slug" value. It works fine in the terminal with flask shell:</p>
<pre><code>>>> m = MenuCategory(name_category='Something here')
{'name_category': 'Something here', 'slug': 'something-here'}
>>> db.session.add(m)
>>> db.session.commit()
</code></pre>
<p>But when I create a new instance with flask-admin, it just adds one without "slug", leaving a NULL value. Can someone tell me what I'm doing wrong?</p>
|
<p>If we look into the flask-admin code, we can see this part that runs while creating a model:</p>
<pre><code> def create_model(self, form):
try:
# It calls __new__ of model
model = self._manager.new_instance()
# TODO: We need a better way to create model instances and stay compatible with
# SQLAlchemy __init__() behavior
# Next two lines fill the special _sa_instance_state of model
state = instance_state(model)
self._manager.dispatch.init(state, [], {})
# populate_obj is probably as simple as model.field1 = form.field1 and so on
form.populate_obj(model)
self.session.add(model)
self._on_model_change(form, model, True)
self.session.commit()
</code></pre>
<p>So <code>__init__</code> isn't even called, which the TODO annotation even acknowledges (however, it was added two years ago, so I am pretty sure that one won't be resolved at all).</p>
<p>The difference is that when you add an event listener, you listen for events that happen while interacting with the model itself, e.g. when a specific field is set (in our case, when <code>form.populate_obj</code> assigns <code>model.name_category = ...</code>), which always occurs, unlike calling <code>__init__</code>.</p>
|
python|flask|flask-sqlalchemy|flask-admin
| 0 |
1,907,771 | 70,017,466 |
Circle appears as a Triangle in Arcade python library
|
<p>I have been trying to draw a simple <strong>circle</strong> using the arcade python library, but all I can see is a perfectly positioned <strong>triangle</strong> instead.</p>
<p>Here's the sample code I used:</p>
<pre><code>import arcade
# Set constants for the screen size
SCREEN_WIDTH = 400
SCREEN_HEIGHT = 400
SCREEN_TITLE = "Happy Face Example"
# Open the window. Set the window title and dimensions
arcade.open_window(SCREEN_WIDTH, SCREEN_HEIGHT, SCREEN_TITLE)
# Set the background color
arcade.set_background_color(arcade.color.WHITE)
arcade.start_render()
# Draw the face
x = SCREEN_WIDTH // 2
y = SCREEN_HEIGHT // 2
radius = 100
arcade.draw_circle_filled(x, y, radius, arcade.color.YELLOW)
arcade.finish_render()
# Keep the window open until the user hits the 'close' button
arcade.run()
</code></pre>
<p><a href="https://i.stack.imgur.com/BUDvp.png" rel="nofollow noreferrer">Screenshot of the problem</a></p>
<p>After several hours of searching the internet for a solution, I tried it on another PC, and I got a proper circle !!</p>
<p>My machine spec:</p>
<p>i) AMD Ryzen 5 2500U , Vega 8 graphics</p>
<p>ii) 8 GB RAM</p>
<p>iii) OS: Ubuntu 21.10</p>
<p>iv) Python version: 3.10</p>
<p>The machine on which it worked also runs Ubuntu 21.10, with an AMD CPU + GTX 1060 6 GB video card and has Python 3.10 installed.</p>
<p>What could be the problem?</p>
|
<p>It is due to the fact that a circle is in fact rendered as a polygon. The quality of the circle is determined by the number of vertices of this polygon.
According to the <a href="https://api.arcade.academy/en/latest/api/drawing_primitives.html?highlight=circle#arcade.draw_circle_filled" rel="nofollow noreferrer">docs</a>, you can set this via the <code>num_segments</code> parameter:</p>
<blockquote>
<p>num_segments (int) – Number of triangle segments that make up this circle. Higher is better quality, but slower render time. The default value of -1 means arcade will try to calculate a reasonable amount of segments based on the size of the circle.</p>
</blockquote>
<p>However, on some versions of the library and some hardware, the default calculation goes wrong, so you have to set it manually.</p>
<p>So, your code should be something like this:</p>
<pre><code>arcade.draw_circle_filled(x, y, radius, arcade.color.YELLOW, 0, 100)
</code></pre>
<p>However, since you most likely want the resolution to depend on the size of the circle, I suggest you use this formula:</p>
<pre><code>arcade.draw_circle_filled(x, y, radius, arcade.color.YELLOW, 0, int(2.0 * math.pi * radius / 3.0))
</code></pre>
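<p>The segment count from that formula can be sanity-checked in plain Python (the 3-pixels-per-segment constant and the minimum of 3 are assumptions for illustration, not arcade's internals):</p>

```python
import math

def circle_segments(radius, px_per_segment=3.0):
    # roughly one triangle segment per 3 px of circumference, never fewer than 3
    return max(3, int(2.0 * math.pi * radius / px_per_segment))

print(circle_segments(100))  # 209
print(circle_segments(1))    # 3
```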
|
python|arcade
| 0 |
1,907,772 | 55,708,384 |
numba njit give my and error on a 2D np.array indexation
|
<p>I'm trying to index a 2D matrix <code>B</code> in an njit function with a vector <code>a</code> containing the indices I want, taken as a slice of matrix <code>D</code>.
Here is a minimal example:</p>
<pre><code>import numba as nb
import numpy as np

@nb.njit()
def test(N, P, B, D):
    for i in range(N):
        a = D[i,:]
        b = B[i,a]
        P[:,i] = b

P = np.zeros((5,5))
B = np.random.random((5,5))*100
D = (np.random.random((5,5))*5).astype(np.int32)
print(D)
N = 5
print(P)
test(N,P,B,D)
print(P)
</code></pre>
<p>Numba raises an error at the line <code>b = B[i,a]</code>:</p>
<pre><code>File "dj.py", line 10:
def test(N,P,B,D):
<source elided>
a = D[i,:]
b = B[i,a]
^
This is not usually a problem with Numba itself but instead often caused by
the use of unsupported features or an issue in resolving types.
</code></pre>
<p>I don't understand what I am doing wrong here.
The code works without the <code>@nb.njit()</code> decorator.</p>
|
<p>Another solution is to apply <code>swapaxes</code> on B before calling <code>test</code> and to invert the indices (<code>B[i,a]</code> -> <code>B[a,i]</code>). I don't know why this works, but here is the implementation:</p>
<pre><code>import numba as nb
import numpy as np

@nb.njit()
def test(N, P, B, D):
    for i in range(N):
        a = D[i,:]
        b = B[a,i]
        P[:, i] = b

P = np.zeros((5,5))
B = np.arange(25).reshape((5,5))
D = (np.random.random((5,5))*5).astype(np.int32)
N = 5
test(N, P, np.swapaxes(B, 0, 1), D)
</code></pre>
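<p>The index equivalence behind this trick can be checked in plain NumPy, without numba: <code>np.swapaxes(B, 0, 1)[a, i]</code> reads the same values as <code>B[i, a]</code>:</p>

```python
import numpy as np

B = np.arange(25).reshape(5, 5)
D = (np.random.random((5, 5)) * 5).astype(np.int32)

i = 2
a = D[i, :]
# indexing the transposed array with swapped index order gives the same values
same = np.array_equal(B[i, a], np.swapaxes(B, 0, 1)[a, i])
print(same)  # True
```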
<p>By the way, in the answer given by @chrisb, it is not: <code>b[j] = B[i, j]</code> but <code>b[j] = B[i, a[j]]</code>.</p>
|
python|numba
| 1 |
1,907,773 | 55,580,115 |
numpy arctan funtion mirrored angle
|
<p>Problem: there is a coordinate grid of data points; they need to be filtered to a specific, curved area.</p>
<p>Solution: </p>
<ul>
<li><p>How is the area determined?</p>
<ul>
<li><p>Within a grid, you are given a specific point (A) on that grid & an
angle from that point (determined in relation to the grid)</p></li>
<li><p>You are also given a target point (B) within the grid</p></li>
<li><p>Draw 2 circles of differing, specific radii from point (A)</p></li>
<li><p>Mark an area bounded by 45 degrees from either side of the given angle</p></li>
<li><p>Evaluate each point on the grid to see if it falls within the area bounded by the 2 circles and the 90-degree section</p></li>
</ul></li>
</ul>
<p>Issue with my solution: rather than filtering 1 area, 2 mirrored areas are filtered, allowing for incorrect data.</p>
<p>Each point on the grid is evaluated by the function below to check whether it fits within the filtered area. I've looked into the arctan2 function to see if it will help, but I don't understand it. A full summary of the issue can be found <a href="http://coiro.co.uk/math_summary.docx" rel="nofollow noreferrer">here</a></p>
<p>here are some clarifying images:</p>
<p><a href="https://i.stack.imgur.com/RDFPH.jpg" rel="nofollow noreferrer">What this should look like</a>,<a href="https://i.stack.imgur.com/p4Cg7.png" rel="nofollow noreferrer">what this code looks like in practice</a>,as well as <a href="http://www.coiro.co.uk%5Ctestgrilla.csv" rel="nofollow noreferrer">some testable data</a> </p>
<pre><code>def pertenece(x, y, x_pala, y_pala, alpha):
    # parameters R_int and R_ext
    R_int = 17
    R_ext = 25
    # define thetas
    Theta_min = (alpha - 45)*(180/m.pi)**-1
    Theta_max = (alpha + 45)*(180/m.pi)**-1
    # compute R_punto and Theta_punto
    R_punto = ((x-x_pala)**2 + (y-y_pala)**2)**0.5
    Theta_punto = np.arctan((y-y_pala)/(x-x_pala))
    if (R_punto >= R_int and R_punto <= R_ext) and (Theta_punto >= Theta_min and Theta_punto <= Theta_max):
        return True
    else:
        return False
</code></pre>
<p>As can be seen in the document, only one sector should be filtered; in practice there are two.</p>
|
<p>You should indeed be using <code>np.arctan2</code> because otherwise it can't differentiate between differently signed input. For example <code>np.arctan(1 / 2)</code> is the same as <code>np.arctan(-1 / -2)</code> because the argument is the same. For that reason you'll preserve both the <code>(-,-)</code> and the <code>(+,+)</code> quadrant. For <code>np.arctan2</code> you simply pass both coordinates as separate arguments so the algorithm can figure out the quadrant from the sign of the two arguments. Hence you should compute the following:</p>
<pre><code>Theta_punto = np.arctan2((y-y_pala), (x-x_pala))
</code></pre>
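<p>A quick NumPy check of the quadrant ambiguity:</p>

```python
import numpy as np

# arctan sees only the ratio, so opposite quadrants collapse together
print(np.arctan(1 / 2) == np.arctan(-1 / -2))            # True

# arctan2 receives both signs and keeps the quadrants apart
print(np.isclose(np.arctan2(1, 2), np.arctan2(-1, -2)))  # False
```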
|
python|numpy
| 0 |
1,907,774 | 55,768,381 |
How to fix this problem with the while loop in the factorial procedure?
|
<p>I'm trying to define a factorial procedure with this code, but the result I get is n^2, not n*(n-1)*(n-2)*...*1. It seems the <code>i * n</code> assignment has only taken effect once, when i = n. I am confused. What is the problem?</p>
<pre><code>def factorial(n):
    i = 1
    while n >= i:
        result = i * n
        i = i + 1
    return result
</code></pre>
|
<p>Your loop overwrites <code>result</code> on every iteration, so the final assignment (when <code>i == n</code>) leaves you with <code>n * n</code>. You should keep aggregating the product into <code>result</code> instead:</p>
<pre><code>def factorial(n):
    result = 1
    while n > 1:
        result *= n
        n -= 1
    return result
</code></pre>
<p>so that <code>factorial(4)</code> returns: <code>24</code></p>
|
python|factorial
| 0 |
1,907,775 | 73,482,730 |
How to join the last 2 integers in python?
|
<p>I have a list:</p>
<blockquote>
<p>[1, 43, 2, 3]</p>
</blockquote>
<p>And I want to join the last 2 integers into 1 integer to get:</p>
<blockquote>
<p>[1, 43, 23]</p>
</blockquote>
<p>How can I do this?</p>
|
<p>You can do it like this:</p>
<pre><code>numbers = [1, 43, 2, 3]
a = int("".join(str(numbers[-2])+str(numbers[-1])))
del numbers[-2]
numbers[-1] = a
print(numbers)
</code></pre>
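<p>A slightly more compact variant using slice assignment (like the loop above, it assumes the last two values are non-negative):</p>

```python
numbers = [1, 43, 2, 3]
# slice assignment replaces the last two items with the joined number
numbers[-2:] = [int(str(numbers[-2]) + str(numbers[-1]))]
print(numbers)  # [1, 43, 23]
```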
|
python|list|integer
| -1 |
1,907,776 | 63,897,936 |
Assigning different randomly generated values to each rows (python)
|
<p>I am trying to make dummy data where I would like to assign random sentences under the question column.</p>
<p>I am using <code>from essential_generators import DocumentGenerator</code> to generate the random sentences.</p>
<pre><code>for i, row in dummy.iterrows():
    dummy['Question'] = gen.sentence()
</code></pre>
<p>I thought if I iterate each row and apply gen.sentence(), which randomly generates a sentence each time, I would get different sentences for my 1000 rows. However, it results in the same sentence for all 1000 rows.</p>
<p>What can I do to yield my desired result?</p>
|
<pre><code>for i, row in dummy.iterrows():
    dummy.at[i, 'Question'] = gen.sentence()
</code></pre>
<p>Assign to the current row instead of overwriting the whole <code>'Question'</code> column on every pass. Note that writing to the <code>row</code> copy that <code>iterrows</code> yields would not persist, so write back through <code>dummy.at</code>. I can't advise more as I'm not getting enough information from your question.</p>
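<p>If the generator allows it, a loop-free alternative builds the whole column at once (a sketch with a stand-in for <code>gen.sentence()</code>):</p>

```python
import pandas as pd

dummy = pd.DataFrame(index=range(5))

def fake_sentence(_counter=[0]):
    # stand-in for gen.sentence(): returns a new string on every call
    _counter[0] += 1
    return f"sentence {_counter[0]}"

# building the whole column at once avoids the row-by-row loop entirely
dummy["Question"] = [fake_sentence() for _ in range(len(dummy))]
print(dummy["Question"].nunique())  # 5
```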
|
python|pandas|numpy|dummy-data
| 1 |
1,907,777 | 53,124,871 |
Saving updated dataframe from edited PyQt5 QTableView object
|
<p>I am currently trying to load a dataframe into a PyQt QTableView to allow the re-naming of desired columns. Once the re-naming is complete, save the new dataframe as a .csv in a local folder. I cannot get the updated QTableView model to save. The workflow can be seen below.</p>
<ol>
<li>Read the .csv I would like to modify</li>
</ol>
<p><img src="https://i.stack.imgur.com/4RgvP.png" alt=""></p>
<ol start="2">
<li>Load the dataframe into the QTableView with a Combobox in every column for the first row</li>
</ol>
<p><img src="https://i.stack.imgur.com/21j0l.png" alt=""></p>
<ol start="3">
<li>Be able to select different options to rename the column</li>
</ol>
<p><img src="https://i.stack.imgur.com/k8nNf.png" alt=""></p>
<ol start="4">
<li>Select the desired name for desired columns</li>
</ol>
<p><img src="https://i.stack.imgur.com/fsom9.png" alt=""></p>
<p>It would also be helpful if a certain option was selected in the Combobox (for instance, "Default"), it would make the desired column name to be the same as the original column name.</p>
<ol start="5">
<li>Save the final dataframe as a file to a local folder</li>
</ol>
<p><img src="https://i.stack.imgur.com/JW4XJ.png" alt=""></p>
<p>Note: Only columns that have a value in the combobox are kept in the final dataset. Columns that are specified as "Default" are kept with the original column name.</p>
<p>Example of the Code below:</p>
<pre><code>import sys
import numpy as np
import pandas as pd
from PyQt5 import QtCore, QtWidgets, uic

form_class = uic.loadUiType("DataProcessing.ui")[0]

class MyWindowClass(QtWidgets.QMainWindow, form_class):
    def __init__(self, parent=None):
        super().__init__()
        self.setupUi(self)
        self.PushButtonDisplay.clicked.connect(self.IP_Data_Display)
        self.PushButtonImport.clicked.connect(self.IP_File_Import)

    def IP_Data_Display(self):
        DT_Disp = self.CBdisplay.currentText()
        data = pd.read_csv('Example.csv')
        data.loc[-1] = pd.Series([np.nan])
        data.index = data.index + 1
        data = data.sort_index()
        model = PandasModel(data)
        self.TView.setModel(model)
        for column in range(model.columnCount()):
            c = QtWidgets.QComboBox()
            c.addItems(['', 'Option 1', 'Option 2', 'Option 3', 'Option 4', 'Default'])
            i = self.TView.model().index(0, column)
            self.TView.setIndexWidget(i, c)

    def IP_File_Import(self):
        newModel = self.TView.model()
        data = []
        for row in range(newModel.rowCount()):
            rowRes = []
            for column in range(newModel.columnCount()):
                index = newModel.index(row, column)
                item = newModel.data(index)
                if item != '':
                    rowRes.append(item)
            data.append(rowRes)
        dataFrame = pd.DataFrame(data)
        dataFrame.to_csv('Test.csv')  #, index=False, header=False)

class PandasModel(QtCore.QAbstractTableModel):
    def __init__(self, df=pd.DataFrame(), parent=None):
        QtCore.QAbstractTableModel.__init__(self, parent=parent)
        self._df = df

    def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole):
        if role != QtCore.Qt.DisplayRole:
            return QtCore.QVariant()
        if orientation == QtCore.Qt.Horizontal:
            try:
                return self._df.columns.tolist()[section]
            except (IndexError, ):
                return QtCore.QVariant()
        elif orientation == QtCore.Qt.Vertical:
            try:
                return self._df.index.tolist()[section]
            except (IndexError, ):
                return QtCore.QVariant()

    def data(self, index, role=QtCore.Qt.DisplayRole):
        if role != QtCore.Qt.DisplayRole:
            return QtCore.QVariant()
        if not index.isValid():
            return QtCore.QVariant()
        return QtCore.QVariant(str(self._df.iloc[index.row(), index.column()]))

    def setData(self, index, value, role):
        row = self._df.index[index.row()]
        col = self._df.columns[index.column()]
        if hasattr(value, 'toPyObject'):
            value = value.toPyObject()
        else:
            dtype = self._df[col].dtype
            if dtype != object:
                value = None if value == '' else dtype.type(value)
        self._df.set_value(row, col, value)
        return True

    def rowCount(self, parent=QtCore.QModelIndex()):
        return len(self._df.index)

    def columnCount(self, parent=QtCore.QModelIndex()):
        return len(self._df.columns)

    def sort(self, column, order):
        colname = self._df.columns.tolist()[column]
        self.layoutAboutToBeChanged.emit()
        self._df.sort_values(colname, ascending=order == QtCore.Qt.AscendingOrder, inplace=True)
        self._df.reset_index(inplace=True, drop=True)
        self.layoutChanged.emit()

if __name__ == '__main__':
    app = QtWidgets.QApplication.instance()
    if app is None:
        app = QtWidgets.QApplication(sys.argv)
    else:
        print('QApplication instance already exists: %s' % str(app))
    main = MyWindowClass(None)
    main.show()
    sys.exit(app.exec_())
</code></pre>
|
<p>Instead of using setItemWidget, it is better to create a delegate with a permanently open editor, since the delegate has access to the QModelIndex. In addition, an extra first row is added to the model to hold the header data. Finally, <code>_df</code> is used to get the pandas DataFrame back out of the model.</p>
<pre><code>import sys
from PyQt5 import QtCore, QtWidgets, uic
import pandas as pd
import numpy as np
form_class = uic.loadUiType("DataProcessing.ui")[0]
class PandasModel(QtCore.QAbstractTableModel):
def __init__(self, df = pd.DataFrame(), parent=None):
QtCore.QAbstractTableModel.__init__(self, parent=parent)
self._df = df
def headerData(self, section, orientation, role=QtCore.Qt.DisplayRole):
if role != QtCore.Qt.DisplayRole:
return QtCore.QVariant()
if orientation == QtCore.Qt.Horizontal:
try:
return self._df.columns.tolist()[section]
except (IndexError, ):
return QtCore.QVariant()
return super(PandasModel, self).headerData(section, orientation, role)
def data(self, index, role=QtCore.Qt.DisplayRole):
if role != QtCore.Qt.DisplayRole:
return QtCore.QVariant()
if not index.isValid():
return QtCore.QVariant()
if index.row() == 0:
return QtCore.QVariant(self._df.columns.values[index.column()])
return QtCore.QVariant(str(self._df.iloc[index.row()-1, index.column()]))
def setData(self, index, value, role):
if index.row() == 0:
if isinstance(value, QtCore.QVariant):
value = value.value()
if hasattr(value, 'toPyObject'):
value = value.toPyObject()
self._df.columns.values[index.column()] = value
self.headerDataChanged.emit(QtCore.Qt.Horizontal, index.column(), index.column())
else:
col = self._df.columns[index.column()]
row = self._df.index[index.row()-1]
if isinstance(value, QtCore.QVariant):
value = value.value()
if hasattr(value, 'toPyObject'):
value = value.toPyObject()
else:
dtype = self._df[col].dtype
if dtype != object:
value = None if value == '' else dtype.type(value)
            self._df.at[row, col] = value  # .set_value() is deprecated/removed in modern pandas
return True
def rowCount(self, parent=QtCore.QModelIndex()):
return len(self._df.index) +1
def columnCount(self, parent=QtCore.QModelIndex()):
return len(self._df.columns)
def sort(self, column, order):
colname = self._df.columns.tolist()[column]
self.layoutAboutToBeChanged.emit()
self._df.sort_values(colname, ascending= order == QtCore.Qt.AscendingOrder, inplace=True)
self._df.reset_index(inplace=True, drop=True)
self.layoutChanged.emit()
class ComboBoxDelegate(QtWidgets.QStyledItemDelegate):
def createEditor(self, parent, option, index):
editor = QtWidgets.QComboBox(parent)
value = index.data()
options = [value, 'Option 1','Option 2','Option 3','Option 4','Default']
editor.addItems(options)
editor.currentTextChanged.connect(self.commitAndCloseEditor)
return editor
@QtCore.pyqtSlot()
def commitAndCloseEditor(self):
editor = self.sender()
self.commitData.emit(editor)
class MyWindowClass(QtWidgets.QMainWindow, form_class):
def __init__(self, parent=None):
super().__init__()
self.setupUi(self)
self.PushButtonDisplay.clicked.connect(self.IP_Data_Display)
self.PushButtonImport.clicked.connect(self.IP_File_Import)
delegate = ComboBoxDelegate(self.TView)
self.TView.setItemDelegateForRow(0, delegate)
def IP_Data_Display(self):
DT_Disp = self.CBdisplay.currentText()
data = pd.read_csv('Example.csv')
data = data.sort_index()
model = PandasModel(data)
self.TView.setModel(model)
for i in range(model.columnCount()):
ix = model.index(0, i)
self.TView.openPersistentEditor(ix)
def IP_File_Import(self):
newModel = self.TView.model()
dataFrame = newModel._df.copy()
dataFrame.to_csv('Test.csv') #, index=False, header=False)
if __name__ == '__main__':
app = QtWidgets.QApplication.instance()
if app is None:
app = QtWidgets.QApplication(sys.argv)
else:
print('QApplication instance already exists: %s' % str(app))
main = MyWindowClass(None)
main.show()
sys.exit(app.exec_())
</code></pre>
|
python|pandas|pyqt5
| 1 |
1,907,778 | 72,053,551 |
convert a CSV file to JSON file
|
<p>I am trying to convert a CSV file to a JSON file based on a column value. The CSV file looks somewhat like this.</p>
<pre><code>ID Name Age
CSE001 John 18
CSE002 Marie 20
ECE001 Josh 22
ECE002 Peter 23
</code></pre>
<p>currently I am using the following code to obtain json file.</p>
<pre><code>import csv
import json
def csv_to_json(csv_file_path, json_file_path):
data_dict = {}
with open(csv_file_path, encoding = 'utf-8') as csv_file_handler:
csv_reader = csv.DictReader(csv_file_handler)
for rows in csv_reader:
key = rows['ID']
data_dict[key] = rows
with open(json_file_path, 'w', encoding = 'utf-8') as json_file_handler:
json_file_handler.write(json.dumps(data_dict, indent = 4))
</code></pre>
<p>OUTPUT:</p>
<pre><code>{
"CSE001":{
"ID":"CSE001",
"Name":"John",
"Age":18
}
"CSE002":{
"ID":"CSE002",
"Name":"Marie",
"Age":20
}
"ECE001":{
"ID":"ECE001",
"Name":"Josh",
"Age":22
}
"ECE002":{
"ID":"ECE002",
"Name":"Peter",
"Age":23
}
}
</code></pre>
<p>I want my output to generate two separate JSON files for CSE and ECE based on the ID value. Is there a way to achieve this output?</p>
<p>Required Output:</p>
<p>CSE.json:</p>
<pre><code>{
"CSE001":{
"ID":"CSE001",
"Name":"John",
"Age":18
}
"CSE002":{
"ID":"CSE002",
"Name":"Marie",
"Age":20
}
}
</code></pre>
<p>ECE.json:</p>
<pre><code>{
"ECE001":{
"ID":"ECE001",
"Name":"Josh",
"Age":22
}
"ECE002":{
"ID":"ECE002",
"Name":"Peter",
"Age":23
}
}
</code></pre>
|
<p>I would suggest using pandas; that way it will be easier.</p>
<p>The code may look like:</p>
<pre><code>import pandas as pd
def csv_to_json(csv_file_path):
df = pd.read_csv(csv_file_path)
df_CSE = df[df['ID'].str.contains('CSE')]
df_ECE = df[df['ID'].str.contains('ECE')]
df_CSE.to_json('CSE.json')
    df_ECE.to_json('ECE.json')
</code></pre>
|
python|json|csv|key|csvtojson
| 0 |
1,907,779 | 68,771,849 |
How to define a prediction function in keras for NER system?
|
<p>I am creating a NER system following some tutorials using Keras. After the training and the first prediction, I'd like to use it to identify the NEs in a single string or in a list of strings of unseen data.</p>
<p>I can't seem to find the way to pass such string, or list of strings to the <code>model.predict()</code> and get an appropriate prediction.</p>
<p>This is the prediction for the test data in my code, so I was trying to adjust it to accept strings of unseen data and print the token + prediction:</p>
<pre><code>i = np.random.randint(0, x_test.shape[0])
print("This is sentence:",i)
p = model.predict(np.array([x_test[i]]))
p = np.argmax(p, axis=-1)
print("{:15}{:5}\t {}\n".format("Word", "True", "Pred"))
print("-" *30)
for w, true, pred in zip(x_test[i], y_test[i], p[0]):
print("{:15}{}\t{}".format(words[w-1], tags[true], tags[pred]))
</code></pre>
<p>This piece of code predicts and prints each token with its NE tag, but I don't really understand how it works.</p>
<p>This code prints something like:</p>
<pre><code>Word True Pred
------------------------------
The O O
British B-gpe B-gpe
pharmaceutical O O
company O O
GlaxoSmithKlineB-org O
</code></pre>
<p>I'd like to pass for example:</p>
<pre><code>sentence = "President Obama became the first sitting American president to visit Hiroshima"
</code></pre>
<p>and being able to see the identified NEs. Any advise on how to do this?</p>
<p>A copy of the full code is <a href="https://colab.research.google.com/drive/1Nc8U3B6K1NRthLixWEceoggjmFPi99cM?usp=sharing" rel="nofollow noreferrer">here</a> and the dataset is used is <a href="https://www.kaggle.com/abhinavwalia95/entity-annotated-corpus/" rel="nofollow noreferrer">here</a>.</p>
|
<p>You can make a prediction on a list of sentences like this:</p>
<pre><code>my_sentences = ["President Obama became the first sitting American president to visit Hiroshima",
"Jack is a good person and living in Iran"]
my_sentences_idx = [[word2idx[w] for w in s.split(" ")] for s in my_sentences]
my_sentences_padded = pad_sequences(maxlen=max_len, sequences=my_sentences_idx, padding="post", value=num_words-1)
preds = np.argmax(model.predict(np.array(my_sentences_padded)), axis=-1)
for idx, p in enumerate(preds):
print("-" *30)
print(my_sentences[idx])
print("-" *30)
for w, pred in zip(my_sentences[idx].split(" "), preds[idx]):
if tags[pred]!="O":
print("{:15} {} ".format(w, tags[pred]))
print()
</code></pre>
<p>Output:</p>
<pre><code>------------------------------
President Obama became the first sitting American president to visit Hiroshima
------------------------------
President B-per
Obama I-per
American B-gpe
Hiroshima B-geo
------------------------------
Jack is a good person and living in Iran
------------------------------
Jack B-per
Iran B-geo
</code></pre>
|
python|tensorflow|keras|named-entity-recognition
| 1 |
1,907,780 | 62,883,722 |
How to define custom axis in Matplotlib?
|
<p>Suppose I had the following code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
x = np.arange(520)
y = np.random.rand(520)
plt.plot(x, y)
</code></pre>
<p>which produces the following graph:</p>
<p><a href="https://i.stack.imgur.com/v0HUS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v0HUS.png" alt="enter image description here" /></a></p>
<p>How can I define a custom axis so my X-axis looks like this:</p>
<p><a href="https://i.stack.imgur.com/gAPJV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gAPJV.png" alt="enter image description here" /></a></p>
|
<p>You can use:</p>
<ul>
<li><code>plt.xscale('log')</code> to change the scale to a log scale</li>
<li><a href="https://matplotlib.org/3.2.1/api/ticker_api.html" rel="nofollow noreferrer"><code>set_major_formatter(ScalarFormatter())</code></a> to set the formatting back to normal (replacing the <code>LogFormatter</code>)</li>
<li><code>set_minor_locator(NullLocator())</code> to remove the minor ticks (that were also set by the log scale)</li>
<li><code>set_major_locator(FixedLocator([...]</code> or <code>plt.xticks([...])</code> to set the desired ticks on the x-axis</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib.ticker import ScalarFormatter, NullLocator, FixedLocator
import numpy as np
x = np.arange(520)
y = np.random.uniform(-1, 1, 520).cumsum()
plt.plot(x, y)
plt.xscale('log')
# plt.xticks([...])
plt.gca().xaxis.set_major_locator(FixedLocator([2**i for i in range(0, 7)] + [130, 260, 510]))
plt.gca().xaxis.set_major_formatter(ScalarFormatter())
plt.gca().xaxis.set_minor_locator(NullLocator())
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/S0IPZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S0IPZ.png" alt="example plot" /></a></p>
|
python|matplotlib
| 4 |
1,907,781 | 62,609,294 |
How to filter a list with a blacklist in python
|
<p>I'm hoping you can please help me with a problem I'm having. I have two lists: one with file paths and a second list with blacklisted terms. I essentially want to filter the list of file paths using a blacklist and print out the remaining (non-blacklisted) file paths. However, I can't seem to get it working. I suspect the nested for loops are causing the 'in' test to evaluate in a way I don't understand. Any help anyone can give me would be much appreciated.</p>
<p>Desired outcome:</p>
<pre><code>
filepaths = [
'/windows/bad_file1',
'/windows/bad_file2'
'/windows/good_file',
]
blacklist = [
'bad_file1'
'bad_file2',
]
for path in filepaths:
for bad in blacklist:
if bad not in path:
print(path)
#prints /windows/good_file
</code></pre>
<p>However, currently I'm getting:</p>
<pre><code>/windows/bad_file1
/windows/bad_file2/windows/good_file
/windows/bad_file1
/windows/bad_file2/windows/good_file
</code></pre>
<p>EDIT: Thanks everyone for your thoughtful (and prompt) responses! I really appreciate you taking the time to talk me through it.</p>
|
<p>You can use <code>os.path.basename</code> to compare the files with the blacklist, as follows:</p>
<pre><code>import os
filepaths = [
'/windows/bad_file1',
'/windows/bad_file2',
'/windows/good_file'
]
blacklist = [
'bad_file1',
'bad_file2'
]
result = [file for file in filepaths if os.path.basename(file) not in blacklist]
print(*result)
</code></pre>
<p>Output</p>
<pre><code>/windows/good_file
</code></pre>
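<p>If the blacklist entries are meant to match anywhere in the path (as in the original nested loops), an alternative sketch combines <code>any()</code> with a list comprehension:</p>

```python
filepaths = [
    '/windows/bad_file1',
    '/windows/bad_file2',
    '/windows/good_file',
]
blacklist = ['bad_file1', 'bad_file2']

# Keep a path only if no blacklisted term appears anywhere in it.
result = [path for path in filepaths if not any(bad in path for bad in blacklist)]
print(result)  # ['/windows/good_file']
```

<p>This avoids the pitfall in the question's nested loops, where the inner <code>if bad not in path</code> prints a path once per non-matching blacklist entry.</p>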
|
python
| 1 |
1,907,782 | 61,668,455 |
Append Dictionary to JSON file
|
<p>I am trying to append a dictionary to a JSON file that already contains 2 dictionaries, but it is writing the initial contents and the new results together in the same JSON file. My code is below. Thanks in advance, people.</p>
<pre><code>import json
import os
cwd = os.getcwd()
fp = cwd + '/xero_credentials.json'
def json_append():
data_object = {
"clientName": "Company Test",
"clientId": "null",
"clientSecret": "null",
"redirect_url": "http://localhost:8080/callback",
'scopes': "offline_access accounting.transactions.read accounting.journals.read",
'refreshToken': "null"
}
with open(fp, 'r+') as json_file:
data = json.load(json_file)
data_dictionary = data['credentials']
data_dictionary.append(data_object)
json.dump(data, json_file, indent = 4, sort_keys=True)
json_file.close()
# **********
json_append()
</code></pre>
<p>This is the result:</p>
<pre><code>{
"credentials": [
{
"clientName": "C1",
"clientId": "null"
},
{
"clientName": "C2",
"clientId": "null"
}
]
}
{
"credentials": [
{
"clientName": "C1",
"clientId": "null"
},
{
"clientName": "C2",
"clientId": "null"
},
{
"clientName": "C3",
"clientId": "null"
}
]
}
</code></pre>
|
<p>It is difficult to update a file in-place (except in some special cases), so generally one has to first read its entire contents into memory, update that, and then rewrite the entire file.</p>
<p>Here's what I mean:</p>
<pre><code>import json
import os
cwd = os.getcwd()
fp = cwd + '/xero_credentials.json'
def json_append():
data_object = {
"clientName": "Company Test",
"clientId": "null",
"clientSecret": "null",
"redirect_url": "http://localhost:8080/callback",
'scopes': "offline_access accounting.transactions.read accounting.journals.read",
'refreshToken': "null"
}
# Read the entire file.
with open(fp, 'r') as json_file:
data = json.load(json_file)
# Update the data read.
credentials = data['credentials']
credentials.append(data_object)
# Update the file by rewriting it.
with open(fp, 'w') as json_file:
json.dump(data, json_file, indent=4, sort_keys=True)
json_append()
</code></pre>
<p>Updated file:</p>
<pre><code>{
"credentials": [
{
"clientId": "null",
"clientName": "C1"
},
{
"clientId": "null",
"clientName": "C2"
},
{
"clientId": "null",
"clientName": "Company Test",
"clientSecret": "null",
"redirect_url": "http://localhost:8080/callback",
"refreshToken": "null",
"scopes": "offline_access accounting.transactions.read accounting.journals.read"
}
]
}
</code></pre>
|
python|json
| 1 |
1,907,783 | 60,428,792 |
Saving a numpy homographic ndarray to file
|
<p>I am trying to save a numpy ndarray homographic table to file (to use it later if needed) using:</p>
<pre><code>h.tofile("h.h", sep=",", format="%s")
</code></pre>
<p>But when I later load it and try to use it on my calculations, using:</p>
<pre><code>h = np.fromfile('h.h')
</code></pre>
<p>I get the following error:</p>
<pre><code>OpenCV Error: Assertion failed (scn + 1 == m.cols) in perspectiveTransform, file /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/matmul.cpp, line 2268
Traceback (most recent call last):
File "...camera_to_point.py", line 166, in draw_circle2
pointOut = cv2.perspectiveTransform(a, h)
cv2.error: /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/matmul.cpp:2268: error: (-215) scn + 1 == m.cols in function perspectiveTransform
</code></pre>
<p>The interesting thing is that when I inspect the homographic matrix before saving to file and after retrieving it from file, I get:</p>
<p>Before saving:</p>
<pre><code>h
array([[ 1.86326937e-02, -1.16700086e-02, 2.66963340e+00],
[-7.51402803e-03, -3.88364336e-02, 1.69502899e+01],
[-1.05249671e-03, 1.47791921e-02, 1.00000000e+00]])
[0:3] : [array([ 0.01863269, ...6696334 ]), array([-7.51402803e-...2899e+01]), array([-0.0010525 , ... ])]
dtype: dtype('float64')
max: 16.950289865517334
min: -0.038836433627212195
shape: (3, 3)
size: 9
__internals__: {'T': array([[ 1.86326937e...000e+00]]), 'base': None, 'ctypes': <numpy.core._interna...ff8179b90>, 'data': <read-write buffer f...ff81797f0>, 'dtype': dtype('float64'), 'flags': C_CONTIGUOUS : Tru...PY : False, 'flat': <numpy.flatiter obje...0x2f69850>, 'imag': array([[0., 0., 0.],... 0., 0.]]), 'itemsize': 8, 'nbytes': 72, 'ndim': 2, 'real': array([[ 1.86326937e...000e+00]]), 'shape': (3, 3), 'size': 9, ...}
</code></pre>
<p>After retrieving:</p>
<pre><code>h
array([7.12605079e-67, 1.18069470e-95, 9.95130728e-43, 9.95309333e-43,
5.40222957e-62, 4.32553942e-91, 2.74137239e-57, 3.06246782e-57,
7.11172728e-38, 8.16641713e-43, 1.83288526e-76, 9.92883300e-96,
1.69053846e-52, 9.34287548e-67, 9.05446937e-43, 7.11877246e-67])
[0:16] : [7.126050789796848e-67, 1.1806946993563433e-95, 9.951307279141174e-43, 9.95309332763986e-43, 5.402229572720159e-62, 4.325539416797926e-91, 2.741372385066056e-57, 3.0624678193742925e-57, 7.111727282221548e-38, 8.16641712557458e-43, 1.8328852622133153e-76, 9.928833002794128e-96, 1.690538456980108e-52, 9.342875479816443e-67, ...]
dtype: dtype('float64')
max: 7.111727282221548e-38
min: 9.928833002794128e-96
shape: (16,)
size: 16
__internals__: {'T': array([7.12605079e-6...7246e-67]), 'base': None, 'ctypes': <numpy.core._interna...ff8179790>, 'data': <read-write buffer f...ff8179470>, 'dtype': dtype('float64'), 'flags': C_CONTIGUOUS : Tru...PY : False, 'flat': <numpy.flatiter obje...0x2f68e00>, 'imag': array([0., 0., 0., 0..., 0., 0.]), 'itemsize': 8, 'nbytes': 128, 'ndim': 1, 'real': array([7.12605079e-6...7246e-67]), 'shape': (16,), 'size': 16, ...}
</code></pre>
<p>Which shows two different matrices.</p>
<p>So, how can I save a numpy ndarray to a file and retrieve it successfully? </p>
<p>I am using opencv2 with python2.7</p>
|
<p>With <code>h.tofile("h.h", sep=",", format="%s")</code> a text file with all the numbers, separated by commas, is stored, but the information about the dimensions of the array is lost.<br>
<code>h = np.fromfile('h.h')</code> then tries to read that text file as raw binary data, which produces the garbage values in your output. If you use<br>
<code>h = np.fromfile('h.h', sep=",")</code> instead and then resize h like<br>
<code>h.resize((3, 3))</code><br>
you would get the desired result.</p>
<p>If you use a binary file format, the array information can be retrieved.</p>
<p>NumPy arrays can be stored in a binary file format using the pickle library:<br>
the script below runs in Python 2.7 and 3.8</p>
<pre><code>from pickle import dump, load
from numpy import array
def save_h():
h = array([[1.86326937e-02, -1.16700086e-02, 2.66963340e+00],
[-7.51402803e-03, -3.88364336e-02, 1.69502899e+01],
[-1.05249671e-03, 1.47791921e-02, 1.00000000e+00]])
with open("save.h", "wb") as f:
dump(h, f)
with open("save.h", "rb") as f:
h1 = load(f)
return h, h1
if __name__ == "__main__":
stored, loaded = save_h()
print(stored)
print(loaded)
</code></pre>
<p>Output:</p>
<pre><code>[[ 1.86326937e-02 -1.16700086e-02 2.66963340e+00]
[-7.51402803e-03 -3.88364336e-02 1.69502899e+01]
[-1.05249671e-03 1.47791921e-02 1.00000000e+00]]
[[ 1.86326937e-02 -1.16700086e-02 2.66963340e+00]
[-7.51402803e-03 -3.88364336e-02 1.69502899e+01]
[-1.05249671e-03 1.47791921e-02 1.00000000e+00]]
</code></pre>
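<p>As an alternative to pickle, NumPy's own binary <code>.npy</code> format (<code>np.save</code>/<code>np.load</code>) also preserves the shape and dtype; a minimal sketch:</p>

```python
import os
import tempfile

import numpy as np

h = np.array([[ 1.86326937e-02, -1.16700086e-02,  2.66963340e+00],
              [-7.51402803e-03, -3.88364336e-02,  1.69502899e+01],
              [-1.05249671e-03,  1.47791921e-02,  1.00000000e+00]])

# np.save writes shape and dtype alongside the raw data.
path = os.path.join(tempfile.gettempdir(), "h.npy")
np.save(path, h)
h_loaded = np.load(path)

print(h_loaded.shape)                 # (3, 3)
print(np.array_equal(h, h_loaded))    # True
```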
|
python|numpy|opencv|multidimensional-array|save
| 0 |
1,907,784 | 64,442,290 |
Intersection of overlapping rectangles
|
<p>I am an industrial engineer, so my coding isn't that good; that's why I need your help. My first problem is that I need to compute the area of intersection between two rectangles in order to check whether they overlap; this has to be done for 6 rectangles. My second problem is that I have 6 rectangles inside a large warehouse with defined boundaries, and I want to maximize the utilized area. How can I write code to do so? I have used the code below, which I found online, but I don't know how to use it to check the 6 rectangles.</p>
<pre><code># Python program to check if rectangles overlap
class Point:
def __init__(self, x, y):
self.x = x
self.y = y
# Returns true if two rectangles(l1, r1)
# and (l2, r2) overlap
def doOverlap(l1, r1, l2, r2):
# If one rectangle is on left side of other
if(l1.x >= r2.x or l2.x >= r1.x):
return False
# If one rectangle is above other
if(l1.y <= r2.y or l2.y <= r1.y):
return False
return True
# Driver Code
if __name__ == "__main__":
l1 = Point(0, 10)
r1 = Point(10, 0)
l2 = Point(5, 5)
r2 = Point(15, 0)
if(doOverlap(l1, r1, l2, r2)):
print("Rectangles Overlap")
else:
print("Rectangles Don't Overlap")
</code></pre>
|
<p>This may help you get started:</p>
<p>A <code>Rectangle</code> is defined here with a <code>base_point</code> (which is the lower-left point) and a <code>size</code> (how far it extends in <code>x</code> and <code>y</code>).</p>
<pre><code>from dataclasses import dataclass
@dataclass
class Point:
x: float
y: float
@dataclass
class Rectangle:
base_point: Point
size: Point
def x_overlap(self, other: "Rectangle") -> bool:
if self.base_point.x < other.base_point.x:
lower = self
higher = other
else:
lower = other
higher = self
if lower.base_point.x + lower.size.x >= higher.base_point.x:
return True
else:
return False
def y_overlap(self, other: "Rectangle") -> bool:
if self.base_point.y < other.base_point.y:
lower = self
higher = other
else:
lower = other
higher = self
if lower.base_point.y + lower.size.y >= higher.base_point.y:
return True
else:
return False
def overlap(self, other: "Rectangle") -> bool:
return self.x_overlap(other) and self.y_overlap(other)
r0 = Rectangle(base_point=Point(x=0, y=0), size=Point(x=3, y=2))
r1 = Rectangle(base_point=Point(x=2, y=1), size=Point(x=3, y=2))
print(r0.overlap(r1))
</code></pre>
<p>rectangles overlap if they do overlap on the x and on the y axis. the code is probably overly designed... you may want to simplify it to suit your needs. but the way it is - i think - it showcases the idea well.</p>
<p>In order to find whether more rectangles overlap, you have to compare all of them against one another pairwise.</p>
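<p>A compact sketch of that pairwise check, using <code>itertools.combinations</code> and plain <code>(x, y, width, height)</code> tuples instead of the <code>Rectangle</code> class above (the rectangle values are made up for illustration):</p>

```python
from itertools import combinations

def overlaps(a, b):
    # a and b are (x, y, width, height) axis-aligned rectangles.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

rects = {
    "r0": (0, 0, 3, 2),
    "r1": (2, 1, 3, 2),    # overlaps r0
    "r2": (10, 10, 1, 1),  # overlaps nothing
}

# Check every unordered pair of rectangles exactly once.
overlapping_pairs = [(n1, n2)
                     for (n1, a), (n2, b) in combinations(rects.items(), 2)
                     if overlaps(a, b)]
print(overlapping_pairs)  # [('r0', 'r1')]
```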
|
python|industrial
| 0 |
1,907,785 | 64,195,631 |
I am trying to make a program which allows three inputs and then prints out the biggest one, but the code is not working
|
<pre class="lang-py prettyprint-override"><code>print("this is program is to check which NUMBER you chose is bigger,(NOTE: any input except number input will break down the program)\n")
a=int(input("first number:"))
b=int(input("second number:"))
c=int(input("third number:"))
if "a">"b":
z=a
else:
z=b
if "z" > "c":
print(z)
else:
print(c)
</code></pre>
<p>my code ↑
I have tried everything, like writing it in Notepad and saving it with the extension, but I always get whatever input is in variable "b"</p>
|
<p>Try this: change "a", "b", and "z" to a, b, and z. Written with quotes, "a" is a string literal rather than the variable, so the comparisons compare strings instead of your input numbers.</p>
<pre><code>print("this is program is to check which NUMBER you chose is bigger,(NOTE: any input except number input will break down the program)\n")
a=int(input("first number:"))
b=int(input("second number:"))
c=int(input("third number:"))
if a > b:
z=a
else:
z=b
if z > c:
print(z)
else:
print(c)
</code></pre>
|
python|python-3.x|database|windows|output
| 0 |
1,907,786 | 70,349,916 |
Using class variable for type hinting in __init__
|
<p>In Python, I have a list of "allowed types" in my class, and in the constructor I would like to pass an argument that has to be in that list of allowed types. So, conceptually, this is what I would like:</p>
<pre><code>from typing import Union
class A:
allowed_types = [typeA, typeB]
def __init__(self, some_argument: Union[allowed_types]):
(do stuff)
</code></pre>
<p>I'm not sure how to tackle this. How would you set something like this up? Maybe there's a better set-up for this, but I'm not sure how. Thanks!</p>
|
<p>I would use this in your constructor:</p>
<pre><code># Raises an AssertionError if the argument's type is not in the allowed list.
assert type(some_argument) in self.allowed_types
</code></pre>
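<p>If the goal is to keep the <code>Union</code> type hint from the question usable, one option is a module-level type alias plus an <code>isinstance</code> check. This is a sketch; <code>TypeA</code>/<code>TypeB</code> are placeholders for the question's real types:</p>

```python
from typing import Union

TypeA = int   # placeholder for the question's typeA
TypeB = str   # placeholder for the question's typeB

AllowedType = Union[TypeA, TypeB]  # alias usable in annotations

class A:
    allowed_types = (TypeA, TypeB)  # a tuple works directly with isinstance()

    def __init__(self, some_argument: AllowedType):
        if not isinstance(some_argument, A.allowed_types):
            raise TypeError(
                "expected one of %r, got %r" % (A.allowed_types, type(some_argument))
            )
        self.value = some_argument

print(A(42).value)    # 42
print(A("hi").value)  # hi
```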
|
python|type-hinting|python-typing
| 0 |
1,907,787 | 70,085,358 |
cannot figure out making timer in tkinter
|
<p>I haven't added much to my code so far, but what it is supposed to do is decrement the variable time_1 by 1 every second. It does not do this; instead, time_1 drops to 0 almost immediately. What am I doing wrong?</p>
<pre><code>import tkinter as tk
import threading
from tkinter import ttk
root = tk.Tk()
root.title("Chess Timer")
root.minsize(250,250)
time_1 = 60.0
time_2 = 60.0
timer_1 = ttk.Label(root, text=time_1)
timer_1.grid(column=1, row=0)
timer_2 = ttk.Label(root, text=time_2)
timer_2.grid(column=2, row=0)
def update():
timer_1.config(text=time_1)
timer_2.config(text=time_2)
while time_1 and time_2 > 0:
time_1 -= 1
threading.Timer(1.0,update).start()
root.mainloop()
</code></pre>
<p>I'm new to python, any help is appreciated!</p>
|
<p>The problem here is in the <code>while</code> loop. The call to <code>threading.Timer(1.0, update).start()</code> does not make the <em>loop</em> wait one second; it makes the <em>calls to <code>update</code></em> wait one second. Thus, 60 one-second timers are set almost immediately, and all update the labels at about the same time. The <code>while</code> loop, meanwhile, decrements <code>time_1</code> down to zero almost instantly, since it is not being delayed by the timer.</p>
<p>For something like this, you don't need to use the <code>threading</code> module. You can just use the <code>root.after()</code> function to call your function after a set number of milliseconds. Here is what your code would look like using <code>root.after()</code>:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from tkinter import ttk
root = tk.Tk()
root.title("Chess Timer")
root.minsize(250,250)
time_1 = 60.0
time_2 = 60.0
timer_1 = ttk.Label(root, text=time_1)
timer_1.grid(column=1, row=0)
timer_2 = ttk.Label(root, text=time_2)
timer_2.grid(column=2, row=0)
def update():
global time_1, timer_1, root
time_1 -= 1
timer_1.config(text=time_1)
# Only let the timer continue if it is greater than zero
if time_1 > 0:
root.after(1000, update)
update()
root.mainloop()
</code></pre>
|
python|tkinter
| 0 |
1,907,788 | 11,228,667 |
Structuring a Python App (eventually for deb packaging)
|
<p>So I've written an app in Python that has a main .py file and recently I've written some libraries that it'll use (some more py files). How would I go about "installing" this in Ubuntu? Before I added the libraries, I simply had a bash script that would copy the main py file to /usr/bin so that the user could run the app with just $ appname.py</p>
<p>And what would be the best way to do this for future deployment as a .deb?</p>
|
<p>I think you could make use of the fact that setup.py knows how to install Python scripts as regular Unix programs:</p>
<p>See this doc:
<a href="http://docs.python.org/distutils/setupscript.html#installing-scripts" rel="nofollow">http://docs.python.org/distutils/setupscript.html#installing-scripts</a></p>
<p>I think overall it depends on how integrated/professional you want your process to be.</p>
<p>Anyway, setuptools/distutils are the pythonic way to go.</p>
<p>If you want to go one step further and install your application via regular Debian/Ubuntu tools like apt-get/aptitude etc., there are folks out there who have written plugins for setuptools to create regular Debian/Ubuntu packages.</p>
<p>See this module:
<a href="http://pypi.python.org/pypi/stdeb/" rel="nofollow">http://pypi.python.org/pypi/stdeb/</a></p>
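<p>A minimal <code>setup.py</code> along those lines might look like this. The name <code>appname</code> and the module names are placeholders for your actual files:</p>

```python
# setup.py -- a minimal sketch; adjust the names to match your project.
from setuptools import setup

setup(
    name="appname",
    version="0.1",
    py_modules=["mylib1", "mylib2"],  # the library .py files
    scripts=["appname.py"],           # installed into a bin/ directory on PATH
)
```

<p>After <code>python setup.py install</code>, users can run <code>appname.py</code> directly from their PATH, and the same <code>setup.py</code> is what stdeb takes as input when building a .deb.</p>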
|
python|packaging
| 4 |
1,907,789 | 70,546,551 |
I have an issue : Reading Multiple Text files using Multi-Threading by python
|
<p><strong>Hello friends,
I hope someone can check my code and help me with this issue.</strong></p>
<p>I want to read from multiple text files (at least 4) sequentially and print their content on the screen</p>
<ul>
<li>First time not using Threading</li>
<li>Measure the elapsed time in both cases multiple time and calculate the average</li>
</ul>
<p><strong>this my Code:</strong></p>
<pre><code>import pandas as pd
from datetime import datetime
start_time = datetime.now()
text1 = pd.read_csv('alice_in_wonderland.txt', delimiter = "\t")
print(text1)
text2 = pd.read_csv('On-Sunset-Highways-Thomas-D-Murph.txt', delimiter = "\t")
print(text2)
text3 = pd.read_csv('History-of-Texas-Lan-Bill-Allcorn.txt', delimiter = "\t")
print(text3)
text4 = pd.read_csv('A-Secret-of-the-Sea--T-W-Thomas.txt', delimiter = "\t")
print(text4)
time_elapsed = datetime.now() - start_time
print('Time elapsed (hh:mm:ss.ms) {}'.format(time_elapsed))
</code></pre>
<hr />
<p><strong>From here I have an issue: how do I do this with multithreading in Python?</strong>
I want to make 4 threads that read from the text files and print to the screen. Also I want to</p>
<ul>
<li>Measure the elapsed time multiple times.</li>
<li>record the results.</li>
<li>calculate the average time.
<strong>Note: number of files = 4 text files.</strong></li>
</ul>
|
<p>Here's a code snippet that creates four threads to print the contents of four files (the comments have addressed the <code>timeit</code> module already, so I've treated the timing issue as resolved):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import threading
def print_text(filename):
text = pd.read_csv(filename, delimiter = "\t")
print(text)
if __name__ == "__main__":
filenames = ["test1.txt", "test2.txt", "test3.txt", "test4.txt"]
# Create thread for each filename.
threads = [threading.Thread(target=print_text, args=(filename,)) for filename in filenames]
# Start execution of each thread.
for thread in threads:
thread.start()
# Join threads when execution is complete.
for thread in threads:
thread.join()
</code></pre>
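<p>For the timing part of the question (measure the elapsed time multiple times and average it), a small helper based on <code>timeit.repeat</code> can wrap either version. This is a sketch with a stand-in workload rather than the file-reading code:</p>

```python
import timeit

def average_runtime(func, repeats=5):
    # Run func once per repetition and average the wall-clock durations.
    durations = timeit.repeat(func, number=1, repeat=repeats)
    return sum(durations) / len(durations)

# Stand-in workload; replace with e.g. the threaded file-reading function.
avg = average_runtime(lambda: sum(range(100_000)), repeats=5)
print("average over 5 runs: %.6f s" % avg)
```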
|
python|python-3.x|multithreading
| 1 |
1,907,790 | 63,455,933 |
Is there a way to parse out HTML in a response from requests.get()?
|
<p>I'm using the <code>requests</code> package to get data from an API and see some HTML elements in the response data such as <code><p></code>, <code></p></code>, and <code>\'</code>, among a bunch of other elements. The return value for <code>response.encoding</code> is <code>utf-8</code> if that helps.</p>
<p>I'd like to parse out all the HTML values and just have a simple text value in the field. Is there a way to easily remove or parse all HTML elements in the response?</p>
|
<p>A cleaner way for the above requirement would be to use <a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" rel="nofollow noreferrer">Beautiful Soup</a>.</p>
<p>Following should be your approach.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
response = requests.get("xyz")
soup = BeautifulSoup(response.content, 'html.parser')
</code></pre>
<p>After this, you have all the HTML in the <code>soup</code> object, a parse tree that you can traverse to fetch the values you need.</p>
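<p>If pulling in Beautiful Soup feels heavy and you truly just want the tag-free text, the standard library's <code>html.parser</code> can do it too (a minimal sketch; the sample markup is made up):</p>

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects only the text nodes, dropping every tag."""
    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_data(self, data):
        self.parts.append(data)

extractor = TextExtractor()
extractor.feed("<p>Some <b>bold</b> text</p>")
print("".join(extractor.parts))  # Some bold text
```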
|
python|python-requests
| 2 |
1,907,791 | 69,786,654 |
Count occurrences of Enum in a string
|
<p>I am attempting to count the number of occurrences of an ENUM in a string value e.g.</p>
<pre><code>from enum import Enum

class numbers(Enum):
one = 1
two = 2
string = "121212123324"
string.count(str(numbers.one.value))
</code></pre>
<p>This just seems very unintuitive to convert the enum back to string - are there any quicker ways?</p>
|
<p>Your solution is good. You can compare the runtimes of 5 approaches below:</p>
<pre><code>from timeit import timeit
from collections import Counter
from enum import Enum
class numbers(Enum):
one = 1
two = 2
three = 3
four = 4
def approach1(products):
return Counter(products)[str(numbers.one.value)]
def approach2(products):
return products.count(str(numbers.one.value))
def approach3(products):
lst = list(map(int, products))
return lst.count(int(numbers.one.value))
def approach4(products):
cnt = Counter(products)
    return (cnt[str(numbers.one.value)], cnt[str(numbers.two.value)],
            cnt[str(numbers.three.value)], cnt[str(numbers.four.value)])
def approach5(products):
cnt_o = products.count(str(numbers.one.value))
cnt_t = products.count(str(numbers.two.value))
cnt_h = products.count(str(numbers.three.value))
cnt_f = products.count(str(numbers.four.value))
return (cnt_o , cnt_t , cnt_h , cnt_f)
funcs = approach1, approach2, approach3 , approach4, approach5
products = "121212123324"*10000000
for _ in range(3):
for func in funcs:
t = timeit(lambda: func(products), number=1)
print('%.3f s ' % t, func.__name__)
print()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>6.279 s approach1
0.140 s approach2
17.172 s approach3
6.403 s approach4
0.491 s approach5
6.340 s approach1
0.139 s approach2
16.049 s approach3
6.559 s approach4
0.474 s approach5
6.245 s approach1
0.143 s approach2
15.876 s approach3
6.172 s approach4
0.475 s approach5
</code></pre>
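<p><code>approach2</code> wins because in CPython <code>str.count</code> does the whole scan in C. If you need the counts of several enum members at once, a single <code>Counter</code> pass (as in <code>approach4</code>) reads the string only once, and then each lookup is cheap:</p>

```python
from collections import Counter
from enum import Enum

class Numbers(Enum):
    one = 1
    two = 2

s = "121212123324"
counts = Counter(s)  # one pass over the string, counts every character
print(counts[str(Numbers.one.value)])  # 4
print(counts[str(Numbers.two.value)])  # 5
```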
|
python|string|enumerate
| 3 |
1,907,792 | 17,915,950 |
Is there a list of all ASCII characters in python's standard library?
|
<p>Is there a field or a function that would return all ASCII characters in python's standard library?</p>
|
<p>You can make one.</p>
<pre><code>ASCII = ''.join(chr(x) for x in range(128))
</code></pre>
<p>If you need to check for membership, there are other ways to do it:</p>
<pre><code>if c in ASCII:
# c is an ASCII character
if c <= '\x7f':
# c is an ASCII character
</code></pre>
<p>If you want to check that an entire string is ASCII:</p>
<pre><code>def is_ascii(s):
"""Returns True if a string is ASCII, False otherwise."""
try:
s.encode('ASCII')
return True
except UnicodeEncodeError:
return False
</code></pre>
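<p>On Python 3.7+ there is also <code>str.isascii()</code>, which covers the membership check for a character or a whole string without building any table:</p>

```python
print("hello".isascii())   # True
print("héllo".isascii())   # False
print("\x7f".isascii())    # True: DEL is the last ASCII character
```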
|
python
| 30 |
1,907,793 | 60,766,313 |
Make python wait for setting of variable through tkinter
|
<p>I'm running into a problem with tkinter: I want to set the source of a document, using a search button and <code>askopenfilename</code>, so my code can work with the file in Python.</p>
<p>Here is a snippet of my code:</p>
<pre><code>...
from tkinter import *
from tkinter import filedialog
root = Tk()
root.title("Alpha")
root.iconbitmap('images\alpha.ico')
def search():
global location
root.filename = filedialog.askopenfilename(initialdir="/", title="Select A File",
filetypes=(("txt files", "*.txt"), ("All Files", "*.*")))
location = root.filename
open_button = Button(root, text="Open File", command=search).pack()
input_txt = open(location, "r", encoding="utf8")
...
root.mainloop()
</code></pre>
<p>Problem: When I run the program, the window opens for a brief moment and I instantly get the error that <code>location</code> in the <code>input_txt</code> line is not defined, which I totally understand. My Python code is not waiting for me to press the button in the program window and search for my file, so <code>location</code> never gets defined. How can I make Python wait for the file dialog to set <code>location</code> before trying to define <code>input_txt</code>?</p>
<p>I have tried with </p>
<pre><code>import time
...
location = ''
open_button = Button(root, text="Open File", command=open).pack()
while not location:
time.sleep(0.1)
</code></pre>
<p>This however causes the program to freeze, and I know sleep is not the best option here.
Any suggestions?</p>
|
<h2>Regarding your question</h2>
<p>Like boomkin suggests, I'd advise moving the <code>input_txt = open(location, ...)</code> line into your <code>search</code> function. That way, the program will only try to open from <code>location</code> once you've pressed the button and defined <code>location</code>.</p>
<p>If there's other stuff going on, you could make another function and call that:</p>
<pre><code>def file_handling(location):
input_txt = open(location, "r", encoding="utf8")
... #anything using input_txt
def search():
root.filename = filedialog.askopenfilename(initialdir="/", title="Select A File",
filetypes=(("txt files", "*.txt"), ("All Files", "*.*")))
file_handling(root.filename)
open_button = Button(root, text="Open File", command=search)
open_button.pack()
...
root.mainloop()
</code></pre>
<p>The issue is that Tkinter objects don't do anything until you reach the mainloop--but once you're in the mainloop, you can't go back and fill anything in. So everything you want to do has to be tied to some sort of input: like a button press (or a keystroke, or a mouseover).</p>
<p>In this case, you want to set <code>location</code>, but you have to wait until you call mainloop and the button starts taking input. But by that time, you've passed the line that needs <code>location</code> and can't go back. That's why I suggest calling the <code>input_txt</code> line from the <code>search</code> function--because it won't get called until you've already gotten the location.</p>
<p>That's a little long-winded, but I hope it illuminated the issue.</p>
<h2>As a sidenote</h2>
<p>I'd also recommend you declare and pack your widgets separately. That is, change this:</p>
<pre><code>open_button = Button(root, text="Open File", command=search).pack()
</code></pre>
<p>to this:</p>
<pre><code>open_button = Button(root, text="Open File", command=search)
open_button.pack()
</code></pre>
<p>Otherwise, you end up storing the value of <code>pack()</code> (which is <code>None</code>) instead of storing your widgets (the <code>Button</code> object).</p>
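<p>This <code>None</code> trap isn't tkinter-specific; chaining any method that returns <code>None</code> discards the object, which can be illustrated without a GUI:</p>

```python
class Widget:
    def pack(self):
        """Mimics tkinter's geometry managers, which return None."""
        return None

chained = Widget().pack()   # stores pack()'s return value, not the widget
separate = Widget()         # stores the widget itself
separate.pack()

print(chained)                   # None
print(type(separate).__name__)   # Widget
```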
|
python|button|tkinter|block|wait
| 0 |
1,907,794 | 66,005,296 |
Python microsoft graph API - /callback error
|
<p>In my Django web application, I need to <strong>enable SSO</strong> - single tenant - Internal to the organization.</p>
<p>Based on ref tutorial: <a href="https://docs.microsoft.com/en-us/graph/tutorials/python" rel="nofollow noreferrer">link</a> I was able to copy-paste required code snippets in views.py, urls.py.</p>
<p>I also created an <strong>oauth_settings.yml</strong> file-</p>
<pre><code>app_id: {app id}
app_secret: {app secret}
redirect: "http://localhost:8000/callback"
scopes:
- user.read
authority: "https://login.microsoftonline.com/{tenant}"
</code></pre>
<p>Yet every time after I submit the O365 credentials, I am facing the same /callback error :-</p>
<p><a href="https://i.stack.imgur.com/Ztajd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ztajd.png" alt="enter image description here" /></a></p>
<p>I have identified the issue to be in the 'auth_flow' variable, which holds the entire flow dict; the data is present earlier, <strong>but fails to be saved in the request.session</strong>.</p>
<p>Please guide what exactly may be the issue at hand. Thanks.</p>
|
<p>According to my test, the error appears when browsing to <a href="http://127.0.0.1:8000" rel="nofollow noreferrer">http://127.0.0.1:8000</a> instead of http://localhost:8000, because the browser does not store the session cookie when the site is addressed by IP. So please use http://localhost:8000 to access the project when you develop it on your local machine. For more details, please refer to the <a href="https://github.com/microsoftgraph/msgraph-training-pythondjangoapp/issues/43#issuecomment-760211058" rel="nofollow noreferrer">Github issue</a></p>
<p>Using the IP address:
<a href="https://i.stack.imgur.com/NVYpM.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NVYpM.gif" alt="enter image description here" /></a></p>
<p>Using localhost:
<a href="https://i.stack.imgur.com/WWKwe.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WWKwe.gif" alt="enter image description here" /></a></p>
|
python|django|azure-active-directory|microsoft-graph-api|msal
| 1 |
1,907,795 | 68,926,149 |
DRF: router for ViewSet with a foreign-key lookup_field
|
<p>Using Django REST Framework 3.12.4, I cannot properly get the URLs for a ViewSet to work if that ViewSet has a foreign-key lookup field.</p>
<p>I have the following in <code>models.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Domain(models.Model):
name = models.CharField(max_length=191, unique=True)
class Token(rest_framework.authtoken.models.Token):
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
user = models.ForeignKey(User, on_delete=models.CASCADE)
domain_policies = models.ManyToManyField(Domain, through='TokenDomainPolicy')
class TokenDomainPolicy(models.Model):
token = models.ForeignKey(Token, on_delete=models.CASCADE)
domain = models.ForeignKey(Domain, on_delete=models.CASCADE)
class Meta:
constraints = [models.UniqueConstraint(fields=['token', 'domain'], name='unique_entry')]
</code></pre>
<p>In <code>views.py</code>, I have:</p>
<pre class="lang-py prettyprint-override"><code>class TokenDomainPolicyViewSet(viewsets.ModelViewSet):
lookup_field = 'domain__name'
serializer_class = serializers.TokenDomainPolicySerializer
def get_queryset(self):
return models.TokenDomainPolicy.objects.filter(token_id=self.kwargs['id'], token__user=self.request.user)
</code></pre>
<p>You can see from <code>TokenDomainPolicyViewSet.lookup_field</code> that I would like to be able to query the <code>-detail</code> endpoint by using the related <code>Domain</code>'s <code>name</code> field instead of its primary key. (<code>name</code> is unique for a given token.)</p>
<p>I thought this can be done with <code>lookup_field = 'domain__name'</code>.</p>
<p>However, it doesn't quite work. Here's my <code>urls.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>tokens_router = SimpleRouter()
tokens_router.register(r'', views.TokenViewSet, basename='token')
tokendomainpolicies_router = SimpleRouter()
tokendomainpolicies_router.register(r'', views.TokenDomainPolicyViewSet, basename='token_domain_policies')
auth_urls = [
path('tokens/', include(tokens_router.urls)), # for completeness only; problem persists if removed
path('tokens/<id>/domain_policies/', include(tokendomainpolicies_router.urls)),
]
urlpatterns = [
path('auth/', include(auth_urls)),
]
</code></pre>
<p>The list endpoint works fine (e.g. <code>/auth/tokens/6f82e9bc-46b8-4719-b99f-2dc0da062a02/domain_policies/</code>); it returns a list of serialized <code>TokenDomainPolicy</code> objects.</p>
<p>However, let's say there is a <code>Domain</code> object with <code>name = 'test.net'</code> related to this <code>Token</code>. I would think I can GET <code>/auth/tokens/6f82e9bc-46b8-4719-b99f-2dc0da062a02/domain_policies/test.net/</code> to retrieve this object, but the result is 404.</p>
<p>Additional observations:</p>
<ul>
<li><p>It nearly <strong>does</strong> work if I set <code>lookup_field = 'domain'</code>. However, this leads to URLs that contain the <code>Domain</code>'s ID (like <code>.../25/</code>) which is not what I want. But based on this, I conclude that the <code>-detail</code> endpoint does get routed in principle.</p>
</li>
<li><p>It <strong>does</strong> work if I add an explicit override like</p>
<pre class="lang-py prettyprint-override"><code> path('tokens/<id>/domain_policies/<domain__name>/', views.TokenDomainPolicyViewSet.as_view(
{'get': 'retrieve', 'put': 'update', 'patch': 'partial_update', 'delete': 'destroy'}
), name='token_domain_policies-detail'),
</code></pre>
<p>However, why would it be necessary to explicitly bind the methods like this? (It's not necessary if <code>lookup_field</code> does not specify a foreign-key lookup!)</p>
</li>
<li><p>Weirdest of all, if I install <code>django_extensions</code> and run <code>manage.py show_urls</code>, the <code>-detail</code> endpoint URL shows up with the <strong>correct</strong> <code><domain__name></code> URL kwarg, even <strong>without</strong> the override from the previous bullet. If I add the override, the corresponding line in the output is displayed twice as an <strong>identical duplicate</strong>.</p>
<p>How can it be that the set of known URLs remains unchanged with or without the override, but in one case the endpoint works as expected, and in the other case the response is 404?</p>
</li>
</ul>
<p>What am I missing?</p>
|
<p>According to the <a href="https://www.django-rest-framework.org/api-guide/routers/#simplerouter" rel="nofollow noreferrer"><code>docs</code></a>, the router's default lookup will not match slashes or period characters, which is why <code>test.net</code> can't be found:</p>
<blockquote>
<p>The router will match lookup values containing any characters except slashes and period characters.</p>
</blockquote>
<p>You can find it in the <a href="https://github.com/encode/django-rest-framework/blob/master/rest_framework/routers.py#L221" rel="nofollow noreferrer"><code>source</code></a> as well:</p>
<blockquote>
<pre><code>lookup_value = getattr(viewset, 'lookup_value_regex', '[^/.]+')
</code></pre>
</blockquote>
<p>So to fix, just change the <code>lookup_value_regex</code> to allow periods in the viewset's lookup:</p>
<pre><code>class TokenDomainPolicyViewSet(viewsets.ModelViewSet):
lookup_field = 'domain__name'
lookup_value_regex = '[^/]+'
</code></pre>
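<p>You can see the two character classes disagree on a dotted domain with plain <code>re</code> (these literal patterns mirror what the router compiles into its URL regexes):</p>

```python
import re

default = re.compile(r"^(?P<name>[^/.]+)$")  # router default: no slash, no period
relaxed = re.compile(r"^(?P<name>[^/]+)$")   # the lookup_value_regex override

print(default.match("test.net"))                 # None, so the route 404s
print(relaxed.match("test.net").group("name"))   # test.net
```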
|
python|django|django-rest-framework|django-rest-viewsets
| 1 |
1,907,796 | 69,030,266 |
How to write data from a dict via Django ORM in SQLite?
|
<p>How should the models look in Django 1.11 so that I can write this data into them? And how do I write this data?</p>
<pre><code> {
"userId": 1,
"id": 1,
"title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit",
"body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto"
}
]
[
{
"id": 1,
"name": "Leanne Graham",
"username": "Bret",
"email": "Sincere@april.biz",
"address": {
"street": "Kulas Light",
"suite": "Apt. 556",
"city": "Gwenborough",
"zipcode": "92998-3874",
"geo": {
"lat": "-37.3159",
"lng": "81.1496"
      }
    }
  }
]
</code></pre>
|
<p>You should create</p>
<ul>
<li>A user model with <code>first_name</code>, <code>last_name</code>, <code>username</code>, <code>email</code></li>
<li>An address model with those fields and a foreign key to user</li>
<li>And another model with a ForeignKey to user with <code>title</code> and <code>body</code></li>
</ul>
<p>here is an example from the <a href="https://docs.djangoproject.com/en/3.2/topics/db/models/" rel="nofollow noreferrer">documentation</a></p>
<pre><code>from django.db import models
class Person(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
</code></pre>
<p>Django has a built-in user model if you prefer.</p>
|
python-3.x|django|sqlite|django-models
| 0 |
1,907,797 | 72,686,561 |
How to implement execCommand of Twisted SSH server for use with fabric?
|
<p>I implemented a Twisted SSH server to test a component that uses fabric to run commands on a remote machine via SSH. I have found <a href="https://docs.twistedmatrix.com/en/twisted-20.3.0/_downloads/2cc3ee3e05ebfb5e87e57ff98dd9b5c6/sshsimpleserver.py" rel="nofollow noreferrer">this example</a> but I don't understand how I have to implement the <code>execCommand</code> method to be compatible with fabric. Here is my implementation of the SSH server:</p>
<pre><code>
from pathlib import Path
from twisted.conch import avatar, recvline
from twisted.conch.insults import insults
from twisted.conch.interfaces import ISession
from twisted.conch.ssh import factory, keys, session
from twisted.cred import checkers, portal
from twisted.internet import reactor
from zope.interface import implementer
SSH_KEYS_FOLDER = Path(__file__).parent.parent / "resources" / "ssh_keys"
@implementer(ISession)
class SSHDemoAvatar(avatar.ConchUser):
def __init__(self, username: str):
avatar.ConchUser.__init__(self)
self.username = username
self.channelLookup.update({b"session": session.SSHSession})
def openShell(self, protocol):
pass
def getPty(self, terminal, windowSize, attrs):
return None
def execCommand(self, protocol: session.SSHSessionProcessProtocol, cmd: bytes):
protocol.write("Some text to return")
protocol.session.conn.sendEOF(protocol.session)
def eofReceived(self):
pass
def closed(self):
pass
@implementer(portal.IRealm)
class SSHDemoRealm(object):
def requestAvatar(self, avatarId, _, *interfaces):
return interfaces[0], SSHDemoAvatar(avatarId), lambda: None
def getRSAKeys():
with open(SSH_KEYS_FOLDER / "ssh_key") as private_key_file:
private_key = keys.Key.fromString(data=private_key_file.read())
with open(SSH_KEYS_FOLDER / "ssh_key.pub") as public_key_file:
public_key = keys.Key.fromString(data=public_key_file.read())
return public_key, private_key
if __name__ == "__main__":
sshFactory = factory.SSHFactory()
sshFactory.portal = portal.Portal(SSHDemoRealm())
users = {
"admin": b"aaa",
"guest": b"bbb",
}
sshFactory.portal.registerChecker(checkers.InMemoryUsernamePasswordDatabaseDontUse(**users))
pubKey, privKey = getRSAKeys()
sshFactory.publicKeys = {b"ssh-rsa": pubKey}
sshFactory.privateKeys = {b"ssh-rsa": privKey}
reactor.listenTCP(22222, sshFactory)
reactor.run()
</code></pre>
<p>Trying to execute a command via fabric yields the following output:</p>
<pre><code>[paramiko.transport ][INFO ] Connected (version 2.0, client Twisted_22.4.0)
[paramiko.transport ][INFO ] Authentication (password) successful!
Some text to return
</code></pre>
<p>This looks promising but the program execution hangs after this line. Do I have to close the connection from the server side after executing the command? How do I implement that properly?</p>
|
<p>In a traditional UNIX server, it's still the server's responsibility to close the connection if it was told to execute a command. It's up to the server's discretion where and how to do this.</p>
<p>I <em>believe</em> you just want to change <code>protocol.session.conn.sendEOF(protocol.session)</code> to <code>protocol.loseConnection()</code>. I apologize for not testing this myself to be sure; setting up an environment to properly test a full-size conch setup like this is a bit tedious (and the example isn't self-contained, requiring key generation, moduli, etc.)</p>
|
python|ssh|twisted|paramiko|fabric
| 0 |
1,907,798 | 72,568,822 |
Using Python Scrapy to extract XPATH in a soccer live site
|
<p>I'm trying to use Scrapy to return the results and statistics from live games on SofaScore.</p>
<p>Site : <a href="https://www.sofascore.com/" rel="nofollow noreferrer">https://www.sofascore.com/</a></p>
<p>The code below :</p>
<pre><code>import scrapy
class SofascoreSpider(scrapy.Spider):
name = 'SofaScore'
allowed_domains = ['sofascore.com']
start_urls = ['http://sofascore.com/']
def parse(self, response):
        time1 = response.xpath("/html/body/div[1]/main/div/div[2]/div/div[3]/div[2]/div/div/div/div/div[2]/a/div/div").extract()
print(time1)
pass
</code></pre>
<p>I tried <code>response.xpath("//html/body/div[1]/main/div/div[2]/div/div[3]/div[2]/div/div/div/div/div[2]/a/div/div").getall()</code> too, but it returns nothing. I have tried many different XPaths and none of them return anything. What am I doing wrong?</p>
<p>For example, today (10/06) the first match on the page is France vs Austria, XPath: /html/body/div[1]/main/div/div[2]/div/div[3]/div[2]/div/div/div/div/div[2]/a/div/div</p>
|
<p>The data is generated with JavaScript, but you can get it from the API.</p>
<p>Open devtools in the browser and click on the <code>network</code> tab. Then click on the <code>live</code> button and look where it loads the data from. Then look at the JSON file to see its structure.</p>
<pre class="lang-py prettyprint-override"><code>import scrapy
class SofascoreSpider(scrapy.Spider):
name = 'SofaScore'
allowed_domains = ['sofascore.com']
start_urls = ['https://api.sofascore.com/api/v1/sport/football/events/live']
custom_settings = {'DOWNLOAD_DELAY': 0.4}
def start_requests(self):
headers = {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-US,en;q=0.5",
"Cache-Control": "no-cache",
"Connection": "keep-alive",
"DNT": "1",
"Host": "api.sofascore.com",
"Origin": "https://www.sofascore.com",
"Pragma": "no-cache",
"Referer": "https://www.sofascore.com/",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-site",
"Sec-GPC": "1",
"TE": "trailers",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36"
}
yield scrapy.Request(url=self.start_urls[0], headers=headers)
def parse(self, response):
events = response.json()
events = events['events']
        # now iterate through the list and get what you want from it
# example:
for event in events:
yield {
'event name': event['tournament']['name'],
'time': event['time']
}
</code></pre>
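<p>The traversal in <code>parse</code> is plain dict/list indexing on the decoded JSON. On a made-up payload shaped like the live endpoint's response (field names beyond <code>events</code>, <code>tournament</code>, <code>name</code>, and <code>time</code> are assumptions):</p>

```python
import json

# Hardcoded stand-in for response.json(); the real API returns many events.
sample = json.loads(
    '{"events": [{"tournament": {"name": "World Cup"}, "time": {"played": 2700}}]}'
)

for event in sample["events"]:
    print(event["tournament"]["name"], event["time"])
```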
|
javascript|python|html|css|scrapy
| 0 |
1,907,799 | 68,282,299 |
how to show related product in Django?
|
<p>I have about 10+ models, of which I'm showing 3 here: <code>motors</code>, <code>drum</code>, and <code>slide</code>. In these three models, one field is the same, named <code>code</code>.</p>
<p>Trying to explain:</p>
<p><strong>models.py</strong></p>
<pre><code>class Motors(models.Model):
.
.
code = models.CharField(max_length=10)
.
.
.
def __str__(self):
return str(self.id)
class Drum(models.Model):
.
.
code = models.CharField(max_length=10)
.
.
.
def __str__(self):
return str(self.id)
class Sliders(models.Model):
.
.
code = models.CharField(max_length=10)
.
.
.
def __str__(self):
return str(self.id)
</code></pre>
<p>The models above share the <code>code</code> field, and the values overlap: in one model the codes are <code>DD1, DD2, DD3</code>, and in another model the codes are also <code>DD1, DD2, DD3</code>.</p>
<p>Since the codes match, if the user selects a motor with code <code>DD1</code>, I want to show the items from the other 2 models with code <code>DD1</code>; likewise, if the user selects a drum with code <code>DD1</code>, show the related items from the other two models in Django.</p>
<p>I had used add to cart functionality the code goes here.</p>
<p><strong>views.py</strong></p>
<pre><code>def addProductMotor(request):
user = request.user
motor_id = request.GET.get('motor_id')
motor_cart = Motor.objects.get(id=motor_id)
Cart(user=user, motor=motor_cart).save()
return redirect('main:index')
</code></pre>
<p>In the above, I add one product item to the cart and then want to show related items. How is that possible?</p>
<p>How can I get those two related models' items?</p>
|
<p>You can fetch the other 2 models' objects using the following code:</p>
<pre><code>motor_cart = Motor.objects.get(id=motor_id) # get motor object
code = motor_cart.code # get code from the model
drum = Drum.objects.get(code=code) # get drum from the code
sliders = Sliders.objects.get(code=code) # get sliders from the code
</code></pre>
<p>I'd recommend using relations between the models, since you want them to be related. You can use <strong>One-to-One</strong>, <strong>Many-to-One</strong>, or <strong>Many-to-Many</strong> as per your requirement.</p>
<p>You'll find more at <a href="https://docs.djangoproject.com/en/3.2/topics/db/examples/" rel="nofollow noreferrer">Django docs</a></p>
|
python|django|django-models|django-views|django-templates
| 0 |