content (string, 85-101k) | title (string, 0-150) | question (string, 15-48k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 35-137)
---|---|---|---|---|---|---|---|---
Q:
how to set a relative box size in toga python
I'm making an app with Beeware and Toga using Python, and I need a box to be half the size of its parent.
Does Toga have relative size units like CSS? How do I use them?
I thought of using the parent box's size as a reference, but the Box object has no size-related attributes (at least none documented).
A:
If you add two children with equal flex values, then they will each be half the size of the parent:
from toga import Box
from toga.style import Pack

parent = Box(style=Pack(direction="column"))
child1 = Box(style=Pack(flex=1))
child2 = Box(style=Pack(flex=1))
parent.add(child1, child2)
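For context, a minimal runnable sketch assuming Toga's startup= App API (the app name, app id, and background colors are illustrative, added only to make the two halves visible):
import toga
from toga.style import Pack

def build(app):
    parent = toga.Box(style=Pack(direction="column"))
    child1 = toga.Box(style=Pack(flex=1, background_color="lightblue"))
    child2 = toga.Box(style=Pack(flex=1, background_color="lightgreen"))
    parent.add(child1, child2)
    return parent

if __name__ == "__main__":
    toga.App("Half Boxes", "org.example.halfboxes", startup=build).main_loop()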
|
how to set a relative box size in toga python
|
I'm making an app with Beeware and Toga using Python, and I need a box to be half the size of its parent.
Does Toga have relative size units like CSS? How do I use them?
I thought of using the parent box's size as a reference, but the Box object has no size-related attributes (at least none documented).
|
[
"If you add two children with equal flex values, then they will each be half the size of the parent:\nparent = Box(style=Pack(direction=\"column\"))\nchild1 = Box(style=Pack(flex=1))\nchild2 = Box(style=Pack(flex=1))\nparent.add(child1, child2)\n\n"
] |
[
1
] |
[] |
[] |
[
"beeware",
"python",
"units_of_measurement"
] |
stackoverflow_0074573735_beeware_python_units_of_measurement.txt
|
Q:
Does keras.backend.clear_session() delete sessions in a process or globally?
I create up to 100 Keras models in a separate script and save them locally with model.save().
For training them, I use multiprocessing.Pool. In those processes I load each model separately. Because of recurring memory errors I used keras.backend.clear_session(). This seems to work, but I have also read that it deletes the weights of models.
So, to come back to my question: if I import "from keras import backend as K" in each process of the pool and, at the end, after I have saved the models, I use K.clear_session(), do I clear important data of parallel running processes or just data of this process?
If it deletes important data of parallel running processes, is there any possibility of creating a local TensorFlow session inside the process, then assigning the needed model to this session and calling clear_session() on this local one?
I'm thankful for any input.
In addition, it would be helpful if anyone knows the exact functionality of clear_session(). The explanation of this function is not very informative, especially for beginners like me.
Thank you :)
A:
I faced a similar kind of issue, but I am not running models in parallel; rather, they run alternately, i.e., either of the models (in different folders but with the same model file names) will run.
When I ran the models directly without clear_session, the previously loaded model conflicted and I could not switch to the other model. After including clear_session at the beginning of the statements that load the model it worked; however, it also deleted the global variables declared at the beginning of the program, which are necessary for the prediction activity.
Lesson learnt:
clear_session will not only "Destroy the current TF graph and create a new one," as mentioned in the documentation, but also delete global variables defined in the program.
So I defined the global variables just after the clear_session statement.
** feedback appreciated
A:
I had the same problem, using both ProcessPoolExecutor and multiprocessing.Pool. You need maxtasksperchild=1 in the Pool, so that after each model has completed running in a process, the process is killed and an entirely new process is created. The OOM is solved using maxtasksperchild=1. My working code:
# Multiprocess cycles - PROCESS POOL
with Pool(processes=NUM_WORKERS, maxtasksperchild=1) as pool:
tempCycOUTlist = list(pool.map(evaluateSingleModel, MP_package,
chunksize=1))
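For illustration, a self-contained sketch of that pattern; the worker function, model paths, and worker count here are hypothetical placeholders, not from the original answer:
from multiprocessing import Pool

def train_one(model_path):
    # Import inside the worker so each fresh process builds its own TF state.
    from tensorflow import keras
    from tensorflow.keras import backend as K

    model = keras.models.load_model(model_path)
    # ... train / evaluate the model here ...
    model.save(model_path)
    K.clear_session()  # clears only this process's graph/session
    return model_path

if __name__ == "__main__":
    paths = [f"models/model_{i}.h5" for i in range(100)]
    # maxtasksperchild=1: the process exits after one task, so its memory is reclaimed.
    with Pool(processes=4, maxtasksperchild=1) as pool:
        done = pool.map(train_one, paths, chunksize=1)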
|
Does keras.backend.clear_session() delete sessions in a process or globally?
|
I create up to 100 Keras models in a separate script and save them locally with model.save().
For training them, I use multiprocessing.Pool. In those processes I load each model separately. Because of recurring memory errors I used keras.backend.clear_session(). This seems to work, but I have also read that it deletes the weights of models.
So, to come back to my question: if I import "from keras import backend as K" in each process of the pool and, at the end, after I have saved the models, I use K.clear_session(), do I clear important data of parallel running processes or just data of this process?
If it deletes important data of parallel running processes, is there any possibility of creating a local TensorFlow session inside the process, then assigning the needed model to this session and calling clear_session() on this local one?
I'm thankful for any input.
In addition, it would be helpful if anyone knows the exact functionality of clear_session(). The explanation of this function is not very informative, especially for beginners like me.
Thank you :)
|
[
"I faced similar kind of issue but I am not running models in parallel but alternatively i;e; either of the models (in different folders but same model file names) will run. \nWhen I run the models directly without clear_session it was conflicting with the previously loaded model and cannot switch to other model. After including clear_session at the beginning of statements (which loads the model) it was working, however it was also deleting global variables declared at the beginning of the program which are necessary for prediction activity. \nlesson learnt:\nclear_session will not only \"Destroys the current TF graph and creates a new one.\" as mentioned in the documentation but also deletes global variables defined in the program. \nSo I defined the global variables just after the clear_session statement\n** feedback appreciated \n",
"I had same problem, using both ProcessPoolExecutor, or Process.Pool. Need to use maxtasksperchild=1 in Process.pool, so that after each model has completed running in a process, the process is killed, and an entirely new process is created. OOM is solved using maxtaksperchild=1. My working code:\n# Multiprocess cycles - PROCESS POOL\nwith Pool(processes=NUM_WORKERS, maxtasksperchild=1) as pool:\n tempCycOUTlist = list(pool.map(evaluateSingleModel, MP_package, \n chunksize=1))\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"keras",
"multiprocessing",
"python",
"tensorflow"
] |
stackoverflow_0050823233_keras_multiprocessing_python_tensorflow.txt
|
Q:
Changing the labelling of the numbers in the plot
I want to create IDL-like plots in python. I have come close to doing so by changing some of the details in the matplotlibrc file in the matplotlib directory. The following is what I have changed my matplotlibrc file to look like from the standard matplotlibrc file:
### MATPLOTLIBRC FORMAT
backend : tkagg
### LINES
lines.linewidth : 1.1 # line width in points
lines.color : black # has no affect on plot(); see axes.color_cycle
### FONT
font.family : sans-serif
font.weight : ultralight
font.size : 12.0
font.sans-serif : Avant Garde
### TEXT
### LaTeX customizations. See http://www.scipy.org/Wiki/Cookbook/Matplotlib/UsingTex
text.usetex : True # use latex for all text handling. The following fonts
# are supported through the usual rc parameter settings:
# new century schoolbook, bookman, times, palatino,
# zapf chancery, charter, serif, sans-serif, helvetica,
# avant garde, courier, monospace, computer modern roman,
# computer modern sans serif, computer modern typewriter
# If another font is desired which can loaded using the
# LaTeX \usepackage command, please inquire at the
# matplotlib mailing list
mathtext.fontset : custom # Should be 'cm' (Computer Modern), 'stix',
# 'stixsans' or 'custom'
### AXES
axes.facecolor : white # axes background color
### TICKS
xtick.major.size : 6 # major tick size in points
xtick.minor.size : 3 # minor tick size in points
xtick.major.width : 1 # major tick width in points
xtick.minor.width : 1 # minor tick width in points
ytick.major.size : 6 # major tick size in points
ytick.minor.size : 3 # minor tick size in points
ytick.major.width : 1 # major tick width in points
ytick.minor.width : 1 # minor tick width in points
### GRIDS
legend.numpoints : 1 # the number of points in the legend line
legend.frameon : False # whether or not to draw a frame around legend
### FIGURE
figure.figsize : 4, 4 # figure size in inches
figure.dpi : 100 # figure dots per inch
figure.facecolor : none # figure facecolor; 0.75 is scalar gray
figure.edgecolor : white # figure edgecolor
### SAVING FIGURES
savefig.dpi : 1000 # figure dots per inch
savefig.format : ps # png, ps, pdf, svg
savefig.bbox : tight # 'tight' or 'standard'.
An example of a plot produced with these changes is shown. Notice that the output (i.e., the plot) labels the x and y axes with Avant Garde (as specified in matplotlibrc), but the numbers are not in Avant Garde. How can I make the numbers the same typeface as the labels in the plot, so that both are Avant Garde? Also, is there a way to make the font narrower (thinner), so that the words 'Initial Velocity' are quite thin (i.e., like Hershey vector fonts)?
A:
You need to include the line
text.latex.preamble : \usepackage{sfmath}
in your .matplotlibrc file. This tells latex to use sans-serif fonts for math-text, which is what it uses for tick labels.
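Equivalently, for a quick check without editing the rc file, the same settings can be applied from Python; a minimal sketch (note that newer matplotlib expects text.latex.preamble as a string, while older versions took a list):
import matplotlib as mpl
import matplotlib.pyplot as plt

mpl.rcParams["text.usetex"] = True
mpl.rcParams["text.latex.preamble"] = r"\usepackage{sfmath}"  # sans-serif math/tick labels

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
ax.set_xlabel("Initial Velocity")  # axis label and tick numbers now share the font
plt.show()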
A:
Try downloading this TTF font, which is a replica of IDL's default Hershey font: https://github.com/yangcht/Hershey_font_TTF. Set text.usetex : False and use this Hershey font.
Here is an example of the result:
|
Changing the labelling of the numbers in the plot
|
I want to create IDL-like plots in python. I have come close to doing so by changing some of the details in the matplotlibrc file in the matplotlib directory. The following is what I have changed my matplotlibrc file to look like from the standard matplotlibrc file:
### MATPLOTLIBRC FORMAT
backend : tkagg
### LINES
lines.linewidth : 1.1 # line width in points
lines.color : black # has no affect on plot(); see axes.color_cycle
### FONT
font.family : sans-serif
font.weight : ultralight
font.size : 12.0
font.sans-serif : Avant Garde
### TEXT
### LaTeX customizations. See http://www.scipy.org/Wiki/Cookbook/Matplotlib/UsingTex
text.usetex : True # use latex for all text handling. The following fonts
# are supported through the usual rc parameter settings:
# new century schoolbook, bookman, times, palatino,
# zapf chancery, charter, serif, sans-serif, helvetica,
# avant garde, courier, monospace, computer modern roman,
# computer modern sans serif, computer modern typewriter
# If another font is desired which can loaded using the
# LaTeX \usepackage command, please inquire at the
# matplotlib mailing list
mathtext.fontset : custom # Should be 'cm' (Computer Modern), 'stix',
# 'stixsans' or 'custom'
### AXES
axes.facecolor : white # axes background color
### TICKS
xtick.major.size : 6 # major tick size in points
xtick.minor.size : 3 # minor tick size in points
xtick.major.width : 1 # major tick width in points
xtick.minor.width : 1 # minor tick width in points
ytick.major.size : 6 # major tick size in points
ytick.minor.size : 3 # minor tick size in points
ytick.major.width : 1 # major tick width in points
ytick.minor.width : 1 # minor tick width in points
### GRIDS
legend.numpoints : 1 # the number of points in the legend line
legend.frameon : False # whether or not to draw a frame around legend
### FIGURE
figure.figsize : 4, 4 # figure size in inches
figure.dpi : 100 # figure dots per inch
figure.facecolor : none # figure facecolor; 0.75 is scalar gray
figure.edgecolor : white # figure edgecolor
### SAVING FIGURES
savefig.dpi : 1000 # figure dots per inch
savefig.format : ps # png, ps, pdf, svg
savefig.bbox : tight # 'tight' or 'standard'.
An example of a plot produced with these changes is shown. Notice that the output (i.e., the plot) labels the x and y axes with Avant Garde (as specified in matplotlibrc), but the numbers are not in Avant Garde. How can I make the numbers the same typeface as the labels in the plot, so that both are Avant Garde? Also, is there a way to make the font narrower (thinner), so that the words 'Initial Velocity' are quite thin (i.e., like Hershey vector fonts)?
|
[
"You need to include the line \ntext.latex.preamble : \\usepackage{sfmath}\n\nin your .matplotlibrc file. This tells latex to use sans-serif fonts for math-text, which is what it uses for tick labels.\n",
"Try to download this TTF font which is a replicate of the IDL's default Hershey font https://github.com/yangcht/Hershey_font_TTF. Set text.usetex : False and use this Hershey font.\nHere is an example of the result:\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0033643100_matplotlib_python.txt
|
Q:
How to get the common index of two pandas dataframes?
I have two pandas DataFrames, df1 and df2, and I want to transform them so that they keep only the rows whose index values are common to both dataframes.
df1
values 1
0
28/11/2000 -0.055276
29/11/2000 0.027427
30/11/2000 0.066009
01/12/2000 0.012749
04/12/2000 0.113892
df2
values 2
24/11/2000 -0.004808
27/11/2000 -0.001812
28/11/2000 -0.026316
29/11/2000 0.015222
30/11/2000 -0.024480
become
df1
value 1
28/11/2000 -0.055276
29/11/2000 0.027427
30/11/2000 0.066009
df2
value 2
28/11/2000 -0.026316
29/11/2000 0.015222
30/11/2000 -0.024480
A:
You can use Index.intersection + DataFrame.loc:
idx = df1.index.intersection(df2.index)
print (idx)
Index(['28/11/2000', '29/11/2000', '30/11/2000'], dtype='object')
Alternative solution with numpy.intersect1d:
idx = np.intersect1d(df1.index, df2.index)
print (idx)
['28/11/2000' '29/11/2000' '30/11/2000']
df1 = df1.loc[idx]
print (df1)
values 1
28/11/2000 -0.055276
29/11/2000 0.027427
30/11/2000 0.066009
df2 = df2.loc[idx]
A:
In [352]: common = df1.index.intersection(df2.index)
In [353]: df1.loc[common]
Out[353]:
values1
0
28/11/2000 -0.055276
29/11/2000 0.027427
30/11/2000 0.066009
In [354]: df2.loc[common]
Out[354]:
values2
0
28/11/2000 -0.026316
29/11/2000 0.015222
30/11/2000 -0.024480
A:
Alternatively, using isin; intersection might be faster though.
In [286]: df1.loc[df1.index.isin(df2.index)]
Out[286]:
values1
0
28/11/2000 -0.055276
29/11/2000 0.027427
30/11/2000 0.066009
In [287]: df2.loc[df2.index.isin(df1.index)]
Out[287]:
values2
0
28/11/2000 -0.026316
29/11/2000 0.015222
30/11/2000 -0.024480
A:
reindex + dropna
df1.reindex(df2.index).dropna()
Out[21]:
values1
28/11/2000 -0.055276
29/11/2000 0.027427
30/11/2000 0.066009
df2.reindex(df1.index).dropna()
Out[22]:
values2
28/11/2000 -0.026316
29/11/2000 0.015222
30/11/2000 -0.024480
A:
Have you tried something like
df1 = df1.loc[[x for x in df1.index if x in df2.index]]
df2 = df2.loc[[x for x in df2.index if x in df1.index]]
A:
The index object has some set-like properties, so you can simply take the intersection as follows:
df1 = df1.reindex(df1.index & df2.index)
This retains the order of the first dataframe, df1, in the intersection.
A:
You can pd.merge them with an intermediary DataFrame created with the indexes of the other DataFrame:
df2_indexes = pd.DataFrame(index=df2.index)
df1 = pd.merge(df1, df2_indexes, left_index=True, right_index=True)
df1_indexes = pd.DataFrame(index=df1.index)
df2 = pd.merge(df2, df1_indexes, left_index=True, right_index=True)
or you can use eval:
df2_indexes = df2.index.values
df1 = df1[eval("df1.index in df2_indexes")]
df1_indexes = df1.index.values
df2 = df2[eval("df2.index in df1_indexes")]
A:
I found the pd.Index and set combination much faster than numpy.intersect1d as well as df1.index.intersection(df2.index). Here is what I used:
df2 = df2.loc[pd.Index(set(df1.index)&set(df2.index))]
A:
%timeit df1.index.intersection(df2.index)
66.5 µs ± 2.31 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
from numpy.lib.arraysetops import intersect1d
%timeit np.intersect1d(df1.index, df2.index)
83.1 µs ± 7.94 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
|
How to get the common index of two pandas dataframes?
|
I have two pandas DataFrames, df1 and df2, and I want to transform them so that they keep only the rows whose index values are common to both dataframes.
df1
values 1
0
28/11/2000 -0.055276
29/11/2000 0.027427
30/11/2000 0.066009
01/12/2000 0.012749
04/12/2000 0.113892
df2
values 2
24/11/2000 -0.004808
27/11/2000 -0.001812
28/11/2000 -0.026316
29/11/2000 0.015222
30/11/2000 -0.024480
become
df1
value 1
28/11/2000 -0.055276
29/11/2000 0.027427
30/11/2000 0.066009
df2
value 2
28/11/2000 -0.026316
29/11/2000 0.015222
30/11/2000 -0.024480
|
[
"You can use Index.intersection + DataFrame.loc:\nidx = df1.index.intersection(df2.index)\nprint (idx)\nIndex(['28/11/2000', '29/11/2000', '30/11/2000'], dtype='object')\n\nAlternative solution with numpy.intersect1d:\nidx = np.intersect1d(df1.index, df2.index)\nprint (idx)\n['28/11/2000' '29/11/2000' '30/11/2000']\n\n\ndf1 = df1.loc[idx]\nprint (df1)\n values 1\n28/11/2000 -0.055276\n29/11/2000 0.027427\n30/11/2000 0.066009\n\ndf2 = df2.loc[idx]\n\n",
"In [352]: common = df1.index.intersection(df2.index)\n\nIn [353]: df1.loc[common]\nOut[353]:\n values1\n0\n28/11/2000 -0.055276\n29/11/2000 0.027427\n30/11/2000 0.066009\n\nIn [354]: df2.loc[common]\nOut[354]:\n values2\n0\n28/11/2000 -0.026316\n29/11/2000 0.015222\n30/11/2000 -0.024480\n\n",
"And, using isin. intersection might be faster though.\nIn [286]: df1.loc[df1.index.isin(df2.index)]\nOut[286]:\n values1\n0\n28/11/2000 -0.055276\n29/11/2000 0.027427\n30/11/2000 0.066009\n\nIn [287]: df2.loc[df2.index.isin(df1.index)]\nOut[287]:\n values2\n0\n28/11/2000 -0.026316\n29/11/2000 0.015222\n30/11/2000 -0.024480\n\n",
"reindex + dropna\ndf1.reindex(df2.index).dropna()\nOut[21]: \n values1\n28/11/2000 -0.055276\n29/11/2000 0.027427\n30/11/2000 0.066009\n\n\ndf2.reindex(df1.index).dropna()\nOut[22]: \n values2\n28/11/2000 -0.026316\n29/11/2000 0.015222\n30/11/2000 -0.024480\n\n",
"Have you tried something like\ndf1 = df1.loc[[x for x in df1.index if x in df2.index]]\ndf2 = df2.loc[[x for x in df2.index if x in df1.index]]\n\n",
"The index object has some set-like properties so you simply can take the intersection as follows: \ndf1 = df1.reindex[ df1.index & df2.index ]\n\nThis retains the order of the first dataframe in the intersection, df. \n",
"You can pd.merge them with an intermediary DataFrame created with the indexes of the other DataFrame:\ndf2_indexes = pd.DataFrame(index=df2.index)\ndf1 = pd.merge(df1, df2_indexes, left_index=True, right_index=True)\ndf1_indexes = pd.DataFrame(index=df1.index)\ndf2 = pd.merge(df2, df1_indexes, left_index=True, right_index=True)\n\nor you can use pd.eval:\ndf2_indexes = df2.index.values\ndf1 = df1[eval(\"df1.index in df2_indexes\"]\ndf1_indexes = df1.index.values\ndf2 = df2[eval(\"df2.index in df1_indexes\"]\n\n",
"I found pd.Index and set combination much faster than numpy.intersect1d as well df1.index.intersection(df2.index). Here is what I used:\ndf2 = df2.loc[pd.Index(set(df1.index)&set(df2.index))]\n",
"%timeit df1.index.intersection(df2.index)\n\n66.5 µs ± 2.31 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)\n\n\nfrom numpy.lib.arraysetops import intersect1d\n%timeit np.intersect1d(df1.index,df2.index)\n83.1 µs ± 7.94 µs per loop (mean ± std. dev. \n\nof 7 runs, 10000 loops each)\n"
] |
[
35,
8,
8,
4,
2,
2,
1,
0,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0048170867_dataframe_pandas_python.txt
|
Q:
Django how do you AutoComplete a ForeignKey Input field using Crispy Forms
Looking for any assistance, as I just can't seem to get this.
I have a 'category' field that has approx. 4000 categories in it, sourced from my table "Category". When users input their details they choose from the category field. This works fine as a drop-down list but takes ages to scroll. I'd rather have the field as a text entry, so that when they start typing, for example 'plum', every category with 'plum' somewhere in it appears in the list so they can choose. They must also choose from the list and not enter rubbish. Can anyone assist?
Here's how it works just now with the drop-down list; is there any way to change this (category1) to an autocomplete field? I've looked at django-autocomplete-light but got nowhere.
Models.py:
class Category(models.Model):
details = models.CharField(max_length=250, blank=True, null=True)
def __str__(self):
return self.details
class Search(models.Model):
name = models.CharField(max_length=200)
email = models.CharField(max_length=200)
category1 = models.ForeignKey('Category', blank=True, null=True, on_delete=models.CASCADE, related_name='category')
Forms.py:
class NewSearch(forms.ModelForm):
class Meta:
model = Search
fields = ['name', 'email', 'category1']
def __init__(self, *args, **kwargs):
super(NewSearch, self).__init__(*args, **kwargs)
self.fields['category1'] = forms.ModelChoiceField(queryset=Category.objects.all().order_by('details'))
self.helper = FormHelper()
self.helper.form_show_labels = False
Views.py:
@csrf_exempt
def search(request):
my_form = NewSearch()
if request.method == 'POST':
my_form = NewSearch(request.POST)
if my_form.is_valid():
my_form.save()
return redirect('frontpage-results')
context = {
'my_form': my_form,
}
return render(request, 'frontpage/search.html', context)
Search.html:
<form method="POST" class="page-section" enctype="multipart/form-data">
<div>
{% csrf_token %}
<fieldset class="form-group">
<div class="form-row">
<div class="form-group col-md-5 mb=0">
Your Full Name:
{{ my_form.name|as_crispy_field }}
</div>
<div class="form-group col-md-7 mb=0">
Your E-mail Address:
{{ my_form.email|as_crispy_field }}
</div>
<div class="form-group col-md-4 mb=0">
Category you are looking for:
{{ my_form.category1|as_crispy_field }}
</div>
</div>
{{ my_form.media }} {# Form required JS and CSS #}
</fieldset>
<div class="form-group">
<button class="btn btn-secondary" type="submit" name="first">SEARCH NOW</button>
</div>
</div>
</form>
Urls.py:
urlpatterns = [
path('', views.home, name='frontpage-home'),
path('search/', views.search, name='frontpage-search'),
]
MY SOLUTION (but this didn't work):
SETTINGS.PY
INSTALLED_APPS = [
'autocomplete_light',
URLS.PY (added this line)
path('autocomplete/', include('autocomplete_light.urls')),
FORMS.PY
import autocomplete_light
autocomplete_light.register(Search, name='CatAutocomplete', choices=Category.objects.all())
class NewSearch(forms.ModelForm):
class Meta:
model = Search
fields = ['name', 'email', 'category1']
autocomplete_fields = {'category1': 'CatAutocomplete'}
def __init__(self, *args, **kwargs):
super(NewSearch, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.form_show_labels = False
But I got the error "AttributeError: module 'autocomplete_light' has no attribute 'register'" and got no further. Any ideas? Thanks
A:
Try changing your import:
import autocomplete_light.shortcuts as al
al.register
This has changed in version 2.2:
2.2.0rc1
PENDING BREAK WARNING, Django >= 1.9.
The good old ``import autocomplete_light`` API support will be dropped with
Django 1.9. All imports have moved to ``autocomplete_light.shortcuts`` and
importing ``autocomplete_light`` will work until the project is used with
Django 1.9.
Apparently, the documentation wasn't properly updated.
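Putting that together with the question's code, the registration would then read (same models and 'CatAutocomplete' name as above):
import autocomplete_light.shortcuts as al

al.register(Search, name='CatAutocomplete', choices=Category.objects.all())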
|
Django how do you AutoComplete a ForeignKey Input field using Crispy Forms
|
Looking for any assistance, as I just can't seem to get this.
I have a 'category' field that has approx. 4000 categories in it, sourced from my table "Category". When users input their details they choose from the category field. This works fine as a drop-down list but takes ages to scroll. I'd rather have the field as a text entry, so that when they start typing, for example 'plum', every category with 'plum' somewhere in it appears in the list so they can choose. They must also choose from the list and not enter rubbish. Can anyone assist?
Here's how it works just now with the drop-down list; is there any way to change this (category1) to an autocomplete field? I've looked at django-autocomplete-light but got nowhere.
Models.py:
class Category(models.Model):
details = models.CharField(max_length=250, blank=True, null=True)
def __str__(self):
return self.details
class Search(models.Model):
name = models.CharField(max_length=200)
email = models.CharField(max_length=200)
category1 = models.ForeignKey('Category', blank=True, null=True, on_delete=models.CASCADE, related_name='category')
Forms.py:
class NewSearch(forms.ModelForm):
class Meta:
model = Search
fields = ['name', 'email', 'category1']
def __init__(self, *args, **kwargs):
super(NewSearch, self).__init__(*args, **kwargs)
self.fields['category1'] = forms.ModelChoiceField(queryset=Category.objects.all().order_by('details'))
self.helper = FormHelper()
self.helper.form_show_labels = False
Views.py:
@csrf_exempt
def search(request):
my_form = NewSearch()
if request.method == 'POST':
my_form = NewSearch(request.POST)
if my_form.is_valid():
my_form.save()
return redirect('frontpage-results')
context = {
'my_form': my_form,
}
return render(request, 'frontpage/search.html', context)
Search.html:
<form method="POST" class="page-section" enctype="multipart/form-data">
<div>
{% csrf_token %}
<fieldset class="form-group">
<div class="form-row">
<div class="form-group col-md-5 mb=0">
Your Full Name:
{{ my_form.name|as_crispy_field }}
</div>
<div class="form-group col-md-7 mb=0">
Your E-mail Address:
{{ my_form.email|as_crispy_field }}
</div>
<div class="form-group col-md-4 mb=0">
Category you are looking for:
{{ my_form.category1|as_crispy_field }}
</div>
</div>
{{ my_form.media }} {# Form required JS and CSS #}
</fieldset>
<div class="form-group">
<button class="btn btn-secondary" type="submit" name="first">SEARCH NOW</button>
</div>
</div>
</form>
Urls.py:
urlpatterns = [
path('', views.home, name='frontpage-home'),
path('search/', views.search, name='frontpage-search'),
]
MY SOLUTION (but this didn't work):
SETTINGS.PY
INSTALLED_APPS = [
'autocomplete_light',
URLS.PY (added this line)
path('autocomplete/', include('autocomplete_light.urls')),
FORMS.PY
import autocomplete_light
autocomplete_light.register(Search, name='CatAutocomplete', choices=Category.objects.all())
class NewSearch(forms.ModelForm):
class Meta:
model = Search
fields = ['name', 'email', 'category1']
autocomplete_fields = {'category1': 'CatAutocomplete'}
def __init__(self, *args, **kwargs):
super(NewSearch, self).__init__(*args, **kwargs)
self.helper = FormHelper()
self.helper.form_show_labels = False
But I got the error "AttributeError: module 'autocomplete_light' has no attribute 'register'" and got no further. Any ideas? Thanks
|
[
"Try changing your import:\nimport autocomplete_light.shortcuts as al\n\nal.register\n\nThis has changed in version 2.2:\n\n2.2.0rc1\n\n PENDING BREAK WARNING, Django >= 1.9.\n\n The good old ``import autocomplete_light`` API support will be dropped with\n Django 1.9. All imports have moved to ``autocomplete_light.shortcuts`` and\n importing ``autocomplete_light`` will work until the project is used with\n Django 1.9.\n\n\nApparently, the documentation wasn't properly updated.\n"
] |
[
0
] |
[] |
[] |
[
"autocomplete",
"django",
"django_autocomplete_light",
"django_crispy_forms",
"python"
] |
stackoverflow_0074302824_autocomplete_django_django_autocomplete_light_django_crispy_forms_python.txt
|
Q:
Is using an 'anonymous' threading.Lock() always an error?
I'm trying to make sense of some code and I see this function below
def get_batch(
self,
) -> Union[Tuple[List[int], torch.Tensor], Tuple[None, None]]:
"""
Return an inference batch
"""
with threading.Lock():
indices: List[int] = []
for _ in range(self.batch_size):
try:
index = self.full_queue.get(timeout=0.05)
indices.append(index)
except:
break
if indices:
# tqdm.write(str(len(jobs)))
batch = {
key: torch.stack([self.input_buffers[key][index] for index in indices])
.to(torch.device('cpu'), non_blocking=True)
.unsqueeze(0)
for key in self.input_buffers
}
return indices, batch
else:
return None, None
the with threading.Lock() line must be an error, right? Generally speaking a lock must be shared, and this one isn't shared with anything?
A:
Yes, @Homer512's comment nailed it. Each activation of the function creates a new Lock object, and there's no way for those objects to be shared between threads. Nothing is accomplished by locking a Lock that cannot be locked by any other thread. It's effectively a no-op.
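For contrast, a minimal sketch of the intended pattern, with the lock created once and shared by every call (class and method names are illustrative):
import threading

class BatchQueue:
    def __init__(self):
        self._lock = threading.Lock()  # created once, shared by all threads
        self._items = []

    def put(self, item):
        with self._lock:
            self._items.append(item)

    def get_batch(self, n):
        with self._lock:  # every thread contends on the same lock object
            batch, self._items = self._items[:n], self._items[n:]
            return batch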
|
Is using an 'anonymous' threading.Lock() always an error?
|
I'm trying to make sense of some code and I see this function below
def get_batch(
self,
) -> Union[Tuple[List[int], torch.Tensor], Tuple[None, None]]:
"""
Return an inference batch
"""
with threading.Lock():
indices: List[int] = []
for _ in range(self.batch_size):
try:
index = self.full_queue.get(timeout=0.05)
indices.append(index)
except:
break
if indices:
# tqdm.write(str(len(jobs)))
batch = {
key: torch.stack([self.input_buffers[key][index] for index in indices])
.to(torch.device('cpu'), non_blocking=True)
.unsqueeze(0)
for key in self.input_buffers
}
return indices, batch
else:
return None, None
the with threading.Lock() line must be an error, right? Generally speaking a lock must be shared, and this one isn't shared with anything?
|
[
"Yes, @Homer512's comment nailed it. Each activation of the function creates a new Lock object, and there's no way for those objects to be shared between threads. Nothing is accomplished by locking a Lock that cannot be locked by any other thread. It's effectively a no-op.\n"
] |
[
1
] |
[] |
[] |
[
"multithreading",
"python"
] |
stackoverflow_0074589199_multithreading_python.txt
|
Q:
Create a land mask from latitude and longitude arrays
Given latitude and longitude arrays, I'm trying to generate a land_mask, an array of the same size that tells whether each coordinate is land or not.
lon=np.random.uniform(0,150,size=[1000,1000])
lat=np.random.uniform(-90,90,size=[1000,1000])
from global_land_mask import globe
land_mask=globe.is_land(lat,lon)
This is a very efficient method to create land mask if all values are defined. But if some values in lat or lon are masked or are nan values, it throws an error.
I've tried to use for loops to avoid that error, but it's taking almost 15-20 minutes to run. I have to run it on an array with 3000×3000 elements, some of which are masked.
What would be a better way of generating a land mask for arrays with masked/NaN values?
A:
So it seems globe.is_land(y, x) doesn't take a masked array. A workable solution would be to use a coordinate outside your domain (if possible). So:
lon[lon==327.67] = 170
lat[lat==327.67] = -90
from global_land_mask import globe
land_mask=globe.is_land(lat,lon)
masked = np.where((lat==-90)|(lon==170), False, land_mask)
Alternatively, you could mask the values prior to passing them in:
lat_mask = np.where(lat==326.67, np.nan, lat)
lon_mask = np.where(lon==326.67, np.nan, lon)
master_mask = np.where(np.isnan(lat_mask) | np.isnan(lon_mask), False, True)  # == np.nan is always False; use np.isnan
lat = lat[master_mask]
lon = lon[master_mask]
from global_land_mask import globe
land_mask=globe.is_land(lat,lon)
The second solution will change (flatten) your lat/lon arrays but does not require you to find an area outside of your domain
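An alternative sketch that keeps the original array shape by evaluating only the valid points (assuming NaN marks the missing coordinates; masked points default to False):
import numpy as np
from global_land_mask import globe

valid = ~(np.isnan(lat) | np.isnan(lon))
land_mask = np.zeros(lat.shape, dtype=bool)  # masked/NaN points stay "not land"
land_mask[valid] = globe.is_land(lat[valid], lon[valid])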
|
Create a land mask from latitude and longitude arrays
|
Given latitude and longitude arrays, I'm trying to generate a land_mask, an array of the same size that tells whether each coordinate is land or not.
lon=np.random.uniform(0,150,size=[1000,1000])
lat=np.random.uniform(-90,90,size=[1000,1000])
from global_land_mask import globe
land_mask=globe.is_land(lat,lon)
This is a very efficient method to create land mask if all values are defined. But if some values in lat or lon are masked or are nan values, it throws an error.
I've tried to use for loops to avoid that error, but it's taking almost 15-20 minutes to run. I have to run it on an array with 3000×3000 elements, some of which are masked.
What would be a better way of generating a land mask for arrays with masked/NaN values?
|
[
"so it seems globe.is_land(y,x) doesn't take a masked array. An equitable solution would be to use a coord outside your domain (if possible). So:\nlon[lon==327.67] = 170\nlat[lat==327.67] = -90\n\nfrom global_land_mask import globe\nland_mask=globe.is_land(lat,lon)\n\nmasked = np.where((lat==-90)|(lon==170), False, land_mask)\n\nAlternatively, you could mask the values prior to passing them in:\nlat_mask = np.where(lat==326.67, np.nan, lat)\nlon_mask = np.where(lon==326.67, np.nan, lon)\n\nmaster_mask = np.where((lat_mask==np.nan)|(lon_mask==np.nan), False, True)\n\nlat[master_mask]==True \nlon[master_mask]==True \n\nfrom global_land_mask import globe\nland_mask=globe.is_land(lat,lon)\n\nThe second solution will change (flatten) your lat/lon arrays but does not require you to find an area outside of your domain\n"
] |
[
2
] |
[] |
[] |
[
"arrays",
"cartopy",
"numpy",
"python"
] |
stackoverflow_0074593424_arrays_cartopy_numpy_python.txt
|
Q:
How can I change the value of a row with indexing?
I've scraped the crypto.com website to get the current prices of crypto coins in DataFrame form. It worked perfectly with pandas, but the 'Price' values are mixed.
here's the output:
Name Price 24H CHANGE
0 BBitcoinBTC 16.678,36$16.678,36+0,32% +0,32%
1 EEthereumETH $1.230,40$1.230,40+0,52% +0,52%
2 UTetherUSDT $1,02$1,02-0,01% -0,01%
3 BBNBBNB $315,46$315,46-0,64% -0,64%
4 UUSD CoinUSDC $1,00$1,00+0,00% +0,00%
5 BBinance USDBUSD $1,00$1,00+0,00% +0,00%
6 XXRPXRP $0,4067$0,4067-0,13% -0,13%
7 DDogecoinDOGE $0,1052$0,1052+13,73% +13,73%
8 ACardanoADA $0,3232$0,3232+0,98% +0,98%
9 MPolygonMATIC $0,8727$0,8727+1,20% +1,20%
10 DPolkadotDOT $5,48$5,48+0,79% +0,79%
I created a regex to filter the mixed data:
import re
pattern = re.compile(r'(\$.*)(\$)')
for value in df['Price']:
value = pattern.search(value)
print(value.group(1))
output:
$16.684,53
$1.230,25
$1,02
$315,56
$1,00
$1,00
$0,4078
$0,105
$0,3236
$0,8733
but I couldn't find a way to change the values. Which is the best way to do it? Thanks.
A:
If your regex expression is good, this would work:
df['Price']= df['Price'].apply(lambda x: pattern.search(x).group(1))
A:
can you try this:
df['price_v2']=df['Price'].apply(lambda x: '$' + x.split('$')[1])
'''
0 $16.678,36+0,32%
1 $1.230,40
2 $1,02
3 $315,46
4 $1,00
5 $1,00
6 $0,4067
7 $0,1052
8 $0,3232
9 $0,8727
10 $5,48
Name: price, dtype: object
Also, BTC looks different from the others. Is this a typo you made, or is this the response from the API? If there are pairs that look like BTC, we can add an if/else block to the code:
df['price']=df['Price'].apply(lambda x: '$' + x.split('$')[1] if x.startswith('$') else '$' + x.split('$')[0])
'''
0 $16.678,36
1 $1.230,40
2 $1,02
3 $315,46
4 $1,00
5 $1,00
6 $0,4067
7 $0,1052
8 $0,3232
9 $0,8727
10 $5,48
'''
Detail:
string = '$1,02$1,02-0,01%'
values = string.split('$') # output --> ['', '1,02', '1,02-0,01%']
final_value = values[1] # we need only the price; that's why I chose the second element and applied this to the whole dataframe.
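If numeric prices are the end goal, a possible follow-up, assuming European number formatting (thousands '.', decimal ','); the BTC row, which lacks a leading $, would need the same special-casing as in the answer above:
df['Price'] = (
    df['Price']
    .str.extract(r'\$(.*?)\$')[0]        # value between the first two $ signs
    .str.replace('.', '', regex=False)   # drop thousands separators
    .str.replace(',', '.', regex=False)  # decimal comma -> decimal point
    .astype(float)
)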
|
How can I change the value of a row with indexing?
|
I've scraped the crypto.com website to get the current prices of crypto coins in DataFrame form. It worked perfectly with pandas, but the 'Price' values are mixed.
here's the output:
Name Price 24H CHANGE
0 BBitcoinBTC 16.678,36$16.678,36+0,32% +0,32%
1 EEthereumETH $1.230,40$1.230,40+0,52% +0,52%
2 UTetherUSDT $1,02$1,02-0,01% -0,01%
3 BBNBBNB $315,46$315,46-0,64% -0,64%
4 UUSD CoinUSDC $1,00$1,00+0,00% +0,00%
5 BBinance USDBUSD $1,00$1,00+0,00% +0,00%
6 XXRPXRP $0,4067$0,4067-0,13% -0,13%
7 DDogecoinDOGE $0,1052$0,1052+13,73% +13,73%
8 ACardanoADA $0,3232$0,3232+0,98% +0,98%
9 MPolygonMATIC $0,8727$0,8727+1,20% +1,20%
10 DPolkadotDOT $5,48$5,48+0,79% +0,79%
I created a regex to filter the mixed data:
import re
pattern = re.compile(r'(\$.*)(\$)')
for value in df['Price']:
value = pattern.search(value)
print(value.group(1))
output:
$16.684,53
$1.230,25
$1,02
$315,56
$1,00
$1,00
$0,4078
$0,105
$0,3236
$0,8733
but I couldn't find a way to change the values. Which is the best way to do it? Thanks.
|
[
"if youre regex expression is good, this would work\ndf['Price']= df['Price'].apply(lambda x: pattern.search(x).group(1))\n\n",
"can you try this:\ndf['price_v2']=df['Price'].apply(lambda x: '$' + x.split('$')[1])\n\n'''\n0 $16.678,36+0,32%\n1 $1.230,40\n2 $1,02\n3 $315,46\n4 $1,00\n5 $1,00\n6 $0,4067\n7 $0,1052\n8 $0,3232\n9 $0,8727\n10 $5,48\nName: price, dtype: object\n\nAlso, BTC looks different from others. Is this a typo you made or is this the response from the api ? If there are parities that look like BTC, we can add an if else block to the code:\ndf['price']=df['Price'].apply(lambda x: '$' + x.split('$')[1] if x.startswith('$') else '$' + x.split('$')[0])\n\n'''\n0 $16.678,36\n1 $1.230,40\n2 $1,02\n3 $315,46\n4 $1,00\n5 $1,00\n6 $0,4067\n7 $0,1052\n8 $0,3232\n9 $0,8727\n10 $5,48\n\n'''\n\nDetail:\nstring = '$1,02$1,02-0,01%'\nvalues = string.split('$') # output -- > ['', '1,02', '1,02-0,01%']\nfinal_value = values[1] # we need only price. Thats why i choose the second element and apply this to all dataframe.\n\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074593208_dataframe_pandas_python.txt
|
Q:
PyGIWarning: Gtk and Rsvg were imported without specifying a version first. Use gi.require_version
$ python -c 'from gi.repository import Gtk'
-c:1: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
What should I do?
A:
You got a warning because you are importing Gtk without specifying the version. This is because Gtk has several versions, so you should declare which one you want to use.
In order to do so, you can open a Python terminal (type python on your command line) and execute the following code:
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
A:
I had the same issue.
My error listed the file location where the code needs to be placed:
C:\users\me\radioconda\lib\site-packages\gnuradio\grc\main.py
When I used Notepad to edit the file, I found the code from the post above, but placed after a set of three import commands.
from gi.repository import Gtk
import argparse
import logging
import sys
import gi
gi.require_version('Gtk', '3.0')
gi.require_version('PangoCairo', '1.0')
I changed the order to this and no longer receive the error.
I hope this helps.
import gi
gi.require_version('Gtk', '3.0')
gi.require_version('PangoCairo', '1.0')
from gi.repository import Gtk
import argparse
import logging
import sys
A:
I had the same issue as described in the question. I tried to change the order of the above-listed commands in the source file, but some VS Code extension kept resetting the order, undoing the order suggested in the answer above. When I force-saved the code with the sequence as suggested, it solved the problem. This might work in most cases. Thank you.
|
PyGIWarning: Gtk and Rsvg were imported without specifying a version first. Use gi.require_version
|
$ python -c 'from gi.repository import Gtk'
-c:1: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
What should I do?
|
[
"You got a warning because you are importing gtk wihtouht specifing the version. This is because gtk has several version so you should declare which want to use.\nIn order to do so you can open a python terminal (type python on your commandline) and execute the following code:\nimport gi\ngi.require_version('Gtk', '3.0')\nfrom gi.repository import Gtk\n\n",
"I had same issue.\nIn my error, it lists the file location of where the code needs to be placed.\nC:\\users\\me\\radioconda\\lib\\site-packages\\gnuradio\\grc\\main.py\nWhen i used a notepad to edit the file I found the code from the post above, but after a set of three import commands.\nfrom gi.repository import Gtk\nimport argparse\nimport logging\nimport sys\n\nimport gi\ngi.require_version('Gtk', '3.0')\ngi.require_version('PangoCairo', '1.0')\n\nI changed the order to this and no longer receive the error.\nI hope this helps.\nimport gi\ngi.require_version('Gtk', '3.0')\ngi.require_version('PangoCairo', '1.0')\n\nfrom gi.repository import Gtk\nimport argparse\nimport logging\nimport sys \n\n",
"I had the same issue as described in the question. I tried to change the order of the above listed commands in the source file, but some extension of VS Code was resetting the order to bottom up the oder suggested in the above answer. When I force-saved the code with the sequence as suggested, it solved the query. This might work in most cases. Thank you.\n"
] |
[
5,
1,
0
] |
[] |
[] |
[
"centos",
"gtk",
"linux",
"python",
"tryton"
] |
stackoverflow_0063631072_centos_gtk_linux_python_tryton.txt
|
Q:
for loop on lists and keep common items
I wish to iterate through a list and only retain in it those items that also appear in two other lists.
For example:
list1 = [1, 2, 3, "a", 4, 5, 6, 7, 8, 9]
list2 = [2, 5, "a", 8, 4]
list3 = [4, 6, "a", 5]
for item in list1:
if item not in list2 and item not in list3:
list1.remove(item)
print(list1)
I expect the following output: [4, "a", 5], but I get [2, 'a', 4, 5, 6, 8].
Can someone please explain why?
A:
here is another way of doing it:
list1 = [1, 2, 3, "a", 4, 5, 6, 7, 8, 9]
list2 = [2, 5, "a", 8, 4]
list3 = [4, 6, "a", 5]
list4 = []
for item in list1:
if item in list2 and item in list3:
list4.append(item)
print(list4)
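As for the why: list1.remove(item) mutates the list while the for loop is iterating over it. Each removal shifts the remaining items one position left, so the loop skips the element right after every removed one, which is how some unwanted items survive. Separately, the condition only removes items missing from both lists, whereas keeping an item should require it to be present in both. A non-mutating version that matches the expected output:
list1 = [item for item in list1 if item in list2 and item in list3]
print(list1)  # ['a', 4, 5]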
|
for loop on lists and keep common items
|
I wish to iterate through a list and only retain in it those items that also appear in two other lists.
For example:
list1 = [1, 2, 3, "a", 4, 5, 6, 7, 8, 9]
list2 = [2, 5, "a", 8, 4]
list3 = [4, 6, "a", 5]
for item in list1:
if item not in list2 and item not in list3:
list1.remove(item)
print(list1)
I expect the following output: [4, "a", 5], but I get [2, 'a', 4, 5, 6, 8].
Can someone please explain why?
|
[
"here is another way of doing it:\nlist1 = [1, 2, 3, \"a\", 4, 5, 6, 7, 8, 9]\nlist2 = [2, 5, \"a\", 8, 4]\nlist3 = [4, 6, \"a\", 5]\n\nlist4 = []\n\nfor item in list1:\n if item in list2 and item in list3:\n list4.append(item)\n\nprint(list4)\n\n\n\n"
] |
[
1
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074593641_python.txt
|
Q:
Get correlation per groupby/apply in Python Polars
I have a pandas DataFrame df:
d = {'era': ["a", "a", "b","b","c", "c"], 'feature1': [3, 4, 5, 6, 7, 8], 'feature2': [7, 8, 9, 10, 11, 12], 'target': [1, 2, 3, 4, 5 ,6]}
df = pd.DataFrame(data=d)
And I want to apply a correlation between the feature_cols = ['feature1', 'feature2'] and the TARGET_COL = 'target' for each era:
corrs_split = (
training_data
.groupby("era")
.apply(lambda d: d[feature_cols].corrwith(d[TARGET_COL]))
)
I've been trying to get this done with Polars, but I can't get a Polars dataframe with a column for each different era and the correlations for each feature. The most I've got is a single column, with all the correlations calculated, but without the era as the index and not discriminated by feature.
A:
Here's the polars equivalent of that code. You can do this by combining groupby() and agg().
import polars as pl
d = {'era': ["a", "a", "b","b","c", "c"], 'feature1': [3, 4, 5, 6, 7, 8], 'feature2': [7, 8, 9, 10, 11, 12], 'target': [1, 2, 3, 4, 5 ,6]}
df = pl.DataFrame(d)
feature_cols = ['feature1', 'feature2']
TARGET_COL = 'target'
agg_cols = []
for feature_col in feature_cols:
agg_cols += [pl.pearson_corr(feature_col, TARGET_COL)]
print(df.groupby("era").agg(agg_cols))
Output:
shape: (3, 3)
┌─────┬──────────┬──────────┐
│ era ┆ feature1 ┆ feature2 │
│ --- ┆ --- ┆ --- │
│ str ┆ f64 ┆ f64 │
╞═════╪══════════╪══════════╡
│ a ┆ 1.0 ┆ 1.0 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ c ┆ 1.0 ┆ 1.0 │
├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤
│ b ┆ 1.0 ┆ 1.0 │
└─────┴──────────┴──────────┘
(Order may be different for you.)
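If a stable output order matters, recent Polars versions let groupby keep groups in order of first appearance:
print(df.groupby("era", maintain_order=True).agg(agg_cols))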
|
Get correlation per groupby/apply in Python Polars
|
I have a pandas DataFrame df:
d = {'era': ["a", "a", "b","b","c", "c"], 'feature1': [3, 4, 5, 6, 7, 8], 'feature2': [7, 8, 9, 10, 11, 12], 'target': [1, 2, 3, 4, 5 ,6]}
df = pd.DataFrame(data=d)
And I want to apply a correlation between the feature_cols = ['feature1', 'feature2'] and the TARGET_COL = 'target' for each era:
corrs_split = (
training_data
.groupby("era")
.apply(lambda d: d[feature_cols].corrwith(d[TARGET_COL]))
)
I've been trying to get this done with Polars, but I can't get a Polars dataframe with a column for each different era and the correlations for each feature. The most I've got is a single column, with all the correlations calculated, but without the era as the index and not discriminated by feature.
|
[
"Here's the polars equivalent of that code. You can do this by combining groupby() and agg().\nimport polars as pl\n\nd = {'era': [\"a\", \"a\", \"b\",\"b\",\"c\", \"c\"], 'feature1': [3, 4, 5, 6, 7, 8], 'feature2': [7, 8, 9, 10, 11, 12], 'target': [1, 2, 3, 4, 5 ,6]}\ndf = pl.DataFrame(d)\nfeature_cols = ['feature1', 'feature2']\nTARGET_COL = 'target'\n\nagg_cols = []\nfor feature_col in feature_cols:\n agg_cols += [pl.pearson_corr(feature_col, TARGET_COL)]\nprint(df.groupby(\"era\").agg(agg_cols))\n\nOutput:\nshape: (3, 3)\n┌─────┬──────────┬──────────┐\n│ era ┆ feature1 ┆ feature2 │\n│ --- ┆ --- ┆ --- │\n│ str ┆ f64 ┆ f64 │\n╞═════╪══════════╪══════════╡\n│ a ┆ 1.0 ┆ 1.0 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤\n│ c ┆ 1.0 ┆ 1.0 │\n├╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┼╌╌╌╌╌╌╌╌╌╌┤\n│ b ┆ 1.0 ┆ 1.0 │\n└─────┴──────────┴──────────┘\n\n(Order may be different for you.)\n"
] |
[
1
] |
[] |
[] |
[
"group_by",
"pandas",
"pandas_apply",
"python",
"python_polars"
] |
stackoverflow_0074593723_group_by_pandas_pandas_apply_python_python_polars.txt
|
Q:
Is It Possible To Upgrade The Tkinter Library In Python?
I really want to know: is it possible to upgrade the tkinter library in Python? Currently I am working on a project named Translator, where text in any language will be converted to text in any other language and vice versa, same as Google Translate. The problem I am facing is that whenever I want to write text in any other language, it is not accepted; it shows ????. This means the tkinter textbox is not supporting any language other than English. I have seen people saying that if you upgrade the tkinter library to the next version it will work. So can someone please help me out with this!!
I have provided a screenshot here to check what is happening with me right now.
A:
Each version of Python comes with corresponding versions of the Python-coded tkinter and C-coded _tkinter modules. (Tkinter imports _tkinter.) One cannot upgrade tkinter except by upgrading Python.
That said, the tkinter that comes with current versions of Python (3.10+) can potentially display all unicode characters. What are you using? The following is from IDLE's settings dialog font sample on Windows with the SourceCodePro font.
<ASCII/Latin1>
AaBbCcDdEeFfGgHhIiJj
1234567890#:+=(){}[]
¢£¥§©«®¶½ĞÀÁÂÃÄÅÇÐØß
<IPA,Greek,Cyrillic>
ɐɕɘɞɟɤɫɮɰɷɻʁʃʆʎʞʢʫʭʯ
ΑαΒβΓγΔδΕεΖζΗηΘθΙιΚκ
БбДдЖжПпФфЧчЪъЭэѠѤѬӜ
<Hebrew, Arabic>
אבגדהוזחטיךכלםמןנסעף
ابجدهوزحطي٠١٢٣٤٥٦٧٨٩
<Devanagari, Tamil>
०१२३४५६७८९अआइईउऊएऐओऔ
௦௧௨௩௪௫௬௭௮௯அஇஉஎ
<East Asian>
〇一二三四五六七八九
汉字漢字人木火土金水
가냐더려모뵤수유즈치
あいうえおアイウエオ
If characters do not display, the issue is with the OS and font. Characters not in the Basic Multilingual Plane do, however, interfere with editing. Everything you see above is in the BMP and the same will be true of most anything that people want to insert into a text translator.
Post the minimum amount of code needed to illustrate your problem: just one text-display widget and code to insert text that results in '?'s.
|
Is It Possible To Upgrade The Tkinter Library In Python?
|
I really want to know: is it possible to upgrade the tkinter library in Python? Currently I am working on a project named Translator, where text in any language will be converted to text in any other language and vice versa, same as Google Translate. The problem I am facing is that whenever I want to write text in any other language, it is not accepted; it shows ????. This means the tkinter textbox is not supporting any language other than English. I have seen people saying that if you upgrade the tkinter library to the next version it will work. So can someone please help me out with this!!
I have provided a screenshot here to check what is happening with me right now.
|
[
"Each version of Python comes with corresponding versions of the Python-coded tkinter and C-coded _tkinter modules. (Tkinter imports _tkinter.) One cannot upgrade tkinter except by upgrading Python.\nThat said, the tkinter that comes with current versions of Python (3.10+) potentially display all unicode characters. What are you using? The following is from IDLE's settings dialog font sample on Windows with SourceCodePro font.\n<ASCII/Latin1>\nAaBbCcDdEeFfGgHhIiJj\n1234567890#:+=(){}[]\n¢£¥§©«®¶½ĞÀÁÂÃÄÅÇÐØß\n\n<IPA,Greek,Cyrillic>\nɐɕɘɞɟɤɫɮɰɷɻʁʃʆʎʞʢʫʭʯ\nΑαΒβΓγΔδΕεΖζΗηΘθΙιΚκ\nБбДдЖжПпФфЧчЪъЭэѠѤѬӜ\n\n<Hebrew, Arabic>\nאבגדהוזחטיךכלםמןנסעף\nابجدهوزحطي٠١٢٣٤٥٦٧٨٩\n\n<Devanagari, Tamil>\n०१२३४५६७८९अआइईउऊएऐओऔ\n௦௧௨௩௪௫௬௭௮௯அஇஉஎ\n\n<East Asian>\n〇一二三四五六七八九\n汉字漢字人木火土金水\n가냐더려모뵤수유즈치\nあいうえおアイウエオ\n\nIf characters do not display, the issue is with the OS and font. Characters not in the Basic Multilingual Plane do, however, interfere with editing. Everything you see above is in the BMP and the same will be true of most anything that people want to insert into a text translator.\nPost the minimum amount of code needed to illustrate your problem: just one text-display widget and code to insert text that results in '?'s.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_idle",
"tkinter"
] |
stackoverflow_0074587775_python_python_idle_tkinter.txt
|
Q:
Iterate a JSONfield corresponding to an object
The view receives a user request and then returns the corresponding object from the 'ControleProdutos' model db.
views.py
def relatorio_produtos(request):
if request.method == 'POST':
prod_json = ControleProduto.objects.get(pk = request.POST.get('periodo'))
return render(request, 'selecao/historico-produtos.html', {'prod_json':prod_json})
else:
return HttpResponseRedirect('/relatorios')
model.py
class ControleProduto(models.Model):
periodo = models.DateTimeField(auto_now_add= True, verbose_name='Período')
produtos = models.JSONField(verbose_name='Produtos')
faturamento = models.FloatField(verbose_name='Faturamento')
log_forma_pagamento = models.CharField(max_length=50, verbose_name='Forma de Pagamento')
def __str__(self):
return "{} {} {} {}".format(self.periodo, self.produtos, self.faturamento, self.log_forma_pagamento)
def get_data(self):
return{
'periodo': self.periodo,
'produtos': self.produtos,
'faturamento': self.faturamento,
'log_forma_pagamento': self.log_forma_pagamento
}
class ListaProdutos(models.Model):
nome_produto = models.CharField(max_length=50, verbose_name='Produto')
quantidade_produto = models.IntegerField(verbose_name='Qntd.')
vendido = models.IntegerField(verbose_name='Vendidos')
data_adicao_prod= models.DateTimeField(auto_now_add= True ,verbose_name='Data de Adição')
nota_produto = models.TextField(null=True, blank=True)
custo = models.FloatField(verbose_name='Custo')
tipo_produto = models.TextField(verbose_name='Tipo de Produto')
def __str__(self):
return "{} {} {} {} {} {} {} {}".format(self.nome_produto, self.quantidade_produto, self.vendido, self.data_adicao_prod, self.nota_produto, self.custo, self.tipo_produto)
def get_data(self):
return{
'id': self.id,
'nome_produto': self.nome_produto,
'quantidade_produto': self.quantidade_produto,
'vendido': self.vendido,
'custo': self.custo,
'tipo_produto': self.tipo_produto,
}
Then, in the HTML file I'm using a for loop to iterate over the JSONField, but Django is identifying the field as a string.
html
<p>{{ prod_json.periodo }}</p>
<p>{{ prod_json.produtos }}</p>
<p>{{ prod_json.faturamento }}</p>
<p>{{ prod_json.log_forma_pagamento }}</p>
<table>
<thead>
<tr>
<th>ID</th>
<th>Produto</th>
<th>Quantidade Vendida</th>
</tr>
</thead>
{% for prod in prod_json.produtos %}
<tbody>
<tr>
<td>{{prod.pk}}</td>
</tr>
</tbody>
{% endfor %}
</table>
Print
Tried other JSON files didn't work either;
I tried {{ prod.filter.pk }} and that didn't work either;
I reviewed the file and didn't see an apparent error
A:
Django probably stores the JSONField, produtos, in a varchar or nvarchar field in your database.
Whether or not that's true, you could probably solve this issue in the get_data method in ControleProduto.
An example of this would be:
def get_data(self):
return{
'periodo': self.periodo,
'produtos': json.loads(self.produtos),
'faturamento': self.faturamento,
'log_forma_pagamento': self.log_forma_pagamento
}
Remember that you would have to add this line to the top of models.py:
import json
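As an aside (my assumption about the root cause): a JSONField normally stores and returns Python objects directly, so the string behavior usually means a pre-encoded JSON string was saved. Storing native lists/dicts avoids the json.loads step entirely; a hypothetical example:
ControleProduto.objects.create(
    produtos=[{"pk": 1, "nome": "item"}],  # Django serializes this for the JSONField
    faturamento=10.0,
    log_forma_pagamento="pix",
)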
|
Iterate a JSONfield corresponding to an object
|
The view receives a user request and then returns the corresponding object from the 'ControleProdutos' model db.
views.py
def relatorio_produtos(request):
if request.method == 'POST':
prod_json = ControleProduto.objects.get(pk = request.POST.get('periodo'))
return render(request, 'selecao/historico-produtos.html', {'prod_json':prod_json})
else:
return HttpResponseRedirect('/relatorios')
model.py
class ControleProduto(models.Model):
periodo = models.DateTimeField(auto_now_add= True, verbose_name='Período')
produtos = models.JSONField(verbose_name='Produtos')
faturamento = models.FloatField(verbose_name='Faturamento')
log_forma_pagamento = models.CharField(max_length=50, verbose_name='Forma de Pagamento')
def __str__(self):
return "{} {} {} {}".format(self.periodo, self.produtos, self.faturamento, self.log_forma_pagamento)
def get_data(self):
return{
'periodo': self.periodo,
'produtos': self.produtos,
'faturamento': self.faturamento,
'log_forma_pagamento': self.log_forma_pagamento
}
class ListaProdutos(models.Model):
nome_produto = models.CharField(max_length=50, verbose_name='Produto')
quantidade_produto = models.IntegerField(verbose_name='Qntd.')
vendido = models.IntegerField(verbose_name='Vendidos')
data_adicao_prod= models.DateTimeField(auto_now_add= True ,verbose_name='Data de Adição')
nota_produto = models.TextField(null=True, blank=True)
custo = models.FloatField(verbose_name='Custo')
tipo_produto = models.TextField(verbose_name='Tipo de Produto')
def __str__(self):
return "{} {} {} {} {} {} {} {}".format(self.nome_produto, self.quantidade_produto, self.vendido, self.data_adicao_prod, self.nota_produto, self.custo, self.tipo_produto)
def get_data(self):
return{
'id': self.id,
'nome_produto': self.nome_produto,
'quantidade_produto': self.quantidade_produto,
'vendido': self.vendido,
'custo': self.custo,
'tipo_produto': self.tipo_produto,
}
Then, in the HTML file I'm using a for loop to iterate over the JSONField, but Django is identifying the field as a string.
html
<p>{{ prod_json.periodo }}</p>
<p>{{ prod_json.produtos }}</p>
<p>{{ prod_json.faturamento }}</p>
<p>{{ prod_json.log_forma_pagamento }}</p>
<table>
<thead>
<tr>
<th>ID</th>
<th>Produto</th>
<th>Quantidade Vendida</th>
</tr>
</thead>
{% for prod in prod_json.produtos %}
<tbody>
<tr>
<td>{{prod.pk}}</td>
</tr>
</tbody>
{% endfor %}
</table>
Print
Tried other JSON files didn't work either;
I tried {{ prod.filter.pk }} and that didn't work either;
I reviewed the file and didn't see an apparent error
|
[
"Django probably stores the JOSNField, produtos, in a varchar or nvarchar field in your database.\nWhether or not that's true, you probably could solve this issue in the get_data method in ControleProduto.\nAn example of this would be:\ndef get_data(self):\n return{\n 'periodo': self.periodo,\n 'produtos': json.loads(self.produtos),\n 'faturamento': self.faturamento,\n 'log_forma_pagamento': self.log_forma_pagamento\n }\n\nRemember that you would have to add this line to the top of models.py:\nimport json\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0074593683_django_python.txt
|
Q:
Pygame-2.1.3.dev8 release not working with moviepy
I tried installing pygame with pip install pygame --pre, but that release (pygame 2.1.3.dev8) seems not to be compatible with the moviepy library.
Does anyone know if there is some kind of workaround to get this to work?
Error:
Traceback (most recent call last):
File "C:\Users\oscar\OneDrive\Skrivbord\Auto-tok\main.py", line 99, in <module>
main()
File "C:\Users\oscar\OneDrive\Skrivbord\Auto-tok\main.py", line 95, in main
videostuff(reddit_json_blob)
File "C:\Users\oscar\OneDrive\Skrivbord\Auto-tok\main.py", line 55, in videostuff
final.show(2, interactive=True)
File "C:\Users\oscar\OneDrive\Skrivbord\Auto-tok\venv\Lib\site-packages\moviepy\editor.py", line 118, in show
raise ImportError("clip.show requires Pygame installed")
ImportError: clip.show requires Pygame installed
A:
Unfortunately, Pygame has not been ported to Python 3.11. I would downgrade to 3.10 and try that instead. There aren't many critical features added in 3.11, so I think you should be able to live without it. I doubt you'll need tomllib, for instance.
|
Pygame-2.1.3.dev8 release not working with moviepy
|
I tried installing pygame with pip install pygame --pre, but that release (Pygame-2.1.3.dev8) of pygame seems not to be compatible with the moviepy library.
Does anyone know if there is some kind of workaround to get this to work?
Error:
Traceback (most recent call last):
File "C:\Users\oscar\OneDrive\Skrivbord\Auto-tok\main.py", line 99, in <module>
main()
File "C:\Users\oscar\OneDrive\Skrivbord\Auto-tok\main.py", line 95, in main
videostuff(reddit_json_blob)
File "C:\Users\oscar\OneDrive\Skrivbord\Auto-tok\main.py", line 55, in videostuff
final.show(2, interactive=True)
File "C:\Users\oscar\OneDrive\Skrivbord\Auto-tok\venv\Lib\site-packages\moviepy\editor.py", line 118, in show
raise ImportError("clip.show requires Pygame installed")
ImportError: clip.show requires Pygame installed
|
[
"Unfortunately, Pygame has not been ported to Python 3.11. I would downgrade to 3.10 and try that instead. There aren't many critical features added in 3.11 that I think you should be able to live without. I doubt you'll need tomllib, for instance.\n"
] |
[
0
] |
[] |
[] |
[
"moviepy",
"pygame",
"python",
"python_3.11"
] |
stackoverflow_0074593879_moviepy_pygame_python_python_3.11.txt
|
Q:
Why doesn't my program properly read from my text file?
I made a text file with a list of usernames and passwords. My program (in a tkinter page) is supposed to check whether the username and password exist in the file, and if they don't, it makes a label that says 'username or password incorrect'. However, even when the username and password clearly exist in the text file, it will still print the 'incorrect' message. Here's an example of something in my text file:
testusername.testpassword
And here is the code that's supposed to detect it:
def login_incorrect():
    Label(loginPage, text="Username or password incorrect.").place(x=120, y=120)
    # print("def login incorrect")

def LoginToAccount():
    print("def login to account")
    # while True:  # This loop will run as long as the user is not logged in.
    with open('AccountDatabase.txt'):
        if loginUsernameE.get() + '.' + loginPasswordE.get() not in open('AccountDatabase.txt').read():
            login_incorrect()
            print('incorrect')
            print(loginUsernameE.get() + '.' + loginPasswordE.get())
But when I write testusername in the username field and testpassword in the password field, it still shows the error. Here's a screenshot:
Why can't I detect if text is in a text file?
A:
It looks like you first need to read the file, and only then check for the occurrence of the desired string.
with open('AccountDatabase.txt', 'r') as f:
    file_logins = f.read()
    if loginUsernameE.get() + '.' + loginPasswordE.get() not in file_logins:
        login_incorrect()
        print('incorrect')
A:
Try this code. I fixed opening the file for read and the condition.
def login_incorrect():
    Label(loginPage, text="Username or password incorrect.").place(x=120, y=120)
    # print("def login incorrect")

def LoginToAccount():
    print("def login to account")
    # while True:  # This loop will run as long as the user is not logged in.
    with open('AccountDatabase.txt', 'r') as f:
        if loginUsernameE.get() + '.' + loginPasswordE.get() not in f.read():
            login_incorrect()
            print('incorrect')
            print(loginUsernameE.get() + '.' + loginPasswordE.get())
|
Why doesn't my program properly read from my text file?
|
I made a text file with a list of usernames and passwords. My program (in a tkinter page) is supposed to check whether the username and password exist in the file, and if they don't, it makes a label that says 'username or password incorrect'. However, even when the username and password clearly exist in the text file, it will still print the 'incorrect' message. Here's an example of something in my text file:
testusername.testpassword
And here is the code that's supposed to detect it:
def login_incorrect():
    Label(loginPage, text="Username or password incorrect.").place(x=120, y=120)
    # print("def login incorrect")

def LoginToAccount():
    print("def login to account")
    # while True:  # This loop will run as long as the user is not logged in.
    with open('AccountDatabase.txt'):
        if loginUsernameE.get() + '.' + loginPasswordE.get() not in open('AccountDatabase.txt').read():
            login_incorrect()
            print('incorrect')
            print(loginUsernameE.get() + '.' + loginPasswordE.get())
But when I write testusername in the username field and testpassword in the password field, it still shows the error. Here's a screenshot:
Why can't I detect if text is in a text file?
|
[
"It looks like you first need to read the file, and only then check for the occurrence of the desired one.\nwith open('AccountDatabase.txt', 'r') as f:\n file_logins = f.read()\n if loginUsernameE.get() + '.' + loginPasswordE.get() not in file_logins:\n login_incorrect()\n print('incorrect')\n\n",
"Try this code. I fixed opening the file for read and the condition.\ndef login_incorrect():\n Label(loginPage, text=\"Username or password incorrect.\").place(x=120, y=120)\n # print(\"def login incorrect\")\ndef LoginToAccount():\n print(\"def login to account\")\n # while True: # This loop will run as long as the user is not logged in.\n with open('AccountDatabase.txt', 'r') as f:\n\n if loginUsernameE.get() + '.' + loginPasswordE.get() not in f.read():\n login_incorrect()\n print('incorrect')\n print(loginUsernameE.get() + '.' + loginPasswordE.get())\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"database",
"python",
"text_files",
"tkinter",
"txt"
] |
stackoverflow_0074593227_database_python_text_files_tkinter_txt.txt
|
Q:
Converting the output of pickle.dumps() into a string and back?
In my Python program, I have a list with some objects from a custom class:
# Some example program, not the actual code.
class SomeClass:
    def __init__(self):
        import random
        import os
        self.thing = random.randint(5,15)
        self.thing2 = str(os.urandom(16))
        self.thing3 = random.randint(1,10)
        self.thing4 = "You get the idea"

a_list = [SomeClass(),SomeClass(),SomeClass(),SomeClass(),SomeClass()]

import pickle
print((pickle.dumps(a_list)))
I need to convert the output of pickle.dumps() into a string, which is easy enough, but how do I convert that back into a byte stream that pickle.loads() can use? Help would be appreciated.
I tried converting it into UTF-8, but the file I need to save the string to is UTF-8 encoded, so that did not work.
A:
The usual way to "stringify" binary data is to base64-encode it:
>>> import pickle
>>> import base64
>>> L = list(range(5))
>>> ps = pickle.dumps(L)
>>> ps
b'\x80\x04\x95\x0f\x00\x00\x00\x00\x00\x00\x00]\x94(K\x00K\x01K\x02K\x03K\x04e.'
>>> s = base64.b64encode(ps).decode('ascii')
>>> s
'gASVDwAAAAAAAABdlChLAEsBSwJLA0sEZS4='
>>># Round trip
>>> pickle.loads(base64.b64decode(s))
[0, 1, 2, 3, 4]
Base64 encoding is usually used for transferring binary data in text-only environments (such as HTTP). However if you want to save you pickled data to a file you can do this directly by opening the file in binary mode:
with open('myfile.bin', 'wb') as f:
    pickle.dump(my_object, f)
|
Converting the output of pickle.dumps() into a string and back?
|
In my Python program, I have a list with some objects from a custom class:
# Some example program, not the actual code.
class SomeClass:
    def __init__(self):
        import random
        import os
        self.thing = random.randint(5,15)
        self.thing2 = str(os.urandom(16))
        self.thing3 = random.randint(1,10)
        self.thing4 = "You get the idea"

a_list = [SomeClass(),SomeClass(),SomeClass(),SomeClass(),SomeClass()]

import pickle
print((pickle.dumps(a_list)))
I need to convert the output of pickle.dumps() into a string, which is easy enough, but how do I convert that back into a byte stream that pickle.loads() can use? Help would be appreciated.
I tried converting it into UTF-8, but the file I need to save the string to is UTF-8 encoded, so that did not work.
|
[
"The usual way to \"stringify\" binary data is to base64-encode it:\n>>> import pickle\n>>> import base64\n>>> L = list(range(5))\n>>> ps = pickle.dumps(L)\n>>> ps\nb'\\x80\\x04\\x95\\x0f\\x00\\x00\\x00\\x00\\x00\\x00\\x00]\\x94(K\\x00K\\x01K\\x02K\\x03K\\x04e.'\n>>> s = base64.b64encode(ps).decode('ascii')\n>>> s\n'gASVDwAAAAAAAABdlChLAEsBSwJLA0sEZS4='\n>>># Round trip\n>>> pickle.loads(base64.b64decode(s))\n[0, 1, 2, 3, 4]\n\nBase64 encoding is usually used for transferring binary data in text-only environments (such as HTTP). However if you want to save you pickled data to a file you can do this directly by opening the file in binary mode:\nwith open('myfile.bin', 'wb') as f:\n pickle.dump(my_object, f)\n\n"
] |
[
2
] |
[] |
[] |
[
"pickle",
"python"
] |
stackoverflow_0074593860_pickle_python.txt
|
Q:
python ndarray multiply columns
I have a dataframe with two columns that are json.
So for example,
df = A B C D
1. 2. {b:1,c:2,d:{r:1,t:{y:0}}} {v:9}
I want to flatten it entirely, so every value in the JSON will be in a separate column, and the name will be the full path. So here the value 0 will be in the column:
C_d_t_y
What is the best way to do it, and without having to predefine the depth of the json or the fields?
A:
If your dataframe contains only nested dictionaries (no lists), you can try:
def get_values(df):
    def _parse(val, current_path):
        if isinstance(val, dict):
            for k, v in val.items():
                yield from _parse(v, current_path + [k])
        else:
            yield "_".join(map(str, current_path)), val

    rows = []
    for idx, row in df.iterrows():
        tmp = {}
        for i in row.index:
            tmp.update(dict(_parse(row[i], [i])))
        rows.append(tmp)

    return pd.DataFrame(rows, index=df.index)


print(get_values(df))
Prints:
   A  B  C_b  C_c  C_d_r  C_d_t_y  D_v
0  1  2    1    2      1        0    9
|
python ndarray multiply columns
|
I have a dataframe with two columns that are json.
So for example,
df = A B C D
1. 2. {b:1,c:2,d:{r:1,t:{y:0}}} {v:9}
I want to flatten it entirely, so every value in the JSON will be in a separate column, and the name will be the full path. So here the value 0 will be in the column:
C_d_t_y
What is the best way to do it, and without having to predefine the depth of the json or the fields?
|
[
"If your dataframe contains only nested dictionaries (no lists), you can try:\ndef get_values(df):\n def _parse(val, current_path):\n if isinstance(val, dict):\n for k, v in val.items():\n yield from _parse(v, current_path + [k])\n else:\n yield \"_\".join(map(str, current_path)), val\n\n rows = []\n for idx, row in df.iterrows():\n tmp = {}\n for i in row.index:\n tmp.update(dict(_parse(row[i], [i])))\n rows.append(tmp)\n\n return pd.DataFrame(rows, index=df.index)\n\n\nprint(get_values(df))\n\nPrints:\n A B C_b C_c C_d_r C_d_t_y D_v\n0 1 2 1 2 1 0 9\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"json",
"pandas",
"python"
] |
stackoverflow_0074593846_dataframe_json_pandas_python.txt
|
Q:
How do I connect categorical scatter points with a vertical line?
I have data in dataframe about different assets - let's say A,B,C,D.
What I would like to do is create a chart that looks something like this:
These assets are at a maximum of price n (let's say in our case 3.5), the dotted line and dotted circle show the historic minimum. Furthermore, it is possible to also display a range using 2 dotted circles (i.e. full circle would mean that this is the current maximum, and the range between two dotted circles indicates the range of the price in a given timeframe). I have all the data in df, but I would like to know whether you know how to display this either in matplotlib or seaborn.
The data:
df = pd.DataFrame({'A': [3.5,2,1.5], 'B': [3.5,1.7,1.7],'C': [3.5,0.7,0.7],'D': [3.5,1.1,1.1]})
A:
You can draw an open scatter plot using facecolor='none'. And setting facecolor=None will get the default with the face color equal to the main color.
With plt.vlines() you can draw vertical lines between the minima and the maxima.
from matplotlib import pyplot as plt
import pandas as pd

df = pd.DataFrame({'A': [3.5,2,1.5], 'B': [3.5,1.7,1.7], 'C': [3.5,0.7,0.7], 'D': [3.5,1.1,1.1]})
xvals = df.columns
colors = plt.cm.Set1.colors[:len(xvals)]
for ind, row in df.iterrows():
    plt.scatter(xvals, row, color=colors, s=200, facecolor=None if ind == 0 else 'none')
plt.vlines(xvals, df.min(), df.max(), color=colors, ls='--')

plt.tight_layout()
plt.show()
|
How do I connect categorical scatter points with a vertical line?
|
I have data in dataframe about different assets - let's say A,B,C,D.
What I would like to do is create a chart that looks something like this:
These assets are at a maximum of price n (let's say in our case 3.5), the dotted line and dotted circle show the historic minimum. Furthermore, it is possible to also display a range using 2 dotted circles (i.e. full circle would mean that this is the current maximum, and the range between two dotted circles indicates the range of the price in a given timeframe). I have all the data in df, but I would like to know whether you know how to display this either in matplotlib or seaborn.
The data:
df = pd.DataFrame({'A': [3.5,2,1.5], 'B': [3.5,1.7,1.7],'C': [3.5,0.7,0.7],'D': [3.5,1.1,1.1]})
|
[
"You can draw an open scatter plot using facecolor='none'. And setting facecolor=None will get the default with the face color equal to the main color.\nWith plt.vlines() you can draw vertical lines between the minima and the maxima.\nfrom matplotlib import pyplot as plt\nimport pandas as pd\n\ndf = pd.DataFrame({'A': [3.5,2,1.5], 'B': [3.5,1.7,1.7], 'C': [3.5,0.7,0.7], 'D': [3.5,1.1,1.1]})\nxvals = df.columns\ncolors = plt.cm.Set1.colors[:len(xvals)]\nfor ind, row in df.iterrows():\n plt.scatter(xvals, row, color=colors, s=200, facecolor=None if ind == 0 else 'none')\nplt.vlines(xvals, df.min(), df.max(), color=colors, ls='--')\n\nplt.tight_layout()\nplt.show()\n\n\n"
] |
[
2
] |
[] |
[] |
[
"matplotlib",
"pandas",
"python",
"scatter_plot"
] |
stackoverflow_0074593695_matplotlib_pandas_python_scatter_plot.txt
|
Q:
How to split a list into sublists with specific range for each sublist?
I want to split a list into sublist with specific 'if statement' for each sublist.
For examle:
input:
a = [1, 2, 7.9, 3, 4, 3.7, 5, 6, 2.2, 7, 8, 1.2, 5.7]
output:
b = [[1, 1.2, 2], [2.2, 3, 3.7, 4], [5, 5.7, 6], [7, 7.9, 8]]
Values should be grouped by a certain range; here it is between (1:2); (2.1:4); (4.1:6); (6.1:8). I hope I was able to get my point across.
A:
You seem to want to divide your data into buckets of width dx. Assuming this, your expected output would be:
[[1, 1.2, 2], [2.2, 3], [3.7, 4], [5, 5.7, 6], [7, 7.9, 8]]
First, let's sort the input numbers:
numbers = sorted(a)
Now, we'll iterate over this sorted list, and append to a bucket list as long as appending the current number wouldn't exceed our desired range for this bucket. If appending the current number would cause the desired range to be exceeded, then we create a new bucket and start appending to it:
bucket = []
result = [bucket]
for n in numbers:
    # bucket is empty, or bucket range <= dx, so append
    if not bucket or n - bucket[0] <= dx:
        bucket.append(n)
    else:
        bucket = [n]  # Create a new bucket with the current number
        result.append(bucket)  # Add it to our result array
This gives your expected result:
result = [[1, 1.2, 2], [2.2, 3], [3.7, 4], [5, 5.7, 6], [7, 7.9, 8]]
A:
The logic is not fully clear. Assuming you want to group by bins of width 2 (1-2, 3-4, 5-6, ...) with the right boundary included.
We can use a dictionary to temporarily hold the bins and to facilitate the creation of the sublists.
from math import ceil

a = [1, 2, 7.9, 3, 4, 3.7, 5, 6, 2.2, 7, 8, 1.2, 5.7]
dx = 2

d = {}

for x in sorted(a):
    k = ceil(x+1)//dx
    d.setdefault(k, []).append(x)

b = list(d.values())
Output:
[[1, 1.2, 2], [2.2, 3, 3.7, 4], [5, 5.7, 6], [7, 7.9, 8]]
|
How to split a list into sublists with specific range for each sublist?
|
I want to split a list into sublists with a specific 'if statement' for each sublist.
For example:
input:
a = [1, 2, 7.9, 3, 4, 3.7, 5, 6, 2.2, 7, 8, 1.2, 5.7]
output:
b = [[1, 1.2, 2], [2.2, 3, 3.7, 4], [5, 5.7, 6], [7, 7.9, 8]]
Values should be grouped by a certain range; here it is between (1:2); (2.1:4); (4.1:6); (6.1:8). I hope I was able to get my point across.
|
[
"You seem to want to divide your data into buckets of width dx. Assuming this, your expected output would be:\n[[1, 1.2, 2], [2.2, 3], [3.7, 4], [5, 5.7, 6], [7, 7.9, 8]]\n\nFirst, let's sort the input numbers:\nnumbers = sorted(a)\n\nNow, we'll iterate over this sorted list, and append to a bucket list as long as appending the current number wouldn't exceed our desired range for this bucket. If appending the current number would cause the desired range to be exceeded, then we create a new bucket and start appending to it:\nbucket = []\nresult = [bucket]\nfor n in numbers:\n # bucket is empty, or bucket range <= dx, so append\n if not bucket or n - bucket[0] <= dx: \n bucket.append(n)\n else:\n bucket = [n] # Create a new bucket with the current number\n result.append(bucket) # Add it to our result array\n\nThis gives your expected result:\nresult = [[1, 1.2, 2], [2.2, 3], [3.7, 4], [5, 5.7, 6], [7, 7.9, 8]]\n\n",
"The logic is not fully clear. Assuming you want to group by bins of width 2 (1-2, 3-4, 5-6, ...) with the right boundary included.\nWe can use a dictionary to temporarily hold the bins and to facilitate the creation of the sublists.\nfrom math import ceil\n\na = [1, 2, 7.9, 3, 4, 3.7, 5, 6, 2.2, 7, 8, 1.2, 5.7]\ndx = 2\n\nd = {}\n\nfor x in sorted(a):\n k = ceil(x+1)//dx\n d.setdefault(k, []).append(x)\n\nb = list(d.values())\n\nOutput:\n[[1, 1.2, 2], [2.2, 3, 3.7, 4], [5, 5.7, 6], [7, 7.9, 8]]\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"arrays",
"loops",
"python",
"sorting"
] |
stackoverflow_0074593812_arrays_loops_python_sorting.txt
|
Q:
Pass build inputs from Jenkins to a Python script
I wrote this simple Jenkinsfile to execute a Python script.
The Jenkins job is supposed to take the value of the Jenkins build parameter and inject it to the python script, then execute the python script.
Here is the Jenkinsfile
pipeline {
    agent any
    parameters {
        string description: 'write the week number', name: 'Week_Number'
    }
    stages {
        stage("Pass Week Number&execute script") {
            steps {
                sh 'python3 statistics.py'
            }
        }
    }
}
So what will happen is that I will go to Jenkins, choose build with parameters, and write some value in the Week_Number variable.
What I need to do is: pass this Week_Number value as an integer to a variable in the Python script.
This is the part of the Python script that I'm interested in:
weekNum = int(os.environ.get("Week_Number"))
I read online about the use of os.environ.get() to pass values, but I think something is still missing for the Python script to fetch the Jenkins build parameter.
Any help?
A:
You need your Python script to be able to parse command line arguments or named command line arguments.
If your script uses positional command line arguments, you can pass the parameter as follows:
stages {
    stage("Pass Week Number&execute script") {
        steps {
            sh('python3 statistics.py ' + params.Week_Number)
        }
    }
}

If your script uses a named command line argument, where the named argument in the script is input_week_number, you can pass it as follows:
stages {
    stage("Pass Week Number&execute script") {
        steps {
            sh('python3 statistics.py --input_week_number ' + params.Week_Number)
        }
    }
}
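For completeness, here is a minimal sketch of the Python side for the named-argument case (this is my assumption of what statistics.py could look like; the original answer does not show the script):
# statistics.py -- hypothetical sketch of the receiving script
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--input_week_number", type=int, required=True)
args = parser.parse_args()

week_num = args.input_week_number  # already an int, no os.environ lookup needed
print(week_num)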
|
Pass build inputs from Jenkins to a Python script
|
I wrote this simple Jenkinsfile to execute a Python script.
The Jenkins job is supposed to take the value of the Jenkins build parameter and inject it to the python script, then execute the python script.
Here is the Jenkinsfile
pipeline {
    agent any
    parameters {
        string description: 'write the week number', name: 'Week_Number'
    }
    stages {
        stage("Pass Week Number&execute script") {
            steps {
                sh 'python3 statistics.py'
            }
        }
    }
}
So what will happen is that I will go to Jenkins, choose build with parameters, and write some value in the Week_Number variable.
What I need to do is: pass this Week_Number value as an integer to a variable in the Python script.
This is the part of the Python script that I'm interested in:
weekNum = int(os.environ.get("Week_Number"))
I read online about the use of os.environ.get() to pass values, but I think something is still missing for the Python script to fetch the Jenkins build parameter.
Any help?
|
[
"You need your python script to be able to parse command line arguments or named command line arguments.\nIf your script is using command line argument you can pass parameters as follow:\nstages{\n stage(\"Pass Week Number&execute script\"){\n steps{\n sh('python3 statistics.py ' + params.Week_Number)\n }\n }\n}\n\nIf your script uses the named command line argument where the named argument in the script is input_week_number you can pass as follows:\nstages{\n stage(\"Pass Week Number&execute script\"){\n steps{\n sh('python3 statistics.py --input_week_number' + params.Week_Number)\n }\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"groovy",
"jenkins",
"python"
] |
stackoverflow_0074548804_groovy_jenkins_python.txt
|
Q:
AttributeError: 'Tensor' object has no attribute '_keras_history'
I looked for all the "'Tensor' object has no attribute ***" but none seems related to Keras (except for TensorFlow: AttributeError: 'Tensor' object has no attribute 'log10' which didn't help)...
I am making a sort of GAN (Generative Adversarial Networks). Here you can find the structure.
Layer (type) Output Shape Param # Connected to
_____________________________________________________________________________
input_1 (InputLayer) (None, 30, 91) 0
_____________________________________________________________________________
model_1 (Model) (None, 30, 1) 12558 input_1[0][0]
_____________________________________________________________________________
model_2 (Model) (None, 30, 91) 99889 input_1[0][0]
model_1[1][0]
_____________________________________________________________________________
model_3 (Model) (None, 1) 456637 model_2[1][0]
_____________________________________________________________________________
I pretrained model_2 and model_3. The thing is, I pretrained model_2 with lists made of 0 and 1, but model_1 returns approximate values. So I considered rounding model_1's output with K.round(), as in the following code:
import keras.backend as K
[...]
def make_gan(GAN_in, model1, model2, model3):
    model1_out = model1(GAN_in)
    model2_out = model2([GAN_in, K.round(model1_out)])
    GAN_out = model3(model2_out)
    GAN = Model(GAN_in, GAN_out)
    GAN.compile(loss=loss, optimizer=model1.optimizer, metrics=['binary_accuracy'])
    return GAN
[...]
I have the following error :
AttributeError: 'Tensor' object has no attribute '_keras_history'
Full traceback :
Traceback (most recent call last):
File "C:\Users\Asmaa\Documents\BillyValuation\GFD.py", line 88, in <module>
GAN = make_gan(inputSentence, G, F, D)
File "C:\Users\Asmaa\Documents\BillyValuation\GFD.py", line 61, in make_gan
GAN = Model(GAN_in, GAN_out)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1705, in __init__
build_map_of_graph(x, finished_nodes, nodes_in_progress)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1695, in build_map_of_graph
layer, node_index, tensor_index)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1695, in build_map_of_graph
layer, node_index, tensor_index)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1665, in build_map_of_graph
layer, node_index, tensor_index = tensor._keras_history
AttributeError: 'Tensor' object has no attribute '_keras_history'
I'm using Python 3.6, with Spyder 3.1.4, on Windows 7. I upgraded TensorFlow and Keras with pip last week.
Thank you for any help provided !
A:
My problem was using '+' instead of 'Add' in Keras
A:
Since the error comes directly from here:
Traceback (most recent call last):
File "C:\Users\Asmaa\Documents\BillyValuation\GFD.py", line 88, in <module>
GAN = make_gan(inputSentence, G, F, D)
File "C:\Users\Asmaa\Documents\BillyValuation\GFD.py", line 61, in make_gan
GAN = Model(GAN_in, GAN_out)
, and since the inputs of your models depend on the outputs of previous models, I believe the bug lies in the code of your models.
In your model code, please check line by line whether you apply a non-Keras operation, especially in the last few lines. For example, for element-wise addition you might intuitively use + or even numpy.add, but keras.layers.Add() should be used instead.
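A minimal sketch of the difference (my own illustration, assuming two Keras input tensors of the same shape):
from keras.layers import Input, Add

x = Input(shape=(8,))
y = Input(shape=(8,))

# z = x + y        # a plain '+' returns a raw backend tensor without _keras_history
z = Add()([x, y])  # the Add layer produces a proper Keras tensor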
A:
@'Maëva LC': I can't post a comment, this answers your None issue.
but the code is working fine without the line
model1_out = Lambda(lambda x: K.round(x), output_shape=...)(model1_out)
and not anything else. Anyway, thank you for trying.
Function round() is not differentiable, hence the gradient is None. I suggest you just remove the line.
A:
Try this:
def make_gan(GAN_in, model1, model2, model3):
    model1_out = model1(GAN_in)
    model1_out = Lambda(lambda x: K.round(x), output_shape=...)(model1_out)
    model2_out = model2([GAN_in, model1_out])
    GAN_out = model3(model2_out)
    GAN = Model(GAN_in, GAN_out)
    GAN.compile(loss=loss, optimizer=model1.optimizer,
                metrics=['binary_accuracy'])
    return GAN
A:
This is supported in TensorFlow 1.x; you are probably using version 2.x.
%tensorflow_version 1.x
Use the above tensorflow_version magic before importing TensorFlow in Google Colab.
This magic is not valid in Jupyter Notebook, so please use Google Colab.
A:
I also faced the same problem. When I used x = relu(x) I got the same error. To overcome this, I defined a function and used a Lambda layer.
def relu_func(x):
    return relu(x)

x = layers.Lambda(relu_func)(x)
|
AttributeError: 'Tensor' object has no attribute '_keras_history'
|
I looked for all the "'Tensor' object has no attribute ***" but none seems related to Keras (except for TensorFlow: AttributeError: 'Tensor' object has no attribute 'log10' which didn't help)...
I am making a sort of GAN (Generative Adversarial Networks). Here you can find the structure.
Layer (type) Output Shape Param # Connected to
_____________________________________________________________________________
input_1 (InputLayer) (None, 30, 91) 0
_____________________________________________________________________________
model_1 (Model) (None, 30, 1) 12558 input_1[0][0]
_____________________________________________________________________________
model_2 (Model) (None, 30, 91) 99889 input_1[0][0]
model_1[1][0]
_____________________________________________________________________________
model_3 (Model) (None, 1) 456637 model_2[1][0]
_____________________________________________________________________________
I pretrained model_2 and model_3. The thing is, I pretrained model_2 with lists made of 0 and 1, but model_1 returns approximate values. So I considered rounding model_1's output with K.round(), as in the following code:
import keras.backend as K
[...]
def make_gan(GAN_in, model1, model2, model3):
    model1_out = model1(GAN_in)
    model2_out = model2([GAN_in, K.round(model1_out)])
    GAN_out = model3(model2_out)
    GAN = Model(GAN_in, GAN_out)
    GAN.compile(loss=loss, optimizer=model1.optimizer, metrics=['binary_accuracy'])
    return GAN
[...]
I have the following error :
AttributeError: 'Tensor' object has no attribute '_keras_history'
Full traceback :
Traceback (most recent call last):
File "C:\Users\Asmaa\Documents\BillyValuation\GFD.py", line 88, in <module>
GAN = make_gan(inputSentence, G, F, D)
File "C:\Users\Asmaa\Documents\BillyValuation\GFD.py", line 61, in make_gan
GAN = Model(GAN_in, GAN_out)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1705, in __init__
build_map_of_graph(x, finished_nodes, nodes_in_progress)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1695, in build_map_of_graph
layer, node_index, tensor_index)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1695, in build_map_of_graph
layer, node_index, tensor_index)
File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\topology.py", line 1665, in build_map_of_graph
layer, node_index, tensor_index = tensor._keras_history
AttributeError: 'Tensor' object has no attribute '_keras_history'
I'm using Python 3.6, with Spyder 3.1.4, on Windows 7. I upgraded TensorFlow and Keras with pip last week.
Thank you for any help provided !
|
[
"My problem was using '+' instead of 'Add' on keras\n",
"Since the error comes directly from here:\nTraceback (most recent call last):\n File \"C:\\Users\\Asmaa\\Documents\\BillyValuation\\GFD.py\", line 88, in <module>\nGAN = make_gan(inputSentence, G, F, D)\n File \"C:\\Users\\Asmaa\\Documents\\BillyValuation\\GFD.py\", line 61, in make_gan\nGAN = Model(GAN_in, GAN_out)\n\n, and the inputs of your models depend on the outputs from previous models, I believe the bug lies in the codes in your model.\nIn you model code, please check line by line whether or not you apply a non-Keras operation, especially in the last few lines. For example ,for element-wise addition, you might intuitively use + or even numpy.add, but keras.layers.Add() should be used instead.\n",
"@'Maëva LC': I can't post a comment, this answers your None issue.\n\nbut the code is working fine without the line\nmodel1_out = (lambda x: K.round(x), output_shape=...)(model1_out) \nand not anything else. Anyway, thank you for trying.\n\nFunction round() is not differentiable, hence the gradient is None. I suggest you just remove the line.\n",
"Try this:\ndef make_gan(GAN_in, model1, model2, model3):\n model1_out = model1(GAN_in)\n model1_out = Lambda(lambda x: K.round(x), output_shape=...)(model1_out)\n model2_out = model2([GAN_in, model1_out])\n GAN_out = model3(model2_out)\n GAN = Model(GAN_in, GAN_out)\n GAN.compile(loss=loss, optimizer=model1.optimizer, \n metrics=['binary_accuracy'])\n return GAN\n\n",
"This is supported in tensorflow versions 1.x\nYou are using version 2.x probably.\n%tensorflow_version 1.x\nuse the above tensorflow_version magic before importing tensorflow in google colab.\nThis is not valid in jupyter-notebook. Please do use Google Colab\n",
"I also have faced the same problem. When I use x = relu(x) then I got same error. Overcome this problem, I define a function and use Lambda layer.\ndef relu_func(x):\n \n return relu(x)\n\n x = layers.Lambda(relu_func)(x)\n\n"
] |
[
23,
13,
4,
1,
0,
0
] |
[] |
[] |
[
"attributeerror",
"keras",
"python"
] |
stackoverflow_0044889187_attributeerror_keras_python.txt
|
Q:
I'm not sure what I'm doing wrong on this program
Define a Course base class with attributes number and title. Define a print_info() method that displays the course number and title.
Also define a derived class OfferedCourse with the additional attributes instructor_name, term, and class_time.
Ex: If the input is:
ECE287
Digital Systems Design
ECE387
Embedded Systems Design
Mark Patterson
Fall 2018
WF: 2-3:30 pm
the output is:
Course Information:
Course Number: ECE287
Course Title: Digital Systems Design
Course Information:
Course Number: ECE387
Course Title: Embedded Systems Design
Instructor Name: Mark Patterson
Term: Fall 2018
Class Time: WF: 2-3:30 pm
Here is the code I have so far:
class Course:
    # TODO: Define constructor with attributes: number, title
    def __init__(self):
        self.number = ''
        self.title = 0

    # TODO: Define print_info()
    def print_info(self):
        print(' Course Number:', self.number)
        print(' Title:', self.title)

class OfferedCourse(Course):
    # TODO: Define constructor with attributes:
    # number, title, instructor_name, term, class_time
    def __init__(self, number, title, instructor_name, term, class_time):
        Course.__init__(course_number, course_title)
        self.instructor_name = ''
        self.term = ''
        self.class_time = 0

if __name__ == '__main__':
    course_number = input()
    course_title = input()
    o_course_number = input()
    o_course_title = input()
    instructor_name = input()
    term = input()
    class_time = input()

    my_course = Course(course_number, course_title)
    my_course.print_info()

    my_offered_course = OfferedCourse(
        o_course_number, o_course_title, instructor_name, term, class_time
    )
    my_offered_course.print_info()
    print(' Instructor Name:', my_offered_course.instructor_name)
    print(' Term:', my_offered_course.term)
    print(' Class Time:', my_offered_course.class_time)
When I run the code, I'm getting the following error:
Traceback (most recent call last): File "main.py", line 32, in <module> my_course = Course(course_number, course_title) TypeError: __init__() takes 1 positional argument but 3 were given
A:
The problem is the def __init__(self): method in the Course class: you are telling Python that Course does not accept anything other than itself. If you want to be able to pass those arguments to __init__ while keeping default values, provide defaults inside __init__:
def __init__(self, number: int = 0, title: str = ''):
    self.number = number
    self.title = title
A:
Course.__init__ is an ordinary function, not a bound method, so you need to pass self explicitly:
class OfferedCourse(Course):
    # TODO: Define constructor with attributes:
    # number, title, instructor_name, term, class_time
    def __init__(self, number, title, instructor_name, term, class_time):
        Course.__init__(self, number, title)
        self.instructor_name = ''
        self.term = ''
        self.class_time = 0
or, you use super().__init__ and let self be passed implicitly via a bit of compiler-implemented magic.
class OfferedCourse(Course):
    # TODO: Define constructor with attributes:
    # number, title, instructor_name, term, class_time
    def __init__(self, number, title, instructor_name, term, class_time):
        super().__init__(number, title)
        self.instructor_name = ''
        self.term = ''
        self.class_time = 0
Either way, you need Course.__init__ to accept the arguments you are passing from OfferedCourse.__init__:
class Course:
    def __init__(self, number='', title=0):
        self.number = number
        self.title = title

    ...

class OfferedCourse(Course):
    # TODO: Define constructor with attributes:
    # number, title, instructor_name, term, class_time
    def __init__(self, number, title, instructor_name='', term='', class_time=0):
        super().__init__(number, title)
        self.instructor_name = instructor_name
        self.term = term
        self.class_time = class_time

    ...
|
I'm not sure what I'm doing wrong on this program
|
Define a Course base class with attributes number and title. Define a print_info() method that displays the course number and title.
Also define a derived class OfferedCourse with the additional attributes instructor_name, term, and class_time.
Ex: If the input is:
ECE287
Digital Systems Design
ECE387
Embedded Systems Design
Mark Patterson
Fall 2018
WF: 2-3:30 pm
the output is:
Course Information:
Course Number: ECE287
Course Title: Digital Systems Design
Course Information:
Course Number: ECE387
Course Title: Embedded Systems Design
Instructor Name: Mark Patterson
Term: Fall 2018
Class Time: WF: 2-3:30 pm
Here is the code I have so far:
class Course:
    # TODO: Define constructor with attributes: number, title
    def __init__(self):
        self.number = ''
        self.title = 0

    # TODO: Define print_info()
    def print_info(self):
        print(' Course Number:', self.number)
        print(' Title:', self.title)

class OfferedCourse(Course):
    # TODO: Define constructor with attributes:
    # number, title, instructor_name, term, class_time
    def __init__(self, number, title, instructor_name, term, class_time):
        Course.__init__(course_number, course_title)
        self.instructor_name = ''
        self.term = ''
        self.class_time = 0

if __name__ == '__main__':
    course_number = input()
    course_title = input()
    o_course_number = input()
    o_course_title = input()
    instructor_name = input()
    term = input()
    class_time = input()

    my_course = Course(course_number, course_title)
    my_course.print_info()

    my_offered_course = OfferedCourse(
        o_course_number, o_course_title, instructor_name, term, class_time
    )
    my_offered_course.print_info()
    print(' Instructor Name:', my_offered_course.instructor_name)
    print(' Term:', my_offered_course.term)
    print(' Class Time:', my_offered_course.class_time)
When I run the code, I'm getting the following error:
Traceback (most recent call last): File "main.py", line 32, in <module> my_course = Course(course_number, course_title) TypeError: __init__() takes 1 positional argument but 3 were given
|
[
"The thing is with the def __init__(self): method in the Course class. Here you are telling python that the class Course does not receive anything else than itself. If you want to be able to pass those arguments to init, but keep the default values, you can provide a default value inside init\ndef __init__(self, number: int = 0, title:str = ''):\n self.number = number\n self.title = title\n\n",
"Course.__init__ is an ordinary function, not a bound method, so you need to pass self explicitly:\nclass OfferedCourse(Course):\n # TODO: Define constructor with attributes:\n # number, title, instructor_name, term, class_time\n def __init__(self, number, title, instructor_name, term, class_time):\n Course.__init__(self, course_number, course_title)\n self.instructor_name = ''\n self.term = ''\n self.class_time = 0\n\nor, you use super().__init__ and let self be passed implicitly via a bit of compiler-implemented magic.\nclass OfferedCourse(Course):\n # TODO: Define constructor with attributes:\n # number, title, instructor_name, term, class_time\n def __init__(self, number, title, instructor_name, term, class_time):\n super().__init__(course_number, course_title)\n self.instructor_name = ''\n self.term = ''\n self.class_time = 0\n\nEither way, you need Course.__init__ to accept the arguments you are passing from OfferedCourse.__init__:\nclass Course:\n def __init__(self, number='', title=0):\n self.number = number\n self.title = title\n\n ...\n\nclass OfferedCourse(Course):\n # TODO: Define constructor with attributes:\n # number, title, instructor_name, term, class_time\n def __init__(self, number, title, instructor_name='', term='', class_time=0):\n super().__init__(number, title)\n self.instructor_name = instructor_name\n self.term = term\n self.class_time = class_time\n\n ...\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"derived_class",
"inheritance",
"python"
] |
stackoverflow_0074593913_derived_class_inheritance_python.txt
|
Q:
Why does it say IndexError: list index out of range?
I am a Python newbie. I am in the phase of testing my code, but I am quite confused about why this sometimes works and sometimes does not. As per my understanding, random.randint(0,13) means random numbers from 0 to 12, which is the number of items in my cards list.
Error im geting:
Traceback (most recent call last):
File "main.py", line 72, in <module>
generate_random_hand()
File "main.py", line 32, in generate_random_hand
computer_hand.append(cards[rand1])
IndexError: list index out of range
Here is the code:
# Init
cards = [11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]
computer_hand = []
player_hand = []
isContinue = True

# Generate first 2 cards of computer and player
def generate_random_hand():
    for _ in range(0,2):
        rand1 = random.randint(0,13)
        rand2 = random.randint(0,13)
        computer_hand.append(cards[rand1])
        player_hand.append(cards[rand2])
Here is the screenshot of the problem:
Image of ERROR
EDIT:
It seems I mistook the functionality of
for _ in range(), which does not include the 2nd argument,
for that of random.randint(), which does include the 2nd argument. (I cannot delete this post anymore, so I am leaving this note.)
A:
Seems you have an incorrect assumption.
A quick test gave me the following output:
>>> from random import randint
>>> randint(0,13)
3
>>> randint(0,13)
1
>>> randint(0,13)
10
>>> randint(0,13)
2
>>> randint(0,13)
12
>>> randint(0,13)
12
>>> randint(0,13)
3
>>> randint(0,13)
12
>>> randint(0,13)
6
>>> randint(0,13)
2
>>> randint(0,13)
13
So when you eventually get a 13, the exception tells you the value provided is not in the range of indices in your list of cards: 0-12
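As an aside, a sketch of patterns that avoid the off-by-one entirely (my own suggestion, not part of the original answer):
import random

cards = [11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]

card = cards[random.randrange(len(cards))]  # randrange excludes the stop value
card = random.choice(cards)                 # or simply pick an element directly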
A:
random.randint(0,13) goes from 0 to 13.
The array only goes from 0 to 12;
change random.randint(0,13) to:
random.randint(0, 12)
A:
I think I see your error. The list is 13 items long, so if randint generates 13, and you use cards[13], it is out of range, as indexes start from 0. In this case, you would do cards[randint(0, 12)].
|
Why does it say IndexError: list index out of range?
|
I am a Python newbie. I am in the phase of testing my code, but I am quite confused about why this sometimes works and sometimes does not. As per my understanding, random.randint(0,13) means random numbers from 0 to 12, which is the number of items in my cards list.
Error im geting:
Traceback (most recent call last):
File "main.py", line 72, in <module>
generate_random_hand()
File "main.py", line 32, in generate_random_hand
computer_hand.append(cards[rand1])
IndexError: list index out of range
Here is the code:
# Init
cards = [11, 2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10]
computer_hand = []
player_hand = []
isContinue = True

# Generate first 2 cards of computer and player
def generate_random_hand():
    for _ in range(0,2):
        rand1 = random.randint(0,13)
        rand2 = random.randint(0,13)
        computer_hand.append(cards[rand1])
        player_hand.append(cards[rand2])
Here is the screenshot of the problem:
Image of ERROR
EDIT:
It seems I mistook the functionality of
for _ in range(), which does not include the 2nd argument,
for that of random.randint(), which does include the 2nd argument. (I cannot delete this post anymore, so I am leaving this note.)
|
[
"Seems you have an incorrect assumption.\nA quick test gave me the following output:\n>>> from random import randint\n>>> randint(0,13)\n3\n>>> randint(0,13)\n1\n>>> randint(0,13)\n10\n>>> randint(0,13)\n2\n>>> randint(0,13)\n12\n>>> randint(0,13)\n12\n>>> randint(0,13)\n3\n>>> randint(0,13)\n12\n>>> randint(0,13)\n6\n>>> randint(0,13)\n2\n>>> randint(0,13)\n13\n\nSo when you eventually get a 13, the exception tells you the value provided is not in the range of indices in your list of cards: 0-12\n",
"random.randint(0,13) goes from 0 to 13.\nThe array only goes from 0 to 12;\nchange random.randint(0,13) to:\nrandom.randint(0, 12)\n\n",
"I think I see your error. The list is 13 items long, so if randint generates 13, and you use cards[13], it is out of range, as indexes start from 0. In this case, you would do cards[randint(0, 12)].\n"
] |
[
0,
0,
0
] |
[
"There are 12 elements in your list, you are trying to check for the 13th element when it should be computer_hand.append(cards[rand1 - 2]) since all indexes of elements start at 0. So there are actually 0,1,2,3,4,5,6,7,8,9,10,11 elements. Therefore, there should only be a maximum of index 11.\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0074593962_python.txt
|
Q:
How can I play audio with playsound and type in an entry box at the same time in tkinter?
I want to type something in the user_text entry box while the play_audio function is running
I tried the following code:
from tkinter import *
from playsound import playsound
root = Tk()
def play_audio():
    playsound('audio.mp3')
play_audio_button = Button(root, text='Play audio', command=play_audio)
user_text = Entry(root)
play_audio_button.pack()
user_text.pack(padx=10, pady=10)
mainloop()
but it doesn't let me do anything while the audio is playing in the background. It only lets me type after the audio is finished.
I also tried doing the same thing withouth tkinter and it works:
def play_audio():
    playsound('audio.mp3')

play_audio()
play_audio_input = input('Your text: \n')
It does let me type while the audio is playing in the background that way.
So how can I get it to work in tkinter?
A:
playsound can play sound in the background; you only need threads if you want to loop the sound or do more than play a single sound file.
def play_audio():
    playsound('audio.mp3', block=False)

If you want to loop the sound you don't need multiprocessing; the threading module is perfectly usable for running tasks in the background. This runs the audio in another thread, leaving the main thread free to run your GUI.
import threading

def play_audio():
    while True:
        playsound('audio.mp3')

play_audio_button = Button(root, text='Play audio', command=lambda: threading.Thread(target=play_audio).start())
|
How can I play audio with playsound and type in an entry box at the same time in tkinter?
|
I want to type something in the user_text entry box while the play_audio function is running
I tried the following code:
from tkinter import *
from playsound import playsound
root = Tk()
def play_audio():
    playsound('audio.mp3')
play_audio_button = Button(root, text='Play audio', command=play_audio)
user_text = Entry(root)
play_audio_button.pack()
user_text.pack(padx=10, pady=10)
mainloop()
but it doesn't let me do anything while the audio is playing in the background. It only lets me type after the audio is finished.
I also tried doing the same thing withouth tkinter and it works:
def play_audio():
    playsound('audio.mp3')

play_audio()
play_audio_input = input('Your text: \n')
It does let me type while the audio is playing in the background that way.
So how can I get it to work in tkinter?
|
[
"playsound can run sound in the background, you should use threads if you need to loop the sound or something more than just running a single sound file.\ndef play_audio():\n playsound('audio.mp3', block=False)\n\nif you want to loop the sound you don't need multiprocessing, the threading module is perfectly usable for running tasks in the background, which will run the audio in another thread, leaving the main thread to run your GUI.\nimport threading\n\ndef play_audio():\n while True:\n playsound('audio.mp3')\n\nplay_audio_button = Button(root, text='Play audio', command=lambda: threading.Thread(play_audio).start())\n\n"
] |
[
0
] |
[] |
[] |
[
"multiprocessing",
"playsound",
"python",
"tkinter"
] |
stackoverflow_0074593915_multiprocessing_playsound_python_tkinter.txt
|
Q:
How to use Boto to self-terminate instance its running on?
I need to terminate an instance from an AutoScalingGroup, as the ASG's policies are leaving the scaled-out instances running longer than desired. I need to terminate said instance after it has finished running a Python process.
The code already uses Boto to access other AWS services, so I'm looking to leverage Boto to self-terminate. I have been told that I need to detach the instance from its ASG prior to terminate to avoid side effects.
Any idea how I can go about doing this detachment and self-termination?
A:
An instance can be removed from an Auto Scaling Group by using detach_instances():
Removes one or more instances from the specified Auto Scaling group.
After the instances are detached, you can manage them independent of the Auto Scaling group.
If you do not specify the option to decrement the desired capacity, Amazon EC2 Auto Scaling launches instances to replace the ones that are detached.
response = client.detach_instances(
    InstanceIds=[
        'string',
    ],
    AutoScalingGroupName='string',
    ShouldDecrementDesiredCapacity=True|False
)
So, the steps would be:
Obtain the Instance ID to be removed
Call detach_instances(InstanceIds=['i-xxx'], ShouldDecrementDesiredCapacity=True)
Call terminate_instances(InstanceIds=['i-xxx'])
This can be run from the instance itself, or from anywhere on the Internet.
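Put together, a minimal sketch of both steps using boto3 (the instance ID and ASG name are placeholders; AWS credentials are assumed to be configured):
import boto3

instance_id = 'i-xxxxxxxxxxxxxxxxx'  # hypothetical instance ID

asg = boto3.client('autoscaling')
ec2 = boto3.client('ec2')

# Step 1: detach from the Auto Scaling group without launching a replacement
asg.detach_instances(
    InstanceIds=[instance_id],
    AutoScalingGroupName='my-asg',  # hypothetical ASG name
    ShouldDecrementDesiredCapacity=True,
)

# Step 2: terminate the instance
ec2.terminate_instances(InstanceIds=[instance_id])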
A:
If you want to get the instance ID automatically (when using an Auto Scaling group, you most likely don't know it in advance), you can use this:
from subprocess import Popen, PIPE
from ec2_metadata import ec2_metadata # pip3 install ec2-metadata
REGION = 'ap-southeast-1'
instance_id = ec2_metadata.instance_id
command_response = Popen(f"(aws autoscaling terminate-instance-in-auto-scaling-group --instance-id {instance_id} --should-decrement-desired-capacity --region {REGION})", stderr=PIPE, stdout=PIPE, shell=True)
....
Don't forget to attach autoscaling policy to your instance/template instance.
|
How to use Boto to self-terminate instance its running on?
|
I need to terminate an instance from an AutoScalingGroup, as the ASG's policies are leaving the scaled-out instances running longer than desired. I need to terminate said instance after it has finished running a Python process.
The code already uses Boto to access other AWS services, so I'm looking to leverage Boto to self-terminate. I have been told that I need to detach the instance from its ASG prior to terminate to avoid side effects.
Any idea how I can go about doing this detachment and self-termination?
|
[
"An instance can be removed from an Auto Scaling Group by using detach_instances():\n\nRemoves one or more instances from the specified Auto Scaling group.\nAfter the instances are detached, you can manage them independent of the Auto Scaling group.\nIf you do not specify the option to decrement the desired capacity, Amazon EC2 Auto Scaling launches instances to replace the ones that are detached.\n\nresponse = client.detach_instances(\n InstanceIds=[\n 'string',\n ],\n AutoScalingGroupName='string',\n ShouldDecrementDesiredCapacity=True|False\n)\n\nSo, the steps would be:\n\nObtain the Instance ID to be removed\nCall detach_instances(InstanceIds=['i-xxx'], ShouldDecrementDesiredCapacity=True)\nCall terminate_instances(InstanceIds=['i-xxx'])\n\nThis can be run from the instance itself, or from anywhere on the Internet.\n",
"If you want to get the instance id automatically (if you are using autoscale group you most likely don't know). So You can use this -\nfrom subprocess import Popen, PIPE\nfrom ec2_metadata import ec2_metadata # pip3 install ec2-metadata\n\nREGION = 'ap-southeast-1'\ninstance_id = ec2_metadata.instance_id\n\ncommand_response = Popen(f\"(aws autoscaling terminate-instance-in-auto-scaling-group --instance-id {instance_id} --should-decrement-desired-capacity --region {REGION})\", stderr=PIPE, stdout=PIPE, shell=True)\n\n....\n\nDon't forget to attach autoscaling policy to your instance/template instance.\n"
] |
[
0,
0
] |
[] |
[] |
[
"amazon_ec2",
"amazon_web_services",
"boto",
"python"
] |
stackoverflow_0052749959_amazon_ec2_amazon_web_services_boto_python.txt
|
Q:
PyTorch vectorized sum different from looped sum
I am using torch 1.7.1 and I noticed that vectorized sums are different from sums in a loop if the indices are repeated. For example:
import torch
indices = torch.LongTensor([0,1,2,1])
values = torch.FloatTensor([1,1,2,2])
result = torch.FloatTensor([0,0,0])
looped_result = torch.zeros_like(result)
for i in range(indices.shape[0]):
    looped_result[indices[i]] += values[i]
result[indices] += values
print('result:',result)
print('looped result:', looped_result)
results in:
result: tensor([1., 2., 2.])
looped result: tensor([1., 3., 2.])
As you can see the looped variable has the correct sums while the vectorized one doesn’t. Is it possible to avoid the loop and still get the correct result?
A:
The issue here is that you're indexing result multiple times at the same index, which is bound to fail for this inplace operation. Instead what you'd need to use is index_add or index_add_, e.g. (as a continuation of your snippet):
>>> result_ia = torch.zeros_like(result)
>>> result_ia.index_add_(0, indices, values)
tensor([1., 3., 2.])
|
PyTorch vectorized sum different from looped sum
|
I am using torch 1.7.1 and I noticed that vectorized sums are different from sums in a loop if the indices are repeated. For example:
import torch
indices = torch.LongTensor([0,1,2,1])
values = torch.FloatTensor([1,1,2,2])
result = torch.FloatTensor([0,0,0])
looped_result = torch.zeros_like(result)
for i in range(indices.shape[0]):
    looped_result[indices[i]] += values[i]
result[indices] += values
print('result:',result)
print('looped result:', looped_result)
results in:
result: tensor([1., 2., 2.])
looped result: tensor([1., 3., 2.])
As you can see the looped variable has the correct sums while the vectorized one doesn’t. Is it possible to avoid the loop and still get the correct result?
|
[
"The issue here is that you're indexing result multiple times at the same index, which is bound to fail for this inplace operation. Instead what you'd need to use is index_add or index_add_, e.g. (as a continuation of your snippet):\n>>> result_ia = torch.zeros_like(result)\n>>> result_ia.index_add_(0, indices, values)\ntensor([1., 3., 2.]\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"pytorch"
] |
stackoverflow_0074593825_python_pytorch.txt
|
Q:
NLTK download SSL: Certificate verify failed
I get the following error when trying to install Punkt for nltk:
nltk.download('punkt')
[nltk_data] Error loading Punkt: <urlopen error [SSL:
[nltk_data] CERTIFICATE_VERIFY_FAILED] certificate verify failed
[nltk_data] (_ssl.c:590)>
False
A:
TLDR: Here is a better solution: https://github.com/gunthercox/ChatterBot/issues/930#issuecomment-322111087
Note that when you run nltk.download(), a window will pop up and let you select which packages to download (Download is not automatically started right away).
To complement the accepted answer, the following is a complete list of directories that will be searched on Mac (not limited to the one mentioned in the accepted answer):
- '/Users/YOUR_USERNAME/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- '/Users/YOUR_USERNAME/YOUR_VIRTUAL_ENV_DIRECTORY/nltk_data'
- '/Users/YOUR_USERNAME/YOUR_VIRTUAL_ENV_DIRECTORY/share/nltk_data'
- '/Users/YOUR_USERNAME/YOUR_VIRTUAL_ENV_DIRECTORY/lib/nltk_data'
In case the link above dies, here is the solution pasted in its entirety:
import nltk
import ssl
try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    pass
else:
    ssl._create_default_https_context = _create_unverified_https_context

nltk.download()
Run the above code in your favourite Python IDE or via the command line.
A:
This works by disabling SSL check!
import nltk
import ssl
try:
    _create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
    pass
else:
    ssl._create_default_https_context = _create_unverified_https_context

nltk.download()
A:
Run the Python interpreter and type the commands:
import nltk
nltk.download()
from here: http://www.nltk.org/data.html
if you get an SSL/Certificate error, run the following command
bash /Applications/Python 3.6/Install Certificates.command
from here: ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)
A:
The downloader script is broken. As a temporary workaround, you can manually download the punkt tokenizer from here and then place the unzipped folder in the corresponding location. The default folders for each OS are:
Windows: C:\nltk_data\tokenizers
OSX: /usr/local/share/nltk_data/tokenizers
Unix: /usr/share/nltk_data/tokenizers
A:
Search for 'Install Certificates.command' in Finder and open it.
Then do the following steps in the terminal:
python3
import nltk
nltk.download()
A:
You just need to install the certificate with this simple step:
In the Python application folder, double-click on the file 'Install Certificates.command'.
A terminal window will open and automatically install the certificate for you; close this window and try again.
A:
This is how I solved it for macOS.
Initially after installing nltk, I was getting the SSL error.
Solution:
Go to
cd /Applications/Python\ 3.8
Run the command
./Install\ Certificates.command
Now if you try again, it should work!
Thanks a lot to this article!
A:
My solution is:
Download punkt.zip from here and unzip it
Create an nltk_data/tokenizers folder under your home folder
Put the punkt folder under the tokenizers folder
A:
There is a very simple way to fix all of this as written in the formal bug report for anyone else coming across this problem recently (e.g. 2019) and using MacOS. From the bug report at https://bugs.python.org/issue28150:
...there is a simple double-clickable or command-line-runnable script ("/Applications/Python 3.6/Install Certificates.command") that does two things: 1. uses pip to install certifi and 2. creates a symlink in the OpenSSL directory to certifi's installed bundle location.
Simply running the "Install Certificates.command" script worked for me on MacOS (10.15 beta as of this writing) and I was off and running.
A:
My solution after nothing else worked: I navigated via the GUI to the Python 3.7 folder, opened the 'Install Certificates.command' file in Terminal, and the SSL issue was immediately resolved.
A:
A bit late to the party but I just entered Certificates.command into Spotlight which found it and ran it. All fixed in seconds.
I'm running mac Catalina and using python 3.7 installed by Homebrew
A:
It means that your system is missing the CA certificates that Python needs to verify HTTPS connections.
If you are using Linux (Ubuntu)
~$ sudo apt-get install ca-certificates
Should solve the issue.
If you are using this in a script with a Dockerfile, you have to make sure you install the ca-certificates package in your Dockerfile.
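For example, in a Debian/Ubuntu-based image this could look like the following (a minimal sketch, not a complete Dockerfile):
RUN apt-get update && apt-get install -y ca-certificates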
A:
For mac users,
just copy paste the following in the terminal:
/Applications/Python\ 3.10/Install\ Certificates.command ; exit;
A:
First go to the path /Applications/Python 3.6/ and run
Install Certificates.command
You will need admin rights for this.
If you are unable to download it, then, as other answers suggest, you can download the data directly and place it manually. You need to place the files in the following directory structure.
> nltk_data
> corpora
> brown
> conll2000
> movie_reviews
> wordnet
> taggers
> averaged_perceptron_tagger
> tokenizers
> punkt
A:
Updating the python certificates worked for me.
At the top of your script, keep:
import nltk
nltk.download('punkt')
In a separate terminal run (Mac):
bash "/Applications/Python <version>/Install Certificates.command"
|
NLTK download SSL: Certificate verify failed
|
I get the following error when trying to install Punkt for nltk:
nltk.download('punkt')
[nltk_data] Error loading Punkt: <urlopen error [SSL:
[nltk_data] CERTIFICATE_VERIFY_FAILED] certificate verify failed
[nltk_data] (_ssl.c:590)>
False
|
[
"TLDR: Here is a better solution: https://github.com/gunthercox/ChatterBot/issues/930#issuecomment-322111087\nNote that when you run nltk.download(), a window will pop up and let you select which packages to download (Download is not automatically started right away).\nTo complement the accepted answer, the following is a complete list of directories that will be searched on Mac (not limited to the one mentioned in the accepted answer):\n\n - '/Users/YOUR_USERNAME/nltk_data'\n - '/usr/share/nltk_data'\n - '/usr/local/share/nltk_data'\n - '/usr/lib/nltk_data'\n - '/usr/local/lib/nltk_data'\n - '/Users/YOUR_USERNAME/YOUR_VIRTUAL_ENV_DIRECTORY/nltk_data'\n - '/Users/YOUR_USERNAME/YOUR_VIRTUAL_ENV_DIRECTORY/share/nltk_data'\n - '/Users/YOUR_USERNAME/YOUR_VIRTUAL_ENV_DIRECTORY/lib/nltk_data'\n\nIn case the link above dies, here is the solution pasted in its entirety:\nimport nltk\nimport ssl\n\ntry:\n _create_unverified_https_context = ssl._create_unverified_context\nexcept AttributeError:\n pass\nelse:\n ssl._create_default_https_context = _create_unverified_https_context\n\nnltk.download()\n\nRun the above code in your favourite Python IDE or via the command line.\n",
"This works by disabling SSL check!\nimport nltk\nimport ssl\n\ntry:\n _create_unverified_https_context = ssl._create_unverified_context\nexcept AttributeError:\n pass\nelse:\n ssl._create_default_https_context = _create_unverified_https_context\n\nnltk.download()\n\n",
"Run the Python interpreter and type the commands:\nimport nltk\nnltk.download()\n\nfrom here: http://www.nltk.org/data.html\nif you get an SSL/Certificate error, run the following command\nbash /Applications/Python 3.6/Install Certificates.command\nfrom here: ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:749)\n",
"The downloader script is broken. As a temporal workaround can manually download the punkt tokenizer from here and then place the unzipped folder in the corresponding location. The default folders for each OS are:\n\nWindows: C:\\nltk_data\\tokenizers\nOSX: /usr/local/share/nltk_data/tokenizers\nUnix: /usr/share/nltk_data/tokenizers\n\n",
"Search 'Install Certificates.command' in the finder and open it.\nThen do the following steps in the terminal:\npython3\nimport nltk\nnltk.download()\n\n",
"You just need to Install the certificate doing this simple step \nIn the python application folder double-click on the file 'Certificates.command'\nthis will make a prompt window show in your screen and basically will automatically install the certificate for you, close this window and try again.\n",
"This is how I solved it for MAC OS.\nInitially after installing nltk, I was getting the SSL error.\nSolution:\nGoto\ncd /Applications/Python\\ 3.8\n\nRun the command\n./Install\\ Certificates.command\n\nNow if you try again, it should work!\nThanks a lot to this article!\n",
"My solution is:\n\nDownload punkt.zip from here and unzip\nCreate nltk_data/tokenizers folders under home folder\nPut punkt folder under tokenizers folder\n\n",
"There is a very simple way to fix all of this as written in the formal bug report for anyone else coming across this problem recently (e.g. 2019) and using MacOS. From the bug report at https://bugs.python.org/issue28150:\n\n...there is a simple double-clickable or command-line-runnable script (\"/Applications/Python 3.6/Install Certificates.command\") that does two things: 1. uses pip to install certifi and 2. creates a symlink in the OpenSSL directory to certifi's installed bundle location. \n\nSimply running the \"Install Certificates.command\" script worked for me on MacOS (10.15 beta as of this writing) and I was off and running.\n",
"My solution after nothing worked. I navigated, via the GUI to the Python 3.7 folder, opened the 'Certificates.command' file in terminal and the SSL issue was immediately resolved. \n",
"A bit late to the party but I just entered Certificates.command into Spotlight which found it and ran it. All fixed in seconds.\nI'm running mac Catalina and using python 3.7 installed by Homebrew\n",
"It means that you are not using HTTPS to work consistently with other run time dependencies for Python etc.\nIf you are using Linux (Ubuntu)\n~$ sudo apt-get install ca-certificates\n\nShould solve the issue.\nIf you are using this in a script with a docker file, you have to make sure you have install the the ca-certificates modules in your docker file.\n",
"For mac users,\njust copy paste the following in the terminal:\n/Applications/Python\\ 3.10/Install\\ Certificates.command ; exit;\n",
"First go to the path /Applications/Python 3.6/ and run \nInstall Certificates.command\nYou will admin rights for the same.\nIf you are unable to download it, then as other answer suggest you can download directly and place it. You need to place them in the following directory structure.\n> nltk_data\n > corpora\n > brown\n > conll2000\n > movie_reviews\n > wordnet\n > taggers\n > averaged_perceptron_tagger\n > tokenizers\n > punkt\n\n",
"Updating the python certificates worked for me.\nAt the top of your script, keep:\nimport nltk\nnltk.download('punkt')\n\nIn a separate terminal run (Mac):\nbash /Applications/Python <version>/Install Certificates.command\n\n"
] |
[
160,
59,
36,
27,
26,
7,
7,
4,
3,
2,
2,
1,
1,
0,
0
] |
[
"For me, the solution was much simpler: I was still connected to my corporate network/VPN which blocks certain types of downloads. Switching the network made the SSL error disappear.\n"
] |
[
-1
] |
[
"nltk",
"python",
"ssl_certificate"
] |
stackoverflow_0038916452_nltk_python_ssl_certificate.txt
|
Q:
HTML won't work when I send a message with CKEditor
Right now I'm trying to send bulk messages in an app made with Python. When I do, the message that is supposed to be formatted with HTML won't render.
emails = [c for c in view_contactos]
if add.validate_on_submit(): #validamos datos
subject = add.title.data
body_message = add.body.data
#conexion al server
context = ssl.create_default_context()
server = smtplib.SMTP_SSL('smtp.gmail.com', DevConfig.MAIL_PORT, context=context)
server.login(DevConfig.MAIL_USERNAME, DevConfig.MAIL_PASSWORD)
#envio del correo
for row in emails:
em = EmailMessage()
em['From'] = DevConfig.MAIL_USERNAME
em['To'] = row
em['Subject'] = subject
em.set_content(body_message)
server.send_message(em)
server.close()
print('done')
This is the code I'm using to send the messages
And this is an example of how you can see it in gmail
I am trying to send bulk html messages,
I want the messages to be rendered when they are delivered to their recipients
A:
In the line em.set_content(body_message) I had to add , subtype="html" after body_message:
em.set_content(body_message, subtype="html")
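For context, a minimal sketch of a fixed message (the addresses are hypothetical placeholders):
from email.message import EmailMessage

em = EmailMessage()
em['From'] = 'sender@example.com'
em['To'] = 'recipient@example.com'
em['Subject'] = 'Formatted message'
# subtype="html" sets Content-Type: text/html, so clients like Gmail
# render the markup instead of showing it as plain text
em.set_content('<h1>Hello</h1><p>This <b>renders</b> as HTML.</p>', subtype='html')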
|
HTML won't work when I send a message with CKEditor
|
Right now I'm trying to send bulk messages in an app made with Python. When I do, the message that is supposed to be formatted with HTML won't render.
emails = [c for c in view_contactos]
if add.validate_on_submit(): #validamos datos
subject = add.title.data
body_message = add.body.data
#conexion al server
context = ssl.create_default_context()
server = smtplib.SMTP_SSL('smtp.gmail.com', DevConfig.MAIL_PORT, context=context)
server.login(DevConfig.MAIL_USERNAME, DevConfig.MAIL_PASSWORD)
#envio del correo
for row in emails:
em = EmailMessage()
em['From'] = DevConfig.MAIL_USERNAME
em['To'] = row
em['Subject'] = subject
em.set_content(body_message)
server.send_message(em)
server.close()
print('done')
This is the code I'm using to send the messages
And this is an example of how you can see it in gmail
I am trying to send bulk html messages,
I want the messages to be rendered when they are delivered to their recipients
|
[
"In the line em.set_content(body_message) I had to write \", subtype=\"html\" after the body_message \nem.set_content(body_message, subtype=\"html\")\n"
] |
[
0
] |
[] |
[] |
[
"bulk_mail",
"email",
"flask",
"gmail",
"python"
] |
stackoverflow_0074594039_bulk_mail_email_flask_gmail_python.txt
|
Q:
Is there a way to use pylast to get the top tracks?
I need to use the pylast module and last.fm api to get the top tracks (https://www.last.fm/api/show/chart.getTopTracks) but I can't find how to do this in python.
API_KEY = "my key"
API_SECRET = "my secret"
network = pylast.LastFMNetwork(api_key = API_KEY)
print(network.chart_get_top_tracks())
But the chart_get_top_tracks() method doesn't exist. How do I do this?
If it helps, my end result will hopefully be a live Spotify playlist of the top 100 songs. It will hopefully update every 10 mins or so. I've got the playlist_add_items() working with Spotify and now all I need is to find a way to get the top 100 and use them both together.
I tried pylast.LastFMNetwork(api_key = API_KEY).network.chart_get_top_tracks() and I was hoping for it to return 50 top tracks but this method is not the real one. Am I doing something completely wrong or do I just not know the method name?
A:
You're looking for network.get_top_tracks(), not network.chart_get_top_tracks():
import pylast
API_KEY = "TODO"
network = pylast.LastFMNetwork(api_key=API_KEY)
tracks = network.get_top_tracks()
for track in tracks[:10]:
print(track.item)
Outputs:
Taylor Swift - Anti-Hero
Drake - Rich Flex
Taylor Swift - Lavender Haze
Taylor Swift - Snow on the Beach (feat. Lana Del Rey)
Taylor Swift - Midnight Rain
Taylor Swift - Karma
Taylor Swift - Maroon
Taylor Swift - You're On Your Own, Kid
Steve Lacy - Bad Habit
Taylor Swift - Bejeweled
|
Is there a way to use pylast to get the top tracks?
|
I need to use the pylast module and last.fm api to get the top tracks (https://www.last.fm/api/show/chart.getTopTracks) but I can't find how to do this in python.
API_KEY = "my key"
API_SECRET = "my secret"
network = pylast.LastFMNetwork(api_key = API_KEY)
print(network.chart_get_top_tracks())
But the chart_get_top_tracks() method doesn't exist. How do I do this?
If it helps, my end result will hopefully be a live Spotify playlist of the top 100 songs. It will hopefully update every 10 mins or so. I've got the playlist_add_items() working with Spotify and now all I need is to find a way to get the top 100 and use them both together.
I tried pylast.LastFMNetwork(api_key = API_KEY).network.chart_get_top_tracks() and I was hoping for it to return 50 top tracks but this method is not the real one. Am I doing something completely wrong or do I just not know the method name?
|
[
"You're looking for network.get_top_tracks(), not network.chart_get_top_tracks():\nimport pylast\n\nAPI_KEY = \"TODO\"\n\nnetwork = pylast.LastFMNetwork(api_key=API_KEY)\ntracks = network.get_top_tracks()\n\nfor track in tracks[:10]:\n print(track.item)\n\nOutputs:\nTaylor Swift - Anti-Hero\nDrake - Rich Flex\nTaylor Swift - Lavender Haze\nTaylor Swift - Snow on the Beach (feat. Lana Del Rey)\nTaylor Swift - Midnight Rain\nTaylor Swift - Karma\nTaylor Swift - Maroon\nTaylor Swift - You're On Your Own, Kid\nSteve Lacy - Bad Habit\nTaylor Swift - Bejeweled\n\n"
] |
[
0
] |
[] |
[] |
[
"api",
"last.fm",
"pylast",
"python",
"spotify"
] |
stackoverflow_0074320861_api_last.fm_pylast_python_spotify.txt
|
Q:
Error when using include('admin.site.urls'): Passing a 3-tuple to include() is not supported
I'm fairly new to Python and I am using a video tutorial on Lynda to help me build the framework for a Social WebApp. I'm trying to run the server using python manage.py runserver from the cmd; however, I keep running into this error message.
CMD PROMPT ERROR
Traceback (most recent call last):
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\management\commands\runserver.py", line 121, in inner_run
self.check(display_num_errors=True)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\management\base.py", line 364, in check
include_deployment_checks=include_deployment_checks,
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\management\base.py", line 351, in _run_checks
return checks.run_checks(**kwargs)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\checks\registry.py", line 73, in run_checks
new_errors = check(app_configs=app_configs)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\checks\urls.py", line 40, in check_url_namespaces_unique
all_namespaces = _load_all_namespaces(resolver)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\checks\urls.py", line 57, in _load_all_namespaces
url_patterns = getattr(resolver, 'url_patterns', [])
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\utils\functional.py", line 36, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\urls\resolvers.py", line 536, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\utils\functional.py", line 36, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\urls\resolvers.py", line 529, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Users\Kelechi\AppData\Local\Programs\Python\Python35\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "C:\Windows\SysWOW64\bookmarks\bookmarks\urls.py", line 20, in <module>
url(r'^admin/', include(admin.site.urls)),
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\urls\conf.py", line 27, in include
'provide the namespace argument to include() instead.' % len(arg)
django.core.exceptions.ImproperlyConfigured: Passing a 3-tuple to include() is not supported. Pass a 2-tuple containing the list of patterns and app_name, and provide the namespace argument to include() instead.
My urls.py looks like this:
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include(admin.site.urls)),
...
]
A:
In Django 2.0 you can no longer use include(admin.site.urls) (release notes). Just use admin.site.urls instead.
from django.contrib import admin
urlpatterns = [
url(r'^admin/', admin.site.urls),
...
]
|
Error when using include('admin.site.urls'): Passing a 3-tuple to include() is not supported
|
I'm fairly new to Python and I am using a video tutorial on Lynda to help me build the framework for a Social WebApp. I'm trying to run the server using python manage.py runserver from the cmd; however, I keep running into this error message.
CMD PROMPT ERROR
Traceback (most recent call last):
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\utils\autoreload.py", line 225, in wrapper
fn(*args, **kwargs)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\management\commands\runserver.py", line 121, in inner_run
self.check(display_num_errors=True)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\management\base.py", line 364, in check
include_deployment_checks=include_deployment_checks,
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\management\base.py", line 351, in _run_checks
return checks.run_checks(**kwargs)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\checks\registry.py", line 73, in run_checks
new_errors = check(app_configs=app_configs)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\checks\urls.py", line 40, in check_url_namespaces_unique
all_namespaces = _load_all_namespaces(resolver)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\core\checks\urls.py", line 57, in _load_all_namespaces
url_patterns = getattr(resolver, 'url_patterns', [])
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\utils\functional.py", line 36, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\urls\resolvers.py", line 536, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\utils\functional.py", line 36, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\urls\resolvers.py", line 529, in urlconf_module
return import_module(self.urlconf_name)
File "C:\Users\Kelechi\AppData\Local\Programs\Python\Python35\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 665, in exec_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "C:\Windows\SysWOW64\bookmarks\bookmarks\urls.py", line 20, in <module>
url(r'^admin/', include(admin.site.urls)),
File "C:\Users\Kelechi\AppData\Roaming\Python\Python35\site-packages\django\urls\conf.py", line 27, in include
'provide the namespace argument to include() instead.' % len(arg)
django.core.exceptions.ImproperlyConfigured: Passing a 3-tuple to include() is not supported. Pass a 2-tuple containing the list of patterns and app_name, and provide the namespace argument to include() instead.
My urls.py looks like this:
from django.contrib import admin
urlpatterns = [
url(r'^admin/', include(admin.site.urls)),
...
]
|
[
"In Django 2.0 you can no longer use include(admin.site.urls) (release notes). Just use admin.site.urls instead.\nfrom django.contrib import admin\n\nurlpatterns = [\n url(r'^admin/', admin.site.urls),\n ...\n]\n\n"
] |
[
12
] |
[
"As from Django version 2.0, the documentation clears this:\nWhen to use include()\nYou should always use include() when you include other URL patterns. admin.site.urls is the only exception to this.\n",
"Including another URLconf :\n\nImport the include() function : from django.urls import include, path\nAdd a URL to urlpatterns : path('blog/', include('blog.urls'))\n\nI was having this trouble and it was a simple as of putting your url into single quotes.\n"
] |
[
-1,
-1
] |
[
"django",
"django_2.0",
"python",
"valueerror"
] |
stackoverflow_0048203313_django_django_2.0_python_valueerror.txt
|
Q:
Problem with monthly data on yfinance for Python
I am having a problem downloading monthly data for any ticker (or list of tickers). The dates in the index of the result show more than just the beginning of the month.
Example :
import yfinance as yf
y_params = {
'tickers': 'AAPL',
'start': '2020-01-01',
'end': '2022-11-01',
'interval': '1mo'
}
data = yf.download(**y_params)['Adj Close']
The result I get for data is :
Date
2020-01-01 75.805000
2020-02-01 66.951164
2020-02-07 NaN
2020-03-01 62.428360
2020-04-01 72.128082
2020-05-01 78.054482
2020-05-08 NaN
2020-06-01 89.801064
2020-07-01 104.630051
2020-08-01 127.060638
2020-08-07 NaN
2020-08-31 NaN
2020-09-01 114.239166
2020-10-01 107.383446
2020-11-01 117.435234
2020-11-06 NaN
2020-12-01 131.116058
2021-01-01 130.394714
2021-02-01 119.821625
2021-02-05 NaN
2021-03-01 120.881424
2021-04-01 130.094742
2021-05-01 123.315880
2021-05-07 NaN
2021-06-01 135.767838
2021-07-01 144.590378
2021-08-01 150.508408
2021-08-06 NaN
2021-09-01 140.478470
2021-10-01 148.718552
2021-11-01 164.106659
2021-11-05 NaN
2021-12-01 176.545380
2022-01-01 173.771454
2022-02-01 164.167221
2022-02-04 NaN
2022-03-01 173.823639
2022-04-01 156.940002
2022-05-01 148.169693
2022-05-06 NaN
2022-06-01 136.304245
2022-07-01 162.015808
2022-08-01 156.741913
2022-08-05 NaN
2022-09-01 137.971115
2022-10-01 153.086044
Name: Adj Close, dtype: float64
As you can see, I have a lot of NaN values on apparently random dates.
Am I doing something wrong, or is this a bug?
Thank you in advance
A:
Yahoo Finance generates a special record each time there is a split or a dividend payment.
In your data, we see a NaN every 3 months. That's a dividend entry. Other NaN are probably splits.
You can't see the amounts because you only look at one column ('Adj Close').
I can't provide more details because last time I looked at yfinance was 10 years ago.
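If you just want the month-start prices, a minimal workaround sketch is to drop those placeholder rows:
# the dividend/split entries only carry NaN in 'Adj Close', so dropna() removes them
monthly = data.dropna()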
A:
It looks like this issue has been solved by yfinance after version 0.1.87.
It now downloads the data correctly, properly handling the NaN values.
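To pick up the fix, upgrading should be enough (assuming pip):
pip install --upgrade yfinance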
|
Problem with monthly data on yfinance for Python
|
I am having a problem downloading monthly data for any ticker (or list of tickers). The dates in the index of the result show more than just the beginning of the month.
Example :
import yfinance as yf
y_params = {
'tickers': 'AAPL',
'start': '2020-01-01',
'end': '2022-11-01',
'interval': '1mo'
}
data = yf.download(**y_params)['Adj Close']
The result I get for data is :
Date
2020-01-01 75.805000
2020-02-01 66.951164
2020-02-07 NaN
2020-03-01 62.428360
2020-04-01 72.128082
2020-05-01 78.054482
2020-05-08 NaN
2020-06-01 89.801064
2020-07-01 104.630051
2020-08-01 127.060638
2020-08-07 NaN
2020-08-31 NaN
2020-09-01 114.239166
2020-10-01 107.383446
2020-11-01 117.435234
2020-11-06 NaN
2020-12-01 131.116058
2021-01-01 130.394714
2021-02-01 119.821625
2021-02-05 NaN
2021-03-01 120.881424
2021-04-01 130.094742
2021-05-01 123.315880
2021-05-07 NaN
2021-06-01 135.767838
2021-07-01 144.590378
2021-08-01 150.508408
2021-08-06 NaN
2021-09-01 140.478470
2021-10-01 148.718552
2021-11-01 164.106659
2021-11-05 NaN
2021-12-01 176.545380
2022-01-01 173.771454
2022-02-01 164.167221
2022-02-04 NaN
2022-03-01 173.823639
2022-04-01 156.940002
2022-05-01 148.169693
2022-05-06 NaN
2022-06-01 136.304245
2022-07-01 162.015808
2022-08-01 156.741913
2022-08-05 NaN
2022-09-01 137.971115
2022-10-01 153.086044
Name: Adj Close, dtype: float64
As you can see, I have a lot of NaN values on apparently random dates.
Am I doing something wrong, or is this a bug?
Thank you in advance
|
[
"Yahoo Finance generate a special record each time there is a split or a dividend payment.\nIn your data, we see a NaN every 3 months. That's a dividend entry. Other NaN are probably splits.\nYou can't see the amounts because you only look at one column ('Adj Close').\nI can't provide more details because last time I looked at yfinance was 10 years ago.\n",
"It looks like this issue has been solved by yfinance after version 0.1.87.\nIt now downloads correctly properly adjusting for NaN values.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"yfinance"
] |
stackoverflow_0074586281_python_yfinance.txt
|
Q:
How to solve Local path is not registered within uploads in the request in PyCharm 2022.2.1 (Professional Edition)?
I want to set up a Django project with docker-compose and PyCharm on my PC with Ubuntu 22.04 OS. Using PyCharm 2022.2.1 (Professional) I get the following error
How to solve Local path is not registered within uploads in the request
I added a Python interpreter from Settings > project > Python interpreter and then add interpreter > on SSH after that entered ssh credentials and on the system interpreter finally I created the Python interpreter.
I have docker-compose running in another terminal.
After I run the runserver command it shows this error:
this is the runserver command configuration:
I have recreated the interpreter and searched for the same problem on the JetBrains website, but couldn't solve the issue.
A:
This bug was reported as PY-55396 on the JetBrains bug tracker.
The bug was solved in PyCharm 2022.2.2, the solution is to upgrade to that version or downgrade to PyCharm 2021.3.
|
How to solve Local path is not registered within uploads in the request in PyCharm 2022.2.1 (Professional Edition)?
|
I want to set up a Django project with docker-compose and PyCharm on my PC with Ubuntu 22.04 OS. Using PyCharm 2022.2.1 (Professional) I get the following error
How to solve Local path is not registered within uploads in the request
I added a Python interpreter from Settings > project > Python interpreter and then add interpreter > on SSH after that entered ssh credentials and on the system interpreter finally I created the Python interpreter.
I have docker-compose running in another terminal.
After I run the runserver command it shows this error:
this is the runserver command configuration:
I have recreated the interpreter and searched for the same problem on the JetBrains website, but couldn't solve the issue.
|
[
"This bug was reported as PY-55396 on the JetBrains bug tracker.\nThe bug was solved in PyCharm 2022.2.2, the solution is to upgrade to that version or downgrade to PyCharm 2021.3.\n"
] |
[
1
] |
[] |
[] |
[
"django",
"docker_compose",
"interpreter",
"pycharm",
"python"
] |
stackoverflow_0074221022_django_docker_compose_interpreter_pycharm_python.txt
|
Q:
Transpose Columns in Python
Let's say I have the following table:
Produced by the following Python code
import pandas as pd
data = [["Car","Sport","Wheel", 4],
["Car", "Sport","engine HP", 65],
["Car", "Sport","windows", 5],
["Car","Van","Wheel", 4],
["Car", "Van","engine HP", 85],
["Car", "Van","windows", 8],
["Truck","Small","Wheel", 4],
["Truck", "Small","engine HP", 125],
["Truck", "Small","windows", 2],
["Truck","Large","Wheel", 8],
["Truck", "Large","engine HP", 200],
["Truck", "Large","windows", 2]
]
df = pd.DataFrame(data)
#define header names
df.columns = ["Vehicle", "Type","Parameter","Value"]
df here
How do I manipulate my DataFrame to transpose the parameter values when I don't know in advance the content of the Parameter column, or how many types of parameters there might be?
The end result would be the following table
"Vehicle","Type","Wheel","Engine","Windows"
"Car","Sport",4,65,51
"Car","Van",4,85,8
"Truck","Small",4,125,2
"Truck","Large",8,200,2
A:
You can try the built-in transpose method provided by pandas.
You can have a look at pandas.DataFrame.transpose.
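Note that transpose flips the whole frame. For the exact output shown in the question, a pivot (reshaping the Parameter column into headers) is likely closer; a minimal sketch, assuming pandas >= 1.1 for the list-valued index:
out = (df.pivot(index=['Vehicle', 'Type'], columns='Parameter', values='Value')
         .reset_index())
print(out)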
|
Transpose Columns in Python
|
Let's say I have the following table:
Produced by the following Python code
import pandas as pd
data = [["Car","Sport","Wheel", 4],
["Car", "Sport","engine HP", 65],
["Car", "Sport","windows", 5],
["Car","Van","Wheel", 4],
["Car", "Van","engine HP", 85],
["Car", "Van","windows", 8],
["Truck","Small","Wheel", 4],
["Truck", "Small","engine HP", 125],
["Truck", "Small","windows", 2],
["Truck","Large","Wheel", 8],
["Truck", "Large","engine HP", 200],
["Truck", "Large","windows", 2]
]
df = pd.DataFrame(data)
#define header names
df.columns = ["Vehicle", "Type","Parameter","Value"]
df here
How do I manipulate my DataFrame to transpose the parameter values when I don't know in advance the content of the Parameter column, or how many types of parameters there might be?
The end result would be the following table
"Vehicle","Type","Wheel","Engine","Windows"
"Car","Sport",4,65,51
"Car","Van",4,85,8
"Truck","Small",4,125,2
"Truck","Large",8,200,2
|
[
"You can try the built-in transpose method provided by pandas.\nYou can have a look about pandas.DataFrame.transpose\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074594043_pandas_python.txt
|
Q:
Python Find columns of second dataframe with matching index to first dataframe
I have two dataframes.
Input data
# First df mainly consists of data provided by the user
fdf = pd.DataFrame(columns=['user_data'],data=[10,14,1],index=['alpha','beta','gamma'])
user_data
alpha 10
beta 14
gamma 1
# Second df is basically default data describing the kinds of analysis I can run based on the data in the first dataframe provided by the user
sdf = pd.DataFrame(columns=['AD_analysis','BGD_analysis','ABG_analysis'],
data=[[1,0,1],[0,1,1],[0,1,1],[1,1,0]],index=['alpha','beta','gamma','delta'])
sdf =
AD_analysis BGD_analysis ABG_analysis
alpha 1 0 1
beta 0 1 1
gamma 0 1 1
delta 1 1 0
# Above table basically tells us that we can do AD_analysis if alpha, delta values are given by the user in the first df
So, I want to know what kind of analysis (sdf) I can run based on the data provided by the user (fdf).
Expected answer:
# Since delta is not given, I cannot run any analysis associated with this parameter
# Possible analysis with given data is
['ABG_analysis']
My approach:
# find common index
com_idx = fdf.index.intersection(sdf.index)
if len(com_idx)==3 & com_idx.isin('alpha'):
print('ABG_analysis')
if len(com_idx)==3 & com_idx.isin('delta'):
print('BGD_analysis')
if len(com_idx)==2 :
print('AD_analysis')
Too many if statements do not seem like the best Pythonic approach. Can you suggest a better one?
A:
Get the indices provided by the user from your second table. Then subset the columns where all the entries are equal to 1.
sdf.loc[fdf.index].eq(1).all(0).loc[lambda x:x].index
Index(['ABG_analysis'], dtype='object')
A:
Assuming you want to identify the analyses for which no required data is missing. You can use:
# get indices not provided by user
diff = sdf.index.difference(fdf.index)
# ensure they are not required for an analysis
sdf.columns[~sdf.reindex(diff).any()]
Output: Index(['ABG_analysis'], dtype='object')
If you want to ensure that all data is used (an analysis requiring only alpha and beta would be excluded):
sdf.columns[sdf.reindex(fdf.index).all()
&~sdf.loc[sdf.index.difference(fdf.index)].any()]
Used inputs:
fdf = pd.DataFrame(columns=['user_data'],data=[10,14,1],index=['alpha','beta','gamma'])
sdf = pd.DataFrame(columns=['AD_analysis','BGD_analysis','ABG_analysis'],
data=[[1,0,1],[0,1,1],[0,1,1],[1,1,0]],index=['alpha','beta','gamma','delta'])
|
Python Find columns of second dataframe with matching index to first dataframe
|
I have two dataframes.
Input data
# First df mainly consists of data provided by the user
fdf = pd.DataFrame(columns=['user_data'],data=[10,14,1],index=['alpha','beta','gamma'])
user_data
alpha 10
beta 14
gamma 1
# Second df is basically default data describing the kinds of analysis I can run based on the data in the first dataframe provided by the user
sdf = pd.DataFrame(columns=['AD_analysis','BGD_analysis','ABG_analysis'],
data=[[1,0,1],[0,1,1],[0,1,1],[1,1,0]],index=['alpha','beta','gamma','delta'])
sdf =
AD_analysis BGD_analysis ABG_analysis
alpha 1 0 1
beta 0 1 1
gamma 0 1 1
delta 1 1 0
# Above table basically tells us that we can do AD_analysis if alpha, delta values are given by the user in the first df
So, I want to know what kind of analysis (sdf) I can run based on the data provided by the user (fdf).
Expected answer:
# Since delta is not given, I cannot run any analysis associated with this parameter
# Possible analysis with given data is
['ABG_analysis']
My approach:
# find common index
com_idx = fdf.index.intersection(sdf.index)
if len(com_idx)==3 & com_idx.isin('alpha'):
print('ABG_analysis')
if len(com_idx)==3 & com_idx.isin('delta'):
print('BGD_analysis')
if len(com_idx)==2 :
print('AD_analysis')
Too many if statements do not seem like the best Pythonic approach. Can you suggest a better one?
|
[
"Get the indices provided by the use from your second table. Then subset the columns where all the arguments are equal to 1.\nsdf.loc[fdf.index].eq(1).all(0).loc[lambda x:x].index\n\nIndex(['ABG_analysis'], dtype='object')\n\n",
"Assuming you want to identify the analyses for which no required data is missing. You can use:\n# get indices not provided by user\ndiff = sdf.index.difference(fdf.index)\n\n# ensure they are not required for an analysis\nsdf.columns[~sdf.reindex(diff).any()]\n\nOutput: Index(['ABG_analysis'], dtype='object')\nIf you want to ensure that all data is used (an analysis requiring only alpha and beta would be excluded):\nsdf.columns[sdf.reindex(fdf.index).all() \n &~sdf.loc[sdf.index.difference(fdf.index)].any()]\n\nUsed inputs:\nfdf = pd.DataFrame(columns=['user_data'],data=[10,14,1],index=['alpha','beta','gamma'])\n\nsdf = pd.DataFrame(columns=['AD_analysis','BGD_analysis','ABG_analysis'],\n data=[[1,0,1],[0,1,1],[0,1,1],[1,1,0]],index=['alpha','beta','gamma','delta'])\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"dataframe",
"numpy",
"pandas",
"python"
] |
stackoverflow_0074594034_dataframe_numpy_pandas_python.txt
|
Q:
Is it possible to get the bounding boxes for each word with Python?
I know that
pdftotext -bbox foobar.pdf
creates a HTML file which contains content like
<word xMin="301.703800" yMin="104.483700" xMax="309.697000" yMax="115.283700">is</word>
<word xMin="313.046200" yMin="104.483700" xMax="318.374200" yMax="115.283700">a</word>
<word xMin="321.603400" yMin="104.483700" xMax="365.509000" yMax="115.283700">universal</word>
<word xMin="368.858200" yMin="104.483700" xMax="384.821800" yMax="115.283700">file</word>
<word xMin="388.291000" yMin="104.483700" xMax="420.229000" yMax="115.283700">format</word>
Hence each single word has a bounding box.
The Python package PDFminer in contrast seems only to be able to give the position of a block of text (see example).
How can I get the bounding boxes for each word in Python?
A:
disclaimer: I am the author of borb, the package used in this answer.
You will need to do some kind of processing in order to get bounding boxes on a word-level. The problem is that a PDF (worst case scenario) only contains rendering instructions, and not structure-information.
Put simply, your PDF might contain (in pseudo-code):
move to 90, 700
set the active font to Helvetica, size 12
set the active color to black
render "Hello World" in the active font
The problem is that instruction 3 might contain anything from
a single letter
multiple letters
a single word,
to multiple words
In order to retrieve the bounding boxes of words, you'll need to do some processing (as mentioned before). You will need to render those instructions and split the text (preferably as it is being rendered) into words.
Then it's a matter of keeping track of the coordinates of the turtle, and you're set to go.
borb does this (under the hood) for you.
from borb.pdf import PDF
from borb.toolkit import RegularExpressionTextExtraction
# this line builds a RegularExpressionTextExtraction
# this class listens to rendering instructions
# and performs the logic I mentioned in the text part of this answer
l: RegularExpressionTextExtraction = RegularExpressionTextExtraction("[^ ]+")
# now we can load the file and perform our processing
with open("input.pdf", "rb") as fh:
PDF.loads(fh, [l])
# now we just need to get the boxes out of it
# RegularExpressionTextExtraction returns a list of type PDFMatch
# this class can return a list of bounding boxes (should your
# regular expression ever need to be matched over separate lines of text)
for m in l.get_matches_for_page(0):
# here we just print the Rectangle
# but feel free to do something useful with it
print(m.get_bounding_boxes()[0])
borb is an open source, pure Python PDF library that creates, modifies and reads PDF documents. You can download it using:
pip install borb
Alternatively, you can build from source by forking/downloading the GitHub repository.
|
Is it possible to get the bounding boxes for each word with Python?
|
I know that
pdftotext -bbox foobar.pdf
creates a HTML file which contains content like
<word xMin="301.703800" yMin="104.483700" xMax="309.697000" yMax="115.283700">is</word>
<word xMin="313.046200" yMin="104.483700" xMax="318.374200" yMax="115.283700">a</word>
<word xMin="321.603400" yMin="104.483700" xMax="365.509000" yMax="115.283700">universal</word>
<word xMin="368.858200" yMin="104.483700" xMax="384.821800" yMax="115.283700">file</word>
<word xMin="388.291000" yMin="104.483700" xMax="420.229000" yMax="115.283700">format</word>
Hence each single word has a bounding box.
The Python package PDFminer in contrast seems only to be able to give the position of a block of text (see example).
How can I get the bounding boxes for each word in Python?
|
[
"disclaimer: I am the author of borb, the package used in this answer.\nYou will need to do some kind of processing in order to get bounding boxes on a word-level. The problem is that a PDF (worst case scenario) only contains rendering instructions, and not structure-information.\nPut simply, your PDF might contain (in pseudo-code):\n\nmove to 90, 700\nset the active font to Helvetica, size 12\nset the active color to black\nrender \"Hello World\" in the active font\n\nThe problem is that instruction 3 might contain anything from\n\na single letter\nmultiple letters\na single word,\nto multiple words\n\nIn order to retrieve the bounding boxes of words, you'll need to do some processing (as mentioned before). You will need to render those instructions and split the text (preferably as it is being rendered) into words.\nThen it's a matter of keeping track of the coordinates of the turtle, and you're set to go.\nborb does this (under the hood) for you.\nfrom borb.pdf import PDF\nfrom borb.toolkit import RegularExpressionTextExtraction\n\n# this line builds a RegularExpressionTextExtraction\n# this class listens to rendering instructions \n# and performs the logic I mentioned in the text part of this answer\nl: RegularExpressionTextExtraction = RegularExpressionTextExtraction(\"[^ ]+\")\n\n# now we can load the file and perform our processing\nwith open(\"input.pdf\", \"rb\") as fh:\n PDF.loads(fh, [l])\n\n# now we just need to get the boxes out of it\n# RegularExpressionTextExtraction returns a list of type PDFMatch\n# this class can return a list of bounding boxes (should your\n# regular expression ever need to be matched over separate lines of text)\nfor m in l.get_matches_for_page(0):\n # here we just print the Rectangle\n # but feel free to do something useful with it\n print(m.get_bounding_boxes()[0])\n\nborb is an open source, pure Python PDF library that creates, modifies and reads PDF documents. You can download it using:\npip install borb\n\nAlternatively, you can build from source by forking/downloading the GitHub repository.\n"
] |
[
1
] |
[] |
[] |
[
"pdf",
"python"
] |
stackoverflow_0045082427_pdf_python.txt
|
Q:
Tkinter Entry.insert() changes type from int to str
I need to build a simple GUI for accepting user input for further processing. It's my first time using tkinter and I've encountered a strange problem: namely, Entry.insert() changes the type from int to str. Moreover, it was working alright at first, but then I tried to implement something, hit ctrl + z a couple of times, and now I'm not able to fix it. Here's my code:
from tkinter import *
from tkinter import messagebox
from dataclasses import make_dataclass
def build_gui():
def check_for_correct_input():
for name, value in list(entries.items())[:-1]:
if value.get().isdigit():
messagebox.showwarning(
title='Entered wrong data',
message=f'Field "{name}" accepts only letters.\nInstead: "{value.get()}" was provided',
)
return False
elif len(value.get()) == 0:
messagebox.showwarning(
title='Input not provided.',
message='Please provide needed input.',
)
return False
for name, value in list(entries.items())[-1:]:
print(value.get())
print(type(value.get()))
if not isinstance(value.get(), int):
messagebox.showwarning(
title='Entered wrong data.',
message=f'Field "{name}" accepts only integers.\nInstead: "{value.get()}" was provided',
)
return False
elif len(value.get()) == 0:
messagebox.showwarning(
title='Input not provided.',
message='Please provide needed input.',
)
return False
return True
# saves values to config file
def store_values():
if check_for_correct_input():
for name, entry in entries.items():
entries[name] = entry.get()
messagebox.showinfo(
title='Valid data provided.',
message='Program will start working now.',
)
root.destroy()
return True
messagebox.showerror(
title='Invalid input data.',
message=f'Please provide valid input.',
)
# clears all entries
def clear_entries():
for entry in entries.values():
entry.delete(0, END)
root = Tk()
root.title('Config for wohoho.py')
root.geometry('1000x400')
# Heading
Label(root, text='Enter needed data:', font='comicsansms 13 bold', pady=15).grid(row=0, column=3)
# Text for our form
labels = {
'xlsx_file_name': Label(root, text='xlsx_file_name ', font='comicsansms 12',),
'URLs_column': Label(root, text="URLs_column ", font='comicsansms 12',),
'views_column': Label(root, text='views_column ', font='comicsansms 12',),
'date_column': Label(root, text='date_column ', font='comicsansms 12',),
'starting_row': Label(root, text='starting_row ', font='comicsansms 12',),
}
# Pack text for our form
row = 1
for label in labels.values():
label.grid(row=row, column=2, sticky=E)
row += 1
# Tkinter variable for storing entries
LabelAttributes = make_dataclass(
'LabelAttributes', ['value', 'label_name',]
)
labels_values = {
'xlsx_file_name_value': LabelAttributes(
value='.xlsx path', label_name='xlsx_file_name',
),
'URLs_column_value': LabelAttributes(
value='L', label_name='URLs_column_value',
),
'views_column_value': LabelAttributes(
value='B', label_name='views_column_value',
),
'date_column_value': LabelAttributes(
value='D', label_name='date_column_value',
),
'starting_row_value': LabelAttributes(
value=4, label_name='starting_row_value',
),
}
entries = {}
row = 1
for value in labels_values.values():
entries[value.label_name] = Entry(root, width=100)
entries[value.label_name].grid(row=row, column=3)
entries[value.label_name].insert(0, value.value)
row += 1
# Button & packing it and assigning it a command
Button(
text='Clear',
command=clear_entries,
height=2,
width=10,
font='comicsansms 12 bold',
bd=3,
).grid(row=7, column=3
)
Button(
text='Accept',
command=store_values,
height=2,
width=10,
font='comicsansms 12 bold',
bd=3,
).grid(row=8, column=3
)
root.mainloop()
build_gui()
As shown in this fragment:
starting_row_value': LabelAttributes(value=4)
I'm setting starting_row_value to 4, but when it goes for validation to check_for_correct_input() its type has changed from int to str. I have no idea why this is happening.
Also, I think my code is messed up. If someone would be so kind as to refactor it, I would be grateful. Basically I would like to:
Take input from user (with pre-entered suggestion as in my code).
Validate if the input is correct. If it is correct -> store it in a dict/yaml or whatever; if not, ask for input once again. I was also trying to implement a while loop that reran the tkinter GUI if it was closed without the input being stored, but I failed :(.
Any help will be really appreciated, because I've wasted 4 hours and I'm still in the same place.
Best regards!
A:
Namely Entry.insert() changes type from int to str... it's type is changed from int to str. I have no idea why it is happening.
Yes, this is how the Entry widget has always worked. The get method always returns a string, and the insert method converts all non-string arguments into strings before inserting the data into the widget. It's up to you to convert the data to the type you want.
The tkinter documentation for get is simply "Return the text.", and the canonical tcl/tk documentation is equally straightforward: "Returns the entry's string.".
Moreover first it was working alright
I don't understand what you mean by that. Calling get on an entry widget has always returned a string.
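If you need the value as an int, convert it yourself after calling get — a minimal sketch (the key name follows the entries dict from the question):
raw = entries['starting_row_value'].get()  # always a str, e.g. "4"
try:
    starting_row = int(raw)
except ValueError:
    print(f'"{raw}" is not an integer')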
|
Tkinter Entry.insert() changes type from int to str
|
I need to build a simple GUI for accepting user input for further processing. It's my first time using tkinter and I've encountered a strange problem: namely, Entry.insert() changes the type from int to str. Moreover, it was working alright at first, but then I tried to implement something, hit ctrl + z a couple of times, and now I'm not able to fix it. Here's my code:
from tkinter import *
from tkinter import messagebox
from dataclasses import make_dataclass
def build_gui():
def check_for_correct_input():
for name, value in list(entries.items())[:-1]:
if value.get().isdigit():
messagebox.showwarning(
title='Entered wrong data',
message=f'Field "{name}" accepts only letters.\nInstead: "{value.get()}" was provided',
)
return False
elif len(value.get()) == 0:
messagebox.showwarning(
title='Input not provided.',
message='Please provide needed input.',
)
return False
for name, value in list(entries.items())[-1:]:
print(value.get())
print(type(value.get()))
if not isinstance(value.get(), int):
messagebox.showwarning(
title='Entered wrong data.',
message=f'Field "{name}" accepts only integers.\nInstead: "{value.get()}" was provided',
)
return False
elif len(value.get()) == 0:
messagebox.showwarning(
title='Input not provided.',
message='Please provide needed input.',
)
return False
return True
# saves values to config file
def store_values():
if check_for_correct_input():
for name, entry in entries.items():
entries[name] = entry.get()
messagebox.showinfo(
title='Valid data provided.',
message='Program will start working now.',
)
root.destroy()
return True
messagebox.showerror(
title='Invalid input data.',
message=f'Please provide valid input.',
)
# clears all entries
def clear_entries():
for entry in entries.values():
entry.delete(0, END)
root = Tk()
root.title('Config for wohoho.py')
root.geometry('1000x400')
# Heading
Label(root, text='Enter needed data:', font='comicsansms 13 bold', pady=15).grid(row=0, column=3)
# Text for our form
labels = {
'xlsx_file_name': Label(root, text='xlsx_file_name ', font='comicsansms 12',),
'URLs_column': Label(root, text="URLs_column ", font='comicsansms 12',),
'views_column': Label(root, text='views_column ', font='comicsansms 12',),
'date_column': Label(root, text='date_column ', font='comicsansms 12',),
'starting_row': Label(root, text='starting_row ', font='comicsansms 12',),
}
# Pack text for our form
row = 1
for label in labels.values():
label.grid(row=row, column=2, sticky=E)
row += 1
# Tkinter variable for storing entries
LabelAttributes = make_dataclass(
'LabelAttributes', ['value', 'label_name',]
)
labels_values = {
'xlsx_file_name_value': LabelAttributes(
value='.xlsx path', label_name='xlsx_file_name',
),
'URLs_column_value': LabelAttributes(
value='L', label_name='URLs_column_value',
),
'views_column_value': LabelAttributes(
value='B', label_name='views_column_value',
),
'date_column_value': LabelAttributes(
value='D', label_name='date_column_value',
),
'starting_row_value': LabelAttributes(
value=4, label_name='starting_row_value',
),
}
entries = {}
row = 1
for value in labels_values.values():
entries[value.label_name] = Entry(root, width=100)
entries[value.label_name].grid(row=row, column=3)
entries[value.label_name].insert(0, value.value)
row += 1
# Button & packing it and assigning it a command
Button(
text='Clear',
command=clear_entries,
height=2,
width=10,
font='comicsansms 12 bold',
bd=3,
).grid(row=7, column=3
)
Button(
text='Accept',
command=store_values,
height=2,
width=10,
font='comicsansms 12 bold',
bd=3,
).grid(row=8, column=3
)
root.mainloop()
build_gui()
As shown in this fragment:
starting_row_value': LabelAttributes(value=4)
I'm setting starting_row_value to 4, but when it goes for validation to check_for_correct_input() its type has changed from int to str. I have no idea why this is happening.
Also, I think my code is messed up. If someone would be so kind as to refactor it, I would be grateful. Basically I would like to:
Take input from user (with pre-entered suggestion as in my code).
Validate if the input is correct. If it is correct -> store it in a dict/yaml or whatever; if not, ask for input once again. I was also trying to implement a while loop that reran the tkinter GUI if it was closed without the input being stored, but I failed :(.
Any help will be really appreciated, because I've wasted 4 hours and I'm still in the same place.
Best regards!
|
[
"\nNamely Entry.insert() changes type from int to str... it's type is changed from int to str. I have no idea why it is happening.\n\nYes, this is how the Entry widget has always worked. The get method always returns a string, and the insert method converts all non-string arguments into strings before inserting the data into the widget. It's up to you to convert the data to the type you want.\nThe tkinter documentation for get is simply \"Return the text.\", and the canonical tcl/tk documentation is equally straightforward: \"Returns the entry's string.\".\n\nMoreover first it was working alright\n\nI don't understand what you mean by that. Calling get on an entry widget has always returned a string.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0074594124_python_tkinter.txt
|
Q:
Upload any file type to S3 using Lambda
I'm trying to upload files to S3 using API Gateway and Lambda. Everything works fine until the request arrives at the Lambda. My Lambda looks like this:
import base64
import boto3
import os
s3_client = boto3.client('s3')
bucket_name = os.environ['S3_BUCKET_NAME']
def lambda_handler(event, context):
contend_decode = base64.b64decode(event['body'])
response = s3_client.put_object(Bucket=bucket_name, Body=contend_decode)
print(response)
return {
'statusCode': 200,
'body': 'File uploaded'
}
When I upload for example an mp3 file I receive an error that says:
[ERROR] ValueError: string argument should contain only ASCII characters
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 10, in lambda_handler
contend_decode = base64.b64decode(event['body'])
File "/var/lang/lib/python3.8/base64.py", line 80, in b64decode
s = _bytes_from_decode_data(s)
File "/var/lang/lib/python3.8/base64.py", line 39, in _bytes_from_decode_data
raise ValueError('string argument should contain only ASCII characters')
Any idea about this issue, please?
Edit:
The content of the event is something like this:
{
"resource": "/upload",
"path": "/upload",
"httpMethod": "POST",
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate, br",
"CloudFront-Forwarded-Proto": "https",
"CloudFront-Is-Desktop-Viewer": "true",
"CloudFront-Is-Mobile-Viewer": "false",
"CloudFront-Is-SmartTV-Viewer": "false",
"CloudFront-Is-Tablet-Viewer": "false",
"CloudFront-Viewer-ASN": "5410",
"CloudFront-Viewer-Country": "FR",
"Content-Type": "audio/mpeg",
"Host": "um8xxxxpxx.execute-api.eu-west-1.amazonaws.com",
"Postman-Token": "fe49e15f-82c6-44c7-8399-4b6fba9b9abc",
"User-Agent": "PostmanRuntime/7.29.2",
"Via": "1.1 12bc6711250373a4xxxxxxxxxx44504.cloudfront.net (CloudFront)",
"X-Amz-Cf-Id": "5Zv2MVCxxxxxxxxxxxxyzMuv_CfIAxxxxxxxxxxxxJyz4JtHb-QImYZGQ==",
"X-Amzn-Trace-Id": "Root=1-6383d306-4e81300e0000000c3262b7a45",
"x-api-key": "g4KOPDl5zoB0E2QBpAAXSaESDFyGkR38f000",
"X-Forwarded-For": "1XX.XX9.2XX.XX9, 1XX.XX6.XX5.XXX",
"X-Forwarded-Port": "443",
"X-Forwarded-Proto": "https"
},
"multiValueHeaders": {
"Accept": [
"*/*"
],
"Accept-Encoding": [
"gzip, deflate, br"
],
"CloudFront-Forwarded-Proto": [
"https"
],
"CloudFront-Is-Desktop-Viewer": [
"true"
],
"CloudFront-Is-Mobile-Viewer": [
"false"
],
"CloudFront-Is-SmartTV-Viewer": [
"false"
],
"CloudFront-Is-Tablet-Viewer": [
"false"
],
"CloudFront-Viewer-ASN": [
"5410"
],
"CloudFront-Viewer-Country": [
"FR"
],
"Content-Type": [
"audio/mpeg"
],
"Host": [
"um8xxxxpxx.execute-api.eu-west-1.amazonaws.com"
],
"Postman-Token": [
"fDDDDf-82c8-44c9-DDD1-4b6f9QASFF9abc"
],
"User-Agent": [
"PostmanRuntime/7.29.2"
],
"Via": [
"1.1 12bVASD16aeca2DDD44504.cloudfront.net (CloudFront)"
],
"X-Amz-Cf-Id": [
"5Zv2MVCnaDDDzMuv_CfIA6iC89CiUnjDDDAZXAb-QImYZGQ=="
],
"X-Amzn-Trace-Id": [
"Root=1-6383AZDD-4e81002e022374c326hu8a45"
],
"x-api-key": [
"g4KOPDl5zoBia3cT4pYMkynzyGkX00aa"
],
"X-Forwarded-For": [
"1XX.XX9.2XX.XX9, 1XX.XX6.XX5.XXX"
],
"X-Forwarded-Port": [
"443"
],
"X-Forwarded-Proto": [
"https"
]
},
"queryStringParameters": "None",
"multiValueQueryStringParameters": "None",
"pathParameters": "None",
"stageVariables": "None",
"requestContext": {
"resourceId": "adddazq",
"resourcePath": "/upload",
"httpMethod": "POST",
"extendedRequestId": "cR3o-zddEFgazz=",
"requestTime": "27/Nov/2022:21:13:42 +0000",
"path": "/dev/upload",
"accountId": "114782879802",
"protocol": "HTTP/1.1",
"stage": "dev",
"domainPrefix": "ua8xjwxraf",
"requestTimeEpoch": 1669583622098,
"requestId": "23e099f9-eda4-42b2-8b4f-b1aaea589978",
"identity": {
"cognitoIdentityPoolId": "None",
"cognitoIdentityId": "None",
"apiKey": "h4KOPDl5zoqsdT4pYMkynzdddaz8f95560",
"principalOrgId": "None",
"cognitoAuthenticationType": "None",
"userArn": "None",
"apiKeyId": "z887qsddox4",
"userAgent": "PostmanRuntime/7.29.2",
"accountId": "None",
"caller": "None",
"sourceIp": "176.139.21.129",
"accessKey": "None",
"cognitoAuthenticationProvider": "None",
"user": "None"
},
"domainName": "um8xxxxpxx.execute-api.eu-west-1.amazonaws.com",
"apiId": "um8xxxxpxx"
},
"body": "\x04\x08-P�\x10,Gh�m\x0c\x06K����Te�U�-��\r\x01�Y��l�,3�\x11�Q�4$�........6��\x1872Ip�d�p\x1d�M�PX�0`�x�0����d�\x0f�\x0c.ǃ��\x12\x00\x00\r \x00\x00\x01\x18��........",
"isBase64Encoded": "False"
}
Note: I included only a few of the characters from the body, just for demonstration purposes.
A:
The Error And A Bunch Of Computer Science
So I still think that John Rotenstein's answer is objectively correct, i.e. the problem is that you can't base64-decode event['body'] into bytes, because it's a string in the form of bytes that contain non-ASCII characters, and that's why it is throwing an error.
If you look at event['body'] you should be able to maybe piece that much together:
"\x04\x08-P�\x10,Gh�m\x0c\x06K����Te�U�-��\r\x01�Y��l�,3�\x11�Q�4$�........6��\x1872Ip�d�p\x1d�M�PX�0`�x�0����d�\x0f�\x0c.ǃ��\x12\x00\x00\r \x00\x00\x01\x18��........"
Notice that it's not throwing a padding error, which occurs when the string is not the right length (typically because of the trailing "="). You'd use decode on a base64 string (e.g. "TWFueSBoYW5kcyBtYWtlIGxpZ2h0IHdvcmsu" - stolen from Wikipedia) to turn it into bytes.
Free tid bit of information:
Run base64.b64decode("TWFueSBoYW5kcyBtYWtlIGxpZ2h0IHdvcmsu") to get the string back as a byte string (a = b"Many hands make light work.").
Convert it to a list by doing b = list(a) -> [77, 97, 110, 121, 32, 104, 97, 110, 100, 115, 32, 109, 97, 107, 101, 32, 108, 105, 103, 104, 116, 32, 119, 111, 114, 107, 46].
Then to its hex representation (I had to format it in notepad afterwards) "".join([hex(c).replace("0x", "\\x") for c in b]) -> \x4d\x61\x6e\x79\x20\x68\x61\x6e\x64\x73\x20\x6d\x61\x6b\x65\x20\x6c\x69\x67\x68\x74\x20\x77\x6f\x72\x6b\x2e.
The disconnect for me is that with open(filename, "rb") as f: a = f.read() will return something like what you have in your event['body'] if it's an image or something of the sort, so you'd assume that b"hello world" would also be bytes similar to that of the with open()..., but apparently not(?). I don't know; a lot to unpack.
If you're unfamiliar with what is in your event['body'], this string is actually decoded bytes - granted this is slightly ambiguous, because the \x is actually an escape character for hex in Python, but there are some very easily reproducible examples where this doesn't seem to be the case (take your event['body'] for instance - what even is this "\x1872Ip�d�p"). You can get decoded bytes from doing something like the below, with the caveat that it was cast to a string, so it's no longer a bytes-like object - it's a string:
a = "hello world"
b = a.encode("utf-8")
# or
c = bytes(a, "utf-8")
# or - the one below I think defaults to utf8
a = b"hello world"
# the closest I could get to hex representation of the string was from this
# "".join([hex(ord(c)).replace("0x", "\\x") for c in a])
Thing is, I don't know what encoding it was using to decode it into bytes, and it's unclear whether I can expect body to be bytes every time or if it would be a base64 string, as isBase64Encoded might lead me to believe. I'm not 100% certain, but my assumption is that if you do something like the below, granted the resulting decoded string may not be base64, you can get a base64 string output:
Quick Edit - I believe I misunderstood what isBase64Encoded means. After the writing of this, I think it should be understood as "are the bytes encoded as base64? True or False.", I will edit the below code. Additionally, I will assume that the data for event['body'] underwent one of two processes: either opened as bytes -> isBase64Encoded set as False -> sent or opened as bytes -> b64encoded -> converted to bytes -> isBase64Encoded set as True -> sent. From here on out in this answer, you will see me refer to the answer before this edit as pre edit and after this edit as post edit.
import base64

# pre edit
if not event['isBase64Encoded']:
    event['body'] = bytes(event['body'], "whatever that encoding is").decode()
    # b64encode takes a string and converts it to a bytes like object.
    # b64decode takes a bytes like object and converts it to a string.
    event['body'] = base64.b64decode(event['body'])
print(event['body'])

# post edit
# you might be able to read bytes with an arbitrary encoding using BytesIO
from io import BytesIO

if event['isBase64Encoded']:
    # this would've been sent as the default according to my notes from the edit
    # take the string, convert it to bytes, then decode it - should be a base64 string with a utf8 encoding
    event['body'] = bytes(event['body'], "utf-8").decode()
    # decode the utf8 string to base64 bytes
    event['body'] = base64.b64decode(event['body'])
else:
    #event['body'] = bytes(event['body'], some encoding)
    event['body'] = BytesIO(event['body'].encode()).read()
Pre edit - To be 100% clear as to what this does, this:
Checks if it is not a base64 string
If not, convert body to bytes with an encoding, then to a string with decode()
base64decode() takes that string and, if it's a base64 string (like the one above), converts it to bytes with a base64 encoding
Post edit - I've included some helpful comments in the code, but either way, it should return bytes.
Pushing Objects to the Bucket
However, you seem to also want to push those bytes to a bucket - the docs:
response = client.put_object(
    #Body=bytes(event["body"], encoding),
    # event['body'] should already be bytes by now as per the post edit comments
    Body=event["body"],
    Bucket="my_bucket",
    #ContentEncoding=event["multiValueHeaders"]["Accept-Encoding"],
    ContentType=event["multiValueHeaders"]["Content-Type"][0],
    Key="my/object/name.mp4"
)
Pre Edit - So realistically, set all of those keyword values and you should be golden - you don't have to run a base64 decode operation in this instance (based on what was returned in your event - you might if it actually was encoded as a base64 string), just pass put_object() the bytes.
Post Edit - Still set all the keywords and read the below (Content Encoding), but we should've handled both cases of isBase64Encoded by now, and the result should be a bytes-like object stored in event['body'], so no significant change has to be made to this paragraph in regards to put_object().
Here is a link to what ContentEncoding is, compared to ContentType, which may shed some light on whether or not you should use it or need to use it.
What Your Function Might Should Be
You shouldn't use generalized try / except statements like I did below; if that bothers you, hunt down which exceptions those calls actually raise and catch them explicitly, or remove the try / except blocks completely. Conceptually, though, this should be what you want.
Pre Edit
import base64
import boto3
import os
s3_client = boto3.client('s3')
bucket_name = os.environ['S3_BUCKET_NAME']
def lambda_handler(event, context):
    if not event['isBase64Encoded']:
        try:
            event['body'] = bytes(event['body'], "whatever that encoding is").decode()
        except:
            return {
                # AWS probably returns a 403, so maybe return something different for debugging?
                'statusCode': 406,
                'body': 'Misconfigured object.'
            }
    else:
        try:
            event['body'] = base64.b64decode(event['body'])
        except:
            return {
                # AWS probably returns a 403, so maybe return something different for debugging?
                'statusCode': 406,
                'body': 'Misconfigured object.'
            }

    try:
        response = s3_client.put_object(
            Body=event["body"],
            Bucket=bucket_name,
            #ContentEncoding=event["multiValueHeaders"]["Accept-Encoding"],
            ContentType=event["multiValueHeaders"]["Content-Type"][0],
            Key="my/object/name.mp4"
        )
    except:
        return {
            # AWS probably returns a 403, so maybe return something different for debugging?
            'statusCode': 406,
            'body': 'Misconfigured object.'
        }
    else:
        print(response)
        return {
            'statusCode': 200,
            'body': 'File uploaded'
        }
Post Edit
import base64
import boto3
import os
from io import BytesIO
s3_client = boto3.client('s3')
bucket_name = os.environ['S3_BUCKET_NAME']
def lambda_handler(event, context):

    if event['isBase64Encoded']:
        # this would've been sent as the default according to my notes from the edit
        # take the string, convert it to bytes, then decode it - should be a base64 string with a utf8 encoding
        event['body'] = bytes(event['body'], "utf-8").decode()
        # decode the utf8 string to base64 bytes
        event['body'] = base64.b64decode(event['body'])
    else:
        #event['body'] = bytes(event['body'], some encoding)
        event['body'] = BytesIO(event['body'].encode()).read()

    try:
        response = s3_client.put_object(
            Body=event["body"],
            Bucket=bucket_name,
            #ContentEncoding=event["multiValueHeaders"]["Accept-Encoding"],
            ContentType=event["multiValueHeaders"]["Content-Type"][0],
            Key="my/object/name.mp4"
        )
    except:
        return {
            # AWS probably returns a 403, so maybe return something different for debugging?
            'statusCode': 406,
            'body': 'Misconfigured object.'
        }
    else:
        print(response)
        return {
            'statusCode': 200,
            'body': 'File uploaded'
        }
Extra Resources
How does Base64 work? - wikipedia
Base64 Encode - docs
Base64 Decode - docs
A:
The error is:
ValueError: string argument should contain only ASCII characters
The error is on this line:
contend_decode = base64.b64decode(event['body'])
So, it is saying that event['body'] does not contain base64 encoded data.
The binary content will actually be provided in the content parameter.
Therefore, the line should instead be:
contend_decode = base64.b64decode(event['content'])
|
Upload any file type to S3 using Lambda
|
I'm trying to upload files to S3 using API Gateway and Lambda, all the processes work fine until I arrive at the Lambda, my lambda looks like this:
import base64
import boto3
import os
s3_client = boto3.client('s3')
bucket_name = os.environ['S3_BUCKET_NAME']
def lambda_handler(event, context):
    contend_decode = base64.b64decode(event['body'])
    response = s3_client.put_object(Bucket=bucket_name, Body=contend_decode)
    print(response)
    return {
        'statusCode': 200,
        'body': 'File uploaded'
    }
When I upload for example an mp3 file I receive an error that says:
[ERROR] ValueError: string argument should contain only ASCII characters
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 10, in lambda_handler
contend_decode = base64.b64decode(event['body'])
File "/var/lang/lib/python3.8/base64.py", line 80, in b64decode
s = _bytes_from_decode_data(s)
File "/var/lang/lib/python3.8/base64.py", line 39, in _bytes_from_decode_data
raise ValueError('string argument should contain only ASCII characters')
Any idea about this issue, please?
Edit:
The content of the event is something like this:
{
"resource": "/upload",
"path": "/upload",
"httpMethod": "POST",
"headers": {
"Accept": "*/*",
"Accept-Encoding": "gzip, deflate, br",
"CloudFront-Forwarded-Proto": "https",
"CloudFront-Is-Desktop-Viewer": "true",
"CloudFront-Is-Mobile-Viewer": "false",
"CloudFront-Is-SmartTV-Viewer": "false",
"CloudFront-Is-Tablet-Viewer": "false",
"CloudFront-Viewer-ASN": "5410",
"CloudFront-Viewer-Country": "FR",
"Content-Type": "audio/mpeg",
"Host": "um8xxxxpxx.execute-api.eu-west-1.amazonaws.com",
"Postman-Token": "fe49e15f-82c6-44c7-8399-4b6fba9b9abc",
"User-Agent": "PostmanRuntime/7.29.2",
"Via": "1.1 12bc6711250373a4xxxxxxxxxx44504.cloudfront.net (CloudFront)",
"X-Amz-Cf-Id": "5Zv2MVCxxxxxxxxxxxxyzMuv_CfIAxxxxxxxxxxxxJyz4JtHb-QImYZGQ==",
"X-Amzn-Trace-Id": "Root=1-6383d306-4e81300e0000000c3262b7a45",
"x-api-key": "g4KOPDl5zoB0E2QBpAAXSaESDFyGkR38f000",
"X-Forwarded-For": "1XX.XX9.2XX.XX9, 1XX.XX6.XX5.XXX",
"X-Forwarded-Port": "443",
"X-Forwarded-Proto": "https"
},
"multiValueHeaders": {
"Accept": [
"*/*"
],
"Accept-Encoding": [
"gzip, deflate, br"
],
"CloudFront-Forwarded-Proto": [
"https"
],
"CloudFront-Is-Desktop-Viewer": [
"true"
],
"CloudFront-Is-Mobile-Viewer": [
"false"
],
"CloudFront-Is-SmartTV-Viewer": [
"false"
],
"CloudFront-Is-Tablet-Viewer": [
"false"
],
"CloudFront-Viewer-ASN": [
"5410"
],
"CloudFront-Viewer-Country": [
"FR"
],
"Content-Type": [
"audio/mpeg"
],
"Host": [
"um8xxxxpxx.execute-api.eu-west-1.amazonaws.com"
],
"Postman-Token": [
"fDDDDf-82c8-44c9-DDD1-4b6f9QASFF9abc"
],
"User-Agent": [
"PostmanRuntime/7.29.2"
],
"Via": [
"1.1 12bVASD16aeca2DDD44504.cloudfront.net (CloudFront)"
],
"X-Amz-Cf-Id": [
"5Zv2MVCnaDDDzMuv_CfIA6iC89CiUnjDDDAZXAb-QImYZGQ=="
],
"X-Amzn-Trace-Id": [
"Root=1-6383AZDD-4e81002e022374c326hu8a45"
],
"x-api-key": [
"g4KOPDl5zoBia3cT4pYMkynzyGkX00aa"
],
"X-Forwarded-For": [
"1XX.XX9.2XX.XX9, 1XX.XX6.XX5.XXX"
],
"X-Forwarded-Port": [
"443"
],
"X-Forwarded-Proto": [
"https"
]
},
"queryStringParameters": "None",
"multiValueQueryStringParameters": "None",
"pathParameters": "None",
"stageVariables": "None",
"requestContext": {
"resourceId": "adddazq",
"resourcePath": "/upload",
"httpMethod": "POST",
"extendedRequestId": "cR3o-zddEFgazz=",
"requestTime": "27/Nov/2022:21:13:42 +0000",
"path": "/dev/upload",
"accountId": "114782879802",
"protocol": "HTTP/1.1",
"stage": "dev",
"domainPrefix": "ua8xjwxraf",
"requestTimeEpoch": 1669583622098,
"requestId": "23e099f9-eda4-42b2-8b4f-b1aaea589978",
"identity": {
"cognitoIdentityPoolId": "None",
"cognitoIdentityId": "None",
"apiKey": "h4KOPDl5zoqsdT4pYMkynzdddaz8f95560",
"principalOrgId": "None",
"cognitoAuthenticationType": "None",
"userArn": "None",
"apiKeyId": "z887qsddox4",
"userAgent": "PostmanRuntime/7.29.2",
"accountId": "None",
"caller": "None",
"sourceIp": "176.139.21.129",
"accessKey": "None",
"cognitoAuthenticationProvider": "None",
"user": "None"
},
"domainName": "um8xxxxpxx.execute-api.eu-west-1.amazonaws.com",
"apiId": "um8xxxxpxx"
},
"body": "\x04\x08-P�\x10,Gh�m\x0c\x06K����Te�U�-��\r\x01�Y��l�,3�\x11�Q�4$�........6��\x1872Ip�d�p\x1d�M�PX�0`�x�0����d�\x0f�\x0c.ǃ��\x12\x00\x00\r \x00\x00\x01\x18��........",
"isBase64Encoded": "False"
}
Note: I put just a little bit of characters that exist in the body, just for demonstration purpose.
|
[
"The Error And A Bunch Of Computer Science\nSo I still think that John Rotenstein's answer is objectively correct, ie the problem is that you can't decode event['body'] into bytes, because its a string in the form of bytes that have non-ascii characters, and that's why it is throwing an error.\nIf you look at event['body'] you should be able to maybe piece that much together:\n\"\\x04\\x08-P�\\x10,Gh�m\\x0c\\x06K����Te�U�-��\\r\\x01�Y��l�,3�\\x11�Q�4$�........6��\\x1872Ip�d�p\\x1d�M�PX�0`�x�0����d�\\x0f�\\x0c.ǃ��\\x12\\x00\\x00\\r \\x00\\x00\\x01\\x18��........\"\n\nNotice that its not throwing a padding error, which occurs when the string is not the right length (typically because of the trailing \"=\"). You'd use decode on a base64 string (eg \"TWFueSBoYW5kcyBtYWtlIGxpZ2h0IHdvcmsu\" - stolen from wikipedia) to turn it into bytes.\nFree tid bit of information:\n\nRun b64.b64decode(\"TWFueSBoYW5kcyBtYWtlIGxpZ2h0IHdvcmsu\") to get the string back as a byte string (a = b\"Many hands make light work.\").\nConvert it to a list by doing b = list(a) -> [77, 97, 110, 121, 32, 104, 97, 110, 100, 115, 32, 109, 97, 107, 101, 32, 108, 105, 103, 104, 116, 32, 119, 111, 114, 107, 46].\nThen to its hex representation (I had to format it in notepad afterwards) \"\".join([hex(c).replace(\"0x\", \"\\\\x\") for c in b]) -> \\x4d\\x61\\x6e\\x79\\x20\\x68\\x61\\x6e\\x64\\x73\\x20\\x6d\\x61\\x6b\\x65\\x20\\x6c\\x69\\x67\\x68\\x74\\x20\\x77\\x6f\\x72\\x6b\\x2e.\nThe disconnect for me is that with open(filename, \"rb\") as f; a = f.read() will return something like what you have in your event['body'] if its an image or something of the sort, so you'd assume that b\"hello world\" would also be bytes similar to that of the with open()..., but apparently not(?). I don't know; a lot to unpack.\n\nIf you're unfamiliar with what is in your event['body'], this string is actually decoded bytes - granted this is slightly ambiguous, because the \\x is actually an escape character for hex in Python, but there are some very easy reproduceable examples where this doesn't seem to be the case (take your event['body'] for instance - what even is this \"\\x1872Ip�d�p\"). You can get decoded bytes from doing something like the below, with the caveat that it was casted to a string, so its no longer a bytes like object - its a string:\na = \"hello world\"\nb = a.encode(\"utf-8\")\n# or\nc = bytes(a, \"utf-8\")\n# or - the one below I think defaults to utf8\na = b\"hello world\"\n\n# the closest I could get to hex representation of the string was from this\n# \"\".join([hex(ord(c)).replace(\"0x\", \"\\\\x\") for c in a])\n\nThing is, I don't know what encoding it was using to decode it into bytes, and its unclear as to if I can expect body to be bytes every time or if it would be a base64 string as isBase64Encoded might would leave me to believe. I'm not 100% certain, but my assumption is that if you do something like the below, granted the resulting decoded string may not be base64, you can get a base64 string output:\nQuick Edit - I believe I misunderstood what isBase64Encoded means. After the writing of this, I think it should be understood as \"are the bytes encoded as base64? True or False.\", I will edit the below code. Additionally, I will assume that the data for event['body'] underwent one of two processes: either opened as bytes -> isBase64Encoded set as False -> sent or opened as bytes -> b64encoded -> converted to bytes -> isBase64Encoded set as True -> sent. 
From here on out in this answer, you will see me refer to the answer before this edit as pre edit and after this edit as post edit.\nimport base64\n# pre edit\nif not event['isBase64Encoded']:\n event['body'] = bytes(event[body], \"whatever that encoding is\").decode()\n # b64encode takes a string and converts it to a bytes like object.\n # b64decode takes a bytes like object and converts it to a string.\n event['body'] = base64.b64decode(event['body'])\nprint(event['body'])\n\n# post edit\n# you might be able to read bytes with an arbitrary encoding using BytesIO\nfrom io import BytesIO \n\nif event['isBase64Encoded']:\n # this would've been sent as the default according to my notes from the edit\n # take the string, convert it to bytes, then decode it - should be a base64 string with a utf8 encoding\n event['body'] = bytes(event['body']).decode()\n # decode the utf8 string to base64 bytes\n event['body'] = base64.b64decode(event['body'])\nelse:\n #event['body'] = bytes(event[body], some encoding)\n event['body'] = BytesIO(event[body]).read()\n\nPre edit - To be 100% clear as to what this does, this:\n\nChecks if it is not a base64 string\nIf not, convert body to bytes with an encoding, then to a string with decode()\nbase64decode() takes that string and if its a base64 string (like from above), and converts it to bytes with a base64 encoding\n\nPost edit - I've included some helpful comments in the code, but either way, it should return bytes.\n\nPushing Objects to the Bucket\nHowever, you seem to also want to push those bytes to a bucket - the docs:\nresponse = client.put_object(\n #Body=bytes(event[\"body\"], encoding),\n # event['body'] should already be bytes by now as per the post edit comments\n Body=event[\"body\"],\n Bucket=\"my_bucket\",\n #ContentEncoding=event[\"multiValueHeaders\"][\"Accept-Encoding\"],\n ContentType=event[\"multiValueHeaders\"][\"Content-Type\"],\n Key=\"my/object/name.mp4\"\n)\n\nPre Edit - So realistically, set all of those key word values and you should be golden - you don't have to run a base64 decode operation in this instance (based on what was returned in your event - you might if it actually was encoded as a base64 string), just pass put_object() the bytes.\nPost Edit - Still set all the key words and read the below (Content Encoding), but we should've handled both cases of isBase64Encoded by now, and the result should be a bytes like object stored in event['body'], so no significant change has to be made to this paragraph in regards to put_object().\nHere is a link to what ContentEncoding is, compared to ContentType, which may shed some light on whether or not you should use it or need to use it.\n\nWhat Your Function Might Should Be\nYou shouldn't use such generalized try / except statements like I did below, but if it really bothers you, you can hunt down what those errors throw and add it in yourself or remove them completely, but conceptually, this should be what you want.\nPre Edit\nimport base64\nimport boto3\nimport os\n\ns3_client = boto3.client('s3')\nbucket_name = os.environ['S3_BUCKET_NAME']\n\n\ndef lambda_handler(event, context):\n if not event['isBase64Encoded']:\n try:\n event['body'] = bytes(event[body], \"whatever that encoding is\").decode()\n except:\n return {\n # AWS probably returns a 403, so maybe return something different for debugging?\n 'statusCode': 406,\n 'body': 'Misconfigured object.'\n }\n else:\n try:\n event['body'] = base64.b64decode(event['body'])\n except:\n return {\n # AWS probably returns a 403, so maybe 
return something different for debugging?\n 'statusCode': 406,\n 'body': 'Misconfigured object.'\n }\n\n try:\n response = client.put_object(\n Body=bytes(event[\"body\"], encoding),\n Bucket=\"my_bucket\",\n #ContentEncoding=event[\"multiValueHeaders\"][\"Accept-Encoding\"],\n ContentType=event[\"multiValueHeaders\"][\"Content-Type\"],\n Key=\"my/object/name.mp4\"\n )\n except:\n return {\n # AWS probably returns a 403, so maybe return something different for debugging?\n 'statusCode': 406,\n 'body': 'Misconfigured object.'\n }\n else:\n print(response)\n return {\n 'statusCode': 200,\n 'body': 'File uploaded'\n }\n\nPost Edit\nimport base64\nimport boto3\nimport os\nfrom io import BytesIO\n\ns3_client = boto3.client('s3')\nbucket_name = os.environ['S3_BUCKET_NAME']\n\n\ndef lambda_handler(event, context):\n\n if event['isBase64Encoded']:\n # this would've been sent as the default according to my notes from the edit\n # take the string, convert it to bytes, then decode it - should be a base64 string with a utf8 encoding\n event['body'] = bytes(event['body']).decode()\n # decode the utf8 string to base64 bytes\n event['body'] = base64.b64decode(event['body'])\n else:\n #event['body'] = bytes(event[body], some encoding)\n event['body'] = BytesIO(event[body]).read()\n\n try:\n response = client.put_object(\n Body=bytes(event[\"body\"], encoding),\n Bucket=\"my_bucket\",\n #ContentEncoding=event[\"multiValueHeaders\"][\"Accept-Encoding\"],\n ContentType=event[\"multiValueHeaders\"][\"Content-Type\"],\n Key=\"my/object/name.mp4\"\n )\n except:\n return {\n # AWS probably returns a 403, so maybe return something different for debugging?\n 'statusCode': 406,\n 'body': 'Misconfigured object.'\n }\n else:\n print(response)\n return {\n 'statusCode': 200,\n 'body': 'File uploaded'\n }\n\n\nExtra Resources\nHow does Base64 work? - wikipedia\nBase64 Encode - docs\nBase64 Decode - docs\n",
"The error is:\n\nValueError: string argument should contain only ASCII characters\n\nThe error is on this line:\ncontend_decode = base64.b64decode(event['body'])\n\nSo, it is saying that event['body'] does not contain base64 encoded data.\nThe binary content will actually be provided in the content parameter.\nTherefore, the line should instead be:\ncontend_decode = base64.b64decode(event['content'])\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"amazon_s3",
"amazon_web_services",
"aws_lambda",
"python"
] |
stackoverflow_0074592604_amazon_s3_amazon_web_services_aws_lambda_python.txt
|
Q:
How to get the relative position of a tkinter canvas after it got scaled and dragged around?
The canvas c is the basis of a kind of CAD modelling software I'm working on. The methods for transforming it work (bound to mouse button 2).
In another function I want to add/edit items on the canvas so I need the new relative position to the canvas.
Context:
That should be (0,0) in the end:
enter image description here
The following is a minimal reproducible example:
import tkinter as tk

root = tk.Tk()
root.geometry("1000x500")

c = tk.Canvas(root, width=1000, height=1000,
              bg="white")

scalingFactorIndex = 0

def callback(event):
    print(c.canvasx(event.x), c.canvasy(event.y))
    print(c.canvasx())

def create_grid(event=None):
    w = c.winfo_reqwidth()  # Get current width of canvas
    h = c.winfo_reqheight()  # Get current height of canvas
    c.delete('grid_line')  # Will only remove the grid_line

    # Creates all vertical lines at intervals of 25
    for i in range(0, w, 25):
        c.create_line([(i, 0), (i, h)], tag='grid_line')

    # Creates all horizontal lines at intervals of 25
    for i in range(0, h, 25):
        c.create_line([(0, i), (w, i)], tag='grid_line')

def move_start(event):
    c.scan_mark(event.x, event.y)

def move_move(event):
    c.scan_dragto(event.x, event.y, gain=1)

def zoomer(event):
    if (event.delta > 0):
        c.scale("all", c.canvasx(event.x), c.canvasy(event.y), 1.1, 1.1)
    elif (event.delta < 0):
        c.scale("all", c.canvasx(event.x), c.canvasy(event.y), 0.9, 0.9)
    scrollRegion = (c.bbox("all"))
    c.configure(scrollregion=((-150, -150, scrollRegion[2]*1.5, scrollRegion[3]*1.5)))

c.bind("<Configure>", create_grid)
c.bind("<ButtonPress-2>", move_start)
c.bind("<B2-Motion>", move_move)
c.bind("<MouseWheel>", zoomer)
c.bind("<Button-3>", callback)
c.pack(fill=tk.BOTH, expand=True)

root.mainloop()
Edit:
Through further experimentation and comments, I found out that the problem only has to do with the zoom function. The other problem could be solved by using canvasx(event.x). However, after zooming and moving, the coordinates are wrong again, and get more wrong the longer you use it.
A:
In a comment you wrote:
To clarify my end goal more: I want to add an object to the model (on the grid) where the mouse pointer is.
To do that you need to pass the x/y coordinate from the event through the canvasx and canvasy methods.
For example, if you want to draw a circle under the mouse pointer when you press "o", you can do something like this:
...
def create_circle(event):
    x = event.widget.canvasx(event.x)
    y = event.widget.canvasy(event.y)
    event.widget.create_oval(x-10, y-10, x+10, y+10, fill="red")
...
c.bind("o", create_circle)
c.focus_set()
...
If you want the drawn object to be at the same scale, you'll have to apply the scaling factor to the coordinates yourself. The above code will give you the correct starting point under the mouse.
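Building on that, a minimal sketch of tracking the zoom factor yourself (the global scale variable is my addition, and this assumes the canvas c and the zoomer binding from the question):
scale = 1.0

def zoomer(event):
    global scale
    factor = 1.1 if event.delta > 0 else 0.9
    scale *= factor  # running product of all zoom steps
    c.scale("all", c.canvasx(event.x), c.canvasy(event.y), factor, factor)

def create_circle(event):
    x = c.canvasx(event.x)
    y = c.canvasy(event.y)
    r = 10 * scale  # radius follows the current zoom level
    c.create_oval(x - r, y - r, x + r, y + r, fill="red")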
|
How to get the relative position of a tkinter canvas after it got scaled and dragged around?
|
The canvas c is the basis of a kind of CAD modelling software I'm working on. The methods for transforming it work (bound to mouse button 2).
In another function I want to add/edit items on the canvas so I need the new relative position to the canvas.
Context:
That should be (0,0) in the end:
enter image description here
The following is a minimal reproducible example:
import tkinter as tk

root = tk.Tk()
root.geometry("1000x500")

c = tk.Canvas(root, width=1000, height=1000,
              bg="white")

scalingFactorIndex = 0

def callback(event):
    print(c.canvasx(event.x), c.canvasy(event.y))
    print(c.canvasx())

def create_grid(event=None):
    w = c.winfo_reqwidth()  # Get current width of canvas
    h = c.winfo_reqheight()  # Get current height of canvas
    c.delete('grid_line')  # Will only remove the grid_line

    # Creates all vertical lines at intervals of 25
    for i in range(0, w, 25):
        c.create_line([(i, 0), (i, h)], tag='grid_line')

    # Creates all horizontal lines at intervals of 25
    for i in range(0, h, 25):
        c.create_line([(0, i), (w, i)], tag='grid_line')

def move_start(event):
    c.scan_mark(event.x, event.y)

def move_move(event):
    c.scan_dragto(event.x, event.y, gain=1)

def zoomer(event):
    if (event.delta > 0):
        c.scale("all", c.canvasx(event.x), c.canvasy(event.y), 1.1, 1.1)
    elif (event.delta < 0):
        c.scale("all", c.canvasx(event.x), c.canvasy(event.y), 0.9, 0.9)
    scrollRegion = (c.bbox("all"))
    c.configure(scrollregion=((-150, -150, scrollRegion[2]*1.5, scrollRegion[3]*1.5)))

c.bind("<Configure>", create_grid)
c.bind("<ButtonPress-2>", move_start)
c.bind("<B2-Motion>", move_move)
c.bind("<MouseWheel>", zoomer)
c.bind("<Button-3>", callback)
c.pack(fill=tk.BOTH, expand=True)

root.mainloop()
Edit:
Through further experimentation and comments, I found out that the problem only has to do with the zoom function. The other problem could be solved by using canvasx(event.x). However, after zooming and moving, the coordinates are wrong again, and get more wrong the longer you use it.
|
[
"In a comment you wrote:\n\nTo clarify my end goal more: I want to add an object to the model (on the grid) where the mouse pointer is.\n\nTo do that you need to pass the x/y coordinate from the event through the canvasx and canvasy methods.\nFor example, if you want to draw a circle under the mouse pointer when you press \"o\", you can do something like this:\n...\ndef create_circle(event):\n x = event.widget.canvasx(event.x)\n y = event.widget.canvasy(event.y)\n event.widget.create_oval(x-10, y-10, x+10, y+10, fill=\"red\")\n...\nc.bind(\"o\", create_circle)\nc.focus_set()\n...\n\nIf you want the drawn object to be at the same scale, you'll have to apply the scaling factor to the coordinates yourself. The above code will give you the correct starting point under the mouse.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"tkinter",
"tkinter_canvas"
] |
stackoverflow_0074592864_python_tkinter_tkinter_canvas.txt
|
Q:
Find, delete and add text into pdf file in Python
I have a pdf file; I need to delete certain text in it, then add new text below the existing one.
I'm trying to use the PyMuPDF library (fitz): open the file, set the text to search for. But I did not find how to delete it and add new text.
Please could you help me with how to delete the found text and add to the existing one?
The choice of library is not important; we can use PyPDF2 or others.
The sample pdf file with a description is attached.
import fitz
doc = fitz.open(MyFilePath)
page = doc[0]
text1 = "ANA"
text_instances1 = page.searchFor(text1)
# found text should be deleted …
text_to_add = "Text"
text2 = "TAIL NO."
text_instances2 = page.searchFor(text2)
# should be added "text_to_add" after found text "text2"
doc.save(OutputFilePath, garbage=4, deflate=True, clean=True)
A:
The library doesn't officially support adding/deleting text in a pdf document. However, there is a recorded issue with a workaround. You can see the answer here from the author of the library on how to get around this using a Text Modification method.
It also worries me that the documentation for the library seems to be unavailable. Not sure if this is a permanent case, but if so you should consider using a different library. You should see the answers here on the best alternative library - Add text to Existing PDF using Python
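For what it's worth, newer PyMuPDF releases do expose redaction directly; a minimal sketch using the current snake_case method names (search_for, add_redact_annot, apply_redactions, insert_text - check that your installed version supports them):
import fitz  # PyMuPDF

doc = fitz.open("input.pdf")
page = doc[0]

# erase every occurrence of "ANA" via redaction annotations
for rect in page.search_for("ANA"):
    page.add_redact_annot(rect)
page.apply_redactions()

# write new text just below the first match of "TAIL NO."
hits = page.search_for("TAIL NO.")
if hits:
    r = hits[0]
    page.insert_text((r.x0, r.y1 + 10), "Text", fontsize=8)

doc.save("output.pdf", garbage=4, deflate=True, clean=True)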
A:
disclaimer: I am the author of borb, the library used in this answer
Replacing text in a PDF is hard (as you have no doubt found out). The problem is that PDF contains (in the worst case) only the rendering instructions in order to put content on the page.
Your document might contain (in pseudo-code):
go to 80, 700
set the active font to Helvetica, size 12
render the characters "Hell"
move to 120, 700
render the characters "o"
move to 130, 700
render the characters "World"
As you can see, there is no concept of "words". Letters can just be rendered wherever they happen to be needed. Spaces don't need to be included, software responsible for creating a PDF can just tell the renderer to move the cursor along the x-axis.
In order to replace text, you first need to find it.
from borb.pdf import PDF
from borb.toolkit import RegularExpressionTextExtraction
# RegularExpressionTextExtraction implements EventListener
# EventListener processes rendering events
# you can pass a regular expression to RegularExpressionTextExtraction
# and it will keep track of where that content appears
l: RegularExpressionTextExtraction = RegularExpressionTextExtraction("ANA")
# now we need to load the PDF
with open("input.pdf", "rb") as fh:
PDF.loads(fh, [l])
# Now we can access the locations of the match(es).
# I am only going to use the first one, but feel free
# to update my code to take into account all matches
#
# A match can have multiple bounding boxes
# for instance if the regular expression could be matched over
# multiple lines of text.
print(l.get_matches_for_page(0)[0].get_bounding_boxes()[0])
Next step is to remove content at a given location.
For this we can use redaction. Redaction erases content in a PDF.
from borb.pdf import PDF
from borb.pdf import Document
from borb.pdf import Page
from borb.pdf.canvas.geometry.rectangle import Rectangle
from borb.pdf.canvas.layout.annotation.redact_annotation import RedactAnnotation

from decimal import Decimal
import typing

# open the PDF
doc: typing.Optional[Document] = None
with open("input.pdf", "rb") as fh:
    doc = PDF.loads(fh)

# get the first page
# maybe you'll need to modify this to apply it to all pages
# keep that in mind
page: Page = doc.get_page(0)

# add the redaction annotation
page.add_annotation(
    RedactAnnotation(
        Rectangle(Decimal(405), Decimal(721), Decimal(40), Decimal(8))
    )
)

# apply redaction annotations
page.apply_redact_annotations()

# now we can store the PDF again
with open("input_002.pdf", "wb") as out_file_handle:
    PDF.dumps(out_file_handle, doc)
Lastly, we need to put some content back in the PDF, at the location that we previously removed content from.
from borb.pdf import PDF
from borb.pdf import Document
from borb.pdf import Page
from borb.pdf import Paragraph
from borb.pdf.canvas.geometry.rectangle import Rectangle

from decimal import Decimal
import typing

# load the PDF
doc: typing.Optional[Document] = None
with open("input.pdf", "rb") as fh:
    doc = PDF.loads(fh)

# add a Paragraph at an absolute location
# fmt: off
r: Rectangle = Rectangle(
    Decimal(59),              # x: 0 + page_margin
    Decimal(848 - 84 - 100),  # y: page_height - page_margin - height_of_textbox
    Decimal(595 - 59 * 2),    # width: page_width - 2 * page_margin
    Decimal(100),             # height
)
# fmt: on

# the next line of code uses absolute positioning
page: Page = doc.get_page(0)
Paragraph("Hello World!").paint(page, r)

# store the PDF
with open("output.pdf", "wb") as fh:
    PDF.dumps(fh, doc)
borb is an open source, pure Python PDF library that creates, modifies and reads PDF documents. You can download it using:
pip install borb
Alternatively, you can build from source by forking/downloading the GitHub repository.
|
Find, delete and add text into pdf file in Python
|
I have a pdf file; I need to delete certain text in it, then add new text below the existing one.
I'm trying to use the PyMuPDF library (fitz): open the file, set the text to search for. But I did not find how to delete it and add new text.
Please could you help me with how to delete the found text and add to the existing one?
The choice of library is not important; we can use PyPDF2 or others.
The sample pdf file with a description is attached.
import fitz
doc = fitz.open(MyFilePath)
page = doc[0]
text1 = "ANA"
text_instances1 = page.searchFor(text1)
# found text should be deleted …
text_to_add = "Text"
text2 = "TAIL NO."
text_instances2 = page.searchFor(text2)
# should be added "text_to_add" after found text "text2"
doc.save(OutputFilePath, garbage=4, deflate=True, clean=True)
|
[
"The library doesn't officially support adding/deleting text of a pdf document. However, from a recorded issue there is a workaround this. You can see the answer here from the author of the library on how you can get around this using a Text Modification method.\nIt also worries me that the documentation for the library seems to be unavailable. Not sure if this a permanent case but if so you should consider using a different library. You should see the answers here on the best alternative library - Add text to Existing PDF using Python\n",
"disclaimer: I am the author of borb, the library used in this answer\nReplacing text in a PDF is hard (as you have no doubt found out). The problem is that PDF contains (in the worst case) only the rendering instructions in order to put content on the page.\nYour document might contain (in pseudo-code):\n\ngo to 80, 700\nset the active font to Helvetica, size 12\nrender the characters \"Hell\"\nmove to 120, 700\nrender the characters \"o\"\nmove to 130, 700\nrender the characters \"World\"\n\nAs you can see, there is no concept of \"words\". Letters can just be rendered wherever they happen to be needed. Spaces don't need to be included, software responsible for creating a PDF can just tell the renderer to move the cursor along the x-axis.\nIn order to replace text, you first need to find it.\nfrom borb.pdf import PDF\nfrom borb.toolkit import RegularExpressionTextExtraction\n\n# RegularExpressionTextExtraction implements EventListener\n# EventListener processes rendering events\n# you can pass a regular expression to RegularExpressionTextExtraction\n# and it will keep track of where that content appears\nl: RegularExpressionTextExtraction = RegularExpressionTextExtraction(\"ANA\")\n\n# now we need to load the PDF\nwith open(\"input.pdf\", \"rb\") as fh:\n PDF.loads(fh, [l])\n\n# Now we can access the locations of the match(es).\n# I am only going to use the first one, but feel free\n# to update my code to take into account all matches\n#\n# A match can have multiple bounding boxes\n# for instance if the regular expression could be matched over\n# multiple lines of text.\nprint(l.get_matches_for_page(0)[0].get_bounding_boxes()[0])\n\nNext step is to remove content at a given location.\nFor this we can use redaction. Redaction erases content in a PDF.\nfrom borb.pdf import PDF\nfrom borb.pdf import Document\nfrom borb.pdf import Page\nfrom borb.pdf.canvas.layout.annotation.redact_annotation import RedactAnnotation\n\nimport typing\n\n# open the PDF\ndoc: typing.Optional[Document] = None\nwith open(\"input.pdf\", \"rb\") as fh:\n doc = PDF.loads(fh)\n\n# get the first page\n# maybe you'll need to modify this to apply it to all pages\n# keep that in mind\npage: Page = doc.get_page(0)\n\n# add the redaction annotation\npage.add_annotation(\n RedactAnnotation(\n Rectangle(Decimal(405), Decimal(721), Decimal(40), Decimal(8))\n )\n )\n )\n\n# apply redaction annotations\npage.apply_redact_annotations()\n\n# now we can store the PDF again\nwith open(\"input_002.pdf\", \"wb\") as out_file_handle:\n PDF.dumps(out_file_handle, doc)\n\nLastly, we need to put some content back in the PDF, at the location that we previously removed content from.\nfrom borb.pdf import PDF\nfrom borb.pdf import Document\nfrom borb.pdf import Page\nfrom borb.pdf import Paragraph\n\nimport typing\n\n# load the PDF\ndoc: typing.Optional[Document] = None\nwith open(\"input.pdf\", \"rb\") as fh:\n doc = PDF.loads(fh)\n\n# add a Paragraph at an absolute location\n# fmt: off\nr: Rectangle = Rectangle(\n Decimal(59), # x: 0 + page_margin\n Decimal(848 - 84 - 100), # y: page_height - page_margin - height_of_textbox\n Decimal(595 - 59 * 2), # width: page_width - 2 * page_margin\n Decimal(100), # height\n )\n# fmt: on\n\n# the next line of code uses absolute positioning\npage: Page = doc.get_page(0)\nParagraph(\"Hello World!\").paint(page, r)\n\n# store the PDF\nwith open(\"output.pdf\", \"wb\") as fh:\n PDF.dumps(fh, doc)\n\nborb is an open source, pure Python PDF library that creates, modifies and reads PDF documents. 
You can download it using:\npip install borb\n\nAlternatively, you can build from source by forking/downloading the GitHub repository.\n"
] |
[
0,
0
] |
[] |
[] |
[
"pdf",
"python",
"python_3.x"
] |
stackoverflow_0062793843_pdf_python_python_3.x.txt
|
Q:
How to count the number of list elements embedded in a dataframe column?
I have a dataframe that looks like the below (inclusive of the brackets and quotes):
ID      Interests
2131    ['music','art','travel']
3213    []
3132    ['martial arts']
3232    ['martial arts']
The desired output I am trying to get is:
ID      Interests
2131    3
3213    0
3132    1
3232    1
I've tried using
from collections import Counter
ravel = np.ravel(user.personal_interests.to_list())
But that just gives me the count of each combination i.e.:
['martial arts']:2
I've also tried stripping the quotes and using a series to count, but to no avail.
A:
If you have lists (['music','art','travel']):
df['Interests'] = df['Interests'].str.len()
If you have strings ("['music','art','travel']"):
from ast import literal_eval
df['Interests'] = df['Interests'].apply(literal_eval).str.len()
Or, if you know that there are no quoted commas:
df['Interests'] = df['Interests'].str.count(',').add(df['Interests'].ne('[]'))
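A quick check of the string case on a small hypothetical frame:
import pandas as pd
from ast import literal_eval

df = pd.DataFrame({"ID": [2131, 3213], "Interests": ["['music','art','travel']", "[]"]})
df['Interests'] = df['Interests'].apply(literal_eval).str.len()
print(df)  # Interests becomes 3 and 0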
A:
You can try using the len() method in Python.
If df is your dataframe:
df['new_interests'] = df['Interests'].apply(len)
|
How to count the number of list elements embedded in a dataframe column?
|
I have a dataframe that looks like the below (inclusive of the brackets and quotes):
ID      Interests
2131    ['music','art','travel']
3213    []
3132    ['martial arts']
3232    ['martial arts']
The desired output I am trying to get is:
ID      Interests
2131    3
3213    0
3132    1
3232    1
I've tried using
from collections import Counter
ravel = np.ravel(user.personal_interests.to_list())
But that just gives me the count of each combination i.e.:
['martial arts']:2
I've also tried stripping the quotes and using a series to count, but to no avail.
|
[
"If you have lists (['music','art','travel']):\ndf['Interests'] = df['Interests'].str.len()\n\nIf you have strings (\"['music','art','travel']\"):\nfrom ast import literal_eval\n\ndf['Interests'] = df['Interests'].apply(literal_eval).str.len()\n\nOr, if you know that there are no quoted commas:\ndf['Interests'] = df['Interests'].str.count(',').add(df['Interests'].ne('[]'))\n\n",
"You can try using len() method in Python\nIf df is your dataframe,\ndf['new_interests'] = df['Interests'].apply(lambda x: temp.append(len(x)))\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"dataframe",
"list",
"numpy",
"pandas",
"python"
] |
stackoverflow_0074594185_dataframe_list_numpy_pandas_python.txt
|
Q:
Nonzero for integers
My problem is as follows. I am generating a random bitstring of size n, and need to iterate over the indices for which the random bit is 1. For example, if my random bitstring ends up being 00101, I want to retrieve [2, 4] (on which I will iterate over). The goal is to do so in the fastest way possible with Python/NumPy.
One of the fast methods is to use NumPy and do
bitstring = np.random.randint(2, size=(n,))
l = np.nonzero(bitstring)[0]
The advantage with np.non_zero is that it finds indices of bits set to 1 much faster than if one iterates (with a for loop) over each bit and checks if it is set to 1.
Now, NumPy can generate a random bitstring faster via np.random.bit_generator.randbits(n). The problem is that it returns it as an integer, on which I cannot use np.nonzero anymore. I saw that for integers one can get the count of bits set to 1 in an integer x by using x.bit_count(), however there is no function to get the indices where bits are set to 1. So currently, I have to resort to a slow for loop, hence losing the initial speedup given by np.random.bit_generator.randbits(n).
How would you do something similar to (and as fast as) np.non_zero, but on integers instead?
Thank you in advance for your suggestions!
A:
A minor optimisation to your code would be to use the new style random interface and generate bools rather than 64bit integers
rng = np.random.default_rng()

def original(n):
    bitstring = rng.integers(2, size=n, dtype=bool)
    return np.nonzero(bitstring)[0]

this causes it to take ~24 µs on my laptop, tested for n up to 128.
I've previously noticed that getting Numpy to generate a permutation is particularly fast, hence my comment above. Leading to:
def perm(n):
    a = rng.permutation(n)
    return a[:rng.binomial(n, 0.5)]

which takes between ~7 µs and ~10 µs depending on n. It also returns the indices out of order, not sure if that's an issue for you. If your n isn't changing much, you could also swap to using rng.shuffle on a pre-allocated array, something like:
n = 32
a = np.arange(n)

def shuffle():
    rng.shuffle(a)
    return a[:rng.binomial(n, 0.5)]
which saves a couple of microseconds.
A:
After some interesting proposals, I decided to do some benchmarking to understand how the running times grow as a function of n. The functions tested are the following:
rng = np.random.default_rng()  # shared generator so func4 and func5 can reuse it

def func1(n):
    bit_array = np.random.randint(2, size=n)
    return np.nonzero(bit_array)[0]

def func2(n):
    bit_int = np.random.bit_generator.randbits(n)
    a = np.zeros(bit_int.bit_count())
    i = 0
    for j in range(n):
        if 1 & (bit_int >> j):
            a[i] = j
            i += 1
    return a

def func3(n):
    bit_string = format(np.random.bit_generator.randbits(n), f'0{n}b')
    bit_array = np.array(list(bit_string), dtype=int)
    return np.nonzero(bit_array)[0]

def func4(n):
    a = rng.permutation(n)
    return a[:rng.binomial(n, 0.5)]

def func5(n):
    a = np.arange(n)
    rng.shuffle(a)
    return a[:rng.binomial(n, 0.5)]
I used timeit to do the benchmark, looping 1000 times over a statement and averaging over 10 runs. The value of n ranges from 2 to 65536, growing as powers of 2. The average running time is plotted and error bars correspond to the standard deviation.
For solutions generating a bitstring, the simple func1 actually performs best among them whenever n is large enough (n>32). We can see that for low values of n (n< 16), using the randbits solution with the for loop (func2) is fastest, because the loop is not costly yet. However as n becomes larger, this becomes the worst solution, because all the time is spent in the for loop. This is why having a nonzero for integers would bring the best of both worlds and hopefully give a faster solution. We can observe that func3, which does a conversion in order to use nonzero after using randbits, spends too long doing the conversion.
For implementations which exploit the binomial distribution (see Sam Mason's answer), we see that the use of shuffle (func5) instead of permutation (func4) can reduce the time by a bit, but overall they have similar performance.
Considering all values of n (that were tested), the solution given by Sam Mason which employs a binomial distribution together with shuffling (func5) is so far the most performant in terms of running time. Let's see if this can be improved!
A:
I had a play with Cython to see how much difference it would make. I ended up with quite a lot of code and only ~5x better runtime performance:
from cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer

import numpy as np
cimport numpy as np
cimport cython

from numpy.random cimport bitgen_t

np.import_array()

DTYPE = np.uint32
ctypedef np.uint32_t DTYPE_t

cdef extern int __builtin_popcountl(unsigned long) nogil
cdef extern int __builtin_ffsl(unsigned long) nogil

cdef const char *bgen_capsule_name = "BitGenerator"

@cython.boundscheck(False)  # Deactivate bounds checking
@cython.wraparound(False)   # Deactivate negative indexing.
cdef size_t generate_bits(object bitgen, np.uint64_t *state, Py_ssize_t state_len, np.uint64_t last_mask):
    cdef Py_ssize_t i
    cdef size_t nset
    cdef bitgen_t *rng

    capsule = bitgen.capsule
    if not PyCapsule_IsValid(capsule, bgen_capsule_name):
        raise ValueError("Expecting Numpy BitGenerator Capsule")
    rng = <bitgen_t *> PyCapsule_GetPointer(capsule, bgen_capsule_name)

    with bitgen.lock:
        nset = 0
        for i in range(state_len-1):
            state[i] = rng.next_uint64(rng.state)
            nset += __builtin_popcountl(state[i])

        i = state_len-1
        state[i] = rng.next_uint64(rng.state) & last_mask
        nset += __builtin_popcountl(state[i])

    return nset

cdef size_t write_setbits(DTYPE_t *result, DTYPE_t off, np.uint64_t state) nogil:
    cdef size_t j
    cdef int k
    j = 0
    while state:
        # find first set bit returns zero when nothing is set
        k = __builtin_ffsl(state) - 1
        # clear out bit k
        state &= ~(1ul<<k)
        # record in output
        result[j] = off + k
        j += 1
    return j

@cython.boundscheck(False)  # Deactivate bounds checking
@cython.wraparound(False)   # Deactivate negative indexing.
def rint(bitgen, unsigned int n):
    cdef Py_ssize_t i, j, nset
    cdef np.uint64_t[::1] state
    cdef DTYPE_t[::1] result

    state = np.empty((n + 63) // 64, dtype=np.uint64)

    nset = generate_bits(bitgen, &state[0], len(state), (1ul << (n & 63)) - 1)

    pyresult = np.empty(nset, dtype=DTYPE)
    result = pyresult

    j = 0
    for i in range(len(state)):
        j += write_setbits(&result[j], i * 64, state[i])

    return pyresult
The above code is easy to use via the Cython Jupyter extension.
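For reference, and assuming the Cython package is installed, that means a first notebook cell with:
%load_ext Cython

and then the module above pasted into a cell that starts with the magic:
%%cython --compile-args=-O3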
Comparing this to slightly tidied up versions of the OP's code can be done via:
import random
import timeit

import numpy as np
import matplotlib.pyplot as plt

bitgen = np.random.PCG64()

def func1(n):
    # bool type is a bit faster
    bit_array = np.random.randint(2, size=n, dtype=bool)
    return np.nonzero(bit_array)[0]

def func2(n):
    # OPs variant ends up using a CSPRNG which is slower
    bit_int = random.getrandbits(n)
    # this is much easier than using numpy arrays
    return [i for i in range(n) if 1 & (bit_int >> i)]

def func3(n):
    bit_string = format(random.getrandbits(n), f'0{n}b')
    bit_array = np.array(list(bit_string), dtype='int8')
    return np.nonzero(bit_array)[0]

def func4(n):
    # shuffle variant is mostly the same
    # plot already busy enough
    a = np.random.permutation(n)
    return a[:np.random.binomial(n, 0.5)]

def func_cython(n):
    return rint(bitgen, n)

result = {}
niter = [2**i for i in range(1, 17)]
for name in 'func1 func2 func3 func4 func_cython'.split():
    result[name] = res = []
    for n in niter:
        t = timeit.Timer(f"fn({n})", f"fn = {name}", globals=globals())
        nit, dt = t.autorange()
        res.append(dt / nit)

plt.loglog()
for name, times in result.items():
    plt.plot(niter, np.array(times) * 1000, '.-', label=name)
plt.legend()
Which might produce output like:
Note that in order to reduce variance it's helpful to turn off CPU frequency scaling and turn off turbo modes. The Arch wiki has useful info on how to do this under Linux.
A:
You could convert the number you get with randbits(n) to a numpy.ndarray.
Depending on the size of n, the conversion should be faster than the loop.
n = 10
l = np.random.bit_generator.randbits(n) # gives you the int 616
l_string = f'{l:0{n}b}' # gives you a string representation of the int in length n 1001101000
l_nparray = np.array(list(l_string), dtype=int) # gives you the numpy.ndarray like np.random.randint [1 0 0 1 1 0 1 0 0 0]
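From there, np.nonzero gives the indices of the set bits, matching the output of the original approach:
l_indices = np.nonzero(l_nparray)[0]  # array([0, 3, 4, 6]) for the string 1001101000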
|
Nonzero for integers
|
My problem is as follows. I am generating a random bitstring of size n, and need to iterate over the indices for which the random bit is 1. For example, if my random bitstring ends up being 00101, I want to retrieve [2, 4] (on which I will iterate over). The goal is to do so in the fastest way possible with Python/NumPy.
One of the fast methods is to use NumPy and do
bitstring = np.random.randint(2, size=(n,))
l = np.nonzero(bitstring)[0]
The advantage with np.non_zero is that it finds indices of bits set to 1 much faster than if one iterates (with a for loop) over each bit and checks if it is set to 1.
Now, NumPy can generate a random bitstring faster via np.random.bit_generator.randbits(n). The problem is that it returns it as an integer, on which I cannot use np.nonzero anymore. I saw that for integers one can get the count of bits set to 1 in an integer x by using x.bit_count(), however there is no function to get the indices where bits are set to 1. So currently, I have to resort to a slow for loop, hence losing the initial speedup given by np.random.bit_generator.randbits(n).
How would you do something similar to (and as fast as) np.non_zero, but on integers instead?
Thank you in advance for your suggestions!
|
[
"A minor optimisation to your code would be to use the new style random interface and generate bools rather than 64bit integers\nrng = np.random.default_rng()\n\ndef original(n):\n bitstring = rng.integers(2, size=n, dtype=bool)\n return np.nonzero(bitstring)[0]\n\nthis causes it to take ~24 µs on my laptop, tested n upto 128.\nI've previously noticed that getting a Numpy to generate a permutation is particularly fast, hence my comment above. Leading to:\ndef perm(n):\n a = rng.permutation(n)\n return a[:rng.binomial(n, 0.5)]\n\nwhich takes between ~7 µs and ~10 µs depending on n. It also returns the indicies out of order, not sure if that's an issue for you. If your n isn't changing much, you could also swap to using rng.shuffle on an pre-allocated array, something like:\nn = 32\na = np.arange(n)\n\ndef shuffle():\n rng.shuffle(a)\n return a[:rng.binomial(n, 0.5)]\n\nwhich saves a couple of microseconds.\n",
"After some interesting proposals, I decided to do some benchmarking to understand how the running times grow as a function of n. The functions tested are the following:\ndef func1(n):\n bit_array = np.random.randint(2, size=n)\n return np.nonzero(bit_array)[0]\n\ndef func2(n):\n bit_int = np.random.bit_generator.randbits(n)\n a = np.zeros(bit_int.bit_count())\n i = 0\n for j in range(n):\n if 1 & (bit_int >> j):\n a[i] = j\n i += 1\n return a\n\ndef func3(n):\n bit_string = format(np.random.bit_generator.randbits(n), f'0{n}b')\n bit_array = np.array(list(bit_string), dtype=int)\n return np.nonzero(bit_array)[0]\n\ndef func4(n):\n rng = np.random.default_rng()\n a = rng.permutation(n)\n return a[:rng.binomial(n, 0.5)]\n\ndef func5(n):\n a = np.arange(n)\n rng.shuffle(a)\n return a[:rng.binomial(n, 0.5)]\n\nI used timeit to do the benchmark, looping 1000 over a statement each time and averaging over 10 runs. The value of n ranges from 2 to 65536, growing as powers of 2. The average running time is plotted and error bars correspondond to the standard deviation.\n\nFor solutions generating a bitstring, the simple func1 actually performs best among them whenever n is large enough (n>32). We can see that for low values of n (n< 16), using the randbits solution with the for loop (func2) is fastest, because the loop is not costly yet. However as n becomes larger, this becomes the worst solution, because all the time is spent in the for loop. This is why having a nonzero for integers would bring the best of both world and hopefully give a faster solution. We can observe that func3, which does a conversion in order to use nonzero after using randbits spends too long doing the conversion.\nFor implementations which exploit the binomial distribution (see Sam Mason's answer), we see that the use of shuffle (func5) instead of permutation (func4) can reduce the time by a bit, but overall they have similar performance.\nConsidering all values of n (that were tested), the solution given by Sam Mason which employs a binomial distribution together with shuffling (func5) is so far the most performant in terms of running time. Let's see if this can be improved!\n",
"I had a play with Cython to see how much difference it would make. I ended up with quite a lot of code and only ~5x better runtime performance:\nfrom cpython.pycapsule cimport PyCapsule_IsValid, PyCapsule_GetPointer\n\nimport numpy as np\ncimport numpy as np\ncimport cython\n\nfrom numpy.random cimport bitgen_t\n\nnp.import_array()\n\nDTYPE = np.uint32\nctypedef np.uint32_t DTYPE_t\n\ncdef extern int __builtin_popcountl(unsigned long) nogil\ncdef extern int __builtin_ffsl(unsigned long) nogil\n\ncdef const char *bgen_capsule_name = \"BitGenerator\"\n\n@cython.boundscheck(False) # Deactivate bounds checking\n@cython.wraparound(False) # Deactivate negative indexing.\ncdef size_t generate_bits(object bitgen, np.uint64_t *state, Py_ssize_t state_len, np.uint64_t last_mask):\n cdef Py_ssize_t i\n cdef size_t nset\n cdef bitgen_t *rng\n\n capsule = bitgen.capsule\n if not PyCapsule_IsValid(capsule, bgen_capsule_name):\n raise ValueError(\"Expecting Numpy BitGenerator Capsule\")\n rng = <bitgen_t *> PyCapsule_GetPointer(capsule, bgen_capsule_name)\n\n with bitgen.lock:\n nset = 0\n for i in range(state_len-1):\n state[i] = rng.next_uint64(rng.state)\n nset += __builtin_popcountl(state[i])\n\n i = state_len-1\n state[i] = rng.next_uint64(rng.state) & last_mask\n nset += __builtin_popcountl(state[i])\n \n return nset\n\ncdef size_t write_setbits(DTYPE_t *result, DTYPE_t off, np.uint64_t state) nogil:\n cdef size_t j\n cdef int k\n j = 0\n while state:\n # find first set bit returns zero when nothing is set\n k = __builtin_ffsl(state) - 1\n # clear out bit k\n state &= ~(1ul<<k)\n # record in output\n result[j] = off + k\n j += 1\n return j\n\n@cython.boundscheck(False) # Deactivate bounds checking\n@cython.wraparound(False) # Deactivate negative indexing.\ndef rint(bitgen, unsigned int n):\n cdef Py_ssize_t i, j, nset\n cdef np.uint64_t[::1] state\n cdef DTYPE_t[::1] result\n\n state = np.empty((n + 63) // 64, dtype=np.uint64)\n\n nset = generate_bits(bitgen, &state[0], len(state), (1ul << (n & 63)) - 1)\n\n pyresult = np.empty(nset, dtype=DTYPE)\n result = pyresult\n\n j = 0\n for i in range(len(state)):\n j += write_setbits(&result[j], i * 64, state[i])\n\n return pyresult\n\nThe above code is easy to use via the Cython Jupyter extension.\nComparing this to slightly tidied up versions of the OP's code can be done via:\nimport random\nimport timeit\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nbitgen = np.random.PCG64()\n\ndef func1(n):\n # bool type is a bit faster\n bit_array = np.random.randint(2, size=n, dtype=bool)\n return np.nonzero(bit_array)[0]\n\ndef func2(n):\n # OPs variant ends up using a CSPRNG which is slower\n bit_int = random.getrandbits(n)\n # this is much easier than using numpy arrays\n return [i for i in range(n) if 1 & (bit_int >> i)]\n\ndef func3(n):\n bit_string = format(random.getrandbits(n), f'0{n}b')\n bit_array = np.array(list(bit_string), dtype='int8')\n return np.nonzero(bit_array)[0]\n\ndef func4(n):\n # shuffle variant is mostly the same\n # plot already busy enough\n a = np.random.permutation(n)\n return a[:np.random.binomial(n, 0.5)]\n\ndef func_cython(n):\n return rint(bitgen, n)\n\nresult = {}\nniter = [2**i for i in range(1, 17)]\nfor name in 'func1 func2 func3 func4 func_cython'.split():\n result[name] = res = []\n for n in niter:\n t = timeit.Timer(f\"fn({n})\", f\"fn = {name}\", globals=globals())\n nit, dt = t.autorange()\n res.append(dt / nit)\n\nplt.loglog()\nfor name, times in result.items():\n plt.plot(niter, np.array(times) * 1000, '.-', 
label=name)\nplt.legend()\n\nWhich might produce output like:\n\nNote that in order to reduce variance it's helpful to turn off CPU frequency scaling and turn off turbo modes. The Arch wiki has useful info on how to do this under Linux.\n",
"you could convert the number you get with randbits(n) to a numpy.ndarray.\ndepending on the size of n the compute time of the conversion should be faster than the loop.\nn = 10\nl = np.random.bit_generator.randbits(n) # gives you the int 616\nl_string = f'{l:0{n}b}' # gives you a string representation of the int in length n 1001101000\nl_nparray = np.array(list(l_string), dtype=int) # gives you the numpy.ndarray like np.random.randint [1 0 0 1 1 0 1 0 0 0]\n\n"
] |
[
1,
1,
1,
0
] |
[] |
[] |
[
"bitstring",
"numpy",
"python",
"random"
] |
stackoverflow_0074557590_bitstring_numpy_python_random.txt
|
Q:
How to read a CSV file in Pandas with quote characters?
I have a csv file dataset that looks like this:
dataset header
Date,"TTF_1M_15m","Own Trades (Sell)","Own Trades (Buy)"
2022-01-03 09:00:00,"68.54485294117647","",""
2022-01-03 09:15:00,"66.46498579545455","",""
2022-01-03 09:30:00,"69.53991935483872","",""
.......
I'm having trouble reading this into pandas due to the quotations.
So far I've been trying to use this line of code but I am just getting error messages:
data = pd.read_csv("APE_Data_Export_15min_2022.csv", sep=',', engine='python')
I would like the first line to indicate the 3 columns: TTF_1M_15m, Own Trades (Sell), Own Trades (Buy) with corresponding data underneath.
Would appreciate any help, thanks!
A:
You can try using,
df = pd.read_csv('APE_Data_Export_15min_2022.csv', sep=',', engine='python').replace('"','', regex=True)
Output:
OUTPUT with actual data:
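For reference, read_csv can usually handle a standard double-quoted file like this natively; a minimal sketch, assuming the layout shown in the question:
import csv
import pandas as pd

# quotechar='"' and quoting=csv.QUOTE_MINIMAL are already the defaults;
# pandas strips the quotes and infers dtypes as usual.
data = pd.read_csv("APE_Data_Export_15min_2022.csv", quotechar='"',
                   quoting=csv.QUOTE_MINIMAL, parse_dates=["Date"])
print(data.columns.tolist())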
|
How to read a CSV file in Pandas with quote characters?
|
I have a csv file dataset that looks like this:
dataset header
Date,"TTF_1M_15m","Own Trades (Sell)","Own Trades (Buy)"
2022-01-03 09:00:00,"68.54485294117647","",""
2022-01-03 09:15:00,"66.46498579545455","",""
2022-01-03 09:30:00,"69.53991935483872","",""
.......
I'm having trouble reading this into pandas due to the quotations.
So far I've been trying to use this line of code but I am just getting error messages:
data = pd.read_csv("APE_Data_Export_15min_2022.csv", sep=',', engine='python')
I would like the first line to indicate the 3 columns: TTF_1M_15m, Own Trades (Sell), Own Trades (Buy) with corresponding data underneath.
Would appreciate any help, thanks!
|
[
"You can try using,\ndf = pd.read_csv('APE_Data_Export_15min_2022.csv', sep=',', engine='python').replace('\"','', regex=True)\n\nOutput:\n\nOUTPUT with actual data:\n\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"export_to_csv",
"pandas",
"python"
] |
stackoverflow_0074594243_csv_export_to_csv_pandas_python.txt
|
Q:
Docker image url validation for django
I want to get a Docker image URL from the user, but such URLs aren't accepted by models.URLField() in Django. For example, this URL: hub.something.com/nginx:1.21 raises a validation error. How can I fix it?
A:
Try this out:
from django.core.validators import URLValidator
from django.utils.deconstruct import deconstructible
from django.db import models
# I suggest to move this class to validators.py outside of this app folder
# so it can be easily accessible by all models
@deconstructible
class DockerHubURLValidator(URLValidator):
domain_re = URLValidator.domain_re + '(?:[a-z0-9-.\/:]*)'
class ModelName(models.Model):
image = models.CharField(max_length=200, validators=[DockerHubURLValidator()])
I am not great at regexes but I believe I did it right; when I try the new domain_re regex, it accepts .com/nginx:1.21 as the domain part. The rest of the URL is handled automatically by Django.
If there will be another case of regex, or for some reason this regex won't work as I expect, I believe from here you will find a way ;)
Just check the URLValidator code and modify accordingly.
PS. Sorry for being late, was out with dog
A:
from django.db import models
from django.core.validators import RegexValidator
class App(models.Model):
image = models.CharField(
max_length=200,
validators=[
RegexValidator(
regex=r'^(?:(?=[^:\/]{1,253})(?!-)[a-zA-Z0-9-]{1,63}(?<!-)(?:\.(?!-)[a-zA-Z0-9-]{1,63}(?<!-))*(?::[0-9]{1,5})?/)?((?![._-])(?:[a-z0-9._-]*)(?<![._-])(?:/(?![._-])[a-z0-9._-]*(?<![._-]))*)(?::(?![.-])[a-zA-Z0-9_.-]{1,128})?$',
message='image is not valid',
code='invalid_url'
)
]
)
Regex reference is here and you can check matches.
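A hedged usage sketch (the values here are illustrative): note that model-level validators only run through full_clean(), not automatically on save():
from django.core.exceptions import ValidationError

app = App(image="hub.something.com/nginx:1.21")
app.full_clean()  # passes: the regex accepts registry/name:tag images

bad = App(image="not a valid image!")
try:
    bad.full_clean()
except ValidationError as e:
    print(e.message_dict)  # {'image': ['image is not valid']}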
|
Docker image url validation for django
|
I want to get a Docker image URL from the user, but such URLs aren't accepted by models.URLField() in Django. For example, this URL: hub.something.com/nginx:1.21 raises a validation error. How can I fix it?
|
[
"Try this out:\nfrom django.core.validators import URLValidator\nfrom django.utils.deconstruct import deconstructible\nfrom django.db import models\n\n# I suggest to move this class to validators.py outside of this app folder \n# so it can be easily accessible by all models\n@deconstructible\nclass DockerHubURLValidator(URLValidator):\n domain_re = URLValidator.domain_re + '(?:[a-z0-9-.\\/:]*)'\n\n\nclass ModelName(models.Model):\n image = models.CharField(max_length=200, validators=[DockerHubURLValidator()])\n\nI am not great at regexes but I believe I did it right, when I try new domain_re regex, it allows as domain: .com/nginx:1.21. The rest of url is handled automatically by django\nIf there will be another case of regex, or for some reason this regex won't work as I expect, I believe from here you will find a way ;)\nJust check the URLValidator code and modify accordingly.\nPS. Sorry for being late, was out with dog\n",
"from django.db import models\nfrom django.core.validators import RegexValidator\n\n\nclass App(models.Model):\n image = models.CharField(\n max_length=200,\n validators=[\n RegexValidator(\n regex=r'^(?:(?=[^:\\/]{1,253})(?!-)[a-zA-Z0-9-]{1,63}(?<!-)(?:\\.(?!-)[a-zA-Z0-9-]{1,63}(?<!-))*(?::[0-9]{1,5})?/)?((?![._-])(?:[a-z0-9._-]*)(?<![._-])(?:/(?![._-])[a-z0-9._-]*(?<![._-]))*)(?::(?![.-])[a-zA-Z0-9_.-]{1,128})?$',\n message='image is not valid',\n code='invalid_url'\n )\n ]\n )\n\nRegex reference is here and you can check matchs.\n"
] |
[
2,
0
] |
[] |
[] |
[
"django",
"docker",
"docker_image",
"python",
"url"
] |
stackoverflow_0074593617_django_docker_docker_image_python_url.txt
|
Q:
Quotation Marks in Python while Reading a CSV File
I am a total newbie in Python, I must admit. I have a CSV file and now I have to write the values of a specific column into a sorted list; some values repeat, and I also need to get rid of those duplicates.
So I have a column called reason and its values are as follows:
allow, school, 'business', education, school etc.
Only 'business' has apostrophes.
The output should be:
reasons=['allow', 'business', 'education','school']
I have written a code like this
import pandas as pd
df.head()
reasons=sorted(df["reason"].unique())
But the output of this is actually
reasons=[“'business'”,'allow','education','school']
Because 'business' already has these apostrophes, the output shows it with them as well, and therefore places it in the first position instead of the second.
How can I solve this issue?
A:
The sort order is not what we desire.
because business has already ‘
To solve, simply edit the .CSV file, removing unwanted punctuation.
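If editing the file by hand isn't practical, the apostrophes can also be stripped programmatically; a minimal sketch, assuming the column is named reason as in the question:
import pandas as pd

df = pd.read_csv("data.csv")  # hypothetical filename
# remove leading/trailing apostrophes, then deduplicate and sort
reasons = sorted(df["reason"].str.strip("'").unique())
# ['allow', 'business', 'education', 'school']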
|
Quotation Marks in Python while Reading a CSV File
|
I am a total newbie in Python, I must admit. I have a CSV file and now I have to write the values of a specific column into a sorted list; some values repeat, and I also need to get rid of those duplicates.
So I have a column called reason and its values are as follows:
allow, school, 'business', education, school etc.
Only 'business' has apostrophes.
The output should be:
reasons=['allow', 'business', 'education','school']
I have written a code like this
import pandas as pd
df.head()
reasons=sorted(df["reason"].unique())
But the output of this is actually
reasons=[“'business'”,'allow','education','school']
Because 'business' already has these apostrophes, the output shows it with them as well, and therefore places it in the first position instead of the second.
How can I solve this issue?
|
[
"The sort order is not what we desire.\n\nbecause business has already ‘\n\nTo solve, simply edit the .CSV file, removing unwanted punctuation.\n"
] |
[
0
] |
[] |
[] |
[
"csv",
"pandas",
"python"
] |
stackoverflow_0074594329_csv_pandas_python.txt
|
Q:
Print out the index of the value that satisfy the list's condition
I am having trouble figuring out how to come up with the correct code for this particular problem that involving list. So the question is:
We have n fruit baskets, with some apples and oranges in them. We want to select the basket that has the most apples, but if several baskets have the same (biggest) number of apples, we pick the one with more oranges in it. (Harry has 3 fruit baskets. - The first basket contains 2 apples and 3 oranges. - The second basket contains 1 apple and 4 oranges. - The third basket contains 2 apples and 5 oranges. We see that the first and third baskets have the most apples, 2 each. But among these two baskets, the third basket has more oranges than the first basket. So Harry chooses the third one.)
I have thought of making two separate lists for apples and oranges and then finding the max value of each list. But I haven't figured out how to return the correct basket (or the index of the value in the list). Here is my code, please help if you can, thank you! (I have not learned pandas or lambda yet, so just pure Python)
n = int(input())
a = []
b = []
for i in range(n):
x,y = map(int,input().split())
a.append(x)
b.append(y)
app = max(a)
oran = max(b)
idx = 0
for num in range(len(a)):
if a[num] < app:
continue
if a[num] == app:
if b[num] < oran:
idx = a.index(a[num])
elif b[num] == oran:
idx = b.index(oran)
print(idx+1)
A:
Your problem is in this section:
if a[num] == app:
if b[num] < oran:
idx = a.index(a[num])
elif b[num] == oran:
idx = b.index(oran)
When you find the max number of ranges with if b[num] == oran, you set idx to the index of the first occurance of oran in b, not num. Also, you keep iterating and checking, so if you were to find another b[num] < oran, that value would get overridden again - your logic there is flawed in several ways.
Something like this would be better:
b_max = 0
...
if a[num] == app:
if b[num] > b_max:
idx, b_max = num, b[num]
However, the entire solution is very complicated for what is required:
n = int(input())
apples, oranges = zip(*(map(int, input().split()) for _ in range(n)))
idx, ma, mo = 0, 0, 0
for i, (a, o) in enumerate(zip(apples, oranges)):
if a > ma or (a == ma and o > mo):
idx, ma, mo = i, a, o
print(idx + 1)
Whether you consider this more readable or less depends on who your audience is, but I provided the example because you seemed to be combining some operations in one-liners already.
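Another idiomatic option is to let max compare (apples, oranges) tuples lexicographically, so oranges act as the tie-breaker automatically; a short sketch using the same input format:
n = int(input())
baskets = [tuple(map(int, input().split())) for _ in range(n)]
# tuples compare element by element: apples first, then oranges on a tie
best = max(range(n), key=lambda i: baskets[i])
print(best + 1)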
|
Print out the index of the value that satisfy the list's condition
|
I am having trouble figuring out how to come up with the correct code for this particular problem that involving list. So the question is:
We have n fruit baskets, with some apples and oranges in them. We want to select the basket that has the most apples, but if several baskets have the same (biggest) number of apples, we pick the one with more oranges in it. (Harry has 3 fruit baskets. - The first basket contains 2 apples and 3 oranges. - The second basket contains 1 apple and 4 oranges. - The third basket contains 2 apples and 5 oranges. We see that the first and third baskets have the most apples, 2 each. But among these two baskets, the third basket has more oranges than the first basket. So Harry chooses the third one.)
I have thought of making two separate lists for apples and oranges and then finding the max value of each list. But I haven't figured out how to return the correct basket (or the index of the value in the list). Here is my code, please help if you can, thank you! (I have not learned pandas or lambda yet, so just pure Python)
n = int(input())
a = []
b = []
for i in range(n):
x,y = map(int,input().split())
a.append(x)
b.append(y)
app = max(a)
oran = max(b)
idx = 0
for num in range(len(a)):
if a[num] < app:
continue
if a[num] == app:
if b[num] < oran:
idx = a.index(a[num])
elif b[num] == oran:
idx = b.index(oran)
print(idx+1)
|
[
"Your problem is in this section:\n if a[num] == app:\n if b[num] < oran:\n idx = a.index(a[num])\n elif b[num] == oran:\n idx = b.index(oran)\n\nWhen you find the max number of ranges with if b[num] == oran, you set idx to the index of the first occurance of oran in b, not num. Also, you keep iterating and checking, so if you were to find another b[num] < oran, that value would get overridden again - your logic there is flawed in several ways.\nSomething like this would be better:\nb_max = 0\n...\n if a[num] == app:\n if b[num] > b_max:\n idx, b_max = num, b[num]\n\nHowever, the entire solution is very complicated for what is required:\nn = int(input())\napples, oranges = zip(*(map(int, input().split()) for _ in range(n)))\nidx, ma, mo = 0, 0, 0\nfor i, (a, o) in enumerate(zip(apples, oranges)):\n if a > ma or (a == ma and o > mo):\n idx, ma, mo = i, a, o\n\nprint(idx + 1)\n\nWhether you consider this more readable or less depends on who your audience is, but I provided the example because you seemed to be combining some operations in one-liners already.\n"
] |
[
0
] |
[] |
[] |
[
"list",
"python"
] |
stackoverflow_0074594143_list_python.txt
|
Q:
Breaking items in a list into lists in Python3
I'm trying to make something that goes over my folders and finds duplicates. That said, files can't have identical names, so the first part I made goes over the folder and appends a list of folders. Then I want to break the items in the list into lists and compare them with each other to find high similarities. I'm quite stuck on the 2nd part and don't know how to approach it. If anyone can shed some light it'd be great, thanks!
import os
path = input("Where you want to look?")
myFolder = list()
print("Here's your folders:")
for dirname in os.listdir(path):
f = os.path.join(path,dirname)
if os.path.isdir(f):
myFolder.append(f)
print("\n".join(myFolder))
print(len(myFolder), "folders found!")
I'm thinking about creating a dictionary of lists, each list is a folder name broken down letter by letter
A:
First of all, I would recommend you to use the library pathlib, since os is a bit outdated for file exploring.
Here's how you can do what you want:
from pathlib import Path
folder_path = Path(input("Where you want to look?"))
folder_content = [file_or_dir for file_or_dir in folder_path.iterdir() if file_or_dir.is_file()]
# folder_content's code is the equivalent of:
folder_content = []
for file_or_dir in folder_path.iterdir(): # listing the files of folder_path
if file_or_dir.is_file(): # checking if file_or_dir is a file or not
folder_content.append(file_or_dir) # if it is, we add it to the list folder_content
# Then we check for duplicates
names_list = [file.name for file in folder_content] # We make a list that contains all of the names
for name in names_list:
if names_list.count(name) > 1: # if find more than one same name in the names_list
print(f'Name "{name}" found more than one time !!!')
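To report each duplicate only once instead of once per occurrence, collections.Counter is a natural fit; a sketch reusing names_list from above:
from collections import Counter

duplicates = [name for name, count in Counter(names_list).items() if count > 1]
for name in duplicates:
    print(f'Name "{name}" appears more than once!')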
|
Breaking items in a list into lists in Python3
|
I'm trying to make something that goes over my folders and finds duplicates. That said, files can't have identical names, so the first part I made goes over the folder and appends a list of folders. Then I want to break the items in the list into lists and compare them with each other to find high similarities. I'm quite stuck on the 2nd part and don't know how to approach it. If anyone can shed some light it'd be great, thanks!
import os
path = input("Where you want to look?")
myFolder = list()
print("Here's your folders:")
for dirname in os.listdir(path):
f = os.path.join(path,dirname)
if os.path.isdir(f):
myFolder.append(f)
print("\n".join(myFolder))
print(len(myFolder), "folders found!")
I'm thinking about creating a dictionary of lists, each list is a folder name broken down letter by letter
|
[
"First of all, I would recommend you to use the library pathlib, since os is a bit outdated for file exploring.\nHere's how you can do what you want:\nfrom pathlib import Path\n\nfolder_path = Path(input(\"Where you want to look?\"))\nfolder_content = [file_or_dir for file_or_dir in folder_path.iterdir() if file_or_dir.is_file()]\n\n# folder_content's code is the equivalent of:\nfolder_content = []\nfor file_or_dir in folder_path.iterdir(): # listing the files of folder_path\n if file_or_dir.is_file(): # checking if file_or_dir is a file or not\n folder_content.append(file_or_dir) # if it is, we add it to the list folder_content\n\n# Then we check for duplicates\nnames_list = [file.name for file in folder_content] # We make a list that contains all of the names\n\nfor name in names_list:\n if names_list.count(name) > 1: # if find more than one same name in the names_list\n print(f'Name \"{name}\" found more than one time !!!')\n\n"
] |
[
0
] |
[] |
[] |
[
"dictionary",
"list",
"python",
"python_3.x"
] |
stackoverflow_0074594331_dictionary_list_python_python_3.x.txt
|
Q:
Applying Jaro-Winkler distance to two dataframes
I have two dataframes of unequal length and would like to compare the similarity of strings in df2 with df1. Is it possible to apply Jaro-Winkler distance method to calculate the string similarity on two dataframes through map/lambda function.
df1
Behavioral disorders
Behçet disease
AV-Block
df2
Behavioral disorder
Behçet syndrome
The desired output is:
name_left name_right score
Behavioral disorders Behavioral disorder 0.933333
Behçet disease Behçet syndrome 0.865342
The scores mentioned above are hypothetical. Any help is highly appreciated
A:
Assuming you want the max score and that the original columns in the input are "name":
# pip install jaro-winkler
# https://pypi.org/project/jaro-winkler/
from jaro import jaro_winkler_metric as jw
pd.DataFrame([[n2, *max([(n1, jw(n1, n2)) for n1 in df1['name']],
                        key=lambda x: x[1])]
              for n2 in df2['name']],
             index=df2.index,
             columns=['name_right', 'name_left', 'score']
             )[['name_left', 'name_right', 'score']]
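A more explicit, loop-based version of the same idea (assuming both frames have a name column, as above):
rows = []
for n2 in df2['name']:
    # best df1 match for this df2 name, ranked by Jaro-Winkler score
    best_name, best_score = max(((n1, jw(n1, n2)) for n1 in df1['name']),
                                key=lambda pair: pair[1])
    rows.append((best_name, n2, best_score))

result = pd.DataFrame(rows, index=df2.index,
                      columns=['name_left', 'name_right', 'score'])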
|
Applying Jaro-Winkler distance to two dataframes
|
I have two dataframes of unequal length and would like to compare the similarity of strings in df2 with df1. Is it possible to apply Jaro-Winkler distance method to calculate the string similarity on two dataframes through map/lambda function.
df1
Behavioral disorders
Behçet disease
AV-Block
df2
Behavioral disorder
Behçet syndrome
The desired output is:
name_left name_right score
Behavioral disorders Behavioral disorder 0.933333
Behçet disease Behçet syndrome 0.865342
The scores mentioned above are hypothetical. Any help is highly appreciated
|
[
"Assuming you want the max score and that the original columns in the input are \"name\":\n# pip install jaro-winkler\n# https://pypi.org/project/jaro-winkler/\nfrom jaro import jaro_winkler_metric as jw\n\npd.DataFrame([[n2, *max([(n1, jw(n1, n2)) for n1 in df1['name']],\n lambda x: x[1])]\n for n2 in df2['name']],\n index=df2.index,\n columns=['name_right', 'name_left', 'score']\n )[['name_left', 'name_right', 'score']]\n\n"
] |
[
0
] |
[] |
[] |
[
"jaro_winkler",
"pandas",
"python"
] |
stackoverflow_0074594265_jaro_winkler_pandas_python.txt
|
Q:
no output in vs code using python logging module
I'm on Windows 10 using VS Code 1.73.1 and am retrofitting my program with the Python logging module. My program is generally functioning. The main thing I did is change all the print statements to logger.debug and I know the variable formatting needs to be changed from {} to %s. I also added the encoding flag to my file handler.
A couple more things:
When I run it from the VS Code command line, it does create a file with debug statements but does not display any output to the Terminal, Debug Console or Output windows.
When I use the F5 function to run it, it does NOT create a file or display any console output anywhere.
print('something') works and displays in either the Terminal or Debug Console depending on the launch.json setting, but logger.debug('something') does not display in either console.
My request/question: Using logger.debug, why is nothing printing to console, and no file is even created?
I also tried running the code below in its own separate .py file with the same problem:
print displays to the Debug Console, but logger.debug does not.
Logging code block:
import logging
# Create a custom logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# Streaming Handler
c_handler = logging.StreamHandler()
c_handler.setLevel(logging.INFO)
c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
c_handler.setFormatter(c_format)
logger.addHandler(c_handler)
# File Handler
f_handler = logging.FileHandler('download_shelf.log', encoding='utf-8')
f_handler.setLevel(logging.DEBUG)
f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
f_handler.setFormatter(f_format)
logger.addHandler(f_handler)
logger.debug(f"Log messages:") ### `print` statement displays in either the Terminal
### or Debug Console depending on `launch.json` setting
I made sure my launch.json file was set up in the local .vscode folder (see bottom code section). However, I did have to manually create this file, then restarted VS Code. Information I found said there was a link in the Debug panel, which I know I've seen before. It wasn't there for this file.
My directory structure:
toplevel/
.vscode/
launch.json
src/
subdirectory/
myfile.py
My launch.json file contains "console": "internalConsole"
{
"configurations": [
{"name":"Python: Current File",
"type":"python",
"request":"launch",
"program":"${file}",
// "console":"integratedTerminal",
"console": "internalConsole",
"justMyCode":true
},
]
}
A:
After using a different logger example, I realized the problem was that in my first example the console handler's level was set to INFO rather than DEBUG, so debug-level records were filtered out and nothing was showing up. Oops...
import logging
logger = logging.getLogger('simple_example')
logger.setLevel(logging.DEBUG)
console = logging.StreamHandler()
console.setLevel(level=logging.DEBUG)
formatter = logging.Formatter('%(levelname)s : %(message)s')
console.setFormatter(formatter)
logger.addHandler(console)
logger.debug('simple message')
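Equivalently, the original two-handler setup from the question works once the console handler stops filtering below INFO:
# Each handler filters independently of the logger: the logger was at DEBUG,
# but the StreamHandler at INFO silently dropped every logger.debug() record.
c_handler.setLevel(logging.DEBUG)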
|
no output in vs code using python logging module
|
I'm on Windows 10 using VS Code 1.73.1 and am retrofitting my program with the Python logging module. My program is generally functioning. The main thing I did is change all the print statements to logger.debug and I know the variable formatting needs to be changed from {} to %s. I also added the encoding flag to my file handler.
A couple more things:
When I run it from the VS Code command line, it does create a file with debug statements but does not display any output to the Terminal, Debug Console or Output windows.
When I use the F5 function to run it, it does NOT create a file or display any console output anywhere.
print('something') works and displays in either the Terminal or Debug Console depending on the launch.json setting, but logger.debug('something') does not display in either console.
My request/question: Using logger.debug, why is nothing printing to console, and no file is even created?
I also tried running the code below in its own separate .py file with the same problem:
print displays to the Debug Console, but logger.debug does not.
Logging code block:
import logging
# Create a custom logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
# Streaming Handler
c_handler = logging.StreamHandler()
c_handler.setLevel(logging.INFO)
c_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
c_handler.setFormatter(c_format)
logger.addHandler(c_handler)
# File Handler
f_handler = logging.FileHandler('download_shelf.log', encoding='utf-8')
f_handler.setLevel(logging.DEBUG)
f_format = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
f_handler.setFormatter(f_format)
logger.addHandler(f_handler)
logger.debug(f"Log messages:") ### `print` statement displays in either the Terminal
### or Debug Console depending on `launch.json` setting
I made sure my launch.json file was set up in the local .vscode folder (see bottom code section). However, I did have to manually create this file, then restarted VS Code. Information I found said there was a link in the Debug panel, which I know I've seen before. It wasn't there for this file.
My directory structure:
toplevel/
.vscode/
launch.json
src/
subdirectory/
myfile.py
My launch.json file contains "console": "internalConsole"
{
"configurations": [
{"name":"Python: Current File",
"type":"python",
"request":"launch",
"program":"${file}",
// "console":"integratedTerminal",
"console": "internalConsole",
"justMyCode":true
},
]
}
|
[
"After using a different logger example, I realized the problem was the Level setting in the first example I used had \"INFO\" and not \"DEBUG\" so nothing was showing up. Oops...\nimport logging\n\nlogger = logging.getLogger('simple_example')\nlogger.setLevel(logging.DEBUG)\nconsole = logging.StreamHandler()\nconsole.setLevel(level=logging.DEBUG)\nformatter = logging.Formatter('%(levelname)s : %(message)s')\nconsole.setFormatter(formatter)\nlogger.addHandler(console)\n\nlogger.debug('simple message')\n\n"
] |
[
0
] |
[] |
[] |
[
"logging",
"python",
"python_logging",
"visual_studio_code"
] |
stackoverflow_0074585126_logging_python_python_logging_visual_studio_code.txt
|
Q:
Python: OCR - For loop is very slow
I have here some lines of code from the beginning of my OCR program. Timing them with time() shows that these few lines take 90% of a run. Unfortunately, I have no idea how to make these lines more time-efficient. What would be your approaches to speeding up this process?
for page_number,page_data in enumerate(doc):
txt = pytesseract.image_to_string(page_data,lang='eng').encode('utf-8')
Counter = 0
txt = txt.decode('utf-8')
tokens = txt.split()
for i in tokens:
ResultpageNumber.append([page_number+1,tokens[Counter],Counter])
Counter=Counter+1
A:
You're saying that .image_to_string() consumes most of the CPU cycles.
Yup. That's not surprising, it's a hard problem we're asking it to solve.
Delve into what that function is doing,
if you want to shave off some seconds of CPU time.
But you're probably better off consulting the fine documentation.
Depending on your source images, some preprocessing
to binarize or to reduce resolution might offer a
slightly easier problem, hard to say.
There's no magic bullet.
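For completeness, the bookkeeping loop itself can be tidied with enumerate; this reuses the question's names and will not change the runtime meaningfully, since image_to_string dominates:
for page_number, page_data in enumerate(doc, start=1):
    txt = pytesseract.image_to_string(page_data, lang='eng')
    # the encode/decode round-trip in the original was a no-op and is dropped
    for position, token in enumerate(txt.split()):
        ResultpageNumber.append([page_number, token, position])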
|
Python: OCR - For loop is very slow
|
I have here some lines of code from the beginning of my OCR program. Timing them with time() shows that these few lines take 90% of a run. Unfortunately, I have no idea how to make these lines more time-efficient. What would be your approaches to speeding up this process?
for page_number,page_data in enumerate(doc):
txt = pytesseract.image_to_string(page_data,lang='eng').encode('utf-8')
Counter = 0
txt = txt.decode('utf-8')
tokens = txt.split()
for i in tokens:
ResultpageNumber.append([page_number+1,tokens[Counter],Counter])
Counter=Counter+1
|
[
"You're saying that .image_to_string() consumes most of the CPU cycles.\nYup. That's not surprising, it's a hard problem we're asking it to solve.\nDelve into what that function is doing,\nif you want to shave off some seconds of CPU time.\nBut you're probably better off consulting the fine documentation.\nDepending on your source images, some preprocessing\nto binarize or to reduce resolution might offer a\nslightly easier problem, hard to say.\nThere's no magic bullet.\n"
] |
[
0
] |
[] |
[] |
[
"ocr",
"python",
"python_tesseract"
] |
stackoverflow_0074594275_ocr_python_python_tesseract.txt
|
Q:
How can I use fillna for a specific value?
I already know how to use fillna() but it fills every empty value with the same indicated value. In this case, I want to fill each empty value with different values, should I use the row number or how can it be done?
Failed try:
I want it to be
bmw 320i 2
plymouth reliant 1
honda civic 3
A:
Since the condition is not mentioned, the best solution I can provide is to use mask.
It replaces values where the condition is True.
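A small sketch of that mask idea with made-up values, since the actual condition isn't shown in the question:
import numpy as np
import pandas as pd

s = pd.Series([np.nan, np.nan, np.nan])
fills = pd.Series([2, 1, 3])
# mask replaces values where the condition is True, aligning by index
result = s.mask(s.isna(), fills)  # -> 2.0, 1.0, 3.0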
A:
I want it to be bmw 320i 2 plymouth reliant 1 honda civic 3
You can fill the NaN in the first column with values from a series like this:
df = pd.DataFrame([[np.nan, "bmw 320i"],
[np.nan, "plymouth reliant"],
[np.nan, "honda civic"]],
columns=("origin", "car name"))
df2 = pd.Series([2,1,3]).to_frame(name="values")
df['origin'].fillna(value=df2["values"], inplace=True)
|
How can I use fillna for a specific value?
|
I already know how to use fillna() but it fills every empty value with the same indicated value. In this case, I want to fill each empty value with different values, should I use the row number or how can it be done?
Failed try:
I want it to be
bmw 320i 2
plymouth reliant 1
honda civic 3
|
[
"Since the condition is not mentioned, the best solution I can provide is to use mask.\nIt replaces values where the condition is True.\n",
"\nI want it to be bmw 320i 2 plymouth reliant 1 honda civic 3\n\nYou can fill the NaN in the first column with values from a series like this:\ndf = pd.DataFrame([[np.nan, \"bmw 320i\"],\n [np.nan, \"plymouth reliant\"],\n [np.nan, \"honda civic\"]],\n columns=(\"origin\", \"car name\"))\ndf2 = pd.Series([2,1,3]).to_frame(name=\"values\")\ndf['origin'].fillna(value=df2[\"values\"], inplace=True)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"categories",
"fillna",
"function",
"pandas",
"python"
] |
stackoverflow_0074594059_categories_fillna_function_pandas_python.txt
|
Q:
How do I get a value based of the combinations of check buttons that are checked in tkinter?
I am making an application that creates a password based on the requirements of the password needed. The requirements are picked through check buttons, so if a check button is on, then the password should contain those values, if the check button is off then the password should not contain that value. All of the check buttons are turned on by default and the user can change them as needed.
Here is the code for the checkbuttons:
# This allows us to get the value (or the state of the checkbox: checked or unchecked) from the checkbox
var_LowercaseLtrsCheckBtn = IntVar(value=1)
var_UppercaseLtrsCheckBtn = IntVar(value=1)
var_NumbersCheckBtn = IntVar(value=1)
var_SymbolsCheckBtn = IntVar(value=1)
# Checkbox for including lowercase letters
includeLowercaseLtrsCheckBtn = Checkbutton(root, text="Include Lowercase Letters", variable=var_LowercaseLtrsCheckBtn, onvalue=1, offvalue=0)
includeLowercaseLtrsCheckBtn.pack()
# Checkbox for including uppercase letters
includeUppercaseLtrsCheckBtn = Checkbutton(root, text="Include Uppercase Letters", variable = var_UppercaseLtrsCheckBtn, onvalue=1, offvalue=0)
includeUppercaseLtrsCheckBtn.pack()
# Checkbox for including numbers
includeNumbersCheckBtn = Checkbutton(root, text="Include Numbers", variable = var_NumbersCheckBtn, onvalue=1, offvalue=0)
includeNumbersCheckBtn.pack()
# Checkbox for including symbols
includeSymbolsCheckBtn = Checkbutton(root, text="Include Symbols", variable = var_SymbolsCheckBtn, onvalue=1, offvalue=0)
includeSymbolsCheckBtn.pack()
This is the code for creating a password based on if the user wants lowercase letters, uppercase letters, numbers, and/or symbols. This code is in a function that is run when the generate password button is pressed.
# Create Phrases which the Password Must Be Comprised of:
lowercaseLetters = "abcdefghijklmnopqrstuvwxyz"
uppercaseLetters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
numbers = "1234567890"
symbols = "~!@#$%^&*()[]<>?"
# Create Password with ONLY LOWERCASE LETTERS
for i in range(0, get_PasswordLength):
password = random.choice(lowercaseLetters)
returnPassword_Entry.insert(END, password)
I tried to create a bunch of if statements that try every possible combination but it seemed too complex. Is there a better way to do this - to check which check buttons are checked and then create a password based on those requirements?
A:
Here's a general idea, in pseudocode that you can modify. The general idea is to use the value of the checkboxes to make a "pool" of characters to choose from inside your generation function.
get the value of the checkboxes and assign them to obviously named variables.
If no checkboxes are present, do something...error message or such
use "if" statements to build the pool, kinda like:
pool = ''
if (include_lowers):
pool += lowercase_letters
if (include_symbols):
pool += symbols
...
This is basically concatenating the strings you already have into one big list, depending on the variables for the checkboxes.
Use the pool variable to sample from like you are already doing... maybe use random.sample to get a password of fixed length. (Note: sample will not include duplicates, probably OK)
pw_elements = random.sample(pool, password_length)
Use join() to smash them all together into one string. (The above will return a list)
pw = ''.join(pw_elements)
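Put together, the pool approach might look like this; a sketch reusing the variable names from the question:
import random

def create_password(length):
    pool = ''
    if var_LowercaseLtrsCheckBtn.get():
        pool += lowercaseLetters
    if var_UppercaseLtrsCheckBtn.get():
        pool += uppercaseLetters
    if var_NumbersCheckBtn.get():
        pool += numbers
    if var_SymbolsCheckBtn.get():
        pool += symbols
    if not pool:
        return ''  # no character class selected
    # random.choices (unlike sample) allows repeats, so any length works
    return ''.join(random.choices(pool, k=length))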
A:
This could certainly be done better, but it's a complete solution:
import tkinter as tk
from tkinter import Label
from tkinter import Entry
from tkinter import Button
from tkinter import Checkbutton
from tkinter import IntVar
from tkinter import StringVar
import random
root = tk.Tk()
# Create Phrases which the Password Must Be Comprised of:
lowercaseLetters = "abcdefghijklmnopqrstuvwxyz"
uppercaseLetters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
numbers = "1234567890"
symbols = "~!@#$%^&*()[]<>?"
var_LowercaseLtrsCheckBtn = IntVar(value=1)
var_UppercaseLtrsCheckBtn = IntVar(value=1)
var_NumbersCheckBtn = IntVar(value=1)
var_SymbolsCheckBtn = IntVar(value=1)
def createPassword():
print(passwordLength_Entry.get())
lowercaseGen = ''
uppercaseGen = ''
numberGen = ''
symbolsGen = ''
password = ''
passwordLength = int(passwordLength_Entry.get())
if var_LowercaseLtrsCheckBtn.get() == 1:
lowercaseGen = random.sample(lowercaseLetters, min(4, passwordLength))
lowercaseGen = ''.join(lowercaseGen)
if var_UppercaseLtrsCheckBtn.get() == 1:
uppercaseGen = random.sample(uppercaseLetters, min(4, passwordLength))
uppercaseGen = ''.join(uppercaseGen)
if var_NumbersCheckBtn.get() == 1:
numberGen = random.sample(numbers, min(4, passwordLength))
numberGen = ''.join(numberGen)
if var_SymbolsCheckBtn.get() == 1:
symbolsGen = random.sample(symbols, min(4, passwordLength))
symbolsGen = ''.join(symbolsGen)
for i in range(passwordLength):
password += random.choice(lowercaseGen +
uppercaseGen + numberGen + symbolsGen)
password = list(password)
random.shuffle(password)
password = "".join(password)
print(password)
password_Label = Label(
root, text=f"Result is {password}", width=100, height=6, fg="green", font=('arial', 10))
password_Label.grid(row=6, column=0, columnspan=3, padx=10, pady=10)
return password
generatePassword_Button = Button(
root, text="Generate Password", command=createPassword)
generatePassword_Button.grid(row=1, column=0, columnspan=3, padx=10, pady=10)
passwordLength_Entry = Entry(root, width=50, borderwidth=5)
passwordLength_Entry.insert(0, "100")
passwordLength_Entry.grid(row=2, column=0, columnspan=3, padx=10, pady=10)
lowercaseLtrsCheckBtn = Checkbutton(
root, text="Lowercase Letters", variable=var_LowercaseLtrsCheckBtn)
lowercaseLtrsCheckBtn.grid(row=3, column=0, padx=10, pady=10)
uppercaseLtrsCheckBtn = Checkbutton(
root, text="Uppercase Letters", variable=var_UppercaseLtrsCheckBtn)
uppercaseLtrsCheckBtn.grid(row=3, column=1, padx=10, pady=10)
numbersCheckBtn = Checkbutton(
root, text="Numbers", variable=var_NumbersCheckBtn)
numbersCheckBtn.grid(row=4, column=0, padx=10, pady=10)
symbolsCheckBtn = Checkbutton(
root, text="Symbols", variable=var_SymbolsCheckBtn)
symbolsCheckBtn.grid(row=4, column=1, padx=10, pady=10)
root.mainloop()
Output:
|
How do I get a value based of the combinations of check buttons that are checked in tkinter?
|
I am making an application that creates a password based on the requirements of the password needed. The requirements are picked through check buttons, so if a check button is on, then the password should contain those values, if the check button is off then the password should not contain that value. All of the check buttons are turned on by default and the user can change them as needed.
Here is the code for the checkbuttons:
# This allows us to get the value (or the state of the checkbox: checked or unchecked) from the checkbox
var_LowercaseLtrsCheckBtn = IntVar(value=1)
var_UppercaseLtrsCheckBtn = IntVar(value=1)
var_NumbersCheckBtn = IntVar(value=1)
var_SymbolsCheckBtn = IntVar(value=1)
# Checkbox for including lowercase letters
includeLowercaseLtrsCheckBtn = Checkbutton(root, text="Include Lowercase Letters", variable=var_LowercaseLtrsCheckBtn, onvalue=1, offvalue=0)
includeLowercaseLtrsCheckBtn.pack()
# Checkbox for including uppercase letters
includeUppercaseLtrsCheckBtn = Checkbutton(root, text="Include Uppercase Letters", variable = var_UppercaseLtrsCheckBtn, onvalue=1, offvalue=0)
includeUppercaseLtrsCheckBtn.pack()
# Checkbox for including numbers
includeNumbersCheckBtn = Checkbutton(root, text="Include Numbers", variable = var_NumbersCheckBtn, onvalue=1, offvalue=0)
includeNumbersCheckBtn.pack()
# Checkbox for including symbols
includeSymbolsCheckBtn = Checkbutton(root, text="Include Symbols", variable = var_SymbolsCheckBtn, onvalue=1, offvalue=0)
includeSymbolsCheckBtn.pack()
This is the code for creating a password based on if the user wants lowercase letters, uppercase letters, numbers, and/or symbols. This code is in a function that is run when the generate password button is pressed.
# Create Phrases which the Password Must Be Comprised of:
lowercaseLetters = "abcdefghijklmnopqrstuvwxyz"
uppercaseLetters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
numbers = "1234567890"
symbols = "~!@#$%^&*()[]<>?"
# Create Password with ONLY LOWERCASE LETTERS
for i in range(0, get_PasswordLength):
password = random.choice(lowercaseLetters)
returnPassword_Entry.insert(END, password)
I tried to create a bunch of if statements that try every possible combination but it seemed too complex. Is there a better way to do this - to check which check buttons are checked and then create a password based on those requirements?
|
[
"Here's a general idea, in pseudocode that you can modify. The general idea is to use the value of the checkboxes to make a \"pool\" of characters to choose from inside your generation function.\n\nget the value of the checkboxes and assign them to obviously named variables.\n\nIf no checkboxes are present, do something...error message or such\n\nuse \"if\" statements to build the pool, kinda like:\n\n\n\npool = ''\nif (include_lowers):\n pool += lowercase_letters\nif (include_symbols):\n pool += symbols\n...\n\nThis is basically concatenating the strings you already have into one big list, depending on the variables for the checkboxes.\n\nUse the pool variable to sample from like you are already doing... maybe use random.sample to get a password of fixed length. (Note: sample will not include duplicates, probably OK)\n\n\npw_elements = random.sample(pool, password_length)\n\n\nUse join() to smash them all together into one string. (The above will return a list)\n\n\npw = ''.join(pw_elements)\n\n",
"This could certainly be done better but it's a complete solution;\nimport tkinter as tk\nfrom tkinter import Label\nfrom tkinter import Entry\nfrom tkinter import Button\nfrom tkinter import Checkbutton\nfrom tkinter import IntVar\nfrom tkinter import StringVar\nimport random\n\nroot = tk.Tk()\n\n# Create Phrases which the Password Must Be Compromised of:\nlowercaseLetters = \"abcdefghijklmnopqrstuvwxyz\"\nuppercaseLetters = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\nnumbers = \"1234567890\"\nsymbols = \"~!@#$%^&*()[]<>?\"\n\nvar_LowercaseLtrsCheckBtn = IntVar(value=1)\nvar_UppercaseLtrsCheckBtn = IntVar(value=1)\nvar_NumbersCheckBtn = IntVar(value=1)\nvar_SymbolsCheckBtn = IntVar(value=1)\n\n\ndef createPassword():\n print(passwordLength_Entry.get())\n\n lowercaseGen = ''\n uppercaseGen = ''\n numberGen = ''\n symbolsGen = ''\n password = ''\n passwordLength = int(passwordLength_Entry.get())\n\n if var_LowercaseLtrsCheckBtn.get() == 1:\n lowercaseGen = random.sample(lowercaseLetters, min(4, passwordLength))\n lowercaseGen = ''.join(lowercaseGen)\n if var_UppercaseLtrsCheckBtn.get() == 1:\n uppercaseGen = random.sample(uppercaseLetters, min(4, passwordLength))\n uppercaseGen = ''.join(uppercaseGen)\n if var_NumbersCheckBtn.get() == 1:\n numberGen = random.sample(numbers, min(4, passwordLength))\n numberGen = ''.join(numberGen)\n if var_SymbolsCheckBtn.get() == 1:\n symbolsGen = random.sample(symbols, min(4, passwordLength))\n symbolsGen = ''.join(symbolsGen)\n\n\n for i in range(passwordLength):\n password += random.choice(lowercaseGen +\n uppercaseGen + numberGen + symbolsGen)\n\n password = list(password)\n random.shuffle(password)\n password = \"\".join(password)\n print(password)\n password_Label = Label(\n root, text=f\"Result is {password}\", width=100, height=6, fg=\"green\", font=('arial', 10))\n password_Label.grid(row=6, column=0, columnspan=3, padx=10, pady=10)\n return password\n\n\ngeneratePassword_Button = Button(\n root, text=\"Generate Password\", command=createPassword)\ngeneratePassword_Button.grid(row=1, column=0, columnspan=3, padx=10, pady=10)\n\npasswordLength_Entry = Entry(root, width=50, borderwidth=5)\npasswordLength_Entry.insert(0, \"100\")\npasswordLength_Entry.grid(row=2, column=0, columnspan=3, padx=10, pady=10)\n\nlowercaseLtrsCheckBtn = Checkbutton(\n root, text=\"Lowercase Letters\", variable=var_LowercaseLtrsCheckBtn)\nlowercaseLtrsCheckBtn.grid(row=3, column=0, padx=10, pady=10)\nuppercaseLtrsCheckBtn = Checkbutton(\n root, text=\"Uppercase Letters\", variable=var_UppercaseLtrsCheckBtn)\nuppercaseLtrsCheckBtn.grid(row=3, column=1, padx=10, pady=10)\nnumbersCheckBtn = Checkbutton(\n root, text=\"Numbers\", variable=var_NumbersCheckBtn)\nnumbersCheckBtn.grid(row=4, column=0, padx=10, pady=10)\nsymbolsCheckBtn = Checkbutton(\n root, text=\"Symbols\", variable=var_SymbolsCheckBtn)\nsymbolsCheckBtn.grid(row=4, column=1, padx=10, pady=10)\n\nroot.mainloop()\n\nOutput:\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"if_statement",
"python",
"python_3.x",
"tkinter",
"tkinter_button"
] |
stackoverflow_0074594081_if_statement_python_python_3.x_tkinter_tkinter_button.txt
|
Q:
Matplotlib conditional scatterplot colors
I'm trying to change the colors of the points in a scatterplot to red based on the condition x > 0. Here's what I have:
x = np.random.rand(100,1)
y = np.random.rand(100,1)
plt.scatter(x, y, c=['r' if x > 0 else 'b' for v in x])
I get the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
When I try to change the color value from x to x.any() or x.all() as such, I get the following error:
plt.scatter(x, y, c=['r' if x.all() > 0 else 'b' for v in x.all()])
TypeError: 'numpy.bool_' object is not iterable
Any idea how to get past this error? Thank you!
A:
There is a mistake in the list comprehension in the first code block. Try the following:
plt.scatter(x, y, c=['r' if v > 0 else 'b' for v in x])
However, you will see all the values in red as the function np.random.rand() returns positive values (between 0 and 1). To confirm that it is working you can use this modification:
plt.scatter(x, y, c=['r' if v > 0.5 else 'b' for v in x])
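A vectorized alternative that avoids the Python-level loop, assuming the same x from the question:
import numpy as np

colors = np.where(x.ravel() > 0.5, 'r', 'b')  # one color string per point
plt.scatter(x, y, c=colors)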
|
Matplotlib conditional scatterplot colors
|
I'm trying to change the colors of the points in a scatterplot to red based on the condition x > 0. Here's what I have:
x = np.random.rand(100,1)
y = np.random.rand(100,1)
plt.scatter(x, y, c=['r' if x > 0 else 'b' for v in x])
I get the following error:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
When I try to change the color value from x to x.any() or x.all() as such, I get the following error:
plt.scatter(x, y, c=['r' if x.all() > 0 else 'b' for v in x.all()])
TypeError: 'numpy.bool_' object is not iterable
Any idea how to get past this error? Thank you!
|
[
"There is a mistake in the comprehesion list in the first code block. Try the following:\nplt.scatter(x, y, c=['r' if v > 0 else 'b' for v in x])\n\nHowever, you will see all the values in red as the function np.random.rand() returns positive values (between 0 and 1). To confirm that it is working you can use this modification:\nplt.scatter(x, y, c=['r' if v > 0.5 else 'b' for v in x])\n\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"python",
"scatter_plot"
] |
stackoverflow_0074594484_matplotlib_python_scatter_plot.txt
|
Q:
How to fix errors that involve Selenium
I am trying to make a Facebook marketplace scraper. I am using Microsoft Edge and every time I run the code, it gives me a few errors that I do not know how to fix. This is all I have so far, and it is supposed to print the year of a car and the name of it.
Ex: 2009 Honda Accord
from selenium import webdriver
from selenium.webdriver.edge.service import Service
import time
from selenium.webdriver.common.by import By
s=Service('C:\\Users\\CPM\\Downloads\\edgedriver_win64 (3) msedgedriver.exe')
driver = webdriver.Edge(service=s)
url = 'https://www.facebook.com'
driver.get(url)
time.sleep((20))
url = 'https://www.facebook.com/marketplace/category/vehicles?minPrice=0&maxPrice=5000&maxMileage=150000&minMileage=0&sortBy=creation_time_descend&topLevelVehicleType=car_truck&exact=false'
driver.get(url)
time.sleep(5)
elements = driver.find_element(By.CLASS_NAME,'x1i10hfl xjbqb8w x6umtig x1b1mbwd xaqea5y xav7gou x9f619 x1ypdohk xt0psk2 xe8uvvx xdj266r x11i5rnm xat24cr x1mh8g0r xexx8yu x4uap5 x18d9i69 xkhd6sd x16tdsg8 x1hl2dhg xggy1nq x1a2a7pz x1heor9g x1lku1pv')
for ele in elements:
print(ele.get_attribute('title'))
I've tried different class names and fixed the function names but I still get these errors,
DevTools listening on ws://127.0.0.1:63009/devtools/browser/99d7d32b-bcb8-49d3-bce9-a737cb9b9fef
[12308:2292:1127/164624.744:ERROR:edge_auth_errors.cc(450)] EDGE_IDENTITY: Get Default OS Account failed: Error: Primary Error: kImplicitSignInFailure, Secondary Error: kAccountProviderFetchError, Platform error: 0, Error string:
[12308:2292:1127/164646.905:ERROR:fallback_task_provider.cc(124)] Every renderer should have at least one task provided by a primary task provider. If a "Renderer" fallback task is shown, it is a bug. If you have repro steps, please file a new bug and tag it as a dependency of crbug.com/739782.
Traceback (most recent call last):
File "c:\Users\CPM\Downloads\rubiks_cube-master\rubiks_cube-master\new folder\improvedscraper.py", line 19, in <module>
elements = driver.find_element(By.CLASS_NAME,'x1i10hfl xjbqb8w x6umtig x1b1mbwd xaqea5y xav7gou x9f619 x1ypdohk xt0psk2 xe8uvvx xdj266r x11i5rnm xat24cr x1mh8g0r xexx8yu x4uap5 x18d9i69 xkhd6sd x16tdsg8 x1hl2dhg xggy1nq x1a2a7pz x1heor9g x1lku1pv')
File "C:\Users\CPM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
File "C:\Users\CPM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "C:\Users\CPM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".x1i10hfl xjbqb8w x6umtig x1b1mbwd xaqea5y xav7gou x9f619 x1ypdohk xt0psk2 xe8uvvx xdj266r x11i5rnm xat24cr x1mh8g0r xexx8yu x4uap5 x18d9i69 xkhd6sd x16tdsg8 x1hl2dhg xggy1nq x1a2a7pz x1heor9g x1lku1pv"}
(Session info: MicrosoftEdge=107.0.1418.56)
Stacktrace:
Backtrace:
Microsoft::Applications::Events::EventProperties::SetProperty [0x00007FF696738532+9986]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966D5D62+1445202]
Ordinal0 [0x00007FF6962BFC8C+654476]
Ordinal0 [0x00007FF6963036C2+931522]
Ordinal0 [0x00007FF696303B10+932624]
Ordinal0 [0x00007FF69633FC17+1178647]
Ordinal0 [0x00007FF696323BDF+1063903]
Ordinal0 [0x00007FF6962F5FF4+876532]
Ordinal0 [0x00007FF69633CF70+1167216]
Ordinal0 [0x00007FF6963239B3+1063347]
Ordinal0 [0x00007FF6962F506A+872554]
Ordinal0 [0x00007FF6962F402E+868398]
Ordinal0 [0x00007FF6962F570F+874255]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF696596108+135416]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6965802CF+45759]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF69658374C+59196]
Ordinal0 [0x00007FF6963CB1F4+1749492]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966DB65A+1467978]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966DFEF4+1486564]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966E004D+1486909]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966E7E0B+1519099]
BaseThreadInitThunk [0x00007FFC227F74B4+20]
RtlUserThreadStart [0x00007FFC236626A1+33]
A:
It tells you what the error is:
selenium.common.exceptions.NoSuchElementException: Message: no such
element: Unable to locate element: {"method":"css
selector","selector":".x1i10hfl xjbqb8w x6umtig x1b1mbwd xaqea5y
xav7gou x9f619 x1ypdohk xt0psk2 xe8uvvx xdj266r x11i5rnm xat24cr
x1mh8g0r xexx8yu x4uap5 x18d9i69 xkhd6sd x16tdsg8 x1hl2dhg xggy1nq
x1a2a7pz x1heor9g x1lku1pv"}
Make sure you're searching for elements that actually exist.
|
How to fix errors that involve Selenium
|
I am trying to make a Facebook marketplace scraper. I am using Microsoft Edge and every time I run the code, it gives me a few errors that I do not know how to fix. This is all I have so far, and it is supposed to print the year of a car and the name of it.
Ex: 2009 Honda Accord
from selenium import webdriver
from selenium.webdriver.edge.service import Service
import time
from selenium.webdriver.common.by import By
s=Service('C:\\Users\\CPM\\Downloads\\edgedriver_win64 (3) msedgedriver.exe')
driver = webdriver.Edge(service=s)
url = 'https://www.facebook.com'
driver.get(url)
time.sleep((20))
url = 'https://www.facebook.com/marketplace/category/vehicles?minPrice=0&maxPrice=5000&maxMileage=150000&minMileage=0&sortBy=creation_time_descend&topLevelVehicleType=car_truck&exact=false'
driver.get(url)
time.sleep(5)
elements = driver.find_element(By.CLASS_NAME,'x1i10hfl xjbqb8w x6umtig x1b1mbwd xaqea5y xav7gou x9f619 x1ypdohk xt0psk2 xe8uvvx xdj266r x11i5rnm xat24cr x1mh8g0r xexx8yu x4uap5 x18d9i69 xkhd6sd x16tdsg8 x1hl2dhg xggy1nq x1a2a7pz x1heor9g x1lku1pv')
for ele in elements:
print(ele.get_attribute('title'))
I've tried different class names and fixed the function names but I still get these errors,
DevTools listening on ws://127.0.0.1:63009/devtools/browser/99d7d32b-bcb8-49d3-bce9-a737cb9b9fef
[12308:2292:1127/164624.744:ERROR:edge_auth_errors.cc(450)] EDGE_IDENTITY: Get Default OS Account failed: Error: Primary Error: kImplicitSignInFailure, Secondary Error: kAccountProviderFetchError, Platform error: 0, Error string:
[12308:2292:1127/164646.905:ERROR:fallback_task_provider.cc(124)] Every renderer should have at least one task provided by a primary task provider. If a "Renderer" fallback task is shown, it is a bug. If you have repro steps, please file a new bug and tag it as a dependency of crbug.com/739782.
Traceback (most recent call last):
File "c:\Users\CPM\Downloads\rubiks_cube-master\rubiks_cube-master\new folder\improvedscraper.py", line 19, in <module>
elements = driver.find_element(By.CLASS_NAME,'x1i10hfl xjbqb8w x6umtig x1b1mbwd xaqea5y xav7gou x9f619 x1ypdohk xt0psk2 xe8uvvx xdj266r x11i5rnm xat24cr x1mh8g0r xexx8yu x4uap5 x18d9i69 xkhd6sd x16tdsg8 x1hl2dhg xggy1nq x1a2a7pz x1heor9g x1lku1pv')
File "C:\Users\CPM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element
return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"]
File "C:\Users\CPM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "C:\Users\CPM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".x1i10hfl xjbqb8w x6umtig x1b1mbwd xaqea5y xav7gou x9f619 x1ypdohk xt0psk2 xe8uvvx xdj266r x11i5rnm xat24cr x1mh8g0r xexx8yu x4uap5 x18d9i69 xkhd6sd x16tdsg8 x1hl2dhg xggy1nq x1a2a7pz x1heor9g x1lku1pv"}
(Session info: MicrosoftEdge=107.0.1418.56)
Stacktrace:
Backtrace:
Microsoft::Applications::Events::EventProperties::SetProperty [0x00007FF696738532+9986]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966D5D62+1445202]
Ordinal0 [0x00007FF6962BFC8C+654476]
Ordinal0 [0x00007FF6963036C2+931522]
Ordinal0 [0x00007FF696303B10+932624]
Ordinal0 [0x00007FF69633FC17+1178647]
Ordinal0 [0x00007FF696323BDF+1063903]
Ordinal0 [0x00007FF6962F5FF4+876532]
Ordinal0 [0x00007FF69633CF70+1167216]
Ordinal0 [0x00007FF6963239B3+1063347]
Ordinal0 [0x00007FF6962F506A+872554]
Ordinal0 [0x00007FF6962F402E+868398]
Ordinal0 [0x00007FF6962F570F+874255]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF696596108+135416]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6965802CF+45759]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF69658374C+59196]
Ordinal0 [0x00007FF6963CB1F4+1749492]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966DB65A+1467978]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966DFEF4+1486564]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966E004D+1486909]
Microsoft::Applications::Events::EventProperty::EventProperty [0x00007FF6966E7E0B+1519099]
BaseThreadInitThunk [0x00007FFC227F74B4+20]
RtlUserThreadStart [0x00007FFC236626A1+33]
|
[
"It tells you what the error is:\n\nselenium.common.exceptions.NoSuchElementException: Message: no such\nelement: Unable to locate element: {\"method\":\"css\nselector\",\"selector\":\".x1i10hfl xjbqb8w x6umtig x1b1mbwd xaqea5y\nxav7gou x9f619 x1ypdohk xt0psk2 xe8uvvx xdj266r x11i5rnm xat24cr\nx1mh8g0r xexx8yu x4uap5 x18d9i69 xkhd6sd x16tdsg8 x1hl2dhg xggy1nq\nx1a2a7pz x1heor9g x1lku1pv\"}\n\nMake sure you're searching for elements that actually exist.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium",
"web_scraping"
] |
stackoverflow_0074594496_python_selenium_web_scraping.txt
|
Q:
Logging printout of an executed python file within another file and printing out the result in terminal simultaneously
I have two Python files (main.py and main_test.py). The file main_test.py is executed within main.py. When I do not use a log file this is what gets printed out:
Main file: 17:41:18
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
When I use a log file and execute main.py>log, then I get the following:
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
Main file: 17:41:18
Also, when I use python3 main.py | tee log to print out and log the output, it waits and prints out after finishing everything. In addition, the problem of reversing remains.
Questions
How can I fix the reversed print out?
How can I print out results simultaneously in terminal and log them in a correct order?
Python files for replication
main.py
import os
import time
import datetime
import pytz
python_file_name = 'main_test'+'.py'
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
print(f'Main file: {curr_time}')
cwd = os.path.join(os.getcwd(), python_file_name)
os.system(f'python3 {cwd}')
main_test.py
import pytz
import datetime
import time
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
for i in range(3):
curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
print(f'Executed file: {curr_time}')
time.sleep(1)
|
Logging printout of an executed python file within another file and printing out the result in terminal simultaneously
|
I have two Python files (main.py and main_test.py). The file main_test.py is executed within main.py. When I do not use a log file this is what gets printed out:
Main file: 17:41:18
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
When I use a log file and execute main.py>log, then I get the following:
Executed file: 17:41:18
Executed file: 17:41:19
Executed file: 17:41:20
Main file: 17:41:18
Also, when I use python3 main.py | tee log to print out and log the output, it waits and prints out after finishing everything. In addition, the problem of reversing remains.
Questions
How can I fix the reversed print out?
How can I print out results simultaneously in terminal and log them in a correct order?
Python files for replication
main.py
import os
import time
import datetime
import pytz
python_file_name = 'main_test'+'.py'
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
print(f'Main file: {curr_time}')
cwd = os.path.join(os.getcwd(), python_file_name)
os.system(f'python3 {cwd}')
main_test.py
import pytz
import datetime
import time
time_zone = pytz.timezone('US/Eastern') # Eastern-Time-Zone
for i in range(3):
curr_time = datetime.datetime.now().replace(microsecond=0).astimezone(time_zone).time()
print(f'Executed file: {curr_time}')
time.sleep(1)
|
[] |
[] |
[
"When you run a script like this:\npython main.py>log\n\nThe shell redirects output from the script to a file called log. However, if the script launches other scripts in their own subshell (which is what os.system() does), the output of that does not get captured.\nWhat is surprising about your example is that you'd see anything at all when redirecting, since the output should have been redirected and no longer echo - so perhaps there's something you're leaving out here.\nAlso, tee waits for EOF on standard in, or for some error to occur, so the behaviour you're seeing there makes sense. This is intended behaviour.\nWhy bother with shells at all though? Why not write a few functions to call, and import the other Python module to call its functions? Or, if you need things to run in parallel (which they didn't in your example), look at multiprocessing.\nIn direct response to your questions:\n\n\"How can I fix the reversed print out?\"\nDon't use redirection, and write to file directly from the script, or ensure you use the same redirection when calling other scripts from the first (that will get messy), or capture the output from the subprocesses in the subshell and pipe it to the standard out of your main script.\n\n\"How can I print out results simultaneously in terminal and log them in a correct order?\"\nYou should probably just do it in the script, otherwise this is not a really a Python question and you should try SuperUser or similar sites to see if there's some way to have tee or similar tools write through live.\n\n\nIn general though, unless you have really strong reasons to have the other functionality running in other shells, you should look at solving your problems in the Python script. And if you can't, use you can use something like Popen or derivatives to capture the subscript's output and do what you need instead of relying on tools that may or may not be available on the host OS running your script.\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0074594425_python.txt
|
Q:
Saving a dataframe after for loop
I run a for loop on a dataframe, like below:
for row in df["findings"]:
GPT2_model = TransformerSummarizer(transformer_type="GPT2",transformer_model_key="gpt2-medium")
full = ''.join(GPT2_model(row, min_length=60))
In this loop I extract one row at a time and then the GPT2_model model process and returns that row.
Now there are about 4000+ rows; I want to save these preprocessed rows in a dataframe but don't know how.
A:
Try not using a for loop, because the advantage of using pandas is exactly to avoid for loops.
In your place I would try:
GPT2_model = TransformerSummarizer(transformer_type="GPT2",transformer_model_key="gpt2-medium")
df["new_column"] = df["findings"].apply(lambda row: ''.join(GPT2_model(row, min_length=60)))
|
Saving a dataframe after for loop
|
I run a for loop on a dataframe, like below:
for row in df["findings"]:
GPT2_model = TransformerSummarizer(transformer_type="GPT2",transformer_model_key="gpt2-medium")
full = ''.join(GPT2_model(row, min_length=60))
In this loop I extract one row at a time and then the GPT2_model model process and returns that row.
Now there are about 4000+ rows; I want to save these preprocessed rows in a dataframe but don't know how.
|
[
"Try not using a for loop, cause the advantage of using pandas is exactly to avoid the for loops\nin your place I would try :\nGPT2_model = TransformerSummarizer(transformer_type=\"GPT2\",transformer_model_key=\"gpt2-medium\")\ndf[\"new_column\"] = ''.join((df[\"findings\"].apply(GPT2_model), min_length=60)) \n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074591640_pandas_python.txt
|
Q:
Python Programming: syntax error question
I'm completing a Python course for school and found the following code. I've submitted the code; by the way, I am a finance major with very limited knowledge of coding, and I consistently get an error I do not understand. enter image description here
In []:
tuition_increase = 0.03
tuition = 8000
years = 5
print('{:10}{}'.format('tuition', 'years'))
print('-'*20)
for year, tuition in enumerate(it.accumulate(it.repeat(tuition, years+1), lambda x, y: x*(1+tuition_increase))):
print('{:<10.2f}{}'.format(tuition, year))
Please help, this is my last project. Thank you and have a great day.
A:
It should work fine if you remove the first line of code (In []: is a notebook prompt, not Python) and make sure itertools is imported as it.
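Assuming it refers to itertools, a runnable version of the snippet would be:
import itertools as it

tuition_increase = 0.03
tuition = 8000
years = 5
print('{:10}{}'.format('tuition', 'years'))
print('-'*20)
for year, tuition in enumerate(it.accumulate(it.repeat(tuition, years+1), lambda x, y: x*(1+tuition_increase))):
    print('{:<10.2f}{}'.format(tuition, year))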
|
Python Programming: syntax error question
|
I'm completing a Python course for school and found the following code. I've submitted the code; by the way, I am a finance major with very limited knowledge of coding, and I consistently get an error I do not understand. enter image description here
In []:
tuition_increase = 0.03
tuition = 8000
years = 5
print('{:10}{}'.format('tuition', 'years'))
print('-'*20)
for year, tuition in enumerate(it.accumulate(it.repeat(tuition, years+1), lambda x, y: x*(1+tuition_increase))):
print('{:<10.2f}{}'.format(tuition, year))
Please help, this is my last project. Thank you and have a great day.
|
[
"It should work perfectly fine, if you remove the first line of code.\n"
] |
[
0
] |
[] |
[] |
[
"error_handling",
"python",
"syntax"
] |
stackoverflow_0074594578_error_handling_python_syntax.txt
|
Q:
Pandas Rows MODE, AVERAGE Python
I have a pandas dataframe with a list of products in rows, and the columns are the sales of current month, current month - 1, current month -2 and current month - 3
for all rows I want to add a new column with MODE (most frequent number in row), average, and number of months with sales more than zero, and get something like this
So, how can I calculate the MODE for each row?
Example: in the first row, 2 and zero are the most frequent; I can only have one value, so which one, and how do I calculate it?
In the second row, all numbers are different; does it have no mode?
How do I add the average column for each row?
And for the last column, how do I get for each row the number of values bigger than zero? I want to count the number of months with sales.
Thanks
A:
Here is a solution on a toy pandas.DataFrame, although there may be more efficient approaches.
import pandas as pd
import numpy as np
df = pd.DataFrame({'MES-1':[1,2,3,4],'MES-2':[2,2,3,-2],'MES-3':[-1,2,-3,-1]})
modes, sup0, avg = [],[],[]
for line in range(df.shape[0]):
series = pd.Series(df.iloc[line])
if len(series.mode()) == 1: # If there is indeed a mode
modes.append(float(series.mode()))
else:
modes.append(np.nan)
i = 0
for element in series:
if element > 0:
i+= 1
sup0.append(i)
avg.append(series.mean())
df['Mode'],df['sup0'],df['avg'] = modes, sup0, avg
EDIT: Added the code for the average.
As for your question, I think that you should keep the Mode "unknown" for series that don't have any explicit value for it. This is the purpose of numpy.nan.
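For larger frames, a vectorized sketch using pandas built-ins (the month column names below are assumed) avoids the explicit Python loop:
month_cols = ['MES-1', 'MES-2', 'MES-3']  # assumed month columns
modes = df[month_cols].mode(axis=1)                    # all modes per row
df['Mode'] = modes[0].where(modes.count(axis=1) == 1)  # NaN when the mode is not unique
df['sup0'] = (df[month_cols] > 0).sum(axis=1)          # months with sales above zero
df['avg'] = df[month_cols].mean(axis=1)                # row average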
|
Pandas Rows MODE, AVERAGE Python
|
I have a pandas dataframe with a list of products in rows, and the columns are the sales of current month, current month - 1, current month -2 and current month - 3
for all rows I want to add a new column with MODE (most frequent number in row), average, and number of months with sales more than zero, and get something like this
So, how can I calculate the MODE for each row?
Example: in the first row, 2 and zero are the most frequent; I can only have one value, so which one, and how do I calculate it?
In the second row, all numbers are different; does it have no mode?
How do I add the average column for each row?
And for the last column, how do I get for each row the number of values bigger than zero? I want to count the number of months with sales.
Thanks
|
[
"Here is a solution on a toy pandas.DataFrame, albeit there might be some codes that are more efficient.\nimport pandas as pd\nimport numpy as np\n\ndf = pd.DataFrame({'MES-1':[1,2,3,4],'MES-2':[2,2,3,-2],'MES-3':[-1,2,-3,-1]})\n\nmodes, sup0, avg = [],[],[]\nfor line in range(df.shape[0]):\n series = pd.Series(df.iloc[line])\n if len(series.mode()) == 1: # If there is indeed a mode\n modes.append(float(series.mode()))\n else:\n modes.append(np.nan)\n i = 0\n for element in series:\n if element > 0:\n i+= 1\n sup0.append(i)\n avg.append(series.mean())\n\ndf['Mode'],df['sup0'],df['avg'] = modes, sup0, avg\n\nEDIT: Added the code for the average.\nAs for your question, I think that you should keep the Mode \"unknown\" for series that don't have any explicit value for it. This is the purpose of numpy.nan.\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python",
"statistics"
] |
stackoverflow_0074594511_dataframe_pandas_python_statistics.txt
|
Q:
Why when I try to search for a product, nothing comes up?
I've created an ecommerce website using Django, though the search results aren't coming up when I try to search for a product. For example, when I try to search for part of a product title, like throat spray, nothing comes up even though there is a throat spray in the database.
I tried using the POST and GET methods, though it didn't really make any difference between the two. I checked to make sure the url for the show_product page works, and it does. I'm expecting search results for what I searched for; though the search page loads, nothing comes up as the search results. Instead I get a /search/?searched=throat-spray
My views.py:
def search(request):
if 'searched' in request.GET:
searched = request.GET['searched']
products = Product.objects.filter(title__icontains=searched)
return render(request, 'epharmacyweb/search.html', {'searched': searched, 'product': products})
My search.html:
<center>
{% if searched %}
<h1>Search Results for {{ searched }}</h1>
<br>
{% for product in products %}
<a href="{% url 'epharmacyweb/show-product' product.title %}">{{ product }}</a>
{% endfor %}
</br>
{% else %}
<h1>You haven't searched anything yet...</h1>
{% endif %}
</center>
My urls.py:
path('search/', views.search, name='search'),
path('search-error/', views.search_error, name='search_error'),
path('show-product/', views.show_product, name='show-product'),
My show_product.html:
<div class="col">
<div class="card shadow-sm">
<img class="img-fluid" alt="Responsive image" src="{{ product.image.url }}">
<div class="card-body">
<p class="card-text">
<a class="text-dark text-decoration-none" href="{{ product.get_absolute_url }}">{{ product.title }}</a>
</p>
<div class="d-flex justify-content-between align-items-center">
<small class="text-muted"></small>
</div>
</div>
</div>
</div>
A:
You're passing a variable named product to the template...
return render(request, 'epharmacyweb/search.html', {'searched': searched, 'product': products})
... but then the template tries to access a variable named products.
{% for product in products %}
You have a variable name mismatch. Change 'product': products to 'products': products in the view.
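That is, the last line of the view becomes:
return render(request, 'epharmacyweb/search.html', {'searched': searched, 'products': products})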
|
Why when I try to search for a product, nothing comes up?
|
I've created an ecommerce website using Django, though the search results aren't coming up when I try to search for a product. For example, when I try to search for part of a product title, like throat spray, nothing comes up even though there is a throat spray in the database.
I tried using the POST and GET methods, though it didn't really make any difference between the two. I checked to make sure the url for the show_product page works, and it does. I'm expecting search results for what I searched for; though the search page loads, nothing comes up as the search results. Instead I get a /search/?searched=throat-spray
My views.py:
def search(request):
if 'searched' in request.GET:
searched = request.GET['searched']
products = Product.objects.filter(title__icontains=searched)
return render(request, 'epharmacyweb/search.html', {'searched': searched, 'product': products})
My search.html:
<center>
{% if searched %}
<h1>Search Results for {{ searched }}</h1>
<br>
{% for product in products %}
<a href="{% url 'epharmacyweb/show-product' product.title %}">{{ product }}</a>
{% endfor %}
</br>
{% else %}
<h1>You haven't searched anything yet...</h1>
{% endif %}
</center>
My urls.py:
path('search/', views.search, name='search'),
path('search-error/', views.search_error, name='search_error'),
path('show-product/', views.show_product, name='show-product'),
My show_product.html:
<div class="col">
<div class="card shadow-sm">
<img class="img-fluid" alt="Responsive image" src="{{ product.image.url }}">
<div class="card-body">
<p class="card-text">
<a class="text-dark text-decoration-none" href="{{ product.get_absolute_url }}">{{ product.title }}</a>
</p>
<div class="d-flex justify-content-between align-items-center">
<small class="text-muted"></small>
</div>
</div>
</div>
</div>
|
[
"You're passing a variable named product to the template...\nreturn render(request, 'epharmacyweb/search.html', {'searched': searched, 'product': products})\n\n... but then the template tries to access a variable named products.\n{% for product in products %}\n\nYou have a variable name mismatch. Change 'product': products to 'products': products in the view.\n"
] |
[
1
] |
[] |
[] |
[
"django",
"python"
] |
stackoverflow_0074594528_django_python.txt
|
Q:
How can I get the most common headers at this moment?
I am using the Python requests library to scrape, but I am pasting headers in the code:
headers_list = [
{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'},
{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0'},
{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0'},
{'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'},
{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36'}]
Over time, the headers will no longer be the most common headers.
Is there a way to get the most common headers at this moment?
A:
You could use the browser's inspector. With that you get the complete details of a request, not only the headers.
You only need to open the site in your favorite browser, inspect it, choose one of the first requests in the Network tab, right-click, and copy what you need from the request.
Advice
Sometimes the headers are not the problem in web scraping. It could be:
Web content is loaded with ajax and takes long
Some security strategy like login, captcha, etc
Whitelist access at the IP layer
Some advanced and complex javascript protection
Alternatives
Selenium uses a real browser, so no matter what, the web page should be opened
from selenium import webdriver
driver = webdriver.Chrome(executable_path="/foo/bar/libs/chromedriver")
driver.get("http://www.google.com")
print(driver.title)
driver.close()
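Note that Selenium 4.x deprecates the executable_path argument; a roughly equivalent sketch with the current API would be:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

driver = webdriver.Chrome(service=Service("/foo/bar/libs/chromedriver"))
driver.get("http://www.google.com")
print(driver.title)
driver.quit()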
|
How can I get the most common headers at this moment?
|
I am using the Python requests library to scrape, but I am pasting headers in the code:
headers_list = [
{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'},
{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:105.0) Gecko/20100101 Firefox/105.0'},
{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:104.0) Gecko/20100101 Firefox/104.0'},
{'User-Agent':'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36'},
{'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36'}]
Over time, the headers will no longer be the most common headers.
Is there a way to get the most common headers at this moment?
|
[
"You could use the browser inspection. With that you could have a complete details of request, not only the headers\nYou only need to open the web in your favorite browser, inspect it, in network tab choose one of the first request and right click and get what you need from the request\n\n\nAdvice\nSometimes the headers are not the problem in a web scrapping. It could be:\n\nWeb content is loaded with ajax and takes long\nSome security strategy like login, captcha, etc\nWhite list access at ip layer\nSome advanced and complex javascript protection\n\nAlternatives\nSelenium uses a real browser, so no matter what, the web page should be opened\nfrom selenium import webdriver\n\ndriver = webdriver.Chrome(executable_path=\"/foo/bar/libs/chromedriver\")\ndriver.get(\"http://www.google.com\")\nprint(driver.title)\ndriver.close()\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"web_scraping"
] |
stackoverflow_0074586905_python_web_scraping.txt
|
Q:
how do i use list comprehensions to print a list of all possible dimensions of a cuboid in python?
You are given three integers x, y and z representing the dimensions of a cuboid along with an integer n. Print a list of all possible coordinates given by (i,j,k) on a 3D grid where the sum of i+j+k is not equal to n. Here, 0<=i<=x; 0<=j<=y; 0<=k<=z. Please use list comprehensions rather than multiple loops, as a learning exercise.
I'm unable to solve this problem. Could anyone help me out with it?
A:
Try it online!
x, y, z, n = 2, 3, 4, 5
print([(i, j, k) for i in range(x + 1) for j in range(y + 1)
for k in range(z + 1) if i + j + k != n])
Output:
[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3), (0, 0, 4), (0, 1, 0), (0, 1, 1), (0, 1, 2), (0, 1, 3), (0, 2, 0), (0, 2, 1), (0, 2, 2), (0, 2, 4), (0, 3, 0), (0, 3, 1), (0, 3, 3), (0, 3, 4), (1, 0, 0), (1, 0, 1), (1, 0, 2), (1, 0, 3), (1, 1, 0), (1, 1, 1), (1, 1, 2), (1, 1, 4), (1, 2, 0), (1, 2, 1), (1, 2, 3), (1, 2, 4), (1, 3, 0), (1, 3, 2), (1, 3, 3), (1, 3, 4), (2, 0, 0), (2, 0, 1), (2, 0, 2), (2, 0, 4), (2, 1, 0), (2, 1, 1), (2, 1, 3), (2, 1, 4), (2, 2, 0), (2, 2, 2), (2, 2, 3), (2, 2, 4), (2, 3, 1), (2, 3, 2), (2, 3, 3), (2, 3, 4)]
A:
if __name__ == '__main__':
x, y, z, n = (int(input().strip()) for _ in range(4))
print([[i,j,k] for i in range(x+1) for j in range(y+1) for k in range(z+1) if i+j+k!=n ])
A:
print([[a, b, c] for a in range(x + 1) for b in range(y + 1) for c in range(z + 1) if a + b + c != n])
A:
If your goal is to print a list of lists of all possible combinations of (i, j, k) for the given x, y, z values where the sum of i + j + k is not equal to n you can try:
print([[i, j, k] for i in range(x + 1) for j in range(y + 1)
for k in range(z + 1) if i + j + k != n])
|
how do i use list comprehensions to print a list of all possible dimensions of a cuboid in python?
|
You are given three integers x, y and z representing the dimensions of a cuboid along with an integer n. Print a list of all possible coordinates given by (i,j,k) on a 3D grid where the sum of i+j+k is not equal to n. Here, 0<=i<=x; 0<=j<=y; 0<=k<=z. Please use list comprehensions rather than multiple loops, as a learning exercise.
I'm unable to solve this problem. Could anyone help me out with it?
|
[
"Try it online!\nx, y, z, n = 2, 3, 4, 5\nprint([(i, j, k) for i in range(x + 1) for j in range(y + 1)\n for k in range(z + 1) if i + j + k != n])\n\nOutput:\n[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 0, 3), (0, 0, 4), (0, 1, 0), (0, 1, 1), (0, 1, 2), (0, 1, 3), (0, 2, 0), (0, 2, 1), (0, 2, 2), (0, 2, 4), (0, 3, 0), (0, 3, 1), (0, 3, 3), (0, 3, 4), (1, 0, 0), (1, 0, 1), (1, 0, 2), (1, 0, 3), (1, 1, 0), (1, 1, 1), (1, 1, 2), (1, 1, 4), (1, 2, 0), (1, 2, 1), (1, 2, 3), (1, 2, 4), (1, 3, 0), (1, 3, 2), (1, 3, 3), (1, 3, 4), (2, 0, 0), (2, 0, 1), (2, 0, 2), (2, 0, 4), (2, 1, 0), (2, 1, 1), (2, 1, 3), (2, 1, 4), (2, 2, 0), (2, 2, 2), (2, 2, 3), (2, 2, 4), (2, 3, 1), (2, 3, 2), (2, 3, 3), (2, 3, 4)]\n\n",
"if __name__ == '__main__':\n x, y, z, n = (int(input().strip()) for _ in range(4))\n print([[i,j,k] for i in range(x+1) for j in range(y+1) for k in range(z+1) if i+j+k!=n ])\n\n",
"print([[a, b, c] for a in range(x + 1) for b in range(y + 1) for c in range(z + 1) if a + b + c != n])\n\n",
"If your goal is to print a list of lists of all possible combinations of (i, j, k) for the given x, y, z values where the sum of i + j + k is not equal to n you can try:\n\nprint([[i, j, k] for i in range(x + 1) for j in range(y + 1)\n for k in range(z + 1) if i + j + k != n])\n\n"
] |
[
1,
0,
0,
0
] |
[
"if name == 'main':\nx=int(input())\ny=int(input())\nz=int(input())\nn=int(input())\nans[]\nfor i in range(x+1):\n for j in range(y+1):\n for k in range(z+1):\n\n\n if(i+j+k)!=n:\n ans.append([i,j,k])\n\nprint(ans)\n\n"
] |
[
-1
] |
[
"list",
"python"
] |
stackoverflow_0070055982_list_python.txt
|
Q:
Selenium - How to open a browser with the driver once it's been closed
I have a Start button which once pressed, will navigate to a URL. The button then turns into a Stop button which will close the browser. Once the Stop button is pressed, it turns back into a Start button which I want to open the browser again. The problem is that I'm getting the error once the driver has been closed and the Start button is pressed again:
selenium.common.exceptions.InvalidSessionIdException: Message: invalid session id
I was wondering if it was possible to do this? I tried putting a try/exception which declares a new driver object but this doesn't seem to work.
CODE:
def start_stop_button_clicked(button_status = "Start"):
if button_status == "Start":
try:
driver.get("https://basketball-reference.com")
start_stop_button.configure(text = "STOP", fg = "red", command = lambda: start_stop_button_clicked("Stop"))
except InvalidSessionIdException:
driver = webdriver.Chrome("C:\\Users\\draze\\Documents\\Python Scripts\\chromedriver\\chromedriver.exe")
driver.get("https://basketball-reference.com")
start_stop_button.configure(text = "STOP", fg = "red", command = lambda: start_stop_button_clicked("Stop"))
elif button_status == "Stop":
driver.close()
start_stop_button.configure(text = "START", fg = "green", command = lambda: start_stop_button_clicked("Start"))
start_stop_button = Button(root_window, text = "START", fg = "green", command = lambda: start_stop_button_clicked("Start"))
start_stop_button.grid(column = 0, row = 1)
root_window.mainloop()
UPDATE:
I tried turning the web driver into an array so that every time the Stop button is pressed, it closes the driver and then increments the index by one. The first time I run the script everything looks good, but when I press the Stop button, it closes the driver and then the script freezes. Here is the code:
start_stop_button = Button(root_window, text = "START", fg = "green", command = lambda: start_stop_button_clicked("Start", 0))
start_stop_button.grid(column = 2, row = 1, padx = 10, pady = 5)
def start_stop_button_clicked(button_status = "Start", driver_instance = 0):
if button_status == "Start":
driver.append(webdriver.Chrome("C:\\Users\\draze\\Documents\\Python Scripts\\chromedriver\\chromedriver.exe"))
try:
url = url_textfield.get()
if url[:8] == "https://":
driver[driver_instance].get(url)
elif url[:7] == "http://":
driver[driver_instance].get(url)
else:
driver[driver_instance].get("https://" + url)
start_stop_button.configure(text = "STOP", fg = "red", command = lambda: start_stop_button_clicked("Stop", driver_instance))
except InvalidSessionIdException:
#driver = webdriver.Chrome("C:\\Users\\draze\\Documents\\Python Scripts\\chromedriver\\chromedriver.exe")
driver.get("https://basketball-reference.com")
start_stop_button.configure(text = "STOP", fg = "red", command = lambda: start_stop_button_clicked("Stop", driver_instance))
elif button_status == "Stop":
driver[driver_instance].close()
driver_instance = driver_instance + 1
start_stop_button.configure(text = "START", fg = "green", command = lambda: start_stop_button_clicked("Start", driver_instance))
root_window.mainloop()
A:
So after changing the driver into an array of drivers, I also had to change driver[instance].close() to driver[instance].quit() and it stopped freezing!
Thank you for the suggestion @AbiSaran!
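For reference, a minimal sketch of the recreate-on-start pattern with a single driver variable instead of a growing list (path and URL as in the question):
from selenium import webdriver

driver = None

def start():
    global driver
    driver = webdriver.Chrome("C:\\Users\\draze\\Documents\\Python Scripts\\chromedriver\\chromedriver.exe")
    driver.get("https://basketball-reference.com")

def stop():
    global driver
    if driver is not None:
        driver.quit()  # quit() ends the whole session; close() only closes the window
        driver = None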
|
Selenium - How to open a browser with the driver once it's been closed
|
I have a Start button which once pressed, will navigate to a URL. The button then turns into a Stop button which will close the browser. Once the Stop button is pressed, it turns back into a Start button which I want to open the browser again. The problem is that I'm getting the error once the driver has been closed and the Start button is pressed again:
selenium.common.exceptions.InvalidSessionIdException: Message: invalid session id
I was wondering if it was possible to do this? I tried putting a try/exception which declares a new driver object but this doesn't seem to work.
CODE:
def start_stop_button_clicked(button_status = "Start"):
if button_status == "Start":
try:
driver.get("https://basketball-reference.com")
start_stop_button.configure(text = "STOP", fg = "red", command = lambda: start_stop_button_clicked("Stop"))
except InvalidSessionIdException:
driver = webdriver.Chrome("C:\\Users\\draze\\Documents\\Python Scripts\\chromedriver\\chromedriver.exe")
driver.get("https://basketball-reference.com")
start_stop_button.configure(text = "STOP", fg = "red", command = lambda: start_stop_button_clicked("Stop"))
elif button_status == "Stop":
driver.close()
start_stop_button.configure(text = "START", fg = "green", command = lambda: start_stop_button_clicked("Start"))
start_stop_button = Button(root_window, text = "START", fg = "green", command = lambda: start_stop_button_clicked("Start"))
start_stop_button.grid(column = 0, row = 1)
root_window.mainloop()
UPDATE:
I tried turning the web driver into an array so that every time the Stop button is pressed, it closes the driver and then increments the index by one. The first time I run the script everything looks good, but when I press the Stop button, it closes the driver and then the script freezes. Here is the code:
start_stop_button = Button(root_window, text = "START", fg = "green", command = lambda: start_stop_button_clicked("Start", 0))
start_stop_button.grid(column = 2, row = 1, padx = 10, pady = 5)
def start_stop_button_clicked(button_status = "Start", driver_instance = 0):
if button_status == "Start":
driver.append(webdriver.Chrome("C:\\Users\\draze\\Documents\\Python Scripts\\chromedriver\\chromedriver.exe"))
try:
url = url_textfield.get()
if url[:8] == "https://":
driver[driver_instance].get(url)
elif url[:7] == "http://":
driver[driver_instance].get(url)
else:
driver[driver_instance].get("https://" + url)
start_stop_button.configure(text = "STOP", fg = "red", command = lambda: start_stop_button_clicked("Stop", driver_instance))
except InvalidSessionIdException:
#driver = webdriver.Chrome("C:\\Users\\draze\\Documents\\Python Scripts\\chromedriver\\chromedriver.exe")
driver.get("https://basketball-reference.com")
start_stop_button.configure(text = "STOP", fg = "red", command = lambda: start_stop_button_clicked("Stop", driver_instance))
elif button_status == "Stop":
driver[driver_instance].close()
driver_instance = driver_instance + 1
start_stop_button.configure(text = "START", fg = "green", command = lambda: start_stop_button_clicked("Start", driver_instance))
root_window.mainloop()
|
[
"So after changing the driver into an array of drivers, I also had to change driver[instance].close() to driver[instance].quit() and it stopped freezing!\nThank you for the suggestion @AbiSaran!\n"
] |
[
0
] |
[] |
[] |
[
"python",
"selenium"
] |
stackoverflow_0074577902_python_selenium.txt
|
Q:
How to insert image in HTML in python script?
I am writing an html function in Python as below:
def html(function):
htmlfile = open(function.name+".html", "w")
htmlfile.write("<html>\n")
# statement for title
# statement for header
htmlfile.write('<img src = '+function.name+'.png alt ="cfg">\n')
htmlfile.write("</html>\n")
htmlfile.close()
Earlier I had the files in the same directory where I run my script. Now I have created a folder images and moved all the files into it. function.name pulls up the different function names.
How do I change the img src line? When I substitute images/'function.name+', the image doesn't get inserted into the HTML.
All images have name in the format
<name of the function>.png
A:
You can implement it as below:
from robot.api import logger
img = "example.jpg"
strImg = '"{}"'.format(img)
img_tag = "<img src=" + strImg + ">"
logger.info(img_tag, html=True)
A:
Try this. I think this works. The image does get inserted to HTML.
def html(function):
htmlfile = open(function.name+".html", "w")
htmlfile.write("<html>\n")
# statement for title
# statement for header
htmlfile.write('<img src = "' + image_path + '" alt ="cfg">\n')
htmlfile.write("</html>\n")
htmlfile.close()
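To address the original question (images moved into an images/ subfolder), the src just needs the folder prefix and quotes, e.g.:
htmlfile.write('<img src="images/' + function.name + '.png" alt="cfg">\n')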
|
How to insert image in HTML in python script?
|
I am writing an html function in Python as below:
def html(function):
htmlfile = open(function.name+".html", "w")
htmlfile.write("<html>\n")
# statement for title
# statement for header
htmlfile.write('<img src = '+function.name+'.png alt ="cfg">\n')
htmlfile.write("</html>\n")
htmlfile.close()
Earlier I had the files in the same directory where I run my script. Now I have created a folder images and moved all the files into it. function.name pulls up the different function names.
How do I change the img src line? When I substitute images/'function.name+', the image doesn't get inserted into the HTML.
All images have name in the format
<name of the function>.png
|
[
"You can implement it as below:\nfrom robot.api import logger\n\nimg = \"example.jpg\"\nstrImg = '\"{}\"'.format(img)\n\nimg_tag = \"<img src=\" + strImg + \">\"\nlogger.info(img_tag, html=True)\n\n",
"Try this. I think this works. The image does get inserted to HTML.\ndef html(function):\n htmlfile = open(function.name+\".html\", \"w\")\n htmlfile.write(\"<html>\\n\")\n statement for title\n statement for header\n htmlfile.write('<img src = \"' + image_path + '\" alt =\"cfg\">\\n')\n htmlfile.write(\"</html>\\n\")\n htmlfile.close()\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"html",
"python"
] |
stackoverflow_0031194637_html_python.txt
|
Q:
How can I get rid of the lxml download error?
When I install lxml with the command pip install lxml in Visual Studio Code I get this error:
Collecting lxml
Using cached lxml-4.9.1.tar.gz (3.4 MB)
Preparing metadata (setup.py) ... done
Installing collected packages: lxml
DEPRECATION: lxml is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for lxml ... error
error: subprocess-exited-with-error
× Running setup.py install for lxml did not run successfully.
│ exit code: 1
╰─> [76 lines of output]
Building lxml version 4.9.1.
Building without Cython.
Building against pre-built libxml2 andl libxslt libraries
running install
C:\Users\79628\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\lxml
copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml
creating build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes
creating build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html
creating build\lib.win-amd64-cpython-311\lxml\isoschematron
copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron
copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng
copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lxml
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
Other libraries install fine, except lxml.
How can I get rid of this? Has anyone faced this problem?
I also tried it in PyCharm, but it didn't work either.
I need it in Visual Studio Code.
I will be very grateful for a solution.
A:
Did you see this part of the error message:
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
You need the Visual C++ compiler to build this package on your computer.
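Alternatively, you can often avoid compiling at all by upgrading pip and installing a newer lxml release that ships prebuilt Windows wheels for Python 3.11 (the exact version is an assumption; check PyPI):
python -m pip install --upgrade pip wheel
pip install "lxml>=4.9.2"  # 4.9.2+ publishes cp311 Windows wheels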
|
How can I get rid of the lxml download error?
|
When I install lxml with the command pip install lxml in Visual Studio Code I get this error:
Collecting lxml
Using cached lxml-4.9.1.tar.gz (3.4 MB)
Preparing metadata (setup.py) ... done
Installing collected packages: lxml
DEPRECATION: lxml is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
Running setup.py install for lxml ... error
error: subprocess-exited-with-error
× Running setup.py install for lxml did not run successfully.
│ exit code: 1
╰─> [76 lines of output]
Building lxml version 4.9.1.
Building without Cython.
Building against pre-built libxml2 andl libxslt libraries
running install
C:\Users\79628\AppData\Local\Programs\Python\Python311\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\lxml
copying src\lxml\builder.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\cssselect.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\doctestcompare.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\ElementInclude.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\pyclasslookup.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\sax.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\_elementpath.py -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\__init__.py -> build\lib.win-amd64-cpython-311\lxml
creating build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\__init__.py -> build\lib.win-amd64-cpython-311\lxml\includes
creating build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\builder.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\clean.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\defs.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\diff.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\ElementSoup.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\formfill.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\html5parser.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\soupparser.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\usedoctest.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_diffcommand.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_html5builder.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\_setmixin.py -> build\lib.win-amd64-cpython-311\lxml\html
copying src\lxml\html\__init__.py -> build\lib.win-amd64-cpython-311\lxml\html
creating build\lib.win-amd64-cpython-311\lxml\isoschematron
copying src\lxml\isoschematron\__init__.py -> build\lib.win-amd64-cpython-311\lxml\isoschematron
copying src\lxml\etree.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\etree_api.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\lxml.etree.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\lxml.etree_api.h -> build\lib.win-amd64-cpython-311\lxml
copying src\lxml\includes\c14n.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\config.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\dtdvalid.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\etreepublic.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\htmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\relaxng.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\schematron.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\tree.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\uri.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xinclude.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlerror.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlparser.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xmlschema.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xpath.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\xslt.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\__init__.pxd -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\etree_defs.h -> build\lib.win-amd64-cpython-311\lxml\includes
copying src\lxml\includes\lxml-version.h -> build\lib.win-amd64-cpython-311\lxml\includes
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng
copying src\lxml\isoschematron\resources\rng\iso-schematron.rng -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\rng
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\RNG2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
copying src\lxml\isoschematron\resources\xsl\XSD2Schtrn.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl
creating build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_abstract_expand.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_dsdl_include.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_message.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_schematron_skeleton_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\iso_svrl_for_xslt1.xsl -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
copying src\lxml\isoschematron\resources\xsl\iso-schematron-xslt1\readme.txt -> build\lib.win-amd64-cpython-311\lxml\isoschematron\resources\xsl\iso-schematron-xslt1
running build_ext
building 'lxml.etree' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
× Encountered error while trying to install package.
╰─> lxml
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
Other libraries install fine, except lxml.
How can I get rid of this? Has anyone faced this problem?
I also tried it in PyCharm, but it didn't work either.
I need it in Visual Studio Code.
I will be very grateful for a solution.
|
[
"Did you see this part of the error message:\nerror: Microsoft Visual C++ 14.0 or greater is required. Get it with \"Microsoft C++ Build Tools\": https://visualstudio.microsoft.com/visual-cpp-build-tools/\n\nYou need the Visual C++ compiler to build this package on your computer.\n"
] |
[
1
] |
[] |
[] |
[
"lxml",
"python",
"python_3.x"
] |
stackoverflow_0074594649_lxml_python_python_3.x.txt
|
Q:
Cannot install Python secrets package
I have a few dependencies in a project listed in the requirements.txt file,
requests==2.18.4
secrets==1.0.2
PyYAML==3.12
I wanted to install them and ran the command inside the virtualenv,
$ pip install -r bin/requirements.txt
I get the message provided below,
Collecting requests==2.18.4 (from -r bin/requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/49/df/50aa1999ab9bde74656c2919d9c0c085fd2b3775fd3eca826012bef76d8c/requests-2.18.4-py2.py3-none-any.whl
Collecting secrets==1.0.2 (from -r bin/requirements.txt (line 2))
Could not find a version that satisfies the requirement secrets==1.0.2 (from -r bin/requirements.txt (line 2)) (from versions: )
No matching distribution found for secrets==1.0.2 (from -r bin/requirements.txt (line 2))
Inside the virtualenv, I can have the versions provided,
$ python -V
Python 3.7.2
$ pip -V
pip 19.0.3 from /Users/chaklader/PycharmProjects/Welance-Craft/env/lib/python3.7/site-packages/pip (python 3.7)
What's the issue here?
Update
I had to delete the secrets and update the other dependencies:
requests==2.21.0
PyYAML==3.13
A:
While there is a secrets package, it’s very old (2012), has only one release, a broken website, and no info. It doesn’t appear to install on Python 2.7 or 3.7.
You may instead be trying to use the secrets standard library that’s built-in to Python 3.6+. It’s not a package, so you don’t need to install it or add it to your requirements.txt, simply import secrets. If you need it for an earlier version, there does appear to be an unofficial backport.
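For example, no install step is needed on Python 3.6+:
import secrets

token = secrets.token_hex(16)  # 32-character hex string, e.g. for API keys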
A:
When trying to install the package myself, I get the same error.
However, when searching this package on pypi.org, it seems that the last released version was in 2012 and the link there to the project's homepage leads to an almost completely empty webpage. I would thus assume that this package doesn't exist anymore.
A:
Now there is a backport of the secrets module for Python 2.7, 3.4 and 3.5 by the name of python2-secrets. (the name is a bit confusing in my opinion)
Installation:
pip install --user python2-secrets
A:
I got the same issue recently(2022) and solved this with
pip install python-secrets
see documentation in https://pypi.org/project/python-secrets/
|
Cannot install Python secrets package
|
I have a few dependencies in a project listed in the requirements.txt file,
requests==2.18.4
secrets==1.0.2
PyYAML==3.12
I wanted to install them and ran the command inside the virtualenv,
$ pip install -r bin/requirements.txt
I get the message provided below,
Collecting requests==2.18.4 (from -r bin/requirements.txt (line 1))
Using cached https://files.pythonhosted.org/packages/49/df/50aa1999ab9bde74656c2919d9c0c085fd2b3775fd3eca826012bef76d8c/requests-2.18.4-py2.py3-none-any.whl
Collecting secrets==1.0.2 (from -r bin/requirements.txt (line 2))
Could not find a version that satisfies the requirement secrets==1.0.2 (from -r bin/requirements.txt (line 2)) (from versions: )
No matching distribution found for secrets==1.0.2 (from -r bin/requirements.txt (line 2))
Inside the virtualenv, I can have the versions provided,
$ python -V
Python 3.7.2
$ pip -V
pip 19.0.3 from /Users/chaklader/PycharmProjects/Welance-Craft/env/lib/python3.7/site-packages/pip (python 3.7)
What's the issue here?
Update
I had to delete the secrets and update the other dependencies:
requests==2.21.0
PyYAML==3.13
|
[
"While there is a secrets package, it’s very old (2012), has only one release, a broken website, and no info. It doesn’t appear to install on Python 2.7 or 3.7.\nYou may instead be trying to use the secrets standard library that’s built-in to Python 3.6+. It’s not a package, so you don’t need to install it or add it to your requirements.txt, simply import secrets. If you need it for an earlier version, there does appear to be an unofficial backport.\n",
"When trying to install the package myself, I get the same error.\nHowever, when searching this package on pypi.org, it seems that the last released version was in 2012 and the link there to the project's homepage leads to an almost completely empty webpage. I would thus assume that this package doesn't exist anymore.\n",
"Now there is a backport of the secrets module for Python 2.7, 3.4 and 3.5 by the name of python2-secrets. (the name is a bit confusing in my opinion)\nInstallation:\npip install --user python2-secrets\n\n",
"I got the same issue recently(2022) and solved this with\npip install python-secrets\n\nsee documentation in https://pypi.org/project/python-secrets/\n"
] |
[
6,
1,
0,
0
] |
[] |
[] |
[
"python",
"virtualenv"
] |
stackoverflow_0054966977_python_virtualenv.txt
|
Q:
How to make a function that changes initialized coordinates
I'm struggling with making a function that changes the value of the coordinates if the parameters are appropriate.
That's what I made:
class Move:
def __init__(self, x, y):
self.x = x
self.y = y
move = Move(5, 5)
def obstacle(axis, value, plus):
if plus is True:
if axis == value:
axis = axis + 1
print(f"x = {move.x}, y = {move.y}")
elif plus is False:
if axis == value:
axis = axis - 1
print(f"x = {move.x}, y = {move.y}")
obstacle(move.x, 5, False)
The program should print:
x = 4, y = 5
A:
What the program is currently doing is running obstacle(), going inside the "if plus is false" block and changing the axis value from 5 to 4, and that's it.
To print: x=4, y=5 you can:
Instead of the axis value change the move.x value
Or print the axis value instead of move.x
A:
Here is the code rewritten to work as you expect. But note that as @IBeFrogs implies move is global so you may as well just change it directly.
class Move:
def __init__(self, x, y):
self.x = x
self.y = y
move = Move(5, 5)
def obstacle(inst, value, plus):
if plus is True:
if inst.x == value:
inst.x = inst.x + 1
elif plus is False:
if inst.x == value:
inst.x = inst.x - 1
print(f"x = {move.x}, y = {move.y}")
obstacle(move, 5, False)
A:
Parameters in a function only take the value of the variable, not the variable itself.
So when you write like this:
axis = axis + 1
and
axis = axis - 1
You just change the local parameter value axis.
Replace the two lines with:
move.x = move.x + 1
and
move.x = move.x - 1
And the final code is like this:
class Move:
def __init__(self, x, y):
self.x = x
self.y = y
move = Move(5, 5)
def obstacle(axis, value, plus):
if plus is True:
if axis == value:
move.x = move.x + 1
print(f"x = {move.x}, y = {move.y}")
elif plus is False:
if axis == value:
move.x = move.x - 1
print(f"x = {move.x}, y = {move.y}")
obstacle(move.x, 5, False)
|
How to make a function that changes initialized coordinates
|
I'm struggling with making a function that changes the value of the coordinates if the parameters are appropriate.
That's what I made:
class Move:
def __init__(self, x, y):
self.x = x
self.y = y
move = Move(5, 5)
def obstacle(axis, value, plus):
if plus is True:
if axis == value:
axis = axis + 1
print(f"x = {move.x}, y = {move.y}")
elif plus is False:
if axis == value:
axis = axis - 1
print(f"x = {move.x}, y = {move.y}")
obstacle(move.x, 5, False)
The program should print:
x = 4, y = 5
|
[
"What the program is currently doing is running obstacle(), going inside the \"if plus is false\" block and changing the axis value from 5 to 4, and that's it.\nTo print: x=4, y=5 you can:\n\nInstead of the axis value change the move.x value\nOr print the axis value instead of move.x\n\n",
"Here is the code rewritten to work as you expect. But note that as @IBeFrogs implies move is global so you may as well just change it directly.\nclass Move:\n def __init__(self, x, y):\n self.x = x\n self.y = y\n\n\nmove = Move(5, 5)\n\ndef obstacle(inst, value, plus):\n if plus is True:\n if inst.x == value:\n inst.x = inst.x + 1\n elif plus is False:\n if inst.x == value:\n inst.x = inst.x - 1\n print(f\"x = {move.x}, y = {move.y}\")\n\n\nobstacle(move, 5, False)\n\n",
"Parameters in function only takes the value in variable not the variable.\nSo when you write like this:\naxis = axis + 1\n\nand\naxis = axis - 1\n\nYou just change the parameter value axis.\nReplace the two lines with:\nmove.x = move.x + 1\n\nand\nmove.x = move.x - 1\n\nAnd the final code is like this:\nclass Move:\n def __init__(self, x, y):\n self.x = x\n self.y = y\n\nmove = Move(5, 5)\ndef obstacle(axis, value, plus):\n if plus is True:\n if axis == value:\n move.x = move.x + 1\n print(f\"x = {move.x}, y = {move.y}\")\n elif plus is False:\n if axis == value:\n move.x = move.x - 1\n print(f\"x = {move.x}, y = {move.y}\")\n\nobstacle(move.x, 5, False)\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"class",
"constructor",
"coordinates",
"function",
"python"
] |
stackoverflow_0074594613_class_constructor_coordinates_function_python.txt
|
Q:
With list of tuples corresponding to a list of int values, create list corresponding to sum of each value in the list (python)
I have a huge list of sublists, each sublist consisting of a tuple and a list of 4 integers.
I want to create a list of unique tuples that sums the integer values of the lists (keeping the four integers in each list separate).
Short Example:
[[(30, 40), [4, 7, 7, 1]],[(30, 40), [2, 9, 3, 4]],[(30, 40), [6, 5, 10, 0]],[(20, 40), [4, 0, 4, 0]],[(20, 40), [3, 4, 14, 5]],[(20, 40), [3, 2, 12, 0]],[(10, 40), [223, 22, 12, 9]]]
Output wanted:
[[(30, 40), [12, 21, 20, 5]],[(20, 40), [10, 6, 30, 5]],[(10, 40), [223, 22, 12, 9]]]
I have tried using a dictionary
l = [[(30, 40), [4, 7, 7, 1]],[(30, 40), [2, 9, 3, 4]],[(30, 40), [6, 5, 10, 0]],[(20, 40), [4, 0, 4, 0]],[(20, 40), [3, 4, 14, 5]],[(20, 40), [3, 2, 12, 0]],[(10, 40), [223, 22, 12, 9]]]
dict_tuples = {}
for item in l:
if item[0] in dict_tuples:
dict_tuples[item[0]] += item[1]
else:
dict_tuples[item[0]] = item[1]
But here I am just getting one long concatenated list of integer values for each tuple. I want the sum at each index in the list of four integers.
A:
You can create a dictionary where keys are the first tuples and values are lists of sublists. In second step sum the values at each index:
lst = [
[(30, 40), [4, 7, 7, 1]],
[(30, 40), [2, 9, 3, 4]],
[(30, 40), [6, 5, 10, 0]],
[(20, 40), [4, 0, 4, 0]],
[(20, 40), [3, 4, 14, 5]],
[(20, 40), [3, 2, 12, 0]],
[(10, 40), [223, 22, 12, 9]],
]
out = {}
for t, l in lst:
out.setdefault(t, []).append(l)
out = [[k, [sum(t) for t in zip(*v)]] for k, v in out.items()]
print(out)
Prints:
[
[(30, 40), [12, 21, 20, 5]],
[(20, 40), [10, 6, 30, 5]],
[(10, 40), [223, 22, 12, 9]],
]
A:
itertools.groupby makes this trivial. This could be done in one go, but for the sake of seeing each step of the transformation:
from itertools import groupby
from operator import itemgetter
l = [[(30, 40), [4, 7, 7, 1]], [(30, 40), [2, 9, 3, 4]], [(30, 40), [6, 5, 10, 0]], [(20, 40), [4, 0, 4, 0]], [(20, 40), [3, 4, 14, 5]], [(20, 40), [3, 2, 12, 0]], [(10, 40), [223, 22, 12, 9]]]
s = sorted(l, key=itemgetter(0))
# [[(10, 40), [223, 22, 12, 9]], [(20, 40), [4, 0, 4, 0]], [(20, 40), [3, 4, 14, 5]], [(20, 40), [3, 2, 12, 0]], [(30, 40), [4, 7, 7, 1]], [(30, 40), [2, 9, 3, 4]], [(30, 40), [6, 5, 10, 0]]]
g = groupby(s, key=itemgetter(0))
l2 = [(k, [x[1] for x in v]) for k, v in g]
# [((10, 40), [[223, 22, 12, 9]]), ((20, 40), [[4, 0, 4, 0], [3, 4, 14, 5], [3, 2, 12, 0]]), ((30, 40), [[4, 7, 7, 1], [2, 9, 3, 4], [6, 5, 10, 0]])]
l3 = [(k, list(zip(*v))) for k, v in l2]
# [((10, 40), [(223,), (22,), (12,), (9,)]), ((20, 40), [(4, 3, 3), (0, 4, 2), (4, 14, 12), (0, 5, 0)]), ((30, 40), [(4, 2, 6), (7, 9, 5), (7, 3, 10), (1, 4, 0)])]
l4 = [(k, [sum(x) for x in v]) for k, v in l3]
# [((10, 40), [223, 22, 12, 9]), ((20, 40), [10, 6, 30, 5]), ((30, 40), [12, 21, 20, 5])]
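For reference, the same pipeline collapses into a single expression (a sketch using the same imports and list l as above):
result = [
    (k, [sum(col) for col in zip(*(x[1] for x in v))])
    for k, v in groupby(sorted(l, key=itemgetter(0)), key=itemgetter(0))
]
# [((10, 40), [223, 22, 12, 9]), ((20, 40), [10, 6, 30, 5]), ((30, 40), [12, 21, 20, 5])]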
|
With list of tuples corresponding to a list of int values, create list corresponding to sum of each value in the list (python)
|
I have a huge list of sublists, each sublist consisting of a tuple and a list of 4 integers.
I want to create a list of unique tuples that adds the integer values of the lists element-wise (keeping the four integers in the list separate).
Short Example:
[[(30, 40), [4, 7, 7, 1]],[(30, 40), [2, 9, 3, 4]],[(30, 40), [6, 5, 10, 0]],[(20, 40), [4, 0, 4, 0]],[(20, 40), [3, 4, 14, 5]],[(20, 40), [3, 2, 12, 0]],[(10, 40), [223, 22, 12, 9]]]
Output wanted:
[[(30, 40), [12, 21, 20, 5]],[(20, 40), [10, 6, 30, 5]],[(10, 40), [223, 22, 12, 9]]]
I have tried using a dictionary
l = [[(30, 40), [4, 7, 7, 1]],[(30, 40), [2, 9, 3, 4]],[(30, 40), [6, 5, 10, 0]],[(20, 40), [4, 0, 4, 0]],[(20, 40), [3, 4, 14, 5]],[(20, 40), [3, 2, 12, 0]],[(10, 40), [223, 22, 12, 9]]]
dict_tuples = {}
for item in l:
if item[0] in dict_tuples:
dict_tuples[item[0]] += item[1]
else:
dict_tuples[item[0]] = item[1]
But here I am just getting a long list of integer values for each tuple. I want the sum at each index in the list of four integers.
|
[
"You can create a dictionary where keys are the first tuples and values are lists of sublists. In second step sum the values at each index:\nlst = [\n [(30, 40), [4, 7, 7, 1]],\n [(30, 40), [2, 9, 3, 4]],\n [(30, 40), [6, 5, 10, 0]],\n [(20, 40), [4, 0, 4, 0]],\n [(20, 40), [3, 4, 14, 5]],\n [(20, 40), [3, 2, 12, 0]],\n [(10, 40), [223, 22, 12, 9]],\n]\n\nout = {}\nfor t, l in lst:\n out.setdefault(t, []).append(l)\n\nout = [[k, [sum(t) for t in zip(*v)]] for k, v in out.items()]\n\nprint(out)\n\nPrints:\n[\n [(30, 40), [12, 21, 20, 5]],\n [(20, 40), [10, 6, 30, 5]],\n [(10, 40), [223, 22, 12, 9]],\n]\n\n",
"itertools.groupby makes this trivial. This could be done in one go, but for the sake of seeing each step of the transformation:\nfrom itertools import groupby\nfrom operator import itemgetter\n\nl = [[(30, 40), [4, 7, 7, 1]], [(30, 40), [2, 9, 3, 4]], [(30, 40), [6, 5, 10, 0]], [(20, 40), [4, 0, 4, 0]], [(20, 40), [3, 4, 14, 5]], [(20, 40), [3, 2, 12, 0]], [(10, 40), [223, 22, 12, 9]]]\n\ns = sorted(l, key=itemgetter(0))\n# [[(10, 40), [223, 22, 12, 9]], [(20, 40), [4, 0, 4, 0]], [(20, 40), [3, 4, 14, 5]], [(20, 40), [3, 2, 12, 0]], [(30, 40), [4, 7, 7, 1]], [(30, 40), [2, 9, 3, 4]], [(30, 40), [6, 5, 10, 0]]]\n\ng = groupby(s, key=itemgetter(0))\n\nl2 = [(k, [x[1] for x in v]) for k, v in g]\n# [((10, 40), [[223, 22, 12, 9]]), ((20, 40), [[4, 0, 4, 0], [3, 4, 14, 5], [3, 2, 12, 0]]), ((30, 40), [[4, 7, 7, 1], [2, 9, 3, 4], [6, 5, 10, 0]])]\n\nl3 = [(k, list(zip(*v))) for k, v in l2]\n# [((10, 40), [(223,), (22,), (12,), (9,)]), ((20, 40), [(4, 3, 3), (0, 4, 2), (4, 14, 12), (0, 5, 0)]), ((30, 40), [(4, 2, 6), (7, 9, 5), (7, 3, 10), (1, 4, 0)])]\n\nl4 = [(k, [sum(x) for x in v]) for k, v in l3]\n# [((10, 40), [223, 22, 12, 9]), ((20, 40), [10, 6, 30, 5]), ((30, 40), [12, 21, 20, 5])]\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"dictionary",
"list",
"python",
"tuples",
"unique"
] |
stackoverflow_0074594645_dictionary_list_python_tuples_unique.txt
|
Q:
How can I better check for a pair using a data set of card numbers and their suits?
I have recently taken it upon myself to create a program that plays DJ Wild the poker game. I haven't run into many bumps, but I am not very familiar with time complexity, which I know many programs can run into issues with. This is making me cautious about how many and how long my if statements are. Thus a question occurred: can I simplify the following if statement that uses the count method?
`
#imports
import random
import itertools
#declaration of the variables
ante = 0
bonus = 0
balance = 200
cards = []
hands0 = ['A','2','3','4','5','6','7','8','9','10','J','Q','K']
hands1 = ["Spade", "Club", "Diamond", "Heart"]
#initializing the card deck
carddeck = list(itertools.product(['A','2','3','4','5','6','7','8','9','10','J','Q','K'],["Spade", "Club", "Diamond", "Heart"]))
#shuffling the deck
random.shuffle(carddeck)
#drawing n number of cards from the shuffled deck
def user(n):
for i in range(n):
print("Player:", carddeck[i][0], carddeck[i][1])
cards.append(carddeck[i][0])
cards.append(carddeck[i][1])
carddeck.remove(carddeck[i])
user(5)
#print(cards)
if cards.count('2') == 2 or \
cards.count('3') == 2 or \
cards.count('4') == 2 or \
cards.count('5') == 2 or \
cards.count('6') == 2 or \
cards.count('7') == 2 or \
cards.count('8') == 2 or \
cards.count('9') == 2 or \
cards.count('10') == 2 or \
cards.count('J') == 2 or \
cards.count('Q') == 2 or \
cards.count('K') == 2 or \
cards.count('A') == 2:
print("You have a pair")
else:
print("You don't have a pair")
`
I have tried using the line breaks with all the \ implemented, but I can't help but think that there is a simpler way to check for pairs using the list data for the cards created and dealt to the player.
A:
initialize a boolean to False
loop over the hands0 array
check the count of each rank in the cards list
if the count is 2, set your boolean to True and break your for loop
then you can check against your boolean
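A minimal sketch of that approach (assuming cards and hands0 are defined as in the question; suit names never collide with the rank strings, so counting ranks directly is safe):
has_pair = False
for rank in hands0:
    if cards.count(rank) == 2:  # exactly two cards of this rank
        has_pair = True
        break

if has_pair:
    print("You have a pair")
else:
    print("You don't have a pair")
The whole loop also collapses to one line with a generator expression: any(cards.count(rank) == 2 for rank in hands0).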
|
How can I better check for a pair using a data set of card numbers and their suits?
|
I have recently taken it upon myself to create a program that plays DJ Wild the poker game. I haven't run into many bumps, but I am not very familiar with time complexity, which I know many programs can run into issues with. This is making me cautious about how many and how long my if statements are. Thus a question occurred: can I simplify the following if statement that uses the count method?
`
#imports
import random
import itertools
#declaration of the variables
ante = 0
bonus = 0
balance = 200
cards = []
hands0 = ['A','2','3','4','5','6','7','8','9','10','J','Q','K']
hands1 = ["Spade", "Club", "Diamond", "Heart"]
#initializing the card deck
carddeck = list(itertools.product(['A','2','3','4','5','6','7','8','9','10','J','Q','K'],["Spade", "Club", "Diamond", "Heart"]))
#shuffling the deck
random.shuffle(carddeck)
#drawing n number of cards from the shuffled deck
def user(n):
for i in range(n):
print("Player:", carddeck[i][0], carddeck[i][1])
cards.append(carddeck[i][0])
cards.append(carddeck[i][1])
carddeck.remove(carddeck[i])
user(5)
#print(cards)
if cards.count('2') == 2 or \
cards.count('3') == 2 or \
cards.count('4') == 2 or \
cards.count('5') == 2 or \
cards.count('6') == 2 or \
cards.count('7') == 2 or \
cards.count('8') == 2 or \
cards.count('9') == 2 or \
cards.count('10') == 2 or \
cards.count('J') == 2 or \
cards.count('Q') == 2 or \
cards.count('K') == 2 or \
cards.count('A') == 2:
print("You have a pair")
else:
print("You don't have a pair")
`
I have tried using the line breaks with all the \ implemented, but I can't help but think that there is a simpler way to check for pairs using the list data for the cards created and dealt to the player.
|
[
"\ninitialize boolean to false\nloop the hands0 array\ncheck value of array\nif its true set your boolean to true and break your for loop\nthen you can check against your boolean\n\n"
] |
[
0
] |
[] |
[] |
[
"if_statement",
"python",
"simplify",
"time_complexity"
] |
stackoverflow_0074594730_if_statement_python_simplify_time_complexity.txt
|
Q:
How to add a surcharge to fine calculator in Python?
I am trying to create a fine amount calculator, but I don't know how to add a surcharge to the calculations.
For each fine amount in the code, I need to add a victims surcharge that varies depending on fine amount. If the fine amount is between $0 and $99 surcharge is $40, between $100 and $200 surcharge is $50, $201 and $350 surcharge is $60, $351 and $500 surcharge is $80, and over $500 surcharge is 40%.
Any suggestions for the best way to implement this into my current code?
thank you!
def ask_limit():
limit = float(input ("What was the speed limit? "))
return limit
def ask_speed():
speed = float(input ("What was your clocked speed? "))
return speed
def findfine(speed, limit):
if speed > 35 + limit :
over35fine = ((speed - limit) * 8 + 170)
print("Total fine amount is:", over35fine)
elif speed > 30 + limit :
over30fine = ((speed - limit) * 4 + 100)
print("Total fine amount is:", over30fine)
elif speed > limit :
normalfine = ((speed - limit) * 2 + 100)
print("Total fine amount is:", normalfine)
elif speed <= limit:
print("No fine, vehicle did not exceed allowed speed limit.")
def main():
limit = ask_limit()
speed = ask_speed()
findfine(speed, limit)
main()
A:
You can add a function to do that based on the fine value:
def ask_limit():
limit = float(input ("What was the speed limit? "))
return limit
def ask_speed():
speed = float(input ("What was your clocked speed? "))
return speed
def findfine(speed, limit):
if speed > 35 + limit :
return ((speed - limit) * 8 + 170)
elif speed > 30 + limit :
return ((speed - limit) * 4 + 100)
elif speed > limit :
return ((speed - limit) * 2 + 100)
elif speed <= limit:
        return 0
def findSurcharge(fine):
if fine < 100:
return 40
elif fine < 200:
return 50
elif fine < 350:
return 60
elif fine < 500:
return 80
elif fine > 500:
return fine * 0.4
else:
return 0
def main():
limit = ask_limit()
speed = ask_speed()
fineAmount = findfine(speed, limit)
surcharge = findSurcharge(fineAmount)
print(f"fine: {fineAmount}, surcharge: {surcharge}")
main()
Now you can print your message in the main function based on the surcharge and fine amount.
Note: adjust the if-elses in the findSurcharge function if they are not correct.
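Since the question's brackets are $0 to $99, $100 to $200, $201 to $350, $351 to $500, and over $500, a corrected boundary sketch (untested, and treating a zero or missing fine as no surcharge) could look like:
def findSurcharge(fine):
    if not fine:         # no fine means no surcharge
        return 0
    elif fine < 100:     # $0 to $99
        return 40
    elif fine <= 200:    # $100 to $200
        return 50
    elif fine <= 350:    # $201 to $350
        return 60
    elif fine <= 500:    # $351 to $500
        return 80
    else:                # over $500: 40% of the fine
        return fine * 0.4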
|
How to add a surcharge to fine calculator in Python?
|
I am trying to create a fine amount calculator, but I don't know how to add a surcharge to the calculations.
For each fine amount in the code, I need to add a victims surcharge that varies depending on fine amount. If the fine amount is between $0 and $99 surcharge is $40, between $100 and $200 surcharge is $50, $201 and $350 surcharge is $60, $351 and $500 surcharge is $80, and over $500 surcharge is 40%.
Any suggestions for the best way to implement this into my current code?
thank you!
def ask_limit():
limit = float(input ("What was the speed limit? "))
return limit
def ask_speed():
speed = float(input ("What was your clocked speed? "))
return speed
def findfine(speed, limit):
if speed > 35 + limit :
over35fine = ((speed - limit) * 8 + 170)
print("Total fine amount is:", over35fine)
elif speed > 30 + limit :
over30fine = ((speed - limit) * 4 + 100)
print("Total fine amount is:", over30fine)
elif speed > limit :
normalfine = ((speed - limit) * 2 + 100)
print("Total fine amount is:", normalfine)
elif speed <= limit:
print("No fine, vehicle did not exceed allowed speed limit.")
def main():
limit = ask_limit()
speed = ask_speed()
findfine(speed, limit)
main()
|
[
"You can add a function to do that based on the fine value:\ndef ask_limit():\n limit = float(input (\"What was the speed limit? \"))\n return limit\n\ndef ask_speed():\n speed = float(input (\"What was your clocked speed? \"))\n return speed\n\ndef findfine(speed, limit):\n if speed > 35 + limit :\n return ((speed - limit) * 8 + 170)\n elif speed > 30 + limit :\n return ((speed - limit) * 4 + 100)\n elif speed > limit :\n return ((speed - limit) * 2 + 100)\n elif speed <= limit:\n 0\n\ndef findSurcharge(fine):\n if fine < 100:\n return 40\n elif fine < 200:\n return 50\n elif fine < 350:\n return 60\n elif fine < 500:\n return 80\n elif fine > 500:\n return fine * 0.4\n else:\n return 0\n\ndef main():\n limit = ask_limit()\n speed = ask_speed()\n fineAmount = findfine(speed, limit)\n surcharge = findSurcharge(fineAmount)\n print(f\"fine: {fineAmount}, surcharge: {surcharge}\")\nmain() \n\nnow you can print your message in the main function based on the surcharge and fine amount.\nNote: adjust the if-elses in the findSurcharge function if they are not correct.\n"
] |
[
0
] |
[] |
[] |
[
"function",
"if_statement",
"python",
"python_3.x"
] |
stackoverflow_0074594737_function_if_statement_python_python_3.x.txt
|
Q:
Drawing line plot for a histogram
I'm trying to reproduce this chart using Altair as much as I can.
https://fivethirtyeight.com/wp-content/uploads/2014/04/hickey-bechdel-11.png?w=575
I'm stuck at getting the black line dividing pass/fail. This is similar to this Altair example: https://altair-viz.github.io/gallery/step_chart.html.
However: in the 538 viz the value for the final date must be extended for the full width of that last element. In the step chart example and my solution, the line stops as soon as the last date element is met.
I have looked at altair's github and google groups and found nothing similar to this problem.
import altair as alt
import pandas as pd
movies=pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/bechdel/movies.csv')
domain = ['ok', 'dubious','men', 'notalk', 'nowomen']
base=alt.Chart(movies).encode(
alt.X("year:N",bin=alt.BinParams(step=5,extent=[1970,2015]),axis=alt.Axis(labelAngle=0, labelLimit=50,labelFontSize=8),title=None), alt.Y("count()",stack='normalize',title=None,axis=alt.Axis(format='%',values=[0, 0.25,0.50,0.75,1]))
).properties(width=400)
main=base.transform_calculate(cleanrank='datum.clean_test == "ok" ? 1 : datum.clean_test == "dubious" ? 2 : datum.clean_test == "men" ? 3 : datum.clean_test == "notalk" ? 4 : 5'
).mark_bar(stroke='white' #add horizontal lines
).encode(
alt.Color("clean_test:N",scale=alt.Scale(
domain=domain,
range=['dodgerblue', 'skyblue', 'pink', 'coral','red']))
,order=alt.Order('cleanrank:O', sort='ascending')
)
extra=base.transform_calculate(cleanpass='datum.clean_test == "ok" ? "PASS" : datum.clean_test == "dubious" ? "PASS" : "FAIL"'
).mark_line(interpolate='step-after'
).encode(alt.Color("cleanpass:N",scale=alt.Scale(domain=['PASS','FAIL'],range=['black','white']))
)
alt.layer(main,extra).configure_scale(
bandPaddingInner=0.01 #smaller vertical lines
).resolve_scale(color='independent')
A:
One - rather hacky - way to make the step chart cover the beginning of the first until the end of the last bin is to control the bin positions manually (using the rank of the ordered bins).
This way we can add two lines: one with 'step-after' and another one with step-before shifted by one bin. From here on, the tick labels would still need to be replaced & centered with the appropriate bin labels, e.g. the levels from pd.cut...
Dataframe preparation
import altair as alt
import pandas as pd
movies=pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/bechdel/movies.csv')
domain = ['ok', 'dubious','men', 'notalk', 'nowomen']
movies['year_bin'] = pd.cut(movies['year'], range(1970, 2016, 5))
movies['year_rank'] = movies['year_bin'].cat.codes
movies = movies[movies['year_rank']>=0]
df_plot = movies[['year_rank', 'clean_test']].copy()
df_plot['year_rank_end'] = df_plot['year_rank'] + 1
df_plot['clean_pass'] = df_plot['clean_test'].apply(lambda x: 'PASS' if x in ['ok', 'dubious'] else 'FAIL')
Chart declaration
base=alt.Chart(df_plot).encode(
x=alt.X('year_rank',
axis=alt.Axis(labelAngle=0, labelLimit=50,labelFontSize=8),
title=None
),
x2='year_rank_end',
y=alt.Y('count()',title=None, stack='normalize',
axis=alt.Axis(format='%',values=[0, 0.25,0.50,0.75,1])
)
).properties(width=400)
main=base.transform_calculate(
cleanrank='datum.clean_test == "ok" ? 1 : datum.clean_test == "dubious" ? 2 : datum.clean_test == "men" ? 3 : datum.clean_test == "notalk" ? 4 : 5'
).mark_bar(
stroke='white' #add horizontal lines
).encode(
alt.Color("clean_test:N",scale=alt.Scale(
domain=domain,
range=['dodgerblue', 'skyblue', 'pink', 'coral','red']))
,order=alt.Order('cleanrank:O', sort='ascending')
)
extra=base.transform_calculate(
).mark_line(
interpolate='step-after'
).encode(
alt.Color("clean_pass:N",scale=alt.Scale(domain=['PASS','FAIL'],range=['black','white']))
)
extra2=base.transform_calculate(
# shift data by one bin, so that step-before matches the unshifted step-after
year_rank='datum.year_rank +1'
).mark_line(
interpolate='step-before'
).encode(
alt.Color("clean_pass:N",scale=alt.Scale(domain=['PASS','FAIL'],range=['black','white']), legend=None)
)
alt.layer(main, extra, extra2).configure_scale(
bandPaddingInner=0.01 #smaller vertical lines
).resolve_scale(color='independent')
|
Drawing line plot for a histogram
|
I'm trying to reproduce this chart using Altair as much as I can.
https://fivethirtyeight.com/wp-content/uploads/2014/04/hickey-bechdel-11.png?w=575
I'm stuck at getting the black line dividing pass/fail. This is similar to this Altair example: https://altair-viz.github.io/gallery/step_chart.html.
However: in the 538 viz the value for the final date must be extended for the full width of that last element. In the step chart example and my solution, the line stops as soon as the last date element is met.
I have looked at altair's github and google groups and found nothing similar to this problem.
import altair as alt
import pandas as pd
movies=pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/bechdel/movies.csv')
domain = ['ok', 'dubious','men', 'notalk', 'nowomen']
base=alt.Chart(movies).encode(
alt.X("year:N",bin=alt.BinParams(step=5,extent=[1970,2015]),axis=alt.Axis(labelAngle=0, labelLimit=50,labelFontSize=8),title=None), alt.Y("count()",stack='normalize',title=None,axis=alt.Axis(format='%',values=[0, 0.25,0.50,0.75,1]))
).properties(width=400)
main=base.transform_calculate(cleanrank='datum.clean_test == "ok" ? 1 : datum.clean_test == "dubious" ? 2 : datum.clean_test == "men" ? 3 : datum.clean_test == "notalk" ? 4 : 5'
).mark_bar(stroke='white' #add horizontal lines
).encode(
alt.Color("clean_test:N",scale=alt.Scale(
domain=domain,
range=['dodgerblue', 'skyblue', 'pink', 'coral','red']))
,order=alt.Order('cleanrank:O', sort='ascending')
)
extra=base.transform_calculate(cleanpass='datum.clean_test == "ok" ? "PASS" : datum.clean_test == "dubious" ? "PASS" : "FAIL"'
).mark_line(interpolate='step-after'
).encode(alt.Color("cleanpass:N",scale=alt.Scale(domain=['PASS','FAIL'],range=['black','white']))
)
alt.layer(main,extra).configure_scale(
bandPaddingInner=0.01 #smaller vertical lines
).resolve_scale(color='independent')
|
[
"One - rather hacky - way to make the step chart cover the beginning of the first until the end of the last bin is to control the bin positions manually (using the rank of the ordered bins).\nThis way we can add two lines: one with 'step-after' and another one with step-before shifted by one bin. From here on, the tick labels would still need to be replaced & centered with the appropriate bin labels, e.g. the levels from pd.cut...\n\nDataframe preparation\nimport altair as alt\nimport pandas as pd\n\nmovies=pd.read_csv('https://raw.githubusercontent.com/fivethirtyeight/data/master/bechdel/movies.csv')\ndomain = ['ok', 'dubious','men', 'notalk', 'nowomen']\n\nmovies['year_bin'] = pd.cut(movies['year'], range(1970, 2016, 5))\nmovies['year_rank'] = movies['year_bin'].cat.codes\nmovies = movies[movies['year_rank']>=0]\ndf_plot = movies[['year_rank', 'clean_test']].copy()\ndf_plot['year_rank_end'] = df_plot['year_rank'] + 1\ndf_plot['clean_pass'] = df_plot['clean_test'].apply(lambda x: 'PASS' if x in ['ok', 'dubious'] else 'FAIL')\n\nChart declaration\nbase=alt.Chart(df_plot).encode(\n x=alt.X('year_rank', \n axis=alt.Axis(labelAngle=0, labelLimit=50,labelFontSize=8),\n title=None\n ), \n x2='year_rank_end',\n y=alt.Y('count()',title=None, stack='normalize',\n axis=alt.Axis(format='%',values=[0, 0.25,0.50,0.75,1])\n )\n).properties(width=400)\n\nmain=base.transform_calculate(\n cleanrank='datum.clean_test == \"ok\" ? 1 : datum.clean_test == \"dubious\" ? 2 : datum.clean_test == \"men\" ? 3 : datum.clean_test == \"notalk\" ? 4 : 5'\n ).mark_bar(\n stroke='white' #add horizontal lines\n ).encode( \n alt.Color(\"clean_test:N\",scale=alt.Scale(\n domain=domain,\n range=['dodgerblue', 'skyblue', 'pink', 'coral','red']))\n ,order=alt.Order('cleanrank:O', sort='ascending')\n)\n\nextra=base.transform_calculate(\n ).mark_line(\n interpolate='step-after'\n ).encode(\n alt.Color(\"clean_pass:N\",scale=alt.Scale(domain=['PASS','FAIL'],range=['black','white']))\n )\n\nextra2=base.transform_calculate(\n # shift data by one bin, so that step-before matches the unshifted step-after\n year_rank='datum.year_rank +1' \n ).mark_line(\n interpolate='step-before'\n ).encode(\n alt.Color(\"clean_pass:N\",scale=alt.Scale(domain=['PASS','FAIL'],range=['black','white']), legend=None)\n )\n\nalt.layer(main, extra, extra2).configure_scale(\n bandPaddingInner=0.01 #smaller vertical lines\n).resolve_scale(color='independent')\n\n"
] |
[
0
] |
[] |
[] |
[
"altair",
"histogram",
"line",
"python",
"vega_lite"
] |
stackoverflow_0057878892_altair_histogram_line_python_vega_lite.txt
|
Q:
Detect passive or active sentence from text
Using the Python package spaCy, how can one detect whether a sentence uses a passive or active voice? For example, the following sentences should be detected as using a passive and active voice respectively:
passive_sentence = "John was accused of committing crimes by David"
# passive voice "John was accused"
active_sentence = "David accused John of committing crimes"
# active voice "David accused John"
A:
There is no easy solution for this. If you're looking for something simple, accuracy might take a hit. There is a wealth of info about NLP detecting passive and active voice in a text, proprietary algorithms being the most accurate, but they come at a cost.
What you're looking for, if it's for a custom hobby project, could have a quick solution trying out something like this, but if you follow the comments, you'll notice that even here the accuracy is not in the 99% or even 90% range.
You'll have to go more complex for higher accuracy, but don't expect 99%.
A:
The following solution employs spaCy's rule-based matching engine to detect and display the parts of a sentence that use the active or passive voice. No method is going to correctly identify 100% of sentences, especially the more complex ones; however, the solution below handles the vast majority of cases and can likely be improved to handle more edge cases.
Overview of Rule/Pattern Matching
The key components are the rules you provide to the matcher. I'll explain one of the passive voice rules below---if you understand one, you should be able to understand all the other rules and begin to construct your own rules to match particular patterns using the spaCy token-based matching documentation. Consider the following passive voice rule:
[{'DEP': 'nsubjpass'}, {'DEP': 'aux', 'OP': '*'}, {'DEP': 'auxpass'}, {'TAG': 'VBN'}]
This rule/pattern is used by the matcher to find a sequential combination of tokens. Specifically, the matcher will:
Find a token whose dependency label (DEP) is passive nominal subject (nsubjpass).
Find a token whose DEP is passive auxiliary (auxpass), preceded by zero or more tokens whose DEP is auxiliary (aux). Note that the key "OP" stands for "operator", which defines how often a token pattern should be matched. See the operators and quantifiers subsection of the spaCy documentation for more information.
Find a final token whose part of speech is tagged (TAG) as verb past participle (VBN).
If you are unfamiliar with Part of Speech (PoS) tags, please see this tutorial. Additionally, in-depth explanations of the dependency labels and what they mean are provided on the Universal Dependencies (UD) dependency documentation page.
Solution
import spacy
from spacy.matcher import Matcher
passive_sentences = [
"John was accused of committing crimes by David.",
"She was sent a cheque for a thousand euros.",
"He was given a book for his birthday.",
"He will be sent away to school.",
"The meeting was called off.",
"He was looked after by his grandmother.",
]
active_sentences = [
"David accused John of committing crimes.",
"Someone sent her a cheque for a thousand euros.",
"I gave him a book for his birthday.",
"They will send him away to school.",
"They called off the meeting.",
"His grandmother looked after him."
]
composite_sentences = [
"Three men seized me, and I was carried to the car."
]
# Load spaCy pipeline (model)
nlp = spacy.load('en_core_web_trf')
# Create pattern to match passive voice use
passive_rules = [
[{'DEP': 'nsubjpass'}, {'DEP': 'aux', 'OP': '*'}, {'DEP': 'auxpass'}, {'TAG': 'VBN'}],
[{'DEP': 'nsubjpass'}, {'DEP': 'aux', 'OP': '*'}, {'DEP': 'auxpass'}, {'TAG': 'VBZ'}],
[{'DEP': 'nsubjpass'}, {'DEP': 'aux', 'OP': '*'}, {'DEP': 'auxpass'}, {'TAG': 'RB'}, {'TAG': 'VBN'}],
]
# Create pattern to match active voice use
active_rules = [
[{'DEP': 'nsubj'}, {'TAG': 'VBD', 'DEP': 'ROOT'}],
[{'DEP': 'nsubj'}, {'TAG': 'VBP'}, {'TAG': 'VBG', 'OP': '!'}],
[{'DEP': 'nsubj'}, {'DEP': 'aux', 'OP': '*'}, {'TAG': 'VB'}],
[{'DEP': 'nsubj'}, {'DEP': 'aux', 'OP': '*'}, {'TAG': 'VBG'}],
[{'DEP': 'nsubj'}, {'TAG': 'RB', 'OP': '*'}, {'TAG': 'VBG'}],
[{'DEP': 'nsubj'}, {'TAG': 'RB', 'OP': '*'}, {'TAG': 'VBZ'}],
[{'DEP': 'nsubj'}, {'TAG': 'RB', 'OP': '+'}, {'TAG': 'VBD'}],
]
matcher = Matcher(nlp.vocab) # Init. the matcher with a vocab (note matcher vocab must share same vocab with docs)
matcher.add('Passive', passive_rules) # Add passive rules to matcher
matcher.add('Active', active_rules) # Add active rules to matcher
text = passive_sentences + active_sentences + composite_sentences # Combine various passive/active sentences
for sentence in text:
doc = nlp(sentence) # Process text with spaCy model
matches = matcher(doc) # Get matches
print("-"*40 + "\n" + sentence)
if len(matches) > 0:
for match_id, start, end in matches:
string_id = nlp.vocab.strings[match_id]
span = doc[start:end] # the matched span
print("\t{}: {}".format(string_id, span.text))
else:
print("\tNo active or passive voice detected.")
Output
----------------------------------------
John was accused of committing crimes by David.
Passive: John was accused
----------------------------------------
She was sent a cheque for a thousand euros.
Passive: She was sent
----------------------------------------
He was given a book for his birthday.
Passive: He was given
----------------------------------------
He will be sent away to school.
Passive: He will be sent
----------------------------------------
The meeting was called off.
Passive: meeting was called
----------------------------------------
He was looked after by his grandmother.
Passive: He was looked
----------------------------------------
David accused John of committing crimes.
Active: David accused
----------------------------------------
Someone sent her a cheque for a thousand euros.
Active: Someone sent
----------------------------------------
I gave him a book for his birthday.
Active: I gave
----------------------------------------
They will send him away to school.
Active: They will send
----------------------------------------
They called off the meeting.
Active: They called
----------------------------------------
His grandmother looked after him.
Active: grandmother looked
----------------------------------------
Three men seized me, and I was carried to the car.
Active: men seized
Passive: I was carried
|
Detect passive or active sentence from text
|
Using the Python package spaCy, how can one detect whether a sentence uses a passive or active voice? For example, the following sentences should be detected as using a passive and active voice respectively:
passive_sentence = "John was accused of committing crimes by David"
# passive voice "John was accused"
active_sentence = "David accused John of committing crimes"
# active voice "David accused John"
|
[
"There is no easy solution for this. If you're looking for something simple, accuracy might take a hit. There is a wealth of info about NLP detecting passive and active voice in a text, proprietary algorithms being the most accurate, but they come at a cost.\nWhat you're looking for, if it's for a custom hobby project, could have a quick solution trying out something like this, but if you follow the comments, you'll notice even here the accuracy is not in the double or even single 9 percentage rates.\nYou'll have to go more complex for higher accuracy, but don't expect double 9.\n",
"The following solution employs spaCy's rule-based matching engine to detect and display the parts of a sentence that use the active or passive voice. No method is going to correctly identify 100% of sentences, especially those that are more complex, however, the solution below handles the vast majority of cases and can likely be improved to handle more edge cases.\nOverview of Rule/Pattern Matching\nThe key components are the rules you provide to the matcher. I'll explain one of the passive voice rules below---if you understand one, you should be able to understand all the other rules and begin to construct your own rules to match particular patterns using the spaCy token-based matching documentation. Consider the following passive voice rule:\n[{'DEP': 'nsubjpass'}, {'DEP': 'aux', 'OP': '*'}, {'DEP': 'auxpass'}, {'TAG': 'VBN'}]\n\nThis rule/pattern is used by the matcher to find a sequential combination of tokens. Specifically, the matcher will:\n\nFind a token whose dependency label (DEP) is passive nominal subject (nsubjpass).\nFind a token whose DEP is passive auxiliary (auxpass), preceded by zero or more tokens whose DEP is auxiliary (aux). Note that the key \"OP\" stands for \"operator\", which defines how often a token pattern should be matched. See the operators and quantifiers subsection of the spaCy documentation for more information.\nFind a final token whose part of speech is tagged (TAG) as verb past participle (VBN).\n\nIf you are unfamiliar with Part of Speech (PoS) tags, please see this tutorial. Additionally, in-depth explanations of the dependency labels and what they mean are provided on the Universal Dependencies (UD) dependency documentation page.\nSolution\nimport spacy\nfrom spacy.matcher import Matcher\n\npassive_sentences = [\n \"John was accused of committing crimes by David.\",\n \"She was sent a cheque for a thousand euros.\",\n \"He was given a book for his birthday.\",\n \"He will be sent away to school.\",\n \"The meeting was called off.\",\n \"He was looked after by his grandmother.\",\n]\nactive_sentences = [\n \"David accused John of committing crimes.\",\n \"Someone sent her a cheque for a thousand euros.\",\n \"I gave him a book for his birthday.\",\n \"They will send him away to school.\",\n \"They called off the meeting.\",\n \"His grandmother looked after him.\"\n]\ncomposite_sentences = [\n \"Three men seized me, and I was carried to the car.\"\n]\n\n# Load spaCy pipeline (model)\nnlp = spacy.load('en_core_web_trf')\n# Create pattern to match passive voice use\npassive_rules = [\n [{'DEP': 'nsubjpass'}, {'DEP': 'aux', 'OP': '*'}, {'DEP': 'auxpass'}, {'TAG': 'VBN'}],\n [{'DEP': 'nsubjpass'}, {'DEP': 'aux', 'OP': '*'}, {'DEP': 'auxpass'}, {'TAG': 'VBZ'}],\n [{'DEP': 'nsubjpass'}, {'DEP': 'aux', 'OP': '*'}, {'DEP': 'auxpass'}, {'TAG': 'RB'}, {'TAG': 'VBN'}],\n]\n# Create pattern to match active voice use\nactive_rules = [\n [{'DEP': 'nsubj'}, {'TAG': 'VBD', 'DEP': 'ROOT'}],\n [{'DEP': 'nsubj'}, {'TAG': 'VBP'}, {'TAG': 'VBG', 'OP': '!'}],\n [{'DEP': 'nsubj'}, {'DEP': 'aux', 'OP': '*'}, {'TAG': 'VB'}],\n [{'DEP': 'nsubj'}, {'DEP': 'aux', 'OP': '*'}, {'TAG': 'VBG'}],\n [{'DEP': 'nsubj'}, {'TAG': 'RB', 'OP': '*'}, {'TAG': 'VBG'}],\n [{'DEP': 'nsubj'}, {'TAG': 'RB', 'OP': '*'}, {'TAG': 'VBZ'}],\n [{'DEP': 'nsubj'}, {'TAG': 'RB', 'OP': '+'}, {'TAG': 'VBD'}],\n]\n\nmatcher = Matcher(nlp.vocab) # Init. 
the matcher with a vocab (note matcher vocab must share same vocab with docs)\nmatcher.add('Passive', passive_rules) # Add passive rules to matcher\nmatcher.add('Active', active_rules) # Add active rules to matcher\ntext = passive_sentences + active_sentences + composite_sentences # Combine various passive/active sentences\n\nfor sentence in text:\n doc = nlp(sentence) # Process text with spaCy model\n matches = matcher(doc) # Get matches\n print(\"-\"*40 + \"\\n\" + sentence)\n if len(matches) > 0:\n for match_id, start, end in matches:\n string_id = nlp.vocab.strings[match_id]\n span = doc[start:end] # the matched span\n print(\"\\t{}: {}\".format(string_id, span.text))\n else:\n print(\"\\tNo active or passive voice detected.\")\n\nOutput\n----------------------------------------\nJohn was accused of committing crimes by David.\n Passive: John was accused\n----------------------------------------\nShe was sent a cheque for a thousand euros.\n Passive: She was sent\n----------------------------------------\nHe was given a book for his birthday.\n Passive: He was given\n----------------------------------------\nHe will be sent away to school.\n Passive: He will be sent\n----------------------------------------\nThe meeting was called off.\n Passive: meeting was called\n----------------------------------------\nHe was looked after by his grandmother\n Passive: He was looked\n----------------------------------------\nDavid accused John of committing crimes.\n Active: David accused\n----------------------------------------\nSomeone sent her a cheque for a thousand euros.\n Active: Someone sent\n----------------------------------------\nI gave him a book for his birthday.\n Active: I gave\n----------------------------------------\nThey will send him away to school.\n Active: They will send\n----------------------------------------\nThey called off the meeting.\n Active: They called\n----------------------------------------\nHis grandmother looked after him..\n Active: grandmother looked\n----------------------------------------\nThree men seized me, and I was carried to the car.\n Active: men seized\n Passive: I was carried\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"nlp",
"python",
"spacy"
] |
stackoverflow_0074528441_nlp_python_spacy.txt
|
Q:
OOP Tkinter how to pass a value to a function
I'm rewriting my program in OOP and I'm faced with the problem that I can't access graphInA and graphInB in the calBut function. How can I implement this?
import customtkinter as CTtk
from tkinter import *
from tkinter import messagebox
from tkinter.ttk import Style
class App(CTtk.CTk):
def __init__(self):
super().__init__()
self.initUI()
self.checkNaN()
def initUI(self):
self.title("График функции")
CTtk.set_appearance_mode("dark")
CTtk.CTkLabel(text="Введите пределы интегрирования").grid(row=0,column=0)
CTtk.CTkLabel(text="до", width=50).grid(row=0,column=1)
CTtk.CTkLabel(text="и от", width=50).grid(row=0,column=3)
graphInA = CTtk.CTkEntry(width=50)
graphInA.grid(row=0, column=2)
graphInB = CTtk.CTkEntry(width=50)
graphInB.grid(row=0, column=4)
but = CTtk.CTkButton(text="Рассчитать", fg_color="black", width=50, command=self.calBut)
but.grid(row=0, column=5, padx=10)
def calBut(self):
if len(graphInA.get()) > 0 and len(graphInB.get()) > 0:
try:
float(graphInA.get())
float(graphInB.get())
except TypeError:
messagebox.showinfo("Ошибка", "Значение не число")
else: return 0
else:
messagebox.showinfo("Ошибка", "Введите значение")
app = App()
app.mainloop()
I wanted to use function parameters, but I couldn't figure it out.
A:
You have two options. You can bind the variables you want as instance variables on self (i.e. the application object)
self.graphInA = CTtk.CTkEntry(width=50)
# Then later ...
if len(self.graphInA.get()) > 0:
...
or you can write a lambda that explicitly closes around the variables you want.
def calBut(self, graphInA, graphInB):
...
# Then, to bind the command ...
but = CTtk.CTkButton(
text="Рассчитать",
fg_color="black",
width=50,
command=lambda: self.calBut(graphInA, graphInB),
)
|
OOP Tkinter how to pass a value to a function
|
I'm rewriting my program in OOP and I'm faced with the problem that I can't access graphInA and graphInB in the calBut function. How can I implement this?
import customtkinter as CTtk
from tkinter import *
from tkinter import messagebox
from tkinter.ttk import Style
class App(CTtk.CTk):
def __init__(self):
super().__init__()
self.initUI()
self.checkNaN()
def initUI(self):
self.title("График функции")
CTtk.set_appearance_mode("dark")
CTtk.CTkLabel(text="Введите пределы интегрирования").grid(row=0,column=0)
CTtk.CTkLabel(text="до", width=50).grid(row=0,column=1)
CTtk.CTkLabel(text="и от", width=50).grid(row=0,column=3)
graphInA = CTtk.CTkEntry(width=50)
graphInA.grid(row=0, column=2)
graphInB = CTtk.CTkEntry(width=50)
graphInB.grid(row=0, column=4)
but = CTtk.CTkButton(text="Рассчитать", fg_color="black", width=50, command=self.calBut)
but.grid(row=0, column=5, padx=10)
def calBut(self):
if len(graphInA.get()) > 0 and len(graphInB.get()) > 0:
try:
float(graphInA.get())
float(graphInB.get())
except TypeError:
messagebox.showinfo("Ошибка", "Значение не число")
else: return 0
else:
messagebox.showinfo("Ошибка", "Введите значение")
app = App()
app.mainloop()
I wanted to use function parameters, but I couldn't figure it out.
|
[
"You have two options. You can bind the variables you want as instance variables on self (i.e. the application object)\nself.graphInA = CTtk.CTkEntry(width=50)\n# Then later ...\nif len(self.graphInA.get()) > 0:\n ...\n\nor you can write a lambda that explicitly closes around the variables you want.\ndef calBut(self, graphInA, graphInB):\n ...\n\n# Then, to bind the command ...\nbut = CTtk.CTkButton(\n text=\"Рассчитать\",\n fg_color=\"black\",\n width=50,\n command=lambda: self.calBut(graphInA, graphInB),\n)\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"tkinter"
] |
stackoverflow_0074594753_python_tkinter.txt
|
Q:
Pandas: IndexingError: Unalignable boolean Series provided as indexer
I'm trying to run what I think is simple code to eliminate any columns with all NaNs, but can't get this to work (axis = 1 works just fine when eliminating rows):
import pandas as pd
import numpy as np
df = pd.DataFrame({'a':[1,2,np.nan,np.nan], 'b':[4,np.nan,6,np.nan], 'c':[np.nan, 8,9,np.nan], 'd':[np.nan,np.nan,np.nan,np.nan]})
df = df[df.notnull().any(axis = 0)]
print df
Full error:
raise IndexingError('Unalignable boolean Series provided as 'pandas.core.indexing.IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match
Expected output:
a b c
0 1.0 4.0 NaN
1 2.0 NaN 8.0
2 NaN 6.0 9.0
3 NaN NaN NaN
A:
You need loc, because you are filtering by columns:
print (df.notnull().any(axis = 0))
a True
b True
c True
d False
dtype: bool
df = df.loc[:, df.notnull().any(axis = 0)]
print (df)
a b c
0 1.0 4.0 NaN
1 2.0 NaN 8.0
2 NaN 6.0 9.0
3 NaN NaN NaN
Or filter columns and then select by []:
print (df.columns[df.notnull().any(axis = 0)])
Index(['a', 'b', 'c'], dtype='object')
df = df[df.columns[df.notnull().any(axis = 0)]]
print (df)
a b c
0 1.0 4.0 NaN
1 2.0 NaN 8.0
2 NaN 6.0 9.0
3 NaN NaN NaN
Or dropna with parameter how='all' to remove columns filled with NaNs only:
print (df.dropna(axis=1, how='all'))
a b c
0 1.0 4.0 NaN
1 2.0 NaN 8.0
2 NaN 6.0 9.0
3 NaN NaN NaN
A:
You can use dropna with axis=1 and thresh=1:
In[19]:
df.dropna(axis=1, thresh=1)
Out[19]:
a b c
0 1.0 4.0 NaN
1 2.0 NaN 8.0
2 NaN 6.0 9.0
3 NaN NaN NaN
This will drop any column which doesn't have at least 1 non-NaN value, which means any column with all NaNs will get dropped.
The reason what you tried failed is because the boolean mask:
In[20]:
df.notnull().any(axis = 0)
Out[20]:
a True
b True
c True
d False
dtype: bool
cannot be aligned on the index which is what is used by default, as this produces a boolean mask on the columns
A:
I came here because I tried to filter the 1st 2 letters like this:
filtered = df[(df.Name[0:2] != 'xx')]
The fix was:
filtered = df[(df.Name.str[0:2] != 'xx')]
A:
I was facing the same issue while using a function in fairlearn package. Resetting the index inplace worked for me.
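For reference, a minimal sketch of that fix, with mask standing in for whatever boolean Series is being used as the indexer:
df.reset_index(drop=True, inplace=True)
mask.reset_index(drop=True, inplace=True)
filtered = df[mask]  # the indexes now align, so no IndexingError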
|
Pandas: IndexingError: Unalignable boolean Series provided as indexer
|
I'm trying to run what I think is simple code to eliminate any columns with all NaNs, but can't get this to work (axis = 1 works just fine when eliminating rows):
import pandas as pd
import numpy as np
df = pd.DataFrame({'a':[1,2,np.nan,np.nan], 'b':[4,np.nan,6,np.nan], 'c':[np.nan, 8,9,np.nan], 'd':[np.nan,np.nan,np.nan,np.nan]})
df = df[df.notnull().any(axis = 0)]
print df
Full error:
raise IndexingError('Unalignable boolean Series provided as 'pandas.core.indexing.IndexingError: Unalignable boolean Series provided as indexer (index of the boolean Series and of the indexed object do not match
Expected output:
a b c
0 1.0 4.0 NaN
1 2.0 NaN 8.0
2 NaN 6.0 9.0
3 NaN NaN NaN
|
[
"You need loc, because filter by columns:\nprint (df.notnull().any(axis = 0))\na True\nb True\nc True\nd False\ndtype: bool\n\ndf = df.loc[:, df.notnull().any(axis = 0)]\nprint (df)\n\n a b c\n0 1.0 4.0 NaN\n1 2.0 NaN 8.0\n2 NaN 6.0 9.0\n3 NaN NaN NaN\n\nOr filter columns and then select by []:\nprint (df.columns[df.notnull().any(axis = 0)])\nIndex(['a', 'b', 'c'], dtype='object')\n\ndf = df[df.columns[df.notnull().any(axis = 0)]]\nprint (df)\n\n a b c\n0 1.0 4.0 NaN\n1 2.0 NaN 8.0\n2 NaN 6.0 9.0\n3 NaN NaN NaN\n\nOr dropna with parameter how='all' for remove all columns filled by NaNs only:\nprint (df.dropna(axis=1, how='all'))\n a b c\n0 1.0 4.0 NaN\n1 2.0 NaN 8.0\n2 NaN 6.0 9.0\n3 NaN NaN NaN\n\n",
"You can use dropna with axis=1 and thresh=1:\nIn[19]:\ndf.dropna(axis=1, thresh=1)\n\nOut[19]: \n a b c\n0 1.0 4.0 NaN\n1 2.0 NaN 8.0\n2 NaN 6.0 9.0\n3 NaN NaN NaN\n\nThis will drop any column which doesn't have at least 1 non-NaN value which will mean any column with all NaN will get dropped\nThe reason what you tried failed is because the boolean mask: \nIn[20]:\ndf.notnull().any(axis = 0)\n\nOut[20]: \na True\nb True\nc True\nd False\ndtype: bool\n\ncannot be aligned on the index which is what is used by default, as this produces a boolean mask on the columns\n",
"I came here because I tried to filter the 1st 2 letters like this:\nfiltered = df[(df.Name[0:2] != 'xx')] \n\nThe fix was:\nfiltered = df[(df.Name.str[0:2] != 'xx')]\n\n",
"I was facing the same issue while using a function in fairlearn package. Resetting the index inplace worked for me.\n"
] |
[
34,
5,
1,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0045352909_pandas_python.txt
|
Q:
Can I establish a reference variable that will then let me assign a new value?
I'm writing a script interpreter in Python using Sly. While figuring out how to best write assignment interpretation, I found myself unable to quite understand how to handle the left-hand side being different sorts of values. In the scripting language I'm using, the left-hand side could be a variable or a field on an object (possibly a few layers deep). So the following assignments in the scripting language would all be correct:
bob = 4.5
this.speed = 5.3
eachthing.child[prevRef+2].name = "William"
Currently, I have the regular variables stored in a dictionary with type and value. The other sort of "variable" involves object fields which are sometimes a simple reference to the current object that the script is attached to and sometimes a reference to a global object, and it's possible to navigate through a hierarchy of objects (for example, going to the hero, finding his second sword, referencing its magical effect, and getting its name). Thus, it's not a simple lookup in some cases.
Initially writing this for variables, it made sense to look up the variable, verify its existence (and fail with an exception if it doesn't exist), evaluate the right-hand side, check their types against each other, and then assign the value. When thinking through the fields, there are multiple steps to determine where the field exists (FWIW, no operations are allowed in the LHS that would change state) and its details, and it seems slightly wasteful to go through all of those steps to determine the existence of the field, and then do all of the RHS evaluation, and then have to go through the LHS steps to assign a new value.
Is there a way to establish a "reference" like a pointer where I can then use it to assign the new value?
After reading through this article about how variables work in Python, I was pretty certain that its method of pointing one variable at the other wouldn't allow me to make the assignment, and unfortunately, I was right. I poked around for various "by reference" pages on the web, but they were all discussing passing by ref into functions, not getting a reference to a variable/field.
Alternately, I'm probably looking at creating my own index where the values of the fields are stored in a dictionary such that I can directly access them with a reference value (maybe making that second sword's name accessible with an index of (hero, weapon7, effect, name) although that feels clunky).
A:
All of your assignment operations can be reduced to "set selector S of container C to value V". While Python doesn't let you create a reference value C[S], it certainly lets you pass around the tuple (C, S); that works because Python containers (dictionaries, lists, etc.) are effectively reference values.
(In the case of bob = 4.5, the container is whatever you use to hold global variables and the selector might be the global's index or it might be the name, depending on how you handle globals. But it will certainly be some kind of value.)
You could use a triple instead, where the first value is a function to call with C and S as arguments. That might turn out to be easier. Or not. You haven't revealed much of your approach so it's difficult to provide an answer which isn't just generalities.
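As a concrete illustration, a minimal sketch (the helper names here are hypothetical, not part of the question's interpreter):
# A "reference" is just the pair (container, selector); this works because
# Python dicts and lists are themselves reference values.
def read_ref(ref):
    container, selector = ref
    return container[selector]

def write_ref(ref, value):
    container, selector = ref
    container[selector] = value

globals_table = {'bob': None}   # whatever structure holds global variables
ref = (globals_table, 'bob')    # resolve the LHS once...
write_ref(ref, 4.5)             # ...evaluate the RHS, then assign through the reference
assert read_ref(ref) == 4.5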
|
Can I establish a reference variable that will then let me assign a new value?
|
I'm writing a script interpreter in Python using Sly. While figuring out how to best write assignment interpretation, I found myself unable to quite understand how to handle the left-hand side being different sorts of values. In the scripting language I'm using, the left-hand side could be a variable or a field on an object (possibly a few layers deep). So the following assignments in the scripting language would all be correct:
bob = 4.5
this.speed = 5.3
eachthing.child[prevRef+2].name = "William"
Currently, I have the regular variables stored in a dictionary with type and value. The other sort of "variable" involves object fields which are sometimes a simple reference to the current object that the script is attached to and sometimes a reference to a global object, and it's possible to navigate through a hierarchy of objects (for example, going to the hero, finding his second sword, referencing its magical effect, and getting its name). Thus, it's not a simple lookup in some cases.
Initially writing this for variables, it made sense to look up the variable, verify its existence (and fail with an exception if it doesn't exist), evaluate the right-hand side, check their types against each other, and then assign the value. When thinking through the fields, there are multiple steps to determine where the field exists (FWIW, no operations are allowed in the LHS that would change state) and its details, and it seems slightly wasteful to go through all of those steps to determine the existence of the field, and then do all of the RHS evaluation, and then have to go through the LHS steps to assign a new value.
Is there a way to establish a "reference" like a pointer where I can then use it to assign the new value?
After reading through this article about how variables work in Python, I was pretty certain that its method of pointing one variable at the other wouldn't allow me to make the assignment, and unfortunately, I was right. I poked around for various "by reference" pages on the web, but they were all discussing passing by ref into functions, not getting a reference to a variable/field.
Alternately, I'm probably looking at creating my own index where the values of the fields are stored in a dictionary such that I can directly access them with a reference value (maybe making that second sword's name accessible with an index of (hero, weapon7, effect, name) although that feels clunky).
|
[
"All of your assignment operations can be reduced to \"set selector S of container C to value V\". While Python doesn't let you create a reference value C[S], it certainly lets you pass around the tuple (C, S); that works because Python containers dictionaries, lists, etc.) are effectively reference values.\n(In the case of bob = 4.5, the container is whatever you use to hold global variables and the selector might be the global's index or it might be the name, depending on how you handle globals. But it will certainly be some kind of value.)\nYou could use a triple instead, where the first value is a function to call with C and S as arguments. That might turn out to be easier. Or not. You haven't revealed much of your approach so it's difficult to provide an answer which isn't just generalities.\n"
] |
[
1
] |
[] |
[] |
[
"interpreter",
"python",
"sly"
] |
stackoverflow_0074594595_interpreter_python_sly.txt
|
Q:
Converting a pandas dataframe into a torch Dataset
I have a pandas dataframe with the following structure:
path | sentence | speech | input_values | labels
audio1.mp3 | This is the first audio | [[0.0, 0.0, 0.0, ..., 0.0, 0.0]] | [[0.00005, ..., 0.0003]] | [23, 4, 6, 11, ..., 12]
audio2.mp3 | This is the second audio | [[0.0, 0.0, 0.0, ..., 0.0, 0.0]] | [[0.000044, ..., 0.00033]] | [23, 4, 6, 11, ..., 12]
The sentence is the transcription of the audio, the speech column is the array representation of the audio, and labels is the number representation of each letter of the sentence based on a defined vocab list.
I'm fine-tuning a pre-trained ASR model, but when I try to pass the pandas df to the Trainer class and call .train() on it, it errors out (KeyError: 0). From the documentation, it only accepts torch.utils.data.Dataset or torch.utils.data.IterableDataset as train_/eval_dataset arguments. This is how my Trainer definition looks like:
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=ds_train,
eval_dataset=ds_test,
tokenizer=processor.feature_extractor
)
ds_train and ds_test are my training and validation dataframes respectively. I just split my main dataframe (80/20). How can I convert my pandas dataframes into the required Dataset type? I tried tailoring the data_collator class definition to a pandas df but that predictably didn't work either. I'm assuming the train and eval datasets both call the data_collator class when you call .train() on the trainer?
EDIT: I tried using Dataset.from_pandas(ds_train) but it couldn't convert it because I had columns with two-dimensional arrays and it can apparently only convert one-dimensional array values.
A:
It depends on how you will use your labels column.
I don't know how your trainer uses these data, but I suggest defining your own Dataset class (https://pytorch.org/tutorials/beginner/basics/data_tutorial.html#creating-a-custom-dataset-for-your-files)
from torch.utils.data import Dataset  # import needed for the base class

class CustomDataset(Dataset):
def __init__(self, dataframe):
self.path = dataframe["path"]
self.sentence = dataframe["sentence"]
self.speech = dataframe["speech"]
self.input_values = dataframe["input_values"]
self.labels = dataframe["labels"]
def __len__(self):
        return len(self.labels)  # the original snippet had len(self.text), but no text attribute is defined
def __getitem__(self, idx):
path = self.path.iloc[idx]
sentence = self.sentence.iloc[idx]
speech = self.speech.iloc[idx]
        input_values = self.input_values.iloc[idx]
labels = self.labels.iloc[idx]
return path, sentence, speech, input_values, labels
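Once defined, the class wraps the split dataframes before they go to the Trainer; a minimal usage sketch, assuming ds_train and ds_test are the dataframes from the question:
train_dataset = CustomDataset(ds_train)
eval_dataset = CustomDataset(ds_test)
path, sentence, speech, input_values, labels = train_dataset[0]  # indexable like any torch Dataset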
|
Converting a pandas dataframe into a torch Dataset
|
I have a pandas dataframe with the following structure:
path | sentence | speech | input_values | labels
audio1.mp3 | This is the first audio | [[0.0, 0.0, 0.0, ..., 0.0, 0.0]] | [[0.00005, ..., 0.0003]] | [23, 4, 6, 11, ..., 12]
audio2.mp3 | This is the second audio | [[0.0, 0.0, 0.0, ..., 0.0, 0.0]] | [[0.000044, ..., 0.00033]] | [23, 4, 6, 11, ..., 12]
The sentence is the transcription of the audio, the speech column is the array representation of the audio, and labels is the number representation of each letter of the sentence based on a defined vocab list.
I'm fine-tuning a pre-trained ASR model, but when I try to pass the pandas df to the Trainer class and call .train() on it, it errors out (KeyError: 0). From the documentation, it only accepts torch.utils.data.Dataset or torch.utils.data.IterableDataset as train_/eval_dataset arguments. This is how my Trainer definition looks like:
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=ds_train,
eval_dataset=ds_test,
tokenizer=processor.feature_extractor
)
ds_train and ds_test are my training and validation dataframes respectively. I just split my main dataframe (80/20). How can I convert my pandas dataframes into the required Dataset type? I tried tailoring the data_collator class definition to a pandas df but that predictably didn't work either. I'm assuming the train and eval datasets both call the data_collator class when you call .train() on the trainer?
EDIT: I tried using Dataset.from_pandas(ds_train) but it couldn't convert it because I had columns with two-dimensional arrays and it can apparently only convert one-dimensional array values.
|
[
"Depends on how you will use your labels column.\nI don't know how your your trainer use these data but I suggest to define your own Dataset class (https://pytorch.org/tutorials/beginner/basics/data_tutorial.html#creating-a-custom-dataset-for-your-files)\nclass CustomDataset(Dataset):\n def __init__(self, dataframe):\n self.path = dataframe[\"path\"]\n self.sentence = dataframe[\"sentence\"]\n self.speech = dataframe[\"speech\"]\n self.input_values = dataframe[\"input_values\"]\n self.labels = dataframe[\"labels\"]\n\n def __len__(self):\n return len(self.text)\n\n def __getitem__(self, idx):\n path = self.path.iloc[idx]\n sentence = self.sentence.iloc[idx]\n speech = self.speech.iloc[idx]\n input_values = self.input_values .iloc[idx]\n labels = self.labels.iloc[idx]\n return path, sentence, speech, input_values, labels\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python",
"pytorch",
"torchaudio",
"transformer_model"
] |
stackoverflow_0069724009_pandas_python_pytorch_torchaudio_transformer_model.txt
|
Q:
How to save data to user models when using a resume parser in django
I am working on a website whereby users will be uploading resumes and a resume parser script will be run to get skills and save the skills to the profile of the user. I have managed to obtain the skills before saving the form, but I can't save the extracted skills now. Anyone who can help with this issue will be highly appreciated.
Here is my views file
def homepage(request):
if request.method == 'POST':
# Resume.objects.all().delete()
file_form = UploadResumeModelForm(request.POST, request.FILES, instance=request.user.profile)
files = request.FILES.getlist('resume')
resumes_data = []
if file_form.is_valid():
for file in files:
try:
# saving the file
# resume = Profile(resume=file)
resume = file_form.cleaned_data['resume']
# resume.save()
# resume = profile_form.cleaned_data['resume']
# print(file.temporary_file_path())
# extracting resume entities
# parser = ResumeParser(os.path.join(settings.MEDIA_ROOT, resume.resume.name))
parser = ResumeParser(file.temporary_file_path())
# extracting resume entities
# parser = ResumeParser(os.path.join(settings.MEDIA_ROOT, resume.resume.name))
data = parser.get_extracted_data()
resumes_data.append(data)
resume.name = data.get('name')
resume.email = data.get('email')
resume.mobile_number = data.get('mobile_number')
if data.get('degree') is not None:
resume.education = ', '.join(data.get('degree'))
else:
resume.education = None
resume.company_names = data.get('company_names')
resume.college_name = data.get('college_name')
resume.designation = data.get('designation')
resume.total_experience = data.get('total_experience')
if data.get('skills') is not None:
resume.skills = ', '.join(data.get('skills'))
else:
resume.skills = None
if data.get('experience') is not None:
resume.experience = ', '.join(data.get('experience'))
else:
resume.experience = None
# import pdb; pdb.set_trace()
resume.save()
except IntegrityError:
messages.warning(request, 'Duplicate resume found:', file.name)
return redirect('homepage')
resumes = Profile.objects.all()
messages.success(request, 'Resumes uploaded!')
context = {
'resumes': resumes,
}
file_form.save()
return render(request, 'authentication/resume.html', context)
else:
form = UploadResumeModelForm()
return render(request, 'authentication/resume.html', {'form': form})
And here is my models:
# Extending User Model Using a One-To-One Link
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
avatar = models.ImageField(default='default.jpg', upload_to='profile_images')
bio = models.TextField()
resume = models.FileField('Upload Resumes', upload_to='resumes/', null=True, blank=True)
name = models.CharField('Name', max_length=255, null=True, blank=True)
email = models.CharField('Email', max_length=255, null=True, blank=True)
mobile_number = models.CharField('Mobile Number', max_length=255, null=True, blank=True)
education = models.CharField('Education', max_length=255, null=True, blank=True)
skills = models.CharField('Skills', max_length=1000, null=True, blank=True)
company_name = models.CharField('Company Name', max_length=1000, null=True, blank=True)
college_name = models.CharField('College Name', max_length=1000, null=True, blank=True)
designation = models.CharField('Designation', max_length=1000, null=True, blank=True)
experience = models.CharField('Experience', max_length=1000, null=True, blank=True)
total_experience = models.CharField('Total Experience (in Years)', max_length=1000, null=True, blank=True)
def __str__(self):
return self.user.username
I have tried following it step by step with pdb, but when it comes to saving I get an error. Here are some of the errors I am getting in pdb:
A:
The cause of your error is that when you cycle through the files submitted in your resume form, you are trying to save the resume field (remember, resume = file_form.cleaned_data['resume']). Presumably you want to be saving a Profile object.
In all those lines where you add things to resume from your parsed resume file eg
resume.name = data.get('name')
just replace them with
file_form.instance.name = data.get('name')
and then
file_form.save()
at the end.
Also, it seems like you don't need the user to be able to submit multiple resumes. You probably don't need to loop through each file in resumes either.
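A minimal sketch of the corrected loop, assuming a single uploaded resume (the field names come from the Profile model above; everything else here is illustrative, not the only way to do it):
if file_form.is_valid():
    file = request.FILES['resume']
    parser = ResumeParser(file.temporary_file_path())
    data = parser.get_extracted_data()
    # write the parsed fields onto the Profile instance bound to the form
    file_form.instance.name = data.get('name')
    file_form.instance.email = data.get('email')
    file_form.instance.mobile_number = data.get('mobile_number')
    skills = data.get('skills')
    file_form.instance.skills = ', '.join(skills) if skills else None
    file_form.save()
    messages.success(request, 'Resume uploaded!')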
|
How to save data to user models when using a resume parser in django
|
I am working on a website whereby users will be uploading resumes and a resume parser script will be run to get skills and save the skills to the profile of the user. I have managed to obtain the skills before saving the form, but I can't save the extracted skills now. Any help with this issue will be highly appreciated.
Here is my views file
def homepage(request):
if request.method == 'POST':
# Resume.objects.all().delete()
file_form = UploadResumeModelForm(request.POST, request.FILES, instance=request.user.profile)
files = request.FILES.getlist('resume')
resumes_data = []
if file_form.is_valid():
for file in files:
try:
# saving the file
# resume = Profile(resume=file)
resume = file_form.cleaned_data['resume']
# resume.save()
# resume = profile_form.cleaned_data['resume']
# print(file.temporary_file_path())
# extracting resume entities
# parser = ResumeParser(os.path.join(settings.MEDIA_ROOT, resume.resume.name))
parser = ResumeParser(file.temporary_file_path())
# extracting resume entities
# parser = ResumeParser(os.path.join(settings.MEDIA_ROOT, resume.resume.name))
data = parser.get_extracted_data()
resumes_data.append(data)
resume.name = data.get('name')
resume.email = data.get('email')
resume.mobile_number = data.get('mobile_number')
if data.get('degree') is not None:
resume.education = ', '.join(data.get('degree'))
else:
resume.education = None
resume.company_names = data.get('company_names')
resume.college_name = data.get('college_name')
resume.designation = data.get('designation')
resume.total_experience = data.get('total_experience')
if data.get('skills') is not None:
resume.skills = ', '.join(data.get('skills'))
else:
resume.skills = None
if data.get('experience') is not None:
resume.experience = ', '.join(data.get('experience'))
else:
resume.experience = None
# import pdb; pdb.set_trace()
resume.save()
except IntegrityError:
messages.warning(request, 'Duplicate resume found:', file.name)
return redirect('homepage')
resumes = Profile.objects.all()
messages.success(request, 'Resumes uploaded!')
context = {
'resumes': resumes,
}
file_form.save()
return render(request, 'authentication/resume.html', context)
else:
form = UploadResumeModelForm()
return render(request, 'authentication/resume.html', {'form': form})
And here is my models:
# Extending User Model Using a One-To-One Link
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
avatar = models.ImageField(default='default.jpg', upload_to='profile_images')
bio = models.TextField()
resume = models.FileField('Upload Resumes', upload_to='resumes/', null=True, blank=True)
name = models.CharField('Name', max_length=255, null=True, blank=True)
email = models.CharField('Email', max_length=255, null=True, blank=True)
mobile_number = models.CharField('Mobile Number', max_length=255, null=True, blank=True)
education = models.CharField('Education', max_length=255, null=True, blank=True)
skills = models.CharField('Skills', max_length=1000, null=True, blank=True)
company_name = models.CharField('Company Name', max_length=1000, null=True, blank=True)
college_name = models.CharField('College Name', max_length=1000, null=True, blank=True)
designation = models.CharField('Designation', max_length=1000, null=True, blank=True)
experience = models.CharField('Experience', max_length=1000, null=True, blank=True)
total_experience = models.CharField('Total Experience (in Years)', max_length=1000, null=True, blank=True)
def __str__(self):
return self.user.username
I have tried following it step by step with pdb, but when it comes to saving I get an error. Here are some of the errors I am getting in pdb:
|
[
"The cause of your error is when you cycle through the files submitted in your resume form, you are trying to save the resume field (remember, resume = file_form.cleaned_data['resume'] ). Presumably you want to be saving a Profile object\nIn all those lines where you add things to resume from your parsed resume file eg\n resume.name = data.get('name')\n\njust replace them with\nfile_form.instance.name = data.get('name')\n\nand then\nfile_form.save() \n\nat the end.\nAlso, it seems like you don't need the user to be able to submit multiple resumes. You probably don't need to loop through each file in resumes either.\n"
] |
[
1
] |
[] |
[] |
[
"django",
"python",
"temporary_files"
] |
stackoverflow_0074594399_django_python_temporary_files.txt
|
Q:
Python error: the following arguments are required :
I am not familiar with Python, trying to build some DNN. So when I tried to parse some arguments I got this error in main.
usage: main.py [-h] [-j N] [--resume PATH] [--epochs N] [--start-epoch N] [-b N] [--lr LR]
[--weight-decay W] [-e] [--print-freq N]
DIR
main.py: error: the following arguments are required: DIR
Here is some part of the code:
# Parse arguments and prepare program
parser = argparse.ArgumentParser(description='Training and Using ColorNet')
parser.add_argument('data', metavar='DIR', help='path to dataset')
parser.add_argument('-j', '--workers', default=0, type=int, metavar='N', help='number of data loading workers (default: 0)')
parser.add_argument('--resume', default='', type=str, metavar='PATH', help='path to .pth file checkpoint (default: none)')
parser.add_argument('--epochs', default=50, type=int, metavar='N', help='number of total epochs to run')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N', help='manual epoch number (overridden if loading from checkpoint)')
parser.add_argument('-b', '--batch-size', default=16, type=int, metavar='N', help='size of mini-batch (default: 16)')
parser.add_argument('--lr', '--learning-rate', default=0.1, type=float, metavar='LR', help='learning rate at start of training')
parser.add_argument('--weight-decay', '--wd', default=1e-10, type=float, metavar='W', help='weight decay (default: 1e-4)')
parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true', help='use this flag to validate without training')
parser.add_argument('--print-freq', '-p', default=10, type=int, metavar='N', help='print frequency (default: 10)')
# Current best losses
best_losses = 1000.0
use_gpu = torch.cuda.is_available()
def main():
global args, best_losses, use_gpu
args = parser.parse_args()
print('Arguments: {}'.format(args))
I read some comments to change
parser.parse_args()
to
parser.parse_args(args)
but it didn't work :)
A:
The data argument (shown as DIR in the usage message) is positional and has no default value, so you must supply it on the command line when running the program. It is the path to your dataset, as its help string says.
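For example, assuming the dataset lives at ./data (a hypothetical path), the positional argument goes right after the script name, optionally mixed with the flags:
$ python main.py ./data
$ python main.py ./data --epochs 100 -b 32
Alternatively, while experimenting you could make it optional with a default, e.g. parser.add_argument('data', metavar='DIR', nargs='?', default='./data', help='path to dataset'), so the script also runs without the argument.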
|
Python error: the following arguments are required :
|
I am not familiar with Python, trying to build some DNN. So when I tried to parse some arguments I got this error in main.
usage: main.py [-h] [-j N] [--resume PATH] [--epochs N] [--start-epoch N] [-b N] [--lr LR]
[--weight-decay W] [-e] [--print-freq N]
DIR
main.py: error: the following arguments are required: DIR
Here is some part of the code:
# Parse arguments and prepare program
parser = argparse.ArgumentParser(description='Training and Using ColorNet')
parser.add_argument('data', metavar='DIR', help='path to dataset')
parser.add_argument('-j', '--workers', default=0, type=int, metavar='N', help='number of data loading workers (default: 0)')
parser.add_argument('--resume', default='', type=str, metavar='PATH', help='path to .pth file checkpoint (default: none)')
parser.add_argument('--epochs', default=50, type=int, metavar='N', help='number of total epochs to run')
parser.add_argument('--start-epoch', default=0, type=int, metavar='N', help='manual epoch number (overridden if loading from checkpoint)')
parser.add_argument('-b', '--batch-size', default=16, type=int, metavar='N', help='size of mini-batch (default: 16)')
parser.add_argument('--lr', '--learning-rate', default=0.1, type=float, metavar='LR', help='learning rate at start of training')
parser.add_argument('--weight-decay', '--wd', default=1e-10, type=float, metavar='W', help='weight decay (default: 1e-4)')
parser.add_argument('-e', '--evaluate', dest='evaluate', action='store_true', help='use this flag to validate without training')
parser.add_argument('--print-freq', '-p', default=10, type=int, metavar='N', help='print frequency (default: 10)')
# Current best losses
best_losses = 1000.0
use_gpu = torch.cuda.is_available()
def main():
global args, best_losses, use_gpu
args = parser.parse_args()
print('Arguments: {}'.format(args))
I read some comments to change
parser.parse_args()
to
parser.parse_args(args)
but it didn't work :)
|
[
"As DIR doesn't have a default value, you need to supply one when running the program. The easiest way to do this is via a command line interface. Consult the documentation of the library you are using for further hints on that.\n"
] |
[
0
] |
[] |
[] |
[
"argparse",
"conv_neural_network",
"deep_learning",
"python"
] |
stackoverflow_0074594831_argparse_conv_neural_network_deep_learning_python.txt
|
Q:
How to extract element from HTML code in Python
I'm trying to webscrape multiple webpages of similar HTML code. I can already get the HTML of each page and I can manually find the part of the code's string where the information I need is placed - I just don't know how to properly extract it. I believe my problem might be solved with REGEX, actually, but I don't know how.
I'm using Python 3
This is how I extract the page's HTML code:
import requests
resp = requests.get("https://statusinvest.com.br/fundos-imobiliarios/knri11",headers={'User-Agent': 'Mozilla/5.0'})
from bs4 import BeautifulSoup
soup = BeautifulSoup(resp.content, features="html.parser")
Below is the string of the HTML code ( code -> str(soup) ). I want to extract the list between those two pink brackets. This block is always after the line between blue parenthesis (the text in green is different at each page)
part of page's HTML code I want to extract
A:
You can use beautifulsoup to find the correct tag and json module to parse the values:
import json
import requests
from bs4 import BeautifulSoup
resp = requests.get(
"https://statusinvest.com.br/fundos-imobiliarios/knri11",
headers={"User-Agent": "Mozilla/5.0"},
)
soup = BeautifulSoup(resp.content, "html.parser")
data = json.loads(soup.select_one("#results")["value"])
print(data)
Prints:
[
{
"y": 0,
"m": 0,
"d": 0,
"ad": None,
"ed": "31/10/2022",
"pd": "16/11/2022",
"et": "Rendimento",
"etd": "Rendimento",
"v": 0.91,
"ov": None,
"sv": "0,91000000",
"sov": "-",
"adj": False,
},
{
"y": 0,
"m": 0,
"d": 0,
"ad": None,
"ed": "30/09/2022",
"pd": "17/10/2022",
"et": "Rendimento",
"etd": "Rendimento",
"v": 0.91,
"ov": None,
"sv": "0,91000000",
"sov": "-",
"adj": False,
},
...and so on.
A:
import json
import requests
resp = requests.get("https://statusinvest.com.br/fundos-imobiliarios/knri11", headers={'User-Agent': 'Mozilla/5.0'})
from bs4 import BeautifulSoup
soup = BeautifulSoup(resp.content, features="html.parser")
data = json.loads(soup.find("input", {"id": "results"}).get("value"))
print(data)
To get the first value:
print(data[0]["y"])
|
How to extract element from HTML code in Python
|
I'm trying to webscrape multiple webpages of similar HTML code. I can already get the HTML of each page and I can manually find the part of the code's string where the information I need is placed - I just don't know how to properly extract it. I believe my problem might be solved with REGEX, actually, but I don't know how.
I'm using Python 3
This is how I extract the page's HTML code:
import requests
resp = requests.get("https://statusinvest.com.br/fundos-imobiliarios/knri11",headers={'User-Agent': 'Mozilla/5.0'})
from bs4 import BeautifulSoup
soup = BeautifulSoup(resp.content, features="html.parser")
Below is the string of the HTML code ( code -> str(soup) ). I want to extract the list between those two pink brackets. This block is always after the line between blue parenthesis (the text in green is different at each page)
part of page's HTML code I want to extract
|
[
"You can use beautifulsoup to find the correct tag and json module to parse the values:\nimport json\nimport requests\nfrom bs4 import BeautifulSoup\n\nresp = requests.get(\n \"https://statusinvest.com.br/fundos-imobiliarios/knri11\",\n headers={\"User-Agent\": \"Mozilla/5.0\"},\n)\nsoup = BeautifulSoup(resp.content, \"html.parser\")\n\ndata = json.loads(soup.select_one(\"#results\")[\"value\"])\n\nprint(data)\n\nPrints:\n[\n {\n \"y\": 0,\n \"m\": 0,\n \"d\": 0,\n \"ad\": None,\n \"ed\": \"31/10/2022\",\n \"pd\": \"16/11/2022\",\n \"et\": \"Rendimento\",\n \"etd\": \"Rendimento\",\n \"v\": 0.91,\n \"ov\": None,\n \"sv\": \"0,91000000\",\n \"sov\": \"-\",\n \"adj\": False,\n },\n {\n \"y\": 0,\n \"m\": 0,\n \"d\": 0,\n \"ad\": None,\n \"ed\": \"30/09/2022\",\n \"pd\": \"17/10/2022\",\n \"et\": \"Rendimento\",\n \"etd\": \"Rendimento\",\n \"v\": 0.91,\n \"ov\": None,\n \"sv\": \"0,91000000\",\n \"sov\": \"-\",\n \"adj\": False,\n },\n\n\n...and so on.\n\n",
"import json\nimport requests\n\nresp = requests.get(\"https://statusinvest.com.br/fundos-imobiliarios/knri11\", headers={'User-Agent': 'Mozilla/5.0'})\n\nfrom bs4 import BeautifulSoup\n\nsoup = BeautifulSoup(resp.content, features=\"html.parser\")\ndata = json.loads(soup.find(\"input\", {\"id\": \"results\"}).get(\"value\")\nprint(data)\n\nTo get the first value:\nprint(data[0][\"y\"])\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"html",
"python",
"web_scraping"
] |
stackoverflow_0074594806_html_python_web_scraping.txt
|
Q:
Expand sin(acot(...)) in sympy?
Is there a way to expand the trigonometric function of an inverse trigonometric function? I have a long expression f that contains many such subexpressions, e.g.:
sin(0.5 acot(x))**2
cos(0.5 acot(x))**2
sin(acot(x))
These expressions can be rewritten without trigonometric functions, e.g.:
1/2 - 1/2 * x / sp.sqrt(x**2 + 1)
I've tried expand_trig and trigsimp to no avail. Also, I can't find a way to directly substitute the analytical expressions in.
Any suggestions?
A:
There are various ways to do this. Some examples:
In [1]: sin(acot(x))
Out[1]:
1
───────────────
________
╱ 1
x⋅ ╱ 1 + ──
╱ 2
╲╱ x
In [2]: sin(acot(x)/2)**2
Out[2]:
2⎛acot(x)⎞
sin ⎜───────⎟
⎝ 2 ⎠
In [3]: e = sin(acot(x)/2)**2
In [4]: e.rewrite(log)
Out[4]:
⎛ ⎛ ⎛ ⅈ⎞ ⎛ ⅈ⎞⎞⎞
⎜ⅈ⋅⎜log⎜1 - ─⎟ - log⎜1 + ─⎟⎟⎟
2⎜ ⎝ ⎝ x⎠ ⎝ x⎠⎠⎟
sin ⎜───────────────────────────⎟
⎝ 4 ⎠
In [5]: e.rewrite(log).rewrite(exp)
Out[5]:
⎛ ⎛ _______ _______⎞ ⎞
⎜ ⎜ ╱ ⅈ ╱ ⅈ ⎟ ⎟
⎜ ⎜ 4 ╱ 1 - ─ 4 ╱ 1 + ─ ⎟ ⎟
⎜ ⎜ ╲╱ x ╲╱ x ⎟ ⎟
⎜-ⅈ⋅⎜- ─────────── + ───────────⎟ ⎟
⎜ ⎜ _______ _______⎟ ⎟
⎜ ⎜ ╱ ⅈ ╱ ⅈ ⎟ ⎟
⎜ ⎜ 4 ╱ 1 + ─ 4 ╱ 1 - ─ ⎟ ⎟
⎜ ⎝ ╲╱ x ╲╱ x ⎠ ⎟
2⋅log⎜─────────────────────────────────⎟
⎝ 2 ⎠
ℯ
In [6]: e.rewrite(log).rewrite(exp).expand()
Out[6]:
_______ _______
╱ ⅈ ╱ ⅈ
╱ 1 - ─ ╱ 1 + ─
╲╱ x 1 ╲╱ x
- ───────────── + ─ - ─────────────
_______ 2 _______
╱ ⅈ ╱ ⅈ
4⋅ ╱ 1 + ─ 4⋅ ╱ 1 - ─
╲╱ x ╲╱ x
In [7]: simplify(_)
Out[7]:
1 1
─ - ─────────────────────────
2 _______ _______
╱ ⅈ ╱ ⅈ
2⋅ ╱ 1 - ─ ⋅ ╱ 1 + ─
╲╱ x ╲╱ x
Some cases will work differently if you declare x as real or positive e.g. x = symbols('x', real=True).
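For instance, a minimal sketch that declares x positive and applies the same rewrite chain to obtain a radical form (the exact output shape may differ between SymPy versions):
from sympy import symbols, sin, acot, log, exp, simplify

x = symbols('x', positive=True)
e = sin(acot(x)/2)**2
print(simplify(e.rewrite(log).rewrite(exp).expand()))
# expected, up to equivalent forms: 1/2 - x/(2*sqrt(x**2 + 1))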
|
Expand sin(acot(...)) in sympy?
|
Is there a way to expand the trigonometric function of an inverse trigonometric function? I have a long expression f that contains many such subexpressions, e.g.:
sin(0.5 acot(x))**2
cos(0.5 acot(x))**2
sin(acot(x))
These expressions can be rewritten without trigonometric functions, e.g.:
1/2 - 1/2 * x / sp.sqrt(x**2 + 1)
I've tried expand_trig and trigsimp to no avail. Also, I can't find a way to directly substitute the analytical expressions in.
Any suggestions?
|
[
"There are various ways to do this. Some examples:\nIn [1]: sin(acot(x))\nOut[1]: \n 1 \n───────────────\n ________\n ╱ 1 \nx⋅ ╱ 1 + ── \n ╱ 2 \n ╲╱ x \n\nIn [2]: sin(acot(x)/2)**2\nOut[2]: \n 2⎛acot(x)⎞\nsin ⎜───────⎟\n ⎝ 2 ⎠\n\nIn [3]: e = sin(acot(x)/2)**2\n\nIn [4]: e.rewrite(log)\nOut[4]: \n ⎛ ⎛ ⎛ ⅈ⎞ ⎛ ⅈ⎞⎞⎞\n ⎜ⅈ⋅⎜log⎜1 - ─⎟ - log⎜1 + ─⎟⎟⎟\n 2⎜ ⎝ ⎝ x⎠ ⎝ x⎠⎠⎟\nsin ⎜───────────────────────────⎟\n ⎝ 4 ⎠\n\nIn [5]: e.rewrite(log).rewrite(exp)\nOut[5]: \n ⎛ ⎛ _______ _______⎞ ⎞\n ⎜ ⎜ ╱ ⅈ ╱ ⅈ ⎟ ⎟\n ⎜ ⎜ 4 ╱ 1 - ─ 4 ╱ 1 + ─ ⎟ ⎟\n ⎜ ⎜ ╲╱ x ╲╱ x ⎟ ⎟\n ⎜-ⅈ⋅⎜- ─────────── + ───────────⎟ ⎟\n ⎜ ⎜ _______ _______⎟ ⎟\n ⎜ ⎜ ╱ ⅈ ╱ ⅈ ⎟ ⎟\n ⎜ ⎜ 4 ╱ 1 + ─ 4 ╱ 1 - ─ ⎟ ⎟\n ⎜ ⎝ ╲╱ x ╲╱ x ⎠ ⎟\n 2⋅log⎜─────────────────────────────────⎟\n ⎝ 2 ⎠\nℯ \n\nIn [6]: e.rewrite(log).rewrite(exp).expand()\nOut[6]: \n _______ _______ \n ╱ ⅈ ╱ ⅈ \n ╱ 1 - ─ ╱ 1 + ─ \n ╲╱ x 1 ╲╱ x \n- ───────────── + ─ - ─────────────\n _______ 2 _______\n ╱ ⅈ ╱ ⅈ \n 4⋅ ╱ 1 + ─ 4⋅ ╱ 1 - ─ \n ╲╱ x ╲╱ x \n\nIn [7]: simplify(_)\nOut[7]: \n1 1 \n─ - ─────────────────────────\n2 _______ _______\n ╱ ⅈ ╱ ⅈ \n 2⋅ ╱ 1 - ─ ⋅ ╱ 1 + ─ \n ╲╱ x ╲╱ x \n\nSome cases will work differently if you declare x as real or positive e.g. x = symbols('x', real=True).\n"
] |
[
2
] |
[] |
[] |
[
"python",
"sympy",
"trigonometry"
] |
stackoverflow_0074594679_python_sympy_trigonometry.txt
|
Q:
Adding an XML element within an existing document with Python
Hello this is my first post, if something is not clear, please say so!
I have this xml file from which I have to extract all the names found between the square brackets of each transc tag (the one inside newsFrom) and then put them in a new tag called person under it. Obviously if there are two names I need two separate person tags with their respective names, as is done for newsTopic.
This is what I need
<newsFrom>
<from date="15/01/1649" dateUnsure="y">London</from>
<transc>Questo Parlamento generale Farfax [Thomas Fairfax, 3rd Lord Fairfax of Cameron] et suo consiglio dio et ordinato di pocessare il re [Charles I, King of England]</transc>
<person>Thomas Fairfax, 3rd Lord Fairfax of Cameron</person>
<person>Charles I, King of England</person>
<newsTopic>Military</newsTopic>
<wordCount>103</wordCount>
<position>1</position>
</newsFrom>
This is the XML file
<news xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="news.xsd">
<xmlCorpusDate>2022-10-16</xmlCorpusDate>
<xmlCorpusTime>15:17:52</xmlCorpusTime>
<newsDocument>
<docid>50992</docid>
<repository>Archivio di Stato di Firenze</repository>
<collection>Mediceo del Principato</collection>
<volume>4202</volume>
<newsHeader>
<hub>London</hub>
<date>15/01/1649</date>
<transc>Di Londra 15 gennaio 1648 ab Incarnatione</transc>
<newsFrom>
<from date="15/01/1649" dateUnsure="y">London</from>
<transc>Questo Parlamento generale Farfax [Thomas Fairfax, 3rd Lord Fairfax of Cameron] et suo consiglio dio et ordinato di pocessare il re [Charles I, King of England]</transc>
<newsTopic>Military</newsTopic>
<wordCount>103</wordCount>
<position>1</position>
</newsFrom>
<newsFrom>
<from date="15/01/1649" dateUnsure="y">Manchester</from>
<transc>Ieri è giunto Rossini [Cardinal Rossini] et suo figlio [Gianmarco Rossini]</transc>
<newsTopic>Politics</newsTopic>
<wordCount>53</wordCount>
<position>2</position>
</newsFrom>
<writtenPagesNo>5</writtenPagesNo>
</newsHeader>
</newsDocument>
<newsDocument>
<docid>50492</docid>
<repository>Archivio di Stato di Firenze</repository>
<collection>Mediceo del Principato</collection>
<volume>4202</volume>
<newsHeader>
<hub>London</hub>
<date>21/01/1649</date>
<transc>Di Londra 21 gennaio 1648 ab Incarnatione</transc>
<newsFrom>
<from date="21/01/1649" dateUnsure="y">London</from>
<transc>Il consiglio di guerra con la Camera [English Parliament]</transc>
<newsTopic>Government</newsTopic>
<newsTopic>Politics</newsTopic>
<wordCount>78</wordCount>
<position>1</position>
</newsFrom>
<newsFrom>
<from date="21/01/1649" dateUnsure="y">Manchester</from>
<transc>Si è data notizia [Marco Cioni] di cose di poco conto</transc>
<newsTopic>Politics</newsTopic>
<wordCount>144</wordCount>
<position>2</position>
</newsFrom>
<writtenPagesNo>5</writtenPagesNo>
</newsHeader>
</newsDocument>
</news>
As for the extraction of names, this was relatively easy, in fact I created the following code in python
import xml.etree.ElementTree as ET
import re
file = open("1649.xml")
tree=ET.parse('1649.xml')
root=tree.getroot()
for document in root.findall("newsDocument"):
names=document.find("./newsHeader/newsFrom/transc").text
people=re.findall("\[(.*?)\]",names)
The problem now arises in creating the new tags, assigning them names extracted from the text and making sure that each individual name corresponds to the exact text.
I've tried different ways and looked at the library guide, but I can't get it to work; the best I can do is produce a messy list at the head of the file.
Thanks to anyone who can help me
A:
In this case, it's easier to use lxml rather than ElementTree, because of lxml's better support for xpath.
So try this:
from lxml import etree
import re
tree = etree.parse('1649.xml')
root = tree.getroot()
#find all <transc> elements
trs = root.xpath(".//transc")
for t in trs:
#use regex to find the data between "[" and "]"
persons = re.findall('(?<=\[)([^]]+)(?=\])', t.text)
if len(persons)>0:
#EDIT
for person in set(persons):
#create a new element using f-strings
np = etree.fromstring(f"<person>{person}</person>")
#add the new element in the appropriate place
t.addnext(np)
#pretty print
etree.indent(root, space=' ', level=0)
print(etree.tostring(root).decode())
The output should be your expected output.
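If you want to stay with the standard library the question already imports, a rough equivalent with xml.etree.ElementTree is sketched below (it inserts the new <person> elements right after each <transc>; the output file name is made up):
import re
import xml.etree.ElementTree as ET

tree = ET.parse('1649.xml')
root = tree.getroot()
for news_from in root.iter('newsFrom'):
    transc = news_from.find('transc')
    if transc is None or transc.text is None:
        continue
    persons = re.findall(r'\[([^\]]+)\]', transc.text)
    pos = list(news_from).index(transc) + 1  # position just after <transc>
    for person in persons:
        el = ET.Element('person')
        el.text = person
        news_from.insert(pos, el)
        pos += 1
tree.write('1649_with_persons.xml', encoding='utf-8', xml_declaration=True)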
|
Adding an XML element within an existing document with Python
|
Hello this is my first post, if something is not clear, please say so!
I have this xml file from which I have to extract all the names found between the square brackets of each transc tag (the one inside newsFrom) and then put them in a new tag called person under it. Obviously if there are two names I need two separate person tags with their respective names, as is done for newsTopic.
This is what I need
<newsFrom>
<from date="15/01/1649" dateUnsure="y">London</from>
<transc>Questo Parlamento generale Farfax [Thomas Fairfax, 3rd Lord Fairfax of Cameron] et suo consiglio dio et ordinato di pocessare il re [Charles I, King of England]</transc>
<person>Thomas Fairfax, 3rd Lord Fairfax of Cameron</person>
<person>Charles I, King of England</person>
<newsTopic>Military</newsTopic>
<wordCount>103</wordCount>
<position>1</position>
</newsFrom>
This is the XML file
<news xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="news.xsd">
<xmlCorpusDate>2022-10-16</xmlCorpusDate>
<xmlCorpusTime>15:17:52</xmlCorpusTime>
<newsDocument>
<docid>50992</docid>
<repository>Archivio di Stato di Firenze</repository>
<collection>Mediceo del Principato</collection>
<volume>4202</volume>
<newsHeader>
<hub>London</hub>
<date>15/01/1649</date>
<transc>Di Londra 15 gennaio 1648 ab Incarnatione</transc>
<newsFrom>
<from date="15/01/1649" dateUnsure="y">London</from>
<transc>Questo Parlamento generale Farfax [Thomas Fairfax, 3rd Lord Fairfax of Cameron] et suo consiglio dio et ordinato di pocessare il re [Charles I, King of England]</transc>
<newsTopic>Military</newsTopic>
<wordCount>103</wordCount>
<position>1</position>
</newsFrom>
<newsFrom>
<from date="15/01/1649" dateUnsure="y">Manchester</from>
<transc>Ieri è giunto Rossini [Cardinal Rossini] et suo figlio [Gianmarco Rossini]</transc>
<newsTopic>Politics</newsTopic>
<wordCount>53</wordCount>
<position>2</position>
</newsFrom>
<writtenPagesNo>5</writtenPagesNo>
</newsHeader>
</newsDocument>
<newsDocument>
<docid>50492</docid>
<repository>Archivio di Stato di Firenze</repository>
<collection>Mediceo del Principato</collection>
<volume>4202</volume>
<newsHeader>
<hub>London</hub>
<date>21/01/1649</date>
<transc>Di Londra 21 gennaio 1648 ab Incarnatione</transc>
<newsFrom>
<from date="21/01/1649" dateUnsure="y">London</from>
<transc>Il consiglio di guerra con la Camera [English Parliament]</transc>
<newsTopic>Government</newsTopic>
<newsTopic>Politics</newsTopic>
<wordCount>78</wordCount>
<position>1</position>
</newsFrom>
<newsFrom>
<from date="21/01/1649" dateUnsure="y">Manchester</from>
<transc>Si è data notizia [Marco Cioni] di cose di poco conto</transc>
<newsTopic>Politics</newsTopic>
<wordCount>144</wordCount>
<position>2</position>
</newsFrom>
<writtenPagesNo>5</writtenPagesNo>
</newsHeader>
</newsDocument>
</news>
As for the extraction of names, this was relatively easy, in fact I created the following code in python
import xml.etree.ElementTree as ET
import re
file = open("1649.xml")
tree=ET.parse('1649.xml')
root=tree.getroot()
for document in root.findall("newsDocument"):
names=document.find("./newsHeader/newsFrom/transc").text
people=re.findall("\[(.*?)\]",names)
The problem now arises in creating the new tags, assigning them names extracted from the text and making sure that each individual name corresponds to the exact text.
I've tried different ways and looked at the library guide, but I can't get it to work; the best I can do is produce a messy list at the head of the file.
Thanks to anyone who can help me
|
[
"In this case, it's easier to use lxml rather than ElementTree, because of lxml's better support for xpath.\nSo try this:\nfrom lxml import etree\nimport re\n\ntree=etree.parse('1649.xml')\n\n#find all <trasnc> elements\ntrs = root.xpath(\".//transc\")\nfor t in trs:\n #use regex to find the data between \"[\" and \"]\"\n persons = re.findall('(?<=\\[)([^]]+)(?=\\])', t.text)\n if len(persons)>0:\n #EDIT\n for person in set(persons):\n #create a new element using f-strings\n np = etree.fromstring(f\"<person>{person}</person>\")\n #add the new element in the appropriate place\n t.addnext(np)\n#pretty print\netree.indent(root, space=' ', level=0)\nprint(etree.tostring(root).decode())\n\nThe output should be your expected output.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"xml"
] |
stackoverflow_0074594523_python_xml.txt
|
Q:
Why am I getting an error when I import a module
I'm following a PostgreSQL tutorial and in the video he does
from . import models
then when I try it I get an error.
I did exactly what he did in the video and I get this error:
from . import models
ImportError: attempted relative import with no known parent package
Does anyone know why?
A:
The ImportError message is stating that Python expected a parent package to import models from, but didn't find one. Are you working from the same directory that the instructor is working from?
I'm not familiar with the tutorial, but given the import, it seems you're also learning Django. If so, are you sure you have Django installed?
As a general tip, in case you are indeed new to Stack Overflow, you may find it helpful to copy and paste the error message you're seeing, like this.
You might find these questions' answers helpful as well:
What does mean from . import?
Relative imports in Python 3
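To illustrate, a relative import like from . import models only works when the file is executed as part of a package. A minimal layout (the names are hypothetical):
project/
├── app/
│   ├── __init__.py
│   ├── main.py      # contains: from . import models
│   └── models.py
Run it from project/ as a module, $ python -m app.main, rather than $ python app/main.py; or, if main.py and models.py sit side by side and you run main.py directly, change the line to a plain import models.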
|
Why am I getting an error when I import a module
|
I'm following a PostgreSQL tutorial and in the video he does
from . import models
then when I try it I get an error.
I did exactly what he did in the video and I get this error:
from . import models
ImportError: attempted relative import with no known parent package
Does anyone know why?
|
[
"The ImportError message is stating that Python expected a module to import models from, but didn't find it. Are you working from the same directory that the instructor is working from?\nI'm not familiar with the tutorial, but given the import, it seems you're also learning Django. If so, are you sure you have Django installed?\nAs a general tip, in case you are indeed new to Stack Overflow, you may find it helpful to copy and paste the error message you're seeing, like this.\nYou might find these questions' answers helpful as well:\n\nWhat does mean from . import?\nRelative imports in Python 3\n\n"
] |
[
0
] |
[] |
[] |
[
"fastapi",
"postgresql",
"python",
"sql",
"uvicorn"
] |
stackoverflow_0074594722_fastapi_postgresql_python_sql_uvicorn.txt
|
Q:
Save the data in CSV after every update
Hi, I have some data that I want to save in a dataframe after every update, but it always overrides my previous data. Is there any method to keep my previous data and add the new data to it?
df = pd.DataFrame(columns=['Entry','Middle','Exit'])
def function():
entry_value = 178.184 # data coming from server
middle_value = 14.121 # data coming from server
exit_value = 19.21 # data coming from server
df1 = df.append({'Entry' : entry_value , 'Middle' : middle_value, 'Exit' : exit_value}, ignore_index = True)
df1.to_csv('abc.csv')
i = 0
while i < 5:
function()
i += 1
These entry_value, middle_value and exit_value values change; sometimes they don't. In this example I want my CSV to end up with the same data 5 times.
Note: here the values are hard-coded, but they come from the server in this format.
A:
You can use the concat function (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html)
for example:
import pandas as pd
df = pd.DataFrame(columns=['Entry','Middle','Exit'])
def function():
global df
entry_value = 178.184 # data coming from server
middle_value = 14.121 # data coming from server
exit_value = 19.21 # data coming from server
new_row = pd.DataFrame.from_dict([{'Entry' : entry_value , 'Middle' : middle_value, 'Exit' : exit_value}], orient='columns')
df = pd.concat([df, new_row])
df.to_csv('abc.csv')
i = 0
while i < 5:
function()
i += 1
Also if you want to have every version of your CSV file you can add a counter to the end of your CSV file name.
for example:
import pandas as pd
df = pd.DataFrame(columns=['Entry','Middle','Exit'])
def function(n):
global df
entry_value = 178.184 # data coming from server
middle_value = 14.121 # data coming from server
exit_value = 19.21 # data coming from server
new_row = pd.DataFrame.from_dict([{'Entry' : entry_value , 'Middle' : middle_value, 'Exit' : exit_value}], orient='columns')
df = pd.concat([df, new_row])
df.to_csv(f'abc{n}.csv')
i = 0
while i < 5:
function(i)
i += 1
A:
This answer might be able to help you much better by giving you clarity on multiple ways of appending the data in dataframe.
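If you only need the CSV file to grow (rather than keeping everything in the in-memory DataFrame), another option is to append to the file directly, writing the header only once. A sketch, assuming the same column layout as above:
import os
import pandas as pd

def append_row(entry_value, middle_value, exit_value, path='abc.csv'):
    row = pd.DataFrame([{'Entry': entry_value, 'Middle': middle_value, 'Exit': exit_value}])
    # write the header only if the file does not exist yet
    row.to_csv(path, mode='a', header=not os.path.exists(path), index=False)

for _ in range(5):
    append_row(178.184, 14.121, 19.21)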
|
Save the data in CSV after every update
|
Hi, I have some data that I want to save in a dataframe after every update, but it always overrides my previous data. Is there any method to keep my previous data and add the new data to it?
df = pd.DataFrame(columns=['Entry','Middle','Exit'])
def function():
entry_value = 178.184 # data coming from server
middle_value = 14.121 # data coming from server
exit_value = 19.21 # data coming from server
df1 = df.append({'Entry' : entry_value , 'Middle' : middle_value, 'Exit' : exit_value}, ignore_index = True)
df1.to_csv('abc.csv')
i = 0
while i < 5:
function()
i += 1
These entry_value, middle_value and exit_value values change; sometimes they don't. In this example I want my CSV to end up with the same data 5 times.
Note: here the values are hard-coded, but they come from the server in this format.
|
[
"You can use concat function (https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html)\nfor example:\nimport pandas as pd\n\ndf = pd.DataFrame(columns=['Entry','Middle','Exit'])\ndef function():\n global df\n entry_value = 178.184 # data comming from server\n middle_value = 14.121 # data comming from server\n exit_value = 19.21 # data comming from server'\n \n new_row = pd.DataFrame.from_dict([{'Entry' : entry_value , 'Middle' : middle_value, 'Exit' : exit_value}], orient='columns')\n df = pd.concat([df, new_row])\n df.to_csv('abc.csv')\n \ni = 0\nwhile i < 5:\n function()\n i += 1\n\n\nAlso if you want to have every version of your CSV file you can add a counter to the end of your CSV file name.\nfor example:\nimport pandas as pd\n\ndf = pd.DataFrame(columns=['Entry','Middle','Exit'])\ndef function(n):\n global df\n entry_value = 178.184 # data comming from server\n middle_value = 14.121 # data comming from server\n exit_value = 19.21 # data comming from server'\n \n new_row = pd.DataFrame.from_dict([{'Entry' : entry_value , 'Middle' : middle_value, 'Exit' : exit_value}], orient='columns')\n df = pd.concat([df, new_row])\n df.to_csv(f'abc{n}.csv')\n \ni = 0\nwhile i < 5:\n function(i)\n i += 1\n\n",
"This answer might be able to help you much better by giving you clarity on multiple ways of appending the data in dataframe.\n"
] |
[
1,
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074594554_dataframe_pandas_python_python_3.x.txt
|
Q:
Python change every n-th pixel of an image on x and y axis to a different color
As the title says, I have to take an image and write code that colors in every n-th pixel on x axis and every n-th pixel on y axis.
I tried coloring every pixel manually, but it takes too much time because the image is 500x500 and it would take an eternity to change every pixel based on its position on the x and y axes.
A:
I think (@PranavHosangadi) and (@Mike L) are correct.
I don't see how it is possible without a loop, but you can use the loop efficiently by skipping pixels rather than iterating over every position.
This example changes the value on the diagonal, skipping 2 rows and 2 columns between writes.
import numpy as np
img = np.ones((10,10))
print(f"Array 1: \n {img}")
w, h = img.shape
for i in range(0, h, 3):
#if i % 2 == 0:
img[i,i] = 0
print(f"Array 2: \n {img}")
|
Python change every n-th pixel of an image on x and y axis to a different color
|
As the title says, I have to take an image and write code that colors in every n-th pixel on x axis and every n-th pixel on y axis.
I tried coloring every pixel manually, but it takes too much time because the image is 500x500 and it would take an eternity to change every pixel based on its position on the x and y axes.
|
[
"I think (@PranavHosangadi) and (@Mike L) are correct.\nI don't see how it is possible without a loop. But you can use the loop in this way by skipping the pixels and not iterate over each position.\nThis is an example to change the value at a location by skipping 2 rows and 2 columns.\nimport numpy as np\n\nimg = np.ones((10,10))\nprint(f\"Array 1: \\n {img}\")\nw, h = img.shape\n\nfor i in range(0, h, 3):\n #if i % 2 == 0:\n img[i,i] = 0\n\nprint(f\"Array 2: \\n {img}\") \n\n"
] |
[
0
] |
[] |
[] |
[
"jupyter",
"jupyter_notebook",
"python",
"python_3.x"
] |
stackoverflow_0074594351_jupyter_jupyter_notebook_python_python_3.x.txt
|
Q:
Python 2D self-avoiding random walk
I want to make a self-avoiding 2D random walk in Python. Imagine a dot on a square grid that can only go up, down, left or right, but cannot land twice on the same point. I have an idea of how to do it, but my programming skills aren't very good (I'm a beginner) and the code doesn't work.
The end product should look something like this:
enter image description here
My idea was to create two lists: one where I store x and y (i.e. the coordinates I've already been to) and one with the points that are around the point I'm currently at (I marked it as neighbors). I want to create a new variable surviving_neighbors. This would be a list of coordinates of the surrounding points where I have not yet been to (e.g. I am currently at (1,1); I have already been at (0,1) and (1,2) so that my surviving neighbors would be (1,0 ) and (2,1)). I want to get Surviving_neighbors using the difference method: I put neighbors.difference(list of coordinates) and save in a variable what is in the list of neighbors, but not in the list of coordinates I was on. The first problem I have is that I don't have one list with coordinates, but x and y are stored separately. Next, I would use choice(surviving_neighbors) and thus choose new coordinates. This creates another problem: I probably won't be able to call it a trajectory, but I'll have to define it again in terms of x and y...
The teacher suggested that I store x and y as vectors, but I have no idea how to do that.
Code:
from random import choice
import numpy as np
from matplotlib import pyplot as plt
plt.style.use(['science', 'notebook', 'dark background'])
x, y = 0, 0
X, Y = [x], [y]
coordinates = [(x, y)]
for time in range(10):
    dx, dy = choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x, y = x + dx, y + dy
    X.append(x)
    Y.append(y)
    neighbors = [(x + 1, y),
                 (x - 1, y),
                 (x, y + 1),
                 (x, y - 1)]
    surviving_neighbors = set(neighbors).difference(zip(X, Y))
    trajectory = choice(list(surviving_neighbors))

plt.plot()
A:
Hard to know where you are going with this; here is a basic working example.
This is invalid, as those style names don't exist:
plt.style.use(['science', 'notebook', 'dark background'])
Possible values are;
['Solarize_Light2', '_classic_test_patch', '_mpl-gallery',
'_mpl-gallery-nogrid', 'bmh', 'classic', 'dark_background', 'fast',
'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn-v0_8',
'seaborn-v0_8-bright', 'seaborn-v0_8-colorblind', 'seaborn-v0_8-dark',
'seaborn-v0_8-dark-palette', 'seaborn-v0_8-darkgrid',
'seaborn-v0_8-deep', 'seaborn-v0_8-muted', 'seaborn-v0_8-notebook',
'seaborn-v0_8-paper', 'seaborn-v0_8-pastel', 'seaborn-v0_8-poster',
'seaborn-v0_8-talk', 'seaborn-v0_8-ticks', 'seaborn-v0_8-white',
'seaborn-v0_8-whitegrid', 'tableau-colorblind10']
from random import choice
from matplotlib import pyplot as plt
plt.style.use('seaborn-v0_8-darkgrid')
print(plt.style.available)
#2D self-avoiding random walk
def dotty(n):
x, y = 0, 0
path = [(x, y)]
for i in range(n):
# pick a random neighbor; stop the walk as soon as it revisits a point
x, y = choice([(x+1, y), (x-1, y), (x, y+1), (x, y-1)])
if (x, y) in path:
return path
path.append((x, y))
return path
# show plot
def show_path(path):
plt.figure(figsize=(10, 10))
# draw points
plt.scatter(*zip(*path), s=5, c='k')
# draw lines in red
plt.plot(*zip(*path), c='r')
plt.show()
# main
if __name__ == '__main__':
path = dotty(100000)
show_path(path)
Output:
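And here is a sketch of the truly self-avoiding variant the question describes: choose only among unvisited neighbors and stop when the walk is trapped (a set keeps the membership test O(1)):
def self_avoiding(n):
    x, y = 0, 0
    path = [(x, y)]
    visited = {(x, y)}
    for _ in range(n):
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        surviving = [p for p in neighbors if p not in visited]
        if not surviving:  # trapped: every neighbor was already visited
            return path
        x, y = choice(surviving)
        path.append((x, y))
        visited.add((x, y))
    return path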
|
Python 2D self-avoiding random walk
|
I want to make a self-avoiding 2D random walk in Python. Imagine a dot on a square grid that can only go up, down, left or right, but cannot land twice on the same point. I have an idea of how to do it, but my programming skills aren't very good (I'm a beginner) and the code doesn't work.
The end product should look something like this:
enter image description here
My idea was to create two lists: one where I store x and y (i.e. the coordinates I've already been to) and one with the points that are around the point I'm currently at (I marked it as neighbors). I want to create a new variable surviving_neighbors. This would be a list of coordinates of the surrounding points where I have not yet been to (e.g. I am currently at (1,1); I have already been at (0,1) and (1,2) so that my surviving neighbors would be (1,0 ) and (2,1)). I want to get Surviving_neighbors using the difference method: I put neighbors.difference(list of coordinates) and save in a variable what is in the list of neighbors, but not in the list of coordinates I was on. The first problem I have is that I don't have one list with coordinates, but x and y are stored separately. Next, I would use choice(surviving_neighbors) and thus choose new coordinates. This creates another problem: I probably won't be able to call it a trajectory, but I'll have to define it again in terms of x and y...
The teacher suggested that I store x and y as vectors, but I have no idea how to do that.
Code:
from random import choice
import numpy as np
from matplotlib import pyplot as plt
plt.style.use(['science', 'notebook', 'dark background'])
x, y = 0, 0
X, Y = [x], [y]
coordinates = [(x, y)]
for time in range(10):
    dx, dy = choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    x, y = x + dx, y + dy
    X.append(x)
    Y.append(y)
    neighbors = [(x + 1, y),
                 (x - 1, y),
                 (x, y + 1),
                 (x, y - 1)]
    surviving_neighbors = set(neighbors).difference(zip(X, Y))
    trajectory = choice(list(surviving_neighbors))

plt.plot()
|
[
"Hard to know where you are going with this, here is a basic working example;\nThis is invalid as they don't exist;\n\nplt.style.use(['science', 'notebook', 'dark background'])\n\n\nPossible values are;\n\n['Solarize_Light2', '_classic_test_patch', '_mpl-gallery',\n'_mpl-gallery-nogrid', 'bmh', 'classic', 'dark_background', 'fast',\n'fivethirtyeight', 'ggplot', 'grayscale', 'seaborn-v0_8',\n'seaborn-v0_8-bright', 'seaborn-v0_8-colorblind', 'seaborn-v0_8-dark',\n'seaborn-v0_8-dark-palette', 'seaborn-v0_8-darkgrid',\n'seaborn-v0_8-deep', 'seaborn-v0_8-muted', 'seaborn-v0_8-notebook',\n'seaborn-v0_8-paper', 'seaborn-v0_8-pastel', 'seaborn-v0_8-poster',\n'seaborn-v0_8-talk', 'seaborn-v0_8-ticks', 'seaborn-v0_8-white',\n'seaborn-v0_8-whitegrid', 'tableau-colorblind10']\n\nfrom random import choice\nfrom matplotlib import pyplot as plt\nplt.style.use('seaborn-v0_8-darkgrid')\nprint(plt.style.available)\n\n#2D self-avoiding random walk\ndef dotty(n):\n x, y = 0, 0\n path = [(x, y)]\n for i in range(n):\n # pick the closest point but it must complete without crossing itself\n x, y = choice([(x+1, y), (x-1, y), (x, y+1), (x, y-1)])\n if (x, y) in path:\n return path\n path.append((x, y))\n return path\n\n# show plot\ndef show_path(path):\n plt.figure(figsize=(10, 10))\n # draw points\n plt.scatter(*zip(*path), s=5, c='k')\n # draw lines in red\n plt.plot(*zip(*path), c='r')\n plt.show()\n\n# main\nif __name__ == '__main__':\n path = dotty(100000)\n show_path(path)\n\nOutput:\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"random_walk"
] |
stackoverflow_0074594765_python_random_walk.txt
|
Q:
Show protocols of packets captured and saved in a .pcap with scapy on python
I am capturing live air WiFi traffic and saving only the headers of the packets captures in a .pcap file.
Is it possible to find out what protocols have been used on the whole capture? If yes, how can I keep track of the number of packets under every protocol found?
I've found a lot of info on injecting packets with Scapy but not on analyzing.
So far I've tried:
from scapy.all import * # import scapy package
from scapy.utils import rdpcap # import module for loading pcaps
pkts = rdpcap("./traffic/capture20131120-001.pcap") # load pcap
pkts.summary(lambda(r): r.sprintf("%Dot11.proto%")) # protocol?
print -(256-ord(pkts[24].notdecoded[-4:-3])) # signal strength of packet 24
Seems like pkts.summary(lambda(r): r.sprintf("%Dot11.proto%")) returns 0L and I don't understand that.
A:
Currently, Scapy does not support very many protocols, so it's great for some tasks, but not others. Using pyshark instead (a Python wrapper for Wireshark), there are many more supported protocols.
Using Scapy:
from scapy.all import *
def process_with_scapy(fileName):
protocol_count = {}
pcap_data = rdpcap(fileName)
sessions = pcap_data.sessions()
for session in sessions:
for packet in sessions[session]:
for i in range(len(packet.layers())):
layer = packet.getlayer(i)
protocol = layer.name
# Count the number of occurences for each protocol type
if protocol not in protocol_count: protocol_count[protocol] = 1
else: protocol_count[protocol] += 1
# Sort the dictionary in descending order
protocol_count = dict(sorted(protocol_count.items(), key=lambda item: item[1], reverse=True))
# Print the output
for protocol in protocol_count:
print(f'{protocol_count[protocol]} packets have layer "{protocol}"')
process_with_scapy('./traffic/capture20131120-001.pcap')
Documentation:
https://readthedocs.org/projects/scapy/downloads/pdf/latest
Using PyShark (slower but more supported):
import pyshark
def process_with_pyshark(fileName):
protocol_count = {}
pcap_data = pyshark.FileCapture(fileName)
for packet in pcap_data:
for layer in packet:
protocol = layer.layer_name
# Count the number of occurences for each protocol type
if protocol not in protocol_count: protocol_count[protocol] = 1
else: protocol_count[protocol] += 1
# Sort the dictionary in descending order
protocol_count = dict(sorted(protocol_count.items(), key=lambda item: item[1], reverse=True))
# Print the output
for protocol in protocol_count:
print(f'{protocol_count[protocol]} packets have layer "{protocol}"')
process_with_pyshark('./traffic/capture20131120-001.pcap')
For information on a specific protocol:
https://www.wireshark.org/docs/dfref/
The source code for a specific protocol dissector can also sometimes be useful:
https://github.com/wireshark/wireshark/tree/master/epan/dissectors
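Since the capture in the question is raw 802.11, a small Scapy-only sketch that tallies the Dot11 frame types is shown below (it assumes the pcap really has an 802.11 link layer):
from collections import Counter
from scapy.all import rdpcap
from scapy.layers.dot11 import Dot11

pkts = rdpcap('./traffic/capture20131120-001.pcap')
counts = Counter(p[Dot11].type for p in pkts if p.haslayer(Dot11))
print(counts)  # 0 = management, 1 = control, 2 = data frames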
|
Show protocols of packets captured and saved in a .pcap with scapy on python
|
I am capturing live air WiFi traffic and saving only the headers of the packets captures in a .pcap file.
Is it possible to find out what protocols have been used on the whole capture? If yes, how can I keep track of the number of packets under every protocol found?
I've found a lot of info on injecting packets with Scapy but not on analyzing.
So far I've tried:
from scapy.all import * # import scapy package
from scapy.utils import rdpcap # import module for loading pcaps
pkts = rdpcap("./traffic/capture20131120-001.pcap") # load pcap
pkts.summary(lambda(r): r.sprintf("%Dot11.proto%")) # protocol?
print -(256-ord(pkts[24].notdecoded[-4:-3])) # signal strength of packet 24
Seems like pkts.summary(lambda(r): r.sprintf("%Dot11.proto%")) returns 0L and I don't understand that.
|
[
"Currently, Scapy does not support very many protocols, so it's great for some tasks, but not others. Using pyshark instead (a Python wrapper for Wireshark), there are many more supported protocols.\n\nUsing Scapy:\nfrom scapy.all import *\n\ndef process_with_scapy(fileName):\n protocol_count = {}\n\n pcap_data = rdpcap(fileName)\n sessions = pcap_data.sessions()\n for session in sessions:\n for packet in sessions[session]:\n for i in range(len(packet.layers())):\n layer = packet.getlayer(i)\n protocol = layer.name\n\n # Count the number of occurences for each protocol type\n if protocol not in protocol_count: protocol_count[protocol] = 1\n else: protocol_count[protocol] += 1\n\n # Sort the dictionary in descending order\n protocol_count = dict(sorted(protocol_count.items(), key=lambda item: item[1], reverse=True))\n \n # Print the output\n for protocol in protocol_count:\n print(f'{protocol_count[protocol]} packets have layer \"{protocol}\"')\n\nprocess_with_scapy('./traffic/capture20131120-001.pcap')\n\nDocumentation:\nhttps://readthedocs.org/projects/scapy/downloads/pdf/latest\n\nUsing PyShark (slower but more supported):\nimport pyshark\n\ndef process_with_pyshark(fileName):\n protocol_count = {}\n\n pcap_data = pyshark.FileCapture(fileName)\n for packet in pcap_data:\n for layer in packet:\n protocol = layer.layer_name\n\n # Count the number of occurences for each protocol type\n if protocol not in protocol_count: protocol_count[protocol] = 1\n else: protocol_count[protocol] += 1\n\n # Sort the dictionary in descending order\n protocol_count = dict(sorted(protocol_count.items(), key=lambda item: item[1], reverse=True))\n\n # Print the output\n for protocol in protocol_count:\n print(f'{protocol_count[protocol]} packets have layer \"{protocol}\"')\n\n\nprocess_with_pyshark('./traffic/capture20131120-001.pcap')\n\n\nFor information on a specific protocol:\nhttps://www.wireshark.org/docs/dfref/\nThe source code for a specific protocol dissector can also sometimes be useful:\nhttps://github.com/wireshark/wireshark/tree/master/epan/dissectors\n\n"
] |
[
0
] |
[] |
[] |
[
"analysis",
"pcap",
"protocols",
"python",
"scapy"
] |
stackoverflow_0020088735_analysis_pcap_protocols_python_scapy.txt
|
Q:
how to change the date format in every first element of a sublist
I have a nested list like this: datelist = [["2019/04/12", 7.0], ["2019/02/09", 7.3], ["2018/08/14", 6.1]]
I need to change the date format from yyyy/mm/dd to dd.mm.yyyy and then return the list as it is.
So the result should be [["12.04.2019", 7.0], ["09.02.2019", 7.3], ["14.08.2018", 6.1]].
I'm a beginner, so I'm really not sure how to do it.
I tried the following:
import datetime
datelist = [datetime.datetime.strptime(str(i[0]), "%Y/%m/%d").strftime('%d.%m.%Y') for i in datelist]
print(datelist)
and the output was:
['12.04.2019', '09.02.2019', '14.08.2018']
So the change of the date format worked, but how do I return the original nested list with the corrected date format?
I need to implement this as a function which takes lists like datelist as an input.
A:
It's simple:
datelist = [[datetime.datetime.strptime(str(i[0]), "%Y/%m/%d").strftime('%d.%m.%Y'), i[1]] for i in datelist]
When iterating through your list, each item you get back is itself a list; knowing the position of the elements helps, thus using i[0] for the first element (the date string) and i[1] for the second (the number).
A:
You can update your script like below:
datelist = [[datetime.datetime.strptime(str(i[0]), "%Y/%m/%d").strftime('%d.%m.%Y'), i[1]] for i in datelist]
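Wrapped up as the function the question asks for, a minimal sketch that leaves each sublist's remaining elements untouched:
import datetime

def reformat_dates(datelist):
    return [[datetime.datetime.strptime(d, "%Y/%m/%d").strftime("%d.%m.%Y"), *rest]
            for d, *rest in datelist]

print(reformat_dates([["2019/04/12", 7.0], ["2019/02/09", 7.3], ["2018/08/14", 6.1]]))
# [['12.04.2019', 7.0], ['09.02.2019', 7.3], ['14.08.2018', 6.1]]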
|
how to change the date format in every first element of a sublist
|
I have a nested list like this: datelist = [["2019/04/12", 7.0], ["2019/02/09", 7.3], ["2018/08/14", 6.1]]
I need to change the date format from yyyy/mm/dd to dd.mm.yyyy and then return the list as it is.
So the result should be [["12.04.2019", 7.0], ["09.02.2019", 7.3], ["14.08.2018", 6.1]].
I'm a beginner, so I'm really not sure how to do it.
I tried the following:
import datetime
datelist = [datetime.datetime.strptime(str(i[0]), "%Y/%m/%d").strftime('%d.%m.%Y') for i in datelist]
print(datelist)
and the output was:
['12.04.2019', '09.02.2019', '14.08.2018']
So the change of the date format worked, but how do I return the original nested list with the corrected date format?
I need to implement this as a function which takes lists like datelist as an input.
|
[
"It's simple:\ndatelist = [[datetime.datetime.strptime(str(i[0]), \"%Y/%m/%d\").strftime('%d.%m.%Y'), i[1]] for i in mylist]\n\nWhen iterating throughout your list, you get back a list, knowing the position of your elements in the list helps, thus using i[0] for the first element (datetime), and i[1] for the second (number).\n",
"You can update your script like below:\ndatelist = [[datetime.datetime.strptime(str(i[0]), \"%Y/%m/%d\").strftime('%d.%m.%Y'), i[1]] for i in datelist]\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"datetime",
"function",
"nested_lists",
"python"
] |
stackoverflow_0074595039_datetime_function_nested_lists_python.txt
|
Q:
Get size of a file before downloading in Python
I'm downloading an entire directory from a web server. It works OK, but I can't figure out how to get the file size before download, to compare whether it was updated on the server or not. Can this be done as if I were downloading the file from an FTP server?
import urllib
import re
url = "http://www.someurl.com"
# Download the page locally
f = urllib.urlopen(url)
html = f.read()
f.close()
f = open ("temp.htm", "w")
f.write (html)
f.close()
# List only the .TXT / .ZIP files
fnames = re.findall('^.*<a href="(\w+(?:\.txt|\.zip))".*$', html, re.MULTILINE)
for fname in fnames:
print fname, "..."
f = urllib.urlopen(url + "/" + fname)
#### Here I want to check the filesize to download or not ####
file = f.read()
f.close()
f = open (fname, "w")
f.write (file)
f.close()
@Jon: thanks for your quick answer. It works, but the filesize on the web server is slightly less than the filesize of the downloaded file.
Examples:
Local Size Server Size
2.223.533 2.115.516
664.603 662.121
Does it have anything to do with the CR/LF conversion?
A:
I have reproduced what you are seeing:
import urllib, os
link = "http://python.org"
print "opening url:", link
site = urllib.urlopen(link)
meta = site.info()
print "Content-Length:", meta.getheaders("Content-Length")[0]
f = open("out.txt", "r")
print "File on disk:",len(f.read())
f.close()
f = open("out.txt", "w")
f.write(site.read())
site.close()
f.close()
f = open("out.txt", "r")
print "File on disk after download:",len(f.read())
f.close()
print "os.stat().st_size returns:", os.stat("out.txt").st_size
Outputs this:
opening url: http://python.org
Content-Length: 16535
File on disk: 16535
File on disk after download: 16535
os.stat().st_size returns: 16861
What am I doing wrong here? Is os.stat().st_size not returning the correct size?
Edit:
OK, I figured out what the problem was:
import urllib, os
link = "http://python.org"
print "opening url:", link
site = urllib.urlopen(link)
meta = site.info()
print "Content-Length:", meta.getheaders("Content-Length")[0]
f = open("out.txt", "rb")
print "File on disk:",len(f.read())
f.close()
f = open("out.txt", "wb")
f.write(site.read())
site.close()
f.close()
f = open("out.txt", "rb")
print "File on disk after download:",len(f.read())
f.close()
print "os.stat().st_size returns:", os.stat("out.txt").st_size
this outputs:
$ python test.py
opening url: http://python.org
Content-Length: 16535
File on disk: 16535
File on disk after download: 16535
os.stat().st_size returns: 16535
Make sure you are opening both files for binary read/write.
// open for binary write
open(filename, "wb")
// open for binary read
open(filename, "rb")
A:
Using the returned-urllib-object method info(), you can get various information on the retrieved document. Example of grabbing the current Google logo:
>>> import urllib
>>> d = urllib.urlopen("http://www.google.co.uk/logos/olympics08_opening.gif")
>>> print d.info()
Content-Type: image/gif
Last-Modified: Thu, 07 Aug 2008 16:20:19 GMT
Expires: Sun, 17 Jan 2038 19:14:07 GMT
Cache-Control: public
Date: Fri, 08 Aug 2008 13:40:41 GMT
Server: gws
Content-Length: 20172
Connection: Close
It's a dict, so to get the size of the file, you do urllibobject.info()['Content-Length']
print f.info()['Content-Length']
And to get the size of the local file (for comparison), you can use the os.stat() command:
os.stat("/the/local/file.zip").st_size
A:
A requests-based solution using HEAD instead of GET (also prints HTTP headers):
#!/usr/bin/python
# display size of a remote file without downloading
from __future__ import print_function
import sys
import requests
# number of bytes in a megabyte
MBFACTOR = float(1 << 20)
response = requests.head(sys.argv[1], allow_redirects=True)
print("\n".join([('{:<40}: {}'.format(k, v)) for k, v in response.headers.items()]))
size = response.headers.get('content-length', 0)
print('{:<40}: {:.2f} MB'.format('FILE SIZE', int(size) / MBFACTOR))
Usage
$ python filesize-remote-url.py https://httpbin.org/image/jpeg
...
Content-Length : 35588
FILE SIZE : 0.03 MB
A:
The size of the file is sent as the Content-Length header. Here is how to get it with urllib:
>>> site = urllib.urlopen("http://python.org")
>>> meta = site.info()
>>> print meta.getheaders("Content-Length")
['16535']
>>>
A:
Also if the server you are connecting to supports it, look at Etags and the If-Modified-Since and If-None-Match headers.
Using these will take advantage of the webserver's caching rules and will return a 304 Not Modified status code if the content hasn't changed.
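For illustration, here is a minimal sketch of a conditional request with the requests package (the URL and filename are placeholders, and the server must actually honour these headers):
import requests

url = "http://www.someurl.com/file.zip"

head = requests.head(url)
etag = head.headers.get("ETag")
last_modified = head.headers.get("Last-Modified")

headers = {}
if etag:
    headers["If-None-Match"] = etag
if last_modified:
    headers["If-Modified-Since"] = last_modified

response = requests.get(url, headers=headers)
if response.status_code == 304:
    print("Not modified - skipping the download")
else:
    with open("file.zip", "wb") as f:
        f.write(response.content)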
A:
In Python3:
>>> import urllib.request
>>> site = urllib.request.urlopen("http://python.org")
>>> print("FileSize: ", site.length)
A:
For a python3 (tested on 3.5) approach I'd recommend:
from urllib.request import urlopen

with urlopen(file_url) as in_file, open(local_file_address, 'wb') as out_file:
    print(in_file.getheader('Content-Length'))
    out_file.write(in_file.read())
A:
For anyone using Python 3 and looking for a quick solution using the requests package:
import requests
response = requests.head(
"https://website.com/yourfile.mp4", # Example file
allow_redirects=True
)
print(response.headers['Content-Length'])
Note: Not all responses will have a Content-Length so your application will want to check to see if it exists.
if 'Content-Length' in response.headers:
... # Do your stuff here
A:
Here is a much safer way for Python 3:
import urllib.request
site = urllib.request.urlopen("http://python.org")
meta = site.info()
meta.get('Content-Length')
Returns:
'49829'
meta.get('Content-Length') will return the "Content-Length" header if it exists. Otherwise it will return None.
A:
@PabloG Regarding the local/server filesize difference
Following is high-level illustrative explanation of why it may occur:
The size on disk sometimes is different from the actual size of the data.
It depends on the underlying file-system and how it operates on data.
As you may have seen in Windows when formatting a flash drive you are asked to provide 'block/cluster size' and it varies [512b - 8kb].
When a file is written on the disk, it is stored in a 'sort-of linked list' of disk blocks.
When a certain block is used to store part of a file, no other file contents will be stored in the same block, so even if the chunk is not occupying the entire block space, the block is rendered unusable by other files.
Example:
When the filesystem is divided into 512b blocks, and we need to store a 600b file, two blocks will be occupied. The first block will be fully utilized, while the second block will have only 88b utilized and the remaining (512-88)b will be unusable, resulting in 'file-size-on-disk' being 1024b.
This is why Windows has different notations for 'file size' and 'size on disk'.
NOTE:
There are different pros & cons that come with smaller/bigger FS block, so do a better research before playing with your filesystem.
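To make the arithmetic concrete, a quick sketch of the size-on-disk calculation from the example above:
import math

# A 600-byte file stored in 512-byte blocks occupies ceil(600/512) = 2
# blocks, i.e. 1024 bytes on disk.
block_size = 512
file_size = 600
size_on_disk = int(math.ceil(file_size / float(block_size))) * block_size
print(size_on_disk)  # 1024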
A:
Quick and reliable one-liner for Python3 using urllib:
import urllib.request
url = 'https://<your url here>'
size = urllib.request.urlopen(url).info().get('Content-Length', 0)
.get(<dict key>, 0) gets the key from dict and if the key is absent returns 0 (or whatever the 2nd argument is)
A:
you can use requests to pull this data
File_Name=requests.head(LINK).headers["X-File-Name"]
#And other useful info, like the size of the file, from this dict (headers):
File_size=requests.head(LINK).headers["Content-Length"]
|
Get size of a file before downloading in Python
|
I'm downloading an entire directory from a web server. It works OK, but I can't figure how to get the file size before download to compare if it was updated on the server or not. Can this be done as if I was downloading the file from a FTP server?
import urllib
import re
url = "http://www.someurl.com"
# Download the page locally
f = urllib.urlopen(url)
html = f.read()
f.close()
f = open ("temp.htm", "w")
f.write (html)
f.close()
# List only the .TXT / .ZIP files
fnames = re.findall('^.*<a href="(\w+(?:\.txt|.zip)?)".*$', html, re.MULTILINE)
for fname in fnames:
print fname, "..."
f = urllib.urlopen(url + "/" + fname)
#### Here I want to check the filesize to download or not ####
file = f.read()
f.close()
f = open (fname, "w")
f.write (file)
f.close()
@Jon: thanks for your quick answer. It works, but the filesize on the web server is slightly less than the filesize of the downloaded file.
Examples:
Local Size Server Size
2.223.533 2.115.516
664.603 662.121
Does it have anything to do with the CR/LF conversion?
|
[
"I have reproduced what you are seeing:\nimport urllib, os\nlink = \"http://python.org\"\nprint \"opening url:\", link\nsite = urllib.urlopen(link)\nmeta = site.info()\nprint \"Content-Length:\", meta.getheaders(\"Content-Length\")[0]\n\nf = open(\"out.txt\", \"r\")\nprint \"File on disk:\",len(f.read())\nf.close()\n\n\nf = open(\"out.txt\", \"w\")\nf.write(site.read())\nsite.close()\nf.close()\n\nf = open(\"out.txt\", \"r\")\nprint \"File on disk after download:\",len(f.read())\nf.close()\n\nprint \"os.stat().st_size returns:\", os.stat(\"out.txt\").st_size\n\nOutputs this:\nopening url: http://python.org\nContent-Length: 16535\nFile on disk: 16535\nFile on disk after download: 16535\nos.stat().st_size returns: 16861\n\nWhat am I doing wrong here? Is os.stat().st_size not returning the correct size?\n\nEdit:\nOK, I figured out what the problem was:\nimport urllib, os\nlink = \"http://python.org\"\nprint \"opening url:\", link\nsite = urllib.urlopen(link)\nmeta = site.info()\nprint \"Content-Length:\", meta.getheaders(\"Content-Length\")[0]\n\nf = open(\"out.txt\", \"rb\")\nprint \"File on disk:\",len(f.read())\nf.close()\n\n\nf = open(\"out.txt\", \"wb\")\nf.write(site.read())\nsite.close()\nf.close()\n\nf = open(\"out.txt\", \"rb\")\nprint \"File on disk after download:\",len(f.read())\nf.close()\n\nprint \"os.stat().st_size returns:\", os.stat(\"out.txt\").st_size\n\nthis outputs:\n$ python test.py\nopening url: http://python.org\nContent-Length: 16535\nFile on disk: 16535\nFile on disk after download: 16535\nos.stat().st_size returns: 16535\n\nMake sure you are opening both files for binary read/write.\n// open for binary write\nopen(filename, \"wb\")\n// open for binary read\nopen(filename, \"rb\")\n\n",
"Using the returned-urllib-object method info(), you can get various information on the retrieved document. Example of grabbing the current Google logo:\n>>> import urllib\n>>> d = urllib.urlopen(\"http://www.google.co.uk/logos/olympics08_opening.gif\")\n>>> print d.info()\n\nContent-Type: image/gif\nLast-Modified: Thu, 07 Aug 2008 16:20:19 GMT \nExpires: Sun, 17 Jan 2038 19:14:07 GMT \nCache-Control: public \nDate: Fri, 08 Aug 2008 13:40:41 GMT \nServer: gws \nContent-Length: 20172 \nConnection: Close\n\nIt's a dict, so to get the size of the file, you do urllibobject.info()['Content-Length']\nprint f.info()['Content-Length']\n\nAnd to get the size of the local file (for comparison), you can use the os.stat() command:\nos.stat(\"/the/local/file.zip\").st_size\n\n",
"A requests-based solution using HEAD instead of GET (also prints HTTP headers):\n#!/usr/bin/python\n# display size of a remote file without downloading\n\nfrom __future__ import print_function\nimport sys\nimport requests\n\n# number of bytes in a megabyte\nMBFACTOR = float(1 << 20)\n\nresponse = requests.head(sys.argv[1], allow_redirects=True)\n\nprint(\"\\n\".join([('{:<40}: {}'.format(k, v)) for k, v in response.headers.items()]))\nsize = response.headers.get('content-length', 0)\nprint('{:<40}: {:.2f} MB'.format('FILE SIZE', int(size) / MBFACTOR))\n\nUsage\n\n$ python filesize-remote-url.py https://httpbin.org/image/jpeg\n...\nContent-Length : 35588\nFILE SIZE (MB) : 0.03 MB\n\n\n",
"The size of the file is sent as the Content-Length header. Here is how to get it with urllib:\n>>> site = urllib.urlopen(\"http://python.org\")\n>>> meta = site.info()\n>>> print meta.getheaders(\"Content-Length\")\n['16535']\n>>>\n\n",
"Also if the server you are connecting to supports it, look at Etags and the If-Modified-Since and If-None-Match headers.\nUsing these will take advantage of the webserver's caching rules and will return a 304 Not Modified status code if the content hasn't changed.\n",
"In Python3:\n>>> import urllib.request\n>>> site = urllib.request.urlopen(\"http://python.org\")\n>>> print(\"FileSize: \", site.length)\n\n",
"For a python3 (tested on 3.5) approach I'd recommend:\nwith urlopen(file_url) as in_file, open(local_file_address, 'wb') as out_file:\n print(in_file.getheader('Content-Length'))\n out_file.write(response.read())\n\n",
"For anyone using Python 3 and looking for a quick solution using the requests package:\nimport requests \nresponse = requests.head( \n \"https://website.com/yourfile.mp4\", # Example file \n allow_redirects=True\n)\nprint(response.headers['Content-Length']) \n\nNote: Not all responses will have a Content-Length so your application will want to check to see if it exists.\nif 'Content-Length' in response.headers:\n ... # Do your stuff here \n\n",
"Here is a much more safer way for Python 3:\nimport urllib.request\nsite = urllib.request.urlopen(\"http://python.org\")\nmeta = site.info()\nmeta.get('Content-Length') \n\nReturns:\n'49829'\n\nmeta.get('Content-Length') will return the \"Content-Length\" header if exists. Otherwise it will be blank\n",
"@PabloG Regarding the local/server filesize difference\nFollowing is high-level illustrative explanation of why it may occur:\nThe size on disk sometimes is different from the actual size of the data.\nIt depends on the underlying file-system and how it operates on data.\nAs you may have seen in Windows when formatting a flash drive you are asked to provide 'block/cluster size' and it varies [512b - 8kb].\nWhen a file is written on the disk, it is stored in a 'sort-of linked list' of disk blocks.\nWhen a certain block is used to store part of a file, no other file contents will be stored in the same blok, so even if the chunk is no occupuing the entire block space, the block is rendered unusable by other files.\nExample:\nWhen the filesystem is divided on 512b blocks, and we need to store 600b file, two blocks will be occupied. The first block will be fully utilized, while the second block will have only 88b utilized and the remaining (512-88)b will be unusable resulting in 'file-size-on-disk' being 1024b.\nThis is why Windows has different notations for 'file size' and 'size on disk'.\nNOTE:\nThere are different pros & cons that come with smaller/bigger FS block, so do a better research before playing with your filesystem.\n",
"Quick and reliable one-liner for Python3 using urllib:\nimport urllib\n\nurl = 'https://<your url here>'\n\nsize = urllib.request.urlopen(url).info().get('Content-Length', 0)\n\n.get(<dict key>, 0) gets the key from dict and if the key is absent returns 0 (or whatever the 2nd argument is)\n",
"you can use requests to pull this data\n\nFile_Name=requests.head(LINK).headers[\"X-File-Name\"]\n\n#And other useful info** like the size of the file from this dict (headers)\n#like \n\nFile_size=requests.head(LINK).headers[\"Content-Length\"]\n\n"
] |
[
39,
28,
12,
7,
6,
5,
3,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"python",
"urllib"
] |
stackoverflow_0000005909_python_urllib.txt
|
Q:
Plotly: Remove legend title using template
Even after passing 'title':None inside layout.legend in the template, the chart still shows a legend title, whereas it should change the default setting to no legend title.
If I manually pass it through with fig.update_layout(), it then removes the title.
Why is this happening and how do I change the default setting to no legend title?
Here's the code to recreate the graph (the manual passing in update_layout() is commented out):
import plotly.graph_objects as go
import plotly.io as pio
import plotly.express as px
import pandas as pd
pio.templates['my_theme'] = go.layout.Template({
'layout': {'annotationdefaults': {'arrowcolor': '#2a3f5f', 'arrowhead': 0, 'arrowwidth': 1},
'autotypenumbers': 'strict',
'coloraxis': {'colorbar': {'outlinewidth': 0, 'ticks': ''}},
'colorscale': {'diverging': [[0, '#8e0152'], [0.1, '#c51b7d'],
[0.2, '#de77ae'], [0.3, '#f1b6da'],
[0.4, '#fde0ef'], [0.5, '#f7f7f7'],
[0.6, '#e6f5d0'], [0.7, '#b8e186'],
[0.8, '#7fbc41'], [0.9, '#4d9221'], [1,
'#276419']],
'sequential': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'], [1.0,
'#f0f921']],
'sequentialminus': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']]},
'colorway': ["#db2b39","#3d405b","#2fbf71","#faa613","#00a6fb"],
'font': {'color': '#2a3f5f'},
'geo': {'bgcolor': 'white',
'lakecolor': 'white',
'landcolor': '#E5ECF6',
'showlakes': True,
'showland': True,
'subunitcolor': 'white'},
'hoverlabel': {'align': 'left'},
'hovermode': 'closest',
'legend': {'orientation': 'v',
'bordercolor': '#000000',
'borderwidth': 0.7,
'itemwidth': 30,
'x': 0.01,
'y': 1.075,
'title': None,
'bgcolor':'#F6F5F4'},
'mapbox': {'style': 'light'},
'paper_bgcolor': 'white',
'plot_bgcolor': 'white',
'polar': {'angularaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'bgcolor': '#E5ECF6',
'radialaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}},
'scene': {'xaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'gridwidth': 2,
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white'},
'yaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'gridwidth': 2,
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white'},
'zaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'gridwidth': 2,
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white'}},
'separators':'.',
'shapedefaults': {'line': {'color': '#2a3f5f'}},
'ternary': {'aaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'baxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'bgcolor': '#E5ECF6',
'caxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}},
'title': {'x': 0.5,
'font_size':30},
'xaxis': {'automargin': True,
'gridcolor': '#eeeeee',
'linecolor': 'white',
'ticks': '',
'title': {'standoff': 15},
'zerolinecolor': 'white',
'zerolinewidth': 2},
'yaxis': {'automargin': True,
'gridcolor': '#eeeeee',
'linecolor': 'white',
'ticks': '',
'title': {'standoff': 15},
'zerolinecolor': 'white',
'zerolinewidth': 2}}
})
pio.templates.default = 'my_theme'
df = pd.DataFrame({'date': {27: '2020-01-28',
28: '2020-01-29',
29: '2020-01-30',
30: '2020-01-31',
31: '2020-02-01'},
'new_cases': {27: 2651.0, 28: 589.0, 29: 2068.0, 30: 1692.0, 31: 2111.0},
'new_cases_smoothed': {27: 717.286,
28: 801.429,
29: 1082.857,
30: 1283.714,
31: 1515.0}})
fig = px.line(df, x='date', y=['new_cases','new_cases_smoothed'],title='New cases',
color_discrete_sequence = ['#DB2B39','#0D0628'])
fig.update_traces(hovertemplate=None)
fig.update_layout(hovermode='x unified')#, legend=dict(title=None))
fig.show()
A:
I was certain that the following would do the trick:
'title': {'text': None}
But to my surprise, the text 'variable' still pops up. An empty string '' doesn't work, and neither does 'title': {'text': False}.
And I find this very interesting, since you're able to edit all other attributes of the legend title except the title text itself. Like color, for example, with:
'title': {'font': {'color':'blue'}}
And this opens up for a sub-optimal solution with:
'title': {'font': {'color': 'rgba(0,0,0,0)'}}
Which gives you:
But this arguably looks a bit weird since you've still got the extra space for the text.
So this seems to be a bug of some kind.
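For reference, a minimal sketch of that transparent-font workaround as a standalone template (the template name is arbitrary):
import plotly.graph_objects as go
import plotly.io as pio

# Hide the legend title text by making its font fully transparent.
# Note: the empty space the title occupies is still reserved.
pio.templates['hide_legend_title'] = go.layout.Template({
    'layout': {'legend': {'title': {'font': {'color': 'rgba(0,0,0,0)'}}}}
})
pio.templates.default = 'hide_legend_title'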
A:
The following worked for me with plotly.graph_objects, go.Figure and plotly==5.5.0
fig.update_layout(
title="Performance Results",
legend_title="",
...,
)
A:
I am using "plotly-express" and version is "4.14.3".
Here's what has worked for me to actually remove the title:
from plotly import express as px
# make your plot
fig = px.scatter(...)
# update the legend's title by setting it to none
fig.update_layout(legend={'title_text':''})
## fig.update_layout({'legend_title_text': ''}) worked too.
# display it
fig.show()
A:
This works
fig.update_layout(title_text='ALPACA Queries',
title_x=0.5, showlegend=True,
legend_title=None)
|
Plotly: Remove legend title using template
|
Even after passing 'title':None inside layout.legend in the template, the chart still shows a legend title, whereas it should change the default setting to no legend title.
If I manually pass it through with fig.update_layout(), it then removes the title.
Why is this happening and how do I change the default setting to no legend title?
Here's the code to recreate the graph (the manual passing in update_layout() is commented out):
import plotly.graph_objects as go
import plotly.io as pio
import plotly.express as px
import pandas as pd
pio.templates['my_theme'] = go.layout.Template({
'layout': {'annotationdefaults': {'arrowcolor': '#2a3f5f', 'arrowhead': 0, 'arrowwidth': 1},
'autotypenumbers': 'strict',
'coloraxis': {'colorbar': {'outlinewidth': 0, 'ticks': ''}},
'colorscale': {'diverging': [[0, '#8e0152'], [0.1, '#c51b7d'],
[0.2, '#de77ae'], [0.3, '#f1b6da'],
[0.4, '#fde0ef'], [0.5, '#f7f7f7'],
[0.6, '#e6f5d0'], [0.7, '#b8e186'],
[0.8, '#7fbc41'], [0.9, '#4d9221'], [1,
'#276419']],
'sequential': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'], [1.0,
'#f0f921']],
'sequentialminus': [[0.0, '#0d0887'],
[0.1111111111111111, '#46039f'],
[0.2222222222222222, '#7201a8'],
[0.3333333333333333, '#9c179e'],
[0.4444444444444444, '#bd3786'],
[0.5555555555555556, '#d8576b'],
[0.6666666666666666, '#ed7953'],
[0.7777777777777778, '#fb9f3a'],
[0.8888888888888888, '#fdca26'],
[1.0, '#f0f921']]},
'colorway': ["#db2b39","#3d405b","#2fbf71","#faa613","#00a6fb"],
'font': {'color': '#2a3f5f'},
'geo': {'bgcolor': 'white',
'lakecolor': 'white',
'landcolor': '#E5ECF6',
'showlakes': True,
'showland': True,
'subunitcolor': 'white'},
'hoverlabel': {'align': 'left'},
'hovermode': 'closest',
'legend': {'orientation': 'v',
'bordercolor': '#000000',
'borderwidth': 0.7,
'itemwidth': 30,
'x': 0.01,
'y': 1.075,
'title': None,
'bgcolor':'#F6F5F4'},
'mapbox': {'style': 'light'},
'paper_bgcolor': 'white',
'plot_bgcolor': 'white',
'polar': {'angularaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'bgcolor': '#E5ECF6',
'radialaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}},
'scene': {'xaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'gridwidth': 2,
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white'},
'yaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'gridwidth': 2,
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white'},
'zaxis': {'backgroundcolor': '#E5ECF6',
'gridcolor': 'white',
'gridwidth': 2,
'linecolor': 'white',
'showbackground': True,
'ticks': '',
'zerolinecolor': 'white'}},
'separators':'.',
'shapedefaults': {'line': {'color': '#2a3f5f'}},
'ternary': {'aaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'baxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''},
'bgcolor': '#E5ECF6',
'caxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}},
'title': {'x': 0.5,
'font_size':30},
'xaxis': {'automargin': True,
'gridcolor': '#eeeeee',
'linecolor': 'white',
'ticks': '',
'title': {'standoff': 15},
'zerolinecolor': 'white',
'zerolinewidth': 2},
'yaxis': {'automargin': True,
'gridcolor': '#eeeeee',
'linecolor': 'white',
'ticks': '',
'title': {'standoff': 15},
'zerolinecolor': 'white',
'zerolinewidth': 2}}
})
pio.templates.default = 'my_theme'
df = pd.DataFrame({'date': {27: '2020-01-28',
28: '2020-01-29',
29: '2020-01-30',
30: '2020-01-31',
31: '2020-02-01'},
'new_cases': {27: 2651.0, 28: 589.0, 29: 2068.0, 30: 1692.0, 31: 2111.0},
'new_cases_smoothed': {27: 717.286,
28: 801.429,
29: 1082.857,
30: 1283.714,
31: 1515.0}})
fig = px.line(df, x='date', y=['new_cases','new_cases_smoothed'],title='New cases',
color_discrete_sequence = ['#DB2B39','#0D0628'])
fig.update_traces(hovertemplate=None)
fig.update_layout(hovermode='x unified')#, legend=dict(title=None))
fig.show()
|
[
"I was certain that the following would do the trick:\n'title': {'text': None}\n\nBut to my surprise, the text 'variable' still pops up. An empty string '' doesn't work, and neither does 'title': {'text': False}.\nAnd I find this very interesting, since you're able to edit all other attributes of the legend title except the title text itself. Like color, for example, with:\n'title': {'font': {'color':'blue'}}\n\n\nAnd this opens up for a sub-optimal solution with:\n'title': {'font': {'color':''rgba(0,0,0,0'}}\n\nWhich gives you:\n\nBut this arguably looks a bit weird since you've still got the extra space for the text.\nSo this seems to be a bug of some kind.\n",
"The following worked for me with plotly.graph_objects, go.Figure and plotly==5.5.0\nfig.update_layout(\n title=\"Performance Results\",\n legend_title=\"\",\n ...,\n)\n\n",
"I am using \"plotly-express\" and version is \"4.14.3\".\nHere's what has worked for me to actually remove the title:\nfrom plotly import express as px\n\n# make your plot\nfig = px.scatter(...)\n\n# udpate the legend's title by setting it to none\nfig.update_layout(legend={'title_text':''})\n## fig.update_layout({'legend_title_text': ''}) worked too.\n\n# display it\nfig.show()\n\n",
"This works\nfig.update_layout(title_text='ALPACA Queries',\n title_x=0.5, showlegend=True,\n legend_title=None)\n\n"
] |
[
2,
2,
0,
0
] |
[] |
[] |
[
"data_visualization",
"plotly",
"plotly_python",
"python"
] |
stackoverflow_0067622972_data_visualization_plotly_plotly_python_python.txt
|
Q:
Element-wise multiplication of matrices in Tensorflow : how to avoid for loop
I want to do the following multiplication in tensorflow (TF 2.10), but I'm not sure how to.
I have an image tensor a, which is of shape 224x224x3 and a tensor b, which is of shape 224x224xf. I want to multiply (element-wise) a by each 2D matrix of b sliced by f to get a matrix c of shape 224x224xf.
So for example, the 1st multiplication would be done as follows:
tf.reduce_sum(a * b[:,:,0][:,:,None],axis=-1)
(broadcasting + summation, result is shape 224x224)
and so on until the fth multiplication. Result would be the aggregation of f matrices of shape 224x224 in c matrix of shape 224x224xf.
I would greatly appreciate help on how to do this using tensorflow functionality.
EDIT: I realize that what I want to do is equivalent to a Conv2D operation with kernel_size=1 and filters=f. Maybe it can help.
A:
You could multiply each channel of a with b and then sum:
X = a[:,:,0:1] * b + a[:,:,1:2] * b + a[:,:,2:3] * b
The shape of X is (224, 224, f) and it will give the same results as your multiplications:
(X[:, :, 0] == tf.reduce_sum(a * b[:, :, 0][:, :, None], axis=-1)).numpy().all()
Output:
True
The following gives slightly different results, I guess because of floating point rounding:
tf.reduce_sum(a, axis=-1, keepdims=True) * b
A:
You can expand the two tensors in next-to-last and last dimension, respectively, then take advantage of broadcasting.
tf.reduce_sum(tf.expand_dims(a, axis=-2) * tf.expand_dims(b[..., :f+1], axis=-1), axis=-1)
Proof that this produces correct result
a = tf.random.uniform(shape=(224,224,3))
b = tf.random.uniform(shape=(224,224,10))
f = 4
ref = None
for i in range(f+1):
if ref is None:
ref = tf.reduce_sum(a * b[...,i][...,None], axis=-1)[...,None]
else:
ref = tf.concat([ref, tf.reduce_sum(a * b[...,i][...,None], axis=-1)[...,None]], axis=-1)
tf.reduce_all(tf.reduce_sum(tf.expand_dims(a, axis=-2) * tf.expand_dims(b[..., :f+1], axis=-1), axis=-1) == ref)
<tf.Tensor: shape=(), dtype=bool, numpy=True>
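As an aside (a sketch, not part of the answer above): tf.einsum can express the same contraction in one call, reusing a, b, f and ref from the snippet above.
# c[i,j,f] = sum over channel k of a[i,j,k] * b[i,j,f]
c = tf.einsum('ijk,ijf->ijf', a, b[..., :f+1])
print(tf.reduce_all(tf.abs(c - ref) < 1e-5))  # True, up to floating point rounding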
|
Element-wise multiplication of matrices in Tensorflow : how to avoid for loop
|
I want to do the following multiplication in tensorflow (TF 2.10), but I'm not sure how to.
I have an image tensor a, which is of shape 224x224x3 and a tensor b, which is of shape 224x224xf. I want to multiply (element-wise) a by each 2D matrix of b sliced by f to get a matrix c of shape 224x224xf.
So for example, the 1st multiplication would be done as follows:
tf.reduce_sum(a * b[:,:,0][:,:,None],axis=-1)
(broadcasting + summation, result is shape 224x224)
and so on until the fth multiplication. Result would be the aggregation of f matrices of shape 224x224 in c matrix of shape 224x224xf.
I would greatly appreciate help on how to do this using tensorflow functionality.
EDIT: I realize that what I want to do is equivalent to a Conv2D operation with kernel_size=1 and filters=f. Maybe it can help.
|
[
"You could multiply each channel of a with b and then sum:\nX = a[:,:,0:1] * b + a[:,:,1:2] * b + a[:,:,2:3] * b\n\nThe shape of X is (224, 224, f) and it will give the same results as your multiplications:\n(X[:, :, 0] == tf.reduce_sum(a * b[:, :, 0][:, :, None], axis=-1)).numpy().all()\n\nOutput:\nTrue\n\nThe following gives slightly different results, I guess because of floating point rounding:\ntf.reduce_sum(a, axis=-1, keepdims=True) * b\n\n",
"You can expand the two tensors in next-to-last and last dimension, respectively, then take advantage of broadcasting.\ntf.reduce_sum(tf.expand_dims(a, axis=-2) * tf.expand_dims(b[..., :f+1], axis=-1), axis=-1)\n\nProof that this produces correct result\na = tf.random.uniform(shape=(224,224,3))\nb = tf.random.uniform(shape=(224,224,10))\nf = 4\nref = None\nfor i in range(f+1):\n if ref is None:\n ref = tf.reduce_sum(a * b[...,i][...,None], axis=-1)[...,None]\n else:\n ref = tf.concat([ref, tf.reduce_sum(a * b[...,i][...,None], axis=-1)[...,None]], axis=-1)\ntf.reduce_all(tf.reduce_sum(tf.expand_dims(a, axis=-2) * tf.expand_dims(b[..., :f+1], axis=-1), axis=-1) == ref)\n\n<tf.Tensor: shape=(), dtype=bool, numpy=True>\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"matrix_multiplication",
"python",
"tensorflow"
] |
stackoverflow_0074592109_matrix_multiplication_python_tensorflow.txt
|
Q:
Filling NaN on conditions
I have the following input data:
df = pd.DataFrame({"ID" : [1, 1, 1, 2, 2, 2, 2],
"length" : [0.7, 0.7, 0.7, 0.8, 0.6, 0.6, 0.7],
"height" : [7, 9, np.nan, 4, 8, np.nan, 5]})
df
ID length height
0 1 0.7 7
1 1 0.7 9
2 1 0.7 np.nan
3 2 0.8 4
4 2 0.6 8
5 2 0.6 np.nan
6 2 0.7 5
I want to be able to fill the NaN if a group of "ID" all have the same "length", fill with the maximum "height" in that group of "ID", else fill with the "height" that correspond to the maximum length in that group.
Required Output:
ID length height
0 1 0.7 7
1 1 0.7 9
2 1 0.7 9
3 2 0.8 4
4 2 0.6 8
5 2 0.6 4
6 2 0.7 5
Thanks.
A:
You could try sort_values, then use groupby and take the last value in each group:
#last will find the last non-NaN value
df.height.fillna(df.sort_values(['length','height']).groupby(['ID'])['height'].transform('last'),inplace=True)
df
Out[296]:
ID length height
0 1 0.7 7.0
1 1 0.7 9.0
2 1 0.7 9.0
3 2 0.8 4.0
4 2 0.6 8.0
5 2 0.6 4.0
6 2 0.7 5.0
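To make the one-liner easier to follow, a quick sketch of the intermediate it relies on (run on the original df, before the fillna):
# After sorting by length (then height), the last non-NaN height in each
# ID group belongs to the row with the maximum length; transform('last')
# broadcasts it back to every row of the group.
print(df.sort_values(['length', 'height']).groupby('ID')['height'].transform('last'))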
|
Filling NaN on conditions
|
I have the following input data:
df = pd.DataFrame({"ID" : [1, 1, 1, 2, 2, 2, 2],
"length" : [0.7, 0.7, 0.7, 0.8, 0.6, 0.6, 0.7],
"height" : [7, 9, np.nan, 4, 8, np.nan, 5]})
df
ID length height
0 1 0.7 7
1 1 0.7 9
2 1 0.7 np.nan
3 2 0.8 4
4 2 0.6 8
5 2 0.6 np.nan
6 2 0.7 5
I want to be able to fill the NaN if a group of "ID" all have the same "length", fill with the maximum "height" in that group of "ID", else fill with the "height" that correspond to the maximum length in that group.
Required Output:
ID length height
0 1 0.7 7
1 1 0.7 9
2 1 0.7 9
3 2 0.8 4
4 2 0.6 8
5 2 0.6 4
6 2 0.7 5
Thanks.
|
[
"You could try with sort_value then we use groupby find the last\n#last will find the last not NaN value\n\ndf.height.fillna(df.sort_values(['length','height']).groupby(['ID'])['height'].transform('last'),inplace=True)\ndf\nOut[296]: \n ID length height\n0 1 0.7 7.0\n1 1 0.7 9.0\n2 1 0.7 9.0\n3 2 0.8 4.0\n4 2 0.6 8.0\n5 2 0.6 4.0\n6 2 0.7 5.0\n\n"
] |
[
2
] |
[] |
[] |
[
"numpy",
"pandas",
"python"
] |
stackoverflow_0074595035_numpy_pandas_python.txt
|
Q:
Why does python replace every object of a column, when only referring to one, if all lines are identical?
When trying to change one value in a matrix, python will change all items of that column with the desired value, despite the fact I am only trying to change one. But this only happens when all rows are identical.
Example:
def print_matrix(matrix: list[list], dlm: str) -> None:
for row in matrix:
for col in row:
print(col, end = dlm)
print()
one_row = list(range(4))
test_matrix = []
for i in range(5):
test_matrix.append(one_row)
test_matrix[0][0] = 5
sec_matrix =[
[0,1,2,3],
[0,1,2,3],
[0,1,2,3],
[0,1,2,4]
]
sec_matrix[0][0]=5
print_matrix(test_matrix, ' ')
print()
print_matrix(sec_matrix, ' ')
In the first matrix every 0 gets replaced with a 5, despite only referencing the first item of the first list.
In the second one it works the way I want it to, because the last list is slightly different.
Why is there a difference in the way test_matrix and sec_matrix are treated? Is this a bug, or intended?
Does python just think they are the same list because they look the same?
Or are they even the same to increase performance? Either way I don't think it should happen.
I tried to update a matrix item on certain coordinates.
I expected only the desired item to be altered, instead every single one of that column got changed. Problem is fixed by not having identical rows.
A:
The reason is that when you write test_matrix.append(one_row) you are appending the same [0,1,2,3] list 5 times, i.e. the list will look like [[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]], but every element is a reference to the same underlying list. When you then modify that single list, the change is visible through all references to it. For debugging purposes, you can check it:
print(id(test_matrix[0]))
print(id(test_matrix[1]))
So you will see that all the ids are the same. If you want independent rows instead, you can do it like below, where test_matrix = [ list(range(4)) for n in range(5) ] creates a fresh list on each iteration:
def print_matrix(matrix, dlm):
for row in matrix:
for col in row:
print(col, end = dlm)
print()
test_matrix = []
test_matrix = [ list(range(4)) for n in range(5) ] # re-generate and appending
test_matrix[0][0] = 7
sec_matrix =[
[0,1,2,3],
[0,1,2,3],
[0,1,2,3],
[0,1,2,4]
]
sec_matrix[0][0]=5
print_matrix(test_matrix, ' ')
print()
print_matrix(sec_matrix, ' ')
Output:
7 1 2 3
0 1 2 3
0 1 2 3
0 1 2 3
0 1 2 3
5 1 2 3
0 1 2 3
0 1 2 3
0 1 2 4
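If you prefer to keep the explicit loop, a sketch of an equivalent fix is to append a copy of the row instead of the row itself:
one_row = list(range(4))
test_matrix = []
for i in range(5):
    test_matrix.append(one_row.copy())  # or list(one_row), or one_row[:]

test_matrix[0][0] = 5  # now only the first row changes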
|
Why does python replace every object of a column, when only referring to one, if all lines are identical?
|
When trying to change one value in a matrix, python will change all items of that column with the desired value, despite the fact I am only trying to change one. But this only happens when all rows are identical.
Example:
def print_matrix(matrix: list[list], dlm: str) -> None:
for row in matrix:
for col in row:
print(col, end = dlm)
print()
one_row = list(range(4))
test_matrix = []
for i in range(5):
test_matrix.append(one_row)
test_matrix[0][0] = 5
sec_matrix =[
[0,1,2,3],
[0,1,2,3],
[0,1,2,3],
[0,1,2,4]
]
sec_matrix[0][0]=5
print_matrix(test_matrix, ' ')
print()
print_matrix(sec_matrix, ' ')
In the first matrix every 0 gets replaced with a 5, despite only referencing the first item of the first list.
In the second one it works the way I want it to, because the last list is slightly different.
Why is there a difference in the way test_matrix and sec_matrix are treated? Is this a bug, or intended?
Does python just think they are the same list because they look the same?
Or are they even the same to increase performance? Either way I don't think it should happen.
I tried to update a matrix item on certain coordinates.
I expected only the desired item to be altered, instead every single one of that column got changed. Problem is fixed by not having identical rows.
|
[
"The reason is when you write test_matrix.append(one_row) you are appending actually [0,1,2,3] 5 times to test_matrix, essentially, i.e the list will look like [[0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3], [0, 1, 2, 3]]. Here each list element is a list with [0,1,2,3] references to the same [0,1,2,3]. When you then modify this single [0,1,2,3] it is visible via all references to it. For debugging purposes, you can check it,\nprint(id(test_matrix[0]))\nprint(id(test_matrix[1]))\n\nSo you will see all are the same id, if you want to do it then you can do it like below- where test_matrix = [ list(range(4)) for n in range(5) ] will re-generate value each time\ndef print_matrix(matrix, dlm):\n for row in matrix:\n for col in row:\n print(col, end = dlm)\n print()\n\n \ntest_matrix = []\ntest_matrix = [ list(range(4)) for n in range(5) ] # re-generate and appending\n\ntest_matrix[0][0] = 7\nsec_matrix =[\n [0,1,2,3],\n [0,1,2,3],\n [0,1,2,3],\n [0,1,2,4]\n]\n\nsec_matrix[0][0]=5\nprint_matrix(test_matrix, ' ')\nprint()\nprint_matrix(sec_matrix, ' ')\n\nOutput:\n7 1 2 3 \n0 1 2 3 \n0 1 2 3 \n0 1 2 3 \n0 1 2 3 \n\n5 1 2 3 \n0 1 2 3 \n0 1 2 3 \n0 1 2 4 \n\n"
] |
[
0
] |
[] |
[] |
[
"list",
"matrix",
"python"
] |
stackoverflow_0074595031_list_matrix_python.txt
|
Q:
Python read dict values from lists?
In python I have:
my_dict = dict({'98:1E:19:7E:8F:30': ['SAGEMCOM BROADBAND SAS', '22'], '98:1E:19:7E:8F:32': ['SAGEMCOM BROADBAND SAS1']})
and would like to generate a list of all values, so I tried:
[[sub_val for sub_val in val] for val in my_dict.values()]
But this gives me:
[['SAGEMCOM BROADBAND SAS', '22'], ['SAGEMCOM BROADBAND SAS1']]
while I wanted:
['SAGEMCOM BROADBAND SAS', '22', 'SAGEMCOM BROADBAND SAS1']
What's wrong with what I've done?
A:
You can use an additional for clause in the list comprehension to iterate through the sub-lists:
[value for values in my_dict.values() for value in values]
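An equivalent flattening (mentioned here as an alternative, not what the answer used) is itertools.chain.from_iterable:
from itertools import chain

my_dict = {'98:1E:19:7E:8F:30': ['SAGEMCOM BROADBAND SAS', '22'],
           '98:1E:19:7E:8F:32': ['SAGEMCOM BROADBAND SAS1']}
flat = list(chain.from_iterable(my_dict.values()))
print(flat)  # ['SAGEMCOM BROADBAND SAS', '22', 'SAGEMCOM BROADBAND SAS1']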
|
Python read dict values from lists?
|
In python I have:
my_dict = dict({'98:1E:19:7E:8F:30': ['SAGEMCOM BROADBAND SAS', '22'], '98:1E:19:7E:8F:32': ['SAGEMCOM BROADBAND SAS1']})
and would like to generate a list of all values, so I tried:
[[sub_val for sub_val in val] for val in my_dict.values()]
But this gives me:
[['SAGEMCOM BROADBAND SAS', '22'], ['SAGEMCOM BROADBAND SAS1']]
while I wanted:
['SAGEMCOM BROADBAND SAS', '22', 'SAGEMCOM BROADBAND SAS1']
What's wrong with what I've done?
|
[
"You can use an additional for clause in the list comprehension to iterate through the sub-lists:\n[value for values in my_dict.values() for value in values]\n\n"
] |
[
1
] |
[] |
[] |
[
"list",
"python",
"python_3.x"
] |
stackoverflow_0074595083_list_python_python_3.x.txt
|
Q:
Calculating Pearson correlation and significance in Python
I am looking for a function that takes as input two lists, and returns the Pearson correlation, and the significance of the correlation.
A:
You can have a look at scipy.stats:
from pydoc import help
from scipy.stats.stats import pearsonr
help(pearsonr)
>>>
Help on function pearsonr in module scipy.stats.stats:
pearsonr(x, y)
Calculates a Pearson correlation coefficient and the p-value for testing
non-correlation.
The Pearson correlation coefficient measures the linear relationship
between two datasets. Strictly speaking, Pearson's correlation requires
that each dataset be normally distributed. Like other correlation
coefficients, this one varies between -1 and +1 with 0 implying no
correlation. Correlations of -1 or +1 imply an exact linear
relationship. Positive correlations imply that as x increases, so does
y. Negative correlations imply that as x increases, y decreases.
The p-value roughly indicates the probability of an uncorrelated system
producing datasets that have a Pearson correlation at least as extreme
as the one computed from these datasets. The p-values are not entirely
reliable but are probably reasonable for datasets larger than 500 or so.
Parameters
----------
x : 1D array
y : 1D array the same length as x
Returns
-------
(Pearson's correlation coefficient,
2-tailed p-value)
References
----------
http://www.statsoft.com/textbook/glosp.html#Pearson%20Correlation
A:
The Pearson correlation can be calculated with numpy's corrcoef.
import numpy
numpy.corrcoef(list1, list2)[0, 1]
A:
An alternative can be a native scipy function from linregress which calculates:
slope : slope of the regression line
intercept : intercept of the regression line
r-value : correlation coefficient
p-value : two-sided p-value for a hypothesis test whose null hypothesis is that the slope is zero
stderr : Standard error of the estimate
And here is an example:
a = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3]
b = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15]
from scipy.stats import linregress
linregress(a, b)
will return you:
LinregressResult(slope=0.20833333333333337, intercept=13.375, rvalue=0.14499815458068521, pvalue=0.68940144811669501, stderr=0.50261704627083648)
A:
If you don't feel like installing scipy, I've used this quick hack, slightly modified from Programming Collective Intelligence:
def pearsonr(x, y):
# Assume len(x) == len(y)
n = len(x)
sum_x = float(sum(x))
sum_y = float(sum(y))
sum_x_sq = sum(xi*xi for xi in x)
sum_y_sq = sum(yi*yi for yi in y)
psum = sum(xi*yi for xi, yi in zip(x, y))
num = psum - (sum_x * sum_y/n)
den = pow((sum_x_sq - pow(sum_x, 2) / n) * (sum_y_sq - pow(sum_y, 2) / n), 0.5)
if den == 0: return 0
return num / den
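A quick usage sketch, on the same small example used in the next answer:
print(pearsonr([1, 2, 3], [1, 5, 7]))  # ~0.98198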
A:
The following code is a straight-up interpretation of the definition:
import math
def average(x):
assert len(x) > 0
return float(sum(x)) / len(x)
def pearson_def(x, y):
assert len(x) == len(y)
n = len(x)
assert n > 0
avg_x = average(x)
avg_y = average(y)
diffprod = 0
xdiff2 = 0
ydiff2 = 0
for idx in range(n):
xdiff = x[idx] - avg_x
ydiff = y[idx] - avg_y
diffprod += xdiff * ydiff
xdiff2 += xdiff * xdiff
ydiff2 += ydiff * ydiff
return diffprod / math.sqrt(xdiff2 * ydiff2)
Test:
print pearson_def([1,2,3], [1,5,7])
returns
0.981980506062
This agrees with Excel, this calculator, SciPy (also NumPy), which return 0.981980506 and 0.9819805060619657, and 0.98198050606196574, respectively.
R:
> cor( c(1,2,3), c(1,5,7))
[1] 0.9819805
EDIT: Fixed a bug pointed out by a commenter.
A:
You can do this with pandas.DataFrame.corr, too:
import pandas as pd
a = [[1, 2, 3],
[5, 6, 9],
[5, 6, 11],
[5, 6, 13],
[5, 3, 13]]
df = pd.DataFrame(data=a)
df.corr()
This gives
0 1 2
0 1.000000 0.745601 0.916579
1 0.745601 1.000000 0.544248
2 0.916579 0.544248 1.000000
A:
Rather than rely on numpy/scipy, I think my answer should be the easiest to code and understand the steps in calculating the Pearson Correlation Coefficient (PCC) .
import math
# calculates the mean
def mean(x):
sum = 0.0
for i in x:
sum += i
return sum / len(x)
# calculates the sample standard deviation
def sampleStandardDeviation(x):
sumv = 0.0
for i in x:
sumv += (i - mean(x))**2
return math.sqrt(sumv/(len(x)-1))
# calculates the PCC using both the 2 functions above
def pearson(x,y):
scorex = []
scorey = []
for i in x:
scorex.append((i - mean(x))/sampleStandardDeviation(x))
for j in y:
scorey.append((j - mean(y))/sampleStandardDeviation(y))
# multiplies both lists together into 1 list (hence zip) and sums the whole list
return (sum([i*j for i,j in zip(scorex,scorey)]))/(len(x)-1)
The significance of PCC is basically to show you how strongly correlated the two variables/lists are.
It is important to note that the PCC value ranges from -1 to 1.
A value between 0 and 1 denotes a positive correlation.
A value of 0 means no correlation whatsoever.
A value between -1 and 0 denotes a negative correlation.
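A quick usage sketch, reusing the sample data from the linregress answer earlier:
a = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3]
b = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15]
print(pearson(a, b))  # ~0.14499..., matching the rvalue from linregress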
A:
Pearson coefficient calculation using pandas in python:
I would suggest trying this approach since your data contains lists. It will be easy to interact with your data and manipulate it from the console, since you can visualise your data structure and update it as you wish. You can also export the data set, save it, and add new data outside the python console for later analysis. This code is simpler and contains fewer lines of code. I am assuming you need a few quick lines of code to screen your data for further analysis.
Example:
data = {'list 1':[2,4,6,8],'list 2':[4,16,36,64]}
import pandas as pd #To Convert your lists to pandas data frames convert your lists into pandas dataframes
df = pd.DataFrame(data, columns = ['list 1','list 2'])
from scipy import stats # For in-built method to get PCC
pearson_coef, p_value = stats.pearsonr(df["list 1"], df["list 2"]) #define the columns to perform calculations on
print("Pearson Correlation Coefficient: ", pearson_coef, "and a P-value of:", p_value) # Results
However, you did not post your data for me to see the size of the data set or the transformations that might be needed before the analysis.
A:
Hmm, many of these responses have long and hard to read code...
I'd suggest using numpy with its nifty features when working with arrays:
import numpy as np
def pcc(X, Y):
''' Compute Pearson Correlation Coefficient. '''
# Normalise X and Y
X -= X.mean(0)
Y -= Y.mean(0)
# Standardise X and Y
X /= X.std(0)
Y /= Y.std(0)
# Compute mean product
return np.mean(X*Y)
# Using it on a random example
from random import random
X = np.array([random() for x in xrange(100)])
Y = np.array([random() for x in xrange(100)])
pcc(X, Y)
A:
Here's a variant on mkh's answer that runs much faster than it, and scipy.stats.pearsonr, using numba.
import numba
@numba.jit
def corr(data1, data2):
M = data1.size
sum1 = 0.
sum2 = 0.
for i in range(M):
sum1 += data1[i]
sum2 += data2[i]
mean1 = sum1 / M
mean2 = sum2 / M
var_sum1 = 0.
var_sum2 = 0.
cross_sum = 0.
for i in range(M):
var_sum1 += (data1[i] - mean1) ** 2
var_sum2 += (data2[i] - mean2) ** 2
cross_sum += (data1[i] * data2[i])
std1 = (var_sum1 / M) ** .5
std2 = (var_sum2 / M) ** .5
cross_mean = cross_sum / M
return (cross_mean - mean1 * mean2) / (std1 * std2)
A:
This is a implementation of Pearson Correlation function using numpy:
def corr(data1, data2):
"data1 & data2 should be numpy arrays."
mean1 = data1.mean()
mean2 = data2.mean()
std1 = data1.std()
std2 = data2.std()
# corr = ((data1-mean1)*(data2-mean2)).mean()/(std1*std2)
corr = ((data1*data2).mean()-mean1*mean2)/(std1*std2)
return corr
A:
Here is an implementation of Pearson correlation based on sparse vectors. The vectors here are expressed as lists of (index, value) tuples. The two sparse vectors can be of different lengths, but the overall vector size has to be the same. This is useful for text mining applications where the vector size is extremely large due to most features being bag of words and hence calculations are usually performed using sparse vectors.
def get_pearson_corelation(self, first_feature_vector=[], second_feature_vector=[], length_of_featureset=0):
indexed_feature_dict = {}
if first_feature_vector == [] or second_feature_vector == [] or length_of_featureset == 0:
raise ValueError("Empty feature vectors or zero length of featureset in get_pearson_corelation")
sum_a = sum(value for index, value in first_feature_vector)
sum_b = sum(value for index, value in second_feature_vector)
avg_a = float(sum_a) / length_of_featureset
avg_b = float(sum_b) / length_of_featureset
mean_sq_error_a = sqrt((sum((value - avg_a) ** 2 for index, value in first_feature_vector)) + ((
length_of_featureset - len(first_feature_vector)) * ((0 - avg_a) ** 2)))
mean_sq_error_b = sqrt((sum((value - avg_b) ** 2 for index, value in second_feature_vector)) + ((
length_of_featureset - len(second_feature_vector)) * ((0 - avg_b) ** 2)))
covariance_a_b = 0
#calculate covariance for the sparse vectors
for tuple in first_feature_vector:
if len(tuple) != 2:
raise ValueError("Invalid feature frequency tuple in featureVector: %s") % (tuple,)
indexed_feature_dict[tuple[0]] = tuple[1]
count_of_features = 0
for tuple in second_feature_vector:
count_of_features += 1
if len(tuple) != 2:
raise ValueError("Invalid feature frequency tuple in featureVector: %s") % (tuple,)
if tuple[0] in indexed_feature_dict:
covariance_a_b += ((indexed_feature_dict[tuple[0]] - avg_a) * (tuple[1] - avg_b))
del (indexed_feature_dict[tuple[0]])
else:
covariance_a_b += (0 - avg_a) * (tuple[1] - avg_b)
for index in indexed_feature_dict:
count_of_features += 1
covariance_a_b += (indexed_feature_dict[index] - avg_a) * (0 - avg_b)
#adjust covariance with rest of vector with 0 value
covariance_a_b += (length_of_featureset - count_of_features) * -avg_a * -avg_b
if mean_sq_error_a == 0 or mean_sq_error_b == 0:
return -1
else:
return float(covariance_a_b) / (mean_sq_error_a * mean_sq_error_b)
Unit tests:
def test_get_get_pearson_corelation(self):
vector_a = [(1, 1), (2, 2), (3, 3)]
vector_b = [(1, 1), (2, 5), (3, 7)]
self.assertAlmostEquals(self.sim_calculator.get_pearson_corelation(vector_a, vector_b, 3), 0.981980506062, 3, None, None)
vector_a = [(1, 1), (2, 2), (3, 3)]
vector_b = [(1, 1), (2, 5), (3, 7), (4, 14)]
self.assertAlmostEquals(self.sim_calculator.get_pearson_corelation(vector_a, vector_b, 5), -0.0137089240555, 3, None, None)
A:
I have a very simple and easy to understand solution for this. For two arrays of equal length, Pearson coefficient can be easily computed as follows:
def manual_pearson(a, b):
    """
    Accepts two arrays of equal length, and computes correlation coefficient.
    Numerator is the sum of product of (a - a_avg) and (b - b_avg),
    while denominator is the product of a_std and b_std multiplied by
    length of array.
    """
    a_avg, b_avg = np.average(a), np.average(b)
    a_stdev, b_stdev = np.std(a), np.std(b)
    n = len(a)
    denominator = a_stdev * b_stdev * n
    numerator = np.sum(np.multiply(a-a_avg, b-b_avg))
    p_coef = numerator/denominator
    return p_coef
A:
Starting in Python 3.10, the Pearson’s correlation coefficient (statistics.correlation) is directly available in the standard library:
from statistics import correlation
# a = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3]
# b = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15]
correlation(a, b)
# 0.1449981545806852
A:
You may wonder how to interpret your probability in the context of looking for a correlation in a particular direction (negative or positive correlation). Here is a function I wrote to help with that. It might even be right!
It's based on info I gleaned from http://www.vassarstats.net/rsig.html and http://en.wikipedia.org/wiki/Student%27s_t_distribution, thanks to other answers posted here.
# Given (possibly random) variables, X and Y, and a correlation direction,
# returns:
# (r, p),
# where r is the Pearson correlation coefficient, and p is the probability
# that there is no correlation in the given direction.
#
# direction:
# if positive, p is the probability that there is no positive correlation in
# the population sampled by X and Y
# if negative, p is the probability that there is no negative correlation
# if 0, p is the probability that there is no correlation in either direction
def probabilityNotCorrelated(X, Y, direction=0):
x = len(X)
if x != len(Y):
raise ValueError("variables not same len: " + str(x) + ", and " + \
str(len(Y)))
if x < 6:
raise ValueError("must have at least 6 samples, but have " + str(x))
(corr, prb_2_tail) = stats.pearsonr(X, Y)
if not direction:
return (corr, prb_2_tail)
prb_1_tail = prb_2_tail / 2
if corr * direction > 0:
return (corr, prb_1_tail)
return (corr, 1 - prb_1_tail)
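A hedged usage sketch of the function above (it needs scipy's stats in scope):
from scipy import stats
a = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3]
b = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15]
r, p = probabilityNotCorrelated(a, b, direction=1)
print(r, p)  # r ~ 0.145; p ~ 0.345 is the one-tailed probability of no positive correlation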
A:
You can take a look at this article. This is a well-documented example for calculating correlation based on historical forex currency pairs data from multiple files using pandas library (for Python), and then generating a heatmap plot using seaborn library.
http://www.tradinggeeks.net/2015/08/calculating-correlation-in-python/
A:
Calculating Correlation:
Correlation - measures similarity of two different variables
Using pearson correlation
from scipy.stats import pearsonr
# final_data is the dataframe with n set of columns
pearson_correlation = final_data.corr(method='pearson')
pearson_correlation
# print correlation of n*n column
Using Spearman correlation
from scipy.stats import spearmanr
# final_data is the dataframe with n set of columns
spearman_correlation = final_data.corr(method='spearman')
spearman_correlation
# print correlation of n*n column
Using Kendall correlation
kendall_correlation=final_data.corr(method='kendall')
kendall_correlation
A:
def correlation_score(y_true, y_pred):
"""Scores the predictions according to the competition rules.
It is assumed that the predictions are not constant.
Returns the average of each sample's Pearson correlation coefficient"""
y2 = y_pred.copy()
y2 -= y2.mean(axis=0); y2 /= y2.std(axis=0)
y1 = y_true.copy();
y1 -= y1.mean(axis=0); y1 /= y1.std(axis=0)
c = (y1*y2).mean().mean()# Correlation for rescaled matrices is just matrix product and average
return c
|
Calculating Pearson correlation and significance in Python
|
I am looking for a function that takes as input two lists, and returns the Pearson correlation, and the significance of the correlation.
|
[
"You can have a look at scipy.stats:\nfrom pydoc import help\nfrom scipy.stats.stats import pearsonr\nhelp(pearsonr)\n\n>>>\nHelp on function pearsonr in module scipy.stats.stats:\n\npearsonr(x, y)\n Calculates a Pearson correlation coefficient and the p-value for testing\n non-correlation.\n\n The Pearson correlation coefficient measures the linear relationship\n between two datasets. Strictly speaking, Pearson's correlation requires\n that each dataset be normally distributed. Like other correlation\n coefficients, this one varies between -1 and +1 with 0 implying no\n correlation. Correlations of -1 or +1 imply an exact linear\n relationship. Positive correlations imply that as x increases, so does\n y. Negative correlations imply that as x increases, y decreases.\n\n The p-value roughly indicates the probability of an uncorrelated system\n producing datasets that have a Pearson correlation at least as extreme\n as the one computed from these datasets. The p-values are not entirely\n reliable but are probably reasonable for datasets larger than 500 or so.\n\n Parameters\n ----------\n x : 1D array\n y : 1D array the same length as x\n\n Returns\n -------\n (Pearson's correlation coefficient,\n 2-tailed p-value)\n\n References\n ----------\n http://www.statsoft.com/textbook/glosp.html#Pearson%20Correlation\n\n",
"The Pearson correlation can be calculated with numpy's corrcoef.\nimport numpy\nnumpy.corrcoef(list1, list2)[0, 1]\n\n",
"An alternative can be a native scipy function from linregress which calculates:\n\nslope : slope of the regression line\nintercept : intercept of the regression line\nr-value : correlation coefficient\np-value : two-sided p-value for a hypothesis test whose null hypothesis is that the slope is zero\nstderr : Standard error of the estimate\n\nAnd here is an example:\na = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3]\nb = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15]\nfrom scipy.stats import linregress\nlinregress(a, b)\n\nwill return you:\nLinregressResult(slope=0.20833333333333337, intercept=13.375, rvalue=0.14499815458068521, pvalue=0.68940144811669501, stderr=0.50261704627083648)\n\n",
"If you don't feel like installing scipy, I've used this quick hack, slightly modified from Programming Collective Intelligence:\ndef pearsonr(x, y):\n # Assume len(x) == len(y)\n n = len(x)\n sum_x = float(sum(x))\n sum_y = float(sum(y))\n sum_x_sq = sum(xi*xi for xi in x)\n sum_y_sq = sum(yi*yi for yi in y)\n psum = sum(xi*yi for xi, yi in zip(x, y))\n num = psum - (sum_x * sum_y/n)\n den = pow((sum_x_sq - pow(sum_x, 2) / n) * (sum_y_sq - pow(sum_y, 2) / n), 0.5)\n if den == 0: return 0\n return num / den\n\n",
"The following code is a straight-up interpretation of the definition:\nimport math\n\ndef average(x):\n assert len(x) > 0\n return float(sum(x)) / len(x)\n\ndef pearson_def(x, y):\n assert len(x) == len(y)\n n = len(x)\n assert n > 0\n avg_x = average(x)\n avg_y = average(y)\n diffprod = 0\n xdiff2 = 0\n ydiff2 = 0\n for idx in range(n):\n xdiff = x[idx] - avg_x\n ydiff = y[idx] - avg_y\n diffprod += xdiff * ydiff\n xdiff2 += xdiff * xdiff\n ydiff2 += ydiff * ydiff\n\n return diffprod / math.sqrt(xdiff2 * ydiff2)\n\nTest:\nprint pearson_def([1,2,3], [1,5,7])\n\nreturns\n0.981980506062\n\nThis agrees with Excel, this calculator, SciPy (also NumPy), which return 0.981980506 and 0.9819805060619657, and 0.98198050606196574, respectively.\nR:\n> cor( c(1,2,3), c(1,5,7))\n[1] 0.9819805\n\nEDIT: Fixed a bug pointed out by a commenter.\n",
"You can do this with pandas.DataFrame.corr, too:\nimport pandas as pd\na = [[1, 2, 3],\n [5, 6, 9],\n [5, 6, 11],\n [5, 6, 13],\n [5, 3, 13]]\ndf = pd.DataFrame(data=a)\ndf.corr()\n\nThis gives\n 0 1 2\n0 1.000000 0.745601 0.916579\n1 0.745601 1.000000 0.544248\n2 0.916579 0.544248 1.000000\n\n",
"Rather than rely on numpy/scipy, I think my answer should be the easiest to code and understand the steps in calculating the Pearson Correlation Coefficient (PCC) .\nimport math\n\n# calculates the mean\ndef mean(x):\n sum = 0.0\n for i in x:\n sum += i\n return sum / len(x) \n\n# calculates the sample standard deviation\ndef sampleStandardDeviation(x):\n sumv = 0.0\n for i in x:\n sumv += (i - mean(x))**2\n return math.sqrt(sumv/(len(x)-1))\n\n# calculates the PCC using both the 2 functions above\ndef pearson(x,y):\n scorex = []\n scorey = []\n\n for i in x: \n scorex.append((i - mean(x))/sampleStandardDeviation(x)) \n\n for j in y:\n scorey.append((j - mean(y))/sampleStandardDeviation(y))\n\n# multiplies both lists together into 1 list (hence zip) and sums the whole list \n return (sum([i*j for i,j in zip(scorex,scorey)]))/(len(x)-1)\n\nThe significance of PCC is basically to show you how strongly correlated the two variables/lists are. \nIt is important to note that the PCC value ranges from -1 to 1.\nA value between 0 to 1 denotes a positive correlation.\nValue of 0 = highest variation (no correlation whatsoever).\nA value between -1 to 0 denotes a negative correlation.\n",
"Pearson coefficient calculation using pandas in python: \nI would suggest trying this approach since your data contains lists. It will be easy to interact with your data and manipulate it from the console since you can visualise your data structure and update it as you wish. You can also export the data set and save it and add new data out of the python console for later analysis. This code is simpler and contains less lines of code. I am assuming you need a few quick lines of code to screen your data for further analysis \nExample:\ndata = {'list 1':[2,4,6,8],'list 2':[4,16,36,64]}\n\nimport pandas as pd #To Convert your lists to pandas data frames convert your lists into pandas dataframes\n\ndf = pd.DataFrame(data, columns = ['list 1','list 2'])\n\nfrom scipy import stats # For in-built method to get PCC\n\npearson_coef, p_value = stats.pearsonr(df[\"list 1\"], df[\"list 2\"]) #define the columns to perform calculations on\nprint(\"Pearson Correlation Coefficient: \", pearson_coef, \"and a P-value of:\", p_value) # Results \n\nHowever, you did not post your data for me to see the size of the data set or the transformations that might be needed before the analysis. \n",
"Hmm, many of these responses have long and hard to read code...\nI'd suggest using numpy with its nifty features when working with arrays:\nimport numpy as np\ndef pcc(X, Y):\n ''' Compute Pearson Correlation Coefficient. '''\n # Normalise X and Y\n X -= X.mean(0)\n Y -= Y.mean(0)\n # Standardise X and Y\n X /= X.std(0)\n Y /= Y.std(0)\n # Compute mean product\n return np.mean(X*Y)\n\n# Using it on a random example\nfrom random import random\nX = np.array([random() for x in xrange(100)])\nY = np.array([random() for x in xrange(100)])\npcc(X, Y)\n\n",
"Here's a variant on mkh's answer that runs much faster than it, and scipy.stats.pearsonr, using numba.\nimport numba\n\n@numba.jit\ndef corr(data1, data2):\n M = data1.size\n\n sum1 = 0.\n sum2 = 0.\n for i in range(M):\n sum1 += data1[i]\n sum2 += data2[i]\n mean1 = sum1 / M\n mean2 = sum2 / M\n\n var_sum1 = 0.\n var_sum2 = 0.\n cross_sum = 0.\n for i in range(M):\n var_sum1 += (data1[i] - mean1) ** 2\n var_sum2 += (data2[i] - mean2) ** 2\n cross_sum += (data1[i] * data2[i])\n\n std1 = (var_sum1 / M) ** .5\n std2 = (var_sum2 / M) ** .5\n cross_mean = cross_sum / M\n\n return (cross_mean - mean1 * mean2) / (std1 * std2)\n\n",
"This is a implementation of Pearson Correlation function using numpy:\n\n\ndef corr(data1, data2):\n \"data1 & data2 should be numpy arrays.\"\n mean1 = data1.mean() \n mean2 = data2.mean()\n std1 = data1.std()\n std2 = data2.std()\n\n# corr = ((data1-mean1)*(data2-mean2)).mean()/(std1*std2)\n corr = ((data1*data2).mean()-mean1*mean2)/(std1*std2)\n return corr\n\n\n",
"Here is an implementation for pearson correlation based on sparse vector. The vectors here are expressed as a list of tuples expressed as (index, value). The two sparse vectors can be of different length but over all vector size will have to be same. This is useful for text mining applications where the vector size is extremely large due to most features being bag of words and hence calculations are usually performed using sparse vectors. \ndef get_pearson_corelation(self, first_feature_vector=[], second_feature_vector=[], length_of_featureset=0):\n indexed_feature_dict = {}\n if first_feature_vector == [] or second_feature_vector == [] or length_of_featureset == 0:\n raise ValueError(\"Empty feature vectors or zero length of featureset in get_pearson_corelation\")\n\n sum_a = sum(value for index, value in first_feature_vector)\n sum_b = sum(value for index, value in second_feature_vector)\n\n avg_a = float(sum_a) / length_of_featureset\n avg_b = float(sum_b) / length_of_featureset\n\n mean_sq_error_a = sqrt((sum((value - avg_a) ** 2 for index, value in first_feature_vector)) + ((\n length_of_featureset - len(first_feature_vector)) * ((0 - avg_a) ** 2)))\n mean_sq_error_b = sqrt((sum((value - avg_b) ** 2 for index, value in second_feature_vector)) + ((\n length_of_featureset - len(second_feature_vector)) * ((0 - avg_b) ** 2)))\n\n covariance_a_b = 0\n\n #calculate covariance for the sparse vectors\n for tuple in first_feature_vector:\n if len(tuple) != 2:\n raise ValueError(\"Invalid feature frequency tuple in featureVector: %s\") % (tuple,)\n indexed_feature_dict[tuple[0]] = tuple[1]\n count_of_features = 0\n for tuple in second_feature_vector:\n count_of_features += 1\n if len(tuple) != 2:\n raise ValueError(\"Invalid feature frequency tuple in featureVector: %s\") % (tuple,)\n if tuple[0] in indexed_feature_dict:\n covariance_a_b += ((indexed_feature_dict[tuple[0]] - avg_a) * (tuple[1] - avg_b))\n del (indexed_feature_dict[tuple[0]])\n else:\n covariance_a_b += (0 - avg_a) * (tuple[1] - avg_b)\n\n for index in indexed_feature_dict:\n count_of_features += 1\n covariance_a_b += (indexed_feature_dict[index] - avg_a) * (0 - avg_b)\n\n #adjust covariance with rest of vector with 0 value\n covariance_a_b += (length_of_featureset - count_of_features) * -avg_a * -avg_b\n\n if mean_sq_error_a == 0 or mean_sq_error_b == 0:\n return -1\n else:\n return float(covariance_a_b) / (mean_sq_error_a * mean_sq_error_b)\n\nUnit tests:\ndef test_get_get_pearson_corelation(self):\n vector_a = [(1, 1), (2, 2), (3, 3)]\n vector_b = [(1, 1), (2, 5), (3, 7)]\n self.assertAlmostEquals(self.sim_calculator.get_pearson_corelation(vector_a, vector_b, 3), 0.981980506062, 3, None, None)\n\n vector_a = [(1, 1), (2, 2), (3, 3)]\n vector_b = [(1, 1), (2, 5), (3, 7), (4, 14)]\n self.assertAlmostEquals(self.sim_calculator.get_pearson_corelation(vector_a, vector_b, 5), -0.0137089240555, 3, None, None)\n\n",
"I have a very simple and easy to understand solution for this. For two arrays of equal length, Pearson coefficient can be easily computed as follows: \ndef manual_pearson(a,b):\n\"\"\"\nAccepts two arrays of equal length, and computes correlation coefficient. \nNumerator is the sum of product of (a - a_avg) and (b - b_avg), \nwhile denominator is the product of a_std and b_std multiplied by \nlength of array. \n\"\"\"\n a_avg, b_avg = np.average(a), np.average(b)\n a_stdev, b_stdev = np.std(a), np.std(b)\n n = len(a)\n denominator = a_stdev * b_stdev * n\n numerator = np.sum(np.multiply(a-a_avg, b-b_avg))\n p_coef = numerator/denominator\n return p_coef\n\n",
"Starting in Python 3.10, the Pearson’s correlation coefficient (statistics.correlation) is directly available in the standard library:\nfrom statistics import correlation\n\n# a = [15, 12, 8, 8, 7, 7, 7, 6, 5, 3]\n# b = [10, 25, 17, 11, 13, 17, 20, 13, 9, 15]\ncorrelation(a, b)\n# 0.1449981545806852\n\n",
"You may wonder how to interpret your probability in the context of looking for a correlation in a particular direction (negative or positive correlation.) Here is a function I wrote to help with that. It might even be right!\nIt's based on info I gleaned from http://www.vassarstats.net/rsig.html and http://en.wikipedia.org/wiki/Student%27s_t_distribution, thanks to other answers posted here.\n# Given (possibly random) variables, X and Y, and a correlation direction,\n# returns:\n# (r, p),\n# where r is the Pearson correlation coefficient, and p is the probability\n# that there is no correlation in the given direction.\n#\n# direction:\n# if positive, p is the probability that there is no positive correlation in\n# the population sampled by X and Y\n# if negative, p is the probability that there is no negative correlation\n# if 0, p is the probability that there is no correlation in either direction\ndef probabilityNotCorrelated(X, Y, direction=0):\n x = len(X)\n if x != len(Y):\n raise ValueError(\"variables not same len: \" + str(x) + \", and \" + \\\n str(len(Y)))\n if x < 6:\n raise ValueError(\"must have at least 6 samples, but have \" + str(x))\n (corr, prb_2_tail) = stats.pearsonr(X, Y)\n\n if not direction:\n return (corr, prb_2_tail)\n\n prb_1_tail = prb_2_tail / 2\n if corr * direction > 0:\n return (corr, prb_1_tail)\n\n return (corr, 1 - prb_1_tail)\n\n",
"You can take a look at this article. This is a well-documented example for calculating correlation based on historical forex currency pairs data from multiple files using pandas library (for Python), and then generating a heatmap plot using seaborn library.\nhttp://www.tradinggeeks.net/2015/08/calculating-correlation-in-python/\n",
"Calculating Correlation:\nCorrelation - measures similarity of two different variables\nUsing pearson correlation\nfrom scipy.stats import pearsonr\n# final_data is the dataframe with n set of columns\npearson_correlation = final_data.corr(method='pearson')\npearson_correlation\n# print correlation of n*n column\n\nUsing Spearman correlation\nfrom scipy.stats import spearmanr\n# final_data is the dataframe with n set of columns\nspearman_correlation = final_data.corr(method='spearman')\nspearman_correlation\n# print correlation of n*n column\n\nUsing Kendall correlation\nkendall_correlation=final_data.corr(method='kendall')\nkendall_correlation\n\n",
"def correlation_score(y_true, y_pred):\n \"\"\"Scores the predictions according to the competition rules. \n \n It is assumed that the predictions are not constant.\n \n Returns the average of each sample's Pearson correlation coefficient\"\"\"\n \n y2 = y_pred.copy()\n y2 -= y2.mean(axis=0); y2 /= y2.std(axis=0) \n y1 = y_true.copy(); \n y1 -= y1.mean(axis=0); y1 /= y1.std(axis=0) \n \n c = (y1*y2).mean().mean()# Correlation for rescaled matrices is just matrix product and average \n \n return c\n\n"
] |
[
213,
120,
60,
39,
31,
28,
11,
11,
7,
6,
5,
3,
3,
3,
1,
1,
0,
0
] |
[
"def pearson(x,y):\n n=len(x)\n vals=range(n)\n\n sumx=sum([float(x[i]) for i in vals])\n sumy=sum([float(y[i]) for i in vals])\n\n sumxSq=sum([x[i]**2.0 for i in vals])\n sumySq=sum([y[i]**2.0 for i in vals])\n\n pSum=sum([x[i]*y[i] for i in vals])\n # Calculating Pearson correlation\n num=pSum-(sumx*sumy/n)\n den=((sumxSq-pow(sumx,2)/n)*(sumySq-pow(sumy,2)/n))**.5\n if den==0: return 0\n r=num/den\n return r\n\n"
] |
[
-1
] |
[
"numpy",
"python",
"scipy",
"statistics"
] |
stackoverflow_0003949226_numpy_python_scipy_statistics.txt
|
Q:
Display PDF in django
I need to display a PDF file in the browser, but I cannot find a way to load the PDF from the media folder. The PDF file is saved in my database, but I cannot display it.
my urls.py:
urlpatterns = [
path('uploadfile/', views.uploadFile, name="uploadFile"),
path('verPDF/<idtermsCondition>', views.verPDF, name='verPDF'),
]
my models.py:
class termsCondition(models.Model):
title = models.CharField(max_length=20, verbose_name="title")
uploadPDF = models.FileField(
upload_to="PDF/", null=True, blank=True)
dateTimeUploaded = models.DateTimeField(auto_now_add=True)
deleted_at = models.DateTimeField(
auto_now=False, verbose_name="Fecha eliminacion", blank=True, null=True)
class Meta:
verbose_name = "termsCondition"
verbose_name_plural = "termsConditions"
my views.py:
def uploadFile(request):
user = request.user
if user.is_authenticated:
if user.is_admin:
if request.method == "POST":
# Fetching the form data
fileTitle = request.POST["fileTitle"]
loadPDF = request.FILES["uploadPDF"]
# Saving the information in the database
termscondition = termsCondition.objects.create(
title=fileTitle,
uploadPDF=loadPDF
)
termscondition.save()
else:
listfiles = termsCondition.objects.all()[:1].get()
return render(request, 'subirTerminos.html', context={
"files": listfiles
})
else:
messages.add_message(request=request, level=messages.SUCCESS,
message="No tiene suficientes permisos para ingresar a esta página")
return redirect('customer')
else:
return redirect('login2')
def verPDF(request, idtermsCondition):
user = request.user
if user.is_authenticated():
if user.is_admin:
getPDF = termsCondition.objects.get(pk=idtermsCondition)
seePDF = {'PDF': getPDF.uploadPDF}
print(seePDF)
return render(request, 'subirTerminos.html', {'termsCondition': getPDF, 'uploadPDF': getPDF.uploadPDF})
else:
messages.error(request, 'Do not have permission')
else:
return redirect('login2')
my html:
<div>
<iframe id="verPDF" src="media/PDF/{{ uploadPDF.url }}"
style="width:800px; height:800px;"></iframe>
</div>
I want to display my PDF but cannot figure out how. I have tried many solutions; I am open to JS, embed, iframe, whatever solves it.
A:
It should be user.is_authenticated not user.is_authenticated() in verPDF view and also I'd recommend you to change <idtermsCondition> to <int:idtermsCondition> as by default (if nothing is given) it is considered as string.
urls.py
urlpatterns = [
path('uploadfile/', views.uploadFile, name="uploadFile"),
path('verPDF/<int:idtermsCondition>/', views.verPDF, name='verPDF'),
]
And the {{uploadPDF.url}} already has the url (full path to the media directory) and try to use <embed> tag so:
<div>
<embed id="verPDF" src="{{uploadPDF.url}}" width="500" height="375" type="application/pdf">
</div>
Note: Always add / at the end of every route
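For completeness, here is a minimal sketch of the corrected verPDF view (it keeps the question's names and assumes the question's existing imports; the redirect target for non-admin users is my guess, mirroring uploadFile):
def verPDF(request, idtermsCondition):
    user = request.user
    if not user.is_authenticated:  # property, not a method call
        return redirect('login2')
    if not user.is_admin:
        messages.error(request, 'Do not have permission')
        return redirect('customer')  # assumed target, as in uploadFile
    getPDF = termsCondition.objects.get(pk=idtermsCondition)
    return render(request, 'subirTerminos.html',
                  {'termsCondition': getPDF, 'uploadPDF': getPDF.uploadPDF})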
A:
I finally solved it. The problems were in my views.py and in the HTML: the template referred to uploadPDF while my view used a different name (loadPDF), and the context key used when rendering was different again.
now, views.py:
def uploadFile(request):
user = request.user
if user.is_authenticated:
if user.is_admin:
if request.method == "POST":
# Fetching the form data
fileTitle = request.POST["fileTitle"]
loadPDF = request.FILES["uploadPDF"]
if termsCondition.objects.all().exists():
listfiles = termsCondition.objects.all()[:1].get()
listfiles.uploadPDF = loadPDF
listfiles.save()
else:
# Saving the information in the database
termscondition = termsCondition.objects.create(
title=fileTitle,
uploadPDF=loadPDF
)
return redirect('uploadFile')
else:
if termsCondition.objects.all().exists():
listfiles = termsCondition.objects.all()[:1].get()
return render(request, 'subirTerminos.html', context={
"files": listfiles.uploadPDF
})
else:
listfiles = {}
return render(request, 'subirTerminos.html', context={"files": listfiles})
else:
messages.add_message(request=request, level=messages.SUCCESS,
message="No tiene suficientes permisos para ingresar a esta página")
return redirect('customer')
else:
        return redirect('login2')
and html:
<h1 class="title">Visualizador de PDF</h1>
<embed id="verPDF" src="{{files.url}}" width="500" height="375" type="application/pdf">
|
Display PDF in django
|
I need to display a PDF file in the browser, but I cannot find a way to load the PDF from the media folder. The PDF file is saved in my database, but I cannot display it.
my urls.py:
urlpatterns = [
path('uploadfile/', views.uploadFile, name="uploadFile"),
path('verPDF/<idtermsCondition>', views.verPDF, name='verPDF'),
]
my models.py:
class termsCondition(models.Model):
title = models.CharField(max_length=20, verbose_name="title")
uploadPDF = models.FileField(
upload_to="PDF/", null=True, blank=True)
dateTimeUploaded = models.DateTimeField(auto_now_add=True)
deleted_at = models.DateTimeField(
auto_now=False, verbose_name="Fecha eliminacion", blank=True, null=True)
class Meta:
verbose_name = "termsCondition"
verbose_name_plural = "termsConditions"
my views.py:
def uploadFile(request):
user = request.user
if user.is_authenticated:
if user.is_admin:
if request.method == "POST":
# Fetching the form data
fileTitle = request.POST["fileTitle"]
loadPDF = request.FILES["uploadPDF"]
# Saving the information in the database
termscondition = termsCondition.objects.create(
title=fileTitle,
uploadPDF=loadPDF
)
termscondition.save()
else:
listfiles = termsCondition.objects.all()[:1].get()
return render(request, 'subirTerminos.html', context={
"files": listfiles
})
else:
messages.add_message(request=request, level=messages.SUCCESS,
message="No tiene suficientes permisos para ingresar a esta página")
return redirect('customer')
else:
return redirect('login2')
def verPDF(request, idtermsCondition):
user = request.user
if user.is_authenticated():
if user.is_admin:
getPDF = termsCondition.objects.get(pk=idtermsCondition)
seePDF = {'PDF': getPDF.uploadPDF}
print(seePDF)
return render(request, 'subirTerminos.html', {'termsCondition': getPDF, 'uploadPDF': getPDF.uploadPDF})
else:
messages.error(request, 'Do not have permission')
else:
return redirect('login2')
my html:
<div>
<iframe id="verPDF" src="media/PDF/{{ uploadPDF.url }}"
style="width:800px; height:800px;"></iframe>
</div>
I want to display my PDF but cannot figure out how. I have tried many solutions; I am open to JS, embed, iframe, whatever solves it.
|
[
"It should be user.is_authenticated not user.is_authenticated() in verPDF view and also I'd recommend you to change <idtermsCondition> to <int:idtermsCondition> as by default (if nothing is given) it is considered as string.\nurls.py\nurlpatterns = [\n path('uploadfile/', views.uploadFile, name=\"uploadFile\"),\n path('verPDF/<int:idtermsCondition>/', views.verPDF, name='verPDF'),\n]\n\nAnd the {{uploadPDF.url}} already has the url (full path to the media directory) and try to use <embed> tag so:\n<div>\n <embed id=\"verPDF\" src=\"{{uploadPDF.url}}\" width=\"500\" height=\"375\" type=\"application/pdf\">\n</div>\n\n\nNote: Always add / at the end of every route\n\n",
"Finally I can solve it, I had problems in my views.py and in the html, when I called uploadPDF my views called another name which was loadpdf and when I rendered it it was another name.\nnow, views.py:\n\n``def uploadFile(request):\n user = request.user\n if user.is_authenticated:\n if user.is_admin:\n if request.method == \"POST\":\n # Fetching the form data\n fileTitle = request.POST[\"fileTitle\"]\n loadPDF = request.FILES[\"uploadPDF\"]\n \n if termsCondition.objects.all().exists():\n listfiles = termsCondition.objects.all()[:1].get()\n listfiles.uploadPDF = loadPDF\n listfiles.save()\n else:\n # Saving the information in the database\n termscondition = termsCondition.objects.create(\n title=fileTitle,\n uploadPDF=loadPDF\n )\n return redirect('uploadFile')\n else:\n if termsCondition.objects.all().exists():\n listfiles = termsCondition.objects.all()[:1].get()\n return render(request, 'subirTerminos.html', context={\n \"files\": listfiles.uploadPDF\n })\n else:\n listfiles = {}\n return render(request, 'subirTerminos.html', context={\"files\": listfiles})\n else:\n messages.add_message(request=request, level=messages.SUCCESS,\n message=\"No tiene suficientes permisos para ingresar a esta página\")\n return redirect('customer')\n \n else:\n return redirect('login2') ``\n \n and html:\n \n <h1 class=\"title\">Visualizador de PDF</h1>\n <embed id=\"verPDF\" src=\"{{files.url}}\" width=\"500\" height=\"375\" type=\"application/pdf\">\n\n"
] |
[
3,
2
] |
[] |
[] |
[
"django",
"django_forms",
"django_templates",
"django_urls",
"python"
] |
stackoverflow_0074587558_django_django_forms_django_templates_django_urls_python.txt
|
Q:
How to find the best minimal distance path between list of words and their indices?
Here's an example data I have,
word_indices = [
('bus', 554, 1),
('bus', 719, 1),
('bus', 808, 1),
('accessibility', 572, 2),
('accessibility', 724, 2),
('accessibility', 809, 2),
('ada', 725, 3),
('ada', 810, 3),
('accessible', 695, 4),
('accessible', 707, 4),
('accessible', 726, 4),
('accessible', 811, 4),
('get', 10, 5),
('get', 17, 5),
('get', 98, 5),
('get', 179, 5),
('get', 733, 5),
('get', 812, 5),
('tickets', 734, 6),
('tickets', 813, 6),
('tickets', 907, 6),
('nov', 736, 7),
('nov', 815, 7),
('ticket', 816, 8),
('ticket', 818, 8),
('ticket', 828, 8),
('information', 817, 9),
('ticket', 816, 10),
('ticket', 818, 10),
('ticket', 828, 10),
('includes', 819, 11),
('includes', 834, 11),
('wine', 760, 12),
('wine', 820, 12),
('beer', 821, 13),
('supper', 822, 14),
('performance', 262, 15),
('performance', 278, 15),
('performance', 399, 15),
('performance', 823, 15),
('and', 97, 16),
('and', 178, 16),
('and', 261, 16),
('and', 366, 16),
('and', 370, 16),
('and', 397, 16),
('and', 501, 16),
('and', 581, 16),
('and', 636, 16),
('and', 677, 16),
('and', 711, 16),
('and', 824, 16),
('and', 833, 16),
('and', 852, 16),
('and', 871, 16),
('and', 928, 16),
('and', 1017, 16),
('and', 1026, 16),
('and', 1044, 16),
('and', 1088, 16),
('and', 1092, 16),
('and', 1111, 16),
('and', 1126, 16),
('and', 1150, 16),
('and', 1160, 16),
('and', 1166, 16),
('and', 1178, 16),
('and', 1181, 16),
('light', 502, 17),
('light', 825, 17),
('dessert', 826, 18),
('benefactor', 827, 19),
('ticket', 816, 20),
('ticket', 818, 20),
('ticket', 828, 20),
('adds', 829, 21),
('additional', 831, 22),
('contribution', 832, 23),
('and', 97, 24),
('and', 178, 24),
('and', 261, 24),
('and', 366, 24),
('and', 370, 24),
('and', 397, 24),
('and', 501, 24),
('and', 581, 24),
('and', 636, 24),
('and', 677, 24),
('and', 711, 24),
('and', 824, 24),
('and', 833, 24),
('and', 852, 24),
('and', 871, 24),
('and', 928, 24),
('and', 1017, 24),
('and', 1026, 24),
('and', 1044, 24),
('and', 1088, 24),
('and', 1092, 24),
('and', 1111, 24),
('and', 1126, 24),
('and', 1150, 24),
('and', 1160, 24),
('and', 1166, 24),
('and', 1178, 24),
('and', 1181, 24),
('includes', 819, 25),
('includes', 834, 25),
('special', 240, 26),
('special', 255, 26),
('special', 316, 26),
('special', 465, 26),
('special', 759, 26),
('special', 836, 26),
('special', 1027, 26),
('goodie', 837, 27),
('bag', 838, 28),
('upon', 839, 29),
('departure', 841, 30),
('health', 90, 31),
('health', 171, 31),
('health', 842, 31),
('health', 1000, 31),
('health', 1292, 31),
('health', 1313, 31),
('adhere', 845, 32),
('centers', 848, 33),
('disease', 850, 34),
('control', 851, 35),
('and', 97, 36),
('and', 178, 36),
('and', 261, 36),
('and', 366, 36),
('and', 370, 36),
('and', 397, 36),
('and', 501, 36),
('and', 581, 36),
('and', 636, 36),
('and', 677, 36),
('and', 711, 36),
('and', 824, 36),
('and', 833, 36),
('and', 852, 36),
('and', 871, 36),
('and', 928, 36),
('and', 1017, 36),
('and', 1026, 36),
('and', 1044, 36),
('and', 1088, 36),
('and', 1092, 36),
('and', 1111, 36),
('and', 1126, 36),
('and', 1150, 36),
('and', 1160, 36),
('and', 1166, 36),
('and', 1178, 36),
('and', 1181, 36),
('prevention', 853, 37),
('cdc', 268, 38),
('cdc', 854, 38),
('covid', 855, 39),
('guidelines', 271, 40),
('guidelines', 856, 40),
('living', 51, 41),
('living', 132, 41),
('living', 188, 41),
('living', 195, 41),
('living', 213, 41),
('living', 233, 41),
('living', 303, 41),
('living', 588, 41),
('living', 859, 41),
('living', 942, 41),
('living', 978, 41),
('living', 986, 41),
('living', 1227, 41),
('living', 1245, 41),
('room', 52, 42),
('room', 133, 42),
('room', 189, 42),
('room', 196, 42),
('room', 214, 42),
('room', 234, 42),
('room', 304, 42),
('room', 860, 42),
('room', 943, 42),
('room', 979, 42),
('room', 987, 42),
('room', 1228, 42),
('room', 1246, 42),
('tour', 40, 43),
('tour', 53, 43),
('tour', 121, 43),
('tour', 134, 43),
('tour', 190, 43),
('tour', 197, 43),
('tour', 215, 43),
('tour', 235, 43),
('tour', 305, 43),
('tour', 861, 43),
('tour', 944, 43),
('tour', 980, 43),
('tour', 988, 43),
('tour', 1201, 43),
('tour', 1220, 43),
('tour', 1229, 43),
('tour', 1247, 43)]
('bus', 719, 1) - i.e., word, word index and group number
I am trying to find the best path (i.e., the one with minimal total distance) through the word indices, picking one entry per group. (A path follows the group numbers in sequence.)
Distance is the sum of absolute differences between the word indices chosen for consecutive groups.
Example Output:
In group 1, we will have to select ('bus', 808, 1)
In group 2, we should get ('accessibility', 809, 2)
In group 3, we should get ('ada', 810, 3)
In group 4, We should get ('accessible', 811, 4)
In group 5, we should get ('get', 812, 5)
In group 6, we should get ('tickets', 813, 6) and so on....
Choose the path (808, 809, 810, 811, 812, 813) rather than
(719, 724, 725, 726, 733, 734) because the first path has the smaller total distance (i.e., sum of absolute differences).
Trying to find an efficient & scalable approach but I can't figure out the logic.
from itertools import groupby
matches = []
first_group = None
for en, (key, group) in enumerate(groupby(word_indices, key=lambda x: (x[0], x[-1]))):
current_group = list(group)
if en < 1:
first_group = current_group
continue
if first_group is not None and group:
for rowx, ix, group_idx in first_group:
for rowy, iy, group_idy in current_group:
if ix - iy <= 10:
break
Please do help on this, if someone is familiar with an approach that will work. Much appreciated! Thank you
A:
from collections import defaultdict
import heapq
def dijkstra(word_indices):
groups = defaultdict(list)
for word, group_index, group in word_indices:
groups[group].append((word, group_index))
start, stop = min(groups), max(groups)
# queue contains distance, path, where path is a tuple of word_indices triplets
# we start with all words from the first group and distance 0
q = [(0, ((word, group_index, start),)) for word, group_index in groups[start]]
while q:
past_distance, path = heapq.heappop(q)
_, last_group_index, last_group = path[-1]
if last_group == stop:
return past_distance, path
next_group = last_group + 1
for word, group_index in groups[next_group]:
heapq.heappush(q, (past_distance + abs(last_group_index - group_index), path + ((word, group_index, next_group),)))
dijkstra(word_indices) returns the minimal total distance (53) together with the corresponding path:
(53,
(('bus', 808, 1),
('accessibility', 809, 2),
('ada', 810, 3),
('accessible', 811, 4),
('get', 812, 5),
('tickets', 813, 6),
('nov', 815, 7),
('ticket', 816, 8),
('information', 817, 9),
('ticket', 818, 10),
('includes', 819, 11),
('wine', 820, 12),
('beer', 821, 13),
('supper', 822, 14),
('performance', 823, 15),
('and', 824, 16),
('light', 825, 17),
('dessert', 826, 18),
('benefactor', 827, 19),
('ticket', 828, 20),
('adds', 829, 21),
('additional', 831, 22),
('contribution', 832, 23),
('and', 833, 24),
('includes', 834, 25),
('special', 836, 26),
('goodie', 837, 27),
('bag', 838, 28),
('upon', 839, 29),
('departure', 841, 30),
('health', 842, 31),
('adhere', 845, 32),
('centers', 848, 33),
('disease', 850, 34),
('control', 851, 35),
('and', 852, 36),
('prevention', 853, 37),
('cdc', 854, 38),
('covid', 855, 39),
('guidelines', 856, 40),
('living', 859, 41),
('room', 860, 42),
('tour', 861, 43)))
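If the input grows large, Dijkstra with full paths stored in the queue can get expensive. Since the groups must be visited strictly in order, a plain dynamic-programming pass over consecutive groups finds the same optimum in O(sum of n_g * n_{g-1}) time. A minimal sketch (same input format as above; best_path_dp is a name I made up):
from collections import defaultdict

def best_path_dp(word_indices):
    # Group the candidate (word, index) pairs by group number.
    groups = defaultdict(list)
    for word, idx, g in word_indices:
        groups[g].append((word, idx))
    order = sorted(groups)

    # layers[k][i] = (best cost ending at candidate i of group order[k],
    #                 position of the chosen parent in the previous layer)
    layers = [[(0, None)] * len(groups[order[0]])]
    for k in range(1, len(order)):
        layer = []
        for _, idx in groups[order[k]]:
            cost, parent = min(
                (layers[k - 1][j][0] + abs(prev_idx - idx), j)
                for j, (_, prev_idx) in enumerate(groups[order[k - 1]])
            )
            layer.append((cost, parent))
        layers.append(layer)

    # Walk parent pointers back from the cheapest final candidate.
    i = min(range(len(layers[-1])), key=lambda j: layers[-1][j][0])
    total = layers[-1][i][0]
    path = []
    for k in range(len(order) - 1, -1, -1):
        word, idx = groups[order[k]][i]
        path.append((word, idx, order[k]))
        i = layers[k][i][1]
    return total, path[::-1]

On the sample data this returns the same total distance (53) and the same path as the Dijkstra version.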
|
How to find the best minimal distance path between list of words and their indices?
|
Here's an example data I have,
word_indices = [
('bus', 554, 1),
('bus', 719, 1),
('bus', 808, 1),
('accessibility', 572, 2),
('accessibility', 724, 2),
('accessibility', 809, 2),
('ada', 725, 3),
('ada', 810, 3),
('accessible', 695, 4),
('accessible', 707, 4),
('accessible', 726, 4),
('accessible', 811, 4),
('get', 10, 5),
('get', 17, 5),
('get', 98, 5),
('get', 179, 5),
('get', 733, 5),
('get', 812, 5),
('tickets', 734, 6),
('tickets', 813, 6),
('tickets', 907, 6),
('nov', 736, 7),
('nov', 815, 7),
('ticket', 816, 8),
('ticket', 818, 8),
('ticket', 828, 8),
('information', 817, 9),
('ticket', 816, 10),
('ticket', 818, 10),
('ticket', 828, 10),
('includes', 819, 11),
('includes', 834, 11),
('wine', 760, 12),
('wine', 820, 12),
('beer', 821, 13),
('supper', 822, 14),
('performance', 262, 15),
('performance', 278, 15),
('performance', 399, 15),
('performance', 823, 15),
('and', 97, 16),
('and', 178, 16),
('and', 261, 16),
('and', 366, 16),
('and', 370, 16),
('and', 397, 16),
('and', 501, 16),
('and', 581, 16),
('and', 636, 16),
('and', 677, 16),
('and', 711, 16),
('and', 824, 16),
('and', 833, 16),
('and', 852, 16),
('and', 871, 16),
('and', 928, 16),
('and', 1017, 16),
('and', 1026, 16),
('and', 1044, 16),
('and', 1088, 16),
('and', 1092, 16),
('and', 1111, 16),
('and', 1126, 16),
('and', 1150, 16),
('and', 1160, 16),
('and', 1166, 16),
('and', 1178, 16),
('and', 1181, 16),
('light', 502, 17),
('light', 825, 17),
('dessert', 826, 18),
('benefactor', 827, 19),
('ticket', 816, 20),
('ticket', 818, 20),
('ticket', 828, 20),
('adds', 829, 21),
('additional', 831, 22),
('contribution', 832, 23),
('and', 97, 24),
('and', 178, 24),
('and', 261, 24),
('and', 366, 24),
('and', 370, 24),
('and', 397, 24),
('and', 501, 24),
('and', 581, 24),
('and', 636, 24),
('and', 677, 24),
('and', 711, 24),
('and', 824, 24),
('and', 833, 24),
('and', 852, 24),
('and', 871, 24),
('and', 928, 24),
('and', 1017, 24),
('and', 1026, 24),
('and', 1044, 24),
('and', 1088, 24),
('and', 1092, 24),
('and', 1111, 24),
('and', 1126, 24),
('and', 1150, 24),
('and', 1160, 24),
('and', 1166, 24),
('and', 1178, 24),
('and', 1181, 24),
('includes', 819, 25),
('includes', 834, 25),
('special', 240, 26),
('special', 255, 26),
('special', 316, 26),
('special', 465, 26),
('special', 759, 26),
('special', 836, 26),
('special', 1027, 26),
('goodie', 837, 27),
('bag', 838, 28),
('upon', 839, 29),
('departure', 841, 30),
('health', 90, 31),
('health', 171, 31),
('health', 842, 31),
('health', 1000, 31),
('health', 1292, 31),
('health', 1313, 31),
('adhere', 845, 32),
('centers', 848, 33),
('disease', 850, 34),
('control', 851, 35),
('and', 97, 36),
('and', 178, 36),
('and', 261, 36),
('and', 366, 36),
('and', 370, 36),
('and', 397, 36),
('and', 501, 36),
('and', 581, 36),
('and', 636, 36),
('and', 677, 36),
('and', 711, 36),
('and', 824, 36),
('and', 833, 36),
('and', 852, 36),
('and', 871, 36),
('and', 928, 36),
('and', 1017, 36),
('and', 1026, 36),
('and', 1044, 36),
('and', 1088, 36),
('and', 1092, 36),
('and', 1111, 36),
('and', 1126, 36),
('and', 1150, 36),
('and', 1160, 36),
('and', 1166, 36),
('and', 1178, 36),
('and', 1181, 36),
('prevention', 853, 37),
('cdc', 268, 38),
('cdc', 854, 38),
('covid', 855, 39),
('guidelines', 271, 40),
('guidelines', 856, 40),
('living', 51, 41),
('living', 132, 41),
('living', 188, 41),
('living', 195, 41),
('living', 213, 41),
('living', 233, 41),
('living', 303, 41),
('living', 588, 41),
('living', 859, 41),
('living', 942, 41),
('living', 978, 41),
('living', 986, 41),
('living', 1227, 41),
('living', 1245, 41),
('room', 52, 42),
('room', 133, 42),
('room', 189, 42),
('room', 196, 42),
('room', 214, 42),
('room', 234, 42),
('room', 304, 42),
('room', 860, 42),
('room', 943, 42),
('room', 979, 42),
('room', 987, 42),
('room', 1228, 42),
('room', 1246, 42),
('tour', 40, 43),
('tour', 53, 43),
('tour', 121, 43),
('tour', 134, 43),
('tour', 190, 43),
('tour', 197, 43),
('tour', 215, 43),
('tour', 235, 43),
('tour', 305, 43),
('tour', 861, 43),
('tour', 944, 43),
('tour', 980, 43),
('tour', 988, 43),
('tour', 1201, 43),
('tour', 1220, 43),
('tour', 1229, 43),
('tour', 1247, 43)]
('bus', 719, 1) - i.e., word, word index and group number
I am trying to find the best path (i.e., the one with minimal total distance) through the word indices, picking one entry per group. (A path follows the group numbers in sequence.)
Distance is the sum of absolute differences between the word indices chosen for consecutive groups.
Example Output:
In group 1, we will have to select ('bus', 808, 1)
In group 2, we should get ('accessibility', 809, 2)
In group 3, we should get ('ada', 810, 3)
In group 4, We should get ('accessible', 811, 4)
In group 5, we should get ('get', 812, 5)
In group 6, we should get ('tickets', 813, 6) and so on....
Choose the path (808, 809, 810, 811, 812, 813) rather than
(719, 724, 725, 726, 733, 734) because the first path has the smaller total distance (i.e., sum of absolute differences).
Trying to find an efficient & scalable approach but I can't figure out the logic.
from itertools import groupby
matches = []
first_group = None
for en, (key, group) in enumerate(groupby(word_indices, key=lambda x: (x[0], x[-1]))):
current_group = list(group)
if en < 1:
first_group = current_group
continue
if first_group is not None and group:
for rowx, ix, group_idx in first_group:
for rowy, iy, group_idy in current_group:
if ix - iy <= 10:
break
Please do help on this, if someone is familiar with an approach that will work. Much appreciated! Thank you
|
[
"from collections import defaultdict\nimport heapq\n\ndef dijkstra(word_indices):\n groups = defaultdict(list)\n for word, group_index, group in word_indices:\n groups[group].append((word, group_index))\n start, stop = min(groups), max(groups)\n # queue contains distance, path, where path is a tuple of word_indices triplets\n # we start with all words from the first group and distance 0\n q = [(0, ((word, group_index, start),)) for word, group_index in groups[start]]\n while q:\n past_distance, path = heapq.heappop(q)\n _, last_group_index, last_group = path[-1]\n if last_group == stop:\n return past_distance, path\n next_group = last_group + 1\n for word, group_index in groups[next_group]:\n heapq.heappush(q, (past_distance + abs(last_group_index - group_index), path + ((word, group_index, next_group),)))\n \n\ndijkstra(word_indices) returns a path of length 53:\n(53,\n (('bus', 808, 1),\n ('accessibility', 809, 2),\n ('ada', 810, 3),\n ('accessible', 811, 4),\n ('get', 812, 5),\n ('tickets', 813, 6),\n ('nov', 815, 7),\n ('ticket', 816, 8),\n ('information', 817, 9),\n ('ticket', 818, 10),\n ('includes', 819, 11),\n ('wine', 820, 12),\n ('beer', 821, 13),\n ('supper', 822, 14),\n ('performance', 823, 15),\n ('and', 824, 16),\n ('light', 825, 17),\n ('dessert', 826, 18),\n ('benefactor', 827, 19),\n ('ticket', 828, 20),\n ('adds', 829, 21),\n ('additional', 831, 22),\n ('contribution', 832, 23),\n ('and', 833, 24),\n ('includes', 834, 25),\n ('special', 836, 26),\n ('goodie', 837, 27),\n ('bag', 838, 28),\n ('upon', 839, 29),\n ('departure', 841, 30),\n ('health', 842, 31),\n ('adhere', 845, 32),\n ('centers', 848, 33),\n ('disease', 850, 34),\n ('control', 851, 35),\n ('and', 852, 36),\n ('prevention', 853, 37),\n ('cdc', 854, 38),\n ('covid', 855, 39),\n ('guidelines', 856, 40),\n ('living', 859, 41),\n ('room', 860, 42),\n ('tour', 861, 43)))\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074594885_python_python_3.x.txt
|
Q:
It is saying that mean, and median when chosen, is not defined?
def average(vals, method):
if method == mean:
mean == (sum(a)/len(a))
print('The mean is', str(mean))
if method == median:
median == (len(a)-1)//2
print('The median is', str(median))
average((-1,0,1,1,1,2,3), mean)
I don't understand what needs fixing; can anyone help?
A:
The names mean and median are undefined because the code compares method against them before anything assigns them, and == is a comparison, not an assignment (= is). The code also references a, which doesn't exist; the parameter is vals. Here is some fixed code:
def average(vals, method):
    if method == 'mean':
        mean = sum(vals) / len(vals)
        print('The mean is', mean)
    if method == 'median':
        midpoint = (len(vals) - 1) // 2
        median = sorted(vals)[midpoint]
        print('The median is', median)

average((-1,0,1,1,1,2,3), 'mean')
average((-1,0,1,1,1,2,3), 'median')
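For the sample call this prints:
The mean is 1.0
The median is 1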
|
It is saying that mean, and median when chosen, is not defined?
|
def average(vals, method):
if method == mean:
mean == (sum(a)/len(a))
print('The mean is', str(mean))
if method == median:
median == (len(a)-1)//2
print('The median is', str(median))
average((-1,0,1,1,1,2,3), mean)
I don't understand what needs fixing; can anyone help?
|
[
"Here is some fixed code:\ndef average(vals, method):\n if method == 'mean':\n mean = (sum(a)/len(a))\n print('The mean is', mean)\n if method == 'median':\n midpoint = (len(a)-1)//2\n median = vals[midpoint]\n print('The median is', median)\n\naverage((-1,0,1,1,1,2,3), 'mean')\naverage((-1,0,1,1,1,2,3), 'median')\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074595056_python.txt
|
Q:
Assign number to python from range
I'm looking at a data set of scores.
I want to know the probability of each score based on the bin the score falls in using pd.cut
How can I take a value and assign it a probability based on the outputted table?
Code as follows
import pandas as pd
data = pd.DataFrame({'scores':[168.0, 44.0, 352.0, 128.0, 268.0, 228.0, 160.0, 376.0, 304.0, 124.0, 360.0, 36.0, 224.0, 176.0, 40.0, 28.0, 264.0, 292.0, 228.0, 80.0, 216.0, 132.0, 88.0, 220.0, 284.0, 308.0, 256.0, 360.0, 364.0, 128.0, 268.0, 72.0, 100.0, 320.0, 224.0, 300.0, 232.0, 316.0, 196.0, 248.0, 24.0, 396.0, 8.0, 248.0, 244.0, 392.0, 240.0, 28.0, 260.0, 220.0, 120.0, 56.0, 232.0, 216.0, 228.0, 232.0, 332.0, 280.0, 148.0, 84.0, 284.0, 268.0, 176.0, 324.0, 52.0, 112.0, 344.0, 296.0, 164.0, 28.0, 304.0, 344.0, 232.0, 340.0, 324.0, 248.0, 232.0, 400.0, 396.0, 36.0, 52.0, 204.0, 292.0, 96.0, 68.0, 392.0, 260.0, 224.0, 236.0, 248.0, 316.0, 292.0, 212.0, 276.0, 304.0, 124.0, 216.0, 48.0, 64.0, 228.0]})
frequencyTable = pd.cut(data['scores'], bins = 20, include_lowest=True, ordered=True, precision=4, right=False)
frequencyTable = frequencyTable.value_counts(sort=False)
frequencyTable = frequencyTable.reset_index()
frequencyTable['probability'] = frequencyTable['scores']/len(data)
print(frequencyTable)
Output as follows
index scores probability
0 [8.0, 27.6) 2 0.02
1 [27.6, 47.2) 7 0.07
2 [47.2, 66.8) 5 0.05
3 [66.8, 86.4) 4 0.04
4 [86.4, 106.0) 3 0.03
5 [106.0, 125.6) 4 0.04
6 [125.6, 145.2) 3 0.03
7 [145.2, 164.8) 3 0.03
8 [164.8, 184.4) 3 0.03
9 [184.4, 204.0) 1 0.01
10 [204.0, 223.6) 7 0.07
11 [223.6, 243.2) 14 0.14
12 [243.2, 262.8) 8 0.08
13 [262.8, 282.4) 6 0.06
14 [282.4, 302.0) 7 0.07
15 [302.0, 321.6) 7 0.07
16 [321.6, 341.2) 4 0.04
17 [341.2, 360.8) 5 0.05
18 [360.8, 380.4) 2 0.02
19 [380.4, 400.392) 5 0.05
I'd like to be able to take input = 265 and return 6%
A:
Your frequencyTable is a table where the first column is an Interval and the third column is the percentage. So to get what you want, you iterate over the table, looking for the item where the input value (v=265) is in the Interval of that row, and if it is, you take the value in the third column. So something like this:
v = 265
p = -1
for bin in frequencyTable.values:
if v in bin[0]:
p = bin[2] * 100
break
print(p, '%')
Result:
6.0 %
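A loop-free variant does the same lookup with a boolean mask; this assumes the 'index' column still holds the Interval objects produced by pd.cut, and probability_for is a name I made up:
def probability_for(v, frequencyTable):
    # True for the single row whose Interval contains v
    mask = frequencyTable['index'].apply(lambda interval: v in interval)
    matches = frequencyTable.loc[mask, 'probability']
    return matches.iloc[0] * 100 if not matches.empty else None

print(probability_for(265, frequencyTable), '%')  # 6.0 %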
|
Assign number to python from range
|
I'm looking at a data set of scores.
I want to know the probability of each score based on the bin the score falls in using pd.cut
How can I take a value and assign it a probability based on the outputted table?
Code as follows
import pandas as pd
data = pd.DataFrame({'scores':[168.0, 44.0, 352.0, 128.0, 268.0, 228.0, 160.0, 376.0, 304.0, 124.0, 360.0, 36.0, 224.0, 176.0, 40.0, 28.0, 264.0, 292.0, 228.0, 80.0, 216.0, 132.0, 88.0, 220.0, 284.0, 308.0, 256.0, 360.0, 364.0, 128.0, 268.0, 72.0, 100.0, 320.0, 224.0, 300.0, 232.0, 316.0, 196.0, 248.0, 24.0, 396.0, 8.0, 248.0, 244.0, 392.0, 240.0, 28.0, 260.0, 220.0, 120.0, 56.0, 232.0, 216.0, 228.0, 232.0, 332.0, 280.0, 148.0, 84.0, 284.0, 268.0, 176.0, 324.0, 52.0, 112.0, 344.0, 296.0, 164.0, 28.0, 304.0, 344.0, 232.0, 340.0, 324.0, 248.0, 232.0, 400.0, 396.0, 36.0, 52.0, 204.0, 292.0, 96.0, 68.0, 392.0, 260.0, 224.0, 236.0, 248.0, 316.0, 292.0, 212.0, 276.0, 304.0, 124.0, 216.0, 48.0, 64.0, 228.0]})
frequencyTable = pd.cut(data['scores'], bins = 20, include_lowest=True, ordered=True, precision=4, right=False)
frequencyTable = frequencyTable.value_counts(sort=False)
frequencyTable = frequencyTable.reset_index()
frequencyTable['probability'] = frequencyTable['scores']/len(data)
print(frequencyTable)
Output as follows
index scores probability
0 [8.0, 27.6) 2 0.02
1 [27.6, 47.2) 7 0.07
2 [47.2, 66.8) 5 0.05
3 [66.8, 86.4) 4 0.04
4 [86.4, 106.0) 3 0.03
5 [106.0, 125.6) 4 0.04
6 [125.6, 145.2) 3 0.03
7 [145.2, 164.8) 3 0.03
8 [164.8, 184.4) 3 0.03
9 [184.4, 204.0) 1 0.01
10 [204.0, 223.6) 7 0.07
11 [223.6, 243.2) 14 0.14
12 [243.2, 262.8) 8 0.08
13 [262.8, 282.4) 6 0.06
14 [282.4, 302.0) 7 0.07
15 [302.0, 321.6) 7 0.07
16 [321.6, 341.2) 4 0.04
17 [341.2, 360.8) 5 0.05
18 [360.8, 380.4) 2 0.02
19 [380.4, 400.392) 5 0.05
I'd like to be able to take input = 265 and return 6%
|
[
"Your frequencyTable is a table where the first column is an Interval and the third column is the percentage. So to get what you want, you iterate over the table, looking for the item where the input value (v=265) is in the Interval of that row, and if it is, you take the value in the third column. So something like this:\nv = 265\np = -1\nfor bin in frequencyTable.values:\n if v in bin[0]:\n p = bin[2] * 100\n break\n\nprint(p, '%')\n\nResult:\n6.0 %\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas",
"python",
"statistics"
] |
stackoverflow_0074594997_pandas_python_statistics.txt
|
Q:
IndexError: tuple index out of range when creating PySpark DataFrame
I want to create test data in a pyspark dataframe but I always get the same "tuple index out of range" error. I do not get this error when reading a csv. Would appreciate any thoughts on why I'm getting this error.
The first thing I tried was creating a pandas DataFrame and converting it to a PySpark DataFrame:
columns = ["id","col_"]
data = [("1", "blue"), ("2", "green"),
("3", "purple"), ("4", "red"),
("5", "yellow")]
df = pd.DataFrame(data=data, columns=columns)
sparkdf = spark.createDataFrame(df)
sparkdf.show()
output:
PicklingError: Could not serialize object: IndexError: tuple index out of range
I get the same error if I try to create the dataframe from RDD per SparkbyExamples.com instructions:
rdd = spark.sparkContext.parallelize(data)
sparkdf = spark.createDataFrame(rdd).toDF(*columns)
sparkdf.show()
I also tried the following and got the same error:
import pyspark.pandas as ps
df1 = ps.from_pandas(df)
Here is the full error when running the above code:
IndexError Traceback (most recent call last)
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\serializers.py:458, in CloudPickleSerializer.dumps(self, obj)
457 try:
--> 458 return cloudpickle.dumps(obj, pickle_protocol)
459 except pickle.PickleError:
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:73, in dumps(obj, protocol, buffer_callback)
70 cp = CloudPickler(
71 file, protocol=protocol, buffer_callback=buffer_callback
72 )
---> 73 cp.dump(obj)
74 return file.getvalue()
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:602, in CloudPickler.dump(self, obj)
601 try:
--> 602 return Pickler.dump(self, obj)
603 except RuntimeError as e:
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:692, in CloudPickler.reducer_override(self, obj)
691 elif isinstance(obj, types.FunctionType):
--> 692 return self._function_reduce(obj)
693 else:
694 # fallback to save_global, including the Pickler's
695 # dispatch_table
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:565, in CloudPickler._function_reduce(self, obj)
564 else:
--> 565 return self._dynamic_function_reduce(obj)
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:546, in CloudPickler._dynamic_function_reduce(self, func)
545 newargs = self._function_getnewargs(func)
--> 546 state = _function_getstate(func)
547 return (types.FunctionType, newargs, state, None, None,
548 _function_setstate)
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:157, in _function_getstate(func)
146 slotstate = {
147 "__name__": func.__name__,
148 "__qualname__": func.__qualname__,
(...)
154 "__closure__": func.__closure__,
155 }
--> 157 f_globals_ref = _extract_code_globals(func.__code__)
158 f_globals = {k: func.__globals__[k] for k in f_globals_ref if k in
159 func.__globals__}
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle.py:334, in _extract_code_globals(co)
331 # We use a dict with None values instead of a set to get a
332 # deterministic order (assuming Python 3.6+) and avoid introducing
333 # non-deterministic pickle bytes as a results.
--> 334 out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
336 # Declaring a function inside another one using the "def ..."
337 # syntax generates a constant code object corresponding to the one
338 # of the nested function's As the nested function may itself need
339 # global variables, we need to introspect its code, extract its
340 # globals, (look for code object in it's co_consts attribute..) and
341 # add the result to code_globals
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle.py:334, in <dictcomp>(.0)
331 # We use a dict with None values instead of a set to get a
332 # deterministic order (assuming Python 3.6+) and avoid introducing
333 # non-deterministic pickle bytes as a results.
--> 334 out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
336 # Declaring a function inside another one using the "def ..."
337 # syntax generates a constant code object corresponding to the one
338 # of the nested function's As the nested function may itself need
339 # global variables, we need to introspect its code, extract its
340 # globals, (look for code object in it's co_consts attribute..) and
341 # add the result to code_globals
IndexError: tuple index out of range
During handling of the above exception, another exception occurred:
PicklingError Traceback (most recent call last)
Cell In [67], line 2
1 rdd = spark.sparkContext.parallelize(data)
----> 2 df1 = ps.from_pandas(df)
3 sparkdf = spark.createDataFrame(rdd).toDF(*columns)
4 #Create a dictionary from each row in col_
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\pandas\namespace.py:153, in from_pandas(pobj)
151 return Series(pobj)
152 elif isinstance(pobj, pd.DataFrame):
--> 153 return DataFrame(pobj)
154 elif isinstance(pobj, pd.Index):
155 return DataFrame(pd.DataFrame(index=pobj)).index
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\pandas\frame.py:450, in DataFrame.__init__(self, data, index, columns, dtype, copy)
448 else:
449 pdf = pd.DataFrame(data=data, index=index, columns=columns, dtype=dtype, copy=copy)
--> 450 internal = InternalFrame.from_pandas(pdf)
452 object.__setattr__(self, "_internal_frame", internal)
...
466 msg = "Could not serialize object: %s: %s" % (e.__class__.__name__, emsg)
467 print_exec(sys.stderr)
--> 468 raise pickle.PicklingError(msg)
PicklingError: Could not serialize object: IndexError: tuple index out of range
A:
After doing some reading I checked https://pyreadiness.org/3.11 and it looks like the latest version of Python is not supported by PySpark. I was able to resolve this problem by downgrading to Python 3.9.
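Until support lands, a small guard at the top of the script turns the cryptic pickling failure into an explicit error (the 3.11 cutoff reflects PySpark support at the time of writing):
import sys

if sys.version_info >= (3, 11):
    raise RuntimeError(
        f"Python {sys.version_info.major}.{sys.version_info.minor} is not "
        "supported by this PySpark version; use Python 3.9 instead."
    )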
|
IndexError: tuple index out of range when creating PySpark DataFrame
|
I want to create test data in a pyspark dataframe but I always get the same "tuple index out of range" error. I do not get this error when reading a csv. Would appreciate any thoughts on why I'm getting this error.
The first thing I tried was creating a pandas DataFrame and converting it to a PySpark DataFrame:
columns = ["id","col_"]
data = [("1", "blue"), ("2", "green"),
("3", "purple"), ("4", "red"),
("5", "yellow")]
df = pd.DataFrame(data=data, columns=columns)
sparkdf = spark.createDataFrame(df)
sparkdf.show()
output:
PicklingError: Could not serialize object: IndexError: tuple index out of range
I get the same error if I try to create the dataframe from RDD per SparkbyExamples.com instructions:
rdd = spark.sparkContext.parallelize(data)
sparkdf = spark.createDataFrame(rdd).toDF(*columns)
sparkdf.show()
I also tried the following and got the same error:
import pyspark.pandas as ps
df1 = ps.from_pandas(df)
Here is the full error when running the above code:
IndexError Traceback (most recent call last)
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\serializers.py:458, in CloudPickleSerializer.dumps(self, obj)
457 try:
--> 458 return cloudpickle.dumps(obj, pickle_protocol)
459 except pickle.PickleError:
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:73, in dumps(obj, protocol, buffer_callback)
70 cp = CloudPickler(
71 file, protocol=protocol, buffer_callback=buffer_callback
72 )
---> 73 cp.dump(obj)
74 return file.getvalue()
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:602, in CloudPickler.dump(self, obj)
601 try:
--> 602 return Pickler.dump(self, obj)
603 except RuntimeError as e:
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:692, in CloudPickler.reducer_override(self, obj)
691 elif isinstance(obj, types.FunctionType):
--> 692 return self._function_reduce(obj)
693 else:
694 # fallback to save_global, including the Pickler's
695 # dispatch_table
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:565, in CloudPickler._function_reduce(self, obj)
564 else:
--> 565 return self._dynamic_function_reduce(obj)
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:546, in CloudPickler._dynamic_function_reduce(self, func)
545 newargs = self._function_getnewargs(func)
--> 546 state = _function_getstate(func)
547 return (types.FunctionType, newargs, state, None, None,
548 _function_setstate)
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle_fast.py:157, in _function_getstate(func)
146 slotstate = {
147 "__name__": func.__name__,
148 "__qualname__": func.__qualname__,
(...)
154 "__closure__": func.__closure__,
155 }
--> 157 f_globals_ref = _extract_code_globals(func.__code__)
158 f_globals = {k: func.__globals__[k] for k in f_globals_ref if k in
159 func.__globals__}
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle.py:334, in _extract_code_globals(co)
331 # We use a dict with None values instead of a set to get a
332 # deterministic order (assuming Python 3.6+) and avoid introducing
333 # non-deterministic pickle bytes as a results.
--> 334 out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
336 # Declaring a function inside another one using the "def ..."
337 # syntax generates a constant code object corresponding to the one
338 # of the nested function's As the nested function may itself need
339 # global variables, we need to introspect its code, extract its
340 # globals, (look for code object in it's co_consts attribute..) and
341 # add the result to code_globals
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\cloudpickle\cloudpickle.py:334, in <dictcomp>(.0)
331 # We use a dict with None values instead of a set to get a
332 # deterministic order (assuming Python 3.6+) and avoid introducing
333 # non-deterministic pickle bytes as a results.
--> 334 out_names = {names[oparg]: None for _, oparg in _walk_global_ops(co)}
336 # Declaring a function inside another one using the "def ..."
337 # syntax generates a constant code object corresponding to the one
338 # of the nested function's As the nested function may itself need
339 # global variables, we need to introspect its code, extract its
340 # globals, (look for code object in it's co_consts attribute..) and
341 # add the result to code_globals
IndexError: tuple index out of range
During handling of the above exception, another exception occurred:
PicklingError Traceback (most recent call last)
Cell In [67], line 2
1 rdd = spark.sparkContext.parallelize(data)
----> 2 df1 = ps.from_pandas(df)
3 sparkdf = spark.createDataFrame(rdd).toDF(*columns)
4 #Create a dictionary from each row in col_
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\pandas\namespace.py:153, in from_pandas(pobj)
151 return Series(pobj)
152 elif isinstance(pobj, pd.DataFrame):
--> 153 return DataFrame(pobj)
154 elif isinstance(pobj, pd.Index):
155 return DataFrame(pd.DataFrame(index=pobj)).index
File c:\Users\jonat\AppData\Local\Programs\Python\Python311\Lib\site-packages\pyspark\pandas\frame.py:450, in DataFrame.__init__(self, data, index, columns, dtype, copy)
448 else:
449 pdf = pd.DataFrame(data=data, index=index, columns=columns, dtype=dtype, copy=copy)
--> 450 internal = InternalFrame.from_pandas(pdf)
452 object.__setattr__(self, "_internal_frame", internal)
...
466 msg = "Could not serialize object: %s: %s" % (e.__class__.__name__, emsg)
467 print_exec(sys.stderr)
--> 468 raise pickle.PicklingError(msg)
PicklingError: Could not serialize object: IndexError: tuple index out of range
|
[
"After doing some reading I checked https://pyreadiness.org/3.11 and it looks like the latest version of python is not supported by pyspark. I was able to resolve this problem by downgrading to python 3.9\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"pyspark",
"python"
] |
stackoverflow_0074579273_dataframe_pandas_pyspark_python.txt
|
Q:
How to fix pydev debugger error in Pycharm?
Yesterday I updated my Python; this caused my debugger to stop functioning properly.
I keep getting the following error in output:
-------------------------------------------------------------------------------
pydev debugger: CRITICAL WARNING: This version of python seems to be incorrectly compiled (internal generated filenames are not absolute)
pydev debugger: The debugger may still function, but it will work slower and may miss breakpoints.
pydev debugger: Related bug: http://bugs.python.org/issue1666807
-------------------------------------------------------------------------------
Connected to pydev debugger (build 222.4459.20)
pydev debugger: Unable to find real location for: <frozen codecs>
pydev debugger: Unable to find real location for: <frozen importlib._bootstrap>
pydev debugger: Unable to find real location for: <frozen importlib._bootstrap_external>
pydev debugger: Unable to find real location for: <frozen zipimport>
pydev debugger: Unable to find real location for: <frozen ntpath>
pydev debugger: Unable to find real location for: <frozen genericpath>
pydev debugger: Unable to find real location for: <frozen os>
pydev debugger: Unable to find real location for: <frozen _collections_abc>
pydev debugger: Unable to find real location for: <string>
pydev debugger: Unable to find real location for: <frozen abc>
pydev debugger: Unable to find real location for: <__array_function__ internals>
pydev debugger: Unable to find real location for: <frozen io>
pydev debugger: Unable to find real location for: <decorator-gen-0>
pydev debugger: Unable to find real location for: <decorator-gen-1>
pydev debugger: Unable to find real location for: <decorator-gen-2>
pydev debugger: Unable to find real location for: <decorator-gen-3>
pydev debugger: Unable to find real location for: <decorator-gen-4>
pydev debugger: Unable to find real location for: <decorator-gen-5>
pydev debugger: Unable to find real location for: <decorator-gen-6>
pydev debugger: Unable to find real location for: <frozen importlib.util>
pydev debugger: Unable to find real location for: <frozen runpy>
pydev debugger: Unable to find real location for: <decorator-gen-7>
pydev debugger: Unable to find real location for: <decorator-gen-8>
pydev debugger: Unable to find real location for: <decorator-gen-9>
pydev debugger: Unable to find real location for: <decorator-gen-10>
What can be done to fix it?
I tried a fresh install of Python and PyCharm; nothing really changed.
A:
So it seems there is a bug in the current PyCharm version. What helped me fix it was downloading the EAP version 2022.3; after that I did not receive any more errors.
|
How to fix pydev debugger error in Pycharm?
|
Yesterday I updated my Python; this caused my debugger to stop functioning properly.
I keep getting the following error in output:
-------------------------------------------------------------------------------
pydev debugger: CRITICAL WARNING: This version of python seems to be incorrectly compiled (internal generated filenames are not absolute)
pydev debugger: The debugger may still function, but it will work slower and may miss breakpoints.
pydev debugger: Related bug: http://bugs.python.org/issue1666807
-------------------------------------------------------------------------------
Connected to pydev debugger (build 222.4459.20)
pydev debugger: Unable to find real location for: <frozen codecs>
pydev debugger: Unable to find real location for: <frozen importlib._bootstrap>
pydev debugger: Unable to find real location for: <frozen importlib._bootstrap_external>
pydev debugger: Unable to find real location for: <frozen zipimport>
pydev debugger: Unable to find real location for: <frozen ntpath>
pydev debugger: Unable to find real location for: <frozen genericpath>
pydev debugger: Unable to find real location for: <frozen os>
pydev debugger: Unable to find real location for: <frozen _collections_abc>
pydev debugger: Unable to find real location for: <string>
pydev debugger: Unable to find real location for: <frozen abc>
pydev debugger: Unable to find real location for: <__array_function__ internals>
pydev debugger: Unable to find real location for: <frozen io>
pydev debugger: Unable to find real location for: <decorator-gen-0>
pydev debugger: Unable to find real location for: <decorator-gen-1>
pydev debugger: Unable to find real location for: <decorator-gen-2>
pydev debugger: Unable to find real location for: <decorator-gen-3>
pydev debugger: Unable to find real location for: <decorator-gen-4>
pydev debugger: Unable to find real location for: <decorator-gen-5>
pydev debugger: Unable to find real location for: <decorator-gen-6>
pydev debugger: Unable to find real location for: <frozen importlib.util>
pydev debugger: Unable to find real location for: <frozen runpy>
pydev debugger: Unable to find real location for: <decorator-gen-7>
pydev debugger: Unable to find real location for: <decorator-gen-8>
pydev debugger: Unable to find real location for: <decorator-gen-9>
pydev debugger: Unable to find real location for: <decorator-gen-10>
What can be done to fix it?
I tried a fresh install of Python and PyCharm; nothing really changed.
|
[
"So it seems that there is a bug in Pycharm current version and What helped me fix it is to download the EAP version 2022.3. I did not receive anymore errors.\n"
] |
[
0
] |
[] |
[] |
[
"debugging",
"pycharm",
"python"
] |
stackoverflow_0074583310_debugging_pycharm_python.txt
|
Q:
How can resources be provided in PyQt6 (which has no pyrcc)?
The documentation for PyQt6 states that
Support for Qt’s resource system has been removed (i.e. there is no pyrcc6).
In light of this, how should one provide resources for a PyQt6 application?
A:
There has been some discussion on the PyQt mailing list when this was found out.
The maintainer is not interested in maintaining pyrcc anymore as he believes that it doesn't provide any major benefit considering that python already uses multiple files anyway.
The easiest solution is probably to use the static QDir methods setSearchPaths() or addSearchPath().
The difference is that resources will then be loaded using the prefix registered with those methods.
Considering the previous situation:
icon = QtGui.QIcon(':/icons/myicon.png')
Now it would become like this:
# somewhere at the beginning of your program
QtCore.QDir.addSearchPath('icons', 'path_to_icons/')
icon = QtGui.QIcon('icons:myicon.png')
A:
UPDATE:
As of PyQt-6.3.1, it's possible to use Qt’s resource system again. (This version now includes the qRegisterResourceData and qUnregisterResourceData functions which are required by the generated python resource module.)
There's still no pyrcc6 tool, but Qt's own rcc tool can now be used to convert the qrc file. This tool should be installed by default with a full Qt6 installation, but if you can't find it, you could also use the PySide6 tools to convert the qrc file. (PySide6 simply searches for the Qt6 rcc tool and runs it using subprocess, so it will produce exactly the same output).
Thus, to convert the qrc file, you can now use either:
rcc -g python -o resources.py resources.qrc
or:
pyside6-rcc -o resources.py resources.qrc
However, it's very important to note that the import line at the top of the generated file must be modified to work correctly with PyQt6:
# Resource object code (Python 3)
# Created by: object code
# Created by: The Resource Compiler for Qt version 6.4.0
# WARNING! All changes made in this file will be lost!
# from PySide6 import QtCore <-- replace this line
from PyQt6 import QtCore
The whole operation can be done with this unix one-liner (requires GNU sed):
rcc -g python resources.qrc | sed '0,/PySide6/s//PyQt6/' > resources.py
or:
pyside6-rcc resources.qrc | sed '0,/PySide6/s//PyQt6/' > resources.py
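If GNU sed isn't available (for example on Windows), the same first-occurrence substitution can be done with a few lines of Python; a minimal sketch, assuming resources.py was already generated by one of the commands above:
# Minimal sketch: swap the PySide6 import in a generated resource
# module for PyQt6 (equivalent to the sed one-liner above).
from pathlib import Path

src = Path('resources.py')
text = src.read_text(encoding='utf-8')
src.write_text(text.replace('from PySide6 import QtCore',
                            'from PyQt6 import QtCore', 1),
               encoding='utf-8')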
Once this small change has been made, the generated module can be safely imported into the main application, like this:
from PyQt6 import QtCore, QtGui, QtWidgets
from test_ui import Ui_Window
import resources
class Window(QtWidgets.QWidget, Ui_Window):
def __init__(self):
super().__init__()
self.setupUi(self)
if __name__ == '__main__':
app = QtWidgets.QApplication(['Test'])
window = Window()
window.show()
app.exec()
Note that it is NOT SAFE to use the generated module without making the changes noted above. This is because the unmodified module will attempt to import PySide6, which is obviously inappropriate for a PyQt6 application. Whilst it may seem to work on the development machine, there's no guarantee that mixing the two libraries in this way will always remain compatible - and in any case, it's bad practice to enforce the installation of PySide6 on a user's system just so that they can run a PyQt6 application.
OLD ANSWER:
The consensus seems to be that the existing python facilities should be used instead of pyrcc. So the resources would be stored directly in the file-system (perhaps within archive files), and then located using importlib.resources (python >= 3.7), or pkg_resources, or a third-party solution like importlib_resources. Exactly how this maps to existing uses of pyrcc will probably be application-specific, so some experimentation will be needed to find the best approach.
For more details on how to use these facilities, see:
How to read a (static) file from inside a Python package?
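As a rough illustration of the importlib.resources approach (the package and file names here are hypothetical), an icon bundled inside an installed package could be loaded like this:
# Minimal sketch: locate a packaged icon with importlib.resources
# (files() requires Python >= 3.9); names are hypothetical.
# For zip-safe access, wrap the result with importlib.resources.as_file().
from importlib.resources import files
from PyQt6 import QtGui

icon_path = files('myapp.resources') / 'myicon.png'
icon = QtGui.QIcon(str(icon_path))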
A:
For those who want a real and simple solution, just watch it here: link
Someone figured it out by converting the "resource.qrc" into a .py file using PySide6's rcc (pyside6-rcc), then importing the resource.py (same as before) in your PyQt6 project. Everything is the same, including the special filepath syntax: ":/image.jpg" instead of "./image.jpg".
Hope it helps; it always feels good to have a simpler solution.
A:
When I started using PyQt6, I found that full support for Qt6 resources was missing, especially when using Designer with images for buttons, labels, etc.
I tried addSearchPath, but still had to edit the generated .py template.
After some research I found importlib the best solution for my problem.
I made a simple script which uses the .qrc file and generates .py templates with an import path.
For example changing:
icon = QtGui.QIcon()
icon.addPixmap(QtGui.QPixmap(":/icons/icon1.png"), QtGui.QIcon.Mode.Normal, QtGui.QIcon.State.Off)
to:
from importlib.resources import path

icon = QtGui.QIcon()
with path("myPackage.resources.icons", "icon1.png") as f_path:
    icon.addPixmap(QtGui.QPixmap(str(f_path)), QtGui.QIcon.Mode.Normal, QtGui.QIcon.State.Off)
Here is a link to the GitHub repo: https://github.com/domarm-comat/pyqt6rc
Or install via pip:
python3 -m pip install pyqt6rc
A:
The owner decided pyrcc6 wasn't useful and no longer provides it. However, what he doesn't understand is how useful it is for those of us who use Qt Designer to define resources such as icons, and then use PyInstaller to package and find all resources, so that PyInstaller can build a stand-alone exe with these icons properly embedded and used in the .ui user interface.
You may find this link useful (I had no success with it though): https://pypi.org/project/pyqt6rc/
Ultimately, even though I am using pyqt6, I also installed pyside6 using pip and used the pyside6-rcc command to do the same thing the old pyrcc5 command used to do.
The best full explanation can be found at this YouTube link:
https://www.youtube.com/watch?v=u5BLPTkbaM8
|
How can resources be provided in PyQt6 (which has no pyrcc)?
|
The documentation for PyQt6 states that
Support for Qt’s resource system has been removed (i.e. there is no pyrcc6).
In light of this, how should one provide resources for a PyQt6 application?
|
[
"There has been some discussion on the PyQt mailing list when this was found out.\nThe maintainer is not interested in maintaining pyrcc anymore as he believes that it doesn't provide any major benefit considering that python already uses multiple files anyway.\nThe easiest solution is probably to use the static methods of QDir setSearchPaths() or addSearchPath().\nThe difference will be that resources will be loaded using the prefix used for the methods above.\nConsidering the previous situation:\nicon = QtGui.QIcon(':/icons/myicon.png')\n\nNow it would become like this:\n# somewhere at the beginning of your program\nQtCore.QDir.addSearchPath('icons', 'path_to_icons/')\n\nicon = QtGui.QIcon('icons:myicon.png')\n\n",
"UPDATE:\nAs of PyQt-6.3.1, it's possible to use Qt’s resource system again. (This version now includes the qRegisterResourceData and qUnregisterResourceData functions which are required by the generated python resource module.)\nThere's still no pyrcc6 tool, but Qt's own rcc tool can now be used to convert the qrc file. This tool should be installed by default with a full Qt6 installation, but if you can't find it, you could also use the PySide6 tools to convert the qrc file. (PySide6 simply searches for the Qt6 rcc tool and runs it using subprocess, so it will produce exactly the same output).\nThus, to convert the qrc file, you can now use either:\nrcc -g python -o resources.py resources.qrc\n\nor:\npyside6-rcc -o resources.py resources.qrc\n\nHowever, it's very important to note that the import line at the top of the generated file must be modified to work correctly with PyQt6:\n# Resource object code (Python 3)\n# Created by: object code\n# Created by: The Resource Compiler for Qt version 6.4.0\n# WARNING! All changes made in this file will be lost!\n\n# from PySide6 import QtCore <-- replace this line\nfrom PyQt6 import QtCore\n\nThe whole operation can be done with this unix one-liner (requires GNU sed):\nrcc -g python resources.qrc | sed '0,/PySide6/s//PyQt6/' > resources.py\n\nor:\npyside6-rcc reources.qrc | sed '0,/PySide6/s//PyQt6/' > resources.py \n\nOnce this small change has been made, the generated module can be safely imported into the main application, like this:\nfrom PyQt6 import QtCore, QtGui, QtWidgets\nfrom test_ui import Ui_Window\nimport resources\n\nclass Window(QtWidgets.QWidget, Ui_Window):\n def __init__(self):\n super().__init__()\n self.setupUi(self)\n\nif __name__ == '__main__':\n\n app = QtWidgets.QApplication(['Test'])\n window = Window()\n window.show()\n app.exec()\n\nNote that it is NOT SAFE to use the generated module without making the changes noted above. This is because the unmodfied module will attempt to import PySide6, which is obviously inappropriate for a PyQt6 application. Whilst it may seem to work on the development machine, there's no guarantee that mixing the two libararies in this way will always remain compatible - and in any case, it's bad practice to enforce the installation of PySide6 on a user's system just so that they can run a PyQt6 application.\n\nOLD ANSWER:\nThe consensus seems to be that the existing python facilities should be used instead of pyrrc. So the resources would be stored directly in the file-system (perhaps within archive files), and then located using importlib.resources (python >= 3.7), or pkg_resources, or a third-party solution like importlib_resources. Exactly how this maps to existing uses of pyrcc will probably be application-specific, so some experimentation will be needed to find the best approach.\nFor more details on how to use these facilities, see:\n\nHow to read a (static) file from inside a Python package?\n\n",
"for those people who want a real and simple solution just watch it here: link\na guy figured it out by converting the \"resource.qrc\" into a .py file by using the pyrcc of PySide6. then importing the resource.py (same as before) in your PyQt6 project. everything is the same, including the special filepath syntax: \":/image.jpg\" instead of \"./image.jpg\"\nhope it helps, always feels good to have a simpler solution.\n",
"As I started to use PyQt6, I found missing full support for Qt6 Resources.\nEspecially when using designer and using images for buttons, labels etc.\nI tried addSearchPath, but still had to edit generated .py template.\nAfter some research I found using importlab the best solution for my problem.\nI made simple script, which is using .qrc file and generates .py templates with importpath.\nFor example changing:\nicon = QtGui.QIcon()\nicon.addPixmap(QtGui.QPixmap(\":/icons/icon1.png\"), QtGui.QIcon.Mode.Normal, QtGui.QIcon.State.Off)\n\nto:\nicon = QtGui.QIcon()\nwith path(\"myPackage.resources.icons\", \"icon1.png\") as f_path:\n icon.addPixmap(QtGui.QPixmap(str(f_path)), QtGui.QIcon.Mode.Normal, QtGui.QIcon.State.Off)\n\nHere is a link to GitLab repo: https://github.com/domarm-comat/pyqt6rc\nOr install via pip:\npython3 -m pip install pyqt6rc\n\n",
"The owner decided pyrcc6 wasn't useful and no longer provides it. However, what he doesn't understand is how useful it is for those of us that use qt designer to define our resources like icons.. etc and then using pyinstaller to package and find all resources, so that pyinstaller can build a stand-alone exe and these icons are properly embedded and used in the .ui user interface.\nYou may find this link useful (I had no success with it though): https://pypi.org/project/pyqt6rc/\nUltimately, even though I am using pyqt6, I also installed pyside6 using pip and used pyside6-rcc command to do the same thing that the old pyrcc5 command used to do.\nThe best full explanation can be found at this YouTube link:\nhttps://www.youtube.com/watch?v=u5BLPTkbaM8\n"
] |
[
9,
6,
1,
0,
0
] |
[] |
[] |
[
"pyqt",
"pyqt6",
"pyrcc",
"python",
"resources"
] |
stackoverflow_0066099225_pyqt_pyqt6_pyrcc_python_resources.txt
|
Q:
How do I use the debug console in VSCode?
I really like the debug console feature in VSCode; it makes Python writing a lot easier for me. How do I get it to stay open? Is it possible to configure launch.json so that the run doesn't close after the code finishes?
I can use 'time.sleep()' to keep this console open.
Can I edit the 'launch.json'?
What are other ways?
A:
It's not possible to keep the debug console open after the script ends, because the memory is released back to the operating system.
Edit: as @nigh_anxiety mentioned, setting a breakpoint at the end of the script is probably a more elegant solution.
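A minimal sketch of that approach — the built-in breakpoint() is honoured by the VS Code debugger, so execution pauses on the last line and the debug console stays available (do_work is a hypothetical stand-in for your code):
result = do_work()
print(result)

breakpoint()  # pauses here under the debugger, keeping the console open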
Old Answer:
Instead, you could wait for user input before exiting, with a line like
input('press enter to quit')
at the end of your file.
A:
The interactive window should work for you. Open an interactive window with the command Jupyter: Create Interactive Window
Or right-click on the code editor interface and select Run Current File in Python Interactive Window.
In this window, you can directly run the code file or write the code and then shift+enter to run.
More details can be found in the documentation.
|
How do I use the debug console in VSCode?
|
I really like the debug console feature in VSCode; it makes Python writing a lot easier for me. How do I get it to stay open? Is it possible to configure launch.json so that the run doesn't close after the code finishes?
I can use 'time.sleep()' to keep this console open.
Can I edit the 'launch.json'?
What are other ways?
|
[
"It's not possible to keep the debug console open after the script ends, because the memory is released back to the operating system.\nEdit: as @nigh_anxiety mentioned, setting a breakpoint at the end of the script is probably a more elegant solution.\n\nOld Answer:\nInstead, you could wait for user input before exiting, with a line like\ninput('press enter to quit')\n\nat the end of your file.\n",
"The interactive window should work for you. Open an interactive window with the command Jupyter: Create Interactive Window\n\nOr right-click on the code editor interface and select Run Current File in Python Interactive Window.\n\nIn this window, you can directly run the code file or write the code and then shift+enter to run.\nMore details can be found in the documentation.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"visual_studio_code"
] |
stackoverflow_0074567962_python_visual_studio_code.txt
|