| repo_name (string, length 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, length 1-976) | body (string, length 0-254k) | state (string, 2 classes) | created_at (string, length 20) | updated_at (string, length 20) | url (string, length 38-105) | labels (list, length 0-9) | user_login (string, length 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
laughingman7743/PyAthena
|
sqlalchemy
| 576
|
Using VPC endpoint and PandasCursor together
|
I am accessing Athena from a closed network via a VPC endpoint.
Specifying the VPC endpoint URL via `endpoint_url=` works as expected, but it does not work when combined with `PandasCursor`.
I checked the code and found that `endpoint_url=` is also applied when creating the boto3 client for S3, and I suspect that this is the cause of the error.
If possible, I would appreciate it if `endpoint_url=` and `PandasCursor` could be used together.
- Python: 3.12.1
- PyAthena: 3.12.2
```python
from pyathena import connect
from pyathena.pandas.cursor import PandasCursor

cursor = connect(
    work_group='XXXXXXX',
    endpoint_url='https://vpce-XXXXXXX.athena.XXXXXXX.vpce.amazonaws.com',
    region_name='XXXXXXX').cursor(PandasCursor)
df = cursor.execute('''
    SELECT * FROM XXXXXXX.XXXXXXX LIMIT 10
''').as_pandas()
print(df)
```
```
Failed to get content length.
Traceback (most recent call last):
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\result_set.py", line 434, in _get_content_length
response = retry_api_call(
^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\util.py", line 84, in retry_api_call
return retry(func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 376, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\botocore\client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\botocore\client.py", line 1023, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
Traceback (most recent call last):
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\result_set.py", line 434, in _get_content_length
response = retry_api_call(
^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\util.py", line 84, in retry_api_call
return retry(func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 376, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\tenacity\__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\botocore\client.py", line 569, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\botocore\client.py", line 1023, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\tmp\sample3.py", line 10, in <module>
df = cursor.execute('''
^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\pandas\cursor.py", line 162, in execute
self.result_set = AthenaPandasResultSet(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\pandas\result_set.py", line 143, in __init__
df = self._as_pandas()
^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\pandas\result_set.py", line 386, in _as_pandas
df = self._read_csv()
^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\pandas\result_set.py", line 269, in _read_csv
length = self._get_content_length()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kitauchi.Shinji\AppData\Local\Programs\Python\Python312\Lib\site-packages\pyathena\result_set.py", line 443, in _get_content_length
raise OperationalError(*e.args) from e
pyathena.error.OperationalError: An error occurred (404) when calling the HeadObject operation: Not Found
```
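For illustration, here is a minimal sketch of the suspected mismatch: the Athena VPC endpoint is only valid for the Athena API, while the S3 client that downloads the query results needs the regular S3 endpoint (or its own S3 VPC endpoint). The client setup below is hypothetical and is not PyAthena's actual code.
```python
import boto3

# Hypothetical sketch: only the Athena client should receive the Athena VPC endpoint.
athena = boto3.client(
    "athena",
    region_name="XXXXXXX",
    endpoint_url="https://vpce-XXXXXXX.athena.XXXXXXX.vpce.amazonaws.com",
)

# The S3 client used to fetch query results needs the default S3 endpoint
# (or a dedicated S3 VPC endpoint); pointing it at the Athena endpoint is the
# suspected cause of the HeadObject 404 above.
s3 = boto3.client("s3", region_name="XXXXXXX")
```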
|
open
|
2025-02-28T04:18:59Z
|
2025-03-03T05:46:20Z
|
https://github.com/laughingman7743/PyAthena/issues/576
|
[] |
KitauchiShinji
| 3
|
plotly/dash-recipes
|
dash
| 40
|
Concern about choosing directory when exporting datatable
|
Hello,
Is it possible to choose a directory (path) when exporting datatable in dash? Thanks.
<img width="960" alt="111" src="https://user-images.githubusercontent.com/68303975/130564249-4fbcbd62-acf2-427e-8be5-5dd7a94fc1a1.png">
|
open
|
2021-08-24T06:01:11Z
|
2021-08-24T06:01:11Z
|
https://github.com/plotly/dash-recipes/issues/40
|
[] |
perryisbusy
| 0
|
fastapi/sqlmodel
|
sqlalchemy
| 318
|
no such table, SQLModel can't find the table
|
pls delete this issue
|
open
|
2022-04-29T06:44:51Z
|
2022-09-05T19:44:24Z
|
https://github.com/fastapi/sqlmodel/issues/318
|
[
"question"
] |
mr-m0nst3r
| 1
|
JaidedAI/EasyOCR
|
deep-learning
| 1,106
|
My_first_lang is only compatible with English???
|
My code:
```python
reader = Reader(['en'], recog_network='my_first_lang',
                model_storage_directory=basepath + '/model',
                user_network_directory=basepath + '/user_network')
```
My file:

|
open
|
2023-08-07T13:26:50Z
|
2023-09-25T22:12:50Z
|
https://github.com/JaidedAI/EasyOCR/issues/1106
|
[] |
xiayuer0114
| 1
|
seleniumbase/SeleniumBase
|
pytest
| 3,586
|
`sb.wait_for_text_not_visible()` wasn't mapping to the correct CDP Mode method
|
### `sb.wait_for_text_not_visible()` wasn't mapping to the correct CDP Mode method
----
This would cause failures in CDP Mode when calling regular SB methods directly.
|
closed
|
2025-03-05T17:31:03Z
|
2025-03-05T18:25:39Z
|
https://github.com/seleniumbase/SeleniumBase/issues/3586
|
[
"bug",
"UC Mode / CDP Mode"
] |
mdmintz
| 1
|
wsvincent/awesome-django
|
django
| 193
|
🤔 Issue template ideas
|
We need an issue template to help with some expectations. Here is my quick riff on what we might want:
----
Our goal with Awesome Django is to highlight packages that we think are awesome and stand out above the rest.
Our goal isn't to be a comprehensive directory of 1000+ projects like [Django Packages](https://djangopackages.org).
We are looking for projects that:
- are relevant to Django
- are maintained
- have a release and support history
- stand out because they are useful and solve a unique problem
- have a reasonable star count (we can't ignore stars, but we don't have a high number in mind)
What we are NOT looking for:
- unmaintained projects
- projects that exist mainly to promote your project, service, or employer
----
- [ ] What makes this product awesome?
- [ ] Are you the author or a maintainer? (no points off for self-promotion)
- [ ] If your project is brand new, we don't have a minimum number of GH stars, but your project needs "enough" stars.
- [ ] Is this project maintained?
- [ ] If your project is published on PyPI, is there a history/pattern of keeping it updated?
- [ ] If your project/service is a paid product, do you work for the same company? (emphasis on disclosure vs. promoting your product/service/company)
- [ ] Django and Python are trademarked by their respective foundations, if your product, paid service, and/or domain name use these trademarks, do you have their permission to do so?
|
closed
|
2022-08-25T15:43:26Z
|
2023-07-15T14:23:42Z
|
https://github.com/wsvincent/awesome-django/issues/193
|
[
"no-issue-activity"
] |
jefftriplett
| 5
|
plotly/dash
|
dash
| 3,214
|
[BUG] dcc.Dropdown width rendering incorrect with Dash 3 rc4
|
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 3.0.0rc4
dash-core-components 2.0.0
dash_design_kit 1.14.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [e.g. iOS] MacOS
- Browser [e.g. chrome, safari] Chrome & Safari
- Version [e.g. 22] `Version 134.0.6998.88` Arm64
**Describe the bug**
`dcc.Dropdown` renders squashed with Dash 3, whereas it renders at full-width with Dash 2.x
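A minimal layout sketch of the kind of app where the difference shows up (hypothetical; the original report does not include app code, and the component id is made up):
```python
from dash import Dash, dcc, html

app = Dash(__name__)
app.layout = html.Div(
    [
        # Renders full-width under Dash 2.x; appears squashed under 3.0.0rc4.
        dcc.Dropdown(["New York", "Montreal", "Paris"], "Montreal", id="city-dropdown"),
    ],
    style={"width": "50%"},
)

if __name__ == "__main__":
    app.run(debug=True)
```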
**Expected behavior**
The `dcc.Dropdown` should render the same way between Dash 2.x and Dash 3.x, if there have been no code changes in the app.
**Screenshots**
Dash 3.0

Dash 2.0

|
open
|
2025-03-13T14:07:28Z
|
2025-03-18T14:10:16Z
|
https://github.com/plotly/dash/issues/3214
|
[
"bug",
"P1",
"dash-3.0"
] |
susodapop
| 0
|
samuelcolvin/dirty-equals
|
pytest
| 48
|
fix `IsNow`
|
The following should pass
```py
from dirty_equals import IsNow

assert '2022-07-15T10:56:38.311Z' == IsNow(delta=10, tz='utc', format_string='%Y-%m-%dT%H:%M:%S.%fZ', enforce_tz=False)
```
(ignoring that that timestamp is obviously no longer "now")
|
open
|
2022-07-15T11:00:27Z
|
2024-05-22T08:36:25Z
|
https://github.com/samuelcolvin/dirty-equals/issues/48
|
[] |
samuelcolvin
| 2
|
albumentations-team/albumentations
|
machine-learning
| 2,227
|
CoarseDropout does not work with relative hole_height_range and hole_width_range
|
## Describe the bug
In contrast to the docstring of CoarseDropout, the two parameters `hole_height_range` and `hole_width_range` cannot be given as relative values. This is most probably due to a Validator which was introduced recently (1.4.23) for these two parameters: `AfterValidator(check_range_bounds(1, None))` in `class InitSchema(BaseDropout.InitSchema)`
### To Reproduce
Albumentations >= 1.4.23
```python
from albumentations import CoarseDropout

augmentation = CoarseDropout(
    hole_width_range=(0.25, 0.5),
    hole_height_range=(0.5, 0.75),
    num_holes_range=(1, 100),
    p=1.0,
)
```
### Expected behavior
According to the docstring this should work.
### Actual behavior
Error:
```pydantic_core._pydantic_core.ValidationError: 2 validation errors for InitSchema
hole_height_range
Value error, All values in (0.5, 0.75) must be >= 1 [type=value_error, input_value=(0.5, 0.75), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.10/v/value_error
hole_width_range
Value error, All values in (0.25, 0.5) must be >= 1 [type=value_error, input_value=(0.25, 0.5), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.10/v/value_error
```
|
closed
|
2025-01-02T12:34:12Z
|
2025-01-03T19:29:20Z
|
https://github.com/albumentations-team/albumentations/issues/2227
|
[
"bug"
] |
kpms-bastian-kronenbitter
| 1
|
geopandas/geopandas
|
pandas
| 2,534
|
QST: Using geopandas in Flask, the program exits directly
|
- [x] I have searched the [geopandas] tag on [StackOverflow](https://stackoverflow.com/questions/tagged/geopandas) and [GIS StackExchange](https://gis.stackexchange.com/questions/tagged/geopandas) for similar questions.
- [ ] I have asked my usage related question on [StackOverflow](https://stackoverflow.com) or [GIS StackExhange](https://gis.stackexchange.com).
--- my question
A simple piece of code reads a shapefile. If I run it in a plain Python environment, **it shows the result**,
like this:

**But** when I put it into a **Flask** program and call it through a Flask route, the program exits immediately.
Here is my Flask code:
```python
@DataTransformation.route('/TransShape2Jsonfile', methods=['GET', 'POST'])
def TransShape2Jsonfile():
    s2g.TransShapefile2JsonFile()
    return 'Data Trans Over'
```

`gpd.read_file(FilePath, encoding='gbk')`
After testing, I found that it is the **read_file()** function that cannot run: when the program reaches this line, it terminates immediately.
|
closed
|
2022-08-23T07:26:06Z
|
2023-01-05T22:01:47Z
|
https://github.com/geopandas/geopandas/issues/2534
|
[
"question"
] |
Roey1996
| 2
|
mlflow/mlflow
|
machine-learning
| 14,190
|
[BUG] Cannot use self-signed url in OpenAI Model
|
### Issues Policy acknowledgement
- [X] I have read and agree to submit bug reports in accordance with the [issues policy](https://www.github.com/mlflow/mlflow/blob/master/ISSUE_POLICY.md)
### Where did you encounter this bug?
Databricks
### MLflow version
- Client: 2.19.0
- Openai 1.40.2
### System information
- Databricks Runtime 16.0 ML (includes Apache Spark 3.5.0, Scala 2.12)
### Describe the problem
The OpenAI deployment cannot access URLs signed by a private CA.
The `AzureOpenAI` instance, as seen below, does not allow configuring custom certificates for the HTTP client:
https://github.com/mlflow/mlflow/blob/3c86c188dbd76c614373f96cb3c03871458aba9c/mlflow/openai/__init__.py#L674-L683
The suggested method, [provided by OpenAI](https://github.com/openai/openai-python?tab=readme-ov-file#configuring-the-http-client), is this configuration:
```python
import httpx
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    # Or use the `OPENAI_BASE_URL` env var
    base_url="http://my.test.server.example.com:8083/v1",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
        transport=httpx.HTTPTransport(local_address="0.0.0.0"),
    ),
)
```
My suggestion is to add a new configuration, `OPENAI_SSL_VERIFY`. The implementation should be based on this:
```python
from openai import AzureOpenAI, DefaultHttpxClient

# Note: DefaultHttpxClient is just a thin proxy to httpx.Client
http_client = DefaultHttpxClient(verify=self.api_config.ssl_verify)
return AzureOpenAI(
    api_key=self.api_token.token,
    azure_endpoint=self.api_config.api_base,
    api_version=self.api_config.api_version,
    azure_deployment=self.api_config.deployment_id,
    max_retries=max_retries,
    timeout=timeout,
    http_client=http_client,
)
```
### Tracking information
```
System information: Linux #84~20.04.1-Ubuntu SMP Mon Nov 4 18:58:41 UTC 2024
Python version: 3.12.3
MLflow version: 2.19.0
MLflow module location: /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/__init__.py
Tracking URI: databricks
Registry URI: databricks
Databricks runtime version: 16.0
MLflow environment variables:
MLFLOW_CONDA_HOME: /databricks/conda
MLFLOW_DEPLOYMENTS_TARGET: databricks
MLFLOW_GATEWAY_URI: databricks
MLFLOW_PYTHON_EXECUTABLE: /databricks/spark/scripts/mlflow_python.sh
MLFLOW_TRACKING_URI: databricks
MLflow dependencies:
Flask: 2.2.5
Jinja2: 3.1.4
aiohttp: 3.9.5
alembic: 1.14.0
azure-storage-file-datalake: 12.17.0
boto3: 1.34.69
botocore: 1.34.69
docker: 7.1.0
google-cloud-storage: 2.10.0
graphene: 3.4.3
gunicorn: 20.1.0
langchain: 0.2.12
markdown: 3.4.1
matplotlib: 3.8.4
mlflow-skinny: 2.19.0
numpy: 1.26.4
pandas: 1.5.3
pyarrow: 15.0.2
pydantic: 2.8.2
scikit-learn: 1.4.2
scipy: 1.13.1
sqlalchemy: 2.0.30
tiktoken: 0.7.0
virtualenv: 20.26.2
```
### Code to reproduce issue
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```python
import os
import openai
import mlflow

os.environ["AZURE_OPENAI_API_KEY"] = "xxxxxxx"  # Any value
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://self-signed.badssl.com/"  # Any self-signed url, like this one
os.environ["OPENAI_API_VERSION"] = "2024-06-01"
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_DEPLOYMENT_NAME"] = "text-embedding-3-small"

print(mlflow.__version__, openai.__version__)  # Yields ('2.19.0', '1.40.2')

with mlflow.start_run():
    model_info = mlflow.openai.log_model(
        model="text-embedding-3-small",
        task=openai.embeddings,
        artifact_path="model",
    )

# Fix error when the model has no openai key... but this is another bug
os.environ["OPENAI_API_KEY"] = os.environ['AZURE_OPENAI_API_KEY']

# Load the model in pyfunc format
model = mlflow.pyfunc.load_model(model_info.model_uri)

# This will raise: Request #0 failed with: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1000)
results = model.predict([
    "This is a test"
])

# We can download the self-signed certificate... but how do we use it?
# echo "" | openssl s_client -connect self-signed.badssl.com:443 -prexit 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p'
```
### Stack trace
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
REPLACE_ME
```
### Other info / logs
<!-- PLEASE KEEP BACKTICKS AND CHECK PREVIEW -->
```
APIConnectionError('Connection error.')Traceback (most recent call last):
File "/databricks/python/lib/python3.12/site-packages/httpx/_transports/default.py", line 72, in map_httpcore_exceptions
yield
File "/databricks/python/lib/python3.12/site-packages/httpx/_transports/default.py", line 236, in handle_request
resp = self._pool.handle_request(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 154, in _connect
stream = stream.start_tls(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpcore/_backends/sync.py", line 152, in start_tls
with map_exceptions(exc_map):
File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
self.gen.throw(value)
File "/databricks/python/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1000)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 972, in _request
response = self._client.send(
^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_client.py", line 926, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_client.py", line 954, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_client.py", line 991, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_client.py", line 1027, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/httpx/_transports/default.py", line 235, in handle_request
with map_httpcore_exceptions():
File "/usr/lib/python3.12/contextlib.py", line 158, in __exit__
self.gen.throw(value)
File "/databricks/python/lib/python3.12/site-packages/httpx/_transports/default.py", line 89, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1000)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/openai/_openai_autolog.py", line 181, in patched_call
raw_result = original(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/utils/autologging_utils/safety.py", line 573, in call_original
return call_original_fn_with_event_logging(_original_fn, og_args, og_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/utils/autologging_utils/safety.py", line 508, in call_original_fn_with_event_logging
original_fn_result = original_fn(*og_args, **og_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.12/site-packages/mlflow/utils/autologging_utils/safety.py", line 570, in _original_fn
original_result = original(*_og_args, **_og_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/resources/embeddings.py", line 114, in create
return self._post(
^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1259, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 936, in request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 996, in _request
return self._retry_request(
^^^^^^^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1074, in _retry_request
return self._request(
^^^^^^^^^^^^^^
File "/databricks/python/lib/python3.12/site-packages/openai/_base_client.py", line 1006, in _request
raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
```
### What component(s) does this bug affect?
- [ ] `area/artifacts`: Artifact stores and artifact logging
- [ ] `area/build`: Build and test infrastructure for MLflow
- [X] `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- [ ] `area/docs`: MLflow documentation pages
- [ ] `area/examples`: Example code
- [X] `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- [X] `area/models`: MLmodel format, model serialization/deserialization, flavors
- [ ] `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- [ ] `area/projects`: MLproject format, project running backends
- [ ] `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- [ ] `area/server-infra`: MLflow Tracking server backend
- [ ] `area/tracking`: Tracking Service, tracking client APIs, autologging
### What interface(s) does this bug affect?
- [ ] `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- [ ] `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- [ ] `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- [ ] `area/windows`: Windows support
### What language(s) does this bug affect?
- [ ] `language/r`: R APIs and clients
- [ ] `language/java`: Java APIs and clients
- [ ] `language/new`: Proposals for new client languages
### What integration(s) does this bug affect?
- [X] `integrations/azure`: Azure and Azure ML integrations
- [ ] `integrations/sagemaker`: SageMaker integrations
- [ ] `integrations/databricks`: Databricks integrations
|
open
|
2025-01-06T16:54:22Z
|
2025-01-09T00:20:33Z
|
https://github.com/mlflow/mlflow/issues/14190
|
[
"bug",
"area/model-registry",
"area/models",
"integrations/azure",
"area/deployments"
] |
brunodoamaral
| 1
|
deepinsight/insightface
|
pytorch
| 1,897
|
About comparing InsightFace model embeddings
|
I'm really curious: how do you measure the distance between two embeddings?
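For reference, a common way to compare two face embeddings is cosine similarity on L2-normalized vectors; this is a generic sketch, not InsightFace's own API, and the threshold mentioned in the comment is only illustrative.
```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Normalize both embeddings, then take the dot product.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

# Higher similarity means the two faces are more likely the same identity;
# in practice a model-dependent threshold is tuned on a validation set.
```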
|
open
|
2022-01-27T15:43:43Z
|
2022-03-04T11:22:46Z
|
https://github.com/deepinsight/insightface/issues/1897
|
[] |
nadongjin
| 1
|
jessevig/bertviz
|
nlp
| 30
|
visualization of only 3 layers / example model_view_xlnet.ipynb
|
I tried to load XLNet with only three layers (it does work with the full XLNet), but with three layers the example model_view_xlnet.ipynb does not work:
```python
# imports assumed from the transformers library
from transformers import XLNetConfig, XLNetModel

config = XLNetConfig.from_pretrained('/transformers/')
config.n_layer = 3
config.num_labels = 3
model = XLNetModel.from_pretrained('/transformers/')
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-7c9c3356caa4> in <module>
17 input_id_list = input_ids[0].tolist() # Batch index 0
18 tokens = tokenizer.convert_ids_to_tokens(input_id_list)
---> 19 model_view(attention, tokens)
~/projects/bertviz/bertviz/model_view.py in model_view(attention, tokens, sentence_b_start, prettify_tokens)
78 attn_seq_len = len(attn_data['all']['attn'][0][0])
79 if attn_seq_len != len(tokens):
---> 80 raise ValueError(f"Attention has {attn_seq_len} positions, while number of tokens is {len(tokens)}")
81 display(Javascript('window.params = %s' % json.dumps(params)))
82 display(Javascript(vis_js))
ValueError: Attention has 768 positions, while number of tokens is 14
```
|
closed
|
2019-12-26T05:39:42Z
|
2019-12-31T17:52:12Z
|
https://github.com/jessevig/bertviz/issues/30
|
[] |
cherepanovic
| 1
|
recommenders-team/recommenders
|
data-science
| 1,601
|
[ASK] What's the system requirements on Azure ML to run LightGCN?
|
### Description
I see that it takes 47 seconds to run 50 epochs on the MovieLens 100k dataset in the notebook below. I am curious what the system requirements are if I want to reproduce similar results in Azure ML or Azure Databricks.
https://github.com/microsoft/recommenders/blob/main/examples/02_model_collaborative_filtering/lightgcn_deep_dive.ipynb
### Other Comments
|
closed
|
2021-12-30T08:56:53Z
|
2022-03-17T11:43:23Z
|
https://github.com/recommenders-team/recommenders/issues/1601
|
[
"help wanted"
] |
rwforest
| 3
|
WeblateOrg/weblate
|
django
| 13,766
|
Automatic translation add-on setting missing "Add as approved translation"
|
### Describe the issue
When configuring the automatic translation using the add-on, there is no option to choose "Add as approved translation" in the "Automatic translation mode" dropdown
<img width="874" alt="Image" src="https://github.com/user-attachments/assets/cf1369cd-4c9b-4179-ba43-8b770706325d" />
However, the same is available when automatic translation is selected from `Tools > Automatic Translation`
<img width="1168" alt="Image" src="https://github.com/user-attachments/assets/2c03afef-2e05-48ec-91e1-b337265f6a7c" />
### I already tried
- [x] I've read and searched [the documentation](https://docs.weblate.org/).
- [x] I've searched for similar filed issues in this repository.
### Steps to reproduce the behavior
1. Go to Add-ons
2. Install "Automatic Translation" add-on
3. Configure and open dropdown "Automatic translation mode"
### Expected behavior
To have option "Add as approved translation"
### Screenshots
_No response_
### Exception traceback
```pytb
```
### How do you run Weblate?
Docker container
### Weblate versions
5.9.2
### Weblate deploy checks
```shell
```
### Additional context
_No response_
|
closed
|
2025-02-06T10:25:02Z
|
2025-02-06T13:56:50Z
|
https://github.com/WeblateOrg/weblate/issues/13766
|
[
"bug"
] |
anuj-scanova
| 3
|
pytorch/pytorch
|
python
| 149,634
|
[ONNX] Improve onnx ops docs
|
https://pytorch.org/docs/main/onnx_ops.html
Improve example to show the onnx op being used with torch ops.
|
closed
|
2025-03-20T17:01:17Z
|
2025-03-21T03:24:56Z
|
https://github.com/pytorch/pytorch/issues/149634
|
[
"module: onnx",
"triaged"
] |
justinchuby
| 0
|
pytorch/vision
|
machine-learning
| 8,938
|
`num_ops` argument of `RandAugment()` shouldn't accept negative values
|
### 🐛 Describe the bug
Setting negative values for the `num_ops` argument of [RandAugment()](https://pytorch.org/vision/master/generated/torchvision.transforms.v2.RandAugment.html) doesn't apply any augmentation transformations at all, as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import RandAugment

my_data = OxfordIIITPet(
    root="data",
    transform=RandAugment(num_ops=-1)
    # transform=RandAugment(num_ops=-10)
    # transform=RandAugment(num_ops=-100)
)
my_data[0][0]
```

Also, the `num_ops` argument is the number of augmentation transformations to apply, according to [the doc](https://pytorch.org/vision/master/generated/torchvision.transforms.v2.RandAugment.html), as shown below:
> Parameters:
> - num_ops ([int](https://docs.python.org/3/library/functions.html#int), optional) – Number of augmentation transformations to apply sequentially
So the `num_ops` argument of `RandAugment()` shouldn't accept negative values.
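For what it's worth, the kind of guard being requested might look roughly like this (a hypothetical sketch, not torchvision's actual implementation):
```python
def _validate_num_ops(num_ops: int) -> int:
    # Reject negative values instead of silently applying no transformations.
    if num_ops < 0:
        raise ValueError(f"num_ops must be a non-negative integer, got {num_ops}")
    return num_ops
```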
### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
```
|
open
|
2025-02-25T13:04:04Z
|
2025-02-25T13:16:26Z
|
https://github.com/pytorch/vision/issues/8938
|
[] |
hyperkai
| 1
|
randyzwitch/streamlit-folium
|
streamlit
| 173
|
Map Disappearing
|
This is a really great plugin!
With that said, I'm encountering an issue where, when I try to render a map, it disappears and reappears inconsistently. See the video below.
https://www.loom.com/share/79e544d6eea74fd7a898691d217d12a0?sid=d25745ae-b4ea-47a3-8d89-80621fe7c652
For this example, here are the relevant code sections:
```python
def get_pricing():
    """
    This function gets all the pricing information from Prycd and updates the pricing dataframe
    :return:
    """
    global prycd
    apn = apn_textarea
    # Check for empty fields
    if apn is None or len(apn) == 0:
        st.error("Please enter a valid APN.")
        return
    county = county_selectbox
    if county is None or len(county) == 0:
        st.error("Please select a valid county and state.")
        return
    fips = __get_fips_code(county)
    md = f"""
## Pricing Data
The table below displays the pricing for a property located in {county} with assessor's property number {apn}.
"""
    st.markdown(md)
    pricing_results = __get_pricing_info(apn, fips)
    st.dataframe(data=pricing_results,
                 use_container_width=True,
                 hide_index=True,
                 column_config={
                     "price": st.column_config.NumberColumn(label="Price", format="$%.2f"),
                     "price_per_acre": st.column_config.NumberColumn(label="Price Per Acre", format="$%.2f"),
                     "confidence": st.column_config.Column(label="Confidence"),
                     "meta.county": st.column_config.Column(label="County", help="The county where the property is located."),
                     "meta.confidence.Price Coefficient of Variation": st.column_config.NumberColumn(label="Coefficient of Variation"),
                     "meta.confidence.Acreage Coefficient of Variation": st.column_config.NumberColumn(
                         label="Acreage Coefficient of Variation"),
                     "meta.confidence.Number of Total Comps": st.column_config.NumberColumn(
                         label="Total Comps"),
                     "meta.confidence.Number of Sold Comps": st.column_config.NumberColumn(
                         label="Sold Comps")
                 })
    m = folium.Map(location=[39.949610, -75.150282], zoom_start=16)
    folium.Marker(
        [39.949610, -75.150282], popup="Liberty Bell", tooltip="Liberty Bell"
    ).add_to(m)
    # call to render Folium map in Streamlit
    st_data = st_folium(m, width=725)
    # Now populate the comps
    min_acreage = comp_size_range[0]
    max_acreage = comp_size_range[1]
    comps_results = pd.DataFrame(__get_comps(county, state_selectbox, min_acreage, max_acreage))
    # If there are no comps, display a message explaining that and halt.
    if len(comps_results) == 0:
        st.warning("No comps were found meeting this criteria.")
        return
    ...

# This function is called by a button click in a sidebar here...
with st.sidebar:
    ....
    submit_button = st.button("Submit", on_click=get_pricing)
```
Any help would be greatly appreciated. I've confirmed that this issue exists in both Firefox and Safari. When I don't include the folium map, the page loads as expected.
|
closed
|
2024-03-20T03:55:38Z
|
2024-11-11T08:42:07Z
|
https://github.com/randyzwitch/streamlit-folium/issues/173
|
[] |
cgivre
| 5
|
giotto-ai/giotto-tda
|
scikit-learn
| 556
|
Increase number of bibliographical entries in glossary
|
I think it would be good if we included a few more citations in the glossary. In particular, it would be good to have a few references to trace the history of persistent homology. A few such are listed in the first page of https://www.maths.ed.ac.uk/~v1ranick/papers/edelhare.pdf, and an even earlier precursor is the 1994 paper by Barannikov [The Framed Morse complex and its invariants](https://hal.archives-ouvertes.fr/hal-01745109/document).
|
closed
|
2021-01-12T15:02:41Z
|
2021-03-11T23:39:38Z
|
https://github.com/giotto-ai/giotto-tda/issues/556
|
[
"documentation"
] |
ulupo
| 0
|
hankcs/HanLP
|
nlp
| 1,580
|
Please add a stream-based loading method to CRFSegmenter, so that bin model files can be read directly from the jar package
|
<!--
For questions, please use the forum instead of the issue tracker!
The fields below are required; otherwise the issue will be closed directly.
-->
**Describe the feature and the current behavior/state.**
**Will this change the current api? How?**
**Who will benefit with this feature?**
**Are you willing to contribute it (Yes/No):**
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version:
- HanLP version:
**Any other info**
* [x] I've carefully completed this form.
<!-- Search for existing issues before posting; this checkbox must be ticked! -->
|
closed
|
2020-10-29T09:10:16Z
|
2020-10-29T15:35:43Z
|
https://github.com/hankcs/HanLP/issues/1580
|
[
"feature request"
] |
xiaoheizai
| 1
|
unit8co/darts
|
data-science
| 2,496
|
TiDE Model Stops At A Specific Epoch
|
A more general question: I am trying to run a historical backtest using the TiDE model for my use case:
```python
from darts.models import TiDEModel

tide_model = TiDEModel(
    input_chunk_length=8,
    output_chunk_length=3,
    n_epochs=20,
)
tide_model.fit(
    series=...,
    past_covariates=...,
    future_covariates=...,
)
tide_hf_results = model_estimator.historical_forecasts(
    ...
)
```
For some reason, the model always stalls at a specific point (77% of epoch 5). I can see that the kernel is still running under the hood, but the progress bar no longer moves. I have tried increasing the memory and CPU by 3x, but the model still stalls at exactly the same point. I am not sure if anyone has met this issue before and has any suggested solutions.
No error messages are returned at all, so I am not sure how to debug the issue.

|
closed
|
2024-08-12T12:23:25Z
|
2025-03-03T14:17:16Z
|
https://github.com/unit8co/darts/issues/2496
|
[
"bug"
] |
ETTAN93
| 5
|
coqui-ai/TTS
|
deep-learning
| 3,330
|
Support pandas 2
|
**🚀 Feature Description**
Currently, the pandas requirement is constrained to >=1.4,<2.0:
https://github.com/coqui-ai/TTS/blob/11ec9f7471620ebaa57db7ff5705254829ffe516/requirements.txt#L26
[Pandas 2.0.0 was released in April](https://github.com/pandas-dev/pandas/releases/tag/v2.0.0), so this will start to result in dependency conflicts with other libraries that require pandas >= 2.0.
**Solution**
Loosen pandas requirement to support pandas 2
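Concretely, the request amounts to dropping the upper bound on the pin, roughly like this (the exact requirement string is an assumption based on the constraint described above):
```diff
-pandas>=1.4,<2.0
+pandas>=1.4
```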
|
closed
|
2023-11-28T22:19:12Z
|
2024-06-26T16:49:04Z
|
https://github.com/coqui-ai/TTS/issues/3330
|
[
"wontfix",
"feature request"
] |
Harmon758
| 7
|
hpcaitech/ColossalAI
|
deep-learning
| 5,319
|
[FEATURE]: Can your openmoe project train the Mixtral 8x7B model?
|
### Describe the feature
I want to know whether your openmoe project can train other MoE models such as Mixtral. Also, regarding your openmoe.md: I cannot find this commit in nvidia-apex ("git checkout 741bdf50825a97664db08574981962d66436d16a").
|
closed
|
2024-01-29T03:58:54Z
|
2024-02-18T11:31:30Z
|
https://github.com/hpcaitech/ColossalAI/issues/5319
|
[
"enhancement"
] |
ZhangEnmao
| 3
|
plotly/dash-table
|
dash
| 582
|
Rewrite Cypress test servers using gunicorn
|
The table currently uses a set of independent Dash applications to run its end-to-end tests against.
https://github.com/plotly/dash-table/tree/master/tests/cypress/dash
https://github.com/plotly/dash-table/blob/master/package.json#L25
Use gunicorn to load all the test apps into a single app with routes instead.
|
closed
|
2019-09-12T15:29:15Z
|
2020-03-10T19:17:59Z
|
https://github.com/plotly/dash-table/issues/582
|
[
"dash-type-maintenance",
"size: 1"
] |
Marc-Andre-Rivet
| 0
|
ymcui/Chinese-LLaMA-Alpaca
|
nlp
| 249
|
Questions about the scoring mechanism
|
Take question 12 from the code test as an example:
https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/examples/CODE.md
Among the three answers to question 12, the 7B and 7B-Plus models both scored 7 points. Their answers are similar in length, but the amount of information is completely different: the latter mentions the GDB and valgrind tools, which are very helpful for diagnosing memory problems, and is clearly a better answer than those of 7B and 13B.
The problem here is that we cannot judge answers by our own intuition, nor by text length or how well-organized they look. Establishing a scientific scoring system is important; otherwise we cannot tell whether the model is actually getting better or worse.
|
closed
|
2023-05-05T09:34:10Z
|
2023-05-16T22:02:03Z
|
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/249
|
[
"stale"
] |
iammeizu
| 3
|
ploomber/ploomber
|
jupyter
| 540
|
replace dependency on pygraphviz
|
`ploomber plot` depends on pygraphviz and graphviz; the latter is not pip-installable, which makes setup difficult. We should find a way for users to easily create a DAG plot, ideally with no extra dependencies.
|
closed
|
2022-02-04T16:18:06Z
|
2022-04-25T20:10:03Z
|
https://github.com/ploomber/ploomber/issues/540
|
[
"good first issue"
] |
edublancas
| 12
|
tox-dev/tox
|
automation
| 3,105
|
Non-existent basepython fails unused environment with usedevelop
|
## Issue
<!-- Describe what's the expected behaviour and what you're observing. -->
With usedevelop=True, and a non-existent basepython in an environment that isn't used, an error happens looking for the non-existent executable.
This seems similar to, but different than https://github.com/tox-dev/tox/issues/2826.
## Environment
Provide at least:
- OS: Mac OSX 13.5.1
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
% pip list
Package Version
------------- -------
cachetools 5.3.1
chardet 5.2.0
colorama 0.4.6
distlib 0.3.7
filelock 3.12.2
packaging 23.1
pip 23.2.1
platformdirs 3.10.0
pluggy 1.2.0
pyproject-api 1.5.4
setuptools 68.0.0
tomli 2.0.1
tox 4.10.0
virtualenv 20.24.3
wheel 0.41.0
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
% tox -rvv
.pkg: 476 I find interpreter for spec PythonSpec(major=3, minor=10) [virtualenv/discovery/builtin.py:58]
.pkg: 477 D discover exe for PythonInfo(spec=CPython3.10.13.final.0-64, exe=/usr/local/virtualenvs/tmp-2105dbad4ba1a01/bin/python, platform=darwin, version='3.10.13 (main, Aug 25 2023, 06:52:26) [Clang 14.0.3 (clang-1403.0.22.14.1)]', encoding_fs_io=utf-8-utf-8) in /usr/local/pyenv/pyenv/versions/3.10.13 [virtualenv/discovery/py_info.py:441]
.pkg: 479 D filesystem is not case-sensitive [virtualenv/info.py:26]
.pkg: 481 D got python info of %s from (PosixPath('/usr/local/pyenv/pyenv/versions/3.10.13/bin/python3.10'), PosixPath('/Users/nbatchelder/Library/Application Support/virtualenv/py_info/1/c25eae2dfc5d1b10c1c60ba13c399fed12571f8306176d0f7721e638ddb69d8c.json')) [virtualenv/app_data/via_disk_folder.py:131]
.pkg: 504 I proposed PythonInfo(spec=CPython3.10.13.final.0-64, system=/usr/local/pyenv/pyenv/versions/3.10.13/bin/python3.10, exe=/usr/local/virtualenvs/tmp-2105dbad4ba1a01/bin/python, platform=darwin, version='3.10.13 (main, Aug 25 2023, 06:52:26) [Clang 14.0.3 (clang-1403.0.22.14.1)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 504 D accepted PythonInfo(spec=CPython3.10.13.final.0-64, system=/usr/local/pyenv/pyenv/versions/3.10.13/bin/python3.10, exe=/usr/local/virtualenvs/tmp-2105dbad4ba1a01/bin/python, platform=darwin, version='3.10.13 (main, Aug 25 2023, 06:52:26) [Clang 14.0.3 (clang-1403.0.22.14.1)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67]
.pkg: 570 I find interpreter for spec PythonSpec(path=/usr/local/virtualenvs/tmp-2105dbad4ba1a01/bin/python) [virtualenv/discovery/builtin.py:58]
.pkg: 570 D discover exe from cache /usr/local/pyenv/pyenv/versions/3.10.13 - exact False: PythonInfo({'architecture': 64, 'base_exec_prefix': '/usr/local/pyenv/pyenv/versions/3.10.13', 'base_prefix': '/usr/local/pyenv/pyenv/versions/3.10.13', 'distutils_install': {'data': '', 'headers': 'include/python3.10/UNKNOWN', 'platlib': 'lib/python3.10/site-packages', 'purelib': 'lib/python3.10/site-packages', 'scripts': 'bin'}, 'exec_prefix': '/usr/local/pyenv/pyenv/versions/3.10.13', 'executable': '/usr/local/virtualenvs/tmp-2105dbad4ba1a01/bin/python', 'file_system_encoding': 'utf-8', 'has_venv': True, 'implementation': 'CPython', 'max_size': 9223372036854775807, 'original_executable': '/usr/local/pyenv/pyenv/versions/3.10.13/bin/python3.10', 'os': 'posix', 'path': ['/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/virtualenv/discovery', '/usr/local/pyenv/pyenv/versions/3.10.13/lib/python310.zip', '/usr/local/pyenv/pyenv/versions/3.10.13/lib/python3.10', '/usr/local/pyenv/pyenv/versions/3.10.13/lib/python3.10/lib-dynload', '/usr/local/pyenv/pyenv/versions/3.10.13/lib/python3.10/site-packages'], 'platform': 'darwin', 'prefix': '/usr/local/pyenv/pyenv/versions/3.10.13', 'real_prefix': None, 'stdout_encoding': 'utf-8', 'sysconfig': {'makefile_filename': '/usr/local/pyenv/pyenv/versions/3.10.13/lib/python3.10/config-3.10-darwin/Makefile'}, 'sysconfig_paths': {'data': '{base}', 'include': '{installed_base}/include/python{py_version_short}{abiflags}', 'platlib': '{platbase}/{platlibdir}/python{py_version_short}/site-packages', 'platstdlib': '{platbase}/{platlibdir}/python{py_version_short}', 'purelib': '{base}/lib/python{py_version_short}/site-packages', 'scripts': '{base}/bin', 'stdlib': '{installed_base}/{platlibdir}/python{py_version_short}'}, 'sysconfig_scheme': None, 'sysconfig_vars': {'PYTHONFRAMEWORK': '', 'abiflags': '', 'base': '/usr/local/pyenv/pyenv/versions/3.10.13', 'installed_base': '/usr/local/pyenv/pyenv/versions/3.10.13', 'platbase': '/usr/local/pyenv/pyenv/versions/3.10.13', 'platlibdir': 'lib', 'py_version_short': '3.10'}, 'system_executable': '/usr/local/pyenv/pyenv/versions/3.10.13/bin/python3.10', 'system_stdlib': '/usr/local/pyenv/pyenv/versions/3.10.13/lib/python3.10', 'system_stdlib_platform': '/usr/local/pyenv/pyenv/versions/3.10.13/lib/python3.10', 'version': '3.10.13 (main, Aug 25 2023, 06:52:26) [Clang 14.0.3 (clang-1403.0.22.14.1)]', 'version_info': VersionInfo(major=3, minor=10, micro=13, releaselevel='final', serial=0), 'version_nodot': '310'}) [virtualenv/discovery/py_info.py:439]
.pkg: 570 I proposed PythonInfo(spec=CPython3.10.13.final.0-64, system=/usr/local/pyenv/pyenv/versions/3.10.13/bin/python3.10, exe=/usr/local/virtualenvs/tmp-2105dbad4ba1a01/bin/python, platform=darwin, version='3.10.13 (main, Aug 25 2023, 06:52:26) [Clang 14.0.3 (clang-1403.0.22.14.1)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 570 D accepted PythonInfo(spec=CPython3.10.13.final.0-64, system=/usr/local/pyenv/pyenv/versions/3.10.13/bin/python3.10, exe=/usr/local/virtualenvs/tmp-2105dbad4ba1a01/bin/python, platform=darwin, version='3.10.13 (main, Aug 25 2023, 06:52:26) [Clang 14.0.3 (clang-1403.0.22.14.1)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67]
.pkg: 578 I find interpreter for spec PythonSpec(path=/this/doesnt/exist/bin/python) [virtualenv/discovery/builtin.py:58]
Traceback (most recent call last):
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/bin/tox", line 8, in <module>
sys.exit(run())
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/run.py", line 19, in run
result = main(sys.argv[1:] if args is None else args)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/run.py", line 45, in main
return handler(state)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/session/cmd/legacy.py", line 115, in legacy
return run_sequential(state)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/session/cmd/run/sequential.py", line 24, in run_sequential
return execute(state, max_workers=1, has_spinner=False, live=True)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/session/cmd/run/common.py", line 236, in execute
state.envs.ensure_only_run_env_is_active()
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/session/env_select.py", line 416, in ensure_only_run_env_is_active
envs, active = self._defined_envs, self._env_name_to_active()
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/session/env_select.py", line 273, in _defined_envs
raise failed[next(iter(failed_to_create))]
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/session/env_select.py", line 250, in _defined_envs
run_env.package_env = self._build_pkg_env(pkg_name_type, name, env_name_to_active)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/session/env_select.py", line 321, in _build_pkg_env
name_type = next(child_package_envs)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/tox_env/python/virtual_env/package/pyproject.py", line 150, in register_run_env
yield from super().register_run_env(run_env)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/tox_env/python/package.py", line 116, in register_run_env
pkg_env = run_env.conf["wheel_build_env"]
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/config/sets.py", line 118, in __getitem__
return self.load(item)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/config/sets.py", line 129, in load
return config_definition.__call__(self._conf, self.loaders, ConfigLoadArgs(chain, self.name, self.env_name))
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/config/of_type.py", line 105, in __call__
value = self.default(conf, args.env_name) if callable(self.default) else self.default
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/tox_env/python/package.py", line 92, in default_wheel_tag
run_py = cast(Python, run_env).base_python
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/tox_env/python/api.py", line 250, in base_python
self._base_python = self._get_python(base_pythons)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/tox_env/python/virtual_env/api.py", line 134, in _get_python
interpreter = self.creator.interpreter
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/tox_env/python/virtual_env/api.py", line 126, in creator
return self.session.creator
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/tox/tox_env/python/virtual_env/api.py", line 107, in session
self._virtualenv_session = session_via_cli(env_dir, options=None, setup_logging=False, env=env)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/virtualenv/run/__init__.py", line 49, in session_via_cli
parser, elements = build_parser(args, options, setup_logging, env)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/virtualenv/run/__init__.py", line 77, in build_parser
parser._interpreter = interpreter = discover.interpreter # noqa: SLF001
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/virtualenv/discovery/discover.py", line 41, in interpreter
self._interpreter = self.run()
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/virtualenv/discovery/builtin.py", line 46, in run
result = get_interpreter(python_spec, self.try_first_with, self.app_data, self._env)
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/virtualenv/discovery/builtin.py", line 61, in get_interpreter
for interpreter, impl_must_match in propose_interpreters(spec, try_first_with, app_data, env):
File "/usr/local/virtualenvs/tmp-2105dbad4ba1a01/lib/python3.10/site-packages/virtualenv/discovery/builtin.py", line 88, in propose_interpreters
os.lstat(spec.path) # Windows Store Python does not work with os.path.exists, but does for os.lstat
FileNotFoundError: [Errno 2] No such file or directory: '/this/doesnt/exist/bin/python'
```
</details>
## Minimal example
<!-- If possible, provide a minimal reproducer for the issue. -->
tox.ini:
```
[tox]
envlist = py310
[testenv]
usedevelop = True # Comment this out to avoid the error
commands = python -c "print('Hello')"
[testenv:never]
basepython = /this/doesnt/exist/bin/python
```
|
closed
|
2023-08-26T11:20:37Z
|
2023-09-08T22:49:56Z
|
https://github.com/tox-dev/tox/issues/3105
|
[
"bug:minor",
"help:wanted"
] |
nedbat
| 4
|
PedroBern/django-graphql-auth
|
graphql
| 71
|
New installation does not migrate existing users
|
# Description
When installing into a project, the package functionality does not work with any of the users that already existed before the package was added, presumably because the existing users aren't added to the `graphql_auth_userstatus` table.
I'm not sure if this is a bug or a feature request.
# Steps to Reproduce
If we need to reproduce and you don't provide steps for it, it will be closed. Alternatively, you can link a repo with the code to run your issue.
1. Have existing users registered without `django-graphql-auth`
2. Install `django-graphql-auth` and run migration
3. Attempt to use any of the functions (i.e., verifyToken)
## Expected behavior
Attempting to use any of the functions with an existing user should work without any extra steps (or at least minimal extra steps via some sort of management command). Works fine with users created after the fact.
## Actual behavior
Nothing happens when attempting to use any of the package features for existing users. (i.e., `verifyToken` returns null for token, `sendPasswordResetEmail` does nothing).
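For anyone hitting this, a one-off backfill along these lines might work; this is only a sketch, and the `UserStatus` import path and `verified` field are assumptions based on the `graphql_auth_userstatus` table mentioned above.
```python
# Hypothetical backfill, e.g. run from `python manage.py shell`
from django.contrib.auth import get_user_model
from graphql_auth.models import UserStatus  # assumed model behind graphql_auth_userstatus

User = get_user_model()
for user in User.objects.all():
    # Treating pre-existing accounts as verified is an assumption; adjust as needed.
    UserStatus.objects.get_or_create(user=user, defaults={"verified": True})
```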
# Requirements
```
aioredis==1.3.1
aniso8601==7.0.0
asgiref==3.2.10
asn1crypto==1.4.0
astroid==2.4.2
async-timeout==3.0.1
attrs==20.2.0
autobahn==20.7.1
Automat==20.2.0
bcrypt==3.2.0
blessed==1.17.10
boto3==1.14.61
botocore==1.17.61
cached-property==1.5.1
cement==3.0.4
certifi==2020.6.20
cffi==1.14.2
channels==2.4.0
channels-redis==3.1.0
chardet==3.0.4
colorama==0.4.3
constantly==15.1.0
cryptography==3.1
daphne==2.5.0
distro==1.5.0
Django==3.1.1
django-cleanup==5.1.0
django-cors-headers==3.5.0
django-filter==2.4.0
django-graphql-auth==0.3.12
django-graphql-jwt==0.3.0
django-guardian==2.3.0
django-polymorphic==3.0.0
django-storages==1.10
djangorestframework==3.11.1
docker==4.3.1
docker-compose==1.27.2
docker-pycreds==0.4.0
dockerpty==0.4.1
docopt==0.6.2
docutils==0.16
graphene==2.1.8
graphene-django==2.13.0
graphene-file-upload==1.2.2
graphql-core==2.3.1
graphql-relay==2.0.1
graphql-ws==0.3.0
hiredis==1.1.0
hyperlink==20.0.1
idna==2.10
importlib-metadata==1.7.0
incremental==17.5.0
isort==5.5.2
jmespath==0.9.4
jsonschema==3.1.1
lazy-object-proxy==1.4.2
mccabe==0.6.1
more-itertools==7.2.0
msgpack==0.6.2
paramiko==2.6.0
pathspec==0.6.0
pbr==5.4.3
Pillow==7.2.0
promise==2.3
psycopg2==2.8.6
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
PyHamcrest==1.9.0
PyJWT==1.7.1
pylint==2.4.4
PyNaCl==1.3.0
pyOpenSSL==19.1.0
pyrsistent==0.15.4
python-dateutil==2.7.5
python-dotenv==0.14.0
pytz==2018.5
PyYAML==4.2b1
requests==2.20.0
Rx==1.6.1
s3transfer==0.3.3
semantic-version==2.5.0
sentry-sdk==0.17.5
service-identity==18.1.0
singledispatch==3.4.0.3
six==1.14.0
sqlparse==0.3.1
stevedore==1.30.1
stripe==2.43.0
termcolor==1.1.0
texttable==0.9.1
Twisted==20.3.0
txaio==18.8.1
typed-ast==1.4.0
Unidecode==1.1.1
urllib3==1.24.3
wcwidth==0.1.7
websocket-client==0.57.0
wrapt==1.12.1
zipp==3.1.0
zope.interface==5.0.0
```
|
open
|
2020-10-07T00:04:44Z
|
2021-10-24T15:04:33Z
|
https://github.com/PedroBern/django-graphql-auth/issues/71
|
[] |
pfcodes
| 2
|
streamlit/streamlit
|
machine-learning
| 10,786
|
font "sans serif" no longer working
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
When adding a `config.toml` with the font set to `"sans serif"` explicitly, as suggested [in the documentation](https://docs.streamlit.io/develop/concepts/configuration/theming#font), a **serif** font is used instead.
```
[theme]
font = "sans serif"
```
This can be fixed by
* omitting `font`, since sans-serif is the default
* using `"sans-serif"` (with a dash)
It seems as if the font name was changed from `"sans serif"` to `"sans-serif"` without a matching change in the documentation.
This change seems to have been introduced in version 14.3.
### Reproducible Code Example
```Python
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_
|
closed
|
2025-03-14T16:02:49Z
|
2025-03-14T17:31:44Z
|
https://github.com/streamlit/streamlit/issues/10786
|
[
"type:bug",
"status:confirmed",
"priority:P2",
"feature:theming",
"feature:config"
] |
johannesjasper
| 2
|
pallets-eco/flask-sqlalchemy
|
sqlalchemy
| 870
|
Cannot connect to SQL when the SQL password contains '%'
|
# Expected Behavior
Hi, I want to use Flask-SQLAlchemy to connect to my SQL database, but I cannot connect because the database password is something like "qU%d6".
It is quite strange, because I can connect normally with a password like "password".
Changing the password may solve the problem, but I think this bug should be fixed too.
```python
DIALECT = 'mysql'
DRIVER = 'mysqldb'
USERNAME = 'xxx'
PASSWORD = 'qU%d6'
HOST = 'xxx'
PORT = 'xxx'
DATABASE = 'xxx'
SQLALCHEMY_DATABASE_URI = "{}+{}://{}:{}@{}:{}/{}?charset=utf8".format(DIALECT, DRIVER,
USERNAME, PASSWORD, HOST, PORT, DATABASE)
```
### Actual Behavior
The error is `UnicodeDecodeError: 'ascii' codec can't decode byte 0xd6 in position 3: ordinal not in range(128)`, so I cannot connect to the SQL database.
I have tried:
- using u'qU%d6'
- using r'qU%d6'
- changing % to %%
All failed!
```pytb
Traceback (most recent call last):
File "sqldemo.py", line 37, in <module>
db.create_all() # 真正建立模型到数据库
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 1039, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 1031, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 962, in get_engine
return connector.get_engine()
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 556, in get_engine
self._engine = rv = self._sa.create_engine(sa_url, options)
File "/usr/lib/python2.7/site-packages/flask_sqlalchemy/__init__.py", line 972, in create_engine
return sqlalchemy.create_engine(sa_url, **engine_opts)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/__init__.py", line 500, in create_engine
return strategy.create(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/strategies.py", line 98, in create
(cargs, cparams) = dialect.create_connect_args(u)
File "/usr/lib64/python2.7/site-packages/sqlalchemy/dialects/mysql/mysqldb.py", line 184, in create_connect_args
database="db", username="user", password="passwd"
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 216, in translate_connect_args
if name is not None and getattr(self, sname, False):
File "/usr/lib64/python2.7/site-packages/sqlalchemy/engine/url.py", line 134, in password
return util.text_type(self.password_original)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd6 in position 3: ordinal not in range(128)
```
### Environment
* Python version:2.7
* Flask-SQLAlchemy version:2.4.4
* SQLAlchemy version:1.3.18
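For reference, a common workaround (a sketch, not necessarily the fix applied upstream) is to URL-encode the password before building the URI; on Python 2.7 the equivalent helper is `urllib.quote_plus`:
```python
from urllib.parse import quote_plus

# placeholder credentials mirroring the report above
USERNAME, PASSWORD, HOST, PORT, DATABASE = 'xxx', 'qU%d6', 'xxx', '3306', 'xxx'

# quote_plus turns 'qU%d6' into 'qU%25d6', so the URL parser no longer
# tries to decode '%d6' as an escape sequence
SQLALCHEMY_DATABASE_URI = "mysql+mysqldb://{}:{}@{}:{}/{}?charset=utf8".format(
    USERNAME, quote_plus(PASSWORD), HOST, PORT, DATABASE)
```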
|
closed
|
2020-08-11T02:21:09Z
|
2020-12-05T19:58:20Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/870
|
[] |
happyprotean
| 2
|
albumentations-team/albumentations
|
machine-learning
| 1,989
|
LongestMaxSize does upscale an image in contrast to what notes state(?)
|
Regarding the LongestMaxSize transformation the notes state that:
> Note:
> - If the longest side of the image is already less than or equal to max_size, the image will not be resized.
> - This transform will not crop the image. The resulting image may be smaller than max_size in both dimensions.
> - For non-square images, the shorter side will be scaled proportionally to maintain the aspect ratio.
In contrast, images **seem to be upscaled** even though their maximum size is smaller than the defined max_size attribute:
```
def _func_max_size(img: np.ndarray, max_size: int, interpolation: int, func: Callable[..., Any]) -> np.ndarray:
image_shape = img.shape[:2]
scale = max_size / float(func(image_shape))
if scale != 1.0:
new_height, new_width = tuple(round(dim * scale) for dim in image_shape)
return resize(img, (new_height, new_width), interpolation=interpolation)
return img
```
Is this expected behaviour? Does there exist a transformation that does not upscale the image but resizes (downscales) while maintaining the aspect ratio?
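For what it's worth, a minimal downscale-only variant of the helper quoted above, written against cv2 directly (a hypothetical sketch, not part of albumentations):
```python
import cv2
import numpy as np

def downscale_max_size(img: np.ndarray, max_size: int,
                       interpolation: int = cv2.INTER_LINEAR) -> np.ndarray:
    """Resize so the longest side is at most max_size, never upscaling."""
    height, width = img.shape[:2]
    scale = max_size / float(max(height, width))
    if scale < 1.0:  # only shrink; never enlarge
        return cv2.resize(img, (round(width * scale), round(height * scale)),
                          interpolation=interpolation)
    return img
```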
|
closed
|
2024-10-15T13:32:08Z
|
2024-10-16T22:20:19Z
|
https://github.com/albumentations-team/albumentations/issues/1989
|
[
"documentation"
] |
baptist
| 1
|
cobrateam/splinter
|
automation
| 405
|
Capture Javascript errors (javascript console)
|
it is possible capture any text from javascript console?
I need to have some way to get Javascript errors.
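One workaround sketch (not a built-in splinter API): install a `window.onerror` hook with `execute_script` and read it back later with `evaluate_script`. It only captures errors raised after the hook is installed; the URL below is a placeholder.
```python
from splinter import Browser

with Browser("chrome") as browser:
    browser.visit("https://example.com")
    # collect subsequent JS errors into a global array
    browser.execute_script(
        "window.__jsErrors = [];"
        "window.onerror = function (msg, src, line) {"
        "  window.__jsErrors.push(msg + ' (' + src + ':' + line + ')');"
        "};"
    )
    # ... interact with the page here ...
    print(browser.evaluate_script("window.__jsErrors"))
```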
|
closed
|
2015-05-28T14:55:46Z
|
2022-06-28T00:50:13Z
|
https://github.com/cobrateam/splinter/issues/405
|
[] |
luzfcb
| 6
|
nolar/kopf
|
asyncio
| 165
|
[PR] Post events for cluster-scoped objects to current namespace
|
> <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2019-08-05 18:06:51+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/165
> Merged by [nolar](https://github.com/nolar) at _2019-08-07 17:45:11+00:00_
K8s-events cannot be posted cluster-scoped (?), and attached to cluster-scoped resources via API. However, we post them to the current namespace — so that they are not lost completely.
> Issue : #164
## Description
See #164 for details.
Brief summary: K8s events are namespaced, there are no cluster-scoped events. Also, k8s events can refer to an object via spec.involvedObject of type ObjectReference. This structure contains namespace field, to refer to the involved object's namespace (also, name, uid, etc).
I could not find a way to post namespaced events for cluster-scoped resources. It always fails with a namespace mismatch (regardless of which library is used), even via `curl`; see the issue comments.
So, we post the k8s-events to the current namespace, so that they are available via `kubectl get events`, even though they are not seen in `kubectl describe` on the involved objects.
It is not a full problem solution, but it is a bit better than just losing them completely.
## Types of Changes
- Bug fix (non-breaking change which fixes an issue)
|
closed
|
2020-08-18T19:57:52Z
|
2020-08-23T20:48:34Z
|
https://github.com/nolar/kopf/issues/165
|
[
"bug",
"archive"
] |
kopf-archiver[bot]
| 0
|
zappa/Zappa
|
django
| 494
|
[Migrated] Overhaul Logging Subsystem
|
Originally from: https://github.com/Miserlou/Zappa/issues/1305 by [Miserlou](https://github.com/Miserlou)
- Add and document proper log-level handling
- Redefine more sane defaults
- Add event-type and event-target based log tail filtering
|
closed
|
2021-02-20T09:43:28Z
|
2024-04-13T16:36:28Z
|
https://github.com/zappa/Zappa/issues/494
|
[
"production mode",
"no-activity",
"auto-closed"
] |
jneves
| 2
|
Avaiga/taipy
|
data-visualization
| 1,858
|
Allow S3ObjectDatanodes to use all parameters exposed by AWS APIs
|
### Description
Today, an s3 object data node can only use a limited number of parameters.
Taipy should accept all possible parameters for configuring the [boto3 client](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/core/session.html#boto3.session.Session.client), for reading the data with the [get_object](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/get_object.html#get-object) method, and for writing the data with the [put_object](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/put_object.html#put-object) method.
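For context, these are the kinds of parameters this request is about, shown as plain boto3 calls (endpoint, bucket, and key names below are placeholders, not Taipy API):
```python
import boto3

client = boto3.client("s3", region_name="eu-west-1",
                      endpoint_url="https://s3.example.com")

# extra write parameter not exposed today
client.put_object(Bucket="my-bucket", Key="data.bin", Body=b"...",
                  ServerSideEncryption="AES256")

# extra read parameter not exposed today
obj = client.get_object(Bucket="my-bucket", Key="data.bin", Range="bytes=0-99")
```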
### Solution Proposed
**DatanodeConfig:**
The parameters should be passed by the user through the `DataNodeConfig` method
`taipy.core.config.data_node_config._configure_s3_object`.
The goal is to "keep simple things simple", in particular for most common usages.
So, the purpose is not to expose all the parameters in the configure method, but only the main ones. The others should be passed as optional parameters in kwargs properties.
**Datanode:**
All the parameters (used for the client constructor, the get_object, and the put_object methods) should be used.
### Acceptance Criteria
- [ ] Ensure the new code is unit tested, and check that the code coverage is at least 90%.
- [ ] Create related issues in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional)
|
closed
|
2024-09-30T10:29:05Z
|
2024-10-15T12:26:09Z
|
https://github.com/Avaiga/taipy/issues/1858
|
[
"Core",
"📈 Improvement",
"⚙️Configuration",
"🆘 Help wanted",
"🟨 Priority: Medium",
"✨New feature",
"📝Release Notes",
"Core: ⚙️ Configuration",
"Core: 📁 Data node"
] |
jrobinAV
| 3
|
Buuntu/fastapi-react
|
fastapi
| 14
|
Add (optional) Black pre-commit hook
|
closed
|
2020-05-20T16:43:23Z
|
2020-07-07T00:45:01Z
|
https://github.com/Buuntu/fastapi-react/issues/14
|
[
"enhancement"
] |
Buuntu
| 0
|
|
allenai/allennlp
|
nlp
| 5,022
|
The SRL predictor doesn't work with the following error message
|
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x ] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x ] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x ] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x ] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x ] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [x ] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x ] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x ] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x ] I have included in the "Environment" section below the output of `pip freeze`.
- [x ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
The error:
```
Traceback (most recent call last):
File "/home/skhanehzar/DeployedProjects/Narrative/pipeline/srl.py", line 112, in <module>
"https://storage.googleapis.com/allennlp-public-models/structured-prediction-srl-bert.2020.12.15.tar.gz")
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/allennlp/predictors/predictor.py", line 275, in from_path
load_archive(archive_path, cuda_device=cuda_device),
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/allennlp/models/archival.py", line 197, in load_archive
opt_level=opt_level,
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/allennlp/models/model.py", line 398, in load
return model_class._load(config, serialization_dir, weights_file, cuda_device, opt_level)
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/allennlp/models/model.py", line 337, in _load
model.load_state_dict(model_state)
File "/home/skhanehzar/anaconda3/envs/narrative/lib/python3.7/site-packages/torch/nn/modules/module.py", line 847, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for SrlBert:
Unexpected key(s) in state_dict: "bert_model.embeddings.position_ids".
Process finished with exit code 1
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS:
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.7.9
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
(narrative) skhanehzar@slug:~$ pip freeze
allennlp==1.0.0
allennlp-models==1.0.0
attrs==20.3.0
blis==0.4.1
boto3==1.17.8
botocore==1.20.8
cached-property==1.5.2
catalogue==1.0.0
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
conllu==3.0
cymem==2.0.5
en-core-web-sm @ https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.5/en_core_web_sm-2.2.5.tar.gz
filelock==3.0.12
future==0.18.2
h5py==3.1.0
idna==2.10
importlib-metadata==3.4.0
iniconfig==1.1.1
jmespath==0.10.0
joblib==1.0.1
jsonnet==0.17.0
jsonpickle==2.0.0
murmurhash==1.0.5
nltk==3.5
numpy==1.20.1
overrides==3.0.0
packaging==20.9
plac==1.1.3
pluggy==0.13.1
preshed==3.0.5
protobuf==3.14.0
py==1.10.0
py-rouge==1.1
pyparsing==2.4.7
pytest==6.2.2
python-dateutil==2.8.1
regex==2020.11.13
requests==2.25.1
s3transfer==0.3.4
sacremoses==0.0.43
scikit-learn==0.24.1
scipy==1.6.0
sentencepiece==0.1.95
six==1.15.0
spacy==2.2.4
srsly==1.0.5
tensorboardX==2.1
thinc==7.4.0
threadpoolctl==2.1.0
tokenizers==0.7.0
toml==0.10.2
torch==1.5.1
tqdm==4.56.2
transformers==2.11.0
typing-extensions==3.7.4.3
urllib3==1.26.3
wasabi==0.8.2
word2number==1.1
zipp==3.4.0
```
</p>
</details>
## Steps to reproduce
Run the following code with python interpreter
```
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging
predictor = Predictor.from_path(
"https://storage.googleapis.com/allennlp-public-models/structured-prediction-srl-bert.2020.12.15.tar.gz")
pp = predictor.predict(
sentence="Did Uriah honestly think he could beat the game in under three hours?."
)
```
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
See steps to reproduce above
</p>
</details>
|
closed
|
2021-02-25T01:48:43Z
|
2021-03-30T21:37:19Z
|
https://github.com/allenai/allennlp/issues/5022
|
[
"bug"
] |
shinyemimalef
| 5
|
pydantic/FastUI
|
fastapi
| 240
|
Type unions errors aren't showed in frontend
|
Imagine my form is:
```python
class Form(BaseModel):
email: EmailStr | int
```
I know it doesn't make sense to have that union, but this is to just replicate the issue. Then on runtime, when I'm passing an invalid email address, I get this error:
```json
{
"detail": {
"form": [
{
"type": "value_error",
"loc": [
"email",
"function-after[_validate(), str]"
],
"msg": "value is not a valid email address: The part after the @-sign is not valid. It should have a period."
},
{
"type": "int_parsing",
"loc": [
"email",
"int"
],
"msg": "Input should be a valid integer, unable to parse string as an integer"
}
]
}
}
```
Meanwhile, with None, that won't happen. loc would be an array with only "email".

As you can see, no error appeared in the first scenario with the Union of int and email string, but in the second scenario, with the Union of email string and None, it was handled correctly.

|
open
|
2024-03-07T15:45:57Z
|
2024-03-07T15:46:52Z
|
https://github.com/pydantic/FastUI/issues/240
|
[] |
ManiMozaffar
| 0
|
waditu/tushare
|
pandas
| 1,155
|
New request: could there be data for Shanghai-Hong Kong Stock Connect net inflows?
|
I find that net inflow data seems to be more correlated than plain inflow data.
Account: 413155133@qq.com
|
open
|
2019-10-09T03:38:57Z
|
2019-10-10T11:51:52Z
|
https://github.com/waditu/tushare/issues/1155
|
[] |
a136249692
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 1,235
|
automatic fake audio generation
|
Hi Corentin,
I want to generate a dataset of fake audios on my own using this toolbox. Is there any way to generate them automatically as I have to generate them manually one by one which is taking too long?
|
open
|
2023-07-19T05:40:45Z
|
2023-07-19T05:54:02Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1235
|
[] |
kathuria07
| 0
|
bregman-arie/devops-exercises
|
python
| 240
|
No solution for Directories Comparison Shell Script Problem
|
The [Directories Comparison shell scripting practice problem](https://github.com/bregman-arie/devops-exercises/blob/master/exercises/shell/directories_comparison.md) does not have a solution
|
closed
|
2022-05-12T11:56:06Z
|
2022-07-01T22:20:31Z
|
https://github.com/bregman-arie/devops-exercises/issues/240
|
[] |
luuuk
| 1
|
Kitware/trame
|
data-visualization
| 333
|
How can trame be deployed in an environment with no desktop or graphical interface, such as k8s
|
<!-- Ignoring this template may result in your bug report getting deleted -->
How can trame be deployed in an environment with no desktop or graphical interface, such as k8s
[x] ubuntu
[x] arm64
|
closed
|
2023-09-15T08:29:47Z
|
2023-09-15T14:21:15Z
|
https://github.com/Kitware/trame/issues/333
|
[] |
haoxl3
| 1
|
pandas-dev/pandas
|
python
| 61,094
|
ENH: `DatetimeIndex.set_freq()`
|
### Feature Type
- [x] Adding new functionality to pandas
- [ ] Changing existing functionality in pandas
- [ ] Removing existing functionality in pandas
### Problem Description
I can set/change the name of a `DatetimeIndex` with `.rename()`. But I cannot set/change its frequency in the same manner.
### Feature Description
To rename a `DatetimeIndex`, I can do this inplace with `idx.name = 'foo'`. Or I can get a new object with `idx2 = idx.rename('foo')`.
I can set or change the frequency inplace with `idx.freq = 'QS-APR'`, but an analogous method for setting or changing the frequency does not exist.
**This proposal is to add the method `DatetimeIndex.set_freq`**
Considering the method name: `with_freq()` or `set_freq()` would both work. I would not use `as_freq()` to avoid confusion with the existing methods `Series.asfreq()` and `DataFrame.asfreq()`, which have a different functionality (i.e., change the index length).
The method body would be something like
```py
def set_freq(self, freq, *, inplace: bool = False) -> Self | None:
if inplace:
self.freq = freq
else:
idx = self.copy()
idx.freq = freq
return idx
```
I'm happy to create a PR for this if devs think this is a worthwhile addition
### Alternative Solutions
I can keep on using
```py
idx2 = idx.copy()
idx2.freq = freq
```
but that cannot be used in list comprehensions or lambda expressions and is not chainable, and looks more clunky.
If I need something chainable, the best I think I can do is
```py
idx2 = idx.to_frame().asfreq(freq).index
```
though that undeservedly raises an Exception if the frequencies are equivalent (e.g. QS-FEB and QS-MAY).
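As a stopgap, a small free function gives the chainable behaviour the proposal describes (a sketch, not part of pandas):
```python
import pandas as pd

def set_freq(idx: pd.DatetimeIndex, freq) -> pd.DatetimeIndex:
    # return a new index with the frequency set; the original is untouched
    out = idx.copy()
    out.freq = freq
    return out

idx = pd.DatetimeIndex(["2025-01-01", "2025-01-02", "2025-01-03"])  # freq is None
idx2 = set_freq(idx, "D")
print(idx2.freq)  # <Day>
```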
### Additional Context
See also https://github.com/pandas-dev/pandas/issues/61086
|
open
|
2025-03-10T09:26:12Z
|
2025-03-19T12:49:37Z
|
https://github.com/pandas-dev/pandas/issues/61094
|
[
"Enhancement",
"Needs Triage"
] |
rwijtvliet
| 1
|
huggingface/datasets
|
deep-learning
| 7,468
|
function `load_dataset` can't solve folder path with regex characters like "[]"
|
### Describe the bug
When using the `load_dataset` function with a folder path containing regex special characters (such as "[]"), the issue occurs due to how the path is handled in the `resolve_pattern` function. This function passes the unprocessed path directly to `AbstractFileSystem.glob`, which supports regular expressions. As a result, the globbing mechanism interprets these characters as regex patterns, leading to a traversal of the entire disk partition instead of confining the search to the intended directory.
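For illustration, the character-class interpretation can be reproduced with `fnmatch` alone (paths below are placeholders); fsspec's glob treats the brackets the same way, which is why the search escapes the intended directory:
```python
import fnmatch

# "[D_DATA]" is read as a character class matching one of "D", "_", "A", "T",
# so the literal folder name no longer matches its own "pattern"
print(fnmatch.fnmatch(r"E:\D\test", r"E:\[D_DATA]\test"))         # True
print(fnmatch.fnmatch(r"E:\[D_DATA]\test", r"E:\[D_DATA]\test"))  # False
```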
### Steps to reproduce the bug
just create a folder like `E:\[D_DATA]\koch_test`, then `load_dataset("parquet", data_dir="E:\[D_DATA]\\test", split="train")`
it will keep searching the whole disk.
I add two `print` in `glob` and `resolve_pattern` to see the path
### Expected behavior
it should load the dataset as in normal folders
### Environment info
- `datasets` version: 3.3.2
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.16
- `huggingface_hub` version: 0.29.1
- PyArrow version: 19.0.1
- Pandas version: 2.2.3
- `fsspec` version: 2024.12.0
|
open
|
2025-03-20T05:21:59Z
|
2025-03-20T05:21:59Z
|
https://github.com/huggingface/datasets/issues/7468
|
[] |
Hpeox
| 0
|
cupy/cupy
|
numpy
| 8,210
|
Support Cuda Stream creation with Priority
|
### Description
Hi,
It appears that CuPy does not support the creation of custom CUDA streams with priority. Since the API for this functionality is already available in CUDA, it would be very helpful if CuPy provided this feature.
Thanks
### Additional Information
_No response_
|
closed
|
2024-02-25T13:33:48Z
|
2024-03-13T04:22:02Z
|
https://github.com/cupy/cupy/issues/8210
|
[
"contribution welcome",
"cat:feature",
"good first issue"
] |
rajagond
| 3
|
ultralytics/yolov5
|
machine-learning
| 12,585
|
train the picture without the target
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When training YOLOv5, how do I train on pictures without any target? Is an empty .txt label file enough?
### Additional
_No response_
|
closed
|
2024-01-05T12:01:47Z
|
2024-02-17T00:20:07Z
|
https://github.com/ultralytics/yolov5/issues/12585
|
[
"question",
"Stale"
] |
ZhaoMonica
| 4
|
tableau/server-client-python
|
rest-api
| 845
|
Issue with dates format in request options
|
Hello,
Sorry if I am in the wrong place I am new here.
I have an issue when I use the csv_req_option with dates. The date filter isn't applied when I pass it as an argument, whereas it works for all kinds of strings.
Imagine the next dashboard :
Col A = Date
Col B = Event
if I use
csv_req_option = TSC.CSVRequestOptions()
csv_req_option.vf('any Event')
so far everything works
now
csv_req_option = TSC.CSVRequestOptions()
csv_req_option.vf('01/01/2021')
it doesn't work: the filter isn't applied
I have tried different date formats but it didn't work...
Thanks for your help.
Best
|
closed
|
2021-05-11T21:35:27Z
|
2021-09-23T01:31:34Z
|
https://github.com/tableau/server-client-python/issues/845
|
[] |
ItachiEren
| 2
|
sherlock-project/sherlock
|
python
| 2,308
|
403 and 404 Errors Still Persist When Querying Usernames
|
### Installation method
PyPI (via pip)
### Description
When I query a username, 403 and 404 errors are still being reported
### Steps to reproduce

"When I query a username, 403 and 404 errors are still being reported."
And usernames that should have information, such as 'X', are not being found in the query results.
### Additional information
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
|
open
|
2024-09-30T03:45:01Z
|
2024-11-01T08:12:08Z
|
https://github.com/sherlock-project/sherlock/issues/2308
|
[
"bug",
"false positive"
] |
CoffeeGeeker
| 7
|
PeterL1n/RobustVideoMatting
|
computer-vision
| 93
|
Difference of fgr output and src input
|
This is perhaps more a question about the network design than the code, sorry if this is not the right channel to ask this.
I wonder what is the difference between the fgr output and the src input image in those pixels belonging to the matting mask (pha > 0).
I supposed that maybe they could refine the edges of the matting where pha < 1.0, but comparing side by side a matting where the alpha delivered by the network is applied to the original image and another where it is applied to the fgr output, I cannot see any difference.
Would it be possible to reduce somewhat the computational cost of the network by eliminating the fgr output?
Best regards.
|
closed
|
2021-10-22T10:28:47Z
|
2021-10-22T20:26:16Z
|
https://github.com/PeterL1n/RobustVideoMatting/issues/93
|
[] |
livingbeams
| 1
|
moshi4/pyCirclize
|
matplotlib
| 84
|
Auto annotation for sectors in chord diagram
|
> Annotation plotting is a feature added in v1.9.0 (python>=3.9). It is not available in v1.8.0.
_Originally posted by @moshi4 in [#83](https://github.com/moshi4/pyCirclize/issues/83#issuecomment-2658729865)_
I upgraded to v1.9.0, but it is still not changing.
```
from pycirclize import Circos, config
from pycirclize.parser import Matrix
config.ann_adjust.enable = True
circos = Circos.chord_diagram(
matrix,
cmap= sector_color_dict,
link_kws=dict(direction=0, ec="black", lw=0.5, fc="black", alpha=0.5),
link_kws_handler = link_kws_handler_overall,
order = country_order_list,
# label_kws = dict(orientation = 'vertical', r=115)
)
```
In the documentation, track.annotate is used; however, I am using a from-to matrix and the annotations still aren't updating. Do you have any suggestions?
full pseudocode:
```
country_order_list = sorted(list(set(edge_list['source']).union(set(edge_list['target']))))
for country in country_order_list:
cnt = country.split('_')[0]
if country not in country_color_dict.keys():
sector_color_dict[cnt] = 'red'
else:
sector_color_dict[cnt] = country_color_dict[cnt]
from_to_table_df = edge_list.groupby(['source', 'target']).size().reset_index(name='count')[['source', 'target', 'count']]
matrix = Matrix.parse_fromto_table(from_to_table_df)
from_to_table_df['year'] = year
from_to_table_overall = pd.concat([from_to_table_overall, from_to_table_df])
circos = Circos.chord_diagram(
matrix,
cmap= sector_color_dict,
link_kws=dict(direction=0, ec="black", lw=0.5, fc="black", alpha=0.5),
link_kws_handler = link_kws_handler_overall,
order = country_order_list,
# label_kws = dict(orientation = 'vertical', r=115)
)
circos.plotfig()
plt.show()
plt.title(f'{year}_overall')
plt.close()
```
|
closed
|
2025-02-14T09:41:38Z
|
2025-02-21T09:36:39Z
|
https://github.com/moshi4/pyCirclize/issues/84
|
[
"question"
] |
jishnu-lab
| 7
|
Avaiga/taipy
|
automation
| 1,532
|
[🐛 BUG] inactive file_selector is still active
|
### What went wrong? 🤔
From hugoo on Discord
I would like to leave a file_selector inactive so that the user cannot use it. However, doing active=False only has style effects and the file_selector continues to work
### Expected Behavior
inactive file_selector should be inactive
### Steps to Reproduce Issue
```
from taipy import Gui
filename = ""
page = """
<|{filename}|file_selector|active=False|>
"""
gui = Gui(page=page)
gui.run()
```
### Solution Proposed
_No response_
### Screenshots

### Runtime Environment
_No response_
### Browsers
_No response_
### OS
_No response_
### Version of Taipy
_No response_
### Additional Context
_No response_
### Acceptance Criteria
- [x] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [X] I am willing to work on this issue (optional)
|
closed
|
2024-07-16T21:33:17Z
|
2024-07-17T08:13:46Z
|
https://github.com/Avaiga/taipy/issues/1532
|
[
"🟥 Priority: Critical",
"💥Malfunction",
"📝Release Notes",
"GUI: Front-End"
] |
FredLL-Avaiga
| 0
|
neuml/txtai
|
nlp
| 146
|
Fix bug with importing service task when workflow extra not installed
|
xmltodict was recently added as a dependency for ServiceTask. Given that this library is optional, it should be wrapped as a conditional import. Currently, the entire workflow package can't be imported unless xmltodict is present which violates txtai's policy to only fail when the specific library is needed.
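A sketch of the conditional-import pattern described above (class body is illustrative, not the actual txtai source):
```python
# fail only when ServiceTask is actually used, not at package import time
try:
    import xmltodict

    XMLTODICT = True
except ImportError:
    XMLTODICT = False


class ServiceTask:
    def __init__(self):
        if not XMLTODICT:
            raise ImportError('xmltodict is not available - install "workflow" extra to enable')
```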
|
closed
|
2021-11-15T18:19:36Z
|
2021-11-15T18:32:06Z
|
https://github.com/neuml/txtai/issues/146
|
[] |
davidmezzetti
| 0
|
graphql-python/graphene
|
graphql
| 978
|
Bad hyperlink on documentation
|
NonNull reference under required field from this page https://docs.graphene-python.org/en/latest/types/scalars/ redirects to a nonexistent page, namely:
`https://docs.graphene-python.org/en/latest/types/scalars/list-and-nonnull/#nonnull`
The correct link is
`https://docs.graphene-python.org/en/latest/types/list-and-nonnull/#nonnull`
|
closed
|
2019-05-30T17:02:29Z
|
2019-06-04T16:23:25Z
|
https://github.com/graphql-python/graphene/issues/978
|
[
"🐛 bug",
"📖 documentation"
] |
Ambro17
| 2
|
skforecast/skforecast
|
scikit-learn
| 647
|
add method to forecasters to return the input data that is passed to the model to make predictions
|
Currently, there is no way to see which data is being used to make predictions.
`create_X_y` creates the data that is used to train the model, but it's not able to create the data that is passed to `predict()`.
The data for the prediction is created in predict() but it is not exposed.
Could we capture the lines that create the input data to regressor.predict() in a new method, say create_forecast_input(), to expose that value to the user?
It'd help with debugging and understanding what the forecaster does under the hood.
|
closed
|
2024-02-23T16:19:53Z
|
2024-08-07T14:24:31Z
|
https://github.com/skforecast/skforecast/issues/647
|
[
"enhancement"
] |
solegalli
| 2
|
OpenGeoscience/geonotebook
|
jupyter
| 147
|
M.layers.annotation.polygons[0].data - IndexError: list index out of range
|
When I run 3rd cell from
https://github.com/OpenGeoscience/geonotebook/blob/master/notebooks/04_Annotations.ipynb
d, n = next(M.layers.annotation.polygons[0].data)
I receive:
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-11-66a9995560d8> in <module>()
----> 1 d, n = next(M.layers.annotation.polygons[0].data)
2 #d, n = next(M.layers.annotation.rectangles[1].data)
IndexError: list index out of range
|
closed
|
2017-09-08T16:13:32Z
|
2017-09-08T16:37:17Z
|
https://github.com/OpenGeoscience/geonotebook/issues/147
|
[] |
dementiev
| 2
|
vitalik/django-ninja
|
rest-api
| 1,305
|
[BUG] Adding Field examples breaks generated swagger docs
|
**Describe the bug**
Using the approach described in #1115 to add a description of a parameter, the generated swagger docs break when the `example=` property is added. The generated API docs show `Could not render Parameters, see the console.`
**Versions (please complete the following information):**
- Python version: 3.11.6
- Django version: 5.1
- Django-Ninja version: 1.3.0
- Pydantic version: 2.9.2
Example:
```
from datetime import datetime, timezone
from typing import Annotated
from typing import List
from ninja import Router, Query, Schema, Field
router = Router()
class FilterParams(Schema):
end: Annotated[
datetime,
Field(
examples=[datetime.now(timezone.utc)], # fails when this line is uncommented
description="ISO-formatted timestamp of the latest item to return",
),
]
@router.get(
"",
url_name="list",
response=List[MySchemaOut],
)
def my_list(request, filters: Query[FilterParams]):
pass
```
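One possible workaround sketch (untested against django-ninja's swagger page): pass a JSON-serializable example, e.g. an ISO string, instead of a raw `datetime`. Shown here with plain Pydantic so the generated schema can be inspected directly:
```python
from datetime import datetime, timezone
from typing import Annotated

from pydantic import BaseModel, Field

class FilterParams(BaseModel):
    end: Annotated[
        datetime,
        Field(
            # an ISO string keeps the example JSON-serializable in the schema
            examples=[datetime.now(timezone.utc).isoformat()],
            description="ISO-formatted timestamp of the latest item to return",
        ),
    ]

print(FilterParams.model_json_schema())
```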
|
open
|
2024-09-27T06:11:49Z
|
2024-12-12T09:30:18Z
|
https://github.com/vitalik/django-ninja/issues/1305
|
[] |
boosh
| 2
|
chmp/ipytest
|
pytest
| 42
|
Doesn't work with Google Colab
|
Running this small test on Google Colab
```
%%run_pytest[clean] -qq
def test_example():
assert [1, 2, 3] == [1, 2, 3]
```
results in exception:
```
/usr/local/lib/python3.6/dist-packages/pluggy/hooks.py:258: in __call__
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/dist-packages/pluggy/manager.py:67: in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/dist-packages/pluggy/manager.py:61: in <lambda>
firstresult=hook.spec_opts.get('firstresult'),
/usr/local/lib/python3.6/dist-packages/ipytest/_pytest_support.py:143: in pytest_collect_file
parent, fspath=path.new(ext=".py"), module=self.module
/usr/local/lib/python3.6/dist-packages/ipytest/_pytest_support.py:156: in from_parent
self = super().from_parent(parent, fspath=fspath)
E AttributeError: 'super' object has no attribute 'from_parent'
```
|
closed
|
2020-07-10T10:45:16Z
|
2020-07-10T11:00:09Z
|
https://github.com/chmp/ipytest/issues/42
|
[] |
borisbrodski
| 3
|
CorentinJ/Real-Time-Voice-Cloning
|
deep-learning
| 800
|
I made a video demonstrating how I used it
|
https://www.youtube.com/watch?v=HZtuHgpRoyc
"Dolly Parton, neural voice clone, tells patient's family about their medical equipment."
|
closed
|
2021-07-16T11:51:26Z
|
2021-08-25T09:21:31Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/800
|
[] |
jaggzh
| 0
|
influxdata/influxdb-client-python
|
jupyter
| 94
|
When ingest dataframe, use alternative tagging
|
In addition to ticket #79
Would it be possible, next to data_frame_tag_columns=tag_columns, to also have a 'data_frame_tag=' argument? This way a tag that doesn't appear in the DF can still be added.
For example, I have a DF with stock prices: timestamp, open, high, low, close (etc.) data. I would like to be able to add tags such as ticker, exchange, etc., which don't appear in the DF, by using a 'data_frame_tag=' argument with data_frame_tag='NASDAQ', 'AAPL'.
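In the meantime, one workaround sketch is to materialize the constant tags as columns and list them in `data_frame_tag_columns` (URL, token, org, and bucket below are placeholders):
```python
import pandas as pd
from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

df = pd.DataFrame({"open": [1.0], "close": [1.1]},
                  index=pd.to_datetime(["2020-05-13"]))
df["exchange"] = "NASDAQ"   # constant tag added as a column
df["ticker"] = "AAPL"

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)
write_api.write(bucket="my-bucket", record=df,
                data_frame_measurement_name="prices",
                data_frame_tag_columns=["exchange", "ticker"])
client.close()
```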
|
closed
|
2020-05-13T14:03:34Z
|
2021-05-14T07:06:17Z
|
https://github.com/influxdata/influxdb-client-python/issues/94
|
[
"enhancement"
] |
cjelsa
| 15
|
ultralytics/yolov5
|
machine-learning
| 12,432
|
How is the number of anchors calculated?
|
### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
As input I have an image (640x640x3) and the output of the model is 1x25200x7, where 1 is the batch size and 7 is the number of classes + 5 for the bbox and objectness. But what is 25200?
Can someone please tell me a direct formula based on input size to calculate it?
### Additional
Also, I found from the model architecture that 25200 = CxCxB, but I don't know B and C. I also found that it depends on the input size; for 320x320x3 it is smaller, but I don't remember by how much.
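For reference, assuming the default configuration of three detection heads (strides 8, 16, 32) with 3 anchors per grid cell, the count follows directly from the input size:
```python
def num_predictions(img_size: int, strides=(8, 16, 32), anchors_per_cell: int = 3) -> int:
    # sum of grid cells over the detection heads, times anchors per cell
    return anchors_per_cell * sum((img_size // s) ** 2 for s in strides)

print(num_predictions(640))  # 25200 = (80*80 + 40*40 + 20*20) * 3
print(num_predictions(320))  # 6300  = (40*40 + 20*20 + 10*10) * 3
```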
|
closed
|
2023-11-26T16:17:47Z
|
2023-11-27T05:10:34Z
|
https://github.com/ultralytics/yolov5/issues/12432
|
[
"question"
] |
TheMegistone4Ever
| 5
|
fabiogra/moseca
|
streamlit
| 8
|
Local installation fails
|
Hi, I followed the Readme instructions, but I can't get it to run (no luck with the Docker install either).
I have Win11. All steps done from anaconda prompt and virtual environment. This is a workflow:
(Win11)
create app folder
(conda)
move to folder Moseca
git clone (*.git)
conda create -p D:\Audio\Moseca\moseca python=3.10.0
cd moseca
conda activate
pip install -r requirements.txt
set PYTHONPATH=D:\Audio\Moseca\moseca
curl -LJO https://huggingface.co/fabiogra/baseline_vocal_remover/resolve/main/baseline.pth
streamlit run app/header.py
I got errors at streamlit step.
(d:\Audio\Moseca\moseca) D:\Audio\Moseca\moseca>streamlit run app/header.py
Fatal Python error: init_import_site: Failed to import the site module
Python runtime state: initialized
Traceback (most recent call last):
File "d:\Audio\Moseca\moseca\lib\site.py", line 617, in <module>
main()
File "d:\Audio\Moseca\moseca\lib\site.py", line 604, in main
known_paths = addsitepackages(known_paths)
File "d:\Audio\Moseca\moseca\lib\site.py", line 387, in addsitepackages
addsitedir(sitedir, known_paths)
File "d:\Audio\Moseca\moseca\lib\site.py", line 226, in addsitedir
addpackage(sitedir, name, known_paths)
File "d:\Audio\Moseca\moseca\lib\site.py", line 179, in addpackage
for n, line in enumerate(f):
File "d:\Audio\Moseca\moseca\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 944: character maps to <undefined>
Python --version shows 3.10.0. as it should be. Outside venv i got Python 3.11.
I'm no coder and most probably I'm doing something wrong.
Though it is not the same error as on Docker installation.
Till streamlit, installation went with no problems or error msgs.
|
open
|
2023-09-24T22:56:29Z
|
2023-09-24T22:56:29Z
|
https://github.com/fabiogra/moseca/issues/8
|
[] |
nekiee13
| 0
|
gee-community/geemap
|
streamlit
| 1,017
|
Map.addLayerControl() doesn't seem to be working
|
<!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version: 0.13.1
- Python version: 3.9.12 (conda 4.12.0)
- Operating System: Windows 11
### Description
I'm new to geemap and was looking around a bit and following along the instructions on this page:
[https://geemap.org/notebooks/geemap_and_folium]
### What I Did
in cell [18]
```
Map.addLayerControl()
Map
```
No layercontrol appeared in the top-right of the map, as I was expecting (like in folium/leaflet)
In the later steps adding the various basemaps.. they couldn't be found either
seems something is broken, or I am doing something quite wrong :(
this is the final image after executing cell [21]. pretty bare :(

|
closed
|
2022-04-13T19:07:42Z
|
2022-10-11T09:01:41Z
|
https://github.com/gee-community/geemap/issues/1017
|
[
"bug"
] |
meesterp
| 5
|
bregman-arie/devops-exercises
|
python
| 87
|
AWS image is corrupted
|
This issue is fixed in this PR
https://github.com/bregman-arie/devops-exercises/pull/86
|
closed
|
2020-05-29T11:03:19Z
|
2020-06-03T09:41:32Z
|
https://github.com/bregman-arie/devops-exercises/issues/87
|
[] |
abdelrahmanbadr
| 0
|
jonaswinkler/paperless-ng
|
django
| 1,273
|
[BUG] Paperless webserver not honoring port specified in docker-compose.yaml
|
**Describe the bug**
Run paperless-ng via one of the docker compose scripts provided by paperless. Change the default port 8000 and url of the health check in the compose script prior to running with docker compose. Webserver container (specifically Gunicorn) does not use port specified in compose file, because the port 8000 is hardcoded in the docker/gunicorn.conf.py file ([line 1](https://github.com/jonaswinkler/paperless-ng/blob/3b17f9d6ecc6d1e3459619458ea3fefb260a116d/docker/gunicorn.conf.py#L1)).
**To Reproduce**
Steps to reproduce the behavior:
1. Download one of the sample docker compose files
2. Modify default port (8000) for the webserver to an alternative port
3. Start docker stack using the modified compose file
4. Observe that the log file for the webserver container still reports gunicorn using port 8000 and that the web interface to Paperless is not available on the specified non-default port
**Expected behavior**
Paperless should honour the webserver port specified in the compose file and make the web interface available on that port and avoid binding to the default port (8000) if a different port is specified.
**Webserver logs**
After specifying port 8001 in the compose file, the webserver container log confirms gunicorn still binds to port 8000.
```
Paperless-ng docker container starting...
Mapping UID and GID for paperless:paperless to 1027:100
Creating directory /tmp/paperless
Adjusting permissions of paperless files. This may take a while.
Apply database migrations...
Operations to perform:
Apply all migrations: admin, auth, authtoken, contenttypes, django_q, documents, paperless_mail, sessions
Running migrations:
No migrations to apply.
Executing /usr/local/bin/supervisord -c /etc/supervisord.conf
2021-08-31 12:56:21,791 INFO Set uid to user 0 succeeded
2021-08-31 12:56:21,795 INFO supervisord started with pid 1
2021-08-31 12:56:22,797 INFO spawned: 'consumer' with pid 47
2021-08-31 12:56:22,799 INFO spawned: 'gunicorn' with pid 48
2021-08-31 12:56:22,801 INFO spawned: 'scheduler' with pid 49
[2021-08-31 12:56:23 +0000] [48] [INFO] Starting gunicorn 20.1.0
[2021-08-31 12:56:23 +0000] [48] [INFO] Listening at: http://0.0.0.0:8000 (48)
[2021-08-31 12:56:23 +0000] [48] [INFO] Using worker: paperless.workers.ConfigurableWorker
[2021-08-31 12:56:23 +0000] [48] [INFO] Server is ready. Spawning workers
```
**Relevant information**
- Host OS of the machine running paperless: Synology NAS DS220+
- Browser: Same result in chrome and edge
- Version: 1.5.0
- Installation method: docker
- Changes made in `docker-compose.yml`: webserver port changed to anything other than 8000 on both the ports declaration for the webserver and for the health check test url; e.g.
```
webserver:
ports:
- 8001:8001
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8001"]
```
|
closed
|
2021-08-31T13:14:51Z
|
2021-08-31T14:32:33Z
|
https://github.com/jonaswinkler/paperless-ng/issues/1273
|
[] |
padraigkitterick
| 2
|
wger-project/wger
|
django
| 1,071
|
Add Resistance Bands as Equipment
|
... so users can add exercise variations of the ones with machines/weights
motive:
I decided to go back to exercising, but this time at home with resistance bands; some exercises don't vary much from the OG ones, but there's still a difference.
Extra:
Illustrations are very important for these types of exercises because they are more "flexible". Shouldn't there be an option to add them, so someone who checks the exercise knows what to do instead of having to look for the proper "configuration"?
|
closed
|
2022-06-12T22:46:38Z
|
2022-08-05T13:41:56Z
|
https://github.com/wger-project/wger/issues/1071
|
[] |
Keddyan
| 7
|
plotly/dash-component-boilerplate
|
dash
| 151
|
Migrating to React functional components
|
I'm interested in creating my own dash component using React and while generating a project using `dash-component-boilerplate`, the output React files are written using class components. Any plans on migrating that to functional components instead?
I've already migrated my own project files but thought to bring this topic up for discussion since the latest React docs are all written using functional components and while it is still in Beta, they do mention that these will replace the older docs.
Thanks!
|
closed
|
2023-03-01T21:02:14Z
|
2023-09-21T08:45:02Z
|
https://github.com/plotly/dash-component-boilerplate/issues/151
|
[] |
aeltanawy
| 11
|
fastapi/sqlmodel
|
pydantic
| 523
|
What is the purpose of parse_obj's second argument: "update"?
|
### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
SomeModel.parse_obj({"one": 2}, {"three": 4})
```
### Description
SQLModel's `parse_obj` supports the update argument which is not supported by pydantic's BaseModel. However, I do not see it ever used in docs, github issues, or in source code.
Could someone explain why it was added? What was the actual use case? I understand what it does, but I don't understand why it does it.
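For what it's worth, a small sketch of the behaviour as I understand it (semantics inferred from the "update with extra data" pattern in the tutorial, not from dedicated parse_obj docs): the `update` dict is layered on top of the parsed data before validation.
```python
from sqlmodel import SQLModel

class Hero(SQLModel):
    name: str
    secret_name: str

# fields from "update" are merged over the parsed dict, so secret_name
# can be supplied even though it is missing from the first argument
hero = Hero.parse_obj({"name": "Deadpond"}, update={"secret_name": "Dive Wilson"})
print(hero)  # name='Deadpond' secret_name='Dive Wilson'
```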
### Operating System
Linux, Windows, macOS, Other
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
3.11
### Additional Context
_No response_
|
open
|
2023-01-03T21:53:54Z
|
2024-03-04T20:59:03Z
|
https://github.com/fastapi/sqlmodel/issues/523
|
[
"question"
] |
zmievsa
| 2
|
flasgger/flasgger
|
flask
| 585
|
a little miss
|
When I forget to install the apispec package, the page comes back with an internal server error and no other hint, so I searched the source code.
In marshmallow_apispec.py:

There is some import code there: if the import fails, Schema is set to None.
|
open
|
2023-06-27T07:43:36Z
|
2023-06-27T07:45:55Z
|
https://github.com/flasgger/flasgger/issues/585
|
[] |
Spectator133
| 2
|
koxudaxi/datamodel-code-generator
|
pydantic
| 1,361
|
Support for custom types in particular enums
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Given below schema, where enum field type is some.CustomEnum
```
{
"title": "Test",
"description": "Test 123",
"type": "object",
"properties": {
"enum": {
"title": "Enum",
"default": "one",
"type": "some.CustomEnum"
}
}
}
```
I would like code generator to give below
```
...
class Test(BaseModel):
enum: some.CustomEnum = Field('one', title='Enum')
```
At the moment instead of some.CustomEnum it is giving Any.
Note some.CustomEnum would not type check correctly with the one generated by code gen even though they have the same values.
This is because I already have the python code for some.CustomEnum and do not need the code to be generated again.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
This is to handle cases where I already have some of the models and I just want the typing next to the field name in the code generation.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
One solution is to manually parse the output of the current code gen and remove the enum class and also update the enum type. This is tedious as one would have to define patterns for when class starts and ends.
Another solution is to have a new section in the schema that has information about these fields and somehow add it to the generated class but then the Field information would not be available.
I actually thought enum_field_as_literal="all" flag would convert all enums to literal[...], which would help but it didn't seem to do anything.
**Additional context**
Add any other context or screenshots about the feature request here.
|
open
|
2023-06-09T13:23:43Z
|
2023-06-10T00:18:37Z
|
https://github.com/koxudaxi/datamodel-code-generator/issues/1361
|
[] |
peterchenadded
| 1
|
flairNLP/flair
|
nlp
| 2,982
|
'TextClassifier' object has no attribute 'embeddings'
|
TARSClassifier.load error
AttributeError Traceback (most recent call last)
<ipython-input-13-710c2b4d40e4> in <module>
----> 1 tars = TARSClassifier.load('/content/drive/MyDrive/Text_classification/final-model.pt')
2 frames
/usr/local/lib/python3.7/dist-packages/flair/nn/model.py in load(cls, model_path)
147 state = torch.load(f, map_location="cpu")
148
--> 149 model = cls._init_model_with_state_dict(state)
150
151 if "model_card" in state:
/usr/local/lib/python3.7/dist-packages/flair/models/tars_model.py in _init_model_with_state_dict(cls, state, **kwargs)
739 label_dictionary=state.get("label_dictionary"),
740 label_type=state.get("label_type", "default_label"),
--> 741 embeddings=state.get("tars_model").embeddings,
742 num_negative_labels_to_sample=state.get("num_negative_labels_to_sample"),
743 **kwargs,
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
1206 return modules[name]
1207 raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1208 type(self).__name__, name))
1209
1210 def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
AttributeError: 'TextClassifier' object has no attribute 'embeddings'
|
closed
|
2022-11-08T04:36:58Z
|
2022-11-09T15:51:44Z
|
https://github.com/flairNLP/flair/issues/2982
|
[
"bug"
] |
pranavan-rbg
| 6
|
K3D-tools/K3D-jupyter
|
jupyter
| 273
|
How to plot Matplotlib's surfaces?
|
I'm exploring the possibility to use K3D-Jupyter as a plotting library for [SymPy](https://github.com/sympy/sympy/), instead of relying on Matplotlib which is quite slow in a notebook. However, it seems like there is no function/object capable of using the data format used by [Matplotlib's `plot_surface`](https://matplotlib.org/stable/api/_as_gen/mpl_toolkits.mplot3d.axes3d.Axes3D.html#mpl_toolkits.mplot3d.axes3d.Axes3D.plot_surface), in which `x, y, z` are two dimensional arrays.
I've seen [K3D's `surface`](https://k3d-jupyter.org/k3d.html?highlight=surface#k3d.factory.surface), but I think it assumes a uniform grid spacing between `xmin, xmax` and `ymin, ymax`.
What would be the best way to plot matplotlib's surfaces with K3D? I'm going to drop a couple of example on what I'd like to achieve.
**Example 1:**
```
from sympy import *
from sympy.plotting.plot import plot3d, plot3d_parametric_surface
var("x, y")
r = sqrt(x**2 + y**2)
expr = cos(r) * exp(-r / 10)
p = plot3d(expr)
s = p._series[0]
xx, yy, zz = s.get_meshes()
```
Here, `xx, yy, zz` contains the numerical data used by Matplotib to draw the surface. Note that for each `(x, y)` there is one `z`.

**Example 2:**
```
var("u, v")
p = plot3d_parametric_surface(cos(u + v), sin(u - v), u - v, (u, -5, 5), (v, -5, 5))
s2 = p._series[0]
xx, yy, zz = s2.get_meshes()
```
Note that for each `(x, y)` there could be multiple values for `z`.

|
closed
|
2021-04-18T08:35:26Z
|
2021-04-23T12:40:12Z
|
https://github.com/K3D-tools/K3D-jupyter/issues/273
|
[] |
Davide-sd
| 3
|
Crinibus/scraper
|
web-scraping
| 228
|
Api link elgiganten.se
|
Is it possible for you to get the API link for elgiganten.se? I tried to change it manually to .se, but it doesn't seem to be the same API link.
|
closed
|
2023-11-10T20:07:49Z
|
2023-11-15T18:17:04Z
|
https://github.com/Crinibus/scraper/issues/228
|
[] |
Trixarn
| 2
|
bigscience-workshop/petals
|
nlp
| 519
|
Support stable diffusion model
|
Can I use a Stable Diffusion model with Petals?
|
closed
|
2023-09-21T12:39:23Z
|
2023-09-22T20:56:41Z
|
https://github.com/bigscience-workshop/petals/issues/519
|
[] |
lbgws2
| 2
|
akfamily/akshare
|
data-science
| 5,691
|
stock_board_concept_hist_em still seems to have problems after trying the fix
|
> Could you help take a look? I tested it and it still doesn't seem to be fixed.
_Originally posted by @fweiger in [#5686](https://github.com/akfamily/akshare/issues/5686#issuecomment-2665321119)_
|
closed
|
2025-02-18T11:14:53Z
|
2025-02-18T13:53:29Z
|
https://github.com/akfamily/akshare/issues/5691
|
[] |
fweiger
| 0
|
sktime/pytorch-forecasting
|
pandas
| 1,038
|
RAM consumption of TimeSeriesDataset
|
I have a dataframe that consumes approx. 10 GB in memory. When I try to build the TimeSeriesDataset, it consumes >30 GB of memory (blowing up my RAM). I know it makes sense because the time series dataset is a bigger structure than the dataframe.
How much can the memory consumption grow when building the time series dataset? Like a 4x? I would like to have an estimation to know how to reduce the original dataframe. Is there any way to make TimeSeriesDataset consume less RAM?
Thanks @jdb78
|
open
|
2022-06-16T22:13:25Z
|
2023-09-07T15:02:08Z
|
https://github.com/sktime/pytorch-forecasting/issues/1038
|
[] |
nicocheh
| 3
|
clovaai/donut
|
nlp
| 242
|
size mismatch error
|
When I run "python3 app.py" for the demo, it cannot load the pretrained model naver-clova-ix/donut-base-finetuned-docvqa; there is a size mismatch error:
pretrained_model = DonutModel.from_pretrained(args.pretrained_path)
File "/home/local/Project/chart/donut/donut/model.py", line 597, in from_pretrained
model = super(DonutModel, cls).from_pretrained(pretrained_model_name_or_path, revision="official", *model_args, **kwargs)
File "/home/local/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
) = cls._load_pretrained_model(
File "/home/local/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3532, in _load_pretrained_model
raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for DonutModel:
size mismatch for encoder.model.layers.1.downsample.norm.weight: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoder.model.layers.1.downsample.norm.bias: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([512]).
size mismatch for encoder.model.layers.1.downsample.reduction.weight: copying a param with shape torch.Size([512, 1024]) from checkpoint, the shape in current model is torch.Size([256, 512]).
size mismatch for encoder.model.layers.2.downsample.norm.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for encoder.model.layers.2.downsample.norm.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1024]).
size mismatch for encoder.model.layers.2.downsample.reduction.weight: copying a param with shape torch.Size([1024, 2048]) from checkpoint, the shape in current model is torch.Size([512, 1024]).
You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.
|
open
|
2023-08-25T22:27:59Z
|
2023-08-31T11:40:35Z
|
https://github.com/clovaai/donut/issues/242
|
[] |
yaoliUoA
| 3
|
nalepae/pandarallel
|
pandas
| 27
|
quiet mode execution
|
Currently a ton of messages are printed. Is there a way to mute all or some of them?
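If I'm reading the initializer right, current versions expose a `verbose` argument that controls this (a sketch; as far as I can tell the levels are 0 = no logs, 1 = warnings only, 2 = everything):
```python
from pandarallel import pandarallel

# mute startup/info messages and the progress bars
pandarallel.initialize(progress_bar=False, verbose=0)
```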
|
closed
|
2019-06-06T03:23:50Z
|
2019-07-09T17:39:28Z
|
https://github.com/nalepae/pandarallel/issues/27
|
[] |
qiangbo
| 1
|
sunscrapers/djoser
|
rest-api
| 656
|
The resend_activation endpoint discloses if a user with a given email exists
|
The `resend_activation` endpoint returns a 400 response if the given email does not belong to an (inactive) user.
This endpoint re-uses the password reset serializer (#555) but does not respect the `PASSWORD_RESET_SHOW_EMAIL_NOT_FOUND` setting because of these lines:
https://github.com/sunscrapers/djoser/blob/c62371e3f9a8bbad2eaf55ffd0efad6eb6c02f26/djoser/views.py#L208-L209
All settings related to disclosing email default to `False`:
```
PASSWORD_RESET_SHOW_EMAIL_NOT_FOUND
USERNAME_RESET_SHOW_EMAIL_NOT_FOUND
```
`resend_activation` shouldn't break this default.
P.S.
These settings don't work as advertised by the way, setting one to True has the effect of also toggling the other:
https://github.com/sunscrapers/djoser/blob/c62371e3f9a8bbad2eaf55ffd0efad6eb6c02f26/djoser/serializers.py#L145-L149
|
closed
|
2022-02-15T12:38:53Z
|
2023-07-03T15:56:36Z
|
https://github.com/sunscrapers/djoser/issues/656
|
[] |
jaap3
| 0
|
ultralytics/yolov5
|
deep-learning
| 13,299
|
ModuleNotFoundError: No module named 'models.yolo'.
|
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have fine-tuned my model on Google Colab, downloaded the best.pt model, and saved it locally. After that I run the code below:
`from ultralytics import YOLO
model = YOLO("./models/best.pt")
result = model.predict("input_videos/image.png")`
I made sure about my model path and input image path. I also added an __init__.py file to my models folder and reinstalled the ultralytics library. I know I can run `import os
result = os.system("python yolov5/detect.py --weights models/last.pt --img 640 --conf 0.8 --source input_videos/input_video.mp4")` instead, but that code does not give me the results programmatically, so I don't know how to run prediction and get the detected locations.
### Additional
I am using Python 3.12.5 and ultralytics 8.2.87, running on CPU. I also ran the same kind of code to detect persons without a fine-tuned model, i.e. using your models, and in that case I got the correct result.
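One workaround sketch (paths are the ones from the report above): load the YOLOv5 checkpoint through torch.hub instead of the `ultralytics.YOLO` class.
```python
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="models/best.pt")
results = model("input_videos/image.png")
results.print()            # human-readable summary of detections
boxes = results.xyxy[0]    # tensor of [x1, y1, x2, y2, confidence, class]
print(boxes)
```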
|
closed
|
2024-09-05T12:50:15Z
|
2024-09-06T16:23:18Z
|
https://github.com/ultralytics/yolov5/issues/13299
|
[
"question"
] |
Khush1593
| 3
|
yeongpin/cursor-free-vip
|
automation
| 300
|
[Bug]: Auto Run script fails with error
|
### Commit before submitting
- [x] I understand that Issues are used to provide feedback and solve problems, not to complain in the comments section, and will provide more information to help solve the problem.
- [x] I have checked the top Issue and searched for existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar issues.
- [x] I have filled out a short and clear title, so that developers can quickly determine the general problem when browsing the Issue list. Not "a suggestion", "stuck", etc.
### Platform
macOS ARM64
### Version
1.7.12
### Description
The auto-run script fails (only downloading it manually works).
### Related log output
```shell
curl -fsSL https://raw.githubusercontent.com/yeongpin/cursor-free-vip/main/scripts/install.sh -o install.sh && chmod +x install.sh && ./install.sh
curl: (56) Failure writing output to destination, passed 1369 returned 4294967295
```
|
open
|
2025-03-18T19:33:40Z
|
2025-03-18T19:34:38Z
|
https://github.com/yeongpin/cursor-free-vip/issues/300
|
[
"bug"
] |
Jordan231111
| 0
|
graphistry/pygraphistry
|
pandas
| 129
|
Bug: Spigo demo in Python3
|
Change import to:
```
try:
import urllib.request as urllib2
except ImportError:
import urllib2
```
|
open
|
2019-07-04T17:28:40Z
|
2019-07-04T17:28:58Z
|
https://github.com/graphistry/pygraphistry/issues/129
|
[
"bug"
] |
lmeyerov
| 0
|
WZMIAOMIAO/deep-learning-for-image-processing
|
deep-learning
| 639
|
Could you explain a general Grad-CAM approach for object detection?
|
closed
|
2022-09-12T15:01:06Z
|
2022-10-08T13:48:06Z
|
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/639
|
[] |
swjtulinxi
| 2
|
|
twopirllc/pandas-ta
|
pandas
| 504
|
Issue running vwap with dataframe index from yFinance data
|
**Which version are you running? The lastest version is on Github. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
0.3.14b0
**Do you have _TA Lib_ also installed in your environment?**
```sh
$ pip list
```
no
**Did you upgrade? Did the upgrade resolve the issue?**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
yes but didn't resolve the issue
**Describe the bug**
I'm trying to run the simple example that calculates VWAP. As documented, vwap requires the DataFrame to have a DatetimeIndex.
However, the data coming from yFinance doesn't have a "datetime" column, so setting the index raises a KeyError on the DataFrame.
**To Reproduce**
```python
import pandas as pd
import pandas_ta as ta
df = pd.DataFrame()
df = df.ta.ticker("aapl", period="9d", interval="5m")
df.set_index(pd.DatetimeIndex(df["datetime"]), inplace=True)
print(df.columns)
df.ta.vwap(append=True)
```
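As a workaround I am considering the sketch below (assuming the frame returned by `df.ta.ticker` already carries its timestamps in the index, so there is no separate "datetime" column to set); is this the right approach?
```python
import pandas as pd
import pandas_ta as ta

df = pd.DataFrame()
df = df.ta.ticker("aapl", period="9d", interval="5m")

# Coerce the existing index instead of looking for a "datetime" column
# that the yFinance data does not provide.
if not isinstance(df.index, pd.DatetimeIndex):
    df.index = pd.DatetimeIndex(df.index)

df.ta.vwap(append=True)
print(df.columns)
```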
**Screenshots**

Thanks for using Pandas TA!
|
closed
|
2022-03-23T10:24:58Z
|
2022-03-25T23:34:05Z
|
https://github.com/twopirllc/pandas-ta/issues/504
|
[
"info"
] |
Gillani0
| 4
|
s3rius/FastAPI-template
|
fastapi
| 134
|
generate template error: Stopping generation because post_gen_project hook script didn't exit successfully
|
# Image

# Error Info
```
E:\Code>fastapi_template
Project name: fastapi_template_test
Project description:
Removing resources for disabled feature GraphQL API...
Removing resources for disabled feature Kafka support...
Removing resources for disabled feature Kubernetes...
Removing resources for disabled feature Migrations...
Removing resources for disabled feature Gitlab CI...
Removing resources for disabled feature Dummy model...
Removing resources for disabled feature Self-hosted swagger...
Removing resources for disabled feature Tortoise ORM...
Removing resources for disabled feature Ormar ORM...
Removing resources for disabled feature PsycoPG...
Removing resources for disabled feature Piccolo...
Removing resources for disabled feature Postgresql DB...
Removing resources for disabled feature Opentelemetry support...
Removing resources for disabled feature SQLite DB...
cleanup complete!
⭐ Placing resources nicely in your new project ⭐
Resources are happy to be where they are needed the most.
Git repository initialized.
warning: in the working copy of 'fastapi_template_test/static/docs/redoc.standalone.js', LF will be replaced by CRLF the next time Git touches it
warning: in the working copy of 'fastapi_template_test/static/docs/swagger-ui-bundle.js', LF will be replaced by CRLF the next time Git touches it
warning: in the working copy of 'fastapi_template_test/static/docs/swagger-ui.css', LF will be replaced by CRLF the next time Git touches it
Added files to index.
Traceback (most recent call last):
File "C:\Users\pc\AppData\Local\Temp\tmpo5ko_okk.py", line 74, in <module>
init_repo()
File "C:\Users\pc\AppData\Local\Temp\tmpo5ko_okk.py", line 64, in init_repo
subprocess.run(["poetry", "install", "-n"])
File "C:\Python311\Lib\subprocess.py", line 546, in run
with Popen(*popenargs, **kwargs) as process:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\subprocess.py", line 1022, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Python311\Lib\subprocess.py", line 1491, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [WinError 2] 系统找不到指定的文件。
Stopping generation because post_gen_project hook script didn't exit successfully
```
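From the traceback, WinError 2 from CreateProcess seems to mean that the "poetry" executable could not be found on PATH. A hypothetical guard like the one below (not the template's actual hook code) would at least fail with a clearer message:
```python
import shutil
import subprocess
import sys

# Hypothetical guard: explain the missing dependency instead of crashing
# with FileNotFoundError / WinError 2 when poetry is not on PATH.
if shutil.which("poetry") is None:
    sys.exit("poetry was not found on PATH; install it and re-run the generator")

subprocess.run(["poetry", "install", "-n"], check=True)
```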
# Context Info
Python: 3.11.0 (main, Oct 24 2022, 18:26:48) [MSC v.1933 64 bit (AMD64)] on win32
Pip Package:
```
C:\Users\pc>pip list
Package Version
------------------ ---------
arrow 1.2.3
binaryornot 0.4.4
certifi 2022.9.24
cfgv 3.3.1
chardet 5.0.0
charset-normalizer 2.1.1
click 8.1.3
colorama 0.4.6
cookiecutter 1.7.3
distlib 0.3.6
fastapi-template 3.3.10
filelock 3.8.0
identify 2.5.8
idna 3.4
Jinja2 3.1.2
jinja2-time 0.2.0
MarkupSafe 2.1.1
nodeenv 1.7.0
pip 22.3
platformdirs 2.5.3
poyo 0.5.0
pre-commit 2.20.0
prompt-toolkit 3.0.32
pydantic 1.10.2
python-dateutil 2.8.2
python-slugify 6.1.2
PyYAML 6.0
requests 2.28.1
setuptools 65.5.0
six 1.16.0
termcolor 1.1.0
text-unidecode 1.3
toml 0.10.2
typing_extensions 4.4.0
urllib3 1.26.12
virtualenv 20.16.6
wcwidth 0.2.5
```
|
closed
|
2022-11-12T20:05:38Z
|
2022-11-13T08:59:40Z
|
https://github.com/s3rius/FastAPI-template/issues/134
|
[] |
liqiujiong
| 3
|
FactoryBoy/factory_boy
|
sqlalchemy
| 121
|
ManyToMany for SQLAlchemy
|
I've been trying to get a many-to-many field working with SQLAlchemy, following the documentation:
```
class LandingPageFactory(alchemy.SQLAlchemyModelFactory):
FACTORY_FOR = LandingPage
FACTORY_SESSION = db.session
name = Sequence(lambda n: u'Landing Page %d' % n)
class LPRotatorFactory(alchemy.SQLAlchemyModelFactory):
FACTORY_FOR = LPRotator
FACTORY_SESSION = db.session
name = Sequence(lambda n: u'Landing Page %d' % n)
@post_generation
def landing_pages(self, create, extracted, **kwargs):
if not create:
return
if extracted:
for landing_page in extracted:
self.landing_pages.add(landing_page)
```
Then if I try to set it up like so
```
lp1 = LandingPageFactory()
lp2 = LandingPageFactory()
db.session.commit() # All good here
lpr = LPRotatorFactory(landing_pages=(lp1, lp2))
db.session.commit() # This throws the error
```
This throws an Attribute error.
```
self.landing_pages.add(landing_page)
AttributeError: 'InstrumentedList' object has no attribute 'add'
```
I noticed all the docs and examples use Django, but didn't see anything too specific. Am I doing something wrong?
Thanks
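For what it's worth, SQLAlchemy relationship collections behave like plain Python lists (`InstrumentedList`), so I assume the hook should use `append()` instead of `add()`. A sketch of what I think is meant (`LPRotator` and `db.session` come from my snippet above):
```python
from factory import Sequence, post_generation
from factory.alchemy import SQLAlchemyModelFactory

class LPRotatorFactory(SQLAlchemyModelFactory):
    FACTORY_FOR = LPRotator        # model from the snippet above
    FACTORY_SESSION = db.session   # session from the snippet above

    name = Sequence(lambda n: u'Rotator %d' % n)

    @post_generation
    def landing_pages(self, create, extracted, **kwargs):
        if not create:
            return
        if extracted:
            for landing_page in extracted:
                # SQLAlchemy list collections expose append(), not add()
                self.landing_pages.append(landing_page)
```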
|
closed
|
2014-01-09T06:57:13Z
|
2019-06-02T00:25:43Z
|
https://github.com/FactoryBoy/factory_boy/issues/121
|
[] |
adamrt
| 2
|
keras-team/keras
|
deep-learning
| 20,098
|
module 'keras.utils' has no attribute 'PyDataset'
|
I have correctly installed Keras 3.0.5 and I am using PyTorch as the backend, but it always raises "module 'keras.utils' has no attribute 'PyDataset'". How can I solve this problem?
|
closed
|
2024-08-08T08:32:19Z
|
2024-08-13T06:58:42Z
|
https://github.com/keras-team/keras/issues/20098
|
[
"stat:awaiting response from contributor",
"type:Bug"
] |
Sticcolet
| 5
|
public-apis/public-apis
|
api
| 3,540
|
favorable
|
coolest idea
|
closed
|
2023-06-13T22:25:59Z
|
2023-06-14T04:43:44Z
|
https://github.com/public-apis/public-apis/issues/3540
|
[] |
doypro
| 0
|
waditu/tushare
|
pandas
| 1,713
|
The top10_floatholders and top10_holders APIs are missing data for 2006
|
The top10_floatholders and top10_holders APIs are missing all four quarters of data for 2006; other years are normal.
tushare id: 224776
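A minimal reproduction sketch (assuming a Pro API token and the documented `top10_floatholders` parameters; the ts_code is only an example):
```python
import tushare as ts

pro = ts.pro_api("YOUR_TOKEN")  # placeholder token

# Querying 2006 returns an empty frame, while neighbouring years return rows.
df_2006 = pro.top10_floatholders(ts_code="600000.SH",
                                 start_date="20060101", end_date="20061231")
print(len(df_2006))
```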

|
open
|
2023-07-27T08:07:20Z
|
2023-07-27T08:08:35Z
|
https://github.com/waditu/tushare/issues/1713
|
[] |
GitForBruce
| 0
|
python-visualization/folium
|
data-visualization
| 1,786
|
Geocoder with own locations
|
Greetings, I slightly edited the geocoder so that it can now accept your own places.
https://github.com/JohnyCarrot/folium-geocoder-own-locations
Hope someone finds it useful.
|
closed
|
2023-07-30T22:10:04Z
|
2024-05-06T07:28:32Z
|
https://github.com/python-visualization/folium/issues/1786
|
[
"documentation"
] |
JohnyCarrot
| 4
|
qubvel-org/segmentation_models.pytorch
|
computer-vision
| 692
|
How to compute metrics for each class in multi class segmentation
|
I would like to compute the metrics individually for each class, so the output should be a (1 x C) vector where C is the number of classes. I tried the following, but it throws an error:
```
output = torch.rand([10, 3, 256, 256])
target = torch.rand([10, 1, 256, 256]).round().long()
# first compute statistics for true positives, false positives, false negative and
# true negative "pixels"
tp, fp, fn, tn = smp.metrics.get_stats(output, target, mode='multi class', num_classes = 3)
# then compute metrics with required reduction (see metric docs)
iou_score = smp.metrics.iou_score(tp, fp, fn, tn, reduction="macro-imagewise")
f1_score = smp.metrics.f1_score(tp, fp, fn, tn, reduction="macro-imagewise")
false_negatives = smp.metrics.false_negative_rate(tp, fp, fn, tn, reduction=None)
recall = smp.metrics.recall(tp, fp, fn, tn, reduction=None)
```
The error:
```
ValueError: For ``multiclass`` mode ``target`` should be one of the integer types, got torch.float32.
```
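For context, this is how I assume per-class scores are supposed to be obtained (based on the docs: multiclass mode takes integer class-index tensors for both prediction and target, and reduction=None keeps per-image, per-class values):
```python
import torch
import segmentation_models_pytorch as smp

# logits: (N, C, H, W); target: (N, H, W) with integer class ids in [0, C)
logits = torch.rand(10, 3, 256, 256)
target = torch.randint(0, 3, (10, 256, 256))

pred = logits.argmax(dim=1)  # multiclass mode expects class indices, not probabilities

tp, fp, fn, tn = smp.metrics.get_stats(pred, target, mode="multiclass", num_classes=3)

# tp/fp/fn/tn have shape (N, C); averaging over the batch dim gives one score per class
iou_per_class = smp.metrics.iou_score(tp, fp, fn, tn, reduction=None).mean(dim=0)
print(iou_per_class)  # tensor of shape (C,)
```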
|
closed
|
2022-12-03T11:38:01Z
|
2023-02-28T02:04:44Z
|
https://github.com/qubvel-org/segmentation_models.pytorch/issues/692
|
[
"Stale"
] |
santurini
| 3
|
sammchardy/python-binance
|
api
| 1,354
|
Websockets RuntimeError "This event loop is already running"
|
When I run a websocket, stop it after some time, and then start a new websocket, the following error occurs:
```
Exception in thread Thread-2:
Traceback (most recent call last):
File "D:\python\Python39\lib\threading.py", line 950, in _bootstrap_inner
self.run()
File "D:\python\Python39\lib\site-packages\binance\threaded_stream.py", line 59, in run
self._loop.run_until_complete(self.socket_listener())
File "D:\python\Python39\lib\asyncio\base_events.py", line 618, in run_until_complete
self._check_running()
File "D:\python\Python39\lib\asyncio\base_events.py", line 578, in _check_running
raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running
D:\python\Python39\lib\threading.py:952: RuntimeWarning: coroutine 'ThreadedApiManager.socket_listener' was never awaited
self._invoke_excepthook(self)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
This is the code I use to start the websocket:
```python
twm = ThreadedWebsocketManager(self.main_window.api_key, self.main_window.api_secret)
twm.start()
current_candle_websocket = twm.start_kline_futures_socket(callback=self.handle_candle_message, symbol=self.symbol, interval=Client.KLINE_INTERVAL_5MINUTE)
```
This is the code I use to stop the websocket:
```python
twm.stop_socket(current_candle_websocket)
twm.stop()
twm = ''
```
I use Python 3.9. The error didn't occur on python-binance 1.0.15, but since some API features are retired I can no longer use this version and updated python-binance to 1.0.19, and after that I am getting this error.
|
open
|
2023-08-20T17:01:52Z
|
2024-01-22T22:37:48Z
|
https://github.com/sammchardy/python-binance/issues/1354
|
[] |
savelyevlad
| 7
|
babysor/MockingBird
|
pytorch
| 812
|
Started in GUI mode; after it stopped responding, closing and restarting it no longer shows the UI
|
**Summary [one-sentence description]**
Started in GUI mode; after the program became unresponsive I closed and restarted it, but the UI no longer appears.
**Env & To Reproduce**
Python 3.7.9
After starting in GUI mode with `python demo_toolbox.py -d dataset`, clicking "load above" and then typing text into the input box on the right makes the program hang and stop responding. After closing it and starting it again, the UI does not appear.
**Screenshots**

It gets stuck here when started again:

|
open
|
2023-01-08T05:08:50Z
|
2023-02-07T06:33:37Z
|
https://github.com/babysor/MockingBird/issues/812
|
[] |
test-black
| 2
|
quokkaproject/quokka
|
flask
| 75
|
Fix error on user register
|
Review the user forms, registration, and login.
|
closed
|
2013-11-05T07:31:59Z
|
2015-07-16T02:56:41Z
|
https://github.com/quokkaproject/quokka/issues/75
|
[
"bug"
] |
rochacbruno
| 1
|
pyro-ppl/numpyro
|
numpy
| 1,870
|
Grads w.r.t. weights of `MixtureGeneral` Distribution are giving `nan`s
|
Hi,
We have created some models where we estimate the weights of the `MixtureGeneral` distribution. However, when computing the gradient with respect to this argument, we encounter `nan` values. We enabled `jax.config.update("jax_debug_nans", True)` to diagnose the issue, and it pointed to the following line:
https://github.com/pyro-ppl/numpyro/blob/8e9313fd64a34162bc1c08b20ed310373e82e347/numpyro/distributions/mixtures.py#L152
I suspect that after the implementation of https://github.com/pyro-ppl/numpyro/pull/1791, extra care is needed to handle `inf` and `nan` values, possibly by using a double `where` for a safe `logsumexp`.
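For illustration, a minimal sketch of that "double where" idea applied to a logsumexp (this is only the pattern I have in mind, not numpyro's implementation):
```python
import jax.numpy as jnp

def safe_logsumexp(x, axis=-1):
    """logsumexp whose backward pass never sees the -inf entries directly."""
    finite = jnp.isfinite(x)
    # 1st where: substitute a harmless value so exp() is differentiated at finite points
    safe_x = jnp.where(finite, x, 0.0)
    xmax = jnp.max(jnp.where(finite, safe_x, -jnp.inf), axis=axis, keepdims=True)
    xmax = jnp.where(jnp.isfinite(xmax), xmax, 0.0)
    # 2nd where: masked terms contribute exactly zero to the sum and to the gradient
    sumexp = jnp.sum(jnp.where(finite, jnp.exp(safe_x - xmax), 0.0), axis=axis)
    return jnp.squeeze(xmax, axis=axis) + jnp.log(sumexp)

print(safe_logsumexp(jnp.array([0.3, -jnp.inf, 1.2])))  # finite value, finite gradients
```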
> [!IMPORTANT]
> This is an urgent issue, so a prompt response would be greatly appreciated.
|
closed
|
2024-09-27T23:55:10Z
|
2024-10-04T18:22:05Z
|
https://github.com/pyro-ppl/numpyro/issues/1870
|
[
"enhancement"
] |
Qazalbash
| 3
|
plotly/dash-core-components
|
dash
| 801
|
[Feature Request] dcc.Slider - Enable user to define slider direction
|
I have an application where I would like to use sliders to crop an image (heatmap graph). The image's (0,0) is defined as the top left of the image. I'd like the y slider to start with 0 at the top and end with the height of the image at the bottom. Currently, I cannot find a way to invert the slider so that it goes {top: low, bottom: high} instead of the default {top: high, bottom: low}.
**Describe the solution you'd like**
A slider where the direction from minimum to maximum could be swapped.
**Describe alternatives you've considered**
I tried setting the min to be higher than the max, did not work.
I've tried flipping the slider with CSS and the result is... erratic
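In the meantime I am working around it by re-mapping the value inside the callback, roughly like the sketch below (current Dash imports; `IMG_HEIGHT` and the component ids are made up), so the top of a vertical slider corresponds to y = 0:
```python
from dash import Dash, dcc, html, Input, Output

IMG_HEIGHT = 500  # hypothetical image height in pixels

app = Dash(__name__)
app.layout = html.Div([
    # The slider itself still runs min -> max bottom-to-top; it is re-interpreted below.
    dcc.Slider(id="y-crop", min=0, max=IMG_HEIGHT, step=1, value=IMG_HEIGHT, vertical=True),
    html.Div(id="crop-info"),
])

@app.callback(Output("crop-info", "children"), Input("y-crop", "value"))
def show_crop(value):
    y = IMG_HEIGHT - value  # invert so the top of the slider maps to y = 0
    return f"crop at y = {y}"

if __name__ == "__main__":
    app.run_server(debug=True)
```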
|
open
|
2020-05-05T10:08:10Z
|
2020-05-05T15:49:50Z
|
https://github.com/plotly/dash-core-components/issues/801
|
[
"dash-type-enhancement"
] |
Rory-Lambert
| 0
|
gevent/gevent
|
asyncio
| 1,958
|
Update c-ares version to 1.19.1
|
* gevent version: 22.10.2 from PyPI
* Python version: python 3.11.3
* Operating System: docker python:latest
### Description:
Update c-ares to 1.19.1 (the latest version as of today: https://c-ares.org). This is not a bug in gevent itself but in its dependency c-ares. The following vulnerabilities exist:
* https://nvd.nist.gov/vuln/detail/CVE-2023-32067
* https://nvd.nist.gov/vuln/detail/CVE-2022-4904
* https://nvd.nist.gov/vuln/detail/CVE-2023-31124
Version 1.19.1 seems fine (at least considering these vulnerabilities). Gevent is currently using 1.18.1: https://github.com/gevent/gevent/blob/master/deps/c-ares/include/ares_version.h#L14. It would be nice to update the c-ares version to 1.19.1.
### What I've run:
I tried this using this Dockerfile:
```dockerfile
FROM python
RUN pip3 install gevent==22.10.2
```
|
closed
|
2023-06-07T07:08:05Z
|
2023-07-11T13:04:24Z
|
https://github.com/gevent/gevent/issues/1958
|
[] |
karabill
| 0
|
ultralytics/ultralytics
|
machine-learning
| 19,774
|
training failed
|
We have a large dataset that contains about 1M tables, trained on the YOLO11x model:
```
def model_train(data_yaml_path):
model = YOLO('yolo11x.pt')
data = Path(data_yaml_path)
results = model.train(data=data, epochs=10, imgsz=800, patience=2, cls=0.25, box=0.05, project="final-table-detection",
device=[0, 1, 2, 3, 4, 5], batch=36)
```
Training ran for only 3 epochs and the model keeps the first epoch as the best one, but the results are very poor. What is the reason? @glenn-jocher
|
open
|
2025-03-19T05:39:36Z
|
2025-03-19T06:32:52Z
|
https://github.com/ultralytics/ultralytics/issues/19774
|
[
"question",
"detect"
] |
tzktok
| 4
|
albumentations-team/albumentations
|
deep-learning
| 1,719
|
[Feature request] Add apply_to_batch
|
There are requests for how to apply transforms to a video or to a batch.
Albumentations was not originally designed to be applied to batches.
But it looks like we can add such functionality without too much pain, by simply looping over frames together with their annotations; see the sketch after the related issues below.
Related to:
https://github.com/albumentations-team/albumentations/issues/465
https://github.com/albumentations-team/albumentations/issues/683
https://github.com/albumentations-team/albumentations/issues/1561
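As a stopgap, something like this is already possible today with ReplayCompose, by recording the sampled parameters on the first frame and replaying them on the rest (a rough sketch; the transforms and frame shapes are arbitrary examples):
```python
import albumentations as A
import numpy as np

transform = A.ReplayCompose([
    A.HorizontalFlip(p=0.5),
    A.RandomBrightnessContrast(p=0.5),
])

frames = [np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8) for _ in range(8)]

# Sample the random parameters once on the first frame...
first = transform(image=frames[0])
augmented = [first["image"]]

# ...and replay exactly the same parameters on every remaining frame.
for frame in frames[1:]:
    augmented.append(A.ReplayCompose.replay(first["replay"], image=frame)["image"])
```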
|
closed
|
2024-05-10T18:48:05Z
|
2024-10-31T02:18:43Z
|
https://github.com/albumentations-team/albumentations/issues/1719
|
[
"enhancement"
] |
ternaus
| 1
|
axnsan12/drf-yasg
|
django
| 262
|
Question - how do I mark fields as minLength 0 and nullable?
|
I have an app using DRF 3.8.2, Django 1.11.16, and drf-yasg 1.10.2. Schema generation works, but I tried to add automated test cases to verify that the response matches the generated schema.
I have a CharField on a ModelSerializer for a model field that has null=True, blank=True. Despite this, drf-yasg appears to generate a minLength: 1 requirement. Oracle makes no distinction between NULL and an empty string for VARCHAR2 and NVARCHAR2. DRF returns these fields as '', which I can change, but is there an easier way to control this than a "NullableCharField" subclass?
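Would declaring the field explicitly be the expected answer? My assumption (the serializer and field names below are made up) is that allow_blank=True drops the implicit minLength: 1 and allow_null=True marks the field as nullable in the generated schema:
```python
from rest_framework import serializers

class ExampleSerializer(serializers.Serializer):
    # Hypothetical field: allow_blank removes the minLength: 1 constraint,
    # allow_null should mark it as nullable (x-nullable) in the Swagger output.
    description = serializers.CharField(allow_blank=True, allow_null=True, required=False)
```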
|
closed
|
2018-12-04T16:09:53Z
|
2018-12-04T16:20:02Z
|
https://github.com/axnsan12/drf-yasg/issues/262
|
[] |
danizen
| 1
|
Gozargah/Marzban
|
api
| 1,572
|
better dependency manager for project
|
Currently Marzban uses pip to manage dependencies.
Using pip can lead to some problems, and we always need extra third-party tooling such as venv.
We could replace pip with [uv](https://docs.astral.sh/uv/) to avoid this and get a better dependency manager.
|
closed
|
2025-01-05T20:39:48Z
|
2025-02-19T20:46:00Z
|
https://github.com/Gozargah/Marzban/issues/1572
|
[
"Doc",
"Feature",
"FeedBack Needed",
"Refactor",
"v1"
] |
M03ED
| 6
|