| repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (list, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
oegedijk/explainerdashboard
|
dash
| 272
|
Dashboard loading stuck in docker
|
I'm using explainerdashboard to visualize models built by different AutoML solutions.
When I run my application locally, everything works as intended. But when it runs in a Docker environment, the explainer dashboard gets stuck at the "Generating layout..." step while loading a previously generated page:

This does not happen with all AutoML models, and when I restart the Docker container manually, the dashboard eventually boots properly.
I'm currently not sure where to look for this; maybe you have some ideas?
|
closed
|
2023-07-11T17:01:29Z
|
2024-07-18T12:29:12Z
|
https://github.com/oegedijk/explainerdashboard/issues/272
|
[] |
AlexanderZender
| 0
|
xlwings/xlwings
|
automation
| 2,007
|
Partial result returned in range.api.special_cells()
|
#### OS (e.g. Windows 10 or macOS Sierra)
MacOS 12.5.1
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
Python 3.9.5
xlwings 0.27.10
Microsoft Excel for Mac v16.62
#### Describe your issue (incl. Traceback!)
I ran into trouble using `range.api.special_cells()`; I'm not sure if it is a bug in appscript. To make a comparison, I double-checked in both VBA and xlwings.
The used range is `$A5:$IT4165`, and I filtered the data manually, which should leave 3789 rows visible. Here is the Excel VBA code:
```
MsgBox Sheet1.UsedRange.SpecialCells(xlCellTypeVisible).Areas.Count
MsgBox Sheet1.UsedRange.SpecialCells(xlCellTypeVisible).Areas(35).Address
```
It shows that the visible range has been split into 35 areas, and the last one points to the last block "$A4152:$IT$4158".
Here is the code in xlwings:
```
sh1.api.used_range.special_cells(type=12).areas()
sh1.api.used_range.special_cells(type=12).areas.count()
sh1.api.used_range.special_cells(type=12).get_address()
```
The first line prints a list of the returned areas, but the result was truncated: the list length is only 17.
The second line failed with a CommandError exception, message "Parameter error". I also tried `areas.count.get()`, which raised "no attribute 'get'".
The third line returned a list of addresses, matching the result of the first line.
Did I miss something in xlwings?
|
open
|
2022-09-03T16:10:55Z
|
2022-09-05T16:48:49Z
|
https://github.com/xlwings/xlwings/issues/2007
|
[
"dependency"
] |
tsusoft
| 6
|
LAION-AI/Open-Assistant
|
python
| 2,695
|
Improve code & style of "try our assistant" in dashboard
|
https://github.com/LAION-AI/Open-Assistant/blob/82540c044ee57171fb66e83986b933649d25e7fb/website/src/pages/dashboard.tsx#L39-L49
|
closed
|
2023-04-18T04:19:13Z
|
2023-05-07T14:28:53Z
|
https://github.com/LAION-AI/Open-Assistant/issues/2695
|
[
"website",
"good first issue",
"UI/UX"
] |
yk
| 6
|
Miserlou/Zappa
|
django
| 2,066
|
Dateutil version bump
|
`python-dateutil` is pinned at 2.6.0, but it's starting to [conflict with some things](https://github.com/spulec/freezegun/issues/333) like `freezegun` that would like a newer version. It seems like dependabot has been fighting for this for a while too: https://github.com/Miserlou/Zappa/pulls?q=is%3Apr+dateutil+is%3Aclosed
I couldn't locate much rationale for this pin. Could it be bumped to `>=2.7.0`?
|
closed
|
2020-03-19T19:23:37Z
|
2020-09-29T15:44:04Z
|
https://github.com/Miserlou/Zappa/issues/2066
|
[] |
chris-erickson
| 3
|
Anjok07/ultimatevocalremovergui
|
pytorch
| 629
|
I tried to update, but every time I try to extract vocals I can't because of this error
|

What can I do to stop getting this error?
I know nothing about programming, by the way.
log:
Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
PermissionError: "[WinError 5] Acceso denegado: 'D:\\STEMS\\Ensembled_Outputs_1687524557'"
Traceback Error: "
File "UVR.py", line 4640, in process_start
File "UVR.py", line 533, in __init__
"
Error Time Stamp [2023-06-23 08:49:16]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: 4
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: False
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
|
open
|
2023-06-23T14:57:18Z
|
2023-07-07T06:37:36Z
|
https://github.com/Anjok07/ultimatevocalremovergui/issues/629
|
[] |
djcxb
| 1
|
piccolo-orm/piccolo
|
fastapi
| 1,003
|
Support arrays of timestamp / timestamptz / date / time in SQLite
|
We don't currently support things like this in SQLite:
```python
class MyTable(Table):
    times = Array(Time())
```
SQLite doesn't have native support for array columns. The way we support it in Piccolo is by serialising the array into a string before storing it in the database, and deserialising it again when querying the row.
We use JSON to do the serialisation / deserialisation, which doesn't support `datetime` / `date` / `time` out of the box.
To support this, we'll need to create new row types for SQLite - like `ARRAY_TIME` / `ARRAY_TIMESTAMP` etc. When we read data from a row with this column type, we know we need to deserialise the values back into a list of Python objects.
One of the reasons we need this functionality is because we're doing a lot of improvements to arrays in Piccolo Admin, and we often test on SQLite.
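The serialise/deserialise round-trip described above can be sketched as follows. This is an illustration of the approach, not Piccolo's actual implementation (the function names are made up); ISO strings are used because JSON has no native time type:

```python
import json
from datetime import time

def serialise_times(times):
    # Store a list of datetime.time values as a JSON string of ISO strings.
    return json.dumps([t.isoformat() for t in times])

def deserialise_times(raw):
    # Recover the original list of datetime.time objects.
    return [time.fromisoformat(s) for s in json.loads(raw)]
```

A column type like the proposed `ARRAY_TIME` would tell the engine which deserialiser to apply when reading the row back.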
|
closed
|
2024-05-31T11:10:12Z
|
2024-05-31T11:32:12Z
|
https://github.com/piccolo-orm/piccolo/issues/1003
|
[
"enhancement"
] |
dantownsend
| 0
|
HIT-SCIR/ltp
|
nlp
| 572
|
Can POS tagging be done directly, without word segmentation?
|
Can POS tagging be done directly without word segmentation? For example, I already have a word list and need to run POS tagging on it.
|
closed
|
2022-08-09T08:39:09Z
|
2022-11-16T06:51:46Z
|
https://github.com/HIT-SCIR/ltp/issues/572
|
[] |
lelechallc
| 3
|
harry0703/MoneyPrinterTurbo
|
automation
| 171
|
If you have a GPU, you can speed up video composition with ffmpeg's hardware encoder: codec='h264_nvenc'
|
```
video.py: final_clip.write_videofile(combined_video_path, codec='h264_nvenc', threads=threads,
video.py: result.write_videofile(temp_output_file, codec='h264_nvenc', threads=params.n_threads or 10,
video.py: video_clip.write_videofile(output_file, codec='h264_nvenc', audio_codec="aac", threads=params.n_threads or 10,
```
|
closed
|
2024-04-05T01:32:48Z
|
2025-02-18T15:40:51Z
|
https://github.com/harry0703/MoneyPrinterTurbo/issues/171
|
[
"suggestion"
] |
shanghailiwei
| 10
|
microsoft/qlib
|
deep-learning
| 1,136
|
training tricks of HIST
|
Hi there,
Thanks for making the code of HIST public. I'd like to discuss two topics.
1. `iter_daily_shuffle` or `iter_daily`
Regarding the order of training samples fed to the model, I've noticed that the days are shuffled by default (`iter_daily_shuffle`). When I changed the setting to train on samples day by day (`iter_daily`), I saw a huge performance decrease. I'm a little confused about that. Could `iter_daily_shuffle` be suspected of some information leakage, since the model sees later samples first? Have you run into the same situation when training the model?
2. Split of train/valid/test sets
Splitting the train/valid/test sets over different time windows introduces great randomness into model performance. I'm not sure if there is some workaround to overcome this randomness and make the model more robust.
Any idea or help would be appreciated!
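As I understand the two iterators (this sketch is my paraphrase of their semantics, not qlib's code), the only difference is the order in which whole days are visited:

```python
import random

def iter_daily(days):
    # Visit training days in chronological order.
    for d in sorted(days):
        yield d

def iter_daily_shuffle(days, seed=0):
    # Visit the same days in a random order; samples within a day
    # are still grouped together.
    order = sorted(days)
    random.Random(seed).shuffle(order)
    yield from order
```

Shuffling the day order between epochs is standard practice for SGD and does not, by itself, put future data into any single gradient step; leakage would require future information entering the features or normalisation statistics.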
|
closed
|
2022-06-16T02:51:45Z
|
2022-10-06T12:03:17Z
|
https://github.com/microsoft/qlib/issues/1136
|
[
"question",
"stale"
] |
BeckyYu96
| 2
|
skforecast/skforecast
|
scikit-learn
| 572
|
support more than 2 percentiles to be passed for `predict_interval`
|
I'm working on creating a back-adapter for `skforecast` models in `sktime`, starting with `ForecasterAutoreg`. The goal is to offer `skforecast` as a backend alternative to the already existing `make_reduction`. While doing this, I noticed that `predict_interval` currently enforces `interval` to be a two-element list of values in `[0, 100)`, even though they don't need to sum to 100.
While that makes sense for a single confidence interval, enforcing the length does not seem necessary, since you are calculating the quantiles from bootstrapped predictions anyway.
https://github.com/JoaquinAmatRodrigo/skforecast/blob/db04f762d878c096b87b97de1a20f35025bbc437/skforecast/ForecasterAutoreg/ForecasterAutoreg.py#L990
If you remove the fixed-length requirement, it will be easier to integrate into `sktime`'s `predict_quantiles` method; otherwise I'd need an otherwise unnecessary for loop, or to call `predict_bootstrapping` directly and compute the quantiles in the back-adapter itself. Can this be considered as a feature request?
If you have any other suggestion to address this without the feature request, that would be much appreciated as well.
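The generalisation being requested can be sketched like this (names are illustrative, not skforecast's API): given a matrix of bootstrapped predictions, any number of percentiles can be computed in one pass.

```python
import numpy as np

def quantiles_from_bootstrap(boot_preds, percentiles):
    # boot_preds: (n_bootstraps, horizon) array of simulated predictions.
    # Returns {percentile: values over the forecast horizon}.
    boot_preds = np.asarray(boot_preds, dtype=float)
    return {p: np.percentile(boot_preds, p, axis=0) for p in percentiles}
```

With this shape there is nothing special about passing exactly two percentiles, which is the point of the feature request.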
|
closed
|
2023-10-16T17:06:44Z
|
2023-10-28T17:55:13Z
|
https://github.com/skforecast/skforecast/issues/572
|
[
"enhancement"
] |
yarnabrina
| 5
|
donnemartin/data-science-ipython-notebooks
|
matplotlib
| 10
|
Add notebook for AWS CLI setup and common commands
|
closed
|
2015-07-21T11:38:07Z
|
2016-05-18T02:07:52Z
|
https://github.com/donnemartin/data-science-ipython-notebooks/issues/10
|
[
"feature-request"
] |
donnemartin
| 1
|
|
AutoGPTQ/AutoGPTQ
|
nlp
| 6
|
triton implementation
|
Here's an implementation using triton. I think we can provide faster speeds.
https://github.com/qwopqwop200/AutoGPTQ-triton
|
closed
|
2023-04-22T13:32:49Z
|
2023-04-26T06:08:09Z
|
https://github.com/AutoGPTQ/AutoGPTQ/issues/6
|
[] |
qwopqwop200
| 4
|
chatanywhere/GPT_API_free
|
api
| 294
|
Dalle3 calls started returning 403 today
|
**Describe the bug 描述bug**
It was still working during the day today, but suddenly stopped working tonight. There is still plenty of API balance left.
```
{'error': {'message': '<!DOCTYPE html>\n<!--[if lt IE 7]> <html class="no-js ie6 oldie" lang="en-US"> <![endif]-->\n<!--[if IE 7]> <html class="no-js ie7 oldie" lang="en-US"> <![endif]-->\n<!--[if IE 8]> <html class="no-js ie8 oldie" lang="en-US"> <![endif]-->\n<!--[if gt IE 8]><!--> <html class="no-js" lang="en-US"> <!--<![endif]-->\n<head>\n<title>Attention Required! | Cloudflare</title>\n<meta charset="UTF-8" />\n<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />\n<meta http-equiv="X-UA-Compatible" content="IE=Edge" />\n<meta name="robots" content="noindex, nofollow" />\n<meta name="viewport" content="width=device-width,initial-scale=1" />\n<link rel="stylesheet" id="cf_styles-css" href="/cdn-cgi/styles/cf.errors.css" />\n<!--[if lt IE 9]><link rel="stylesheet" id=\'cf_styles-ie-css\' href="/cdn-cgi/styles/cf.errors.ie.css" /><![endif]-->\n<style>body{margin:0;padding:0}</style>\n\n\n<!--[if gte IE 10]><!-->\n<script>\n if (!navigator.cookieEnabled) {\n window.addEventListener(\'DOMContentLoaded\', function () {\n var cookieEl = document.getElementById(\'cookie-alert\');\n cookieEl.style.display = \'block\';\n })\n }\n</script>\n<!--<![endif]-->\n\n\n</head>\n<body>\n <div id="cf-wrapper">\n <div class="cf-alert cf-alert-error cf-cookie-error" id="cookie-alert" data-translate="enable_cookies">Please enable cookies.</div>\n <div id="cf-error-details" class="cf-error-details-wrapper">\n <div class="cf-wrapper cf-header cf-error-overview">\n <h1 data-translate="block_headline">Sorry, you have been blocked</h1>\n <h2 class="cf-subheadline"><span data-translate="unable_to_access">You are unable to access</span> api.openai.com</h2>\n </div><!-- /.header -->\n\n <div class="cf-section cf-highlight">\n <div class="cf-wrapper">\n <div class="cf-screenshot-container cf-screenshot-full">\n \n <span class="cf-no-screenshot error"></span>\n \n </div>\n </div>\n </div><!-- /.captcha-container -->\n\n <div class="cf-section cf-wrapper">\n <div 
class="cf-columns two">\n <div class="cf-column">\n <h2 data-translate="blocked_why_headline">Why have I been blocked?</h2>\n\n <p data-translate="blocked_why_detail">This website is using a security service to protect itself from online attacks. The action you just performed triggered the security solution. There are several actions that could trigger this block including submitting a certain word or phrase, a SQL command or malformed data.</p >\n </div>\n\n <div class="cf-column">\n <h2 data-translate="blocked_resolve_headline">What can I do to resolve this?</h2>\n\n <p data-translate="blocked_resolve_detail">You can email the site owner to let them know you were blocked. Please include what you were doing when this page came up and the Cloudflare Ray ID found at the bottom of this page.</p >\n </div>\n </div>\n </div><!-- /.section -->\n\n <div class="cf-error-footer cf-wrapper w-240 lg:w-full py-10 sm:py-4 sm:px-8 mx-auto text-center sm:text-left border-solid border-0 border-t border-gray-300">\n <p class="text-13">\n <span class="cf-footer-item sm:block sm:mb-1">Cloudflare Ray ID: <strong class="font-semibold">8c4202eb19ad15bc</strong></span>\n <span class="cf-footer-separator sm:hidden">•</span>\n <span id="cf-footer-item-ip" class="cf-footer-item hidden sm:block sm:mb-1">\n Your IP:\n <button type="button" id="cf-footer-ip-reveal" class="cf-footer-ip-reveal-btn">Click to reveal</button>\n <span class="hidden" id="cf-footer-ip">43.153.99.59</span>\n <span class="cf-footer-separator sm:hidden">•</span>\n </span>\n <span class="cf-footer-item sm:block sm:mb-1"><span>Performance & security by</span> <a rel="noopener noreferrer" href=" " id="brand_link" target="_blank">Cloudflare</a ></span>\n \n </p >\n <script>(function(){function d(){var b=a.getElementById("cf-footer-item-ip"),c=a.getElementById("cf-footer-ip-reveal");b&&"classList"in 
b&&(b.classList.remove("hidden"),c.addEventListener("click",function(){c.classList.add("hidden");a.getElementById("cf-footer-ip").classList.remove("hidden")}))}var a=document;document.addEventListener&&a.addEventListener("DOMContentLoaded",d)})();</script>\n</div><!-- /.error-footer -->\n\n\n </div><!-- /#cf-error-details -->\n </div><!-- /#cf-wrapper -->\n\n <script>\n window._cf_translation = {};\n \n \n</script>\n\n</body>\n</html>\n', 'type': 'chatanywhere_error', 'param': None, 'code': '403 FORBIDDEN'}}
```
**To Reproduce 复现方法**
Call dalle3
**Screenshots 截图**
If applicable, add screenshots to help explain your problem.
**Tools or Programming Language 使用的工具或编程语言**
Python, using the openai library
**Additional context 其他内容**
Add any other context about the problem here.
|
closed
|
2024-09-16T15:55:23Z
|
2024-09-16T15:57:59Z
|
https://github.com/chatanywhere/GPT_API_free/issues/294
|
[] |
HenryXiaoYang
| 1
|
oegedijk/explainerdashboard
|
plotly
| 37
|
Bug: RegressionRandomIndexComponent not robust
|
I have just managed to kill the whole dashboard with an error in RegressionRandomIndexComponent: everything works fine without this component, but enabling it leaves the dashboard displaying only "Error loading layout.". The traceback is below.
I fixed it by appending `.astype('float')` to the data that goes into RegressionExplainer.
```
Exception on /_dash-layout [GET]
Traceback (most recent call last):
File "c:\...\lib\site-packages\flask\app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "c:\...\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "c:\...\lib\site-packages\flask\app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "c:\...\lib\site-packages\flask\_compat.py", line 39, in reraise
raise value
File "c:\...\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "c:\...\lib\site-packages\flask\app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "c:\...\lib\site-packages\dash\dash.py", line 531, in serve_layout
json.dumps(layout, cls=plotly.utils.PlotlyJSONEncoder),
File "C:\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "c:\...\lib\site-packages\_plotly_utils\utils.py", line 45, in encode
encoded_o = super(PlotlyJSONEncoder, self).encode(o)
File "C:\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
TypeError: keys must be str, int, float, bool or None, not numpy.int64
```
(this shows multiple times)
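The root cause in the traceback is that the JSON encoder rejects `numpy.int64` dictionary keys. The reporter's workaround can be illustrated like this (a minimal sketch with a made-up DataFrame, not the actual data):

```python
import numpy as np
import pandas as pd

# An int64-typed frame can surface numpy.int64 values as dict keys during
# dash layout serialisation; casting to float before building the
# RegressionExplainer sidesteps the TypeError from the JSON encoder.
X = pd.DataFrame({"feature": [1, 2, 3]}, index=np.arange(3, dtype=np.int64))
X_fixed = X.astype("float")
```

Casting to plain Python `int`/`float` would work equally well; the key point is avoiding numpy scalar types in whatever ends up serialised.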
|
closed
|
2020-12-03T15:54:28Z
|
2020-12-04T08:04:50Z
|
https://github.com/oegedijk/explainerdashboard/issues/37
|
[] |
hkoppen
| 4
|
plotly/dash-core-components
|
dash
| 665
|
Nice to have: label property for dccRadioItems
|
Creating a radio button with:
```
radioButton <- dccRadioItems(
  id = "radiobutton-selector",
  options = list(
    list("label" = "ggplotly", "value" = "ggplotly"),
    list("label" = "plotly", "value" = "plotly")
  ), value = "ggplotly")
```
results in:
<img width="92" alt="Screen Shot 2019-09-30 at 3 05 47 PM" src="https://user-images.githubusercontent.com/20918264/65908630-d2a1e100-e394-11e9-9bac-89e8ac0bc369.png">
However, having a `label` or `title` property could save the user from creating an additional component and aligning it with the radio selector.
```
radioButton <- dccRadioItems(
  id = "radiobutton-selector",
  label = "Plot Type",  # proposed new property
  options = list(
    list("label" = "ggplotly", "value" = "ggplotly"),
    list("label" = "plotly", "value" = "plotly")
  ), value = "ggplotly")
```
desired result:
<img width="97" alt="Screen Shot 2019-09-30 at 3 08 00 PM" src="https://user-images.githubusercontent.com/20918264/65909006-90c56a80-e395-11e9-804b-277f083fb16d.png">
|
open
|
2019-09-30T19:20:05Z
|
2019-10-04T00:05:38Z
|
https://github.com/plotly/dash-core-components/issues/665
|
[
"dash-type-enhancement"
] |
CanerIrfanoglu
| 0
|
encode/uvicorn
|
asyncio
| 1,436
|
uvicorn adding its own server header to response
|
### Discussed in https://github.com/encode/uvicorn/discussions/1435
<div type='discussions-op-text'>
<sup>Originally posted by **udit-pandey** April 1, 2022</sup>
I have added a few headers using the [secure package](https://pypi.org/project/secure/) in my FastAPI application. I wanted to overwrite the server header (default value "uvicorn") with something else. All the headers added by the secure package show up in the responses, except that the server header appears twice: once with the value I set and once with uvicorn's, as shown in the Postman API response below:

I run my application using:
`gunicorn -k uvicorn.workers.UvicornWorker ${APP_MODULE} --bind 0.0.0.0:80`
Why is this header being added again by **uvicorn** even though it already exists?</div>
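The duplicate arises because the header set by middleware travels with the ASGI response, while the server adds its own `server` header at the protocol layer (uvicorn can be told not to, e.g. via `--no-server-header`, assuming a uvicorn version that supports it). A stdlib-only sketch that drives a minimal ASGI app by hand to show which header the application itself emits:

```python
import asyncio

async def app(scope, receive, send):
    # Minimal ASGI app that sets its own "server" header, as the secure
    # package's middleware does.
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"server", b"my-server")],
    })
    await send({"type": "http.response.body", "body": b"ok"})

async def run_once():
    # Drive the app without a real server to inspect the messages it sends.
    sent = []

    async def send(message):
        sent.append(message)

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return sent
```

Only `server: my-server` appears in the app's own messages; the second `server: uvicorn` header is appended later, at the HTTP protocol layer.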
|
closed
|
2022-04-01T05:35:17Z
|
2022-11-01T08:04:05Z
|
https://github.com/encode/uvicorn/issues/1436
|
[
"bug",
"http"
] |
Kludex
| 6
|
tfranzel/drf-spectacular
|
rest-api
| 489
|
AUTHENTICATION_WHITELIST not working
|
**Description**
We are trying to override the DEFAULT_AUTHENTICATION_CLASSES that Swagger UI will use. We have SessionAuthentication and TokenAuthentication set in our Django settings. In our SPECTACULAR_SETTINGS we only want to use TokenAuthentication, so we set it there as a single-item list: `AUTHENTICATION_WHITELIST: ['rest_framework.authentication.TokenAuthentication']`. However, Swagger UI still shows both authentication methods when loaded.
**To Reproduce**
0.17.3


**Expected behavior**
As per the description in the settings docs, I expected only Token Authentication to appear?

|
closed
|
2021-08-28T15:01:22Z
|
2021-08-28T15:14:40Z
|
https://github.com/tfranzel/drf-spectacular/issues/489
|
[] |
megazza
| 1
|
AirtestProject/Airtest
|
automation
| 608
|
On Android I use get_text() to read the text on a button; is there a similar method (like get_text) on iOS?
|
**Describe the bug**
On Android I use get_text() to read the text on a button; is there a similar method on iOS?
On iOS, this line is not executed. Any pointers would be appreciated:
poco("Window").offspring("Table").offspring("已关注").get_text() == "已关注":
The element info is as follows:
type : Button
name : 已关注
visible : True
isEnabled : b'1'
label : b'\xe5\xb7\xb2\xe5\x85\xb3\xe6\xb3\xa8'
identifier : b''
size : [0.16533333333333333, 0.035982008995502246]
pos : [0.8773333333333333, 0.13343328335832083]
zOrders : {'local': 0, 'global': 0}
anchorPoint : [0.5, 0.5]
**Steps to reproduce**
1. Call get_text() to read the button's text (name)
2. The statement is not executed
**Expected behavior**
The button text can be retrieved and compared against the expected name
**Python version:** `python3.5`
**Airtest version:** `1.2.2`
**Device:**
- Model: iphone7
- OS: 12.3.1
|
closed
|
2019-11-14T02:47:13Z
|
2019-11-14T02:51:48Z
|
https://github.com/AirtestProject/Airtest/issues/608
|
[] |
daisy0o
| 0
|
ultralytics/ultralytics
|
pytorch
| 19,353
|
Issue with Incorrect Inference Results in INT8 Model After Converting Custom YOLOv11.pt to TensorRT
|
### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
After converting a custom-trained YOLOv11.pt model to TensorRT in FP16, the inference results are correct. However, when converting the same model to TensorRT in INT8, the inference results are incorrect, with most of the scores being zero. When using the official YOLOv11s.pt model, both INT8 and FP16 TensorRT conversions produce correct detection results. The environment and calibration dataset for all conversions are the same. What could be causing this issue?
### Additional
_No response_
|
open
|
2025-02-21T07:32:12Z
|
2025-02-26T05:04:02Z
|
https://github.com/ultralytics/ultralytics/issues/19353
|
[
"question",
"detect",
"exports"
] |
xiaoche-24
| 10
|
FlareSolverr/FlareSolverr
|
api
| 837
|
Timeout when resolving challenge
|
### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.2.2
- Last working FlareSolverr version: 3.2.2
- Operating system: Fedora Server
- Are you using Docker: yes
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- If using captcha solver, which one:
- URL to test this issue: private, no acces outside lan
```
### Description
This morning, when I checked Radarr and Sonarr, I found an error in Jackett about YGG (protected by Cloudflare). Jackett's log asked me to check the FlareSolverr logs, where I see `The Cloudflare 'Verify you are human' button not found on the page` and a 500 Internal Error.
### Logged Error Messages
```text
2023-07-31 09:25:22 INFO ReqId 140094368220928 Incoming request => POST /v1 body: {'maxTimeout': 55000, 'cmd': 'request.get', 'url': 'https://www3.yggtorrent.wtf/engine/search?do=search&order=desc&sort=publish_date&category=all'}
2023-07-31 09:25:22 DEBUG ReqId 140094368220928 Launching web browser...
2023-07-31 09:25:23 DEBUG ReqId 140094368220928 Started executable: `/app/chromedriver` in a child process with pid: 7322
2023-07-31 09:25:23 DEBUG ReqId 140094368220928 New instance of webdriver has been created to perform the request
2023-07-31 09:25:23 DEBUG ReqId 140094341994240 Navigating to... https://www3.yggtorrent.wtf/engine/search?do=search&order=desc&sort=publish_date&category=all
2023-07-31 09:25:24 DEBUG ReqId 140094341994240 Response HTML:
****
2023-07-31 09:25:24 INFO ReqId 140094341994240 Challenge detected. Title found: Just a moment...
2023-07-31 09:25:24 DEBUG ReqId 140094341994240 Waiting for title (attempt 1): Just a moment...
2023-07-31 09:25:34 DEBUG ReqId 140094341994240 Timeout waiting for selector
2023-07-31 09:25:34 DEBUG ReqId 140094341994240 Try to find the Cloudflare verify checkbox
2023-07-31 09:25:35 DEBUG ReqId 140094341994240 Cloudflare verify checkbox found and clicked
2023-07-31 09:25:35 DEBUG ReqId 140094341994240 Try to find the Cloudflare 'Verify you are human' button
2023-07-31 09:25:35 DEBUG ReqId 140094341994240 The Cloudflare 'Verify you are human' button not found on the page
2023-07-31 09:25:37 DEBUG ReqId 140094341994240 Waiting for title (attempt 2): Just a moment...
2023-07-31 09:25:47 DEBUG ReqId 140094341994240 Timeout waiting for selector
2023-07-31 09:25:47 DEBUG ReqId 140094341994240 Try to find the Cloudflare verify checkbox
2023-07-31 09:25:47 DEBUG ReqId 140094341994240 Cloudflare verify checkbox found and clicked
2023-07-31 09:25:47 DEBUG ReqId 140094341994240 Try to find the Cloudflare 'Verify you are human' button
2023-07-31 09:25:48 DEBUG ReqId 140094341994240 The Cloudflare 'Verify you are human' button not found on the page
2023-07-31 09:25:50 DEBUG ReqId 140094341994240 Waiting for title (attempt 3): Just a moment...
2023-07-31 09:26:00 DEBUG ReqId 140094341994240 Timeout waiting for selector
2023-07-31 09:26:00 DEBUG ReqId 140094341994240 Try to find the Cloudflare verify checkbox
2023-07-31 09:26:01 DEBUG ReqId 140094341994240 Cloudflare verify checkbox found and clicked
2023-07-31 09:26:01 DEBUG ReqId 140094341994240 Try to find the Cloudflare 'Verify you are human' button
2023-07-31 09:26:01 DEBUG ReqId 140094341994240 The Cloudflare 'Verify you are human' button not found on the page
2023-07-31 09:26:03 DEBUG ReqId 140094341994240 Waiting for title (attempt 4): Just a moment...
2023-07-31 09:26:13 DEBUG ReqId 140094341994240 Timeout waiting for selector
2023-07-31 09:26:13 DEBUG ReqId 140094341994240 Try to find the Cloudflare verify checkbox
2023-07-31 09:26:14 DEBUG ReqId 140094341994240 Cloudflare verify checkbox found and clicked
2023-07-31 09:26:14 DEBUG ReqId 140094341994240 Try to find the Cloudflare 'Verify you are human' button
2023-07-31 09:26:14 DEBUG ReqId 140094341994240 The Cloudflare 'Verify you are human' button not found on the page
2023-07-31 09:26:16 DEBUG ReqId 140094341994240 Waiting for title (attempt 5): Just a moment...
2023-07-31 09:26:19 DEBUG ReqId 140094368220928 A used instance of webdriver has been destroyed
2023-07-31 09:26:19 ERROR ReqId 140094368220928 Error: Error solving the challenge. Timeout after 55.0 seconds.
2023-07-31 09:26:19 DEBUG ReqId 140094368220928 Response => POST /v1 body: {'status': 'error', 'message': 'Error: Error solving the challenge. Timeout after 55.0 seconds.', 'startTimestamp': 1690788322982, 'endTimestamp': 1690788379023, 'version': '3.2.2'}
2023-07-31 09:26:19 INFO ReqId 140094368220928 Response in 56.041 s
2023-07-31 09:26:19 INFO ReqId 140094368220928 172.18.0.3 POST http://flaresolverr:8191/v1 500 Internal Server Error
```
### Screenshots
_No response_
|
closed
|
2023-07-31T07:51:16Z
|
2023-07-31T07:53:44Z
|
https://github.com/FlareSolverr/FlareSolverr/issues/837
|
[] |
rricordeau
| 0
|
sktime/sktime
|
scikit-learn
| 7,282
|
[DOC] Time Series Segmentation with sktime and ClaSP Notebook Example Contains Bug
|
#### Describe the issue linked to the documentation
The ClaSP notebook example contains a bug:
https://www.sktime.net/en/stable/examples/annotation/segmentation_with_clasp.html
The `fmt` parameter is no longer present in the API. It appears to have been replaced with `dense_to_sparse` and `sparse_to_dense` methods.
This is a minor issue, but it's the only annotation example, so I thought I would fix it. (immediate pull request to follow).
<!--
Tell us about the confusion introduced in the documentation.
-->
#### Suggest a potential alternative/fix
The fix is to remove the 'fmt' attribute from the `ClaSPSegmentation` call and then change the Output Format section.
<!--
Tell us how we could improve the documentation in this regard.
-->
|
closed
|
2024-10-16T23:45:05Z
|
2024-10-17T17:55:35Z
|
https://github.com/sktime/sktime/issues/7282
|
[
"documentation"
] |
RobotPsychologist
| 0
|
huggingface/transformers
|
tensorflow
| 36,567
|
torch_dtype is actually used now?
|
### System Info
different transformers versions. see description
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Previously (v4.46.3; I didn't check all versions), `torch_dtype` in the config was ignored, meaning that model weights would get loaded in fp32 by default (the correct behavior for training). On the latest transformers version (v4.49.0), it seems it is now used, so the weights get loaded with whatever dtype is in the checkpoint. Was this change intentional? I recall seeing somewhere in the code that you weren't going to actually use torch_dtype until v5, and I didn't see anything in the release notes at a glance, although maybe I missed it.
```
In [1]: import transformers
In [2]: llama1bcfg = transformers.AutoConfig.from_pretrained('meta-llama/Llama-3.2-1B-Instruct')
In [3]: llama1b = transformers.AutoModelForCausalLM.from_config(llama1bcfg)
In [4]: next(llama1b.parameters()).dtype
Out[4]: torch.bfloat16
```
### Expected behavior
Not actually sure, would like to confirm what you expect now.
|
open
|
2025-03-05T18:38:34Z
|
2025-03-11T00:51:17Z
|
https://github.com/huggingface/transformers/issues/36567
|
[
"bug"
] |
dakinggg
| 3
|
AirtestProject/Airtest
|
automation
| 433
|
The filepath in log.txt is a hard-coded absolute path. Could it be made relative to the main program?
|
The filepath entries in log.txt are hard-coded absolute paths.
The report.html generated from log.txt loads all of its file resources through these absolute paths.
As soon as I need to package the report for someone else or move it (for example onto an nginx server), the file resources can no longer be found and the report fails to open.
Below is the log.txt content. When I package the report and send it to someone, opening it still tries to load resources from the filepath shown below.
{"data": {"call_args": {"screen": "array([[[202, 167, 141],\n [200, 164, 140],\n [202, 160, 141],\n ...,\n [207, 167, 139],\n [208, 168, 140],\n [209, 169, 141]],\n\n [[203, 166, 140],\n [200, 162, 138],\n [201, 159, 140],\n ...,\n [209, 169, 141],\n [209, 169, 141],\n [209, 169, 141]],\n\n [[209, 169, 144],\n [207, 165, 142],\n [203, 160, 141],\n ...,\n [212, 172, 144],\n [210, 170, 142],\n [208, 168, 140]],\n\n ...,\n\n [[ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0],\n ...,\n [ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0]],\n\n [[ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0],\n ...,\n [ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0]],\n\n [[ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0],\n ...,\n [ 0, 0, 0],\n [ 0, 0, 0],\n [ 0, 0, 0]]], dtype=uint8)", "self": {"filename": "/home/gitlab-runner/builds/5VAdgMWr/0/FollowmeTest/appuitest/item/picture/android/CN/Home/pendingAccProcess.jpg", "_filepath": "/home/gitlab-runner/builds/5VAdgMWr/0/FollowmeTest/appuitest/item/picture/android/CN/Home/pendingAccProcess.jpg", "threshold": 0.7, "target_pos": 5, "record_pos": null, "resolution": [], "rgb": false, "__class__": "Template"}}, "start_time": 1561023054.4419475, "end_time": 1561023055.4388847, "name": "_cv_match", "ret": null}, "depth": 3, "tag": "function", "time": "2019-06-20 17:30:55"}
|
closed
|
2019-06-20T11:59:59Z
|
2019-06-25T08:12:42Z
|
https://github.com/AirtestProject/Airtest/issues/433
|
[] |
cccthon
| 12
|
wkentaro/labelme
|
deep-learning
| 1,025
|
How to describe the label
|
I want to add a description to the label
|
closed
|
2022-05-25T11:58:18Z
|
2022-06-25T04:17:41Z
|
https://github.com/wkentaro/labelme/issues/1025
|
[] |
lihangyang
| 0
|
aws/aws-sdk-pandas
|
pandas
| 2,874
|
Athena read_sql_query with pyarrow backend trims time in timestamp
|
### Describe the bug
Running this query:
```
wr.athena.read_sql_query("SELECT TIMESTAMP '2024-06-24 9:30:51'", dtype_backend='pyarrow')
```
yields `2024-06-24` instead of `2024-06-24 09:30:51`. It seems like `timestamp` from Athena is mapped to `date64[pyarrow]` instead of `timestamp[ns][pyarrow]`
### How to Reproduce
```
wr.athena.read_sql_query("SELECT TIMESTAMP '2024-06-24 9:30:51'", dtype_backend='pyarrow')
```
### Expected behavior
The result should be similar to running with numpy backend:
```
wr.athena.read_sql_query("SELECT TIMESTAMP '2024-06-24 9:30:51'")
```
which correctly gives back `2024-06-24 09:30:51`
### Your project
_No response_
### Screenshots
_No response_
### OS
Linux
### Python version
3.12
### AWS SDK for pandas version
3.8.0
### Additional context
_No response_
|
closed
|
2024-06-26T04:02:00Z
|
2024-06-26T23:01:11Z
|
https://github.com/aws/aws-sdk-pandas/issues/2874
|
[
"bug"
] |
Aleksei-Poliakov
| 0
|
piccolo-orm/piccolo
|
fastapi
| 749
|
auto-generated fastapi 0.89.0 error
|
Heads up:
It looks like the [fastapi 0.89.0 release](https://github.com/tiangolo/fastapi/releases/tag/0.89.0) breaks the asgi code generated by piccolo ( `piccolo asgi new` )
```
(venv) $ python main.py
INFO: Will watch for changes in these directories: ['/...']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [47337] using StatReload
...
File "/.../app.py", line 21, in <module>
create_admin(
File "/.../venv/lib/python3.9/site-packages/piccolo_admin/endpoints.py", line 1085, in create_admin
return AdminRouter(
File "/.../venv/lib/python3.9/site-packages/piccolo_admin/endpoints.py", line 523, in __init__
private_app.add_api_route(
File "/.../venv/lib/python3.9/site-packages/fastapi/applications.py", line 304, in add_api_route
self.router.add_api_route(
File "/.../venv/lib/python3.9/site-packages/fastapi/routing.py", line 572, in add_api_route
route = route_class(
File "/.../venv/lib/python3.9/site-packages/fastapi/routing.py", line 400, in __init__
self.response_field = create_response_field(
File "/.../venv/lib/python3.9/site-packages/fastapi/utils.py", line 90, in create_response_field
raise fastapi.exceptions.FastAPIError(
fastapi.exceptions.FastAPIError: Invalid args for response field! Hint: check that <class 'starlette.responses.JSONResponse'> is a valid pydantic field type
```
Down-grading to `0.88.0` fixed the error.
|
closed
|
2023-01-07T20:03:48Z
|
2023-01-08T11:56:59Z
|
https://github.com/piccolo-orm/piccolo/issues/749
|
[] |
mwmeyer
| 2
|
python-visualization/folium
|
data-visualization
| 1,500
|
Click choropleth to reveal Altair chart
|
Hello folks,
I see that I can click a marker to get a chart to pop up; however, is there a way to click within a choropleth area to show a chart as a pop up?
Any nudge would be much appreciated!
|
closed
|
2021-08-09T19:30:50Z
|
2021-08-10T20:09:21Z
|
https://github.com/python-visualization/folium/issues/1500
|
[] |
Alcampopiano
| 1
|
litl/backoff
|
asyncio
| 205
|
Expose typing hints as they are part of the API
|
Just recently upgraded to 2.2.1 from 1.11 and pyright gave me lots of type hint errors.
```
/home/builder/archivist/confirmer.py
/home/builder/archivist/confirmer.py:84:15 - error: Argument of type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to parameter "on_giveup" of type "_Handler | Iterable[_Handler]" in function "on_predicate"
Type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to type "_Handler | Iterable[_Handler]"
Type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to type "_Handler"
Parameter 1: type "Details" cannot be assigned to type "dict[str, Any]"
"Details" is incompatible with "dict[str, Any]"
"function" is incompatible with protocol "Iterable[_Handler]"
"__iter__" is not present (reportGeneralTypeIssues)
/home/builder/archivist/confirmer.py:122:15 - error: Argument of type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to parameter "on_giveup" of type "_Handler | Iterable[_Handler]" in function "on_predicate"
Type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to type "_Handler | Iterable[_Handler]"
Type "(details: dict[str, Any]) -> NoReturn" cannot be assigned to type "_Handler"
Parameter 1: type "Details" cannot be assigned to type "dict[str, Any]"
"Details" is incompatible with "dict[str, Any]"
"function" is incompatible with protocol "Iterable[_Handler]"
"__iter__" is not present (reportGeneralTypeIssues)
```
In order to fix this I had to import from a private module, viz:
from backoff._typing import Details
Surely type hints are part of the API and should not be private?
|
open
|
2023-07-19T13:19:34Z
|
2023-07-19T13:20:05Z
|
https://github.com/litl/backoff/issues/205
|
[] |
eccles
| 0
|
cvat-ai/cvat
|
computer-vision
| 8,381
|
Static dimensions per track over an entire video
|
### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
Hi,
We are using CVAT a lot for labeling, one of your key USPs for us is the good interpolation of bounding boxes and support for rotated bounding boxes.
In our data domain, we label ships, bridges, buoys, etc... in topview videos, where the size of these objects is static. Sometimes these objects aren't clearly visible in early parts of the video, and we would like to change the dimensions of a track. Currently, we have to step through all the keyframes per object and manually adapt the size of the bounding box. Mostly, we just discard the bounding boxes and start over because that is faster.
<img width="453" alt="Screenshot 2024-08-30 at 07 58 25" src="https://github.com/user-attachments/assets/a11b4147-c0af-4ba3-aabd-5ec02375e688">
### Describe the solution you'd like
It would save us A LOT of time, if we had the option to change the dimensions of an object over the entire track in a video at once. For example, be able to right-click a bounding box and select "propagate dimensions to track" or similar.
Thank you for considering this request and keep up the good work here!
|
open
|
2024-08-30T13:59:52Z
|
2024-08-30T14:01:22Z
|
https://github.com/cvat-ai/cvat/issues/8381
|
[
"enhancement"
] |
timresink
| 0
|
ydataai/ydata-profiling
|
data-science
| 1,520
|
Bug Report:cannot import name 'Buffer' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)
|
### Current Behaviour
I get this error:
"cannot import name 'Buffer' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)"
when I try importing from ydata_profiling import ProfileReport on google colab.
I wondered if you could help me.
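`Buffer` only exists in `typing_extensions` >= 4.6.0, and Colab tends to ship an older copy. Assuming that is the root cause here (it is a commonly reported one), upgrading the package in the notebook usually resolves it:

```shell
pip install --upgrade "typing_extensions>=4.6"
python -c "from typing_extensions import Buffer; print('ok')"
```

After upgrading, restart the Colab runtime so the new version is picked up.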
### Expected Behaviour
installs normally
### Data Description
-
### Code that reproduces the bug
```Python
from ydata_profiling import ProfileReport
```
### pandas-profiling version
v4.6.3
### Dependencies
```Text
python==3.10.12
pandas==1.5.3
numpy==1.23.5
```
### OS
google colab
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html).
|
open
|
2023-12-18T13:40:30Z
|
2024-02-09T10:27:17Z
|
https://github.com/ydataai/ydata-profiling/issues/1520
|
[
"needs-triage"
] |
abebe0210
| 6
|
jessevig/bertviz
|
nlp
| 20
|
Hovering issue
|
Hello, thanks for the work.
Short bug report.
In safari, when you are hovering over the visualisation in google colab, the window in of the visualisation is scrolling down automatically, making it impossible to work with model_view and neuron_view. Works in chrome.
Thanks again
|
closed
|
2019-09-19T08:03:05Z
|
2020-03-28T13:15:03Z
|
https://github.com/jessevig/bertviz/issues/20
|
[] |
Keisn1
| 3
|
kennethreitz/responder
|
graphql
| 21
|
Background task considerations.
|
There are likely some considerations around backlogs and process shutdown to think about here.
In Starlette, background tasks run within the request/response task. The response has been sent, and the server can continue to service other requests on the same connection, but it is also implicitly able to determine how many background tasks are running (response sent, but ASGI call not yet complete), and is able to effect graceful shutdown (for standard process restarts. Close connections, wait until background tasks are complete with a suitable timeout ceiling, then shutdown)
A global queue attached to the app is valid too, but ideally you’d want to 1. Ensure that the ASGI lifecycle shutdown waits for the queue to empty. 2. Consider how you want to deal with task backlogs.
Actions for now could be:
1. Attach background tasks to responses instead, ala Starlette. Everything’ll be handled for you then.
2. Don’t sweat the finer details for now, since it’s all alpha. We can come back to it.
|
closed
|
2018-10-13T06:51:31Z
|
2018-10-30T11:52:44Z
|
https://github.com/kennethreitz/responder/issues/21
|
[
"feature"
] |
tomchristie
| 1
|
supabase/supabase-py
|
fastapi
| 127
|
how to use .range() using supabase-py
|
please tell me how to use .range() using supabase-py
|
closed
|
2022-01-18T02:35:55Z
|
2023-10-07T12:57:15Z
|
https://github.com/supabase/supabase-py/issues/127
|
[] |
alif-arrizqy
| 1
|
activeloopai/deeplake
|
computer-vision
| 2,150
|
[FEATURE]
|
## 🚨🚨 Feature Request
- [ ] Related to an existing [Issue](../issues)
- [ ] A new implementation (Improvement, Extension)
### Is your feature request related to a problem?
A clear and concise description of what the problem is. Ex. I have an issue when [...]
### If your feature will improve `HUB`
A clear and concise description of how it will help `HUB`. Please prefer references, if possible [...]
### Description of the possible solution
A clear and concise description of what you want to happen. Add any considered drawbacks.
### An alternative solution to the problem can look like
A clear and concise description of any alternative solutions or features you've considered.
**Teachability, Documentation, Adoption, Migration Strategy**
If you can, explain how users will be able to use this and possibly write out a version the docs.
Maybe a screenshot or design?
|
closed
|
2023-01-30T20:17:42Z
|
2024-09-19T08:45:42Z
|
https://github.com/activeloopai/deeplake/issues/2150
|
[
"enhancement"
] |
AtelLex
| 2
|
xinntao/Real-ESRGAN
|
pytorch
| 350
|
What happened to RaGAN?
|
ESRGAN was trained with a RaGAN but Real-ESRGAN was trained with a vanilla GAN. As far as I can tell there was no mention in the Real-ESRGAN paper as to why this decision was made, could you shed some light on the subject?
|
open
|
2022-06-01T20:54:06Z
|
2022-06-01T20:54:06Z
|
https://github.com/xinntao/Real-ESRGAN/issues/350
|
[] |
siegelaaron94
| 0
|
mitmproxy/pdoc
|
api
| 662
|
[Question] Is there any way to prevent pdoc from documenting method objects?
|
#### Problem Description
In my application, I use `typer` as a CLI framework (https://typer.tiangolo.com/). Part of the framework uses decorators with annotations to manage flags, help text, etc.
In the example below you can see that the `@callback` decorator uses a class `CommonOptions` which provide a reusable set of options. When I run `pdoc` the `typer.Option()` method (`typer.models.OptionInfo`) is found to be an in-memory object, and as a result the `id` value changes on every run of `pdoc`.
This is problematic as the documentation related to the files which use this decorator are never "correct" (the `id` values are always different). I would like to use `pdoc` in a pre-commit hook, but this behavior makes that infeasible.
My question is: Is there any way to prevent this from happening?
There are a few possible "solutions" that would be acceptable if possible, for example:
1. Is my code written "incorrectly" with relation to how `pdoc` works that isn't obvious to me?
2. Is there a flag that I can add to `pdoc` to write a static reference to the in-memory object? (I don't believe so)
3. Is there a decorator that will skip "following" the object, and just leave a static reference to the method? I have tried `@private` in the docstring, however I would really prefer to have the methods documented. Also, since `typer` uses the docstring for help text, this will show up in the help text and cause confusion.
4. Is there a way to tell `pdoc` to ignore / exclude an entire directory tree from documentation? So I don't have to add `@private` to all of the docstrings individually.
5. Is there some change that could implement one or more of the above solutions within `pdoc`?
Including example code below in case there's something I can do to fix my code. I can try to produce a simpler example for reproducibility, if requested.
code:
```python
@callback(app, CommonOptions)
def callback(ctx: typer.Context):
"""
Callback help text
"""
if ctx.invoked_subcommand is None:
typer.help(ctx)
```
documentation:
```python
@callback(app, CommonOptions)
def callback(
ctx: typer.models.Context,
*,
log_level: Annotated[Optional[str], <typer.models.OptionInfo object at 0x7ff4362c79b0>] = None,
version: Annotated[Optional[bool], <typer.models.OptionInfo object at 0x7ff4362c4a10>] = False
):
```
#### Steps to reproduce the behavior:
1. See example code below.
#### System Information
```shell
❯ pdoc --version
pdoc: 14.3.0
Python: 3.12.1
Platform: Linux-6.2.0-1017-lowlatency-x86_64-with-glibc2.35
```
#### Example source
`CommonOptions` source:
```python
"""
source: https://github.com/tiangolo/typer/issues/153#issuecomment-1771421969
"""
from dataclasses import dataclass, field
from typing import Annotated, Optional
import click
import typer
from lbctl.utils.helpers.version import Version, VersionCheckErrors
@dataclass
class CommonOptions:
"""
Dataclass defining CLI options used by all commands.
@private - hide from pdoc output due to some dynamic objects
"""
instance = None
ATTRNAME: str = field(default="common_params", metadata={"ignore": True})
def __post_init__(self):
CommonOptions.instance = self
@classmethod
def from_context(cls, ctx: typer.Context) -> "CommonOptions":
if (common_params_dict := getattr(ctx, "common_params", None)) is None:
raise ValueError("Context missing common_params")
return cls(**common_params_dict)
def callback_log_level(cls, ctx: typer.Context, value: str):
"""Callback for log level."""
if value:
from lbctl.utils.config import config
config.configure_logger(console_log_level=value)
def callback_version(cls, ctx: typer.Context, value: bool):
"""Callback for version."""
if value:
try:
ver = Version()
ver.version(show_check=True, suggest_update=True)
except (KeyboardInterrupt, click.exceptions.Abort):
raise VersionCheckErrors.Aborted
raise VersionCheckErrors.Checked
log_level: Annotated[
Optional[str],
typer.Option(
"--log-level",
"-L",
help="Set log level for current command",
callback=callback_log_level,
),
] = None
version: Annotated[
Optional[bool],
typer.Option("--version", "-V", help="Show version and exit", callback=callback_version),
] = False
```
decorator source
```python
"""
source: https://github.com/tiangolo/typer/issues/153#issuecomment-1771421969
"""
from dataclasses import fields
from functools import wraps
from inspect import Parameter, signature
from typing import TypeVar
import typer
from lbctl.common.options import CommonOptions
OptionsType = TypeVar("OptionsType", bound="CommonOptions")
def callback(typer_app: typer.Typer, options_type: OptionsType, *args, **kwargs):
def decorator(__f):
@wraps(__f)
def wrapper(*__args, **__kwargs):
if len(__args) > 0:
raise RuntimeError("Positional arguments are not supported")
__kwargs = _patch_wrapper_kwargs(options_type, **__kwargs)
return __f(*__args, **__kwargs)
_patch_command_sig(wrapper, options_type)
return typer_app.callback(*args, **kwargs)(wrapper)
return decorator
def command(typer_app, options_type, *args, **kwargs):
def decorator(__f):
@wraps(__f)
def wrapper(*__args, **__kwargs):
if len(__args) > 0:
raise RuntimeError("Positional arguments are not supported")
__kwargs = _patch_wrapper_kwargs(options_type, **__kwargs)
return __f(*__args, **__kwargs)
_patch_command_sig(wrapper, options_type)
return typer_app.command(*args, **kwargs)(wrapper)
return decorator
def _patch_wrapper_kwargs(options_type, **kwargs):
if (ctx := kwargs.get("ctx")) is None:
raise RuntimeError("Context should be provided")
common_opts_params: dict = {}
if options_type.instance is not None:
common_opts_params.update(options_type.instance.__dict__)
for field in fields(options_type):
if field.metadata.get("ignore", False):
continue
value = kwargs.pop(field.name)
if value == field.default:
continue
common_opts_params[field.name] = value
options_type(**common_opts_params)
setattr(ctx, options_type.ATTRNAME, common_opts_params)
return {"ctx": ctx, **kwargs}
def _patch_command_sig(__w, options_type) -> None:
sig = signature(__w)
new_parameters = sig.parameters.copy()
options_type_fields = fields(options_type)
for field in options_type_fields:
if field.metadata.get("ignore", False):
continue
new_parameters[field.name] = Parameter(
name=field.name,
kind=Parameter.KEYWORD_ONLY,
default=field.default,
annotation=field.type,
)
for kwarg in sig.parameters.values():
if kwarg.kind == Parameter.KEYWORD_ONLY and kwarg.name != "ctx":
if kwarg.name not in new_parameters:
new_parameters[kwarg.name] = kwarg.replace(default=kwarg.default)
new_sig = sig.replace(parameters=tuple(new_parameters.values()))
setattr(__w, "__signature__", new_sig)
```
|
closed
|
2024-01-16T19:36:32Z
|
2024-01-18T00:07:23Z
|
https://github.com/mitmproxy/pdoc/issues/662
|
[
"bug"
] |
lhriley
| 5
|
explosion/spaCy
|
data-science
| 12,333
|
Docker containers on spaCy website are not working
|
<!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
## How to reproduce the behaviour
1. Go to "https://spacy.io/usage/rule-based-matching"
2. Go to "Editable Code" block
3. Select "Run" button
Returns: "Connecting failed. Please reload and try again."
<!-- Include a code example or the steps that led to the problem. Please try to be as specific as possible. -->
## Your Environment
<!-- Include details of your environment. You can also type `python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System:
* Python Version Used:
* spaCy Version Used:
* Environment Information:
|
closed
|
2023-02-25T12:43:51Z
|
2023-05-26T00:02:13Z
|
https://github.com/explosion/spaCy/issues/12333
|
[
"docs"
] |
andyjessen
| 10
|
nonebot/nonebot2
|
fastapi
| 3,044
|
Plugin: nonebot_plugin_mai_arcade
|
### PyPI project name
nonebot-plugin-mai-arcade
### Plugin import package name
nonebot_plugin_mai_arcade
### Tags
[{"label":"maimai","color":"#ea5252"},{"label":"arcade","color":"#ea5252"}]
### Plugin configuration
No response
|
closed
|
2024-10-20T06:53:52Z
|
2024-10-23T06:17:45Z
|
https://github.com/nonebot/nonebot2/issues/3044
|
[
"Plugin"
] |
YuuzukiRin
| 19
|
ludwig-ai/ludwig
|
computer-vision
| 3,850
|
Getting `ValueError: Hyperopt Section not present in config` while loading hyperopt from YAML config
|
**Description**
The code in https://github.com/ludwig-ai/ludwig/blob/7e34450188f1265e6cd9cbda600dc0a605627099/ludwig/hyperopt/run.py#L208 seems to have a bug: it checks whether the HYPEROPT key is contained in the variable `config`, which is actually a string holding the path to the config file. I believe the original intention was to compare it against `config_dict`.
**To Reproduce**
```
#hyperopt.py
from ludwig.hyperopt.run import hyperopt
import pandas
df = pandas.read_csv('./rotten_tomatoes.csv')
results = hyperopt(config='./rotten_tomatoes.yaml', dataset=df)
```
Running the above results in the following error:
The rotten_tomatoes.csv and rotten_tomatoes.yaml files are as per the tutorial here https://ludwig.ai/latest/getting_started/hyperopt/
```
$ python hyperopt.py
Traceback (most recent call last):
File "/Users/mohan.krishnan/Workspace/autotrain/hyperopt.py", line 12, in <module>
results = hyperopt(config='./rotten_tomatoes.yaml', dataset=df)
File "/Users/mohan.krishnan/Workspace/autotrain/env/lib/python3.10/site-packages/ludwig/hyperopt/run.py", line 209, in hyperopt
raise ValueError("Hyperopt Section not present in config")
ValueError: Hyperopt Section not present in config
```
**Expected behavior**
The config is correctly parsed without the exception being thrown
**Environment:**
- OS: Mac OS
- Version 13.6.3
- Python version : 3.10
|
closed
|
2023-12-26T03:37:39Z
|
2024-01-08T17:42:47Z
|
https://github.com/ludwig-ai/ludwig/issues/3850
|
[] |
mohangk
| 1
|
CorentinJ/Real-Time-Voice-Cloning
|
tensorflow
| 566
|
MegaUpload model link not working
|
The MegaUpload link to the models is no longer working. maybe replace it with a new [mega.nz](url) link?
|
closed
|
2020-10-19T08:57:59Z
|
2020-10-24T17:34:48Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/566
|
[] |
ranshaa05
| 3
|
CorentinJ/Real-Time-Voice-Cloning
|
python
| 884
|
Using a different speaker encoder
|
Hello, I really appreciate the work on display here. I was just wondering if I could use a different speaker encoder. If someone used a different encoder, could you explain the difficulties of replacing the encoder and how the results were different from the speaker encoder already in use?
|
closed
|
2021-11-02T13:01:41Z
|
2021-11-04T22:21:37Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/884
|
[] |
AhmedHashish123
| 7
|
DistrictDataLabs/yellowbrick
|
matplotlib
| 1,064
|
Custom axes titles for learning curve subplots
|
Please add options for custom x and y axis labels when plotting subplots for learning curves.
At present I'm unable to add them. The same goes for xticks and yticks.
Xticks: as you can see in my implementation, there are no fixed xticks for each subplot. Please make an option to add custom xticks.

|
closed
|
2020-05-13T05:49:04Z
|
2020-05-13T13:22:03Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/1064
|
[
"invalid"
] |
VinodKumar9576
| 2
|
twopirllc/pandas-ta
|
pandas
| 313
|
TTM Squeeze PRO indicator
|
First of all, I use squeeze indicator (ttm squeeze) and I love it. But I was wondering if someone can create john carter's new squeeze indicator called "ttm squeeze pro".
Here are some links:
https://usethinkscript.com/threads/john-carters-squeeze-pro-indicator-for-thinkorswim-free.4021/
https://www.tradingview.com/script/TAAt6eRX-Squeeze-PRO-Indicator-Makit0/
|
closed
|
2021-06-19T22:19:55Z
|
2021-06-27T18:21:03Z
|
https://github.com/twopirllc/pandas-ta/issues/313
|
[
"enhancement",
"help wanted",
"good first issue"
] |
zlpatel
| 2
|
supabase/supabase-py
|
flask
| 517
|
pydantic error on importing supabase
|
**Describe the bug**
If I import supabase as `from supabase import create_client` it leads to an import error for field_validator from pydantic.
**To Reproduce**
Steps to reproduce the behavior:
1. Install supabase using conda.
2. Import supabase.
**Expected behavior**
Import with no errors.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: linux
- Version 1.0.3
|
closed
|
2023-08-08T10:43:24Z
|
2023-09-23T20:27:59Z
|
https://github.com/supabase/supabase-py/issues/517
|
[] |
Saatvik-droid
| 8
|
sczhou/CodeFormer
|
pytorch
| 91
|
On Installation process, "segmentation fault (core dumped)"
|
Hi, thanks for a great work!
I am trying to git clone the project and try inference using the source code.
However, when I try this command:
"python basicsr/setup.py develop"
it keeps making "segmentation fault (core dumped)" error.
My environment is Ubuntu 18.04.6
When I tried the process in the colab, it does work.
Can you check if everything is correct on your installation process or description?
Were there any changes in "basicsr/setup.py" file?
Thank you
|
open
|
2022-12-31T10:45:38Z
|
2023-01-17T14:36:39Z
|
https://github.com/sczhou/CodeFormer/issues/91
|
[] |
HyeonHo99
| 1
|
minimaxir/textgenrnn
|
tensorflow
| 241
|
Tokenizing Dataset Fails with newline or index error
|
When trying to tokenize a dataset, it fails with either the error
`Error: new-line character seen in unquoted field - do you need to open the file in universal-newline mode?`
or one about list index out of range.
Running the newest version of the Colab notebook and this happens with both GPT-2 and GPT-Neo.
Please let me know what info is needed or what I can try to fix this.
Thanks!
|
open
|
2021-11-09T18:21:51Z
|
2021-11-09T18:21:51Z
|
https://github.com/minimaxir/textgenrnn/issues/241
|
[] |
leetfin
| 0
|
DistrictDataLabs/yellowbrick
|
matplotlib
| 1,105
|
Visual ATM Model Report
|
**Describe the solution you'd like**
The [Auto Tune Models (ATM)](https://github.com/HDI-Project/ATM) Python library is an easy-to-use classification model solver that searches for the best model given a CSV dataset containing features and a target. During its run it creates a SQLite database that stores results from its auto-tuning and can be accessed using a results object with summary data, scores, and the best model found. ATM also has a CLI and a REST API.
To take advantage of ATM, a Yellowbrick `contrib` module (e.g. `yellowbrick.contrib.atm`) should implement visual diagnostics functionality for the ATM results object, allowing the user to explore classification visualizers for the best classifier, or compare classification visualizers across multiple models. Note that ATM may be an excellent start to getting multi-model report functionality from Yellowbrick, since ATM wraps a suite of trained and cross-validated models.
Open questions include:
- Should Yellowbrick directly access ATM's database?
- What data does ATM provide that could enable ATM-specific visualizations?
- Can Yellowbrick be used with the REST API?
A successful conclusion to this issue is the creation of an `yellowbrick.contrib.atm` package with the following functionality:
- [ ] A wrapper for `atm.Model` and/or `atm.Datarun` that enables Yellowbrick classifier visualizers
- [ ] Documentation/blog post about how to integrate Yellowbrick and ATM
- [ ] Follow on issues for ATM-specific visualizers and functionality
**Is your feature request related to a problem? Please describe.**
This issue is related to #397 that described using Yellowbrick with other ML libraries. Since this discussion, Yellowbrick has incorporated contrib support for 3rd party libraries using wrappers and other methods (see #1103) and has been used successfully with [other projects like Keras](https://towardsdatascience.com/evaluating-keras-neural-network-performance-using-yellowbrick-visualizations-ad65543f3174). The ATM library, however, will not work with the wrapper since it uses sklearn under the hood and has extended functionality that Yellowbrick could take advantage of, such as multi-model comparisons. Because of this, an ATM contrib model would be well suited to Yellowbrick's out of core approach.
**Examples**
N/A
@mitevpi any thoughts on this?
|
open
|
2020-10-05T19:08:21Z
|
2020-10-05T19:16:30Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/1105
|
[
"type: contrib"
] |
bbengfort
| 0
|
OFA-Sys/Chinese-CLIP
|
nlp
| 133
|
Is the large-scale Chinese dataset (~200 million image-text pairs) you collected for pretraining the model publicly available?
|
Has the large-scale Chinese dataset (~200 million image-text pairs) that you collected for pretraining the model been made public?
|
closed
|
2023-06-08T00:31:09Z
|
2023-06-14T14:31:59Z
|
https://github.com/OFA-Sys/Chinese-CLIP/issues/133
|
[] |
huhuhuqia
| 1
|
pallets-eco/flask-wtf
|
flask
| 262
|
Form -> FlaskForm rename breaks custom metaclasses
|
When using a custom metaclass you usually subclass `FormMeta`. This fails thanks to the different metaclass used to show the deprecation warning (#249):
> TypeError: Error when calling the metaclass bases
> metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases
It's an easy fix (not using the deprecated `Form` class) but still, I think this should at least be mentioned in the docs.
cc @davidism
|
closed
|
2016-10-03T08:12:40Z
|
2021-05-28T01:03:46Z
|
https://github.com/pallets-eco/flask-wtf/issues/262
|
[] |
ThiefMaster
| 1
|
onnx/onnx
|
scikit-learn
| 6,555
|
System.ExecutionEngineException creating Microsoft.ML.OnnxRuntime.SessionOptions
|
This issue is reported by https://developercommunity.visualstudio.com/user/25004 and moved from https://developercommunity.visualstudio.com/t/SystemExecutionEngineException-creating/10794175
Hi devs!
I have a C#.NET project that uses the Microsoft.ML.OnnxRuntime and Microsoft.ML.OnnxRuntime.Managed packages.
The System.ExecutionEngineException began occurring while creating Microsoft.ML.OnnxRuntime.SessionOptions after the Microsoft.ML.OnnxRuntime and Microsoft.ML.OnnxRuntime.Managed packages were updated from Version=1.19.2 to Version=1.20.0.
Do the 1.20.0 packages include all required dependencies? Previous version packages seem to work without issue.
Thanks!
-Denny
|
closed
|
2024-11-25T17:39:00Z
|
2025-01-06T18:47:15Z
|
https://github.com/onnx/onnx/issues/6555
|
[
"bug",
"topic: runtime"
] |
tarekgh
| 5
|
bmoscon/cryptofeed
|
asyncio
| 507
|
OKEx error: Unexpected keyword argument 'status'
|
**Describe the bug**
I am using the sample script for liquidations with OKEx. After a couple of seconds, it ends with this error message:
```
File "[...]/lib/python3.9/site-packages/cryptofeed/feed.py", line 296, in callback await cb(**kwargs)
TypeError: __call__() got an unexpected keyword argument 'status'
```
**To Reproduce**
```
f.add_feed(OKEx(channels=[LIQUIDATIONS], symbols=['BTC-USD-SWAP'],
callbacks={LIQUIDATIONS: LiquidationCallback(liquidations)}), timeout=-1)
```
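Until the callback signature catches up with fields like `status`, a defensive pattern (plain Python, not cryptofeed-specific) is to let the callback accept arbitrary keyword arguments so new fields are tolerated instead of raising `TypeError`:

```python
import asyncio


async def liquidations(**kwargs):
    # tolerate fields the feed handler passes that older signatures
    # didn't anticipate, such as the new `status` keyword
    print(kwargs.get("symbol"), kwargs.get("side"), kwargs.get("status"))


# simulating the feed handler invoking the callback with an extra field
asyncio.run(liquidations(symbol="BTC-USD-SWAP", side="sell", status="filled"))
```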
|
closed
|
2021-06-02T06:11:23Z
|
2021-06-14T22:16:02Z
|
https://github.com/bmoscon/cryptofeed/issues/507
|
[
"bug"
] |
MikeMaxNow
| 2
|
strawberry-graphql/strawberry
|
graphql
| 3,517
|
default_factory doesn't work
|
Hi!
I use default_factory to initialize my variable, but the variable always returns the same result. It seems like default_factory doesn't work: it always returns the same result of the function.
Here is example to reproduce:
https://play.strawberry.rocks/?gist=a7a5e62ffe4e68696b44456398d11104
|
open
|
2024-05-27T15:13:17Z
|
2025-03-20T15:56:45Z
|
https://github.com/strawberry-graphql/strawberry/issues/3517
|
[
"bug"
] |
ShtykovaAA
| 11
|
comfyanonymous/ComfyUI
|
pytorch
| 6,673
|
Add Support For OT Lora, Loha and Dora for HunYuan Video in ComfyUI
|
### Feature Idea
Please add support in ComfyUI for loading of OneTrainer Lora, LoHa and Dora files.
Attached are the key names for an OT Lora, LoHa, Dora with full layers, TE1 and TE2 trained and a bundled embedding (essentially every option possible)
[LoHaFullTETI_keys.txt](https://github.com/user-attachments/files/18631714/LoHaFullTETI_keys.txt)
[LoRaFullTETI_keys.txt](https://github.com/user-attachments/files/18631715/LoRaFullTETI_keys.txt)
[DoRaFullTETI_keys.txt](https://github.com/user-attachments/files/18631713/DoRaFullTETI_keys.txt)
Safetensors if needed can be found here:
[SafeTensor Files](https://huggingface.co/datasets/Calamdor/OT_Files/tree/main)
### Existing Solutions
https://github.com/comfyanonymous/ComfyUI/issues/6531#issuecomment-2617789374
is a workaround for an OT Lora, but not for Dora and likely not for a Lora with TE.
### Other
_No response_
|
open
|
2025-02-02T07:51:44Z
|
2025-02-24T07:30:42Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6673
|
[
"Feature"
] |
Calamdor
| 15
|
statsmodels/statsmodels
|
data-science
| 8,548
|
Python does not seem to correctly report fitted values in statmodels ARIMA when differencing is involved
|
When fitting an ARIMA model using the statsmodels Python implementation, I see the following behaviour: Python does not seem to correctly provide the fitted values for the differenced lags. I am comparing the results with those obtained using the R ARIMA implementation.
```python
import pandas as pd
df=pd.read_csv(r'ffp2\datasets\euretail.csv')  # raw string avoids backslash-escape issues on Windows paths
df.index=pd.date_range(start=df['index'].str[:4].min(),freq='1Q',periods=df.shape[0])
df=df['value']
from statsmodels.tsa.arima.model import ARIMA
model=ARIMA(df, order=(0,1,1), seasonal_order=(0,1,1,4)).fit()
pd.concat((df,model.fittedvalues, model.resid), axis=1).rename(columns={0:'py_fit',1:'py_resid'}).head(10)
```
<details>
Scenario:
- Seasonal non-stationary data which requires
- Single differencing
- Seasonal first differencing of period 4
- We end up with a ARIMA(0,1,1)(0,1,1,4) model
Please note the issue occurs when using the Python ARIMA implementation imported via `from statsmodels.tsa.arima.model import ARIMA`. I've seen web tutorials where it seems to function correctly, but they appear to use the previous implementation, `from statsmodels.tsa.arima_model import ARIMA`, which is currently deprecated: https://github.com/statsmodels/statsmodels/issues/3884
Python fitted model below (for reference)
```
SARIMAX Results
=======================================================================================
Dep. Variable: value No. Observations: 64
Model: ARIMA(0, 1, 1)x(0, 1, 1, 4) Log Likelihood -34.642
Date: Thu, 01 Dec 2022 AIC 75.285
Time: 14:09:56 BIC 81.517
Sample: 03-31-1996 HQIC 77.718
- 12-31-2011
Covariance Type: opg
==============================================================================
coef std err z P>|z| [0.025 0.975]
------------------------------------------------------------------------------
ma.L1 0.2903 0.155 1.872 0.061 -0.014 0.594
ma.S.L4 -0.6912 0.132 -5.250 0.000 -0.949 -0.433
sigma2 0.1810 0.034 5.316 0.000 0.114 0.248
===================================================================================
Ljung-Box (L1) (Q): 0.25 Jarque-Bera (JB): 1.91
Prob(Q): 0.62 Prob(JB): 0.38
Heteroskedasticity (H): 0.76 Skew: -0.22
Prob(H) (two-sided): 0.54 Kurtosis: 3.77
===================================================================================
Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).
```
R model code for reference below
```r
chooseCRANmirror(graphics=FALSE, ind=70)
# # Install if needed
# install.packages("fpp2")
# install.packages("readr")
# install.packages("tsibble")
library("fpp2")
library("readr")
library("tsibble")
fit <- Arima(euretail, order=c(0,1,1), seasonal=c(0,1,1))
fitted(fit)
resid(fit)
```
</details>
#### Expected Output
I would expect R and Python implementations to provide much closer results, it seems as if the Python implementation might have a bug
```
value py_fit py_resid r_fit r_resid
1996-03-31 89.13 0.000000 89.130000 89.078541 0.051459 <---- (Issue Here)
1996-06-30 89.52 89.130003 0.389997 89.496685 0.023315
1996-09-30 89.88 89.520000 0.360000 89.864432 0.015568
1996-12-31 90.12 89.879998 0.240002 90.143352 -0.023352
1997-03-31 89.19 134.685000 -45.495000 89.380354 -0.190354 <---- (Issue Here)
1997-06-30 89.78 89.579993 0.200007 89.621984 0.158016
1997-09-30 90.03 90.193547 -0.163547 90.164354 -0.134354
1997-12-31 90.38 90.222837 0.157163 90.251028 0.128972
1998-03-31 90.27 89.463010 0.806990 89.602346 0.667654
1998-06-30 90.77 90.951737 -0.181737 90.937687 -0.167687
1998-09-30 91.85 91.019609 0.830391 91.077857 0.772143
1998-12-31 92.51 92.389250 0.120750 92.397782 0.112218
1999-03-31 92.21 92.044928 0.165072 92.056141 0.153859
1999-06-30 92.52 92.752252 -0.232252 92.744481 -0.224481
1999-09-30 93.62 93.066856 0.553144 93.083940 0.536060
```
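As a plain-Python sketch (not statsmodels internals) of why the first few fitted values behave differently: with d=1 and seasonal D=1 at period s=4, the first 1 + 4 = 5 observations have no fully differenced history, so their one-step predictions rest on (diffuse) initialization rather than the data — exactly the rows flagged above.

```python
# First 10 observations from the euretail series shown above.
y = [89.13, 89.52, 89.88, 90.12, 89.19, 89.78, 90.03, 90.38, 90.27, 90.77]

d1 = [b - a for a, b in zip(y, y[1:])]        # first difference (d=1)
d1_s4 = [b - a for a, b in zip(d1, d1[4:])]   # seasonal difference, period 4 (D=1, s=4)

# d + D*s = 1 + 4 = 5 observations are consumed before the model sees
# a fully differenced value.
print(len(y), len(d1_s4))  # 10 5
```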
Dataset
[euretail.csv](https://github.com/statsmodels/statsmodels/files/10134041/euretail.csv)
#### Output of ``import statsmodels.api as sm; sm.show_versions()``
<details>
INSTALLED VERSIONS
------------------
Python: 3.9.6.final.0
statsmodels
===========
Installed: 0.13.5 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\statsmodels)
Required Dependencies
=====================
cython: Not installed
numpy: 1.21.4 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\numpy)
scipy: 1.7.3 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\scipy)
pandas: 1.3.5 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\pandas)
dateutil: 2.8.2 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\dateutil)
patsy: 0.5.2 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\patsy)
Optional Dependencies
=====================
matplotlib: 3.5.1 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\matplotlib)
backend: module://matplotlib_inline.backend_inline
cvxopt: Not installed
joblib: 1.1.0 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\joblib)
Developer Tools
================
IPython: 7.30.1 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\IPython)
jinja2: Not installed
sphinx: Not installed
pygments: 2.10.0 (c:\Users\alber\Documents\forecasting-principles-and-practice\venv\lib\site-packages\pygments)
pytest: Not installed
virtualenv: Not installed
</details>
|
open
|
2022-12-01T16:28:42Z
|
2022-12-02T08:56:22Z
|
https://github.com/statsmodels/statsmodels/issues/8548
|
[] |
agr5
| 2
|
PokeAPI/pokeapi
|
api
| 1,182
|
[BUG] Pokemon weight is inaccurate
|
When requesting a pokemon from the API, the weight is off by 10x.
For example, when requesting `https://pokeapi.co/api/v2/pokemon/snorlax`
`weight = 4600`
when it should be
`weight = 460kg`
This is true for Ditto and Meowth as well. Have not checked other pokemon, but assume it is the same.
Edit: If this falls on me to fix, it will be delayed.
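If the API is in fact reporting hectograms (the unit PokeAPI documents for weight), the value is not wrong so much as unconverted — a sketch of the conversion:

```python
def weight_kg(api_weight: int) -> float:
    # PokeAPI reports weight in hectograms; divide by 10 for kilograms.
    return api_weight / 10

print(weight_kg(4600))  # 460.0 kg for Snorlax
```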
|
closed
|
2025-01-05T23:11:48Z
|
2025-01-08T09:31:35Z
|
https://github.com/PokeAPI/pokeapi/issues/1182
|
[] |
pedwards95
| 5
|
jazzband/django-oauth-toolkit
|
django
| 1,032
|
DRF - Adding User from Panel vs adding with code error?
|
```python
class CustomerRegister(APIView):
    permission_classes = (permissions.AllowAny,)

    def post(self, request):
        data = request.data
        data['is_active'] = True
        serializer = UserSerializer(data=data)
        if serializer.is_valid():
            user = User.objects.create_user(**data)
            user.save()
            customer = Customer.objects.create(user=User.objects.get(username=data['username']))
            url, headers, body, status_code = self.create_token_response(request)
            return Response(json.loads(body), status=status_code)
        return Response(data=serializer.errors, status=status.HTTP_400_BAD_REQUEST)
```
When I am adding the user from the admin panel everything works and I am getting my tokens.
When I use this code I am getting `{
"error": "invalid_grant",
"error_description": "Invalid credentials given."
}`
My request for the token is absolutely the same.
Does anyone has any idea why?
|
closed
|
2021-11-19T12:01:04Z
|
2021-11-20T05:29:11Z
|
https://github.com/jazzband/django-oauth-toolkit/issues/1032
|
[
"question"
] |
yotovtsvetomir
| 1
|
huggingface/transformers
|
tensorflow
| 36,638
|
[BUG] Batch inference DDP + zero stage 3 = inference code hangs
|
https://github.com/deepspeedai/DeepSpeed/issues/7128
I ran the batch inference code with deepspeed generation, not the vllm one. The code hangs while I set zero stage = 3. I created a minimal code snippet for you to debug the error.
```python
import os
import torch
import torch.distributed as dist
from transformers import AutoModelForCausalLM, AutoTokenizer
import deepspeed

# Initialize distributed environment
def setup_distributed():
    dist.init_process_group(backend="nccl", init_method="env://")
    local_rank = int(os.getenv("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)
    return local_rank

def load_model(model_name="facebook/opt-1.3b", local_rank=0):
    # Ensure distributed environment is set up
    if not dist.is_initialized():
        dist.init_process_group(backend="nccl", init_method="env://")
    world_size = dist.get_world_size()  # Number of GPUs available
    torch.cuda.set_device(local_rank)  # Assign each process to a GPU
    print(
        f"Loading model {model_name} on rank {local_rank}, using {world_size} GPUs for model parallelism"
    )
    # Load model and tokenizer
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # ✅ DeepSpeed Inference config for Model Parallelism
    ds_config = {
        # "replace_with_kernel_inject": False,  # Enables optimized inference kernels
        "tensor_parallel": {"tp_size": 1},  # Enables Model Parallelism
        "dtype": "bf16" if torch.cuda.is_bf16_supported() else "fp16",  # Automatic dtype selection
    }
    # ✅ Initialize DeepSpeed for Model Parallel Inference
    model = deepspeed.init_inference(model, config=ds_config)
    return model, tokenizer

# Perform inference with data parallelism
def batch_inference(model, tokenizer, prompts, local_rank):
    inputs = tokenizer(prompts, return_tensors="pt", padding=True, truncation=True).to(
        f"cuda:{local_rank}"
    )
    with torch.no_grad():
        outputs = model.generate(**inputs, max_length=150, synced_gpus=True)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)

def main():
    local_rank = setup_distributed()
    model, tokenizer = load_model(local_rank=local_rank)
    # Each GPU gets a different batch
    global_batch = [
        ["What is AI?", "Explain deep learning."],  # Batch for GPU 0
        ["Tell me a joke.", "What is reinforcement learning? Tell me all the details"],  # Batch for GPU 1
    ]
    prompts = global_batch[local_rank] if local_rank < len(global_batch) else []
    print(f"GPU {local_rank} prompts:", prompts)
    # Perform batch inference
    results = batch_inference(model, tokenizer, prompts, local_rank)
    print(f"GPU {local_rank} results:", results)
    dist.barrier()  # Ensure all GPUs finish

if __name__ == "__main__":
    main()
```
Run the code with
```bash
NCCL_DEBUG=INFO NCCL_BLOCKING_WAIT=1 NCCL_ASYNC_ERROR_HANDLING=1 deepspeed --num_gpus 2 test_deepspeed.py
```
The code should run without error because it's DDP.
Now, if we change "tensor_parallel": {"tp_size": 1} to "tensor_parallel": {"tp_size": 2} and rerun the code, it hangs forever. Note that the bug happens when DDP + TP are both enabled.
|
open
|
2025-03-11T03:20:47Z
|
2025-03-11T12:55:46Z
|
https://github.com/huggingface/transformers/issues/36638
|
[] |
ShengYun-Peng
| 1
|
BayesWitnesses/m2cgen
|
scikit-learn
| 524
|
Accuracies totally different
|
Hi. I am converting the tree model to C using m2cgen. Although the inference latencies are much lower, the accuracies are way off. Here's how I am converting and reading the .so files
```
import ctypes
from numpy.ctypeslib import ndpointer

import m2cgen as m2c
from xgboost import XGBRFRegressor

num_est = 100
model = XGBRFRegressor(n_estimators=num_est, max_depth=8)
model.fit(X_train, y_train)

code = m2c.export_to_c(model)
len(code)
with open('model.c', 'w') as f:
    f.write(code)

!gcc -Ofast -shared -o lgb_score.so -fPIC model.c
!ls -l lgb_score.so

lib = ctypes.CDLL('./lgb_score.so')
score = lib.score
# Define the types of the output and arguments of this function.
score.restype = ctypes.c_double
score.argtypes = [ndpointer(ctypes.c_double)]
```
Why is this happening and how can I fix it?
|
open
|
2022-06-10T02:38:28Z
|
2022-06-17T04:43:00Z
|
https://github.com/BayesWitnesses/m2cgen/issues/524
|
[] |
harishprabhala
| 3
|
dask/dask
|
pandas
| 11,417
|
using tweepy to extract data but getting error
|
I am getting this coroutine error from my code. I need some tweet data for sentiment analysis and I am unable to get it due to this error.
```python
from twikit import Client, TooManyRequests
import time
from datetime import datetime
import csv
from configparser import ConfigParser
from random import randint

MINIMUM_TWEETS = 20
QUERY = 'stock'

config = ConfigParser()
config.read("config.ini")
username = config['X']["username"]
email = config['X']["email"]
password = config['X']["password"]

# 1. use the login credentials
client = Client(language="en-US")
# client.login(auth_info_1=username, auth_info_2=email, password=password)
# client.save_cookies("cookies.json")
client.load_cookies('cookies.json')

# get tweets
tweets = client.search_tweet(QUERY, product='Top')

for tweet in tweets:
    print(vars(tweet))
    break
```

```
File "d:\VS CODE\Sentiment Analysis using Tweepy\main.py", line 26, in <module>
    for tweet in tweets:
TypeError: 'coroutine' object is not iterable
sys:1: RuntimeWarning: coroutine 'Client.search_tweet' was never awaited
```
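The warning says `Client.search_tweet` is a coroutine: recent twikit versions are async, so the call must be awaited. A sketch of the pattern with a stand-in coroutine (the real call needs network access and credentials, so the function below is hypothetical):

```python
import asyncio

async def search_tweet(query):
    # Stand-in for twikit's async Client.search_tweet.
    return [f"tweet about {query}"]

# Calling without await only creates a coroutine object — nothing runs:
result = search_tweet("stock")
print(type(result).__name__)  # coroutine
result.close()

# The fix: await it inside an async function, or drive it with asyncio.run.
tweets = asyncio.run(search_tweet("stock"))
print(tweets)
```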
- Dask version:
- Python version:3.12
- Operating System: windows 10
- Install method (conda, pip, source): pip
|
closed
|
2024-10-07T17:55:34Z
|
2024-10-07T20:07:42Z
|
https://github.com/dask/dask/issues/11417
|
[
"needs triage"
] |
tapl0w
| 1
|
albumentations-team/albumentations
|
machine-learning
| 2,335
|
[New transform] GridShuffle3D
|
We can do something similar to GridShuffle, but in 3D.
|
open
|
2025-02-07T20:44:59Z
|
2025-02-07T20:45:05Z
|
https://github.com/albumentations-team/albumentations/issues/2335
|
[
"enhancement",
"good first issue"
] |
ternaus
| 0
|
tiangolo/uwsgi-nginx-flask-docker
|
flask
| 297
|
All static files return 403
|
Not sure what I'm doing wrong but all my static files are issuing `403` errors and `permission denied` errors.
Logging into the docker environment and changing permissions to 777 fixes the errors but doesn't fix the underlying issue.
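A less permissive sketch of the workaround than `chmod 777` (paths are illustrative; in the image this would target the app's static directory): grant read and traverse access to everyone without opening write access.

```shell
# Illustrative path standing in for the container's static folder.
mkdir -p /tmp/demo-static
echo "body {}" > /tmp/demo-static/style.css
chmod -R a+rX /tmp/demo-static   # capital X adds execute only on directories
ls -l /tmp/demo-static/style.css
```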
|
closed
|
2022-08-16T09:45:15Z
|
2024-08-07T04:57:41Z
|
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/297
|
[] |
Garulf
| 1
|
TencentARC/GFPGAN
|
deep-learning
| 340
|
Following picture can not be restored, seems can not recognize the face
|
Following picture can not be restored, seems can not recognize the face

It might be an issue with RealESRGAN; I'm not sure, but I believe you can figure it out much more quickly than I can.
|
open
|
2023-02-15T07:20:13Z
|
2023-02-15T07:22:44Z
|
https://github.com/TencentARC/GFPGAN/issues/340
|
[] |
butaixianran
| 0
|
iterative/dvc
|
data-science
| 9,757
|
dvc exp show: External s3 address not properly shown
|
# Bug Report
<!--
## dvc exp show: External s3 address not properly shown
-->
## Description
Hello,
I extended the example from https://github.com/iterative/dvc/issues/9713. Thank you so much for addressing that so quickly! This is much appreciated!
When now using an external s3 address `s3://<BUCKET>/<FILE_NAME>` (e.g., `s3://my_bucket/model.pkl`) as an output location in DVC 3.7, `workspace` and `master` branch in `dvc exp show` use two different names to refer to the s3 location, neither of which seems correct: `master` uses `<REPO_PATH>/<BUCKET>/<FILE_NAME>`, `workspace` uses `<BUCKET>/<FILENAME>`. Both are missing the prefix `s3://`
### Reproduce
For reproducing, please specify the respective `<BUCKET>` and `<FILE_NAME>` in the following:
```
git init -q dvc_issue
cd dvc_issue
dvc init -q
cat <<EOT >> .dvc/config
[cache]
type = symlink
EOT
cat <<EOT >> dvc.yaml
vars:
  - uri_model: s3://<BUCKET>/<FILE_NAME>
stages:
  train:
    cmd: python train.py
    deps:
      - train.py
    outs:
      - \${uri_model}:
          cache: false
  evaluate:
    cmd: python evaluate.py
    deps:
      - evaluate.py
      - \${uri_model}
    metrics:
      - metrics.json:
          cache: false
EOT
cat <<EOT >> train.py
import boto3

def main():
    bucket_name = <BUCKET>
    file_name = <FILE_NAME>
    data = b"weights: 1, 2, 3"
    s3 = boto3.resource('s3')
    object = s3.Object(bucket_name, file_name)
    object.put(Body=data)
    print("Finished train.")

if __name__ == "__main__":
    main()
EOT
cat <<EOT >> evaluate.py
import json

def main():
    metrics_filename = "metrics.json"
    data = {"auc": 0.29}
    with open(metrics_filename, 'w') as f:
        json.dump(data, f)
    print("Finished evaluate.")

if __name__ == "__main__":
    main()
EOT
dvc repro -q
git add .
git commit -q -m "initial"
dvc exp show -v
```
### Expected
A single column with the entry `s3://<BUCKET>/<FILENAME>`.
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.7.0 (pip)
------------------------
Platform: Python 3.10.8 on Linux-3.10.0-1127.8.2.el7.x86_64-x86_64-with-glibc2.17
Subprojects:
dvc_data = 2.6.0
dvc_objects = 0.23.1
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.0.4
Supports:
http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.6.0, boto3 = 1.26.0)
Config:
Global: /home/kpetersen/.config/dvc
System: /etc/xdg/dvc
```
|
open
|
2023-07-25T00:01:48Z
|
2024-10-23T08:06:33Z
|
https://github.com/iterative/dvc/issues/9757
|
[
"bug",
"p2-medium",
"ui",
"A: experiments"
] |
kpetersen-hf
| 1
|
plotly/jupyter-dash
|
jupyter
| 66
|
AttributeError: module 'google.colab.output' has no attribute 'serve_kernel_port_as_iframe' in latest version
|
I experienced the following issue described here: https://stackoverflow.com/questions/68729989/jupyterdash-in-jupyterlabs-fails-after-using-plotly-express-in-a-prior-cell/68737108#68737108
I was only able to solve my issue by downgrading to version 0.2.1.
Maybe there can be a way for users to turn off the connectivity to Google Colab when it causes problems?
|
open
|
2021-08-11T06:44:42Z
|
2021-08-11T06:44:42Z
|
https://github.com/plotly/jupyter-dash/issues/66
|
[] |
jkropko
| 0
|
tflearn/tflearn
|
tensorflow
| 849
|
Model.predict giving same predictions for every examples
|
I have a 110 layer resnet trained and validated with 4 classes to classify. Training examples are in decent proportion (30%,20%,25%,25%). It has validation accuracy of around 90%. When testing it for new examples it gives same class as output always. I am giving a list of arrays as input to model.predict. I have attached the code below.
```python
from __future__ import division, print_function, absolute_import

import numpy as np
# import pandas as pd
import tflearn
import os
from glob import glob
import cv2
import csv
import pickle
from tflearn.data_utils import shuffle, to_categorical
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.layers.normalization import local_response_normalization
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
import h5py

train_data = h5py.File('train_dataset_her2.h5', 'r')
X = train_data['X']
Y = train_data['Y']
test_data = h5py.File('test_dataset_her2.h5', 'r')
testX = test_data['X']
testY = test_data['Y']

# Real-time data preprocessing
img_prep = tflearn.ImagePreprocessing()
img_prep.add_featurewise_zero_center()
# Real-time data augmentation
img_aug = tflearn.ImageAugmentation()
img_aug.add_random_flip_leftright()

# network
n = 18
network = input_data(shape=[None, 224, 224, 3])  # data_preprocessing=img_prep, data_augmentation=img_aug
network = conv_2d(network, 108, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 108, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 108, 3, activation='relu')
network = dropout(network, 0.8)
network = tflearn.conv_2d(network, 16, 3, regularizer='L2', weight_decay=0.0001)
network = tflearn.residual_block(network, n, 16)
network = tflearn.residual_block(network, 1, 32, downsample=True)
network = tflearn.residual_block(network, n - 1, 32)
network = tflearn.residual_block(network, 1, 64, downsample=True)
network = tflearn.residual_block(network, n - 1, 64)
network = tflearn.batch_normalization(network)
network = tflearn.activation(network, 'relu')
network = tflearn.global_avg_pool(network)
network = tflearn.fully_connected(network, 4, activation='softmax')
adam = tflearn.optimizers.Adam(learning_rate=0.002)
network = tflearn.regression(network, optimizer=adam,
                             loss='categorical_crossentropy')

model = tflearn.DNN(network, tensorboard_verbose=0)
model.load("dataset_adam_resnet.tfl.ckpt-4000")
print("Done loading model")

############################################################################
# Prediction
sub_folder = [temp[0] for temp in os.walk('Dataset_Test_Data')]
sub_folder = sub_folder[1:]

######################################################################
# predict without pickle
for f1 in range(1):  # (len(sub_folder)):
    list_images = sorted(glob(sub_folder[f1] + '/*.jpg'))
    predictions = []
    temp_m = sub_folder[f1].split("/")
    print("Operating '%s' folder" % temp_m[1])
    for item in list_images:
        print("predicting %s" % item)
        predictions.append(model.predict_label(cv2.imread(item).astype(float).reshape(1, 224, 224, 3)))
    writer = csv.writer(open('./HER2_Test_Data/' + temp_m[1] + '/Prediction_cnn_without_pickle' + temp_m[1] + '.csv', "w"))
    writer.writerows(predictions)
```
|
open
|
2017-07-22T21:30:11Z
|
2019-05-24T14:20:06Z
|
https://github.com/tflearn/tflearn/issues/849
|
[] |
deepakanandece
| 2
|
flaskbb/flaskbb
|
flask
| 294
|
Celery is NOT running
|
There is a problem.
Celery is not running.
You can start celery with this command:
flaskbb --config None celery worker
When I run it as above, I get:
```
 -------------- celery@ip-172-31-16-221 v4.0.2 (latentcall)
---- **** -----
--- * *** * -- Linux-4.4.44-39.55.amzn1.x86_64-x86_64-with-glibc2.2.5 2017-06-08 02:42:07
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: flaskbb:0x7******27410
- ** ---------- .> transport: redis://localhost:6379//
- ** ---------- .> results: redis://localhost:6379/
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
 .> celery exchange=celery(direct) key=celery

[2017-06-08 02:42:08,577: CRITICAL/MainProcess] Unrecoverable error: TypeError("can_read() got an unexpected keyword argument 'timeout'",)
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/celery/worker/worker.py", line 203, in start
    self.blueprint.start(self)
  File "/usr/local/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/usr/local/lib/python2.7/site-packages/celery/bootsteps.py", line 370, in start
    return self.obj.start()
  File "/usr/local/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 318, in start
    blueprint.start(self)
  File "/usr/local/lib/python2.7/site-packages/celery/bootsteps.py", line 119, in start
    step.start(parent)
  File "/usr/local/lib/python2.7/site-packages/celery/worker/consumer/consumer.py", line 594, in start
    c.loop(*c.loop_args())
  File "/usr/local/lib/python2.7/site-packages/celery/worker/loops.py", line 88, in asynloop
    next(loop)
  File "/usr/local/lib/python2.7/site-packages/kombu/async/hub.py", line 345, in create_loop
    cb(*cbargs)
  File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 1039, in on_readable
    self.cycle.on_readable(fileno)
  File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 337, in on_readable
    chan.handlers[type]()
  File "/usr/local/lib/python2.7/site-packages/kombu/transport/redis.py", line 671, in _receive
    while c.connection.can_read(timeout=0):
TypeError: can_read() got an unexpected keyword argument 'timeout'
```
|
closed
|
2017-06-08T02:46:17Z
|
2018-04-15T07:47:46Z
|
https://github.com/flaskbb/flaskbb/issues/294
|
[] |
battlecat
| 1
|
graphistry/pygraphistry
|
jupyter
| 239
|
[FEA] pipe
|
**Is your feature request related to a problem? Please describe.**
It'd be super convenient to have a graph-friendly pipeline operator, replacing `a(b(c(g)))` with `g.pipe(c).pipe(b).pipe(a)`, similar to pandas and other ~monadic envs
**Describe the solution you'd like**
It should support transforms over nodes, edges, and graphs
Ex:
```python
g.pipe(graph=lambda g: g.bind(x='x'))
g.pipe(lambda g: g.bind(x='x')) # shorthand
g.pipe(edges=lambda g: g._edges[['src', 'dst']]) # g in, df out
g.edges(lambda g: g._edges[['src', 'dst']]) # shorthand
g.pipe(nodes=lambda g: g._nodes[['node']]) # g in, df out
g.nodes(lambda g: g._nodes[['node']]) # shorthand
```
- It should be allowed to provide both `nodes=`, `edges=`
- If both `nodes=` / `edges=` and `graph=` kwargs are provided, run them in the provided order:
```python
g.pipe(nodes=fn_1, edges=fn_2, graph=fn_3)
g.pipe(graph=fn_1, nodes=fn_2, edges=fn_3)
```
**Describe alternatives you've considered**
While we do have `g.edges(df)` / `g.nodes(df)`, they do not support flows like `g.cypher(...).edges(clean_fn)`
**Additional context**
* Similar to pandas `pipe`
* Most composition operators get inherited from here as they're table level
* ... Except we still don't have graph-level composition: union, subtract, ...
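The semantics above can be sketched with a minimal stand-in class — `PlotterStub` and its `_nodes`/`_edges` fields are hypothetical simplifications of pygraphistry's Plotter, and kwargs order (preserved since Python 3.7) drives the transform order:

```python
class PlotterStub:
    """Hypothetical stand-in for pygraphistry's immutable Plotter."""
    def __init__(self, nodes, edges):
        self._nodes, self._edges = nodes, edges

    def pipe(self, **kwargs):
        g = self
        # Run the transforms in the order the kwargs were provided.
        for kind, fn in kwargs.items():
            if kind == 'graph':
                g = fn(g)                           # g in, g out
            elif kind == 'nodes':
                g = PlotterStub(fn(g), g._edges)    # g in, nodes table out
            elif kind == 'edges':
                g = PlotterStub(g._nodes, fn(g))    # g in, edges table out
        return g

g = PlotterStub(nodes=[1, 2], edges=[(1, 2)])
g2 = g.pipe(edges=lambda g: g._edges + [(2, 1)],
            nodes=lambda g: g._nodes + [3])
print(g2._nodes, g2._edges)  # [1, 2, 3] [(1, 2), (2, 1)]
```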
|
closed
|
2021-07-02T17:48:49Z
|
2021-08-24T14:16:19Z
|
https://github.com/graphistry/pygraphistry/issues/239
|
[
"enhancement",
"help wanted",
"good-first-issue"
] |
lmeyerov
| 1
|
autokey/autokey
|
automation
| 129
|
When installed via pip, gir1.2-appindicator3-0.1 required
|
## Classification:
UI/Usability
## Reproducibility:
Always
## Summary
Attempting to install via pip3 on recent debian has no problem until you attempt to launch the gui. It depends on the package
## Steps to Reproduce
- pip3 install autokey
- for good measure install libappindicator-3 (and -dev)
- fix any more missing pip3 dependencies
- pip3 install autokey
- autokey-gtk
## Expected Results
- The gui should start
## Actual Results
$ autokey-gtk (minikube/kube-system)
Traceback (most recent call last):
File "/home/ranyardm/.local/bin/autokey-gtk", line 7, in <module>
from autokey.gtkui.__main__ import main
File "/home/ranyardm/.local/lib/python3.5/site-packages/autokey/gtkui/__main__.py", line 4, in <module>
from autokey.gtkapp import Application
File "/home/ranyardm/.local/lib/python3.5/site-packages/autokey/gtkapp.py", line 31, in <module>
from autokey.gtkui.notifier import get_notifier
File "/home/ranyardm/.local/lib/python3.5/site-packages/autokey/gtkui/notifier.py", line 22, in <module>
gi.require_version('AppIndicator3', '0.1')
File "/usr/lib/python3/dist-packages/gi/__init__.py", line 118, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace AppIndicator3 not available
If helpful, submit screenshots of the issue to help debug. `autokey-gtk --verbose` output is also useful.
## Version
0.93.10
If the problem is known to be present in more than one version, please list all of those.
Installed via: pip3 install autokey
Distro:
latest debian (9.4)
Describe any debugging steps you've taken yourself.
Lots of googling, eventually found a solution
If you've found a workaround, provide it here.
apt install gir1.2-appindicator3-0.1
^^ The above should be in the README.md, the wiki somewhere or somewhere much more obvious than it is.
|
open
|
2018-04-05T13:55:32Z
|
2018-07-29T17:28:08Z
|
https://github.com/autokey/autokey/issues/129
|
[] |
iMartyn
| 3
|
apache/airflow
|
python
| 48,021
|
Task Execution API: Implement default version handling for no header
|
Task Execution API versioning was added in https://github.com/apache/airflow/pull/47951 via [Cadwyn](https://github.com/zmievsa/cadwyn).
As a follow-up we should Implement default version handling for no header since it isn't available out of the box. So if a version isn't provided, we default it to latest.
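Framework details aside, the intended fallback can be sketched as a small resolver — the header name below is an assumption for illustration, not Airflow's actual header:

```python
LATEST_VERSION = "2025-03-20"  # hypothetical latest API version string

def resolve_version(headers: dict) -> str:
    # If the client did not send a version header, default to latest.
    # "Airflow-API-Version" is an assumed header name for this sketch.
    return headers.get("Airflow-API-Version", LATEST_VERSION)

print(resolve_version({}))                                    # latest
print(resolve_version({"Airflow-API-Version": "2025-01-01"}))  # pinned
```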
|
open
|
2025-03-20T16:22:12Z
|
2025-03-20T16:24:31Z
|
https://github.com/apache/airflow/issues/48021
|
[
"area:API",
"area:task-execution-interface-aip72"
] |
kaxil
| 0
|
chezou/tabula-py
|
pandas
| 276
|
Tabula read_pdf cannot read all pages
|
# Summary of your issue
tabula.read_pdf cannot read all pages.
# What did you do when you faced the problem?
I could not do anything.
## Code:
```
import tabula
fp = r"https://www.apple.com/supplier-responsibility/pdf/Apple-Supplier-List.pdf"
df = tabula.read_pdf(fp, pages = "all", pandas_options = {'header': None})
```
## Expected behavior:
A list of 33 elements. Each element is a dataframe that contains the content of one page of the pdf report
## Actual behavior:
The code only returns a list of one element, a dataframe:
```
                       0                                                                                 1
0    Kyocera Corporation  1166-1 Hebimizo-cho, Higashiomi, Shiga, Japan
1    Kyocera Corporation  1-1 Kokubuyamashita-cho, Kirishima-shi, Kagoshima, Japan
2    Kyocera Corporation  1 Mikata, Ayabe, Kyoto, Japan
3    Laird PLC            Building 1, Dejin Industrial Park, Fuyuanyi Road, Heping Community, Fuyong Town, Bao'an
4                         District, Shenzhen, Guangdong, China
5    Laird PLC            Building No. 1-7 & 9, No. 8 Pengfeng Road, Dakun Industry Park, Songjiang District, Shanghai,
6                         China
7    Laird PLC            3rd Building, No. 398, Yuandian Road, Minhang District, Shanghai, China
8    Laird PLC            28 Huanghe South Road, Kunshan Economic & Tech Development Zone, Kunshan, Jiangsu,
9                         China
10   Largan Precision     No. 18, Tutong First Industrial District, Tutong, Changping, Dongguan, Guangdong, China
11   Company Limited
12   Largan Precision     No. 11, Jingke Road, Nantun District, Taichung, Taiwan
13   Company Limited
```
|
closed
|
2021-04-15T05:54:14Z
|
2021-04-15T05:54:26Z
|
https://github.com/chezou/tabula-py/issues/276
|
[] |
ZhangYuchenApril
| 1
|
aio-libs/aiopg
|
sqlalchemy
| 40
|
Asyncio + sqlalchemy ORM
|
Hi.
How do I work with SQLAlchemy objects?
Right now it's possible to execute SQL queries via conn.execute() and get scalars back.
Is there a way to get ORM objects instead?
|
closed
|
2014-12-23T10:13:21Z
|
2015-01-11T10:18:33Z
|
https://github.com/aio-libs/aiopg/issues/40
|
[] |
iho
| 2
|
joouha/euporie
|
jupyter
| 133
|
Enabling private sixel color registers should not be necessary
|
This is not a big deal, but these private sixel color functions don't actually do anything:
https://github.com/joouha/euporie/blob/be40f57aefad591f8880ea400f64ea7a9ecee924/euporie/core/io.py#L187-L193
Mode 1070 is a private mode, so those sequences should be `\x1b[?1070h` and `\x1b[?1070l`.
That said, it's highly unlikely you need to request private color registers anyway. That's only necessary if you're using an image without defining a color palette, and you want it to use the default palette. But it's not a standard mode, and it won't work consistently even on terminals that do support it, so there's very little justification for using it.
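The corrected sequences stated above can be written out directly; this just restates the issue's own fix as constants:

```python
CSI = "\x1b["

# Mode 1070 is a private DEC mode, so the '?' prefix is required.
ENABLE_PRIVATE_COLOR_REGISTERS = CSI + "?1070h"
DISABLE_PRIVATE_COLOR_REGISTERS = CSI + "?1070l"

print(repr(ENABLE_PRIVATE_COLOR_REGISTERS))   # '\x1b[?1070h'
print(repr(DISABLE_PRIVATE_COLOR_REGISTERS))  # '\x1b[?1070l'
```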
|
open
|
2025-03-20T11:27:33Z
|
2025-03-20T11:27:33Z
|
https://github.com/joouha/euporie/issues/133
|
[] |
j4james
| 0
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 1,366
|
why result from paired image and single image are different?
|
As the title says: why are the results from a paired image and a single image different?
Waiting for your reply.
|
open
|
2022-01-18T07:05:59Z
|
2022-01-20T23:10:44Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1366
|
[] |
zhanghahah1993
| 3
|
pytest-dev/pytest-html
|
pytest
| 128
|
Chrome blocking data uri
|
I'm running Chrome 60 and cannot click to open any of the Links (Browser Log, Server Log, etc.) in my self-contained pytest-html report. This seems to be due to Chrome blocking top frame navigation to data urls. See [here](https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/GbVcuwg_QjM) for their developer discussion on this. The console error is ```Not allowed to navigate top frame to data URL:``` Workaround is to use firefox, or right click to open the URL in a new tab.
|
closed
|
2017-08-28T19:42:57Z
|
2018-04-16T09:43:05Z
|
https://github.com/pytest-dev/pytest-html/issues/128
|
[] |
micheletest
| 2
|
rgerum/pylustrator
|
matplotlib
| 69
|
Not expected behavior with drag and drop of subplots
|
https://github.com/user-attachments/assets/a017d3f5-a794-492b-9985-d395b8615825

I experience weird behavior when manually interacting with the subplots generated by the example code (see the recorded screen).

Configuration:
macOS 13.7.1
Python 3.10
Pylustrator 1.3.0
|
open
|
2024-11-24T12:17:52Z
|
2025-03-09T01:59:25Z
|
https://github.com/rgerum/pylustrator/issues/69
|
[] |
amichaut
| 3
|
plotly/dash
|
jupyter
| 2,989
|
dcc.dropdown options order not consistent during search
|
dash 2.18.0
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
Chrome, Windows
**Describe the bug**
For this dcc.Dropdown:
`dcc.Dropdown(options=['1', '11 Text', '12', '110', '111', '112'])
`
The desired order is as set in options. When nothing is typed in search I see as expected:

Once I start searching for 11, the order changes and no longer matches the desired order:

|
open
|
2024-09-08T10:49:02Z
|
2024-09-09T17:43:49Z
|
https://github.com/plotly/dash/issues/2989
|
[
"bug",
"P3"
] |
BellLongworth
| 0
|
DistrictDataLabs/yellowbrick
|
scikit-learn
| 1,016
|
Update tests to support Pandas 1.0
|
Pandas 1.0 introduces some deprecations that break our tests; we need to update our tests in order to fully support newer versions of Pandas.
|
closed
|
2020-02-07T00:30:30Z
|
2020-06-12T03:53:56Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/1016
|
[
"priority: high",
"type: technical debt"
] |
bbengfort
| 0
|
pyppeteer/pyppeteer
|
automation
| 222
|
requests-html, pyppeteer and proxies
|
Hi All
I'm attempting to use proxy servers with pyppeteer (via requests-html).
Are you able to confirm that pyppeteer is able to use proxies such as `user:pass@my-proxy-server.com:12345`
I attempt to use them as per chromium docs with `--proxy-server="user:pass@my-proxy-server.com:12345"` but get various `PageErrors` relating to proxy etc.
Are you able to confirm that pyppeteer is able to accept the proxy command line switches for chromium?
Thanks!
|
open
|
2021-02-12T17:12:06Z
|
2021-02-14T03:06:30Z
|
https://github.com/pyppeteer/pyppeteer/issues/222
|
[] |
Bobspadger
| 1
|
fastapi-users/fastapi-users
|
fastapi
| 769
|
Update documentation to match changes with v8.x.x
|
## Describe the bug
Some of the documentation has mismatched method names. For example, the documentation under the `Usage -> Routes` page still references an `after_register` handler, but that is now the `on_after_register` handler within a `UserManager`.
## To Reproduce
Go to the [Usage > Routes Page](https://fastapi-users.github.io/fastapi-users/usage/routes/#post-register) to see some of the mismatched method names.
## Fixes
- I opened #768 to address this.
- The examples at [https://fastapi-users.github.io/fastapi-users/configuration/full-example/](https://fastapi-users.github.io/fastapi-users/configuration/full-example/) also have invalid `UserManager` classes and method names, but those examples are not in this repo for me to fix.
|
closed
|
2021-10-15T20:56:19Z
|
2021-10-18T05:42:12Z
|
https://github.com/fastapi-users/fastapi-users/issues/769
|
[
"bug"
] |
jdukewich
| 1
|
microsoft/nni
|
deep-learning
| 5,038
|
Can I use NetAdapt with YOLOv5?
|
**Describe the issue**:
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:
|
open
|
2022-08-01T10:26:41Z
|
2022-08-04T01:49:59Z
|
https://github.com/microsoft/nni/issues/5038
|
[] |
mumu1431
| 1
|
huggingface/peft
|
pytorch
| 1,971
|
hyper-parameter `r` in adalora_config and lora_config
|
It is clear that `r` in lora_config refers to rank. However, in adalora, I noticed that `r` also exists (besides `target_r` and `init_r`) and is used in the example. I am wondering what this `r` represents in adalora? Thanks.
```
from transformers import AutoModelForSeq2SeqLM
from peft import AdaLoraModel, AdaLoraConfig
config = AdaLoraConfig(
peft_type="ADALORA", task_type="SEQ_2_SEQ_LM", r=8, lora_alpha=32, target_modules=["q", "v"],
lora_dropout=0.01,
)
```
|
closed
|
2024-07-29T22:21:21Z
|
2024-08-01T10:24:43Z
|
https://github.com/huggingface/peft/issues/1971
|
[
"good first issue",
"contributions-welcome"
] |
Vincent950129
| 1
|
Evil0ctal/Douyin_TikTok_Download_API
|
web-scraping
| 151
|
Video downloads are redirected to douyin.wtf
|
After deploying the project to my own server with Docker, the download feature redirects to douyin.wtf instead of the deployment server itself. If I manually replace douyin.wtf with the deployment server's address, the download works normally.
***On which platform does the error occur?***
TikTok
e.g.: Douyin/TikTok
***On which endpoint does the error occur?***
Web APP
e.g.: API-V1/API-V2/Web APP
***What input value was submitted?***
https://www.douyin.com/discover?modal_id=7069543727328398622
e.g.: a short video link
***Did you try again?***
Yes
e.g.: Yes, the error still existed X time after it occurred.
***Have you read this project's README or API documentation?***
Yes, and I am quite sure this problem is caused by the program.
e.g.: Yes, and I am quite sure this problem is caused by the program.
|
closed
|
2023-02-08T13:45:27Z
|
2023-02-08T17:38:10Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/151
|
[
"BUG"
] |
c-or
| 1
|
tortoise/tortoise-orm
|
asyncio
| 1,667
|
Problems using `.raw()` instead of `.filter()`
|
**Describe the bug**
To get consistency between Tortoise-based and non-Tortoise based queries, I started replacing the usual `<ModelClassName>.filter(...)` fetches with `<ModelClassName>.raw(<PlainSQLQueryString>)`.
It seemed to work, but I later found that saves were often not actually being committed to the database. This caused serious data-consistency and workflow problems in several of my networks, which were only resolved after changing back to `.filter(...)`.
**Expected behavior**
When using `<ModelClassName>.raw(...)` I was expecting the returned Model row objects to behave in the same way as the ones returned from awaiting a `<ModelClassName>.filter(...)` query.
**Additional context**
Is there a "safe" and recommended way to get `Model` row objects from raw SQL queries, which behave identically to those retrieved from `.filter(...)` ?
All help appreciated.
|
open
|
2024-06-28T03:07:46Z
|
2024-07-02T02:43:11Z
|
https://github.com/tortoise/tortoise-orm/issues/1667
|
[] |
davidmcnabnz
| 1
|
modoboa/modoboa
|
django
| 2,895
|
dkim missing via auth-smtp (submission), but added via local shell mail only
|
# Impacted versions
* OS Type: Debian
* OS Version: Debian 10 (buster)
* Database Type: PostgreSQL
* Database version: 11.19
* Modoboa: 2.0.5
* installer used: Yes
# Steps to reproduce
Standard configuration via Installer.
/etc/postfix/main.cf contains:
```
smtpd_milters = inet:127.0.0.1:12345
non_smtpd_milters = inet:127.0.0.1:12345
milter_default_action = accept
milter_content_timeout = 30s
```
opendkim.service is configured with `Socket inet:12345@localhost` and is running.
Domain has a dkim key and has a green dkim icon.
# Current behavior
When sending an email with the correct From header from a mail client, no dkim header is added. The mail.log has no dkim header:
```
Mar 6 11:45:45 modoboa1 postfix/submission/smtpd[29468]: connect from gw.apg2.net[81.3.13.187]
Mar 6 11:45:45 modoboa1 postfix/submission/smtpd[29468]: Anonymous TLS connection established from gw.apg2.net[81.3.13.187]: TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)
Mar 6 11:45:45 modoboa1 postfix/submission/smtpd[29468]: NOQUEUE: client=gw.apg2.net[81.3.13.187], sasl_method=PLAIN, sasl_username=user@apg2.de
Mar 6 11:45:47 modoboa1 postfix/smtpd[29508]: connect from localhost[127.0.0.1]
Mar 6 11:45:47 modoboa1 postfix/smtpd[29508]: 7E631BFBB1: client=localhost[127.0.0.1], orig_client=gw.apg2.net[81.3.13.187]
Mar 6 11:45:47 modoboa1 postfix/cleanup[29509]: 7E631BFBB1: message-id=<3E50EF74-D3C3-439B-AFC1-58EE779DFB83@apg2.de>
Mar 6 11:45:47 modoboa1 postfix/smtpd[29508]: disconnect from localhost[127.0.0.1] ehlo=1 xforward=1 mail=1 rcpt=1 data=1 quit=1 commands=6
Mar 6 11:45:47 modoboa1 postfix/qmgr[25853]: 7E631BFBB1: from=<user@apg2.de>, size=4655, nrcpt=1 (queue active)
Mar 6 11:45:47 modoboa1 amavis[18307]: (18307-14) Passed CLEAN {RelayedOutbound}, ORIGINATING LOCAL [81.3.13.187]:37543 [81.3.13.187] <user@apg2.de> -> <user@ymail.com>, Message-ID: <3E50EF74-D3C3-439B-AFC1-58EE779DFB83@apg2.de>, mail_id: MhInoSFVu_r4, Hits: -2.899, size: 4179, queued_as: 7E631BFBB1, 1564 ms
Mar 6 11:45:47 modoboa1 postfix/submission/smtpd[29468]: proxy-accept: END-OF-MESSAGE: 250 2.0.0 from MTA(smtp:[127.0.0.1]:10025): 250 2.0.0 Ok: queued as 7E631BFBB1; from=<user@apg2.de> to=<user@ymail.com> proto=ESMTP helo=<[10.5.248.3]> sasl_username=<user@apg2.de>
Mar 6 11:45:49 modoboa1 postfix/smtp[29511]: Trusted TLS connection established to mta6.am0.yahoodns.net[67.195.228.110]:25: TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256
Mar 6 11:45:51 modoboa1 postfix/smtp[29511]: 7E631BFBB1: to=<user@ymail.com>, relay=mta6.am0.yahoodns.net[67.195.228.110]:25, delay=3.6, delays=0.04/0.03/2.1/1.4, dsn=2.0.0, status=sent (250 ok dirdel)
Mar 6 11:45:51 modoboa1 postfix/qmgr[25853]: 7E631BFBB1: removed
```
An email sent locally via a linux shell mail tool gets the dkim header though:
```
Mar 6 12:27:42 modoboa1 postfix/pickup[25852]: 4A49BBFD04: uid=1000 from=<user>
Mar 6 12:27:42 modoboa1 postfix/cleanup[17156]: 4A49BBFD04: message-id=<20230306112742.GA17268@apg2.de>
Mar 6 12:27:42 modoboa1 opendkim[1502]: 4A49BBFD04: DKIM-Signature field added (s=modoboa, d=apg2.de)
Mar 6 12:27:42 modoboa1 postfix/qmgr[25853]: 4A49BBFD04: from=<user@apg2.de>, size=464, nrcpt=1 (queue active)
Mar 6 12:27:43 modoboa1 postfix/smtp[17163]: Trusted TLS connection established to mta6.am0.yahoodns.net[67.195.204.77]:25: TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256
Mar 6 12:27:44 modoboa1 postfix/smtp[17163]: 4A49BBFD04: to=<user@ymail.com>, relay=mta6.am0.yahoodns.net[67.195.204.77]:25, delay=2.2, delays=0.06/0.01/1.3/0.82, dsn=2.0.0, status=sent (250 ok dirdel)
Mar 6 12:27:44 modoboa1 postfix/qmgr[25853]: 4A49BBFD04: removed
```
# Expected behavior
Mails sent via authenticated-smtp from an external mail client should get the dkim signature header.
|
closed
|
2023-03-06T12:09:34Z
|
2023-08-07T04:05:31Z
|
https://github.com/modoboa/modoboa/issues/2895
|
[
"stale"
] |
71ae
| 8
|
microsoft/qlib
|
deep-learning
| 975
|
Cannot build a new model
|
When I want to contribute a new model, it gives me this error:
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'qlib.contrib.model.pytorch_newmodel_ts'
How can I solve this? Also, is there documentation for development?
|
closed
|
2022-03-13T02:29:33Z
|
2024-07-16T14:50:40Z
|
https://github.com/microsoft/qlib/issues/975
|
[
"question"
] |
mczhuge
| 2
|
coqui-ai/TTS
|
deep-learning
| 4,125
|
[Bug] Random Talk
|
### Describe the bug
When I try TTS in Hindi with Coqui TTS, it speaks the given sentence but adds some random, unintelligible talk.
### To Reproduce
Generate हिंदी भाषा in hi.
It will speak Hindi but with random talk.
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
xttsv2_2.0.3
python 3.11
pytorch 2.2.1
os windows
cuda
rtx 3050 4gb 75watts
conda
```
### Additional context
_No response_
|
closed
|
2025-01-08T06:31:52Z
|
2025-02-22T05:07:49Z
|
https://github.com/coqui-ai/TTS/issues/4125
|
[
"bug",
"wontfix"
] |
0-666
| 7
|
zama-ai/concrete-ml
|
scikit-learn
| 910
|
How to build a CNN on BFV?
|
Hello!
I am doing research related to neural networks and homomorphic encryption. Your library is amazing and makes it possible to explore the TFHE scheme! As I understand it, the convolutional neural network example works with this scheme. However, a question arises: in one of the issues I saw that you support several homomorphic encryption schemes, including the BFV scheme. So my question is: how can I build a CNN with the BFV scheme?
If I am wrong and the CNN already works on BFV, is it possible to build it on TFHE?
P.S. (This is about the example docs/advanced_examples/ConvolutionalNeuralNetwork.ipynb)
|
closed
|
2024-10-03T12:56:03Z
|
2024-10-03T15:10:00Z
|
https://github.com/zama-ai/concrete-ml/issues/910
|
[] |
reaneling
| 2
|
coqui-ai/TTS
|
deep-learning
| 2,483
|
[Bug] pip install -e .[all,dev,notebooks] giving an error
|
### Describe the bug
In the instructions under "Install TTS", there is a command to install extras that is causing me an error when I run it:
`pip install -e .[all,dev,notebooks] # Select the relevant extras`
### To Reproduce
I tried each one of those options: [all, dev, notebooks]
```
pip install -e . all
pip install -e . dev
pip install -e . notebooks
```
They all came back with an error something like this:
```
steve@gpu2:~/workspace/TTS$ pip3.10 install -e . dev
Defaulting to user installation because normal site-packages is not writeable
Obtaining file:///home/steve/workspace/TTS
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
ERROR: Could not find a version that satisfies the requirement dev (from versions: none)
ERROR: No matching distribution found for dev
```
Am I entering this command wrong? Notice that there is no space between `.` and `[` in the original command listed in the README. I assumed there was a space in between. Not putting one there results in the following error:
```
steve@gpu2:~/workspace/TTS$ pip3.10 install -e .all
Defaulting to user installation because normal site-packages is not writeable
ERROR: .all is not a valid editable requirement. It should either be a path to a local project or a VCS URL (beginning with bzr+http, bzr+https, bzr+ssh, bzr+sftp, bzr+ftp, bzr+lp, bzr+file, git+http, git+https, git+ssh, git+git, git+file, hg+file, hg+http, hg+https, hg+ssh, hg+static-http, svn+ssh, svn+http, svn+https, svn+svn, svn+file).
```
It's been some time since I have worked in linux, so forgive me if this is obvious. Thanks!
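For what it's worth, the square brackets usually need to be quoted so the shell does not expand them as a glob pattern; the extras belong directly after the dot. A sketch of the intended invocation (assuming it is run from the TTS repo root):

```shell
# Quote the requirement so the shell does not treat the brackets as a glob;
# there is no space between "." and "[".
pip install -e ".[all,dev,notebooks]"
```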
### Expected behavior
I am not expecting an error message.
### Logs
_No response_
### Environment
```shell
I don't even have a bin directory yet.
```
### Additional context
_No response_
|
closed
|
2023-04-05T06:43:20Z
|
2023-11-26T10:28:38Z
|
https://github.com/coqui-ai/TTS/issues/2483
|
[
"bug",
"wontfix"
] |
steve3p0
| 4
|
taverntesting/tavern
|
pytest
| 213
|
Failure messages
|
We are trying a simple test. While verifying the response, I got this error in `verify`:
```
raise TestFailError("Test '{:s}' failed:\n{:s}".format(self.name, self._str_errors()), failures=self.errors)
tavern.util.exceptions.TestFailError: Test '23_03_Post P1 V1' failed:
E - Value mismatch in body: Type of returned data was different than expected (expected["0"]["biosStartAddress"] = '3022853549', actual["0"]["biosStartAddress"] = '3022853549')
##############################################################################
================================================= test session starts =================================================
platform win32 -- Python 3.5.0, pytest-4.0.0, py-1.7.0, pluggy-0.8.0 -- c:\python35-32\python.exe
cachedir: .pytest_cache
rootdir: E:\Tavern\BIOS\AttestationMeasurementService, inifile:
plugins: tavern-0.20.0
collected 1 item
test_Test.tavern.yaml::Test_23_ Put B1 P1 V1 and B1 P2 V1 should be allowed FAILED [100%]
====================================================== FAILURES =======================================================
E:\Tavern\BIOS\AttestationMeasurementService\test_Test.tavern.yaml::Test_23_ Put B1 P1 V1 and B1 P2 V1 should be allowed
c:\python35-32\lib\site-packages\_pytest\runner.py:211: in __init__
    self.result = func()
c:\python35-32\lib\site-packages\_pytest\runner.py:193: in <lambda>
    lambda: ihook(item=item, **kwds),
c:\python35-32\lib\site-packages\pluggy\hooks.py:284: in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
c:\python35-32\lib\site-packages\pluggy\manager.py:67: in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
c:\python35-32\lib\site-packages\pluggy\manager.py:61: in <lambda>
    firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
c:\python35-32\lib\site-packages\pluggy\callers.py:208: in _multicall
    return outcome.get_result()
c:\python35-32\lib\site-packages\pluggy\callers.py:80: in get_result
    raise ex[1].with_traceback(ex[2])
c:\python35-32\lib\site-packages\pluggy\callers.py:187: in _multicall
    res = hook_impl.function(*args)
c:\python35-32\lib\site-packages\_pytest\runner.py:121: in pytest_runtest_call
    item.runtest()
c:\python35-32\lib\site-packages\tavern\testutils\pytesthook.py:431: in runtest
    run_test(self.path, self.spec, self.global_cfg)
c:\python35-32\lib\site-packages\tavern\core.py:145: in run_test
    run_stage(sessions, stage, tavern_box, test_block_config)
c:\python35-32\lib\site-packages\tavern\core.py:180: in run_stage
    saved = v.verify(response)
c:\python35-32\lib\site-packages\tavern\_plugins\rest\response.py:207: in verify
    raise TestFailError("Test '{:s}' failed:\n{:s}".format(self.name, self._str_errors()), failures=self.errors)
E   tavern.util.exceptions.TestFailError: Test '23_03_Post P1 V1' failed:
E   - Value mismatch in body: Type of returned data was different than expected (expected["0"]["biosStartAddress"] = '3022853549', actual["0"]["biosStartAddress"] = '3022853549')
-------------------------------------------------- Captured log call --------------------------------------------------
base.py                   37 ERROR    Value mismatch in body: Type of returned data was different than expected (expected["0"]["biosStartAddress"] = '3022853549', actual["0"]["biosStartAddress"] = '3022853549')
=============================================== short test summary info ===============================================
FAIL test_Test.tavern.yaml::Test_23_ Put B1 P1 V1 and B1 P2 V1 should be allowed
============================================== 1 failed in 1.70 seconds ===============================================
```
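The two printed values look identical because one is the string '3022853549' (quoted in the test YAML) and the other is the JSON number 3022853549; tavern compares types as well as values. A minimal sketch of the mismatch (plain Python, names are illustrative):

```python
# Illustrative only: a quoted YAML value arrives as str, while the JSON
# response carries an int, so they print the same but compare unequal.
expected = "3022853549"  # quoted in the test YAML -> str
actual = 3022853549      # JSON number in the response -> int

print(expected == actual)            # False: types differ
print(str(expected) == str(actual))  # True: identical when printed
```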
|
closed
|
2018-11-27T08:22:34Z
|
2018-12-09T18:15:57Z
|
https://github.com/taverntesting/tavern/issues/213
|
[] |
gbhangale416
| 2
|
deeppavlov/DeepPavlov
|
nlp
| 1,599
|
Doesn't work with recent version of pytorch-crf
|
Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first.
Please enter all the information below, otherwise your issue may be closed without a warning.
**DeepPavlov version** (you can look it up by running `pip show deeppavlov`): 1.0.0
**Python version**: 3.9.5
**Operating system** (ubuntu linux, windows, ...): Windows 11
**Issue**: Error when trying a modified example from the readme.
**Content or a name of a configuration file**:
See below
**Command that led to error**:
```
model = build_model(deeppavlov.configs.ner.ner_collection3_bert, download=True)
```
**Error (including full traceback)**:
```
2022-11-10 18:35:28.686 INFO in 'deeppavlov.download'['download'] at line 138: Skipped http://files.deeppavlov.ai/v1/ner/ner_rus_bert_coll3_torch.tar.gz download because of matching hashes
[nltk_data] Downloading package punkt to
[nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package perluniprops to
[nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data...
[nltk_data] Package perluniprops is already up-to-date!
[nltk_data] Downloading package nonbreaking_prefixes to
[nltk_data] C:\Users\Ellsel\AppData\Roaming\nltk_data...
[nltk_data] Package nonbreaking_prefixes is already up-to-date!
2022-11-10 18:35:31.569 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 112: [loading vocabulary from C:\Users\Ellsel\.deeppavlov\models\ner_rus_bert_coll3_torch\tag.dict]
Traceback (most recent call last):
File "c:\Users\Ellsel\Desktop\Automation\conversation.py", line 4, in <module>
model = build_model(deeppavlov.configs.ner.ner_collection3_bert, download=True)
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\commands\infer.py", line 53, in build_model
component = from_params(component_config, mode=mode)
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\params.py", line 92, in from_params
obj = get_model(cls_name)
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\registry.py", line 74, in get_model
return cls_from_str(_REGISTRY[name])
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\core\common\registry.py", line 42, in cls_from_str
return getattr(importlib.import_module(module_name), cls_name)
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 855, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\models\torch_bert\torch_transformers_sequence_tagger.py", line 28, in <module>
from deeppavlov.models.torch_bert.crf import CRF
File "C:\Users\Ellsel\AppData\Local\Programs\Python\Python39\lib\site-packages\deeppavlov\models\torch_bert\crf.py", line 4, in <module>
from torchcrf import CRF as CRFbase
ModuleNotFoundError: No module named 'torchcrf'
```
`pip install pytorch-crf==0.4.0` needed.
|
closed
|
2022-11-10T18:07:51Z
|
2022-11-11T14:49:33Z
|
https://github.com/deeppavlov/DeepPavlov/issues/1599
|
[
"bug"
] |
claell
| 3
|
Nekmo/amazon-dash
|
dash
| 33
|
Native Openhab support
|
https://community.openhab.org/t/amazon-dash-button-things-wont-come-online-initializing/34438/60
|
closed
|
2018-02-20T20:18:00Z
|
2018-03-11T23:15:51Z
|
https://github.com/Nekmo/amazon-dash/issues/33
|
[
"enhancement"
] |
Nekmo
| 0
|
graphql-python/graphene-django
|
django
| 944
|
How to generate non nullable queries?
|
This is my model and schema:
```python
class AccountRegion(models.Model):
name = models.CharField(_('name'), max_length=128)
class AccountRegionType(DjangoObjectType):
class Meta:
model = AccountRegion
class Query(graphene.ObjectType):
account_regions = graphene.List(AccountRegionType)
def resolve_account_regions(self, info):
return AccountRegion.objects.all()
```
When generating the GraphQL schema using the `graphql_schema` management command, I get this output:
```graphql
schema {
query: Query
}
type AccountRegionType {
id: String!
name: String!
}
type Query {
accountRegions: [AccountRegionType]
}
```
What I need is to generate the query so it looks like this (notice the double `!`):
```graphql
...
type Query {
accountRegions: [AccountRegionType!]!
}
```
If I modify my query like this:
```python
class Query(graphene.ObjectType):
account_regions = graphene.List(AccountRegionType, required=True)
...
```
I'm able to generate this schema:
```graphql
...
type Query {
accountRegions: [AccountRegionType]!
}
```
But I'm not sure how to specify that within the `accountRegions` result array, the full `AccountRegionType` object will be present.
|
closed
|
2020-04-23T16:27:15Z
|
2020-05-09T15:47:23Z
|
https://github.com/graphql-python/graphene-django/issues/944
|
[] |
honi
| 2
|
RomelTorres/alpha_vantage
|
pandas
| 48
|
Example Code Doesn't Work in Python 3.6.1
|
Installed alpha_vantage from pip.
```python
from alpha_vantage.timeseries import TimeSeries
import matplotlib.pyplot as plt

ts = TimeSeries(key='my key was here', output_format='pandas')
data, meta_data = ts.get_intraday(symbol='MSFT', interval='1min', outputsize='full')
data['close'].plot()
plt.title('Intraday Times Series for the MSFT stock (1 min)')
plt.show()
```
"C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\Scripts\python.exe" "C:/Users/Doug/OneDrive/family/doug/work in progress/alphavantage/rolling returns/alpha_play.py"
Traceback (most recent call last):
File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\indexes\base.py", line 2525, in get_loc
return self._engine.get_loc(key)
File "pandas\_libs\index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1265, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1273, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'close'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/Doug/OneDrive/family/doug/work in progress/alphavantage/rolling returns/alpha_play.py", line 6, in <module>
data['close'].plot()
File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\frame.py", line 2139, in __getitem__
return self._getitem_column(key)
File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\frame.py", line 2146, in _getitem_column
return self._get_item_cache(key)
File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\generic.py", line 1842, in _get_item_cache
values = self._data.get(item)
File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\internals.py", line 3843, in get
loc = self.items.get_loc(item)
File "C:\Users\Doug\OneDrive\family\doug\work in progress\alphavantage\alphaenv\lib\site-packages\pandas\core\indexes\base.py", line 2527, in get_loc
return self._engine.get_loc(self._maybe_cast_indexer(key))
File "pandas\_libs\index.pyx", line 117, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 139, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\hashtable_class_helper.pxi", line 1265, in pandas._libs.hashtable.PyObjectHashTable.get_item
File "pandas\_libs\hashtable_class_helper.pxi", line 1273, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'close'
Process finished with exit code 1
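A likely cause (assumption, based on how alpha_vantage labels its pandas output): the intraday columns are named `'1. open'` through `'5. volume'`, so `data['close']` raises a `KeyError` while `data['4. close']` works. A stand-in sketch with plain pandas:

```python
# Stand-in frame mimicking alpha_vantage's labeled columns (assumed names);
# the plain 'close' key does not exist, which explains the KeyError above.
import pandas as pd

data = pd.DataFrame({"4. close": [90.1, 90.3]})  # stand-in for ts.get_intraday output

print("close" in data.columns)  # False -> data['close'] raises KeyError
print(list(data["4. close"]))   # [90.1, 90.3]
```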
|
closed
|
2018-02-21T15:22:47Z
|
2018-02-22T08:41:45Z
|
https://github.com/RomelTorres/alpha_vantage/issues/48
|
[] |
dougransom
| 2
|
numpy/numpy
|
numpy
| 28,256
|
TYP: `timedelta64.__divmod__` incorrect inference
|
### Describe the issue:
Using `divmod` widens the generic type of `timedelta64`. The last overload should probably use `Self` instead of `timedelta64`, or possibly add an overload for the timedelta case.
https://github.com/numpy/numpy/blob/6bc905859780c44193942ea2d0d297abcd691330/numpy/__init__.pyi#L4472-L4477
### Reproduce the code example:
```python
from datetime import timedelta as TD
from typing import assert_type
import numpy as np
td = np.timedelta64(1, "D")
assert_type(td, np.timedelta64[TD]) # ✅
n, remainder = divmod(td, td)
assert_type(remainder, np.timedelta64[TD]) # ❌ timedelta64[timedelta | int | None]
```
### Python and NumPy Versions:
2.2.2
3.13.1 (main, Dec 4 2024, 08:54:14) [GCC 11.4.0]
### Type-checker version and settings:
mypy 1.4.1
pyright 1.1.393
|
closed
|
2025-01-31T17:44:53Z
|
2025-02-01T19:40:53Z
|
https://github.com/numpy/numpy/issues/28256
|
[
"41 - Static typing"
] |
randolf-scholz
| 1
|
flasgger/flasgger
|
flask
| 285
|
__init__() missing 2 required positional arguments: 'schema_name_resolver' and 'spec'
|
Hello, I'm getting the error below. Am I missing anything?
```
../../../env36/lib64/python3.6/site-packages/flask_base/app.py:1: in <module>
from flasgger import Swagger, LazyString, LazyJSONEncoder
../../../env36/lib64/python3.6/site-packages/flasgger/__init__.py:8: in <module>
from .base import Swagger, Flasgger, NO_SANITIZER, BR_SANITIZER, MK_SANITIZER, LazyJSONEncoder # noqa
../../../env36/lib64/python3.6/site-packages/flasgger/base.py:37: in <module>
from .utils import extract_definitions
../../../env36/lib64/python3.6/site-packages/flasgger/utils.py:22: in <module>
from .marshmallow_apispec import SwaggerView
../../../env36/lib64/python3.6/site-packages/flasgger/marshmallow_apispec.py:13: in <module>
openapi_converter = openapi.OpenAPIConverter(openapi_version='2.0')
E TypeError: __init__() missing 2 required positional arguments: 'schema_name_resolver' and 'spec'
```
|
open
|
2019-02-10T15:14:05Z
|
2019-07-25T05:52:22Z
|
https://github.com/flasgger/flasgger/issues/285
|
[] |
wobeng
| 12
|
streamlit/streamlit
|
machine-learning
| 10,747
|
Add support for Jupyter widgets / ipywidgets
|
### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Jupyter Widgets are [interactive browser controls](https://github.com/jupyter-widgets/ipywidgets/blob/main/docs/source/examples/Index.ipynb) for Jupyter notebooks. Implement support for using ipywidgets elements in a Streamlit app.
### Why?
_No response_
### How?
```python
import ipywidgets as widgets
widget = st.ipywidgets(widgets.IntSlider())
st.write(widget.value)
```
### Additional Context
- Related to https://github.com/streamlit/streamlit/issues/10746
- Related discussion: https://discuss.streamlit.io/t/ipywidgets-wip/3870
|
open
|
2025-03-12T16:22:36Z
|
2025-03-18T10:31:37Z
|
https://github.com/streamlit/streamlit/issues/10747
|
[
"type:enhancement",
"feature:custom-components",
"type:possible-component"
] |
lukasmasuch
| 1
|
amidaware/tacticalrmm
|
django
| 1,714
|
Feature Request: Automated Task run at User Login
|
**Is your feature request related to a problem? Please describe.**
I have a task that reads the logon server of the agent and writes it to a custom field. When no user is logged in, the script doesn't post an answer to the custom field, so the field is empty and not visible on the Summary tab. Currently the script runs every day at 12:00; on clients a user is normally logged in, so this is OK, but when the script runs on a server agent the field is gone.
**Describe the solution you'd like**
I wish there were an option to run tasks automatically when a user signs in.
I think it should be possible: the automated tasks run via the Windows Task Scheduler, and it has an option for this!?
|
open
|
2023-12-22T15:43:08Z
|
2023-12-22T15:43:35Z
|
https://github.com/amidaware/tacticalrmm/issues/1714
|
[] |
maieredv-manuel
| 0
|
junyanz/pytorch-CycleGAN-and-pix2pix
|
pytorch
| 780
|
using CycleGAN for Chinese characters style transfer
|
Hi, thank you for sharing the code; this is very good work. Now I want to know whether CycleGAN can be used for Chinese character style transfer. As far as I know, zi2zi used pix2pix for this task. I need some suggestions. Thank you ^_^
|
closed
|
2019-09-27T08:57:25Z
|
2019-09-29T11:18:01Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/780
|
[] |
Danee-wawawa
| 2
|
tensorlayer/TensorLayer
|
tensorflow
| 1,076
|
Must the input shape of the deformable convolution op be fixed? If my input shape of w is not fixed, how can I use the deformable convolution function?
|


|
open
|
2020-04-16T03:06:29Z
|
2020-04-16T03:10:04Z
|
https://github.com/tensorlayer/TensorLayer/issues/1076
|
[] |
cjt222
| 0
|
aimhubio/aim
|
tensorflow
| 3,060
|
LOG.old files
|
## ❓Question
I am using an [aimlflow](https://github.com/aimhubio/aimlflow) watcher to sync aim with mlflow every minute, and I found that the repository size gets quite big (1 GB for a run with ~2e5 logged metrics) because of an abundance of LOG.old files inside the meta/chunks/run_id folder.
Are these necessary? Can I remove them or prevent them from being stored?
|
closed
|
2023-11-10T13:29:23Z
|
2023-11-11T08:00:25Z
|
https://github.com/aimhubio/aim/issues/3060
|
[
"type / question"
] |
roman-vygon
| 2
|