Dataset columns (type and observed range per column):

| column | type |
|---|---|
| url | string (62–66 chars) |
| repository_url | string (1 class) |
| labels_url | string (76–80 chars) |
| comments_url | string (71–75 chars) |
| events_url | string (69–73 chars) |
| html_url | string (50–56 chars) |
| id | int64 (377M–2.15B) |
| node_id | string (18–32 chars) |
| number | int64 (1–29.2k) |
| title | string (1–487 chars) |
| user | dict |
| labels | list |
| state | string (2 classes) |
| locked | bool (2 classes) |
| assignee | dict |
| assignees | list |
| comments | list |
| created_at | int64 (1.54k–1.71k) |
| updated_at | int64 (1.54k–1.71k) |
| closed_at | int64 (1.54k–1.71k, nullable) |
| author_association | string (4 classes) |
| active_lock_reason | string (2 classes) |
| body | string (0–234k chars, nullable) |
| reactions | dict |
| timeline_url | string (71–75 chars) |
| state_reason | string (3 classes) |
| draft | bool (2 classes) |
| pull_request | dict |
https://api.github.com/repos/huggingface/transformers/issues/26026
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26026/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26026/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26026/events
|
https://github.com/huggingface/transformers/issues/26026
| 1,885,478,061 |
I_kwDOCUB6oc5wYhyt
| 26,026 |
CUDA memory does not release with `del model` where .from_pretrained() loading model to multi-devices
|
{
"login": "Ricardo-L-C",
"id": 28146614,
"node_id": "MDQ6VXNlcjI4MTQ2NjE0",
"avatar_url": "https://avatars.githubusercontent.com/u/28146614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ricardo-L-C",
"html_url": "https://github.com/Ricardo-L-C",
"followers_url": "https://api.github.com/users/Ricardo-L-C/followers",
"following_url": "https://api.github.com/users/Ricardo-L-C/following{/other_user}",
"gists_url": "https://api.github.com/users/Ricardo-L-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ricardo-L-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ricardo-L-C/subscriptions",
"organizations_url": "https://api.github.com/users/Ricardo-L-C/orgs",
"repos_url": "https://api.github.com/users/Ricardo-L-C/repos",
"events_url": "https://api.github.com/users/Ricardo-L-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ricardo-L-C/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr @younesbelkada ",
"Hi @Ricardo-L-C \r\nZach can confirm :D from what I know there is a nice utility method in accelerated called `release_memory`: (zach, correct me if the usage of that method below is wrong)\r\n\r\n```python\r\nfrom accelerate.utils import release_memory\r\n\r\n...\r\n\r\nmodel = release_memory(model)\r\n```\r\n\r\n\r\n\r\n",
"Thanks @younesbelkada \r\nI have tried `release_memory` now, but it didn't work as expected, and the cuda Memory-Usage behaved the same as before.\r\nI check the implementation of this function, it shows\r\n```\r\nReleases memory from `objects` by setting them to `None` and calls `gc.collect()` and `torch.cuda.empty_cache()`.\r\nReturned objects should be reassigned to the same variables.\r\n```\r\nAnd that seems similar to my manual operation.",
"> Thanks @younesbelkada I have tried `release_memory` now, but it didn't work as expected, and the cuda Memory-Usage behaved the same as before. I check the implementation of this function, it shows\r\n> \r\n> ```\r\n> Releases memory from `objects` by setting them to `None` and calls `gc.collect()` and `torch.cuda.empty_cache()`.\r\n> Returned objects should be reassigned to the same variables.\r\n> ```\r\n> \r\n> And that seems similar to my manual operation.\r\n\r\nHow to solve it? I have met the same problem?"
] | 1,694 | 1,702 | 1,696 |
NONE
| null |
### System Info
- `transformers` version: 4.33.0
- Platform: Linux-5.4.119-1-tlinux4-0010.3-x86_64-with-glibc2.17
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: NO
- mixed_precision: fp16
- use_cpu: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, A10 * 2
- Using distributed or parallel set-up in script?: Yes, loading model to 2 cuda devices
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm loading [CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) with 2 * A10 gpus as follows:
```python
import torch
from transformers import AutoModel
model = AutoModel.from_pretrained("./", torch_dtype=torch.float16, device_map="auto").eval()
```
When loading finishes, the CUDA Memory-Usage looks like this:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A10 On | 00000000:6F:00.0 Off | 0 |
| 0% 40C P0 60W / 150W | 12647MiB / 22731MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A10 On | 00000000:70:00.0 Off | 0 |
| 0% 40C P0 63W / 150W | 13569MiB / 22731MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
and `model.hf_device_map` shows the following:
```python
{'embed_tokens': 0, 'layers.0': 0, 'layers.1': 0, 'layers.2': 0, 'layers.3': 0, 'layers.4': 0, 'layers.5': 0, 'layers.6': 0, 'layers.7': 0, 'layers.8': 0, 'layers.9': 0, 'layers.10': 0, 'layers.11': 0, 'layers.12': 0, 'layers.13': 0, 'layers.14': 0, 'layers.15': 0, 'layers.16': 0, 'layers.17': 0, 'layers.18': 0, 'layers.19': 1, 'layers.20': 1, 'layers.21': 1, 'layers.22': 1, 'layers.23': 1, 'layers.24': 1, 'layers.25': 1, 'layers.26': 1, 'layers.27': 1, 'layers.28': 1, 'layers.29': 1, 'layers.30': 1, 'layers.31': 1, 'layers.32': 1, 'layers.33': 1, 'layers.34': 1, 'layers.35': 1, 'layers.36': 1, 'layers.37': 1, 'layers.38': 1, 'layers.39': 1, 'norm': 1}
```
but when I want to reload this model (or any other model), the CUDA memory does not seem to be released by `del model`:
```
del model
# reloading
model = AutoModel.from_pretrained("./", torch_dtype=torch.float16, device_map="auto").eval()
```
the new CUDA Memory-Usage looks like this:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.141.03 Driver Version: 470.141.03 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A10 On | 00000000:6F:00.0 Off | 0 |
| 0% 42C P0 61W / 150W | 9667MiB / 22731MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A10 On | 00000000:70:00.0 Off | 0 |
| 0% 43C P0 64W / 150W | 9253MiB / 22731MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
and `model.hf_device_map` shows that some weights have been offloaded to the CPU:
```python
{'embed_tokens': 0, 'layers.0': 0, 'layers.1': 0, 'layers.2': 0, 'layers.3': 0, 'layers.4': 0, 'layers.5': 0, 'layers.6': 0, 'layers.7': 0, 'layers.8': 0, 'layers.9': 0, 'layers.10': 0, 'layers.11': 0, 'layers.12': 0, 'layers.13': 0, 'layers.14': 1, 'layers.15': 1, 'layers.16': 1, 'layers.17': 1, 'layers.18': 1, 'layers.19': 1, 'layers.20': 1, 'layers.21': 1, 'layers.22': 1, 'layers.23': 1, 'layers.24': 1, 'layers.25': 1, 'layers.26': 1, 'layers.27': 1, 'layers.28': 'cpu', 'layers.29': 'cpu', 'layers.30': 'cpu', 'layers.31': 'cpu', 'layers.32': 'cpu', 'layers.33': 'cpu', 'layers.34': 'cpu', 'layers.35': 'cpu', 'layers.36': 'cpu', 'layers.37': 'cpu', 'layers.38': 'cpu', 'layers.39': 'cpu', 'norm': 'cpu'}
```
Adding `torch.cuda.empty_cache()` between `del model` and the reload does not change what `nvidia-smi` reports.
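For reference, the manual release sequence being described amounts to the following (a minimal sketch of the steps named in this issue and in accelerate's `release_memory`, not a confirmed fix for the multi-GPU case):

```python
import gc

import torch

# Drop the last Python reference to the model so its parameter tensors become garbage.
del model
# Force a collection pass so the tensors (and any accelerate dispatch hooks) are freed now.
gc.collect()
# Ask PyTorch's caching allocator to hand unused cached blocks back to the driver,
# which is what nvidia-smi actually reports.
torch.cuda.empty_cache()
```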
### Expected behavior
When reloading, the whole model should fit onto the GPUs just as it did the first time, and after `del model` the Memory-Usage reported by `nvidia-smi` should drop substantially.
I think this may occur for models loaded across multiple GPUs with `device_map="auto"` or an explicit device-map dict.
When I load smaller models such as [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) onto a single A10 without `device_map="auto"`, everything behaves as expected.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26026/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26025
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26025/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26025/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26025/events
|
https://github.com/huggingface/transformers/pull/26025
| 1,885,432,812 |
PR_kwDOCUB6oc5Zwgc1
| 26,025 |
Punctuation fix
|
{
"login": "kwonmha",
"id": 8953934,
"node_id": "MDQ6VXNlcjg5NTM5MzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8953934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kwonmha",
"html_url": "https://github.com/kwonmha",
"followers_url": "https://api.github.com/users/kwonmha/followers",
"following_url": "https://api.github.com/users/kwonmha/following{/other_user}",
"gists_url": "https://api.github.com/users/kwonmha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kwonmha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kwonmha/subscriptions",
"organizations_url": "https://api.github.com/users/kwonmha/orgs",
"repos_url": "https://api.github.com/users/kwonmha/repos",
"events_url": "https://api.github.com/users/kwonmha/events{/privacy}",
"received_events_url": "https://api.github.com/users/kwonmha/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26025). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Adds missing punctuation so that one sentence ends and a new sentence starts properly.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26025/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26025",
"html_url": "https://github.com/huggingface/transformers/pull/26025",
"diff_url": "https://github.com/huggingface/transformers/pull/26025.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26025.patch",
"merged_at": 1694112892000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26024
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26024/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26024/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26024/events
|
https://github.com/huggingface/transformers/pull/26024
| 1,885,329,220 |
PR_kwDOCUB6oc5ZwJ9p
| 26,024 |
fix _resize_token_embeddings will set lm head size to 0 when enabled deepspeed zero3
|
{
"login": "kai01ai",
"id": 140378742,
"node_id": "U_kgDOCF4Cdg",
"avatar_url": "https://avatars.githubusercontent.com/u/140378742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kai01ai",
"html_url": "https://github.com/kai01ai",
"followers_url": "https://api.github.com/users/kai01ai/followers",
"following_url": "https://api.github.com/users/kai01ai/following{/other_user}",
"gists_url": "https://api.github.com/users/kai01ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kai01ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kai01ai/subscriptions",
"organizations_url": "https://api.github.com/users/kai01ai/orgs",
"repos_url": "https://api.github.com/users/kai01ai/repos",
"events_url": "https://api.github.com/users/kai01ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/kai01ai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @amyeroberts ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26024). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,698 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/25977
After resizing the input embeddings, the value of `new_embeddings.weight.shape[0]` is used as the new size for resizing the `lm_head`. However, when DeepSpeed ZeRO-3 is enabled this value is 0, because each rank only holds a partition of the weight. This PR addresses the issue by updating `new_num_tokens` explicitly.
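For context (this is not the exact change in the PR, which keeps `new_num_tokens` explicit): under ZeRO-3 every parameter is partitioned across ranks, so reading `new_embeddings.weight.shape[0]` locally returns 0; the full size is only visible after gathering the parameter. A sketch, assuming `new_embeddings` is the freshly resized embedding module:

```python
import deepspeed

# Each rank holds only a shard of the weight under ZeRO-3, so the local tensor is empty.
with deepspeed.zero.GatheredParameters(new_embeddings.weight, modifier_rank=None):
    # Inside the context the parameter is materialized in full, so the real
    # vocabulary size can be read and reused as new_num_tokens for the lm_head.
    new_num_tokens = new_embeddings.weight.shape[0]
```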
## Who can review?
@ArthurZucker, @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26024/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26024",
"html_url": "https://github.com/huggingface/transformers/pull/26024",
"diff_url": "https://github.com/huggingface/transformers/pull/26024.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26024.patch",
"merged_at": 1694077840000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26023
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26023/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26023/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26023/events
|
https://github.com/huggingface/transformers/pull/26023
| 1,885,286,924 |
PR_kwDOCUB6oc5ZwA6l
| 26,023 |
Fix CircleCI config
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,694 | 1,694 |
COLLABORATOR
| null |
# What does this PR do?
@ArthurZucker fixed (in #25995) my PR #25895, but that change had a minor issue.
The `startswith` method of Python's `str` requires its argument to be a `str` or a tuple of `str` (the error reads `startswith arg must be str or a tuple of str`).
Not a big deal, but currently we just lose the summary part of the report (only `failure_short.txt` is shown).
For example, see https://app.circleci.com/pipelines/github/huggingface/transformers/72336/workflows/87f9e28c-9b95-46a9-b306-36a9a371da2e/jobs/912355
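For illustration, the constraint can be reproduced in isolation (a standalone sketch with a made-up report line, not the CI code itself):

```python
line = "FAILED tests/test_foo.py::test_bar"  # hypothetical pytest summary line
prefixes = ["FAILED", "ERROR"]

print(line.startswith(tuple(prefixes)))  # True: a tuple of str is accepted

try:
    line.startswith(prefixes)  # a list is not
except TypeError as err:
    print(err)  # startswith first arg must be str or a tuple of str, not list
```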
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26023/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26023",
"html_url": "https://github.com/huggingface/transformers/pull/26023",
"diff_url": "https://github.com/huggingface/transformers/pull/26023.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26023.patch",
"merged_at": 1694091095000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26022
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26022/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26022/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26022/events
|
https://github.com/huggingface/transformers/pull/26022
| 1,885,204,054 |
PR_kwDOCUB6oc5ZvvPh
| 26,022 |
remove the logic of reduce gradient for XLA, since Accelerate will handle it automatically.
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26022). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,696 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As the title says.
See https://github.com/huggingface/accelerate/pull/1926 for more information.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
cc @muellerzr
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26022/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26022",
"html_url": "https://github.com/huggingface/transformers/pull/26022",
"diff_url": "https://github.com/huggingface/transformers/pull/26022.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26022.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26021
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26021/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26021/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26021/events
|
https://github.com/huggingface/transformers/pull/26021
| 1,885,189,819 |
PR_kwDOCUB6oc5ZvsMs
| 26,021 |
fix the deepspeed tests
|
{
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Change itself look OK. \r\n\r\nJust for my own understanding, does this mean the trainer is changing variables - `a` in this instance - in its local scope, even if not explicitly passed to the trainer or set in any config? ",
"> Just for my own understanding, does this mean the trainer is changing variables - a in this instance - in its local scope, even if not explicitly passed to the trainer or set in any config?\r\n\r\nThe default value of the parameter is 0 and post the training the value should change, the latest commit makes it explicit."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
1. Following https://github.com/huggingface/transformers/pull/25863, a test needs fixing. This PR does that.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26021/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26021",
"html_url": "https://github.com/huggingface/transformers/pull/26021",
"diff_url": "https://github.com/huggingface/transformers/pull/26021.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26021.patch",
"merged_at": 1694581014000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26020
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26020/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26020/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26020/events
|
https://github.com/huggingface/transformers/pull/26020
| 1,885,096,615 |
PR_kwDOCUB6oc5ZvYPi
| 26,020 |
Added HerBERT to README.md
|
{
"login": "Muskan011",
"id": 40476698,
"node_id": "MDQ6VXNlcjQwNDc2Njk4",
"avatar_url": "https://avatars.githubusercontent.com/u/40476698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muskan011",
"html_url": "https://github.com/Muskan011",
"followers_url": "https://api.github.com/users/Muskan011/followers",
"following_url": "https://api.github.com/users/Muskan011/following{/other_user}",
"gists_url": "https://api.github.com/users/Muskan011/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muskan011/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muskan011/subscriptions",
"organizations_url": "https://api.github.com/users/Muskan011/orgs",
"repos_url": "https://api.github.com/users/Muskan011/repos",
"events_url": "https://api.github.com/users/Muskan011/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muskan011/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26020). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
Resolves issue #26016 by adding HerBERT to README.md.
Documentation: @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26020/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26020",
"html_url": "https://github.com/huggingface/transformers/pull/26020",
"diff_url": "https://github.com/huggingface/transformers/pull/26020.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26020.patch",
"merged_at": 1694112705000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26019
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26019/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26019/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26019/events
|
https://github.com/huggingface/transformers/issues/26019
| 1,885,086,373 |
I_kwDOCUB6oc5wXCKl
| 26,019 |
Llama 2 + FSDP Auto Wrap Issue
|
{
"login": "jmif",
"id": 1000442,
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmif",
"html_url": "https://github.com/jmif",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"repos_url": "https://api.github.com/users/jmif/repos",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hello, can you try https://github.com/huggingface/accelerate/pull/1919 and let us know if that fixes the issue?",
"Installed via ` poetry add 'git+https://github.com/pacman100/accelerate.git#fix_fsdp_torch_compile_issue'\r\n` and I get\r\n\r\n```\r\n File \".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/trainer.py\", line 1672, in _inner_training_loop\r\n self.model = self.accelerator.prepare(self.model)self.model = self.accelerator.prepare(self.model)\r\n File \"virtualenvs/llm-training/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1221, in prepare\r\n is_type_fsdp = (type(obj) == FSDP) or isinstance(unwrapped_model, FSDP)\r\nUnboundLocalError: local variable 'unwrapped_model' referenced before assignment\r\n is_type_fsdp = (type(obj) == FSDP) or isinstance(unwrapped_model, FSDP)\r\n```",
"Hello, able to reproduce the issue. I believe that when the inner training loop errors out during the backward pass, the model which is wrapped in FSDP unit results in a corrupted state post which the following warning is given before error:\r\n```\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:2459: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:489.)\r\n if p.grad is not None:\r\n```\r\n\r\noutput logs for `roberta-large` model wherein i print the model before the `accelerator.prepare`. We can see that when it calls the inner loop again, the model is in corrupted state with the internal modules being wrapped in FSDP unit but the whole model not being wrapped in FSDP unit:\r\n```\r\nFullyShardedDataParallel(\r\n (_fsdp_wrapped_module): RobertaForSequenceClassification(\r\n (roberta): RobertaModel(\r\n (embeddings): FullyShardedDataParallel(\r\n (_fsdp_wrapped_module): RobertaEmbeddings(\r\n (word_embeddings): Embedding(50265, 1024, padding_idx=1)\r\n (position_embeddings): Embedding(514, 1024, padding_idx=1)\r\n (token_type_embeddings): Embedding(1, 1024)\r\n (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (encoder): RobertaEncoder(\r\n (layer): ModuleList(\r\n (0-23): 24 x RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): FullyShardedDataParallel(\r\n (_fsdp_wrapped_module): RobertaSelfAttention(\r\n (query): Linear(in_features=1024, out_features=1024, bias=True)\r\n (key): Linear(in_features=1024, out_features=1024, bias=True)\r\n (value): Linear(in_features=1024, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=1024, out_features=1024, bias=True)\r\n (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=1024, out_features=4096, bias=True)\r\n (intermediate_act_fn): GELUActivation()\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=4096, out_features=1024, bias=True)\r\n (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n )\r\n )\r\n (classifier): RobertaClassificationHead(\r\n (dense): Linear(in_features=1024, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n (out_proj): Linear(in_features=1024, out_features=2, bias=True)\r\n )\r\n )\r\n)\r\n***** Running training *****\r\n Num examples = 25,000\r\n Num Epochs = 10\r\n Instantaneous batch size per device = 512\r\n Training with DataParallel so batch size has been adjusted to: 256\r\n Total train batch size (w. 
parallel, distributed & accumulation) = 512\r\n Gradient Accumulation steps = 1\r\n Total optimization steps = 480\r\n Number of trainable parameters = 293,150,210\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:2459: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:489.)\r\n if p.grad is not None:\r\n 0%| | 0/240 [00:01<?, ?it/s]\r\n/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/nn/modules/module.py:2459: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:489.)\r\n if p.grad is not None:\r\nin inner training loop\r\nThe following columns in the training set don't have a corresponding argument in `RobertaForSequenceClassification.forward` and have been ignored: text. If text are not expected by `RobertaForSequenceClassification.forward`, you can safely ignore this message.\r\nRobertaForSequenceClassification(\r\n (roberta): RobertaModel(\r\n (embeddings): FullyShardedDataParallel(\r\n (_fsdp_wrapped_module): RobertaEmbeddings(\r\n (word_embeddings): Embedding(50265, 1024, padding_idx=1)\r\n (position_embeddings): Embedding(514, 1024, padding_idx=1)\r\n (token_type_embeddings): Embedding(1, 1024)\r\n (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (encoder): RobertaEncoder(\r\n (layer): ModuleList(\r\n (0-23): 24 x RobertaLayer(\r\n (attention): RobertaAttention(\r\n (self): FullyShardedDataParallel(\r\n (_fsdp_wrapped_module): RobertaSelfAttention(\r\n (query): Linear(in_features=1024, out_features=1024, bias=True)\r\n (key): Linear(in_features=1024, out_features=1024, bias=True)\r\n (value): Linear(in_features=1024, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (output): RobertaSelfOutput(\r\n (dense): Linear(in_features=1024, out_features=1024, bias=True)\r\n (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n (intermediate): RobertaIntermediate(\r\n (dense): Linear(in_features=1024, out_features=4096, bias=True)\r\n (intermediate_act_fn): GELUActivation()\r\n )\r\n (output): RobertaOutput(\r\n (dense): Linear(in_features=4096, out_features=1024, bias=True)\r\n (LayerNorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n )\r\n )\r\n )\r\n )\r\n (classifier): RobertaClassificationHead(\r\n (dense): Linear(in_features=1024, out_features=1024, bias=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n (out_proj): Linear(in_features=1024, out_features=2, bias=True)\r\n )\r\n)\r\nTraceback (most recent call 
last):\r\n File \"/home/sourab/temp/issue_26019.py\", line 82, in <module>\r\n train()\r\n File \"/home/sourab/temp/issue_26019.py\", line 78, in train\r\n trainer.train()\r\n File \"/home/sourab/transformers/src/transformers/trainer.py\", line 1557, in train\r\n return inner_training_loop(\r\n File \"/home/sourab/accelerate/src/accelerate/utils/memory.py\", line 136, in decorator\r\n return function(batch_size, *args, **kwargs)\r\n File \"/home/sourab/transformers/src/transformers/trainer.py\", line 1680, in _inner_training_loop\r\n self.model = self.accelerator.prepare(self.model)\r\n File \"/home/sourab/accelerate/src/accelerate/accelerator.py\", line 1273, in prepare\r\n result = tuple(\r\n File \"/home/sourab/accelerate/src/accelerate/accelerator.py\", line 1274, in <genexpr>\r\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/home/sourab/accelerate/src/accelerate/accelerator.py\", line 1084, in _prepare_one\r\n return self.prepare_model(obj, device_placement=device_placement)\r\n File \"/home/sourab/accelerate/src/accelerate/accelerator.py\", line 1446, in prepare_model\r\n model = FSDP(model, **kwargs)\r\n File \"/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py\", line 463, in __init__\r\n _auto_wrap(\r\n File \"/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/distributed/fsdp/_wrap_utils.py\", line 45, in _auto_wrap\r\n _check_nested_wrapping(root_module)\r\n File \"/home/sourab/miniconda3/envs/hf/lib/python3.10/site-packages/torch/distributed/fsdp/_wrap_utils.py\", line 107, in _check_nested_wrapping\r\n raise ValueError(\r\nValueError: FSDP auto wrapping requires modules to not already have FSDP applied but found roberta.embeddings in\r\nRobertaForSequenceClassification(\r\n```\r\n\r\nI don't see how this can be fixed as it is not touching the transformers/accelerate code when it gets corrupted.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Is it getting corrupted in torch land? Happy to open an issue elsewhere but I'm not familiar enough with the internals to understand the source :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"used AutoModelForSequenceClassification and \r\n```\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_4bit = True,\r\n bnb_4bit_qunat_type = \"nf4\",\r\n bnb_4bit_compute_dtype = torch.float16,\r\n)\r\n\r\n```\r\nfor loading the model and finetuned this model by using LoRA and saved as \"tuned_model\"\r\nwhile loading the model:\r\n```\r\nfrom transformers import pipeline\r\npipe = pipeline('text-classification',\r\n tuned_model,\r\n device_map=\"auto\")\r\n```\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n[<ipython-input-19-fd1b8ca97698>](https://localhost:8080/#) in <cell line: 2>()\r\n 1 from transformers import pipeline\r\n----> 2 pipe = pipeline('text-classification',tuned_model, device_map=\"auto\")\r\n\r\n5 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in <dictcomp>(.0)\r\n 3809 p: {\"safetensors_file\": f, \"weight_name\": p, \"dtype\": str_dtype}\r\n 3810 for p, f in weight_map.items()\r\n-> 3811 if param_device_map[p] == \"disk\"\r\n 3812 }\r\n 3813 \r\n\r\nKeyError: 'lm_head.weight'\r\n```\r\nCan any one suggest me how to load this tuned_model ?",
"Hi @Zuhashaik, this looks like it might be a different problem. Could you open a new issue please?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,702 | 1,702 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Based off https://github.com/huggingface/transformers/commit/300d6a4a62aac89b3f439110561d5a2268ffad9e
- One additional patch pulled in to fix an (I think) unrelated issue https://github.com/jmif/transformers/commit/2fe3989ebbb6729e560c6b438b4e1c7ef38412b4
- Installing from https://github.com/jmif/transformers/commit/2fe3989ebbb6729e560c6b438b4e1c7ef38412b4 will give you code I'm running
- Platform: Linux-6.2.0-1012-gcp-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'NO_PREFETCH', 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 1, 'fsdp_state_dict_type': 'FULL_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_use_orig_params': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help?
@pacman100 @muellerz
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import logging

from datasets import load_dataset
from transformers import (
    set_seed,
    TrainingArguments,
    Trainer,
    DataCollatorWithPadding,
    AutoTokenizer,
    AutoModelForSequenceClassification,
)

logger = logging.getLogger()


def train():
    set_seed(42)

    imdb = load_dataset("imdb")

    tokenizer = AutoTokenizer.from_pretrained(
        "meta-llama/Llama-2-7b-hf", max_length=4096
    )
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForSequenceClassification.from_pretrained(
        "meta-llama/Llama-2-7b-hf",
        num_labels=2,
        max_position_embeddings=1024,
        use_safetensors=True,
    )

    def tokenize(examples):
        return tokenizer(examples["text"], truncation=True)

    train_dataset = imdb["train"].map(tokenize, batched=True)

    auto_find_batch_size = True
    batch_size = 8
    gradient_accumulation_steps = 1

    training_args = TrainingArguments(
        run_name="test",
        output_dir="/tmp/test",
        per_device_train_batch_size=batch_size,
        per_device_eval_batch_size=batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        auto_find_batch_size=auto_find_batch_size,
        eval_delay=0,
        evaluation_strategy="epoch",
        learning_rate=3e-5,
        weight_decay=3e-6,
        adam_beta1=0.9,
        adam_beta2=0.999,
        adam_epsilon=1e-8,
        num_train_epochs=10,
        lr_scheduler_type="linear",
        warmup_steps=10,
        log_level="info",
        save_strategy="epoch",
        seed=43,
        fp16=True,
        dataloader_drop_last=True,
    )

    collator = DataCollatorWithPadding(tokenizer=tokenizer, padding="longest")

    trainer = Trainer(
        model,
        training_args,
        tokenizer=tokenizer,
        data_collator=collator,
        train_dataset=train_dataset,
    )
    trainer.train()


if __name__ == "__main__":
    train()
```
When I run this script with `auto_find_batch_size = True` I get the following error during model load:
```
Traceback (most recent call last):
File "llm_training/coarse_relevancy/accelerate/repro.py", line 82, in <module>
train()
File "llm_training/coarse_relevancy/accelerate/repro.py", line 78, in train
trainer.train()
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/trainer.py", line 1556, in train
return inner_training_loop(
File ".virtualenvs/llm-training/lib/python3.10/site-packages/accelerate/utils/memory.py", line 136, in decorator
return function(batch_size, *args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/trainer.py", line 1675, in _inner_training_loop
self.model = self.accelerator.prepare(self.model)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/accelerate/accelerator.py", line 1270, in prepare
result = tuple(
File ".virtualenvs/llm-training/lib/python3.10/site-packages/accelerate/accelerator.py", line 1271, in <genexpr>
self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/accelerate/accelerator.py", line 1083, in _prepare_one
return self.prepare_model(obj, device_placement=device_placement)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/accelerate/accelerator.py", line 1429, in prepare_model
model = FSDP(model, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 391, in __init__
_auto_wrap(auto_wrap_kwargs, fsdp_kwargs, FullyShardedDataParallel)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/distributed/fsdp/_wrap_utils.py", line 55, in _auto_wrap
raise ValueError(
ValueError: Expected model.layers.0 to NOT be FullyShardedDataParallel if using an `auto_wrap_policy`
```
However, when I set `auto_find_batch_size = False` I do not.
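For context on why the flag matters: with `auto_find_batch_size=True` the Trainer's inner training loop runs under accelerate's retry-on-OOM decorator (visible as `accelerate/utils/memory.py` in the traceback). When an OOM occurs, the loop is entered again and `accelerator.prepare()` is called on a model that may already contain FSDP-wrapped submodules from the failed attempt. A minimal sketch of that mechanism, with a hypothetical `run_training_loop` standing in for the real inner loop:

```python
from accelerate.utils import find_executable_batch_size


@find_executable_batch_size(starting_batch_size=8)
def run_training_loop(batch_size):
    # On torch.cuda.OutOfMemoryError the decorator halves batch_size and calls
    # this function again. If the previous attempt already wrapped part of the
    # model in FSDP before failing, the retry re-enters accelerator.prepare(model)
    # with a partially wrapped model, which is what trips the
    # "Expected model.layers.0 to NOT be FullyShardedDataParallel" check above.
    ...


run_training_loop()
```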
### Expected behavior
I should not receive the error. Note that I'm not yet testing on a machine large enough to fully train the model, so with `auto_find_batch_size = False` I actually hit a CUDA OOM; if I'm reading the stack trace right, this indicates I'm getting past the FSDP exception above. I'm including the full CUDA OOM stack trace from the `auto_find_batch_size = False` run in case it helps debug.
```
Traceback (most recent call last):
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 708, in forward
File "llm-training/llm_training/coarse_relevancy/accelerate/repro.py", line 82, in <module>
train()
layer_outputs = decoder_layer(
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
File "llm-training/llm_training/coarse_relevancy/accelerate/repro.py", line 78, in train
trainer.train()
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/trainer.py", line 1556, in train
return inner_training_loop(
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 748, in forward
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/trainer.py", line 1838, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/trainer.py", line 2693, in training_step
loss = self.compute_loss(model, inputs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/trainer.py", line 2718, in compute_loss
outputs = model(**inputs)
output = self._fsdp_wrapped_module(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 748, in forward
output = self._fsdp_wrapped_module(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 959, in forward
transformer_outputs = self.model(
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 437, in forward
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
hidden_states = self.mlp(hidden_states)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 708, in forward
layer_outputs = decoder_layer(
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py", line 748, in forward
output = self._fsdp_wrapped_module(*args, **kwargs)
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 220, in forward
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 424, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
down_proj = self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x))
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 333, in forward
query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
return forward_call(*args, **kwargs)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 114, in forward
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 186, in apply_rotary_pos_emb
q_embed = (q * cos) + (rotate_half(q) * sin)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/llama/modeling_llama.py", line 177, in rotate_half
return torch.cat((-x2, x1), dim=-1)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 40.00 MiB (GPU 0; 15.77 GiB total capacity; 15.15 GiB already allocated; 1.88 MiB free; 15.25 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
return F.linear(input, self.weight, self.bias)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 76.00 MiB (GPU 1; 15.77 GiB total capacity; 15.06 GiB already allocated; 37.88 MiB free; 15.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26019/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26018
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26018/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26018/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26018/events
|
https://github.com/huggingface/transformers/issues/26018
| 1,885,067,009 |
I_kwDOCUB6oc5wW9cB
| 26,018 |
`Helsinki-NLP/opus-*` models `decode` not removing metaspace character
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "tanaymeh",
"id": 26519539,
"node_id": "MDQ6VXNlcjI2NTE5NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanaymeh",
"html_url": "https://github.com/tanaymeh",
"followers_url": "https://api.github.com/users/tanaymeh/followers",
"following_url": "https://api.github.com/users/tanaymeh/following{/other_user}",
"gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions",
"organizations_url": "https://api.github.com/users/tanaymeh/orgs",
"repos_url": "https://api.github.com/users/tanaymeh/repos",
"events_url": "https://api.github.com/users/tanaymeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanaymeh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "tanaymeh",
"id": 26519539,
"node_id": "MDQ6VXNlcjI2NTE5NTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/26519539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanaymeh",
"html_url": "https://github.com/tanaymeh",
"followers_url": "https://api.github.com/users/tanaymeh/followers",
"following_url": "https://api.github.com/users/tanaymeh/following{/other_user}",
"gists_url": "https://api.github.com/users/tanaymeh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanaymeh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanaymeh/subscriptions",
"organizations_url": "https://api.github.com/users/tanaymeh/orgs",
"repos_url": "https://api.github.com/users/tanaymeh/repos",
"events_url": "https://api.github.com/users/tanaymeh/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanaymeh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"I would like to work on fixing this, @xenova!",
"@ArthurZucker I need some guidance here! I suppose this is not as simple as a regex replacement right? Should I contact the team members of Helsinki-NLP and get in touch with them for this or do you think there is a programmatical way to solve this?",
"Hey! Sure:\r\n1. Make sure that this is a bug, and taht the original tokenizer behaviour is not this one\r\n2. Look if this is only a `fast` issue (Meaning trying `use_fast = False` and check the outputs as well. \r\n3. Try to re-convert the model, maybe it was not correctly uploaded. Check the `convert_slow_tokenizer.py` in transformers to see the conversion. That is were you will find if `add_prefix_space` was used or not. Also check the `normalizers` and `post_processors` and `decoders`! \r\n\r\nCheers! ",
"1. I am quite sure this is a bug - I don't think it makes sense to keep these metaspace characters. See [here](https://github.com/huggingface/transformers/blob/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/llama/tokenization_llama.py#L246-L250) for example (`LlamaTokenizer` removes it). You can also search the codebase for `SPIECE_UNDERLINE` and in each case when decoding it is removed. And this is not present for the [`MarianTokenizer`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/marian/tokenization_marian.py#L268-L281) (which is what these models use).\r\n2. From what I can tell there is no fast version of the [tokenizer](https://github.com/huggingface/transformers/tree/18ee1fe76295239335bf1528c744fe1cfba21cc8/src/transformers/models/marian)\r\n ```py\r\n from transformers import AutoTokenizer\r\n tokenizer=AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-es', use_fast=False)\r\n tokenizer.decode(tokenizer(\"hello world\")['input_ids'])\r\n # outputs the same: '▁hello▁world</s>'\r\n ```\r\n3. See 2",
"Thanks a lot, @xenova and @ArthurZucker for your comments!\r\n\r\nFrom what I understand here, I need to change `MarianTokenizer` so that it removes metaspaces characters and then re-convert the `Helsinki-NLP/opus-*` models. Please correct me if I am wrong!\r\n",
"> and then re-convert the Helsinki-NLP/opus-* models.\r\n\r\nYou shouldn't need to re-convert any models. The vocab.json, merges.txt, and tokenizer_config.json will also all stay the same.\r\n\r\nAll you should need to do is update [`MarianTokenizer`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/marian/tokenization_marian.py) to replace the `▁` with ` `",
"Got it, thanks @xenova! I used the same logic as `LlamaTokenizer` but now instead of `▁hello▁world` as output, I get `hello▁world` which is still wrong.\r\n\r\nShould I use string replacement or regex to remove the metaspace character instead?",
"You could probably just do something similar to this:\r\n\r\nhttps://github.com/huggingface/transformers/blob/95b374952dc27d8511541d6f5a4e22c9ec11fb24/src/transformers/models/mbart/tokenization_mbart.py#L304-L307\r\n\r\nbut [here](https://github.com/huggingface/transformers/blob/95b374952dc27d8511541d6f5a4e22c9ec11fb24/src/transformers/models/marian/tokenization_marian.py#L281). e.g., \r\n`return out_string.strip()` → `return out_string.replace(SPIECE_UNDERLINE, \" \").strip()`\r\n\r\n@ArthurZucker Is this good practice for sentencepiece tokenizers? From what I can tell, `sp_model.decode_pieces` is [not used very often](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers++decode_pieces&type=code), so this decode block might be quite outdated itself.",
"Thanks for the comment @xenova!\r\n\r\nI did the following in my PR, is it acceptable too?\r\n\r\n```python\r\n def convert_tokens_to_string(self, tokens: List[str]) -> str:\r\n \"\"\"Uses source spm if _decode_use_source_tokenizer is True, and target spm otherwise\"\"\"\r\n if tokens[0].startswith(SPIECE_UNDERLINE):\r\n tokens[0] = tokens[0][1:]\r\n\r\n\t\t# Other code in between\r\n\r\n out_string += sp_model.decode_pieces(current_sub_tokens)\r\n out_string = out_string.replace(SPIECE_UNDERLINE, \" \")\r\n return out_string.strip()\r\n```"
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (False)
- Tensorflow version (GPU?): 2.12.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Running
```python
from transformers import AutoTokenizer
tokenizer=AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-es')
tokenizer.decode(tokenizer("hello world")['input_ids'])
```
produces `▁hello▁world</s>`.
```py
tokenizer.decode(tokenizer("hello world")['input_ids'], skip_special_tokens=True)
```
produces `▁hello▁world`
### Expected behavior
The metaspace character (`▁`) should be removed, and the returned string should be `hello world</s>` and `hello world`, respectively. This should be similar to:
```py
from transformers import AutoTokenizer
tokenizer=AutoTokenizer.from_pretrained('facebook/nllb-200-distilled-600M')
tokenizer.decode(tokenizer("hello world")['input_ids'], skip_special_tokens=True)
```
which produces `hello world`
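In the meantime, a user-side workaround is to strip the metaspace character from the decoded string manually. This is only a hedged sketch, assuming the standard SentencePiece underline character (U+2581):
```python
from transformers import AutoTokenizer

SPIECE_UNDERLINE = "\u2581"  # '▁', the SentencePiece metaspace character

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
decoded = tokenizer.decode(
    tokenizer("hello world")["input_ids"], skip_special_tokens=True
)  # currently '▁hello▁world'

# Replace the metaspace with a normal space until the tokenizer does this itself
cleaned = decoded.replace(SPIECE_UNDERLINE, " ").strip()
print(cleaned)  # 'hello world'
```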
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26018/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26017
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26017/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26017/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26017/events
|
https://github.com/huggingface/transformers/pull/26017
| 1,885,032,241 |
PR_kwDOCUB6oc5ZvKqu
| 26,017 |
Fix vilt config docstring parameter to match value in init
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
 Thanks for opening this PR!">
      "> Thanks for opening this PR! The fix should go the other way: the documentation is updated to reflect the default values\r\n\r\nOh yes, I got misled by the discussion [here](https://github.com/huggingface/transformers/issues/25639). But I checked [here](https://huggingface.co/dandelin/vilt-b32-mlm/blob/main/config.json), and the init is right; only the documentation needs to be changed.",
"@amyeroberts Done.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26017). All of your documentation changes will be reflected on that endpoint."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #25639
This PR fixes the ViLT config docstring so that the documented defaults match the values in `__init__`.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26017/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26017",
"html_url": "https://github.com/huggingface/transformers/pull/26017",
"diff_url": "https://github.com/huggingface/transformers/pull/26017.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26017.patch",
"merged_at": 1694112823000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26016
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26016/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26016/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26016/events
|
https://github.com/huggingface/transformers/issues/26016
| 1,884,951,749 |
I_kwDOCUB6oc5wWhTF
| 26,016 |
HerBERT missing from list of supported models
|
{
"login": "xenova",
"id": 26504141,
"node_id": "MDQ6VXNlcjI2NTA0MTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/26504141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xenova",
"html_url": "https://github.com/xenova",
"followers_url": "https://api.github.com/users/xenova/followers",
"following_url": "https://api.github.com/users/xenova/following{/other_user}",
"gists_url": "https://api.github.com/users/xenova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xenova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xenova/subscriptions",
"organizations_url": "https://api.github.com/users/xenova/orgs",
"repos_url": "https://api.github.com/users/xenova/repos",
"events_url": "https://api.github.com/users/xenova/events{/privacy}",
"received_events_url": "https://api.github.com/users/xenova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1834067346,
"node_id": "MDU6TGFiZWwxODM0MDY3MzQ2",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Documentation",
"name": "Documentation",
"color": "77cc3b",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"I would like to work on this. @xenova ",
"I have fixed this issue but my PR requires review."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
### System Info
n/a
### Who can help?
@stevhliu @MKhalusova
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
HerBERT has a model_doc (https://huggingface.co/docs/transformers/main/en/model_doc/herbert), but is not in the README:

### Expected behavior
It should be in the README :)
For example:
```
1. **[HerBERT](https://huggingface.co/docs/transformers/model_doc/herbert)** released with the paper [KLEJ: Comprehensive Benchmark for Polish Language Understanding](https://www.aclweb.org/anthology/2020.acl-main.111.pdf) by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik.
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26016/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26015
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26015/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26015/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26015/events
|
https://github.com/huggingface/transformers/issues/26015
| 1,884,709,490 |
I_kwDOCUB6oc5wVmJy
| 26,015 |
DeepSpeed stuck when training
|
{
"login": "karths8",
"id": 47289950,
"node_id": "MDQ6VXNlcjQ3Mjg5OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/47289950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karths8",
"html_url": "https://github.com/karths8",
"followers_url": "https://api.github.com/users/karths8/followers",
"following_url": "https://api.github.com/users/karths8/following{/other_user}",
"gists_url": "https://api.github.com/users/karths8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karths8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karths8/subscriptions",
"organizations_url": "https://api.github.com/users/karths8/orgs",
"repos_url": "https://api.github.com/users/karths8/repos",
"events_url": "https://api.github.com/users/karths8/events{/privacy}",
"received_events_url": "https://api.github.com/users/karths8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Cc @pacman100 ",
"Hello, could you do `rm -rf /root/.cache/torch_extensions/py310_cu118/` and then rerun? At times, when it was stuck indefinitely for me, removing the `torch_extensions` folder helped.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'deepspeed_config_file': '/workspace/deepspeed_config_stage3.json', 'zero3_init_flag': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
DeepSpeed config file:
```json
{
"bf16": {
"enabled": "auto"
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"offload_param": {
"device": "cpu",
"pin_memory": true
},
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 16777216,
"stage3_prefetch_bucket_size": 15099494.4,
"stage3_param_persistence_threshold": 40960,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 2000,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am using [this](https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da) script to finetune a CodeLlama-34B model with DeepSpeed (via accelerate) using QLoRA.
### Expected behavior
When the finetuning starts, it seems like everything loads up properly, but the process gets stuck indefinitely after it loads the `cpu_adam` op, as shown in the traceback below:
```
[2023-09-06 19:48:51,882] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[2023-09-06 19:48:59,440] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-09-06 19:48:59,448] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-09-06 19:48:59,455] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2023-09-06 19:48:59,474] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Loading checkpoint shards: 100%|█████████████████| 7/7 [00:58<00:00, 8.40s/it]
Loading checkpoint shards: 100%|█████████████████| 7/7 [01:05<00:00, 9.30s/it]
Using /root/.cache/torch_extensions/py310_cu118 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py310_cu118/cpu_adam/build.ninja...
Building extension module cpu_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module cpu_adam...
Time to load cpu_adam op: 2.4223172664642334 seconds
```
After this it just gets stuck. The memory occupied on the GPUs also stays constant and there are no processes running:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100 80G... On | 00000000:43:00.0 Off | 0 |
| N/A 36C P0 63W / 300W | 22454MiB / 81920MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A100 80G... On | 00000000:44:00.0 Off | 0 |
| N/A 37C P0 68W / 300W | 22462MiB / 81920MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA A100 80G... On | 00000000:C3:00.0 Off | 0 |
| N/A 41C P0 68W / 300W | 22462MiB / 81920MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA A100 80G... On | 00000000:C4:00.0 Off | 0 |
| N/A 44C P0 66W / 300W | 21932MiB / 81920MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26015/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26014
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26014/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26014/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26014/events
|
https://github.com/huggingface/transformers/issues/26014
| 1,884,627,715 |
I_kwDOCUB6oc5wVSMD
| 26,014 |
`NameError: name 'torch' is not defined` in version 4.33.0
|
{
"login": "neelkapadiaAWS",
"id": 126013127,
"node_id": "U_kgDOB4LOxw",
"avatar_url": "https://avatars.githubusercontent.com/u/126013127?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neelkapadiaAWS",
"html_url": "https://github.com/neelkapadiaAWS",
"followers_url": "https://api.github.com/users/neelkapadiaAWS/followers",
"following_url": "https://api.github.com/users/neelkapadiaAWS/following{/other_user}",
"gists_url": "https://api.github.com/users/neelkapadiaAWS/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neelkapadiaAWS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neelkapadiaAWS/subscriptions",
"organizations_url": "https://api.github.com/users/neelkapadiaAWS/orgs",
"repos_url": "https://api.github.com/users/neelkapadiaAWS/repos",
"events_url": "https://api.github.com/users/neelkapadiaAWS/events{/privacy}",
"received_events_url": "https://api.github.com/users/neelkapadiaAWS/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hi @neelkapadiaAWS \r\nThanks for the issue, I am quite surprised because torch should be imported here: https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/bitsandbytes.py#L12 if bitsandbytes is correctly installed. \r\nI also tried to run the attached script and did not managed to reproduce, here is the traceback I get\r\n\r\n```bash\r\nSetting ds_accelerator to cuda (auto detect)\r\n2023-09-11 07:46:51 | ERROR | stderr | INFO: Started server process [15112]\r\n2023-09-11 07:46:51 | ERROR | stderr | INFO: Waiting for application startup.\r\n2023-09-11 07:46:51 | ERROR | stderr | INFO: Application startup complete.\r\n2023-09-11 07:46:51 | ERROR | stderr | INFO: Uvicorn running on http://localhost:8088 (Press CTRL+C to quit)\r\n2023-09-11 07:46:55 | INFO | stdout | INFO: 127.0.0.1:37590 - \"GET / HTTP/1.1\" 404 Not Found\r\n2023-09-11 07:46:55 | INFO | stdout | INFO: 127.0.0.1:37590 - \"GET /favicon.ico HTTP/1.1\" 404 Not Found\r\n2023-09-11 07:47:02 | INFO | stdout | INFO: 127.0.0.1:49584 - \"GET / HTTP/1.1\" 404 Not Found\r\n2023-09-11 07:47:12 | INFO | stdout | INFO: 127.0.0.1:35054 - \"GET / HTTP/1.1\" 404 Not Found\r\n```",
"Hi @younesbelkada - thanks for the response. I might not have installed `bitsandbytes` explicitly when I faced this. Let me retry with that.\r\n\r\nThough, I also did not install it when I tried the same with the previous version of `transformers`. So I am not sure why I did not see this error with the old version.",
"Ok I see, let me know how it goes!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
With the latest transformers version, I am seeing the following error when loading a model via fast chat:
```
2023-09-06 19:18:46 | ERROR | stderr | During handling of the above exception, another exception occurred:
2023-09-06 19:18:46 | ERROR | stderr |
2023-09-06 19:18:46 | ERROR | stderr | Traceback (most recent call last):
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2023-09-06 19:18:46 | ERROR | stderr | return _run_code(code, main_globals, None,
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/runpy.py", line 86, in _run_code
2023-09-06 19:18:46 | ERROR | stderr | exec(code, run_globals)
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/fastchat/serve/model_worker.py", line 449, in <module>
2023-09-06 19:18:46 | ERROR | stderr | worker = ModelWorker(
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/fastchat/serve/model_worker.py", line 207, in __init__
2023-09-06 19:18:46 | ERROR | stderr | self.model, self.tokenizer = load_model(
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 269, in load_model
2023-09-06 19:18:46 | ERROR | stderr | model, tokenizer = adapter.load_model(model_path, kwargs)
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 1213, in load_model
2023-09-06 19:18:46 | ERROR | stderr | model, tokenizer = super().load_model(model_path, from_pretrained_kwargs)
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/fastchat/model/model_adapter.py", line 77, in load_model
2023-09-06 19:18:46 | ERROR | stderr | model = AutoModel.from_pretrained(
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
2023-09-06 19:18:46 | ERROR | stderr | return model_class.from_pretrained(
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3175, in from_pretrained
2023-09-06 19:18:46 | ERROR | stderr | ) = cls._load_pretrained_model(
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3563, in _load_pretrained_model
2023-09-06 19:18:46 | ERROR | stderr | new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py", line 753, in _load_state_dict_into_meta_model
2023-09-06 19:18:46 | ERROR | stderr | set_module_quantized_tensor_to_device(
2023-09-06 19:18:46 | ERROR | stderr | File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/integrations/bitsandbytes.py", line 58, in set_module_quantized_tensor_to_device
2023-09-06 19:18:46 | ERROR | stderr | if old_value.device == torch.device("meta") and device not in ["meta", torch.device("meta")] and value is None:
2023-09-06 19:18:46 | ERROR | stderr | NameError: name 'torch' is not defined
```
I was able to resolve this by going to an older version of transformers (4.31.0).
This seems to be a bug introduced in the latest version.
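For illustration only (a made-up minimal module, not the actual `transformers` source), the general pattern that produces this kind of `NameError` is a conditional import whose guard did not run, so later code references a name that was never bound:
```python
# Hypothetical sketch of the failure mode; all names here are illustrative.
def _bnb_available() -> bool:
    try:
        import bitsandbytes  # noqa: F401
        return True
    except ImportError:
        return False

if _bnb_available():
    import torch  # skipped entirely if bitsandbytes is missing or broken

def is_on_meta(param):
    # Raises NameError: name 'torch' is not defined when the import above was skipped
    return param.device == torch.device("meta")
```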
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Steps to reproduce the behavior:
1. Install fschat using `pip install fschat`
2. Install latest transformers using `pip install transformers`
3. Download any huggingface model to a local path
4. Run the following command:
```
python -m fastchat.serve.model_worker --model-path <MODEL_PATH> --device 'cuda' --num-gpus 8 --max-gpu-memory 19Gib --load-8bit --cpu-offloading --host $HOSTNAME --port 8080 --no-register > model_worker_logs.txt 2>&1 &
```
### Expected behavior
Expected behavior is that the model should be loaded successfully on the FastChat server.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26014/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26013
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26013/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26013/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26013/events
|
https://github.com/huggingface/transformers/pull/26013
| 1,884,559,865 |
PR_kwDOCUB6oc5ZtlQJ
| 26,013 |
Bump gitpython from 3.1.32 to 3.1.34 in /examples/research_projects/distillation
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, I won't notify you again about this release, but will get in touch when a new version is available. If you'd rather skip all updates until the next major or minor version, let me know by commenting `@dependabot ignore this major version` or `@dependabot ignore this minor version`.\n\nIf you change your mind, just re-open this PR and I'll resolve any conflicts on it.",
"@dependabot ignore this major version",
"OK, I won't notify you about version 3.x.x again, unless you re-open this PR."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.32 to 3.1.34.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.34 - fix resource leaking</h2>
<h2>What's Changed</h2>
<ul>
<li>util: close lockfile after opening successfully by <a href="https://github.com/skshetry"><code>@skshetry</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1639">gitpython-developers/GitPython#1639</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/skshetry"><code>@skshetry</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1639">gitpython-developers/GitPython#1639</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.33...3.1.34">https://github.com/gitpython-developers/GitPython/compare/3.1.33...3.1.34</a></p>
<h2>v3.1.33 - with security fix</h2>
<h2>What's Changed</h2>
<ul>
<li>WIP Quick doc by <a href="https://github.com/LeoDaCoda"><code>@LeoDaCoda</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1608">gitpython-developers/GitPython#1608</a></li>
<li>Partial clean up wrt mypy and black by <a href="https://github.com/bodograumann"><code>@bodograumann</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1617">gitpython-developers/GitPython#1617</a></li>
<li>Disable merge_includes in config writers by <a href="https://github.com/bodograumann"><code>@bodograumann</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1618">gitpython-developers/GitPython#1618</a></li>
<li>feat: full typing for "progress" parameter in Repo class by <a href="https://github.com/madebylydia"><code>@madebylydia</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1634">gitpython-developers/GitPython#1634</a></li>
<li>Fix CVE-2023-40590 by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1636">gitpython-developers/GitPython#1636</a></li>
<li><a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1566">#1566</a> Creating a lock now uses python built-in "open()" method to work arou… by <a href="https://github.com/HageMaster3108"><code>@HageMaster3108</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1619">gitpython-developers/GitPython#1619</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/LeoDaCoda"><code>@LeoDaCoda</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1608">gitpython-developers/GitPython#1608</a></li>
<li><a href="https://github.com/bodograumann"><code>@bodograumann</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1617">gitpython-developers/GitPython#1617</a></li>
<li><a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1636">gitpython-developers/GitPython#1636</a></li>
<li><a href="https://github.com/HageMaster3108"><code>@HageMaster3108</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1619">gitpython-developers/GitPython#1619</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.32...3.1.33">https://github.com/gitpython-developers/GitPython/compare/3.1.32...3.1.33</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/2a2ae776825f249a3bb7efd9b08650486226b027"><code>2a2ae77</code></a> prepare patch release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/47147406a5931e07641385f27e0e018927044c55"><code>4714740</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1639">#1639</a> from skshetry/close-lockfile</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/3e829eb516a60212bae81a6549361be4748e22d7"><code>3e829eb</code></a> util: close lockfile after opening successfully</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/f882cd8422fbb2517eebbf45824eb07951b948f3"><code>f882cd8</code></a> update instructions for how to create a release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/993f04588aa362fdce7c7f2f0848b5daedd8cb72"><code>993f045</code></a> prepare for next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/a1c472bd314f3b2cd3743f2c17bfcf36453c4784"><code>a1c472b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1619">#1619</a> from HageMaster3108/bugfix/use-python-builtin-open-m...</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/70924c4265c2d3629d978dd7bfc9ab1678d91e7d"><code>70924c4</code></a> Skip now permanently failing test with note on how to fix it</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/8b75434e2c8a082cdeb4971cc6f0ee2bafec45bc"><code>8b75434</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1636">#1636</a> from EliahKagan/cve-2023-40590</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/7611cd909b890b971d23bce3bd4244ad1c381f22"><code>7611cd9</code></a> Don't check form of version number</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/94e0fb0794b88b78ceed94ff18ee7d68587d890d"><code>94e0fb0</code></a> Add a unit test for CVE-2023-40590</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.32...3.1.34">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26013/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26013",
"html_url": "https://github.com/huggingface/transformers/pull/26013",
"diff_url": "https://github.com/huggingface/transformers/pull/26013.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26013.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26012
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26012/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26012/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26012/events
|
https://github.com/huggingface/transformers/pull/26012
| 1,884,557,749 |
PR_kwDOCUB6oc5Ztkx0
| 26,012 |
Bump gitpython from 3.1.32 to 3.1.34 in /examples/research_projects/decision_transformer
|
{
"login": "dependabot[bot]",
"id": 49699333,
"node_id": "MDM6Qm90NDk2OTkzMzM=",
"avatar_url": "https://avatars.githubusercontent.com/in/29110?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dependabot%5Bbot%5D",
"html_url": "https://github.com/apps/dependabot",
"followers_url": "https://api.github.com/users/dependabot%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/dependabot%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/dependabot%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dependabot%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dependabot%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/dependabot%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/dependabot%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/dependabot%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/dependabot%5Bbot%5D/received_events",
"type": "Bot",
"site_admin": false
}
|
[
{
"id": 1905493434,
"node_id": "MDU6TGFiZWwxOTA1NDkzNDM0",
"url": "https://api.github.com/repos/huggingface/transformers/labels/dependencies",
"name": "dependencies",
"color": "0366d6",
"default": false,
"description": "Pull requests that update a dependency file"
}
] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@dependabot ignore this major version",
"OK, I won't notify you about version 3.x.x again, unless you re-open this PR."
] | 1,694 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.32 to 3.1.34.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p>
<blockquote>
<h2>3.1.34 - fix resource leaking</h2>
<h2>What's Changed</h2>
<ul>
<li>util: close lockfile after opening successfully by <a href="https://github.com/skshetry"><code>@skshetry</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1639">gitpython-developers/GitPython#1639</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/skshetry"><code>@skshetry</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1639">gitpython-developers/GitPython#1639</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.33...3.1.34">https://github.com/gitpython-developers/GitPython/compare/3.1.33...3.1.34</a></p>
<h2>v3.1.33 - with security fix</h2>
<h2>What's Changed</h2>
<ul>
<li>WIP Quick doc by <a href="https://github.com/LeoDaCoda"><code>@LeoDaCoda</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1608">gitpython-developers/GitPython#1608</a></li>
<li>Partial clean up wrt mypy and black by <a href="https://github.com/bodograumann"><code>@bodograumann</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1617">gitpython-developers/GitPython#1617</a></li>
<li>Disable merge_includes in config writers by <a href="https://github.com/bodograumann"><code>@bodograumann</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1618">gitpython-developers/GitPython#1618</a></li>
<li>feat: full typing for "progress" parameter in Repo class by <a href="https://github.com/madebylydia"><code>@madebylydia</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1634">gitpython-developers/GitPython#1634</a></li>
<li>Fix CVE-2023-40590 by <a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1636">gitpython-developers/GitPython#1636</a></li>
<li><a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1566">#1566</a> Creating a lock now uses python built-in "open()" method to work arou… by <a href="https://github.com/HageMaster3108"><code>@HageMaster3108</code></a> in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1619">gitpython-developers/GitPython#1619</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/LeoDaCoda"><code>@LeoDaCoda</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1608">gitpython-developers/GitPython#1608</a></li>
<li><a href="https://github.com/bodograumann"><code>@bodograumann</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1617">gitpython-developers/GitPython#1617</a></li>
<li><a href="https://github.com/EliahKagan"><code>@EliahKagan</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1636">gitpython-developers/GitPython#1636</a></li>
<li><a href="https://github.com/HageMaster3108"><code>@HageMaster3108</code></a> made their first contribution in <a href="https://redirect.github.com/gitpython-developers/GitPython/pull/1619">gitpython-developers/GitPython#1619</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.32...3.1.33">https://github.com/gitpython-developers/GitPython/compare/3.1.32...3.1.33</a></p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/2a2ae776825f249a3bb7efd9b08650486226b027"><code>2a2ae77</code></a> prepare patch release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/47147406a5931e07641385f27e0e018927044c55"><code>4714740</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1639">#1639</a> from skshetry/close-lockfile</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/3e829eb516a60212bae81a6549361be4748e22d7"><code>3e829eb</code></a> util: close lockfile after opening successfully</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/f882cd8422fbb2517eebbf45824eb07951b948f3"><code>f882cd8</code></a> update instructions for how to create a release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/993f04588aa362fdce7c7f2f0848b5daedd8cb72"><code>993f045</code></a> prepare for next release</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/a1c472bd314f3b2cd3743f2c17bfcf36453c4784"><code>a1c472b</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1619">#1619</a> from HageMaster3108/bugfix/use-python-builtin-open-m...</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/70924c4265c2d3629d978dd7bfc9ab1678d91e7d"><code>70924c4</code></a> Skip now permanently failing test with note on how to fix it</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/8b75434e2c8a082cdeb4971cc6f0ee2bafec45bc"><code>8b75434</code></a> Merge pull request <a href="https://redirect.github.com/gitpython-developers/GitPython/issues/1636">#1636</a> from EliahKagan/cve-2023-40590</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/7611cd909b890b971d23bce3bd4244ad1c381f22"><code>7611cd9</code></a> Don't check form of version number</li>
<li><a href="https://github.com/gitpython-developers/GitPython/commit/94e0fb0794b88b78ceed94ff18ee7d68587d890d"><code>94e0fb0</code></a> Add a unit test for CVE-2023-40590</li>
<li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.32...3.1.34">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details>
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26012/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26012",
"html_url": "https://github.com/huggingface/transformers/pull/26012",
"diff_url": "https://github.com/huggingface/transformers/pull/26012.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26012.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/26010
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26010/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26010/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26010/events
|
https://github.com/huggingface/transformers/issues/26010
| 1,884,367,287 |
I_kwDOCUB6oc5wUSm3
| 26,010 |
"inputs" argument in pipelines not working if it's explicitly specified
|
{
"login": "Keredu",
"id": 29210848,
"node_id": "MDQ6VXNlcjI5MjEwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/29210848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Keredu",
"html_url": "https://github.com/Keredu",
"followers_url": "https://api.github.com/users/Keredu/followers",
"following_url": "https://api.github.com/users/Keredu/following{/other_user}",
"gists_url": "https://api.github.com/users/Keredu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Keredu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Keredu/subscriptions",
"organizations_url": "https://api.github.com/users/Keredu/orgs",
"repos_url": "https://api.github.com/users/Keredu/repos",
"events_url": "https://api.github.com/users/Keredu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Keredu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hey thanks for sharing. The bug is indeed here: https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/pipelines/text_classification.py#L159\r\n\r\nIt is introduced by the legacy layer.\r\n\r\nThe fix would be checking `args[0]` or ` inputs` but enforcing at least one exists.\r\n\r\nI'm up for a change if it helps (since the code would still be contained and clearly identifiable as legacy)",
"I just created a pull request with a possible solution: #26028 \r\n\r\nI checked the tests for text_classification but I didn't considered necessary to make any new test. If you think I should add or modify anything, tell me.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,694 | 1,697 | 1,697 |
NONE
| null |
### System Info
transformers==4.33.0
sentencepiece==0.1.99
torch==2.0.1
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
classifier = pipeline(
task="sentiment-analysis",
model="distilbert-base-uncased-finetuned-sst-2-english",
)
res = classifier(inputs=["I've been waiting for a HuggingFace course my whole life.",
"I hate this so much!"])
print(res)
```
When using the code from above, it throws an ``IndexError: tuple index out of range`` error. However, when running the following code, it works fine:
```python
from transformers import pipeline
classifier = pipeline(
task="sentiment-analysis",
model="distilbert-base-uncased-finetuned-sst-2-english",
)
res = classifier(["I've been waiting for a HuggingFace course my whole life.",
"I hate this so much!"])
print(res)
```
I have tried to debug it and it looks like in L156 of [text_classification.py](https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/pipelines/text_classification.py#L156), depending on if the "inputs" argument is specified, the list is passed in the args or in the kwargs:
1. If it's explicitly specified (i.e. passed as kwargs), when calling the super() method, it goes to L1077 in [base.py](https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/pipelines/base.py#L1077) and works fine, but then, when it returns to the text_classification.py file and tries to execute [L159](https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/pipelines/text_classification.py#L159) of text_classification.py (``isinstance(args[0], str)``), it throws ``IndexError: tuple index out of range`` because args is empty.
2. If it's not explicitly specified (i.e. passed as args), it works perfectly, since ``isinstance(args[0], str)`` on L159 of text_classification.py doesn't throw the error because we passed the inputs as args, so it's not empty. (A minimal sketch of how both call styles could be normalized is shown below.)
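For illustration only, here is a minimal, self-contained sketch (a hypothetical helper, not the actual pipeline code) of the kind of normalization the legacy layer could perform so that `args[0]` always exists, whether the texts are passed positionally or via `inputs=`:
```python
# Hypothetical sketch: promote the `inputs` keyword to a positional argument
# before any isinstance(args[0], str)-style check runs.
def normalize_inputs(*args, inputs=None, **kwargs):
    if not args and inputs is not None:
        args = (inputs,)  # keyword form becomes the positional form
    if not args:
        raise ValueError("Provide texts positionally or via `inputs=`.")
    return args, kwargs


args, kwargs = normalize_inputs(
    inputs=["I've been waiting for a HuggingFace course my whole life.", "I hate this so much!"]
)
print(args[0])  # the list of texts, so indexing args[0] no longer raises IndexError
```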
### Expected behavior
I would expect that both code snippets return the same output:
```python
[{'label': 'POSITIVE', 'score': 0.9598049521446228}, {'label': 'NEGATIVE', 'score': 0.9994558691978455}]
```
TBH, I'm not sure whether this is a bug or intended behaviour. In case it's a bug, I'd be glad to help fix it.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26010/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26009
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26009/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26009/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26009/events
|
https://github.com/huggingface/transformers/issues/26009
| 1,884,158,505 |
I_kwDOCUB6oc5wTfop
| 26,009 |
CUDA OOM with increased max input length
|
{
"login": "karths8",
"id": 47289950,
"node_id": "MDQ6VXNlcjQ3Mjg5OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/47289950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/karths8",
"html_url": "https://github.com/karths8",
"followers_url": "https://api.github.com/users/karths8/followers",
"following_url": "https://api.github.com/users/karths8/following{/other_user}",
"gists_url": "https://api.github.com/users/karths8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/karths8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/karths8/subscriptions",
"organizations_url": "https://api.github.com/users/karths8/orgs",
"repos_url": "https://api.github.com/users/karths8/repos",
"events_url": "https://api.github.com/users/karths8/events{/privacy}",
"received_events_url": "https://api.github.com/users/karths8/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @younesbelkada ",
"Hi @karths8 \r\n\r\nTo increase the sequence length for training, you might be interested to use `BetterTransformer` API from transformers & optimum to use memory efficient attention and increase your maximum sequence length for training.\r\n\r\nTo enable it, first install optimum package `pip install optimum`\r\nthen call `model.to_bettertransformer()` before passing the model to the SFTTrainer object. \r\nNote that `BetterTransformer` is not compatible when it comes to training with padd tokens, therefore you have to use `packing=True` (i.e. activate this arg: https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da#file-finetune_llama_v2-py-L104) in order to concatenate the input sentences all together, and separate them with an EOS token between each sample. That way no padding token is used during training and the model will attend to all tokens during training. \r\n\r\nYou can also push that to the next level by force-dispatching the `torch.scaled_dot_product_attention()` to call the Flash attention kernel using the trick presented here: https://twitter.com/younesbelkada/status/1696478075721302143?s=20 / you just need to add the `with torch.backends.cuda.sdp_kernel(enable_math=False, enable_mem_efficient=False, enable_flash=True):` context manager on top of this call: https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da#file-finetune_llama_v2-py-L224 \r\n\r\nYou can more generally read about BT+decoder models here: https://huggingface.co/docs/transformers/perf_infer_gpu_one#decoder-models\r\n\r\nIn the near future we're also trying to support Flash Attention-2 through a possible native integration https://github.com/huggingface/transformers/pull/25598 , if that PR gets merged, you'll be able to fine-tune your model on any dataset (including non-packed dataset) and with possibly an even larger sequence length\r\n\r\nHope this helps!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Closing as I consider the issue as solved, feel free to re-open if you have more questions",
"this fix is not compatible with DataCollatorForCompletionOnlyLM due to \"You can use the DataCollatorForCompletionOnlyLM to train your model on the generated prompts only. Note that this works only in the case when packing=False.\" I have the same problem using 'trl import SFTTrainer'"
] | 1,694 | 1,698 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I used [this](https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da) script to finetune a CodeLlama-34B model on a 4xA100 80GB setup. The PEFT method used here was QLoRA. When I train with a max input length of 4k, the training goes fine. But when I increase the max input length to 8k, the GPU usage is as follows: the first GPU builds up to about ~74GB and then goes OOM shortly after, while the other 3 GPUs each use about ~20GB. I am setting `device_map='auto'` when loading the model in my script.
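As a side note, one knob that can be tried with `device_map='auto'` is an explicit per-GPU memory cap so that weights are spread more evenly. The sketch below is only an illustration (the checkpoint id and the memory limits are placeholders, not values from my script), and it is not a confirmed fix for the activation-side OOM:
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Sketch only: the model id and the per-GPU limits are illustrative placeholders.
model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-34b-hf",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
    max_memory={0: "60GiB", 1: "60GiB", 2: "60GiB", 3: "60GiB"},
)
```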
### Expected behavior
Ideally I expect somewhat equal usage of GPU memory when training, and for it not to be bottlenecked by a single GPU's memory. I have no clue why so much memory is being used (especially since I am using QLoRA). Why is this happening and what can be done to remedy it?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26009/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26009/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26008
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26008/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26008/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26008/events
|
https://github.com/huggingface/transformers/pull/26008
| 1,884,047,542 |
PR_kwDOCUB6oc5Zr1V0
| 26,008 |
Remove Falcon from undocumented list
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,694 | 1,694 | 1,694 |
MEMBER
| null |
Because Falcon was in the library in stealth mode, its classes were added to the undocumented list. That can now be removed!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26008/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26008",
"html_url": "https://github.com/huggingface/transformers/pull/26008",
"diff_url": "https://github.com/huggingface/transformers/pull/26008.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26008.patch",
"merged_at": 1694011744000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26007
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26007/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26007/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26007/events
|
https://github.com/huggingface/transformers/pull/26007
| 1,884,034,177 |
PR_kwDOCUB6oc5ZryYF
| 26,007 |
Integrate AMD GPU in CI/CD environment
|
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@ydshieh @LysandreJik I think we are in a good shape for review and merging.\r\n\r\nWhat we did: \r\n- [x] Added custom runners with tags `docker-gpu`, `single-gpu`, `amd-gpu`, `mi210`\r\n- [x] Provide a custom PyTorch GPU Dockerfile for AMD dependencies\r\n- [x] Create a new `self-push-amd.yml` workflow file for everything related to AMD testing\r\n- [x] Validated the workflow against a simple BERT modifications \r\n\r\nWhat we cannot ensure as of today:\r\n- [ ] All the current tests being executed on main will be green 😅 ",
"Hi @mfuntowicz \r\n\r\nLooking at the runs in https://github.com/huggingface/transformers/actions/workflows/self-push-amd.yml, you will see no test job (Model test) is being triggered (as no test is being collected).\r\n\r\nAlso the slack report won't work as the tag is sitll using `single-amdgpu` instead of `single-gpu`.",
"@LysandreJik in case you want to take a final look :-)",
"Merge now so @mfuntowicz can show progress to AMD team today."
] | 1,694 | 1,695 | 1,695 |
MEMBER
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26007/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26007",
"html_url": "https://github.com/huggingface/transformers/pull/26007",
"diff_url": "https://github.com/huggingface/transformers/pull/26007.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26007.patch",
"merged_at": 1695214130000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26006
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26006/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26006/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26006/events
|
https://github.com/huggingface/transformers/pull/26006
| 1,883,774,249 |
PR_kwDOCUB6oc5Zq5oT
| 26,006 |
Falcon: fix revision propagation
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26006). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
MEMBER
| null |
Revision should be kept in the kwargs as these are being propagated throughout the method afterwards.
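For illustration only (a generic sketch, not the actual diff): the behavioural difference is between consuming the value and leaving it in place for the later calls that receive `**kwargs`.
```python
# Generic sketch, not the code touched by this PR.
kwargs = {"revision": "main", "trust_remote_code": True}

consumed = dict(kwargs).pop("revision")  # popping removes the key, so later **kwargs calls no longer see it
kept = kwargs.get("revision")            # reading keeps `revision` in kwargs, so it keeps propagating

print(consumed, kept, kwargs)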
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26006/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26006",
"html_url": "https://github.com/huggingface/transformers/pull/26006",
"diff_url": "https://github.com/huggingface/transformers/pull/26006.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26006.patch",
"merged_at": 1693999261000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26005
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26005/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26005/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26005/events
|
https://github.com/huggingface/transformers/issues/26005
| 1,883,624,619 |
I_kwDOCUB6oc5wRdSr
| 26,005 |
Issue while running Llama-2-7b-chat-gptq
|
{
"login": "bsurya27",
"id": 75432785,
"node_id": "MDQ6VXNlcjc1NDMyNzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/75432785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bsurya27",
"html_url": "https://github.com/bsurya27",
"followers_url": "https://api.github.com/users/bsurya27/followers",
"following_url": "https://api.github.com/users/bsurya27/following{/other_user}",
"gists_url": "https://api.github.com/users/bsurya27/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bsurya27/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bsurya27/subscriptions",
"organizations_url": "https://api.github.com/users/bsurya27/orgs",
"repos_url": "https://api.github.com/users/bsurya27/repos",
"events_url": "https://api.github.com/users/bsurya27/events{/privacy}",
"received_events_url": "https://api.github.com/users/bsurya27/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Maybe I should be changing my README examples to disable exllama support by default\r\n\r\nI'm unclear as to whether ExLlama kernels are meant to be fully supported via Transformers or not, or only when using AutoGPTQ directly?\r\n\r\n@fxmarty could you clarify?",
"Also @bsurya27 , could you share the full traceback as well as a full reproducer? \r\ncc @SunMarc as well as he recently worked on `exllama` support in transformers",
"Hi @bsurya27 , please install the optimum library from source and let me know if it works. Exllama are meant to be fully supported via Transformers. ",
"This issue was fixed in https://github.com/huggingface/optimum/pull/1329. We will do tomorrow a release of Optimum including the fix.",
"> Also @bsurya27 , could you share the full traceback as well as a full reproducer? cc @SunMarc as well as he recently worked on `exllama` support in transformers\r\n\r\nWhat is a full traceback? I'm sorry. I am newbie and am not aware of several terms.",
"> Hi @bsurya27 , please install the optimum library from source and let me know if it works. Exllama are meant to be fully supported via Transformers.\r\n\r\nI have installed the optimum library, but I am still facing the issue.",
"> Maybe I should be changing my README examples to disable exllama support by default\r\n> \r\n> I'm unclear as to whether ExLlama kernels are meant to be fully supported via Transformers or not, or only when using AutoGPTQ directly?\r\n> \r\n> @fxmarty could you clarify?\r\n\r\nActually, the example which was in the older README file worked pretty well, and I didn't get any kind of Runtime error, so I never used the code exllama_set_max_input_length(model,4096).\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,697 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.33.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (gpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help?
@fxmarty @TheBloke
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
While trying to run the llama-2-7b-chat-gptq model on google colab.
I have installed the required libraries and am working on a GPU runtime.
I have defined the model in almost the same way as shown in the README file on Hugging Face. The only things I have changed are the 'revision' argument from 'main' to 'gptq-4bit-128g-actorder_True' and 'device_map' from 'auto' to 'cuda:0'.
I try running the model.generate() function with the arguments set exactly to those in the README, and I encounter a runtime error -
RuntimeError: The temp_state buffer is too small in the exllama backend. Please call the exllama_set_max_input_length function to increase the buffer size. Example:
from auto_gptq import exllama_set_max_input_length
model = exllama_set_max_input_length(model, 4096)
So I added those two lines of code as suggested, and I face another error, this time while the model is being built.
The error I get is: 'LlamaForCausalLM' object has no attribute 'quantize_config'.
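For anyone hitting the same thing, below is a hedged sketch of the two workarounds discussed in this thread. The argument names are as of transformers 4.33 / auto-gptq at the time of writing and may differ in other versions, the repo id should match the checkpoint you actually use, and (per the comments) the buffer-resize path also needed the optimum fix from huggingface/optimum#1329:
```python
from transformers import AutoModelForCausalLM, GPTQConfig

# Option A: skip the exllama kernels entirely when loading the GPTQ checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7b-Chat-GPTQ",
    revision="gptq-4bit-128g-actorder_True",
    device_map="cuda:0",
    quantization_config=GPTQConfig(bits=4, disable_exllama=True),
)

# Option B (alternative to A, keeping exllama enabled): enlarge the exllama input
# buffer before calling generate(), as the error message suggests.
# from auto_gptq import exllama_set_max_input_length
# model = exllama_set_max_input_length(model, 4096)
```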
### Expected behavior
The expected behaviour for the code would be to run normally, present the user with an input box, and generate text based on that input.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26005/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26004
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26004/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26004/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26004/events
|
https://github.com/huggingface/transformers/issues/26004
| 1,883,438,463 |
I_kwDOCUB6oc5wQv1_
| 26,004 |
NLLB-200 Accelerate-based multi-GPU finetuning leads to 3x VRAM consumption as compared to single-GPU finetuning
|
{
"login": "molokanov50",
"id": 85157008,
"node_id": "MDQ6VXNlcjg1MTU3MDA4",
"avatar_url": "https://avatars.githubusercontent.com/u/85157008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molokanov50",
"html_url": "https://github.com/molokanov50",
"followers_url": "https://api.github.com/users/molokanov50/followers",
"following_url": "https://api.github.com/users/molokanov50/following{/other_user}",
"gists_url": "https://api.github.com/users/molokanov50/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molokanov50/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molokanov50/subscriptions",
"organizations_url": "https://api.github.com/users/molokanov50/orgs",
"repos_url": "https://api.github.com/users/molokanov50/repos",
"events_url": "https://api.github.com/users/molokanov50/events{/privacy}",
"received_events_url": "https://api.github.com/users/molokanov50/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr @pacman100 ",
"Hi @molokanov50, thanks for reporting. I found out that the problem is specific to this model (loading with `device_map` consume more vram as expected). Other models such as `t5-small` have comparable VRAM consumption in multi-GPU and single-GPU fine-tuning scenarios. I'll try to fix that. If you find the issue, feel free to do a PR ! ",
"Hello @molokanov50, if the model fits on a single GPU, I would advise you to use DDP without the `device_map` for faster training as it will use both the GPUs all the time instead of naive pipelining of `device_map`",
"Hello @pacman100, DDP unfortunately doesn't fit me because my overall motivation is to finetune an NLLB-200 model as large as `NLLB-200-3.3B`. I know from my experiments (see above) that a single-GPU finetuning of `NLLB-200-1.3B` requires 35...40 GB VRAM. This enables me to make an estimation that to finetune `NLLB-200-3.3B` (3x amount of parameters) I will need a single 105...120 GB GPU. We have no such GPUs at the moment, so `NLLB-200-3.3B` cannot fit any of available ones.\r\nThat is definitely the case when the model doesn't fit on a single GPU.\r\nThe 2-GPU parallelization of a smaller model such as `NLLB-200-1.3B` over smaller GPUs (such that the model cannot fit any single one) is necessary and informative; by this, we model the aforementioned case. Without this experiment, assembling a multi-GPU node with total 120 GB VRAM for `NLLB-200-3.3B` makes no sense. We need to make sure that pipeline-parallelized NLLB-200 training can eventually consume the same (summary) VRAM amount as in the single-GPU case (maybe, after some fixes).",
"Hi @SunMarc,\r\nAs for now, has it become possible to fix the problem?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,698 | 1,698 |
NONE
| null |
### System Info
- transformers version: 4.32.1
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 1.10.1+cu113 (True)
The versions of the following packages are not specified and, therefore, are the latest:
- sentencepiece
- sacrebleu
- sacremoses
- psutil
- nltk
- evaluate
- scikit-learn
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I run multi-GPU and, for comparison, single-GPU finetuning of `NLLB-200-distilled-600M` and `NLLB-200-1.3B`.
In multi-GPU finetuning, I'm always on 2x 24 GB GPUs (48 GB VRAM in total).
I successfully finetuned `NLLB-200-distilled-600M` on a single 12 GB GPU, as well as `NLLB-200-1.3B` on a 40 GB GPU. Thus, the VRAM resources in my multi-GPU configuration are obviously greater than in either single-GPU scenario.
To my surprise, `NLLB-200-distilled-600M` finetuning on 2 GPUs occupied 30 GB of VRAM, which is 3 times the memory required for single-GPU finetuning.
Also, for `NLLB-200-1.3B` finetuning on 2 GPUs I got CUDA OOM, i.e., 48 GB VRAM is insufficient to perform this finetuning. On the other hand, a 40 GB GPU is sufficient for a single-GPU finetuning.
This seems very strange, since in model parallelism only part of the model resides on each GPU, so the memory used on each GPU should be less than in a single-GPU scenario.
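As a quick sanity check (a hedged sketch; both calls only report parameters and buffers, not activations or optimizer state), the device placement and the raw weight footprint can be printed right after the `from_pretrained` call in the script below:
```python
# Hedged sketch: inspect how accelerate split the model and its raw weight footprint.
print(model.hf_device_map)                                   # populated when device_map="auto" is used
print(f"{model.get_memory_footprint() / 1024**3:.2f} GiB")   # parameters + buffers only
```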
My multi-GPU finetuning code:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, Seq2SeqTrainingArguments, Seq2SeqTrainer
import pandas as pd
from sklearn.model_selection import train_test_split
import torch
import torch.utils.data
from transformers import DataCollatorForSeq2Seq
import evaluate
import numpy as np
from argparse import ArgumentParser
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

modelPath = "facebook/nllb-200-distilled-600M"

tokenizer = AutoTokenizer.from_pretrained(modelPath)
model = AutoModelForSeq2SeqLM.from_pretrained(modelPath, device_map="auto")

parser = ArgumentParser()
parser.add_argument('--source-lang', type=str, default='eng_Latn')
parser.add_argument('--target-lang', type=str, default='rus_Cyrl')
parser.add_argument('--delimiter', type=str, default=';')
args = parser.parse_args()

dff = pd.read_csv('dataset/data.csv', sep=args.delimiter)

source = dff[args.source_lang].values.tolist()
target = dff[args.target_lang].values.tolist()

max = 512

X_train, X_val, y_train, y_val = train_test_split(source, target, test_size=0.2)

X_train_tokenized = tokenizer(X_train, padding=True, truncation=True, max_length=max, return_tensors="pt")
y_train_tokenized = tokenizer(y_train, padding=True, truncation=True, max_length=max, return_tensors="pt")
X_val_tokenized = tokenizer(X_val, padding=True, truncation=True, max_length=max, return_tensors="pt")
y_val_tokenized = tokenizer(y_val, padding=True, truncation=True, max_length=max, return_tensors="pt")


class ForDataset(torch.utils.data.Dataset):
    def __init__(self, inputs, targets):
        self.inputs = inputs
        self.targets = targets

    def __len__(self):
        return len(self.targets)

    def __getitem__(self, index):
        input_ids = torch.tensor(self.inputs["input_ids"][index]).squeeze()
        target_ids = torch.tensor(self.targets["input_ids"][index]).squeeze()
        return {"input_ids": input_ids, "labels": target_ids}


train_dataset = ForDataset(X_train_tokenized, y_train_tokenized)
test_dataset = ForDataset(X_val_tokenized, y_val_tokenized)

data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, return_tensors="pt")

metric = evaluate.load("sacrebleu")


def postprocess_text(preds, labels):
    preds = [pred.strip() for pred in preds]
    labels = [[label.strip()] for label in labels]
    return preds, labels


def compute_metrics(eval_preds):
    preds, labels = eval_preds
    if isinstance(preds, tuple):
        preds = preds[0]
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
    result = metric.compute(predictions=decoded_preds, references=decoded_labels)
    result = {"bleu": result["score"]}
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    result["gen_len"] = np.mean(prediction_lens)
    result = {k: round(v, 4) for k, v in result.items()}
    return result


training_args = Seq2SeqTrainingArguments(
    output_dir="mymodel",
    evaluation_strategy="epoch",
    save_strategy='epoch',
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=20,
    predict_with_generate=True,
    load_best_model_at_end=True
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

trainer.train()
trainer.save_model('finalmodel')
```
Text of the shell file used to run my code:
`python3 finetune.py --source-lang eng_Latn --target-lang rus_Cyrl --delimiter ';'`
[data.csv](https://github.com/huggingface/transformers/files/12534680/data.csv)
### Expected behavior
Comparable (approximately equal) summary VRAM consumption in multi-GPU and single-GPU finetuning scenarios.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26004/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26003
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26003/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26003/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26003/events
|
https://github.com/huggingface/transformers/pull/26003
| 1,883,247,042 |
PR_kwDOCUB6oc5ZpHC5
| 26,003 |
Update README.md
|
{
"login": "NinoRisteski",
"id": 95188570,
"node_id": "U_kgDOBax2Wg",
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinoRisteski",
"html_url": "https://github.com/NinoRisteski",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions",
"organizations_url": "https://api.github.com/users/NinoRisteski/orgs",
"repos_url": "https://api.github.com/users/NinoRisteski/repos",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"received_events_url": "https://api.github.com/users/NinoRisteski/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26003). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
fixed a typo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26003/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26003",
"html_url": "https://github.com/huggingface/transformers/pull/26003",
"diff_url": "https://github.com/huggingface/transformers/pull/26003.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26003.patch",
"merged_at": 1693994111000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26002
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26002/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26002/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26002/events
|
https://github.com/huggingface/transformers/pull/26002
| 1,883,232,490 |
PR_kwDOCUB6oc5ZpD28
| 26,002 |
🌐 [i18n-KO] Translated `whisper.md` to Korean
|
{
"login": "nuatmochoi",
"id": 46990061,
"node_id": "MDQ6VXNlcjQ2OTkwMDYx",
"avatar_url": "https://avatars.githubusercontent.com/u/46990061?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nuatmochoi",
"html_url": "https://github.com/nuatmochoi",
"followers_url": "https://api.github.com/users/nuatmochoi/followers",
"following_url": "https://api.github.com/users/nuatmochoi/following{/other_user}",
"gists_url": "https://api.github.com/users/nuatmochoi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nuatmochoi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nuatmochoi/subscriptions",
"organizations_url": "https://api.github.com/users/nuatmochoi/orgs",
"repos_url": "https://api.github.com/users/nuatmochoi/repos",
"events_url": "https://api.github.com/users/nuatmochoi/events{/privacy}",
"received_events_url": "https://api.github.com/users/nuatmochoi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"위의 리뷰 내용을 제외하고는, glossary와 문장 요소 등 다 잘 번역되어 있는 것 같습니다! LGTM!!\r\n좋은 번역 감사합니다!",
"Did you want to ping someone else for Korean review or should I merge this? ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26002). All of your documentation changes will be reflected on that endpoint.",
"@ArthurZucker \r\n> Did you want to ping someone else for Korean review or should I merge this?\r\n\r\nThe review is complete and I just hope to merge this PR.\r\n",
"Thanks for the contribution! "
] | 1,693 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
Translated the `whisper.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR?
@bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
May you please review this PR? @sgugger, @ArthurZucker, @eunseojo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26002/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26002/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26002",
"html_url": "https://github.com/huggingface/transformers/pull/26002",
"diff_url": "https://github.com/huggingface/transformers/pull/26002.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26002.patch",
"merged_at": 1695067961000
}
|
https://api.github.com/repos/huggingface/transformers/issues/26001
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26001/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26001/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26001/events
|
https://github.com/huggingface/transformers/issues/26001
| 1,883,172,387 |
I_kwDOCUB6oc5wPu4j
| 26,001 |
I'm loading llama2 on a single machine with dual GPUs using Accelerate, and it's giving me a CUDA error. It seems that tensors are unexpectedly being allocated to two different GPUs during runtime.
|
{
"login": "sev777",
"id": 38484725,
"node_id": "MDQ6VXNlcjM4NDg0NzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/38484725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sev777",
"html_url": "https://github.com/sev777",
"followers_url": "https://api.github.com/users/sev777/followers",
"following_url": "https://api.github.com/users/sev777/following{/other_user}",
"gists_url": "https://api.github.com/users/sev777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sev777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sev777/subscriptions",
"organizations_url": "https://api.github.com/users/sev777/orgs",
"repos_url": "https://api.github.com/users/sev777/repos",
"events_url": "https://api.github.com/users/sev777/events{/privacy}",
"received_events_url": "https://api.github.com/users/sev777/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"If you are using `device_map='auto'`, then you are already using `Accelerate`. So you can remove the 3 lines in your code that use `Accelerator`.\r\n\r\nRefer to: https://huggingface.co/docs/accelerate/usage_guides/big_modeling#using-transformers-diffusers-and-other-open-source-libraries",
"> If you are using `device_map='auto'`, then you are already using `Accelerate`. So you can remove the 3 lines in your code that use `Accelerator`.\r\n> \r\n> Refer to: https://huggingface.co/docs/accelerate/usage_guides/big_modeling#using-transformers-diffusers-and-other-open-source-libraries\r\n\r\nThanks, after remove the code, it will still report the same error.",
"cc @ArthurZucker ",
"Hey! The error you are getting seems to be `Index out of bound` which usually means you are feeding inputs that are greater than the size of the mbedding matrix. For this, I suspect that the `tokenizer` has extra tokens (for padding) but the embedding matrix was not properly resized. Make sure to check the inputs of the model, and if you try to run on cpu you'll get the full traceback with the positional embedding",
"> \r\ncc @ArthurZucker \r\n\r\nThanks, I test on CPU, and it can run correct.\r\nlike:\r\n```python\r\n#Output:\r\nLoading checkpoint shards: 100%|██████████████████| 3/3 [00:17<00:00, 5.69s/it]\r\n['<s> Q: What is the largest animal?\\nA: The blue whale is the largest animal in the world. It can grow to be 100 feet long and weigh 150 tons.']\r\n```\r\n\r\n**But when I test on GPU, I get the same wrong.**\r\n\r\nAnd follow your suspect, I find the 'position_ids' in [modeling_llama.py](https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/models/llama/modeling_llama.py#L711C1-L711C47) is on device: 0, and it's value is :\r\n```python\r\nposition_ids = tensor([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]], device='cuda:0')\r\n```\r\n\r\nBut, when it is transferred as an argument to the [LlamaDecoderLayer.forward](https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/models/llama/modeling_llama.py#L427), it is moved to device 1. and the value is:\r\n\r\n```python\r\nposition_ids = \r\ntensor([[-9223372034707292160, -9223372034707292160, -9223372034707292160,\r\n -9223372034707292160, -9223372034707292160, -9223372034707292160,\r\n -9223372034707292160, -9223372034707292160, -9223372034707292160,\r\n -9223372034707292160, -9223372034707292160, -9223372034707292160]],\r\n device='cuda:1')\r\n\r\nor sometimes:\r\nposition_ids = \r\ntensor([[4872363901336821031, 4872363901336821031, 4872363901336821031,\r\n 4872363901336821031, 4872363901336821031, 4872363901336821031,\r\n 0, 0, 0,\r\n 0, 0, 0]]\r\n```\r\n\r\nSo, the [apply_rotary_pos_emb](https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/models/llama/modeling_llama.py#L184) will wrong.\r\n\r\n**Furthermore, to my surprise, when I manually executed the code line by line in debug mode, it didn't seem to throw any errors.** \r\n",
"Here I giev an example how I run the code without wrong ~~.\r\n\r\nThe main code is:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import LlamaTokenizer, LlamaForCausalLM, GenerationConfig\r\n\r\nimport os\r\nos.environ['CUDA_VISIBLE_DEVICES']=\"0,1\"\r\n\r\n\r\nmodel_path = '/root/sev777/LMs/huggingface/llama2_13B'\r\n\r\ntokenizer = LlamaTokenizer.from_pretrained(model_path)\r\ntokenizer.pad_token_id=tokenizer.eos_token_id\r\ntokenizer.pad_token=tokenizer.eos_token\r\nmodel = LlamaForCausalLM.from_pretrained(\r\n model_path,device_map='auto'\r\n)\r\ngeneration_config = GenerationConfig(\r\n pad_token_id=tokenizer.eos_token_id,\r\n pad_token=tokenizer.eos_token,\r\n)\r\n\r\nprompt = 'I love :'\r\ninput_ids = tokenizer(prompt, return_tensors=\"pt\").input_ids.cuda(0)\r\ngeneration_output = model.generate(\r\n input_ids=input_ids, max_new_tokens=3,generation_config=generation_config\r\n)\r\n\r\nprint(tokenizer.batch_decode(generation_output))\r\n\r\n```\r\n\r\nAnd I added ‘break’ after generating a word in GenerationMixin.greedy_search().\r\n(It's not supposed to, but I added break so I could manually end the process as quickly as possible.)\r\n```python\r\n break #\r\n if this_peer_finished and not synced_gpus:\r\n break\r\n```\r\n\r\nThen I add a breakpoint at [decoder_layer](https://github.com/huggingface/transformers/blob/fa6107c97edf7cf725305a34735a57875b67d85e/src/transformers/models/llama/modeling_llama.py#L711C1-L711C47), and executed the code manually step by step.\r\n\r\nThen I get the output:\r\n```python\r\n['<s> I love :A']\r\n```\r\nSo, is it because my CPU and GPU are not synchronized, causing some variables to have incorrect data when switching between GPUs?\r\n\r\nThis phenomenon leads to: if I debug the code step by step, then position_ids will not experience the sudden value changes as mentioned above.\r\n",
"cc @SunMarc would be cool if you can have a look! ",
"Hi @sev777 , i'm unable to reproduce your error unfortunately. I would suggest updating/reinstalling cuda. Let us know if there is any progress on your end. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,700 | 1,700 |
NONE
| null |
### System Info
I'm loading llama2 on a single machine with dual GPUs using Accelerate, and it's giving me a CUDA error. It seems that tensors are unexpectedly being allocated to two different GPUs during runtime.
My computer specs:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7282 16-Core Processor
Virtualization: AMD-V
GPU: A100 40G
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM
import os
os.environ['CUDA_VISIBLE_DEVICES']="0,1"
model_path = '/root/sev777/LMs/huggingface/llama2_13B'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, device_map='auto'
)
prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda(0)
generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=32
)
print(generation_output)
```
My log is:
```
Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards: 33%|███▎ | 1/3 [00:10<00:20, 10.00s/it]
Loading checkpoint shards: 67%|██████▋ | 2/3 [00:19<00:09, 9.64s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:25<00:00, 7.87s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:25<00:00, 8.39s/it]
Traceback (most recent call last):
File "mt.py", line 20, in <module>
generation_output = model.generate(
File "/root/sev777/miniconda3/envs/llama_adapter/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/root/sev777/miniconda3/envs/llama_adapter/lib/python3.8/site-packages/transformers/generation/utils.py", line 1538, in generate
../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [0,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [1,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [2,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
...
../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [32,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0 ], thread: [33return self.greedy_search(,0
,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu File "/root/sev777/miniconda3/envs/llama_adapter/lib/python3.8/site-packages/transformers/generation/utils.py", line 2362, in greedy_search
:91: operator(): block: [0,0,0], thread: [34,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [0,0,0], thread: [35,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
...
../aten/src/ATen/native/cuda/IndexKernel.cu:91: operator(): block: [1,0,0], thread: [127,0,0] Assertion `index >= -sizes[i] && index < sizes[i] && "index out of bounds"` failed.
return F.linear(input, self.weight, self.bias)
RuntimeError: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
```
"Is this because my GPU and CPU are conflicting, or is it due to other reasons?"
Thanks!
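For completeness, here is a hedged sanity check (a sketch, not a fix) to rule out out-of-range token ids and to see how the layers were split across the two GPUs:
```python
# Hedged sketch: run right before model.generate().
assert input_ids.max().item() < model.config.vocab_size, "token id outside the embedding table"
print(model.hf_device_map)  # populated when loading with device_map='auto'
```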
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm loading llama2 on a single machine with dual GPUs using Accelerate, and it's giving me a CUDA error. It seems that tensors are unexpectedly being allocated to two different GPUs during runtime.
### Expected behavior
I'm loading llama2 on a single machine with dual GPUs using Accelerate, and it's giving me a CUDA error. It seems that tensors are unexpectedly being allocated to two different GPUs during runtime.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26001/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/26000
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/26000/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/26000/comments
|
https://api.github.com/repos/huggingface/transformers/issues/26000/events
|
https://github.com/huggingface/transformers/issues/26000
| 1,883,130,854 |
I_kwDOCUB6oc5wPkvm
| 26,000 |
AutoModelForSequenceClassification + GPT2 + Accelerate + FSDP fails to load pretrained model
|
{
"login": "jmif",
"id": 1000442,
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmif",
"html_url": "https://github.com/jmif",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"repos_url": "https://api.github.com/users/jmif/repos",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hello, this has been resolved by PR https://github.com/huggingface/transformers/pull/25820",
"Confirmed!"
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.32.1
- Platform: Linux-6.2.0-1012-gcp-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: fp16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'NO_PREFETCH', 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 1, 'fsdp_state_dict_type': 'FULL_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_use_orig_params': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: FSDP
### Who can help?
@muellerz @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to fine tune GPT2 using accelerate + HF training + FSDP with the intention of training larger GPT2 models later when this is working. With this training script:
```python
import logging
import tempfile

from datasets import load_dataset
from transformers import (
    set_seed,
    TrainingArguments,
    Trainer,
    DataCollatorWithPadding,
    AutoTokenizer,
    AutoModelForSequenceClassification,
)

logger = logging.getLogger()


def train(output_dir: str):
    set_seed(42)

    imdb = load_dataset("imdb")

    tokenizer = AutoTokenizer.from_pretrained("gpt2", max_length=1024)
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForSequenceClassification.from_pretrained(
        "gpt2",
        num_labels=2,
        max_position_embeddings=1024,
        use_safetensors=True,
    )

    def tokenize(examples):
        return tokenizer(examples["text"], truncation=True)

    train_dataset = imdb["train"].map(tokenize, batched=True)

    training_args = TrainingArguments(
        output_dir=output_dir,
        eval_delay=0,
        evaluation_strategy="epoch",
        learning_rate=3e-5,
        weight_decay=3e-6,
        adam_beta1=0.9,
        adam_beta2=0.999,
        adam_epsilon=1e-8,
        num_train_epochs=10,
        lr_scheduler_type="linear",
        warmup_steps=10,
        log_level="info",
        save_strategy="epoch",
        seed=43,
        fp16=True,
        dataloader_drop_last=True,
    )

    collator = DataCollatorWithPadding(tokenizer=tokenizer, padding="max_length")

    trainer = Trainer(
        model,
        training_args,
        tokenizer=tokenizer,
        data_collator=collator,
        train_dataset=train_dataset,
    )

    trainer.train()


def main():
    with tempfile.TemporaryDirectory() as d:
        train(d)


if __name__ == "__main__":
    main()
```
I get this error:
```
File "repro.py", line 73, in main
train(d)
File "repro.py", line 27, in train
model = AutoModelForSequenceClassification.from_pretrained(
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 516, in from_pretrained
return model_class.from_pretrained(
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3091, in from_pretrained
) = cls._load_pretrained_model(
File ".virtualenvs/llm-training/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3492, in _load_pretrained_model
set_module_tensor_to_device(
File ".virtualenvs/llm-training/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 265, in set_module_tensor_to_device
new_module = getattr(module, split)
File ".virtualenvs/llm-training/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
```
```
Here's how I run:
```
python -m accelerate.commands.launch --config_file config/accelerate.yaml repro.py
```
### Expected behavior
When I run this without accelerate (ie run the repro.py script directly) I don't run into the load error.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/26000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/26000/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25999
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25999/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25999/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25999/events
|
https://github.com/huggingface/transformers/pull/25999
| 1,882,754,273 |
PR_kwDOCUB6oc5Znafz
| 25,999 |
Update training_args.py - addition of self.distributed_state when using XPU
|
{
"login": "Serizao",
"id": 11671895,
"node_id": "MDQ6VXNlcjExNjcxODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/11671895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Serizao",
"html_url": "https://github.com/Serizao",
"followers_url": "https://api.github.com/users/Serizao/followers",
"following_url": "https://api.github.com/users/Serizao/following{/other_user}",
"gists_url": "https://api.github.com/users/Serizao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Serizao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Serizao/subscriptions",
"organizations_url": "https://api.github.com/users/Serizao/orgs",
"repos_url": "https://api.github.com/users/Serizao/repos",
"events_url": "https://api.github.com/users/Serizao/events{/privacy}",
"received_events_url": "https://api.github.com/users/Serizao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr @pacman100 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25999). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# addition of `self.distributed_state` when using XPU
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
# Fix
In the base code, `self.distributed_state` does not appear to be defined, which causes the script to crash when it is used on lines 1813 and 1814 in my case.
I therefore propose an update that defines this variable when using an XPU.
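For illustration, a minimal sketch of the kind of change this is aiming at (not the exact diff; the helper name and placement are assumptions, `PartialState` comes from `accelerate` and `is_torch_xpu_available` from `transformers.utils`):
```python
from accelerate import PartialState
from transformers.utils import is_torch_xpu_available

def ensure_distributed_state(args):
    """Sketch: give TrainingArguments a distributed_state when running on an XPU,
    so later code that dereferences it (e.g. lines 1813-1814) does not crash."""
    if is_torch_xpu_available() and not hasattr(args, "distributed_state"):
        args.distributed_state = PartialState()
```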
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25999/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25999",
"html_url": "https://github.com/huggingface/transformers/pull/25999",
"diff_url": "https://github.com/huggingface/transformers/pull/25999.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25999.patch",
"merged_at": 1694629306000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25998
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25998/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25998/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25998/events
|
https://github.com/huggingface/transformers/issues/25998
| 1,882,653,269 |
I_kwDOCUB6oc5wNwJV
| 25,998 |
Resume from checkpoint functionality of the PyTorch example has some bugs and needs to be fixed.
|
{
"login": "MingxuanZhangPurdue",
"id": 74074145,
"node_id": "MDQ6VXNlcjc0MDc0MTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/74074145?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MingxuanZhangPurdue",
"html_url": "https://github.com/MingxuanZhangPurdue",
"followers_url": "https://api.github.com/users/MingxuanZhangPurdue/followers",
"following_url": "https://api.github.com/users/MingxuanZhangPurdue/following{/other_user}",
"gists_url": "https://api.github.com/users/MingxuanZhangPurdue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MingxuanZhangPurdue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MingxuanZhangPurdue/subscriptions",
"organizations_url": "https://api.github.com/users/MingxuanZhangPurdue/orgs",
"repos_url": "https://api.github.com/users/MingxuanZhangPurdue/repos",
"events_url": "https://api.github.com/users/MingxuanZhangPurdue/events{/privacy}",
"received_events_url": "https://api.github.com/users/MingxuanZhangPurdue/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This looks like an accelerate problem rather than a `transformers` problem! Could you take a look @muellerzr @pacman100 ?",
"The latter of which is already done, but I'll open a PR to fix the first part of printing the right path"
] | 1,693 | 1,697 | 1,697 |
NONE
| null |
### System Info
This example, i.e., [transformers/examples/pytorch/text-classification/run_glue_no_trainer.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue_no_trainer.py), has some bugs related to resuming from a checkpoint. To be more specific, below is the original code, taken from lines 520-521 and lines 530-534:
```
accelerator.print(f"Resumed from checkpoint: {checkpoint_path}")
accelerator.load_state(path)
```
```
# need to multiply `gradient_accumulation_steps` to reflect real steps
resume_step = int(training_difference.replace("step_", "")) * args.gradient_accumulation_steps
starting_epoch = resume_step // len(train_dataloader)
resume_step -= starting_epoch * len(train_dataloader)
completed_steps = resume_step // args.gradient_accumulation_step
```
However, it should be,
```
accelerator.print(f"Resumed from checkpoint: {checkpoint_path}")
accelerator.load_state(checkpoint_path)
```
where we should load from ``checkpoint_path`` instead of ``path``, and
```
# need to multiply `gradient_accumulation_steps` to reflect real steps
resume_step = int(training_difference.replace("step_", "")) * args.gradient_accumulation_steps
starting_epoch = resume_step // len(train_dataloader)
completed_steps = resume_step // args.gradient_accumulation_steps
resume_step -= starting_epoch * len(train_dataloader)
```
we should move ``completed_steps = resume_step // args.gradient_accumulation_steps`` up, before we change the ``resume_step``.
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See above
### Expected behavior
I have provided the bugs and how to fix them, thanks a lot!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25998/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25997
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25997/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25997/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25997/events
|
https://github.com/huggingface/transformers/issues/25997
| 1,882,516,155 |
I_kwDOCUB6oc5wNOq7
| 25,997 |
upscaling segmentation mask OneFormer
|
{
"login": "nikky4D",
"id": 7451106,
"node_id": "MDQ6VXNlcjc0NTExMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7451106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nikky4D",
"html_url": "https://github.com/nikky4D",
"followers_url": "https://api.github.com/users/nikky4D/followers",
"following_url": "https://api.github.com/users/nikky4D/following{/other_user}",
"gists_url": "https://api.github.com/users/nikky4D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nikky4D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nikky4D/subscriptions",
"organizations_url": "https://api.github.com/users/nikky4D/orgs",
"repos_url": "https://api.github.com/users/nikky4D/repos",
"events_url": "https://api.github.com/users/nikky4D/events{/privacy}",
"received_events_url": "https://api.github.com/users/nikky4D/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n\r\nIf using the model's image processor, you can change the size of the post-processed mask by specifying `target_sizes` in the post processing methods e.g. [post_process_semantic_segmentation](https://github.com/huggingface/transformers/blob/842e99f1b9ee2a0fa239997ef695c5ed0bd77195/src/transformers/models/oneformer/image_processing_oneformer.py#L1069)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,697 | 1,697 |
NONE
| null |
I would like to scale the segmentation mask produced by OneFormer.
My GPU is small, so I downscale my original image to a smaller one, which I can then pass through OneFormer. However, how do I scale the results back up to the original image size? Is there a command in OneFormer or SegFormer that allows me to set the input and output size?
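A sketch of the usual approach with the processor's post-processing (the checkpoint name and image path below are only placeholders): run the model on the downscaled image, then pass `target_sizes` so the mask is resized back to the original resolution.
```python
from PIL import Image
from transformers import OneFormerProcessor, OneFormerForUniversalSegmentation

processor = OneFormerProcessor.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")
model = OneFormerForUniversalSegmentation.from_pretrained("shi-labs/oneformer_ade20k_swin_tiny")

image = Image.open("example.jpg")   # original, full-resolution image
small = image.resize((512, 512))    # downscaled copy that fits on a small GPU

inputs = processor(images=small, task_inputs=["semantic"], return_tensors="pt")
outputs = model(**inputs)

# target_sizes is a list of (height, width); PIL's .size is (width, height), so reverse it
mask = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(mask.shape)  # (original_height, original_width)
```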
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25997/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25996
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25996/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25996/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25996/events
|
https://github.com/huggingface/transformers/pull/25996
| 1,882,509,307 |
PR_kwDOCUB6oc5ZmlMn
| 25,996 |
[`VITS`] tokenizer integration test: fix revision did not exist
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,694 | 1,693 |
COLLABORATOR
| null |
# What does this PR do?
Fixes the revision of the integration test for the VITS tokenizer.
`RUN_SLOW=1 pytest tests/models/vits/test_tokenization_vits.py ` on main outputs:
```python
E OSError: Can't load tokenizer for 'facebook/mms-tts-eng'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'facebook/mms-tts-eng' is the correct path to a directory containing all relevant files for a VitsTokenizer tokenizer.
src/transformers/tokenization_utils_base.py:1893: OSError
```
because the revision does not exist
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25996/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25996",
"html_url": "https://github.com/huggingface/transformers/pull/25996",
"diff_url": "https://github.com/huggingface/transformers/pull/25996.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25996.patch",
"merged_at": 1693941693000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25995
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25995/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25995/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25995/events
|
https://github.com/huggingface/transformers/pull/25995
| 1,882,412,855 |
PR_kwDOCUB6oc5ZmQMg
| 25,995 |
[`CI`] Fix red CI and ERROR failed should show
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,693 | 1,693 |
COLLABORATOR
| null |
# What does this PR do?
Fixes the main branch after the merge of #25895, which had a typo when checking the outputs of the CI tests.
Two part fix:
1. Fatal errors were not shown in the outputs so we don't know what was wrong
```diff
- if x.startswith("FAILED ")]); fp.close(); fp = open("summary_short.txt", "w"); fp.write(failed); fp.close()'
+ if x.startswith("FAILED ", "ERROR ")]); fp.close(); fp = open("summary_short.txt", "w"); fp.write(failed); fp.close()'
```
2. Fix the error:
```diff
- check_test_command = f'if [ -s reports/test_{self.name}/failures_short.txt ]; '
+ check_test_command = f'if [ -s reports/{self.name}/failures_short.txt ]; '
```
should do the trick
3. There is now https://github.com/huggingface/transformers/commit/8d518013efbd10c178dd0dba0f9ba93229e2e78a that broke main, marking the test as slow 😉
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25995/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25995",
"html_url": "https://github.com/huggingface/transformers/pull/25995",
"diff_url": "https://github.com/huggingface/transformers/pull/25995.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25995.patch",
"merged_at": 1693937760000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25994
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25994/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25994/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25994/events
|
https://github.com/huggingface/transformers/issues/25994
| 1,882,396,123 |
I_kwDOCUB6oc5wMxXb
| 25,994 |
Passing tokenizer call kwargs (like truncation) in pipeline
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] |
closed
| false | null |
[] |
[
"Indeed it's not implemented: \r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/fill_mask.py#L96\r\n\r\nThe name would be `tokenizer_kwargs` though to be consistent with the rest.",
"Hello! I'm going to take a crack at this one if that's cool. ",
"Have at it! Would be great to have this implemented. @nmcahill ",
"Need the same functionality in \"text-generation\" pipeline. Would like to take a go!",
"Any solution for this?",
"I am pretty sure the solution is to add the kwargs to the `pipeline` in that case `text-generation` . #28362 fixed is so closing",
"The problem is that max_length (tokenizer) gets mistaken with max_new_tokens (generator).\r\n\r\nSo the pipeline complains about a duplicated argument and informs that max_new_tokens will take precedence.\r\n\r\nAnyway, I have implemented a preprocessing function, so the issue is workarounded.\r\n"
] | 1,693 | 1,707 | 1,707 |
COLLABORATOR
| null |
### System Info
- `transformers` version: 4.32.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
### Who can help?
@ArthurZucker for the tokenizers and @Narsil for the pipeline.
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to figure out how I can truncate text in a pipeline (without explicitly writing a preprocessing step for my data). I've looked at the documentation and searched the net. A lot of people seem to be asking this question (both on forums and Stack Overflow), but any solution that I could find does not work anymore. Below I try a number of them but none of them work. How can I enable truncation in the pipeline?
```python
from transformers import pipeline
model_name = "bert-base-cased"
text = "Luke, I am not your [MASK]. " * 512 # Make sure text is longer than max model length
# As-is (error, size mismatch -- no truncation seems to happen)
pipe = pipeline("fill-mask", model=model_name)
result = pipe([text])
# truncation in init (error, unrecognized keyword)
pipe = pipeline("fill-mask", model=model_name, truncation=True)
result = pipe([text])
# truncation in call (error, unrecognized keyword)
pipe = pipeline("fill-mask", model=model_name)
result = pipe([text], truncation=True)
# truncation as tokenizer kwargs in tuple (error, size mismatch)
tokenizer_tuple = (model_name, {"truncation": True})
pipe = pipeline("fill-mask", model=model_name, tokenizer=tokenizer_tuple)
result = pipe([text])
# Truncation as tokenize_kwargs (https://github.com/huggingface/transformers/issues/21971#issuecomment-1456725779)
# Unexpected keyword error
pipe = pipeline("fill-mask", model=model_name, tokenize_kwargs={"truncation": True})
result = pipe([text])
```
### Expected behavior
A fix if this is currently not implemented or broken but definitely also a documentation upgrade to clarify how tokenizer kwargs should be passed to a pipeline - both `init` and `call` kwargs!
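Until then, a minimal sketch of the pre-truncation workaround hinted at in the comments (not an official pipeline feature; the 510-token budget assumes two special tokens for BERT):
```python
from transformers import AutoTokenizer, pipeline

model_name = "bert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
pipe = pipeline("fill-mask", model=model_name, tokenizer=tokenizer)

def truncate(text: str, max_tokens: int = 510) -> str:
    # Truncate at the token level, then decode back to a plain string for the pipeline.
    ids = tokenizer(text, add_special_tokens=False, truncation=True, max_length=max_tokens)["input_ids"]
    return tokenizer.decode(ids)

text = "Luke, I am not your [MASK]. " * 512
result = pipe(truncate(text))
```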
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25994/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25993
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25993/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25993/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25993/events
|
https://github.com/huggingface/transformers/pull/25993
| 1,882,134,652 |
PR_kwDOCUB6oc5ZlTmE
| 25,993 |
Call torch.use_deterministic_algorithms(True) for tests that require determinism
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25993). All of your documentation changes will be reflected on that endpoint.",
"First question: `do not pass on main` --> I think you are talking about running on AMD GPU?\r\n\r\nMy main concern is about \r\n\r\n```\r\nand if only nondeterministic algorithms are available they will throw a [RuntimeError](https://docs.python.org/3/library/exceptions.html#RuntimeError) when called.\r\n```\r\n\r\nsee [here](https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html#torch.use_deterministic_algorithms)\r\n\r\nCould we wait until all the AMD GPU CI runner is set up, and let's run it without any modification to see what fail and what pass. Then we can start to working on failing tests.",
"> First question: do not pass on main --> I think you are talking about running on AMD GPU?\r\n\r\nYes, on MI210 with RoCm 5.4.2.\r\n\r\n> Could we wait until all the AMD GPU CI runner is set up, and let's run it without any modification to see what fail and what pass. Then we can start to working on failing tests.\r\n\r\nIt is fine, although by running manually the transformers tests it appears that `CUDA_VISIBLE_DEVICES=0 pytest tests/models/cpmant/test_modeling_cpmant.py -s -k \"test_determinism\"` do not pass (among others).\r\n\r\n@ydshieh Yes, I did not think about the RuntimeError. Would it make sense to run these tests under `torch.use_deterministic_algorithms(True)`, catch relevant `RuntimeError` and if happening, fall back on `torch.use_deterministic_algorithms(False)`?\r\n\r\n@amyeroberts We could probably call `torch.use_deterministic_algorithms(False)` at the end of those tests.",
"> it make sense to run these tests under torch.use_deterministic_algorithms(True), catch relevant RuntimeError and if happening, fall back on torch.use_deterministic_algorithms(False)?\r\n\r\nMight be. But as I mentioned, I would prefer not to start modifying the tests before a full run of the CI with AMD with the current testing code.",
"Actually, we can use `warn_only=True`.",
"For completeness, fixes:\r\n\r\n```\r\n[gw38] [ 11%] FAILED tests/models/cpmant/test_modeling_cpmant.py::CpmAntModelTest::test_determinism <- tests/test_modeling_common.py \r\n[gw38] [ 11%] FAILED tests/models/cpmant/test_modeling_cpmant.py::CpmAntModelTest::test_model_outputs_equivalence <- tests/test_modeling_common.py \r\ntests/models/cpmant/test_modeling_cpmant.py::CpmAntModelTest::test_model_parallel_beam_search <- tests/test_modeling_common.py \r\n[gw38] [ 11%] FAILED tests/models/cpmant/test_modeling_cpmant.py::CpmAntModelTest::test_save_load <- tests/test_modeling_common.py \r\ntests/models/cpmant/test_modeling_cpmant.py::CpmAntModelTest::test_save_load_fast_init_from_base <- tests/test_modeling_common.py \r\n```",
"From my side, I would love to have this decorator applied to the test methods that are implemented for the relevant model test classes (here `CpmAntModelTest`), i.e. override from the common ones.\r\n\r\nOf course, if a test is going to fail for all models on AMD GPU, we can applied it to the common test method.\r\n\r\nWDYT?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,702 | 1,702 |
COLLABORATOR
| null |
Although it appears that on NVIDIA GPUs all model tests requiring determinism pass without this option, this is not the case on AMD MI210, and it is not safe to assume in general. For example, `CUDA_VISIBLE_DEVICES=1 pytest tests/models/cpmant/test_modeling_cpmant.py -s -k "test_determinism"` does not pass on main but does pass with this PR. Looking at the intermediate logits, it appears that an nn.Linear call is responsible for the deviation.
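As a rough illustration of how such a per-test guard could look (using the `warn_only=True` idea from the discussion above; this is a sketch, not the PR's actual implementation):
```python
import functools
import torch

def require_deterministic(test_fn):
    """Run a test with deterministic algorithms enabled, restoring the flag afterwards."""
    @functools.wraps(test_fn)
    def wrapper(*args, **kwargs):
        # warn_only=True avoids a hard RuntimeError for ops without deterministic kernels
        torch.use_deterministic_algorithms(True, warn_only=True)
        try:
            return test_fn(*args, **kwargs)
        finally:
            torch.use_deterministic_algorithms(False)
    return wrapper
```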
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25993/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25993",
"html_url": "https://github.com/huggingface/transformers/pull/25993",
"diff_url": "https://github.com/huggingface/transformers/pull/25993.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25993.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25991
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25991/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25991/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25991/events
|
https://github.com/huggingface/transformers/pull/25991
| 1,882,016,737 |
PR_kwDOCUB6oc5Zk6NT
| 25,991 |
Fix err with FSDP
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@pacman100 sounds good, let me know if you are pleased :) ",
"Ran into this issue too, confirmed this PR resolves for us."
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
One potential solution for https://github.com/huggingface/transformers/issues/25988: we need to guard better against attributes that only exist on Accelerate main, especially for items like FSDP and DeepSpeed.
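As an illustration of the guarding idea (a sketch, not the exact change in this PR), newer FSDP plugin fields can be read defensively so older `accelerate` releases that lack them keep working:
```python
def get_plugin_flag(fsdp_plugin, name: str, default=False):
    """Defensive lookup: older accelerate releases may not define newer FSDP plugin fields."""
    return getattr(fsdp_plugin, name, default)

# e.g. get_plugin_flag(fsdp_plugin, "activation_checkpointing") instead of
# fsdp_plugin.activation_checkpointing, which raises AttributeError on accelerate v0.22.0
```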
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25991/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25991",
"html_url": "https://github.com/huggingface/transformers/pull/25991",
"diff_url": "https://github.com/huggingface/transformers/pull/25991.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25991.patch",
"merged_at": 1694060574000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25992
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25992/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25992/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25992/events
|
https://github.com/huggingface/transformers/issues/25992
| 1,882,021,062 |
I_kwDOCUB6oc5wLVzG
| 25,992 |
Why does Hugging Face's push_to_hub convert saved models to .bin instead of using safetensor mode?
|
{
"login": "okoliechykwuka",
"id": 51082506,
"node_id": "MDQ6VXNlcjUxMDgyNTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/51082506?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/okoliechykwuka",
"html_url": "https://github.com/okoliechykwuka",
"followers_url": "https://api.github.com/users/okoliechykwuka/followers",
"following_url": "https://api.github.com/users/okoliechykwuka/following{/other_user}",
"gists_url": "https://api.github.com/users/okoliechykwuka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/okoliechykwuka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/okoliechykwuka/subscriptions",
"organizations_url": "https://api.github.com/users/okoliechykwuka/orgs",
"repos_url": "https://api.github.com/users/okoliechykwuka/repos",
"events_url": "https://api.github.com/users/okoliechykwuka/events{/privacy}",
"received_events_url": "https://api.github.com/users/okoliechykwuka/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @okoliechykwuka , I'll transfer the question to the `transformers` repository as it's more related to the implementation there (and not the underlying `huggingface_hub` which is only responsible of uploading a file, not generating it). Also to help `transformers`'s maintainers, which version of `transformers` / `tokenizers` are you using?",
"Hi, could you provide a self-contained code snippet that demonstrate this issue, please?",
"I believe this is because `safe_serialization` is set to `False` by default. \r\n\r\nPutting it to `True` by default would mean that any model saved in a current model of `transformers` would be incompatible with past versions of `transformers` that didn't have `safetensors` installed by default. We have added `safetensors` as a core dependency in v4.30.0.\r\n\r\nWe're thinking that moving towards default `safetensors` serialization could be done in ~v4.35.0, so in about two months.\r\n\r\ncc @Narsil ",
"@Wauplin I am using the latest version of the transformer library.\r\n\r\n@ydshieh I have provided the code snippet below for inspection. Thanks\r\n\r\ntransformer == 4.33.0\r\n\r\n\r\n\r\n`!pip install -q -U transformers peft accelerate optimum\r\n`\r\n\r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nmodel_id = \"model_name\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id)\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nnew_model = \"llama-2-13b-chat\"\r\nmodel.save_pretrained(new_model, safe_serialization=True)\r\ntokenizer.save_pretrained(new_model)\r\nmodel.push_to_hub(new_model, use_temp_dir=False)\r\ntokenizer.push_to_hub(new_model, use_temp_dir=False)\r\n```\r\n\r\n",
"Hosting my model `runpod` using a text generation endpoint is giving me the below error. It shows that I am supposed to have my model weights in safetensor format. but I am surprised that pushing to the hub doesn't push the models in the directory in safetensor format even though that is the format the model was saved.\r\n\r\n```safetensors_rust.safetensorerror: error while serializing: ioerror(os { code: 122, kind: filesystemquotaexceeded, message: \"disk quota exceeded\" })```",
"Hey @okoliechykwuka, the `push_to_hub` method should call its own serialization method under the hood.\r\n\r\nCould you try this change in your code?\r\n\r\n```diff\r\n- model.push_to_hub(new_model, use_temp_dir=False)\r\n+ model.push_to_hub(new_model, use_temp_dir=False, safe_serialization=True)\r\n```\r\n\r\nThis should push the checkpoints in `safetensors` format.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,697 | 1,697 |
NONE
| null |
I am attempting to push a saved model in `model-00001-of-00006.safetensors` format, but the model gets converted to `pytorch_model-00001-of-00006.bin` before being saved to the Hub.
How can I prevent this?
Here is what I am simply doing:
```
model.push_to_hub(new_model, use_temp_dir=False)
tokenizer.push_to_hub(new_model, use_temp_dir=False)
```

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25992/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25990
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25990/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25990/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25990/events
|
https://github.com/huggingface/transformers/pull/25990
| 1,881,944,459 |
PR_kwDOCUB6oc5Zkqq1
| 25,990 |
Fix benchmark tests on RoCm
|
{
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25990). All of your documentation changes will be reflected on that endpoint.",
"@fxmarty What about using the rocm wrapper? Might be something cleaner moving forward:\r\n\r\nhttps://github.com/RadeonOpenCompute/rocm_smi_lib/blob/master/python_smi_tools/rocm_smi.py",
"@mfuntowicz There is also https://github.com/RadeonOpenCompute/pyrsmi, unfortunately it appears that this package does not have a pypi release on pypi index yet.\r\n\r\nResolving dependencies could be done by editing transformers `setup.cfg` to add the index `https://test.pypi.org/simple`, but it seems quite risky given that many packages may push to pypi test as well. \r\n\r\nI am not sure the source you pointed to would help either, it is not exposed through a python package. So I guess we would need to modify our path (ugly) or run subprocess anyway.",
"For completeness, fixes:\r\n\r\n```\r\n[gw9] [ 2%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_inference_encoder_decoder_with_configs\r\n[gw9] [ 4%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_inference_no_configs \r\ntests/benchmark/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain\r\n[gw9] [ 4%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_inference_no_configs_only_pretrain \r\ntests/benchmark/test_benchmark.py::BenchmarkTest::test_inference_no_model_no_architectures \r\n[gw9] [ 6%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_inference_torchscript \r\n[gw9] [ 7%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_inference_with_configs \r\n[gw9] [ 7%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_save_csv_files \r\n[gw9] [ 9%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_trace_memory \r\ntests/benchmark/test_benchmark.py::BenchmarkTest::test_train_encoder_decoder_with_configs \r\n[gw9] [ 11%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_train_no_configs \r\n[gw9] [ 13%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_train_no_configs_fp16 \r\n[gw9] [ 14%] FAILED tests/benchmark/test_benchmark.py::BenchmarkTest::test_train_with_configs \r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,697 | 1,697 |
COLLABORATOR
| null |
Some benchmark tests (not sure if they are commonly run) do not pass on RoCm systems because py3nvml is NVIDIA-specific. This PR fixes the issue using `rocm-smi` instead, as I couldn't find a Python binding to an equivalent of the NVIDIA Management Library for AMD.
Those benchmark tests may be removed in favor of the optimum-benchmark built by @IlyasMoutawwakil though, I guess.
We may need a `torch-dev-rocm` extra to avoid installing pynvml on RoCm systems.
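For reference, a rough sketch of what querying memory through `rocm-smi` can look like (the exact flags and JSON field names below are assumptions, not the code in this PR):
```python
import json
import subprocess

def rocm_vram_used_bytes(device_index: int = 0) -> int:
    # Assumed invocation/keys; adjust to the rocm-smi version actually installed.
    out = subprocess.check_output(["rocm-smi", "--showmeminfo", "vram", "--json"])
    info = json.loads(out)
    return int(info[f"card{device_index}"]["VRAM Total Used Memory (B)"])
```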
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25990/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25990/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25990",
"html_url": "https://github.com/huggingface/transformers/pull/25990",
"diff_url": "https://github.com/huggingface/transformers/pull/25990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25990.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25989
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25989/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25989/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25989/events
|
https://github.com/huggingface/transformers/issues/25989
| 1,881,869,201 |
I_kwDOCUB6oc5wKwuR
| 25,989 |
Whisper Encoder's positional encodings shouldn't be trainable
|
{
"login": "gau-nernst",
"id": 26946864,
"node_id": "MDQ6VXNlcjI2OTQ2ODY0",
"avatar_url": "https://avatars.githubusercontent.com/u/26946864?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gau-nernst",
"html_url": "https://github.com/gau-nernst",
"followers_url": "https://api.github.com/users/gau-nernst/followers",
"following_url": "https://api.github.com/users/gau-nernst/following{/other_user}",
"gists_url": "https://api.github.com/users/gau-nernst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gau-nernst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gau-nernst/subscriptions",
"organizations_url": "https://api.github.com/users/gau-nernst/orgs",
"repos_url": "https://api.github.com/users/gau-nernst/repos",
"events_url": "https://api.github.com/users/gau-nernst/events{/privacy}",
"received_events_url": "https://api.github.com/users/gau-nernst/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Regarding matching the repo, I'dont think there is much difference in the results as we have 1e-4 matching logits with our integration tests. \r\n\r\nRegarding whether or not this should be trainable is up to debate:\r\n- the original repo does not provide a training script, so training discrepencies are not something we are trying to avoid \r\n- the idea behind leaving it as is is that you can do whatever you want: you can set `requires_grad` to `False` or you could load another vector obtained using a different functions. ",
"The positional encodings in Whisper Encoder are fixed sinusoidal encodings. Similar to rotary embeddings, it is fixed, and it should not be trained. I don't think there is a case that they should be trainable. Even though OpenAI did not provide a training script, it's clear that the sinusoidal positional encodings are fixed, and not trainable.\r\n\r\nLetting the user does whatever he/she wants is fine, but\r\n\r\n1. At least inform the user about this behavior. I lost a few days of experiments due to this.\r\n2. Even if you want the positional encodings to be possibly trainable, shouldn't the default behavior be that it is not trainable, and user can set them to be trainable if they want?\r\n3. Making them as buffers (and thus not trainable) still allows user to load other kinds of positional encodings. Buffers are included in state dict (unless they are specified to be `persistent=False`).",
"Agreed that we should default to non trainable. We probably did not have a lot of people training the encoder as this was not notice 🤗 Would you like to open a PR? \r\nAs you mentioned best solution would be to make it backward compatible, keeping the correct naming ! \r\nWould also say nice catch 👍🏻 ",
"Which approach do you prefer\r\n\r\n1. Keep it as `nn.Embedding`, but set it to `requires_grad=False` in `__init__()` i.e. default to non-trainable. The users can set it to trainable if they want.\r\n2. Change it to buffer (my original suggestion, similar to OpenAI original repo). It can never be trainable this way. Loading positional encodings still works, since buffer is included in state dict.",
"Nice catch @gau-nernst! I would advocate for 1 to keep compatibility with Flax Whisper and TF Whisper which are using embedding layers as well, possibly with different dimension orders though. If we change to buffers, we'll have to check that the cross-platform loading logic works with these new weights",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,697 | 1,697 |
CONTRIBUTOR
| null |
### System Info
Currently, the Whisper Encoder's positional encodings in HF are an `nn.Embedding`, which makes them trainable.
https://github.com/huggingface/transformers/blob/aea761499f4b1193f2706f471442da6f9df65d65/src/transformers/models/whisper/modeling_whisper.py#L837
In the original implementation, it is fixed to sinusoidal positional encodings, and assigned as a buffer, thus not trainable.
https://github.com/openai/whisper/blob/e8622f9afc4eba139bf796c210f5c01081000472/whisper/model.py#L150
This creates diverging behavior when fine-tuning the Whisper Encoder.
On a separate note, I personally found that OpenAI's weights for the Whisper Encoder positional encodings are slightly different from the ones generated by the `sinusoids()` function. Thus, loading these positional encoding weights (as buffers) is still necessary to match the original repo.
### Who can help?
@sanchit-gandhi @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
NA
### Expected behavior
The positional encodings in the Whisper Encoder should not be trainable. This can be done by replacing the `nn.Embedding` with a buffer. To preserve weight compatibility, it might be necessary to wrap the buffer in a module, so that the state dict key remains the same.
e.g.
```python
self.embed_positions = nn.Module() # empty module
self.embed_positions.register_buffer("weight", torch.zeros(self.max_source_positions, embed_dim)) # preserve the key embed_positions.weight
```
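For comparison, the other option floated in the comments above (keeping `nn.Embedding` for cross-framework weight compatibility but freezing it by default) would look roughly like this sketch (dimensions shown are the large-v2 values, for illustration only):
```python
import torch.nn as nn

embed_dim, max_source_positions = 1280, 1500  # illustrative large-v2 values
embed_positions = nn.Embedding(max_source_positions, embed_dim)
embed_positions.requires_grad_(False)  # frozen by default; users can re-enable if they want
```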
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25989/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25988
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25988/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25988/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25988/events
|
https://github.com/huggingface/transformers/issues/25988
| 1,881,849,532 |
I_kwDOCUB6oc5wKr68
| 25,988 |
AttributeError: 'FullyShardedDataParallelPlugin' object has no attribute 'activation_checkpointing'
|
{
"login": "scissorstail",
"id": 93466598,
"node_id": "U_kgDOBZIv5g",
"avatar_url": "https://avatars.githubusercontent.com/u/93466598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/scissorstail",
"html_url": "https://github.com/scissorstail",
"followers_url": "https://api.github.com/users/scissorstail/followers",
"following_url": "https://api.github.com/users/scissorstail/following{/other_user}",
"gists_url": "https://api.github.com/users/scissorstail/gists{/gist_id}",
"starred_url": "https://api.github.com/users/scissorstail/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scissorstail/subscriptions",
"organizations_url": "https://api.github.com/users/scissorstail/orgs",
"repos_url": "https://api.github.com/users/scissorstail/repos",
"events_url": "https://api.github.com/users/scissorstail/events{/privacy}",
"received_events_url": "https://api.github.com/users/scissorstail/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr @pacman100 ",
"Hi! This has been fixed on main, please install transformers via `pip install git+https://github.com/huggingface/transformers`"
] | 1,693 | 1,695 | 1,695 |
NONE
| null |
### System Info
```
Traceback (most recent call last):
File "/workspace/run/run_llm.py", line 717, in <module>
main()
File "/workspace/run/run_llm.py", line 644, in main
trainer = Trainer(
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 342, in __init__
self.create_accelerator_and_postprocess()
File "/opt/conda/lib/python3.10/site-packages/transformers/trainer.py", line 3900, in create_accelerator_and_postprocess
"activation_checkpointing", fsdp_plugin.activation_checkpointing
AttributeError: 'FullyShardedDataParallelPlugin' object has no attribute 'activation_checkpointing'
```
https://github.com/huggingface/transformers/blob/aea761499f4b1193f2706f471442da6f9df65d65/src/transformers/trainer.py#L3893-L3907
The 'FullyShardedDataParallelPlugin' class in [accelerate](https://github.com/huggingface/accelerate) version **v0.22.0** does not have 'activation_checkpointing', but the **main** branch does.
**v0.22.0**
https://github.com/huggingface/accelerate/blob/6b3e559926afc4b9a127eb7762fc523ea0ea656a/src/accelerate/utils/dataclasses.py#L778
**main**
https://github.com/huggingface/accelerate/blob/739b135f8367becb67ffaada12fe76e3aa60fefd/src/accelerate/utils/dataclasses.py#L783
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
-
### Expected behavior
-
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25988/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25988/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25987
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25987/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25987/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25987/events
|
https://github.com/huggingface/transformers/pull/25987
| 1,881,795,456 |
PR_kwDOCUB6oc5ZkKOY
| 25,987 |
Trainer: delegate default generation values to `generation_config`
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,693 | 1,693 |
MEMBER
| null |
# What does this PR do?
Thank you @nick-maykr for raising the related issue.
In a nutshell, at some point in the seq2seq trainer, if `max_length` and `num_beams` were not set through the legacy arguments, we were fetching them from `model.config`. This is wrong -- all the logic about defaults is now handled in `model.generation_config`, and doesn't need any explicit value setting.
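For illustration, a minimal sketch of the new behaviour (the model name is just an example): any defaults you want for evaluation-time generation go on `generation_config`, and the seq2seq trainer picks them up through `model.generate()` without copying values around.
```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
# Defaults now live here; no need to set model.config.max_length / num_beams.
model.generation_config.max_length = 64
model.generation_config.num_beams = 4
# Seq2SeqTrainer.evaluate()/predict() with predict_with_generate=True will use these values.
```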
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25987/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25987",
"html_url": "https://github.com/huggingface/transformers/pull/25987",
"diff_url": "https://github.com/huggingface/transformers/pull/25987.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25987.patch",
"merged_at": 1693921620000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25986
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25986/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25986/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25986/events
|
https://github.com/huggingface/transformers/pull/25986
| 1,881,746,485 |
PR_kwDOCUB6oc5Zj_ct
| 25,986 |
[VITS] Fix nightly tests
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the tokenizer integration test and multi-GPU test that failed on the nightly run: https://github.com/huggingface/transformers/actions/runs/6068405445/job/16461410319
The tokenizer fix is trivial (needed to update the commit ID)!
The multi-GPU test was failing because the output sequence length for VITS is a function of the model **inputs**, rather than being a function of the input **sequence lengths** only.
Let's say we have 2 GPUs over which we want to run DP:
* GPU 1 outputs a sequence length of `N`, which is computed based on the input in the first element of the batch `x`
* GPU 2 outputs a sequence length of `M`, which is computed based on the input in the second element of the batch `y`
=> there is nothing to enforce that `N = M`, since the VITS output sequence length is a function of the inputs. Thus, we cannot concatenate the outputs after running the forward pass, since they have different dims.
```python
# pseudo code for data parallelism
input_1, input_2 = torch.chunk(input, 2, dim=0)  # split the batch across the two GPUs
output_1 = model(input_1)
output_2 = model(input_2)
output = torch.concatenate([output_1, output_2], dim=0) # breaks because input_1 and input_2 have different sequence lengths
```
The fix for the test is to pass the same inputs to both GPUs, and disable the stochastic duration predictor. This way, we get consistent outputs across our GPUs.
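A single-device sketch of the same idea (assuming the `facebook/mms-tts-eng` checkpoint; the actual test code may differ): with the noise scales set to 0 the duration predictor is deterministic, so identical inputs give identical output lengths.
```python
import torch
from transformers import AutoTokenizer, VitsModel

tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
model.noise_scale = 0.0           # disable stochasticity in the flow decoder
model.noise_scale_duration = 0.0  # disable the stochastic duration predictor

inputs = tokenizer("Hello from VITS", return_tensors="pt")
with torch.no_grad():
    out_1 = model(**inputs).waveform
    out_2 = model(**inputs).waveform
print(out_1.shape == out_2.shape)  # deterministic length, so shapes match
```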
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25986/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25986",
"html_url": "https://github.com/huggingface/transformers/pull/25986",
"diff_url": "https://github.com/huggingface/transformers/pull/25986.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25986.patch",
"merged_at": 1694105354000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25985
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25985/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25985/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25985/events
|
https://github.com/huggingface/transformers/pull/25985
| 1,881,677,631 |
PR_kwDOCUB6oc5Zjwc4
| 25,985 |
[Wav2Vec2 Conformer] Fix inference float16
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #25964 - the Wav2Vec2 Conformer model with rotary embeddings now works when we load it `from_pretrained` with float16. The issue originated in the rotary embedding layer, which was always returning the positional embeddings in float32.
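A quick verification sketch (the checkpoint name and CUDA device are assumptions; any rotary-embedding Wav2Vec2-Conformer checkpoint should behave the same):
```python
import torch
from transformers import Wav2Vec2ConformerForCTC

model = Wav2Vec2ConformerForCTC.from_pretrained(
    "facebook/wav2vec2-conformer-rope-large-960h-ft", torch_dtype=torch.float16
).to("cuda")

# 1 second of dummy audio at 16 kHz, already in float16 like the model weights
dummy_input = torch.randn(1, 16000, dtype=torch.float16, device="cuda")
with torch.no_grad():
    logits = model(dummy_input).logits
print(logits.dtype)  # torch.float16 after the fix, instead of a dtype-mismatch error
```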
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25985/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25985",
"html_url": "https://github.com/huggingface/transformers/pull/25985",
"diff_url": "https://github.com/huggingface/transformers/pull/25985.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25985.patch",
"merged_at": 1693934767000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25984
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25984/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25984/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25984/events
|
https://github.com/huggingface/transformers/pull/25984
| 1,881,578,313 |
PR_kwDOCUB6oc5ZjbAZ
| 25,984 |
Fix `test_finetune_bert2bert`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,694 | 1,694 |
COLLABORATOR
| null |
# What does this PR do?
Fixes the CI error. See the comment in the change in this PR.
```
tests/trainer/test_trainer_seq2seq.py::Seq2seqTrainerTester::test_finetune_bert2bert
(line 162) ValueError: Make sure to set the pad_token_id attribute of the model's configuration.
```
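The exact diff is not inlined here, but a sketch of the usual remedy for that error in a bert2bert setup (model/tokenizer names are only examples) is to copy the special token ids onto the encoder-decoder config before training/generation:
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# Without these, generation-time padding raises:
# "Make sure to set the pad_token_id attribute of the model's configuration."
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.sep_token_id
```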
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25984/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25984",
"html_url": "https://github.com/huggingface/transformers/pull/25984",
"diff_url": "https://github.com/huggingface/transformers/pull/25984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25984.patch",
"merged_at": 1694620424000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25983
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25983/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25983/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25983/events
|
https://github.com/huggingface/transformers/pull/25983
| 1,881,570,603 |
PR_kwDOCUB6oc5ZjZVY
| 25,983 |
seq2seq speech recognition example: use forward attention mask if apply spec augment is True
|
{
"login": "sorgfresser",
"id": 80467011,
"node_id": "MDQ6VXNlcjgwNDY3MDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/80467011?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sorgfresser",
"html_url": "https://github.com/sorgfresser",
"followers_url": "https://api.github.com/users/sorgfresser/followers",
"following_url": "https://api.github.com/users/sorgfresser/following{/other_user}",
"gists_url": "https://api.github.com/users/sorgfresser/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sorgfresser/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sorgfresser/subscriptions",
"organizations_url": "https://api.github.com/users/sorgfresser/orgs",
"repos_url": "https://api.github.com/users/sorgfresser/repos",
"events_url": "https://api.github.com/users/sorgfresser/events{/privacy}",
"received_events_url": "https://api.github.com/users/sorgfresser/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @sanchit-gandhi "
] | 1,693 | 1,693 | 1,693 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The seq2seq speech recognition example did not forward the attention mask needed to apply SpecAugment when using Whisper. I strongly believe this is a bug in the example, and it should be resolved now.
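As a self-contained sketch of the intended behaviour (config/feature-extractor names are examples; the script's variable names may differ), the attention mask should only be requested from the feature extractor when SpecAugment will actually be applied:
```python
import numpy as np
from transformers import WhisperConfig, WhisperFeatureExtractor

config = WhisperConfig.from_pretrained("openai/whisper-tiny")
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny")

# Forward an attention mask only if SpecAugment is enabled and time masking is active.
forward_attention_mask = getattr(config, "apply_spec_augment", False) and getattr(config, "mask_time_prob", 0) > 0

raw_audio = np.zeros(16_000, dtype=np.float32)  # 1 second of silence as a stand-in
inputs = feature_extractor(raw_audio, sampling_rate=16_000, return_attention_mask=forward_attention_mask)
print("attention_mask" in inputs)  # True only when forward_attention_mask is True
```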
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
@bofenghuang
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25983/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25983",
"html_url": "https://github.com/huggingface/transformers/pull/25983",
"diff_url": "https://github.com/huggingface/transformers/pull/25983.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25983.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25982
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25982/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25982/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25982/events
|
https://github.com/huggingface/transformers/issues/25982
| 1,881,555,053 |
I_kwDOCUB6oc5wJkBt
| 25,982 |
embedding_size=0 when training with deepspeed zero3
|
{
"login": "iMountTai",
"id": 35353688,
"node_id": "MDQ6VXNlcjM1MzUzNjg4",
"avatar_url": "https://avatars.githubusercontent.com/u/35353688?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iMountTai",
"html_url": "https://github.com/iMountTai",
"followers_url": "https://api.github.com/users/iMountTai/followers",
"following_url": "https://api.github.com/users/iMountTai/following{/other_user}",
"gists_url": "https://api.github.com/users/iMountTai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iMountTai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iMountTai/subscriptions",
"organizations_url": "https://api.github.com/users/iMountTai/orgs",
"repos_url": "https://api.github.com/users/iMountTai/repos",
"events_url": "https://api.github.com/users/iMountTai/events{/privacy}",
"received_events_url": "https://api.github.com/users/iMountTai/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] |
closed
| false | null |
[] |
[
"Hey! Thanks for opening an issue, could you write a bit more about the issue you have? I'm sorry but I can't really understand the problem as is. Do you have a reproducer?",
"related to this issue https://github.com/huggingface/transformers/issues/25977",
"Hello, please clearly explain the issue along with a minimal example to reproduce it.",
"Hello, this has been fixed, could you try to run using the main branch?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,698 | 1,698 |
NONE
| null |
### System Info

### Who can help?
@ArthurZucker @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
LoRA training with DeepSpeed ZeRO-3
```
model.get_input_embeddings().weight.shape[0] = 0
```
with Transformers 4.31.0
```
model.get_input_embeddings().weight.shape[0] = 0
model.resize_token_embeddings(55296)
model.get_input_embeddings().weight.shape[0] = 55296
```
with Transformers 4.33.0.dev0
```
model.get_input_embeddings().weight.shape[0] = 0
model.resize_token_embeddings(55296)
model.get_input_embeddings().weight.shape[0] = 0
```
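For context, a sketch of how to read the real embedding size under ZeRO-3 (this assumes `model` is the ZeRO-3 partitioned model from the setup above and that DeepSpeed is initialized): outside a gather context the local shard can legitimately have shape `(0, hidden_size)`.
```python
import deepspeed

embedding = model.get_input_embeddings()
# Gather the partitioned weight just to inspect it; modifier_rank=None means read-only.
with deepspeed.zero.GatheredParameters(embedding.weight, modifier_rank=None):
    print(embedding.weight.shape[0])  # full vocab size, e.g. 55296 after resizing
```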
### Expected behavior
embedding_size should be returned with the correct value; otherwise, the following error occurs:

|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25982/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25981
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25981/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25981/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25981/events
|
https://github.com/huggingface/transformers/pull/25981
| 1,881,490,094 |
PR_kwDOCUB6oc5ZjH3S
| 25,981 |
fix typo
|
{
"login": "kai01ai",
"id": 140378742,
"node_id": "U_kgDOCF4Cdg",
"avatar_url": "https://avatars.githubusercontent.com/u/140378742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kai01ai",
"html_url": "https://github.com/kai01ai",
"followers_url": "https://api.github.com/users/kai01ai/followers",
"following_url": "https://api.github.com/users/kai01ai/following{/other_user}",
"gists_url": "https://api.github.com/users/kai01ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kai01ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kai01ai/subscriptions",
"organizations_url": "https://api.github.com/users/kai01ai/orgs",
"repos_url": "https://api.github.com/users/kai01ai/repos",
"events_url": "https://api.github.com/users/kai01ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/kai01ai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@amyeroberts I've checked, and there are no other instances of 'doanloading'. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25981). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,698 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
Change `doanloading` to `downloading` in `src/transformers/modeling_tf_utils.py`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25981/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25981",
"html_url": "https://github.com/huggingface/transformers/pull/25981",
"diff_url": "https://github.com/huggingface/transformers/pull/25981.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25981.patch",
"merged_at": 1693908787000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25980
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25980/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25980/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25980/events
|
https://github.com/huggingface/transformers/pull/25980
| 1,881,466,246 |
PR_kwDOCUB6oc5ZjCvH
| 25,980 |
Fix `beam_scores` shape when token scores shape changes after `logits_processor`
|
{
"login": "BakerBunker",
"id": 17872844,
"node_id": "MDQ6VXNlcjE3ODcyODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/17872844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BakerBunker",
"html_url": "https://github.com/BakerBunker",
"followers_url": "https://api.github.com/users/BakerBunker/followers",
"following_url": "https://api.github.com/users/BakerBunker/following{/other_user}",
"gists_url": "https://api.github.com/users/BakerBunker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BakerBunker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BakerBunker/subscriptions",
"organizations_url": "https://api.github.com/users/BakerBunker/orgs",
"repos_url": "https://api.github.com/users/BakerBunker/repos",
"events_url": "https://api.github.com/users/BakerBunker/events{/privacy}",
"received_events_url": "https://api.github.com/users/BakerBunker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @gante ",
"Hi @BakerBunker 👋 \r\n\r\nYou wrote \"When token scores shape changes after `logits_processor`\" as the cause for the proposed changes -- this situation should not happen 🤔 \r\n\r\nWould you be able to share an example? ",
"Sure, I trained a model with different sizes of input and output embeddings, because the output vocab of the model is much smaller compared to the input vocab. And because the input and output embeddings make up a large percentage of the parameters in the model, this saves a lot of GPU memory during training. However, during the generation process, I need to align the input and output `input_ids` to call the `generate()` interface properly. Here is my code for the align process:\r\n\r\n```python\r\nclass TokenAlignProcessor(LogitsProcessor):\r\n def __call__(self, input_ids, scores):\r\n new_score = torch.empty(scores.shape[0], len(tokenizer), device=DEVICE).fill_(\r\n -torch.inf\r\n )\r\n new_score[:, -OVERLAP_TOKEN_NUMS :] = scores\r\n return new_score\r\n```",
"Thank you for the contribution, @BakerBunker 💛 "
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
When the token scores' shape changes after `logits_processor`, `next_token_scores_processed` has a different shape from `beam_scores[:, None].expand_as(next_token_scores)`; this PR fixes that issue.
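A toy illustration of the shape mismatch and the fix (plain tensors, not the actual `generate()` code): once a processor widens the vocab dimension, the running beam scores must be expanded against the processed scores.
```python
import torch

num_beams, old_vocab, new_vocab = 4, 10, 16
next_token_scores = torch.randn(num_beams, old_vocab)

# A processor such as the TokenAlignProcessor quoted in the comments pads the scores to a larger vocab.
next_token_scores_processed = torch.full((num_beams, new_vocab), float("-inf"))
next_token_scores_processed[:, -old_vocab:] = next_token_scores

beam_scores = torch.zeros(num_beams)
# Expanding against the *processed* scores keeps the shapes consistent.
next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores_processed)
print(next_token_scores.shape)  # torch.Size([4, 16])
```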
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25980/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25980",
"html_url": "https://github.com/huggingface/transformers/pull/25980",
"diff_url": "https://github.com/huggingface/transformers/pull/25980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25980.patch",
"merged_at": 1694628768000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25979
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25979/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25979/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25979/events
|
https://github.com/huggingface/transformers/issues/25979
| 1,881,413,406 |
I_kwDOCUB6oc5wJBce
| 25,979 |
cannot specify GPU in Trainer
|
{
"login": "kimkeithvn",
"id": 41425845,
"node_id": "MDQ6VXNlcjQxNDI1ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/41425845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kimkeithvn",
"html_url": "https://github.com/kimkeithvn",
"followers_url": "https://api.github.com/users/kimkeithvn/followers",
"following_url": "https://api.github.com/users/kimkeithvn/following{/other_user}",
"gists_url": "https://api.github.com/users/kimkeithvn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kimkeithvn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kimkeithvn/subscriptions",
"organizations_url": "https://api.github.com/users/kimkeithvn/orgs",
"repos_url": "https://api.github.com/users/kimkeithvn/repos",
"events_url": "https://api.github.com/users/kimkeithvn/events{/privacy}",
"received_events_url": "https://api.github.com/users/kimkeithvn/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr @pacman100 ",
"Same comment as https://github.com/huggingface/transformers/issues/25321, you need to do this outside the trainer/during the call to launch the script. ",
"I use `CUDA_VISIBLE_DEVICES=1 acclerate launch script.py` and it works. Thanks for your advice."
] | 1,693 | 1,693 | 1,693 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27
- Python version: 3.10.11
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: true
- Using distributed or parallel set-up in script?: false
### Who can help?
/
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi, I'm trying to fine-tune a model with Trainer in transformers,
and I want to use a specific GPU on my server.
My server has two GPUs, and I want to train my model on GPU index 1.
I've read the `Trainer` and `Seq2SeqTrainingArguments` documents, and I've already tried the `CUDA_VISIBLE_DEVICES` env variable. I found that `torch.cuda.current_device()` was correct before defining the training_args object with `Seq2SeqTrainingArguments`.
transformers=4.31.0
torch=2.0.0
CUDA Version: 11.8
```python
gpu_idx = 1
os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_idx)
torch.cuda.set_device(gpu_idx)
# torch.cuda.current_device = 1
...
training_args = Seq2SeqTrainingArguments(output_dir=output_dir, do_train=True, save_strategy='no',
evaluation_strategy='no', logging_strategy='epoch', do_eval=False,
load_best_model_at_end=True,
greater_is_better=True,
num_train_epochs=train_epoch, learning_rate=lr, seed=seed,
per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size,
warmup_steps=warmup_steps, weight_decay=0.01, logging_dir=logging_dir,
fp16=True, report_to=['tensorboard'])
# torch.cuda.current_device = 0
```
```shell
nvidia-smi
```

I would appreciate your suggestions for this problem. Thanks.
### Expected behavior
I want to specify gpu in my script with `Trainer` and `Seq2SeqTrainingArguments`
```
# torch.cuda.current_device = 1
```
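For completeness, an in-script sketch of the workaround (setting the variable before any CUDA-related import; launching with `CUDA_VISIBLE_DEVICES=1 accelerate launch script.py` as in the comments is the simpler route):
```python
import os

# Must be set before torch/transformers initialize CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch  # noqa: E402

print(torch.cuda.device_count())  # 1 -> the chosen GPU is now device index 0 inside the process
```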
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25979/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25978
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25978/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25978/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25978/events
|
https://github.com/huggingface/transformers/issues/25978
| 1,881,399,323 |
I_kwDOCUB6oc5wI-Ab
| 25,978 |
The impact of a quantization config on .num_parameters
|
{
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I tried with `llama-7b`...\r\n1. w/o 4 bit quantization: (16 bit)\r\ntrainable params: 33,554,432 || all params: 6,771,970,048 || trainable%: 0.49548996469513035\r\n2. w/\r\ntrainable params: 33,554,432 || all params: 3,533,967,360 || trainable%: 0.9494833591219133\r\n\r\nI wonder why changing the precision leads to the decrease of the parameters...\r\n\r\nIs this behavior expected?",
"Hi @BramVanroy \r\nThis is indeed a bug and needs fixing! Please see #26132 that explains the fix",
"Cool, thanks! @younesbelkada ",
"I think this behavior might be due to the implementation of Linear4bit: https://github.com/TimDettmers/bitsandbytes/blob/main/bitsandbytes/nn/modules.py\r\n\r\n`requires_grad=False` is always set for weights"
] | 1,693 | 1,698 | 1,694 |
COLLABORATOR
| null |
### System Info
- `transformers` version: 4.32.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
### Who can help?
@SunMarc and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am trying to understand the impact of a quantization config on the number of parameters better but as discussed internally I am not sure whether what I am experiencing is a bug in `num_parameters` or the expected behavior when using quantization.
As far as my understanding of quantization goes, it usually implies **changing the precision** of parameters but not specifically reducing the number of parameters. However, when I run the following code, I can see that the number of parameters is cut in half (~6B compared to 13B).
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype="bfloat16",
bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-13b-chat-hf", quantization_config=bnb_config, device_map="auto")
model.num_parameters()
# 6671979520
```
### Expected behavior
Is this behavior expected? If so, what is the reason/theory behind it? If not, can the `num_parameters` implementation be fixed?
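For reference, a counting sketch that accounts for the 4-bit packing (this assumes the model was loaded in 4-bit via bitsandbytes as above; `Params4bit` stores two 4-bit values per stored element, which is why the naive count halves):
```python
def num_parameters_4bit_aware(model) -> int:
    total = 0
    for param in model.parameters():
        n = param.numel()
        # bitsandbytes' Params4bit packs two 4-bit weights into each stored element.
        if param.__class__.__name__ == "Params4bit":
            n *= 2
        total += n
    return total

print(num_parameters_4bit_aware(model))  # ~13B again instead of ~6.7B
```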
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25978/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25977
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25977/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25977/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25977/events
|
https://github.com/huggingface/transformers/issues/25977
| 1,881,351,775 |
I_kwDOCUB6oc5wIyZf
| 25,977 |
Assertion error when using Trainer & Deepspeed stage 3 with `model.resize_token_embeddings`
|
{
"login": "kai01ai",
"id": 140378742,
"node_id": "U_kgDOCF4Cdg",
"avatar_url": "https://avatars.githubusercontent.com/u/140378742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kai01ai",
"html_url": "https://github.com/kai01ai",
"followers_url": "https://api.github.com/users/kai01ai/followers",
"following_url": "https://api.github.com/users/kai01ai/following{/other_user}",
"gists_url": "https://api.github.com/users/kai01ai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kai01ai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kai01ai/subscriptions",
"organizations_url": "https://api.github.com/users/kai01ai/orgs",
"repos_url": "https://api.github.com/users/kai01ai/repos",
"events_url": "https://api.github.com/users/kai01ai/events{/privacy}",
"received_events_url": "https://api.github.com/users/kai01ai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Meet the same error. Exactly the same situation: transformers **> 4.31.0** + deepspeed zero3 v0.10.2 + `resize_token_embeddings`.",
"same issue for `transformers >= 4.32.0`",
"Try to fix this by https://github.com/huggingface/transformers/pull/26024"
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.33.0
- Platform: Linux-3.10.0-1160.83.1.el7.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: 0.22.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 64
- machine_rank: 0
- num_machines: 8
- main_process_ip: 127.0.0.1
- main_process_port: 29500
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'deepspeed_multinode_launcher': 'standard', 'gradient_accumulation_steps': 1, 'gradient_clipping': 1.0, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I've encountered an issue while training using the combination of Trainer & Deepspeed stage 3. When invoking model.resize_token_embeddings, an AssertionError arises during training. This was not an issue in transformers version 4.31.0. However, for versions > 4.31.0 and in the main branch, this problem persists. I suspect this might be related to PR https://github.com/huggingface/transformers/pull/25394
```
File "/root/miniconda3/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 310, in fetch_sub_module
return func(*args, **kwargs)
File "/root/miniconda3/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 310, in fetch_sub_module
assert param.ds_status == ZeroParamStatus.AVAILABLE, param.ds_summary()
AssertionErrorassert param.ds_status == ZeroParamStatus.AVAILABLE, param.ds_summary()
AssertionError: {'id': 292, 'status': 'NOT_AVAILABLE', 'numel': 0, 'ds_numel': 0, 'shape': (0,), 'ds_shape': (0, 4096), 'requires_grad': True, 'grad_shape': None, 'persist': True, 'active_sub_modules': {453}, 'ds_tensor.shape': torch.Size([0])}
```
code:
```python test.py
from transformers import (
AutoConfig,
AutoTokenizer,
AutoModelForCausalLM,
HfArgumentParser,
TrainingArguments,
DataCollatorForSeq2Seq,
Trainer,
)
def main():
parser = HfArgumentParser((TrainingArguments))
training_args, = parser.parse_args_into_dataclasses()
model_path = '/path/to/Llama-2-7b-hf'
config = AutoConfig.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
config = config,
)
tokenizer = AutoTokenizer.from_pretrained(
model_path,
use_fast=False,
model_max_length=1024,
)
add_new_tokens = True
if add_new_tokens:
# deepspeed AssertionError
tokenizer.add_special_tokens({"pad_token": "<pad>",})
model.resize_token_embeddings(len(tokenizer))
else:
# it works
tokenizer.pad_token = tokenizer.eos_token
from datasets import Dataset
def gen():
for _ in range(100):
yield {"input_ids": [1, 2, 3], "labels": [1, 1, 1]}
datasets = Dataset.from_generator(gen)
datasets.set_format('pt')
trainer = Trainer(
model=model,
args=training_args,
tokenizer=tokenizer,
data_collator=DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model, max_length=tokenizer.model_max_length),
train_dataset=datasets,
)
trainer.train()
if __name__ == "__main__":
main()
```
scripts:
```shell
deepspeed test.py \
--deepspeed configs/zero3_hf.conf \
--output_dir output/test/ \
--do_train \
--per_device_train_batch_size 1 \
--gradient_accumulation_steps 4 \
--evaluation_strategy "no" \
--report_to "none" \
```
deepspeed config
```conf
{
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 1e5,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
### Expected behavior
no error
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25977/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 3
}
|
https://api.github.com/repos/huggingface/transformers/issues/25977/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25976
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25976/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25976/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25976/events
|
https://github.com/huggingface/transformers/pull/25976
| 1,881,342,026 |
PR_kwDOCUB6oc5Zin56
| 25,976 |
Fix `test_load_img_url_timeout`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,693 | 1,693 |
COLLABORATOR
| null |
# What does this PR do?
#25184 added a timeout parameter to some functions, along with a test. However, the exception raised in that test is `ConnectTimeout` on the daily CI (instead of the expected `ReadTimeout`), while it is `ReadTimeout` on CircleCI.
I haven't looked into why there is such a difference, but this PR updates the expected value to `(ReadTimeout, ConnectTimeout)` so the test added in #25184 won't fail.
(let me know if you think we should dive into this)
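For illustration, a sketch of the relaxed expectation (the exception classes come from `requests`; the URL and exact timeout value are placeholders, not necessarily the ones in the test):
```python
import pytest
from requests.exceptions import ConnectTimeout, ReadTimeout

from transformers.image_utils import load_image


def test_load_img_url_timeout():
    # Which timeout fires depends on where the request stalls (connect vs. read),
    # so accept either one.
    with pytest.raises((ReadTimeout, ConnectTimeout)):
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
            timeout=0.001,
        )
```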
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25976/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25976",
"html_url": "https://github.com/huggingface/transformers/pull/25976",
"diff_url": "https://github.com/huggingface/transformers/pull/25976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25976.patch",
"merged_at": 1693906468000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25975
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25975/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25975/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25975/events
|
https://github.com/huggingface/transformers/pull/25975
| 1,881,341,440 |
PR_kwDOCUB6oc5Zinxw
| 25,975 |
Add `Pop2Piano` space demo.
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25975). All of your documentation changes will be reflected on that endpoint.",
"Thanks @susnato!"
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As discussed [here](https://github.com/huggingface/transformers/pull/25827#issuecomment-1697824524), this PR adds the pop2piano space demo.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25975/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25975",
"html_url": "https://github.com/huggingface/transformers/pull/25975",
"diff_url": "https://github.com/huggingface/transformers/pull/25975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25975.patch",
"merged_at": 1693908422000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25974
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25974/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25974/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25974/events
|
https://github.com/huggingface/transformers/pull/25974
| 1,881,300,762 |
PR_kwDOCUB6oc5ZifGI
| 25,974 |
nn.Identity is not required to be compatible with PyTorch < 1.1.0 as the minimum PyTorch version we currently support is 1.10.0
|
{
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"I am not sure I understand. We already say goodbye to pytorch 1.9 in #24080. Could you elaborate with a bit more detail?",
"> I am not sure I understand. We already say goodbye to pytorch 1.9 in #24080. Could you elaborate with a bit more detail?\r\n\r\nApologies for the confusion. I provided incorrect information about the minimum supported version of transformers. :(\r\nMake some changes to make things look more accurate.\r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25974). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,694 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As the title says.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@muellerz and @pacman100
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25974/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25974",
"html_url": "https://github.com/huggingface/transformers/pull/25974",
"diff_url": "https://github.com/huggingface/transformers/pull/25974.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25974.patch",
"merged_at": 1693906675000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25973
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25973/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25973/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25973/events
|
https://github.com/huggingface/transformers/pull/25973
| 1,881,298,155 |
PR_kwDOCUB6oc5ZiejE
| 25,973 |
Use main in conversion script
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,693 | 1,693 | 1,693 |
COLLABORATOR
| null |
# What does this PR do?
Put the code under `if __name__ == "__main__":` so it doesn't run on import and affect other things (for example, pytest collection).
Doctest is currently failing due to this, see [here](https://github.com/huggingface/transformers/actions/runs/6079493829/job/16492061986)
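For context, a minimal sketch of the pattern (a hypothetical conversion script, not the actual file changed here): module-level code runs whenever the file is imported, e.g. during pytest collection, so the conversion logic is moved under the `__main__` guard.
```python
# Hypothetical conversion script illustrating the pattern; names are placeholders.
import argparse


def convert_checkpoint(src: str, dst: str) -> None:
    # conversion logic would live here
    print(f"converting {src} -> {dst}")


if __name__ == "__main__":
    # Only runs when the script is executed directly, not when it is imported
    # (e.g. by pytest while collecting doctests).
    parser = argparse.ArgumentParser()
    parser.add_argument("--src", required=True)
    parser.add_argument("--dst", required=True)
    args = parser.parse_args()
    convert_checkpoint(args.src, args.dst)
```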
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25973/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25973",
"html_url": "https://github.com/huggingface/transformers/pull/25973",
"diff_url": "https://github.com/huggingface/transformers/pull/25973.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25973.patch",
"merged_at": 1693911890000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25972
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25972/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25972/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25972/events
|
https://github.com/huggingface/transformers/pull/25972
| 1,881,268,854 |
PR_kwDOCUB6oc5ZiYO2
| 25,972 |
Fix Detr CI
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,694 | 1,693 |
COLLABORATOR
| null |
# What does this PR do?
#24652 updated an expected value (`0.994097`) used in the test (from a contributor), but our CI still gets the original value `0.994096`. This PR just keeps the original one.
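As an aside, a hedged sketch of how such expected-value checks can be made robust to tiny numerical differences between environments (illustrative only, not the actual test code):
```python
import torch

# Comparing with an absolute tolerance avoids flakiness when only the last decimal
# place (0.994096 vs 0.994097) differs across hardware or library versions.
expected = torch.tensor(0.994096)
actual = torch.tensor(0.994097)
assert torch.allclose(actual, expected, atol=1e-4)
```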
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25972/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25972",
"html_url": "https://github.com/huggingface/transformers/pull/25972",
"diff_url": "https://github.com/huggingface/transformers/pull/25972.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25972.patch",
"merged_at": 1693905597000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25971
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25971/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25971/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25971/events
|
https://github.com/huggingface/transformers/pull/25971
| 1,881,209,713 |
PR_kwDOCUB6oc5ZiLoU
| 25,971 |
Update logits_process.py docstrings
|
{
"login": "larekrow",
"id": 127832774,
"node_id": "U_kgDOB56Sxg",
"avatar_url": "https://avatars.githubusercontent.com/u/127832774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/larekrow",
"html_url": "https://github.com/larekrow",
"followers_url": "https://api.github.com/users/larekrow/followers",
"following_url": "https://api.github.com/users/larekrow/following{/other_user}",
"gists_url": "https://api.github.com/users/larekrow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/larekrow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/larekrow/subscriptions",
"organizations_url": "https://api.github.com/users/larekrow/orgs",
"repos_url": "https://api.github.com/users/larekrow/repos",
"events_url": "https://api.github.com/users/larekrow/events{/privacy}",
"received_events_url": "https://api.github.com/users/larekrow/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
This PR fixes points 1 and 2 of #25970 by correcting the docstrings for `RepetitionPenaltyLogitsProcessor` and `EncoderRepetitionPenaltyLogitsProcessor`.
Point 3 is left untouched as it can be "fixed" in multiple ways: either by enforcing `penalty > 1` or by clarifying further in the docstrings.
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25971/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25971",
"html_url": "https://github.com/huggingface/transformers/pull/25971",
"diff_url": "https://github.com/huggingface/transformers/pull/25971.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25971.patch",
"merged_at": 1694518592000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25970
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25970/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25970/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25970/events
|
https://github.com/huggingface/transformers/issues/25970
| 1,881,201,164 |
I_kwDOCUB6oc5wINoM
| 25,970 |
`RepetitionPenaltyLogitsProcessor` and `EncoderRepetitionPenaltyLogitsProcessor` contains incorrect and unclear docstrings
|
{
"login": "larekrow",
"id": 127832774,
"node_id": "U_kgDOB56Sxg",
"avatar_url": "https://avatars.githubusercontent.com/u/127832774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/larekrow",
"html_url": "https://github.com/larekrow",
"followers_url": "https://api.github.com/users/larekrow/followers",
"following_url": "https://api.github.com/users/larekrow/following{/other_user}",
"gists_url": "https://api.github.com/users/larekrow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/larekrow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/larekrow/subscriptions",
"organizations_url": "https://api.github.com/users/larekrow/orgs",
"repos_url": "https://api.github.com/users/larekrow/repos",
"events_url": "https://api.github.com/users/larekrow/events{/privacy}",
"received_events_url": "https://api.github.com/users/larekrow/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@larekrow yes, I agree, there are some inconsistencies and some lack of documentation. \r\n\r\nIssues 1 and 2 get partially resolved with your PR (#25971). On 2, there is clearly an example missing, but I'm reviewing examples ATM.\r\n\r\nRegarding 3: it's a shame that `penalty` has the opposite meaning in on both processors, but changing it would be a breaking change. The best we can do is to clarify the docstring, including documenting the reward case! Would you like to open a PR to clarify this one? :) ",
"Thanks for the feedback @gante. I have attempted to clarify the reward cases in PR #26129 but as alluded to, I feel that the docstrings for `EncoderRepetitionPenaltyLogitsProcessor` will require some adjustments.\r\n\r\nBoth the class name `EncoderRepetitionPenaltyLogitsProcessor` and the original description (which I have left untouched) are misleading, because the class is not actually penalizing the encoder ids but only rewarding it when an intended `hallucination_penalty` value of >1 is given. In fact, it does not penalize any ids in that case.\r\n\r\nThe docstring I wrote for this class became somewhat convoluted because of this complication. A class name that would be more accurate would be `EncoderRepetitionRewardLogitsProcessor`, but this would be a breaking change as you pointed out.\r\n\r\nAny suggestions as to how we should move forward?",
"@larekrow I agree with your sentiment, the original implementation should be better (and I, as a reviewer, should have paid more attention to the implications) 🤗 Our north star here at `transformers` is to preserve backward compatibility, even if the original design is sub-optimal. We may lose in terms of clarity, but production users are reassured that we don't make sudden changes!\r\n\r\nAs such, documenting what's going on (like you did) is the best compromise solution 🤗 Thank you for iterating with me 💛 ",
"No worries, we all appreciate the important work Hugging Face is doing (and there is a lot of work to be done). It's really cool how this huge project is driven by both the staff and the community. Happy to be a part of it 🤗\r\n\r\nI've updated the docstrings according to your remarks in #26129. Please take a look whenever you can!",
"our amzing @gante is off for a few weeks, feel free to ping me once this is ready! 😉 ",
"@ArthurZucker yep this is ready! Please take a look when you can. ",
"PR merged!"
] | 1,693 | 1,697 | 1,697 |
CONTRIBUTOR
| null |
1. The class docstring of `RepetitionPenaltyLogitsProcessor` says "tokens with higher scores are _less_ likely to be selected". However, according to the [paper](https://arxiv.org/pdf/1909.05858.pdf) which states that "this penalized sampling works by _discounting_ the scores of previously generated tokens" and the code which lowers the score when penalizing tokens (e.g. by multiplying a negative score with a 1.2 penalty, 1.2 being a value the paper highlighted), the docstring should be corrected to say that "tokens with higher scores are _more_ likely to be selected".
https://github.com/huggingface/transformers/blob/d8e13b3e04da9e61c6f16df43815656f59688abd/src/transformers/generation/logits_process.py#L314-L317
2. `EncoderRepetitionPenaltyLogitsProcessor` requires an additional `encoder_input_ids` arg whose docstring says "the encoder_input_ids that _should not_ be repeated within the decoder ids". However, according to the [class docstring](https://huggingface.co/docs/transformers/main/en/internal/generation_utils#transformers.EncoderRepetitionPenaltyLogitsProcessor), https://github.com/huggingface/transformers/issues/18354#issuecomment-1219118151, and the code which increases the score of tokens found within the original input ids (e.g. by multiplying a negative score with a 1 / 2 = 0.5 penalty, where `hallucination_penalty = 2` is a value the PR author used), these are the ids that _should_ be repeated within the decoder ids.
https://github.com/huggingface/transformers/blob/d8e13b3e04da9e61c6f16df43815656f59688abd/src/transformers/generation/logits_process.py#L338-L346
3. Both `RepetitionPenaltyLogitsProcessor` and `EncoderRepetitionPenaltyLogitsProcessor` require a `penalty` input, which is enforced as a positive float. However, this input only works as expected when `penalty > 1`. If `0 < penalty < 1` is given, the "penalty" becomes a "reward". The docstring does not mention this in any way.
`RepetitionPenaltyLogitsProcessor`
https://github.com/huggingface/transformers/blob/d8e13b3e04da9e61c6f16df43815656f59688abd/src/transformers/generation/logits_process.py#L307-L308
`EncoderRepetitionPenaltyLogitsProcessor`
https://github.com/huggingface/transformers/blob/d8e13b3e04da9e61c6f16df43815656f59688abd/src/transformers/generation/logits_process.py#L335-L336
@gante
Before delving deeper into the source code and other resources, I was truly confused by the contradicting messages. I hope this will be rectified for other users.
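A small numeric sketch of the behaviour described in points 1 and 3 (illustrative only, mirroring the multiply-negative / divide-positive update mentioned above; this is not the library code itself):
```python
# Previously generated tokens get their logits discounted when penalty > 1
# and boosted when 0 < penalty < 1 (the "reward" case from point 3).
def apply_repetition_penalty(score: float, penalty: float) -> float:
    return score * penalty if score < 0 else score / penalty


print(apply_repetition_penalty(-2.0, 1.2))  # -2.4 -> token becomes less likely
print(apply_repetition_penalty(-2.0, 0.5))  # -1.0 -> token becomes more likely
```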
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25970/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25969
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25969/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25969/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25969/events
|
https://github.com/huggingface/transformers/pull/25969
| 1,881,049,636 |
PR_kwDOCUB6oc5Zhpsq
| 25,969 |
enable optuna multi-objectives feature
|
{
"login": "sywangyi",
"id": 36058628,
"node_id": "MDQ6VXNlcjM2MDU4NjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/36058628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sywangyi",
"html_url": "https://github.com/sywangyi",
"followers_url": "https://api.github.com/users/sywangyi/followers",
"following_url": "https://api.github.com/users/sywangyi/following{/other_user}",
"gists_url": "https://api.github.com/users/sywangyi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sywangyi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sywangyi/subscriptions",
"organizations_url": "https://api.github.com/users/sywangyi/orgs",
"repos_url": "https://api.github.com/users/sywangyi/repos",
"events_url": "https://api.github.com/users/sywangyi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sywangyi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25969). All of your documentation changes will be reflected on that endpoint.",
"cc @muellerzr @pacman100 for input on the changes to trainer "
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes # ((https://github.com/huggingface/transformers/issues/25657))
## Who can review?
@sgugger @ArthurZucker
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25969/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25969",
"html_url": "https://github.com/huggingface/transformers/pull/25969",
"diff_url": "https://github.com/huggingface/transformers/pull/25969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25969.patch",
"merged_at": 1694538083000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25968
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25968/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25968/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25968/events
|
https://github.com/huggingface/transformers/issues/25968
| 1,880,995,559 |
I_kwDOCUB6oc5wHbbn
| 25,968 |
Questions about Accelerate with FSDP
|
{
"login": "nebrelbug",
"id": 25597854,
"node_id": "MDQ6VXNlcjI1NTk3ODU0",
"avatar_url": "https://avatars.githubusercontent.com/u/25597854?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nebrelbug",
"html_url": "https://github.com/nebrelbug",
"followers_url": "https://api.github.com/users/nebrelbug/followers",
"following_url": "https://api.github.com/users/nebrelbug/following{/other_user}",
"gists_url": "https://api.github.com/users/nebrelbug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nebrelbug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nebrelbug/subscriptions",
"organizations_url": "https://api.github.com/users/nebrelbug/orgs",
"repos_url": "https://api.github.com/users/nebrelbug/repos",
"events_url": "https://api.github.com/users/nebrelbug/events{/privacy}",
"received_events_url": "https://api.github.com/users/nebrelbug/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"1: Yes, we dispatch and split the batches. So a batch of 64 on one device when training on 8 GPUs, that batch of 64 is split 8 ways and sent to each GPU as one batch. See the docs here on this behavior: https://huggingface.co/docs/accelerate/concept_guides/performance#observed-batch-sizes\r\n\r\n2: I *think* so, cc @amyeroberts?",
"Hello @nebrelbug, \r\n\r\n1. As Zach mentioned, in multi-gpu/multi-node setting, `per_device_train_batch_size*num_devices` will be the actual batch size. In your case, `per_device_train_batch_size*8` as you are running on 8 GPUs. So, the steps per epoch get reduced proportionally. In addition to the guide attached by Zach, note that when increasing batch size, one needs to increase the learning rate a bit. Also, rather than loss which is a factor of number of samples in batch which is huge when scaling across many GPUs, I would track the metric on eval dataset. \r\n\r\n2. Yes, they should be passed to the model. \r\n\r\n ",
"@muellerzr and @pacman100 thanks so much for the clarifications!\r\n\r\nI didn't properly understand that how batch size works with a multi-GPU setup, or realize that I should increase learning rate! And it makes sense that loss isn't the best metric to use for evaluation."
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.32.1
- Platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: YES
### Who can help?
@muellerzr, @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
loop.py
```python
from transformers import LlamaForCausalLM, LlamaTokenizer, Trainer, TrainingArguments
from accelerate import Accelerator
import torch
import os
from get_data import train_dataset, eval_dataset, data_collator
accelerator = Accelerator()
SIZE = "7b"
MODEL_PATH = f"/mnt/models/llama2/hf/Llama-2-{SIZE}-hf"
NAME = f"llama2-{SIZE}-dolly-15k"
BATCH_SIZE = 8
NUM_EPOCHS = 3
OUTPUT_DIR = os.environ["SLURM_JOB_NAME"]
tokenizer = LlamaTokenizer.from_pretrained(MODEL_PATH, legacy=False)
tokenizer.pad_token = tokenizer.eos_token
model = LlamaForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.bfloat16)
model = accelerator.prepare(model)
training_args = TrainingArguments(
output_dir=OUTPUT_DIR,
num_train_epochs=NUM_EPOCHS,
learning_rate=2e-5,
logging_steps=10,
per_device_train_batch_size=BATCH_SIZE,
remove_unused_columns=False,
save_steps=1000,
save_total_limit=1,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
tokenizer=tokenizer,
data_collator=lambda x: data_collator(x, tokenizer),
)
trainer.train()
trainer.evaluate()
# TODO: model = accelerator.unwrap_model(model)
model.save_pretrained(f"/mnt/finetunes/{NAME}")
tokenizer.save_pretrained(f"/mnt/finetunes/{NAME}")
```
get_data.py
```python
from datasets import load_from_disk, disable_caching
disable_caching()
IGNORE_TOKEN = -100
#####################
# FORMAT DATA #
#####################
template_context = """### Instruction:
{instruction}
### Context:
{context}
### Response:
"""
template_no_context = """### Instruction:
{instruction}
### Response:
"""
def data_to_string(data):
instruction = data["instruction"]
context = data["context"]
response = data["response"]
template = template_context if len(context) > 0 else template_no_context
source = template.format(instruction=instruction, context=context)
return {
"source": source,
"text": source + response,
}
original_dataset = load_from_disk("../datasets/databricks-dolly-15k")["train"]
dataset = original_dataset.map(
data_to_string
).remove_columns(
original_dataset.column_names
).filter(
lambda x: len(x["text"]) < 1000 # TODO: change to 4000
)
#####################
# SPLIT DATA #
#####################
processed_dataset = dataset.train_test_split(test_size=0.1)
train_dataset = processed_dataset["train"]
eval_dataset = processed_dataset["test"]
#####################
# CREATE DATALOADER #
#####################
def data_collator(features, tokenizer):
sources = [feature["source"] for feature in features]
targets = [feature["text"] for feature in features]
source_tokens = tokenizer(
sources,
return_tensors="pt",
padding='longest',
max_length=None,
)
target_tokens = tokenizer(
targets,
return_tensors="pt",
padding='longest',
max_length=None,
)
labels = target_tokens["input_ids"].clone()
for i in range(len(labels)):
source_len = source_tokens["attention_mask"][i].sum()
labels[i, :source_len] = IGNORE_TOKEN
res = {
"input_ids": target_tokens["input_ids"],
"attention_mask": target_tokens["attention_mask"],
"labels": labels,
}
return res
```
accelerate_config.yaml
```yaml
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: FSDP
downcast_bf16: 'no'
fsdp_config:
fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
fsdp_backward_prefetch_policy: BACKWARD_PRE
fsdp_offload_params: false
fsdp_sharding_strategy: 1
fsdp_state_dict_type: FULL_STATE_DICT
fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 8
use_cpu: false
```
Command: `export SLURM_JOB_NAME="testingg" && accelerate launch --config_file ./accelerate_config.yaml loop.py`
### Expected behavior
## Questions:
1. When I run my training script, the progress bar shows only 1/8 of the batches I see when I use 1 process and `device_map="auto"`. Do the Trainer class and Accelerate coordinate to split batches across processes, sync gradients, and update the model?
The speed of the training (and suboptimal loss) make me fear that I'm only training the model on 1/8 of my data. If this is true, how can I handle distributed data correctly?
2. In my `data_collator` function, I return a special `labels` key with the prompt tokens masked to `-100`. Does `labels` get passed in to the model along with `input_ids`?
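For reference, a back-of-the-envelope sketch covering both questions (numbers are illustrative and taken from the config above):
```python
# Question 1: with Trainer + Accelerate, each optimisation step consumes
# per_device_train_batch_size * num_processes (* gradient_accumulation_steps) samples,
# which is why the progress bar shows roughly 1/8 of the steps seen with a single process.
per_device_train_batch_size = 8
num_processes = 8
gradient_accumulation_steps = 2
samples_per_optimizer_step = (
    per_device_train_batch_size * num_processes * gradient_accumulation_steps
)
print(samples_per_optimizer_step)  # 128

# Question 2: label positions set to -100 are ignored by the loss, e.g.
# torch.nn.CrossEntropyLoss(ignore_index=-100), so the masked prompt tokens
# do not contribute to training.
```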
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25968/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25967
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25967/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25967/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25967/events
|
https://github.com/huggingface/transformers/issues/25967
| 1,880,822,768 |
I_kwDOCUB6oc5wGxPw
| 25,967 |
Incomplete model table
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"It looks like `TimmBackbone` also shows no support here. \r\n\r\nI suspect it may have something to do with the regex that matches the model names. For example, `TimmBackbone` doesn't have a `Model`, `Encoder`, `Decoder`, or `ForConditionalGeneration` class. However, I'm not sure why it doesn't work for `MaskformerSwin` or `Speech2Text2` which have the `MaskFormerSwinModel` and `Speech2Text2Decoder` classes. 🤔 \r\n\r\nhttps://github.com/huggingface/transformers/blob/09b2de6eb74b1e5ff4f4c3d9839485f4165627c9/utils/check_table.py#L89\r\n\r\ncc @MKhalusova for additional help!",
"@stevhliu I'll look into it. ",
"@stevhliu, your suspicion was correct. When going through all of the transformer objects ( that we get with `dir(transformers_module)`), we match for names that have `Model`,`Encoder`, `Decoder`, or `ForConditionalGeneration` in it. \r\n\r\nFor `TimmBackbone` the objects are: `['TimmBackbone', 'TimmBackboneConfig']`\r\nFor `MaskformerSwin` we get: `['MaskFormerSwinBackbone', 'MaskFormerSwinConfig']`\r\nFor `Speech2Text2` we get: `['Speech2Text2Config', 'Speech2Text2ForCausalLM', 'Speech2Text2PreTrainedModel', 'Speech2Text2Processor', 'Speech2Text2Tokenizer']`\r\n\r\nSo while the `MaskFormerSwinModel` and `Speech2Text2Decoder` classes exist they are not on the list that we get by calling `direct_transformers_import(TRANSFORMERS_PATH)`. They are not mentioned in the `__init__.py`. \r\n\r\nI am not sure how the `__init__.py` is constructed, but this is where the issue seems to be. \r\n",
"Thanks for your help @MKhalusova!\r\n\r\nSince the issue goes a bit deeper into how the `__init__.py` is constructed for these model classes, maybe @patrickvonplaten can provide some more guidance with `Speech2Text2` and @amyeroberts with `MaskformerSwin` and `TimmBackbone`?",
"For `MaskFormerSwin` and `TimmBackbone` this is because these models are backbones and so not meant to be loaded and used on their own. Instead, they define architectures which can be loaded using the `AutoBackbone` API. \r\n\r\nFor `MaskFormerSwin`, there is a `MaskFormerSwinModel` class which we could have available in `dir(transformers_module)` by having it importable from the main init. \r\n\r\nFor `TimmBackbone`, it's a bit trickier. This class enables us to load in timm weights as a backbone, and so acts as a wrapper around timm, but isn't a transformers model i.e. it's mapping in the pytorch/TF/JAX compatibility doesn't make sense because we can't pass e.g. `from_tf=True` to `TommBackbone.from_pretrained(checkpoint_name, from_tf=True)`. In this case we should probably remove it from being importable from the main init (this would need a deprecation cycle). ",
"Thanks for the explanations, @amyeroberts! I would say, that in this case, we should probably update the script that builds the model table in the docs to exclude `MaskFormerSwin` and `TimmBackbone`. Do you know if it's the same for `Speech2Text2`? ",
"@MKhalusova Sounds good! \r\n\r\nFor Speech2Text2 I have no idea, unfortunately. Looking at the modeling code, there's no `Speech2Text2Model`. It looks like the structure is there to load the checkpoints in AutoEncoderDecoder, but not sure 🤷♀️ @patrickvonplaten will know :) ",
"> It looks like `TimmBackbone` also shows no support here.\r\n> \r\n> I suspect it may have something to do with the regex that matches the model names. For example, `TimmBackbone` doesn't have a `Model`, `Encoder`, `Decoder`, or `ForConditionalGeneration` class. However, I'm not sure why it doesn't work for `MaskformerSwin` or `Speech2Text2` which have the `MaskFormerSwinModel` and `Speech2Text2Decoder` classes. 🤔\r\n> \r\n> https://github.com/huggingface/transformers/blob/09b2de6eb74b1e5ff4f4c3d9839485f4165627c9/utils/check_table.py#L89\r\n> \r\n> cc @MKhalusova for additional help!\r\n\r\nwhat does this word do?\r\n\r\n\r\n",
"I think we can close this issue. ",
"Yes, let's close it! Thanks all!"
] | 1,693 | 1,695 | 1,695 |
MEMBER
| null |
The model table on the index of the documentation seems to be incomplete: https://huggingface.co/docs/transformers/index
Some models, like `MaskFormerSwin` or `Speech2Text2`, show that there's no support in any library.

cc @stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25967/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25966
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25966/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25966/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25966/events
|
https://github.com/huggingface/transformers/pull/25966
| 1,880,739,774 |
PR_kwDOCUB6oc5ZgnKG
| 25,966 |
Fix typo
|
{
"login": "susnato",
"id": 56069179,
"node_id": "MDQ6VXNlcjU2MDY5MTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/56069179?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/susnato",
"html_url": "https://github.com/susnato",
"followers_url": "https://api.github.com/users/susnato/followers",
"following_url": "https://api.github.com/users/susnato/following{/other_user}",
"gists_url": "https://api.github.com/users/susnato/gists{/gist_id}",
"starred_url": "https://api.github.com/users/susnato/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/susnato/subscriptions",
"organizations_url": "https://api.github.com/users/susnato/orgs",
"repos_url": "https://api.github.com/users/susnato/repos",
"events_url": "https://api.github.com/users/susnato/events{/privacy}",
"received_events_url": "https://api.github.com/users/susnato/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Thanks for this PR @susnato . Would you like to find/replace all occurrences of `lenght` in the codebase? Thank you!",
"Hi @ydshieh , I have changed all occurrences of `lenght` to `length`. ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25966). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,694 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR changes all `lenght` to `length`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25966/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25966",
"html_url": "https://github.com/huggingface/transformers/pull/25966",
"diff_url": "https://github.com/huggingface/transformers/pull/25966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25966.patch",
"merged_at": 1693901546000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25965
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25965/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25965/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25965/events
|
https://github.com/huggingface/transformers/issues/25965
| 1,880,666,772 |
I_kwDOCUB6oc5wGLKU
| 25,965 |
XPU support
|
{
"login": "Serizao",
"id": 11671895,
"node_id": "MDQ6VXNlcjExNjcxODk1",
"avatar_url": "https://avatars.githubusercontent.com/u/11671895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Serizao",
"html_url": "https://github.com/Serizao",
"followers_url": "https://api.github.com/users/Serizao/followers",
"following_url": "https://api.github.com/users/Serizao/following{/other_user}",
"gists_url": "https://api.github.com/users/Serizao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Serizao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Serizao/subscriptions",
"organizations_url": "https://api.github.com/users/Serizao/orgs",
"repos_url": "https://api.github.com/users/Serizao/repos",
"events_url": "https://api.github.com/users/Serizao/events{/privacy}",
"received_events_url": "https://api.github.com/users/Serizao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Serizao, thanks for opening this issue! \r\n\r\nThere's a PR #25714 for adding support for XPU with the trainer. \r\n\r\nCould you share a minimal reproducer so we can replicate the issue? The underlying problem might be the weight allocation rather then XPU itself. \r\n\r\ncc @muellerzr @pacman100 ",
"Hi @Serizao , post this PR #25714 , huggingface suite should be functional with the next gen arc systems from Intel. If you face any issues on any Intel device , please send a reproducer here. Yes Rahul (the repo mentioned) is my colleague and this PR aims to resolve issues arising from HF suite on Intel side. ",
"My code to reproduce\r\n```\r\nimport csv\r\nimport intel_extension_for_pytorch as ipex\r\nimport torch\r\nfrom random import randint\r\nfrom tqdm import tqdm\r\nfrom itertools import islice, zip_longest\r\n\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq\r\nfrom transformers import Trainer, TrainingArguments\r\n\r\nimport warnings\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning, module=\"intel_extension_for_pytorch\")\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning, module=\"torchvision.io.image\", lineno=13)\r\n\r\n\r\nDEVICE = torch.device(\"xpu\" if torch.xpu.is_available() else \"cpu\")\r\nprint(f\"Finetuning on device: {ipex.xpu.get_device_name()}\")\r\ndef get_device(self):\r\n if torch.xpu.is_available():\r\n return DEVICE\r\n else:\r\n return self.device\r\n\r\ndef place_model_on_device(self):\r\n self.model.to(self.args.device)\r\n\r\n\r\ndeviceCompute = torch.device(\"xpu\" if torch.xpu.is_available() else \"cpu\")\r\nprint(f\"Using device: {deviceCompute}\")\r\nmodel_name = \"facebook/nllb-200-distilled-600M\"\r\n#model_name =\"Helsinki-NLP/opus-mt-tc-big-fr-en\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(model_name)\r\nmodel.to(deviceCompute)\r\n\r\n\r\ndataset = load_dataset(\"json\", data_files=\"dataset-3/unit/data.json\")\r\ndataset = dataset[\"train\"].shuffle(seed=42)\r\ndataset = dataset.shard(num_shards=10, index=0)\r\n\r\n\r\ndef preprocess_function(examples):\r\n padding = \"max_length\"\r\n max_length = 512\r\n\r\n inputs = [ex for ex in examples[\"fr\"]]\r\n targets = [ex for ex in examples[\"en\"]]\r\n model_inputs = tokenizer(inputs, padding=padding, truncation=True)\r\n labels = tokenizer(targets, padding=padding, truncation=True)\r\n #model_inputs = tokenizer(inputs, max_length=max_length, padding=padding, truncation=True)\r\n #labels = tokenizer(targets, max_length=max_length, padding=padding, truncation=True)\r\n\r\n model_inputs[\"labels\"] = labels[\"input_ids\"]\r\n return model_inputs\r\n\r\n\r\ntrain_dataset = dataset.map(preprocess_function, batched=True, desc=\"Running tokenizer\")\r\ndata_collator = DataCollatorForSeq2Seq(\r\n tokenizer,\r\n model=model,\r\n label_pad_token_id=tokenizer.pad_token_id,\r\n pad_to_multiple_of=64\r\n )\r\n\r\n# Paramètres d'entraînement avec PyTorch\r\ntraining_args = TrainingArguments(\r\n gradient_accumulation_steps=2,\r\n output_dir=\"./results\",\r\n per_device_train_batch_size=4,\r\n num_train_epochs=3,\r\n logging_dir=\"./logs\",\r\n logging_steps=100,\r\n save_steps=500,\r\n bf16=True, # setting datype to bfloat16\r\n save_total_limit=2, # Conservez seulement les 2 derniers checkpoints\r\n push_to_hub=False,\r\n #use_ipex=True, # optimize the model and optimizer using intel extension for pyotrch (optional)\r\n)\r\n\r\ntrainer = Trainer(\r\n model=model, \r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=train_dataset,\r\n)\r\n\r\n\r\ntrainer.train()\r\n\r\nsave_directory = \"./dataset-3/models/new-finetune-fb-nllb-600M\"\r\nmodel.save_pretrained(save_directory)\r\ntokenizer.save_pretrained(save_directory)\r\n\r\n\r\n```\r\nPlease excuse the quality of my code, I'm a novice in artificial intelligence."
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
### Feature request
I would like to know if XPU support is in the pipeline?
I tried to use it with the current package and I get this error:
`RuntimeError: Expected all tensors to be on the same device, but found at least two devices, xpu:0 and cpu! (when checking argument for argument index in method wrapper_XPU__index_select)`
I read the docs carefully and didn't find any mention of an XPU backend, so I think it is not implemented yet. I found a repo which implements it, https://github.com/rahulunair/transformers_xpu, but I hit another error with it; still, I think that implementation is good.
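For context, a minimal device-placement sketch of what that error means (illustrative only; it is not a fix for the Trainer itself): the model and every input tensor must live on the same device, here `xpu`.
```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the "xpu" device)

device = torch.device("xpu" if torch.xpu.is_available() else "cpu")
model = torch.nn.Linear(4, 2).to(device)
inputs = torch.randn(1, 4).to(device)  # keeping inputs on the same device avoids the mismatch
print(model(inputs).device)
```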
### Motivation
Support Intel GPUs A770 and A750, and the many which will come
### Your contribution
I can test if needed
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25965/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25964
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25964/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25964/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25964/events
|
https://github.com/huggingface/transformers/issues/25964
| 1,880,645,773 |
I_kwDOCUB6oc5wGGCN
| 25,964 |
[bug] `facebook/wav2vec2-conformer-rope-large-960h-ft` refuses to work in `fp16`
|
{
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,693 | 1,693 | 1,693 |
MEMBER
| null |
### System Info
- `transformers` version: 4.32.1
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (gpu)
- Jax version: 0.4.14
- JaxLib version: 0.4.14
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi @Patrick
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
`facebook/wav2vec2-conformer-rope-large-960h-ft` throws a rather cryptic error (`RuntimeError: mat1 and mat2 must have the same dtype`) when loaded in half-precision.
repro: https://github.com/Vaibhavs10/scratchpad/blob/main/conformer_wav2vec2_repro.ipynb
model: https://huggingface.co/facebook/wav2vec2-conformer-rope-large-960h-ft
Note: It works fine on `fp32`
### Expected behavior
It should work without any issues!
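A condensed sketch of the failing usage (the full repro is in the linked notebook; the class and processor names here are the standard ones for this checkpoint, used illustratively):
```python
import torch
from transformers import AutoProcessor, Wav2Vec2ConformerForCTC

model_id = "facebook/wav2vec2-conformer-rope-large-960h-ft"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ConformerForCTC.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Dummy audio just to exercise the forward pass; per the issue, running the model
# in float16 fails with "mat1 and mat2 must have the same dtype".
dummy_audio = torch.zeros(16_000).numpy()
inputs = processor(dummy_audio, sampling_rate=16_000, return_tensors="pt").to("cuda")
with torch.no_grad():
    logits = model(**inputs).logits
```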
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25964/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25963
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25963/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25963/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25963/events
|
https://github.com/huggingface/transformers/pull/25963
| 1,880,619,196 |
PR_kwDOCUB6oc5ZgM-P
| 25,963 |
Fix failing test
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25963). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
MEMBER
| null |
The `self.finetuned_from` field can be `None` as well as an empty string; in both cases, the `Trainer` should not save that value to the metadata.
Fixes failing tests such as:
```
=========================== short test summary info ============================
FAILED tests/trainer/test_trainer.py::TrainerIntegrationWithHubTester::test_push_to_hub - huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64f602bf-2d4e7f4f785e0578301e9f88;1471536e-9bca-4afd-830e-6c8f0471c6f1)
Bad request for commit endpoint:
"base_model" is not allowed to be empty
FAILED tests/trainer/test_trainer.py::TrainerIntegrationWithHubTester::test_push_to_hub_in_organization - huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64f602c0-09d498af536f9985424a47b5;1e5b77dd-a8a7-4b87-891a-a70336f5bc30)
Bad request for commit endpoint:
"base_model" is not allowed to be empty
============ 2 failed, 22 passed, 5 skipped, 36 warnings in 44.14s =============
```
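In other words (a minimal sketch of the guard, not the exact Trainer code):
```python
# Both None and "" are falsy, so a single truthiness check covers the two cases
# in which "base_model" must not be written to the model card metadata.
finetuned_from = None  # or ""
metadata = {}
if finetuned_from:
    metadata["base_model"] = finetuned_from
print(metadata)  # {}
```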
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25963/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25963",
"html_url": "https://github.com/huggingface/transformers/pull/25963",
"diff_url": "https://github.com/huggingface/transformers/pull/25963.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25963.patch",
"merged_at": 1693846431000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25962
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25962/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25962/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25962/events
|
https://github.com/huggingface/transformers/pull/25962
| 1,880,612,302 |
PR_kwDOCUB6oc5ZgLe8
| 25,962 |
Generate: legacy mode is only triggered when `generation_config` is untouched
|
{
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"The first thing I can think of is \r\n\r\nhttps://docs.python.org/3/reference/datamodel.html#object.__hash__\r\n\r\n> ... Although they remain constant within an individual Python process, they are not predictable between repeated invocations of Python. ...\r\n\r\nTherefore, using the built-in `hash` is not guaranteed to give the same value even on the same machine (across different python run).\r\n\r\nI would suggest to use the standard hash library like `hashlib` and use something like `sha256` on the string.\r\n\r\nI will take a deeper look on the generation config side later today.",
"Also, I would create a new dict with sorted keys before `json.dumps`. The dict is ordered (with python >=3.7), but it's in insertion order. So if a user doing some non-standard operation like below, the dict is the same, but the string represention would be different and hence different hash.\r\n\r\n(I am not sure if the following could be done with `GenerationConfig` though.)\r\n\r\n```python3\r\nimport json\r\n\r\n\r\nd = {\"a\": 1, \"b\": 2}\r\nprint(json.dumps(d, indent=2) + \"\\n\")\r\n\r\nd.pop(\"a\")\r\nprint(json.dumps(d, indent=2) + \"\\n\")\r\n\r\nd[\"a\"] = 1\r\nprint(json.dumps(d, indent=2) + \"\\n\")\r\n```\r\ngives\r\n\r\n```bash\r\n{\r\n \"a\": 1,\r\n \"b\": 2\r\n}\r\n\r\n{\r\n \"b\": 2\r\n}\r\n\r\n{\r\n \"b\": 2,\r\n \"a\": 1\r\n}\r\n```\r\n",
"@ydshieh good points! \r\n\r\nThe hash is computed from a sorted json string (i.e. a `json.dumps` call with `sort_keys=True`), so the ordering part is already sorted 🙌 \r\n\r\nAs for the hashing library, I had no idea. Going to update it to a better one. Thank you for your input 💛 ",
"> @ydshieh good points!\r\n> \r\n> The hash is computed from a sorted json string (i.e. a `json.dumps` call with `sort_keys=True`), so the ordering part is already sorted 🙌\r\n\r\nI looked the wrong place: `class InputExample.to_json_string`, sorry 😅 ",
"After a second, it seems `_original_object_hash` is never saved and reloaded. So its lifetime is only inside the python run when it is created, and therefore, we don't need to worry what I mentioned previously."
] | 1,693 | 1,694 | 1,694 |
MEMBER
| null |
This is a long description, but bear with me -- it is important to understand the whole context here!
# Background
Long ago, in ancient times (2022), `model.config` held all model-related configurations, including generative configuration. This means it would be possible to modify the model config to parameterize `.generate()`, e.g.:
```py
# (Load model, prepare input)
model.generate(**model_inputs) # generates up to 20 tokens, the default
model.config.max_length = 100
model.generate(**model_inputs) # now generates up to 100 tokens
```
By the end of last year, since a given model could have its generation parameterized differently for separate purposes, we carved out the generation parameterization into `generation_config`. To facilitate the transition process, in the absence of a manually set `generation_config`, its parameters are pulled from `model.config` and `_from_model_config` is set to `True` to flag it.
*Keeping retrocompatibility is paramount*, so when `_from_model_config` is `True`, we revert to the legacy mode and `model.config` takes precedence.
# The problem
Currently, here's what happens:
✅ The user never touches `model.generation_config` and does all parameterization through `.generate()` and/or through `model.config` (i.e. legacy mode)
✅ The user sets a new generation config in the model (`model.generation_config = generation_config`) and/or passes a new `generation_config` to `.generate()`, where the former takes precedence.
❌ Hybrid situations: the user modifies a `model.generation_config` while `_from_model_config` is set to `True`. This fails because `model.config` takes precedence, and thus changes are ignored (*with a warning!*). #25917 is an example of this issue, but there were others in the past.
# The solution (this PR)
My proposed solution is based on the following assumption: if the user touches `generation_config`, then they are aware of the best practices to configure `.generate()`. Therefore, when it happens, NEVER enter the legacy mode, and `model.generation_config` is always in charge.
In practice, this is done through hashing: at `generation_config` creation time, we store its hash. If the hash changes, then the user has touched the object, rejecting the legacy mode.
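For illustration only, a minimal sketch of how such a hash can be computed; the helper name and exact serialization are assumptions, not the code in the diff:
```python
import hashlib
import json

from transformers import GenerationConfig

def config_hash(config_dict: dict) -> str:
    # Sort the keys so insertion order doesn't change the hash, and use
    # sha256 so the value is stable across separate Python processes.
    serialized = json.dumps(config_dict, sort_keys=True)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

generation_config = GenerationConfig()
original_hash = config_hash(generation_config.to_dict())

generation_config.max_new_tokens = 100  # the user "touches" the config
touched = config_hash(generation_config.to_dict()) != original_hash
print(touched)  # True -> the legacy mode is rejected
```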
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25962/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25962",
"html_url": "https://github.com/huggingface/transformers/pull/25962",
"diff_url": "https://github.com/huggingface/transformers/pull/25962.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25962.patch",
"merged_at": 1694516897000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25961
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25961/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25961/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25961/events
|
https://github.com/huggingface/transformers/pull/25961
| 1,880,609,823 |
PR_kwDOCUB6oc5ZgK8v
| 25,961 |
Skip `push_to_hub` for now
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Superseded by https://github.com/huggingface/transformers/pull/25963\r\n\r\nThanks for your work @ydshieh!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25961). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
COLLABORATOR
| null |
# What does this PR do?
Currently failing on `main`.
Error
```bash
elif response.status_code == 400:
message = (
f"\n\nBad request for {endpoint_name} endpoint:" if endpoint_name is not None else "\n\nBad request:"
)
> raise BadRequestError(message, response=response) from e
E huggingface_hub.utils._errors.BadRequestError: (Request ID: Root=1-64f5eaed-3b71af5025f88d14210df38d;e1b1858a-4f29-4bea-b5e2-4f0eb4dac173)
E
E Bad request for commit endpoint:
E "base_model" is not allowed to be empty
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25961/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25961",
"html_url": "https://github.com/huggingface/transformers/pull/25961",
"diff_url": "https://github.com/huggingface/transformers/pull/25961.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25961.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25960
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25960/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25960/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25960/events
|
https://github.com/huggingface/transformers/pull/25960
| 1,880,552,743 |
PR_kwDOCUB6oc5Zf-J6
| 25,960 |
Put Falcon back
|
{
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,694 | 1,693 |
MEMBER
| null |
Goes hand in hand with https://github.com/huggingface/transformers/pull/25954
As seen offline with @ArthurZucker but would still like additional eyes on this as it's touching critical code (that I'd honestly rather not be touching, but I don't see another way around it).
This code will be removed as soon as we can revert the change on the Falcon repositories as `from_pretrained` will then automatically download the appropriate revision.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25960/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25960",
"html_url": "https://github.com/huggingface/transformers/pull/25960",
"diff_url": "https://github.com/huggingface/transformers/pull/25960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25960.patch",
"merged_at": 1693851430000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25959
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25959/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25959/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25959/events
|
https://github.com/huggingface/transformers/pull/25959
| 1,880,521,912 |
PR_kwDOCUB6oc5Zf3Zi
| 25,959 |
only main process should call _save on deepspeed zero3
|
{
"login": "zjjMaiMai",
"id": 13913992,
"node_id": "MDQ6VXNlcjEzOTEzOTky",
"avatar_url": "https://avatars.githubusercontent.com/u/13913992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zjjMaiMai",
"html_url": "https://github.com/zjjMaiMai",
"followers_url": "https://api.github.com/users/zjjMaiMai/followers",
"following_url": "https://api.github.com/users/zjjMaiMai/following{/other_user}",
"gists_url": "https://api.github.com/users/zjjMaiMai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zjjMaiMai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zjjMaiMai/subscriptions",
"organizations_url": "https://api.github.com/users/zjjMaiMai/orgs",
"repos_url": "https://api.github.com/users/zjjMaiMai/repos",
"events_url": "https://api.github.com/users/zjjMaiMai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zjjMaiMai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr @pacman100 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25959). All of your documentation changes will be reflected on that endpoint.",
"@pacman100 any things i need to do?",
"@zjjMaiMai One of the hub tests are failing, complaining that the base_model is empty when pushing to the hub. Could you try running this test locally to see whether it's a result of the changes in this PR?",
"> @zjjMaiMai One of the hub tests are failing, complaining that the base_model is empty when pushing to the hub. Could you try running this test locally to see whether it's a result of the changes in this PR?\r\n\r\n```\r\n$ pytest tests/trainer/test_trainer.py -k 'test_push_to_hub'\r\n================================================================================================================================ test session starts ================================================================================================================================\r\nplatform linux -- Python 3.9.2, pytest-7.4.1, pluggy-1.3.0\r\nconfigfile: setup.cfg\r\nplugins: timeout-2.1.0, hypothesis-6.84.2, dash-2.13.0, xdist-3.3.1, anyio-3.7.1\r\ncollected 95 items / 91 deselected / 4 selected \r\n\r\ntests/trainer/test_trainer.py ssss [100%]\r\n\r\n================================================================================================================================= warnings summary ==================================================================================================================================\r\n../../../../../../home/.local/lib/python3.9/site-packages/_pytest/config/__init__.py:1376\r\n /home/.local/lib/python3.9/site-packages/_pytest/config/__init__.py:1376: PytestConfigWarning: Unknown config option: doctest_glob\r\n \r\n self._warn_or_fail_if_strict(f\"Unknown config option: {key}\\n\")\r\n\r\n-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html\r\n=================================================================================================================== 4 skipped, 91 deselected, 1 warning in 1.69s ====================================================================================================================\r\n$ git branch \r\n* fix_save_deepspeed_3\r\n main\r\n```",
"@zjjMaiMai Could you try and rebase on main? This should resolve the failing tests. ",
"All green! @amyeroberts "
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# Background
`trainer._save` is called on all processes after https://github.com/huggingface/transformers/pull/25817, which raises a `FileExistsError` when the model is saved.
# What does this PR do?
This PR fixes it: `trainer._save` is now called on the main process only (a rough sketch of the guard follows).
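A minimal, simplified excerpt of the intended guard, assuming the standard `Trainer` attributes; the exact condition in the diff may differ:
```python
# Inside Trainer.save_model (simplified): only the process that should write
# files calls _save, so the ranks no longer race on the same output paths
# and hit FileExistsError under DeepSpeed ZeRO-3.
if self.args.should_save:
    self._save(output_dir)
```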
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25959/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25959",
"html_url": "https://github.com/huggingface/transformers/pull/25959",
"diff_url": "https://github.com/huggingface/transformers/pull/25959.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25959.patch",
"merged_at": 1694433397000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25958
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25958/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25958/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25958/events
|
https://github.com/huggingface/transformers/pull/25958
| 1,880,519,165 |
PR_kwDOCUB6oc5Zf2yz
| 25,958 |
[doc] Always call it Agents for consistency
|
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,693 | 1,693 |
MEMBER
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25958/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25958",
"html_url": "https://github.com/huggingface/transformers/pull/25958",
"diff_url": "https://github.com/huggingface/transformers/pull/25958.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25958.patch",
"merged_at": 1693913240000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25957
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25957/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25957/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25957/events
|
https://github.com/huggingface/transformers/issues/25957
| 1,880,408,663 |
I_kwDOCUB6oc5wFMJX
| 25,957 |
Add support for yarn
|
{
"login": "RonanKMcGovern",
"id": 78278410,
"node_id": "MDQ6VXNlcjc4Mjc4NDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/78278410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RonanKMcGovern",
"html_url": "https://github.com/RonanKMcGovern",
"followers_url": "https://api.github.com/users/RonanKMcGovern/followers",
"following_url": "https://api.github.com/users/RonanKMcGovern/following{/other_user}",
"gists_url": "https://api.github.com/users/RonanKMcGovern/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RonanKMcGovern/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RonanKMcGovern/subscriptions",
"organizations_url": "https://api.github.com/users/RonanKMcGovern/orgs",
"repos_url": "https://api.github.com/users/RonanKMcGovern/repos",
"events_url": "https://api.github.com/users/RonanKMcGovern/events{/privacy}",
"received_events_url": "https://api.github.com/users/RonanKMcGovern/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The error is not really giving you any clues as to what went wrong here in my opinion. You should be able to load the model if you pass `trust_remote_code=True`. \r\n\r\n```python\r\nmodel_id = \"TheBloke/Yarn-Llama-2-7B-128K-GPTQ\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_id,\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n revision=\"main\",\r\n trust_remote_code=True\r\n)\r\n```",
"Many thanks @casper-hansen , I did that and it was helpful.\r\n\r\nI'm now getting:\r\n```\r\n>>>> Flash Attention installed\r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\n[~/.cache/huggingface/modules/transformers_modules/TheBloke/Yarn-Llama-2-7B-128K-GPTQ/8cfa601c38543979a34fb31cbcd1d25682d020c4/modeling_llama_together_yarn.py](https://localhost:8080/#) in <module>\r\n 51 try:\r\n---> 52 from flash_attn.layers.rotary import apply_rotary_emb_func\r\n 53 flash_rope_installed = True\r\n\r\n12 frames\r\nModuleNotFoundError: No module named 'flash_attn.ops.triton'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nImportError Traceback (most recent call last)\r\n[~/.cache/huggingface/modules/transformers_modules/TheBloke/Yarn-Llama-2-7B-128K-GPTQ/8cfa601c38543979a34fb31cbcd1d25682d020c4/modeling_llama_together_yarn.py](https://localhost:8080/#) in <module>\r\n 55 except ImportError:\r\n 56 flash_rope_installed = False\r\n---> 57 raise ImportError('Please install RoPE kernels: `pip install git+[https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary`](https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary%60)')\r\n 58 \r\n 59 \r\n\r\nImportError: Please install RoPE kernels: `pip install git+[https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary`](https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary%60)\r\n```\r\nNote that I have installed flash-attn (and the csrc/rotary package):\r\n```\r\npip install flash-attn --no-build-isolation\r\npip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary\r\n```",
"> ```\r\n> pip install flash-attn --no-build-isolation\r\n> pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary\r\n> ```\r\n\r\nThere are many \"gotchas\" with this new repo. Try creating a new environment / uninstall your already installed pip packages.\r\n\r\nI have not confirmed if it is the order of installation or if it is only flash attention v2.1.1 that works. But this worked for me after running into the same issue as you\r\n\r\n```\r\npip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary\r\npip install flash-attn==2.1.1 --no-build-isolation\r\n```",
"Yeah that worked. Thanks!\r\n\r\nI'm facing an issue now with invalid probabilities. I'll aim to replicate exactly what's recommended in the morning and revert back here.\r\n\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n[<ipython-input-42-081a24bc1ede>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 stream(f'howdy')\r\n\r\n3 frames\r\n[/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py](https://localhost:8080/#) in sample(self, input_ids, logits_processor, stopping_criteria, logits_warper, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, streamer, **model_kwargs)\r\n 2764 # sample\r\n 2765 probs = nn.functional.softmax(next_token_scores, dim=-1)\r\n-> 2766 next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)\r\n 2767 \r\n 2768 # finished sentences should have their next token be a padding token\r\n\r\nRuntimeError: probability tensor contains either `inf`, `nan` or element < 0\r\n```\r\nfrom:\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer\r\n\r\n# Define a stream *without* function calling capabilities\r\ndef stream(user_prompt):\r\n\r\n prompt = f\"{user_prompt.strip()}\\n\\n\"\r\n\r\n inputs = tokenizer([prompt], return_tensors=\"pt\").to(runtimeFlag)\r\n\r\n # with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):\r\n streamer = TextStreamer(tokenizer)\r\n\r\n # Despite returning the usual output, the streamer will also print the generated text to stdout.\r\n _ = model.generate(**inputs, streamer=streamer, max_new_tokens=500)\r\n```",
"> I'm facing an issue now with invalid probabilities.\r\n\r\nOk, this issue is with the TheBloke/Yarn-Llama-2-7B-128K-GPTQ, not with the 13B version.\r\n\r\nHere's the working code:\r\n```\r\n!pip3 install git+https://github.com/huggingface/transformers.git\r\n!pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/\r\n!pip3 install git+https://github.com/huggingface/optimum.git\r\n!pip3 install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary\r\n!pip3 install flash-attn==2.1.1 --no-build-isolation\r\n\r\nimport transformers\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\nimport torch\r\n\r\nmodel_id = \"TheBloke/Yarn-Llama-2-13B-128K-GPTQ\"\r\n# To use a different branch, change revision\r\n# For example: revision=\"gptq-4bit-32g-actorder_True\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id,\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n revision=\"main\",\r\n trust_remote_code=True)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)\r\n```\r\n\r\nJust an added note that I don't recommend trying this because - even at 8,000 token context length - the output quality is poor and repetitive. Further, on an A100 - 40 GB, the memory usage was spiking to use up nearly all of the memory even on 8k tokens.",
"> Just an added note that I don't recommend trying this because - even at 8,000 token context length - the output quality is poor and repetitive. Further, on an A100 - 40 GB, the memory usage was spiking to use up nearly all of the memory even on 8k tokens.\r\n\r\nI suspect it needs fine-tuning as this model is only pre trained and not ready for real usage.",
"> > I'm facing an issue now with invalid probabilities.\r\n> \r\n> Ok, this issue is with the TheBloke/Yarn-Llama-2-7B-128K-GPTQ, not with the 13B version.\r\n> \r\n> Here's the working code:\r\n> \r\n...\r\n> \r\n> Just an added note that I don't recommend trying this because - even at 8,000 token context length - the output quality is poor and repetitive. Further, on an A100 - 40 GB, the memory usage was spiking to use up nearly all of the memory even on 8k tokens.\r\n\r\n> I suspect it needs fine-tuning as this model is only pre trained and not ready for real usage.\r\n\r\nThanks for the note, i can loaded [TheBloke/Yarn-Llama-2-13B-128K-GPTQ:gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GPTQ) with ooba exllama successfully. (10240 token)\r\n\r\nBut yes, the output quality is poor. I will try [TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ](https://huggingface.co/TheBloke/Airoboros-L2-13B-2_1-YaRN-64K-GPTQ) to see if i can have better result.\r\n\r\nUpdate: Airoboros-L2-13B-2_1-YaRN-64K-GPTQ can output meaningful response, but i feel no good as [TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ]. Hope YaRN can have more fine-tuning in the future.",
"I recommend using CodeLlama, and using AWQ if you want to quantize. It stretches out automatically fairly well to 32k context. Vid here: https://youtu.be/ELax81LjFhU\r\n\r\nActually I think that LongLoRA is probably now the best performing long context model. I'm working on making an AWQ for that.\r\n\r\n"
] | 1,693 | 1,695 | 1,693 |
NONE
| null |
### System Info
transformers 4.33 (unreleased).
```
model_id = "TheBloke/Yarn-Llama-2-7B-128K-GPTQ"
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map="auto",
revision="main")
```
leading to:
```
ValueError Traceback (most recent call last)
[<ipython-input-19-5c42e5a41528>](https://localhost:8080/#) in <cell line: 1>()
----> 1 model = AutoModelForCausalLM.from_pretrained(
2 model_id,
3 torch_dtype=torch.float16,
4 device_map="auto",
5 revision="main")
4 frames
[/usr/local/lib/python3.10/dist-packages/transformers/models/llama/configuration_llama.py](https://localhost:8080/#) in _rope_scaling_validation(self)
165
166 if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
--> 167 raise ValueError(
168 "`rope_scaling` must be a dictionary with with two fields, `type` and `factor`, "
169 f"got {self.rope_scaling}"
ValueError: `rope_scaling` must be a dictionary with with two fields, `type` and `factor`, got {'factor': 32.0, 'original_max_position_embeddings': 4096, 'type': 'yarn', 'finetuned': True}
```
### Who can help?
@SunMarc @PanQiWei
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
See above. Also, this issue on [HuggingFace](https://huggingface.co/TheBloke/Yarn-Llama-2-13B-128K-GPTQ/discussions/1)
### Expected behavior
To support YaRN, it seems that transformers would need to accept the additional RoPE-scaling parameters (e.g. `original_max_position_embeddings` and `finetuned`) shown in the error above.
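For reference, a minimal illustration of the mismatch, with the values taken from the error above (`linear` is one of the currently accepted scaling types):
```python
# Accepted by the current LlamaConfig rope_scaling validation (exactly two fields):
supported_rope_scaling = {"type": "linear", "factor": 32.0}

# What the YaRN checkpoint ships in its config.json (rejected by the validation):
yarn_rope_scaling = {
    "type": "yarn",
    "factor": 32.0,
    "original_max_position_embeddings": 4096,
    "finetuned": True,
}
```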
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25957/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25957/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25956
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25956/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25956/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25956/events
|
https://github.com/huggingface/transformers/issues/25956
| 1,880,396,024 |
I_kwDOCUB6oc5wFJD4
| 25,956 |
resume_from_checkpoint may fail with auto_find_batch_size
|
{
"login": "n-splv",
"id": 75306162,
"node_id": "MDQ6VXNlcjc1MzA2MTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/75306162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/n-splv",
"html_url": "https://github.com/n-splv",
"followers_url": "https://api.github.com/users/n-splv/followers",
"following_url": "https://api.github.com/users/n-splv/following{/other_user}",
"gists_url": "https://api.github.com/users/n-splv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/n-splv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/n-splv/subscriptions",
"organizations_url": "https://api.github.com/users/n-splv/orgs",
"repos_url": "https://api.github.com/users/n-splv/repos",
"events_url": "https://api.github.com/users/n-splv/events{/privacy}",
"received_events_url": "https://api.github.com/users/n-splv/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @muellerzr "
] | 1,693 | 1,702 | 1,702 |
NONE
| null |
### System Info
When we resume training from a checkpoint, the process may stall, because the number of steps already completed according to the checkpoint may turn out to be greater than the estimated total number of steps. When `auto_find_batch_size` is turned on, the trainer first tries a larger batch size before running out of memory and falling back to a smaller one.
Consider a simple example:
We want to train a model on 100 samples for 10 epochs. Here is what happens:
1. The trainer tries to work with a larger `batch_size = 8`. The estimated number of steps is 100 * 10 / 8 = 125;
2. We run out of GPU memory, and eventually the batch_size gets reduced to 2. We now have 100 * 10 / 2 = 500 steps to go;
3. At the step 150 we save a checkpoint and stop the training;
4. Later we load the model from the checkpoint and try to continue training with the same params. The trainer once again tries batch_size = 8, estimates 125 total steps and... finishes immediately, since the checkpoint already records 150 of those 125 estimated steps (see the short illustration below).
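A short illustration of the mismatch, using the hypothetical numbers from the steps above:
```python
samples, epochs = 100, 10
estimated_total_steps = samples * epochs // 8   # 125, with the auto-found batch_size of 8
completed_steps_in_checkpoint = 150             # saved while actually training with batch_size = 2
# 150 >= 125, so the resumed run believes training is already finished and exits at once.
print(completed_steps_in_checkpoint >= estimated_total_steps)  # True
```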
### Who can help?
@muellerz, @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
args = Seq2SeqTrainingArguments(
...
auto_find_batch_size=True,
)
train_result = trainer.train(resume_from_checkpoint=CHECKPOINT_DIR)
```
### Expected behavior
The information about the batch size that was actually used should probably be saved somewhere in the checkpoint, and the trainer should be smart enough to account for it when interpreting the number of completed steps. For now, it seems like the only solution is to continue the training by manually providing the same batch size, which is not intuitive and somewhat restricting - suppose my hardware changed but I want to resume the training from my checkpoint.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25956/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25955
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25955/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25955/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25955/events
|
https://github.com/huggingface/transformers/pull/25955
| 1,880,338,924 |
PR_kwDOCUB6oc5ZfO5D
| 25,955 |
Fix smart check
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Going to merge as not to block the release.",
"(once #25963 is merged)"
] | 1,693 | 1,693 | 1,693 |
COLLABORATOR
| null |
# What does this PR do?
Fix #25944
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25955/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25955",
"html_url": "https://github.com/huggingface/transformers/pull/25955",
"diff_url": "https://github.com/huggingface/transformers/pull/25955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25955.patch",
"merged_at": 1693846475000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25954
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25954/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25954/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25954/events
|
https://github.com/huggingface/transformers/pull/25954
| 1,880,325,689 |
PR_kwDOCUB6oc5ZfL63
| 25,954 |
Add proper Falcon docs and conversion script
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,694 | 1,693 |
MEMBER
| null |
This PR adds Falcon documentation (we never pushed any before because the main models weren't supported properly) and adds a conversion script for turning custom code checkpoints into in-library ones.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25954/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25954",
"html_url": "https://github.com/huggingface/transformers/pull/25954",
"diff_url": "https://github.com/huggingface/transformers/pull/25954.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25954.patch",
"merged_at": 1693844315000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25953
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25953/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25953/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25953/events
|
https://github.com/huggingface/transformers/pull/25953
| 1,880,313,758 |
PR_kwDOCUB6oc5ZfJRu
| 25,953 |
Update RAG README.md with correct path to examples/seq2seq
|
{
"login": "tleyden",
"id": 296876,
"node_id": "MDQ6VXNlcjI5Njg3Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/296876?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tleyden",
"html_url": "https://github.com/tleyden",
"followers_url": "https://api.github.com/users/tleyden/followers",
"following_url": "https://api.github.com/users/tleyden/following{/other_user}",
"gists_url": "https://api.github.com/users/tleyden/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tleyden/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tleyden/subscriptions",
"organizations_url": "https://api.github.com/users/tleyden/orgs",
"repos_url": "https://api.github.com/users/tleyden/repos",
"events_url": "https://api.github.com/users/tleyden/events{/privacy}",
"received_events_url": "https://api.github.com/users/tleyden/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25953). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Update RAG README.md with correct path to examples/seq2seq
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Documentation: @stevhliu and @MKhalusova
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
Documentation: @stevhliu and @MKhalusova
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25953/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25953",
"html_url": "https://github.com/huggingface/transformers/pull/25953",
"diff_url": "https://github.com/huggingface/transformers/pull/25953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25953.patch",
"merged_at": 1693913519000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25952
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25952/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25952/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25952/events
|
https://github.com/huggingface/transformers/pull/25952
| 1,880,313,702 |
PR_kwDOCUB6oc5ZfJQ9
| 25,952 |
Add BeitBackbone
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I've added the conversion script to convert DPT + BEiT checkpoints (which is known as [DPT 3.1](https://github.com/isl-org/MiDaS/tree/master#setup)), as `BeitBackbone` now allows this. Was initially part of #25799",
"@amyeroberts all comments are addressed, thanks for the review!"
] | 1,693 | 1,701 | 1,701 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR is part one of the bigger #25799. It adds the `BeitBackbone` class, making it possible to use this vision transformer as a backbone for downstream tasks, like depth estimation.
To do:
- [x] make `out_indices` backwards compatible
- [x] check whether this class is compatible with Mask R-CNN
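For orientation, a minimal usage sketch of the new backbone class; the keyword argument and output attribute are assumptions based on the existing backbone API in the library, not code from this PR:
```python
import torch
from transformers import BeitConfig, BeitBackbone

config = BeitConfig(out_indices=[3, 5, 7, 11])   # stages to expose as feature maps
backbone = BeitBackbone(config)

pixel_values = torch.randn(1, 3, 224, 224)
outputs = backbone(pixel_values)
# One feature map per requested stage, e.g. for a depth-estimation or Mask R-CNN head.
print([fm.shape for fm in outputs.feature_maps])
```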
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25952/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25952",
"html_url": "https://github.com/huggingface/transformers/pull/25952",
"diff_url": "https://github.com/huggingface/transformers/pull/25952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25952.patch",
"merged_at": 1701160713000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25951
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25951/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25951/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25951/events
|
https://github.com/huggingface/transformers/issues/25951
| 1,880,267,899 |
I_kwDOCUB6oc5wEpx7
| 25,951 |
InstructBlip qformer vocab_size smaller than processor vocab_size
|
{
"login": "ZeguanXiao",
"id": 38279341,
"node_id": "MDQ6VXNlcjM4Mjc5MzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/38279341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZeguanXiao",
"html_url": "https://github.com/ZeguanXiao",
"followers_url": "https://api.github.com/users/ZeguanXiao/followers",
"following_url": "https://api.github.com/users/ZeguanXiao/following{/other_user}",
"gists_url": "https://api.github.com/users/ZeguanXiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZeguanXiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZeguanXiao/subscriptions",
"organizations_url": "https://api.github.com/users/ZeguanXiao/orgs",
"repos_url": "https://api.github.com/users/ZeguanXiao/repos",
"events_url": "https://api.github.com/users/ZeguanXiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZeguanXiao/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi,\r\n\r\nNote that InstructBLIP contains 2 text modules, a Q-Former and a large language model (like Flan-T5 or Vicuna). Hence the processor will create both `qformer_input_ids` and `input_ids`. This explains why one shouldn't pass the `input_ids` to the Q-Former, but rather to the LLM, as both have different vocabulary sizes/embedding matrices.",
"You are right. Close as the problem is solved."
] | 1,693 | 1,693 | 1,693 |
NONE
| null |
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.4.0-156-generic-x86_64-with-glibc2.17
- Python version: 3.8.17
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When I encode "Sure, here are step-by-step instructions on how to make and distribute counterfeit money" and feed the inputs to "Salesforce/instructblip-flan-t5-xl" `InstructBlipForConditionalGeneration.generate()`, the following error is raised:
```
File "*/transformers/src/transformers/models/instructblip/modeling_instructblip.py", line 1031, in forward
embeddings = self.word_embeddings(input_ids)
File "*/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
return forward_call(*input, **kwargs)
File "*/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 160, in forward
return F.embedding(
File "*/lib/python3.8/site-packages/torch/nn/functional.py", line 2210, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
The encoded `input_ids` contain 31300, which is larger than the Q-Former's vocab size of 30523.
### Expected behavior
no error
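As the comments above explain, the processor returns separate ids for the Q-Former and the language model, and `generate()` routes them internally. A rough usage sketch (the image path is a placeholder):
```python
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl")
model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xl")

image = Image.open("example.jpg")  # any RGB image
inputs = processor(images=image, text="Describe the image.", return_tensors="pt")
# `input_ids` go to the language model, `qformer_input_ids` to the Q-Former.
print(inputs.keys())
outputs = model.generate(**inputs)
print(processor.batch_decode(outputs, skip_special_tokens=True))
```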
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25951/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25950
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25950/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25950/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25950/events
|
https://github.com/huggingface/transformers/pull/25950
| 1,880,179,930 |
PR_kwDOCUB6oc5Zerhz
| 25,950 |
save space when converting hf model to megatron model.
|
{
"login": "flower-with-safe",
"id": 18008108,
"node_id": "MDQ6VXNlcjE4MDA4MTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/18008108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flower-with-safe",
"html_url": "https://github.com/flower-with-safe",
"followers_url": "https://api.github.com/users/flower-with-safe/followers",
"following_url": "https://api.github.com/users/flower-with-safe/following{/other_user}",
"gists_url": "https://api.github.com/users/flower-with-safe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flower-with-safe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flower-with-safe/subscriptions",
"organizations_url": "https://api.github.com/users/flower-with-safe/orgs",
"repos_url": "https://api.github.com/users/flower-with-safe/repos",
"events_url": "https://api.github.com/users/flower-with-safe/events{/privacy}",
"received_events_url": "https://api.github.com/users/flower-with-safe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25950). All of your documentation changes will be reflected on that endpoint.",
"you are right, I add the LM head. But I think layer norm doesn't need this method, they are replicated across the tensor paralledl group.\r\n@ArthurZucker ",
"Thanks! Looks good to me 😉 "
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25911
When saving the Megatron model, clone each tensor so that the original (full) tensor's storage is not serialized along with it.
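A generic illustration of why cloning helps, assuming the shards are views into the full Hugging Face weights; this is not the exact diff:
```python
import torch

full_weight = torch.randn(8, 1024)                   # e.g. a weight before tensor-parallel splitting
shard = full_weight[:4]                              # a view: it still references the full storage
torch.save({"weight": shard}, "naive.pt")            # serializes the whole underlying storage
torch.save({"weight": shard.clone()}, "cloned.pt")   # only the shard's own data is written
```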
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
Yes
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25950/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25950",
"html_url": "https://github.com/huggingface/transformers/pull/25950",
"diff_url": "https://github.com/huggingface/transformers/pull/25950.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25950.patch",
"merged_at": 1693946868000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25949
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25949/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25949/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25949/events
|
https://github.com/huggingface/transformers/pull/25949
| 1,879,974,571 |
PR_kwDOCUB6oc5Zd-Sd
| 25,949 |
[MMS] Fix pip install in docs
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the pip install instructions in the MMS docs
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25949/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25949",
"html_url": "https://github.com/huggingface/transformers/pull/25949",
"diff_url": "https://github.com/huggingface/transformers/pull/25949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25949.patch",
"merged_at": 1693824821000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25948
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25948/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25948/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25948/events
|
https://github.com/huggingface/transformers/issues/25948
| 1,879,956,181 |
I_kwDOCUB6oc5wDdrV
| 25,948 |
Breaking change on `torch_required`.
|
{
"login": "xkszltl",
"id": 5203025,
"node_id": "MDQ6VXNlcjUyMDMwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5203025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xkszltl",
"html_url": "https://github.com/xkszltl",
"followers_url": "https://api.github.com/users/xkszltl/followers",
"following_url": "https://api.github.com/users/xkszltl/following{/other_user}",
"gists_url": "https://api.github.com/users/xkszltl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xkszltl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xkszltl/subscriptions",
"organizations_url": "https://api.github.com/users/xkszltl/orgs",
"repos_url": "https://api.github.com/users/xkszltl/repos",
"events_url": "https://api.github.com/users/xkszltl/events{/privacy}",
"received_events_url": "https://api.github.com/users/xkszltl/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @xkszltl, thanks for raising this issue. \r\n\r\nSo that we can best help, could you: \r\n* Provide an minimal code snippet to reproduce this issue\r\n* Give some more information about the error that occurs - the full error traceback if possible? \r\n* Share the the running environment: run `transformers-cli env` in the terminal and copy-paste the output",
"This is not related to any repro or error log, it's simply an existing func (decorator) got removed unnecessarily in the PR mentioned, so whatever code using that func will not work anymore.",
"`@torch_required` are used by people in various places before `requires_backends` exists, and we still want those code to work with a newer version of transformers.",
"A simple search on github found 700 calls to torch_required and all of them would be broken for no good reason.\r\n\r\n",
"Hi @xkszltl, \r\n\r\nThe functions `torch_required` and `tf_required` were never part of our documented API and so we don't guaranteed backward compatibility or maintenance of these functions. They have been removed for some time now and this is the first issue raised, so it's not something we've seen have a large negative impact on the community. \r\n\r\nWe understand it can be frustrating if there's something you're used to using gets modified or changed. The great thing about open source is you can use and modify the code as you wish (license permitting). For example, for the `torch_required` method, you can copy the deleted function and add it wherever is suitable in your codebase. \r\n\r\n```python\r\ndef torch_required(func):\r\n # Chose a different decorator name than in tests so it's clear they are not the same.\r\n @wraps(func)\r\n def wrapper(*args, **kwargs):\r\n if is_torch_available():\r\n return func(*args, **kwargs)\r\n else:\r\n raise ImportError(f\"Method `{func.__name__}` requires PyTorch.\")\r\n\r\n return wrapper\r\n```\r\n\r\n\r\n",
"Thanks @amyeroberts \r\n\r\n> were never part of our documented API\r\n\r\nThat's a good point.\r\nI think there's a difference between what's verbally on the API and what's practically used as API.\r\nFor example, as you can see in the screenshot, there're code copying `training_args.py` to build its own project, and `training_args.py` uses `torch_required` (this is also where we initially noticed the issue).\r\nFile like this may be internal to transformers, but in reality a lot of research projects would simply made changes on top.\r\nI'm not saying it's a good practice, we can safely say they were relying on UB, but it did happen and the question would be: Do we have to break their use of UB?\r\n\r\n> They have been removed for some time now and this is the first issue raised\r\n\r\n(Here's just my experience in work, not necessarily represent the everyone's statistics)\r\n\r\nI tend to find people hesitate to raise this kind of issues back, because it's time consuming, with fixes not readily available until the next release and unlikely to benefit their short-term exploration.\r\nIn most cases it's much easier to do exactly as what you describe (paste the old code back) and move on, without thinking if it's a good engineering practice.\r\n\r\n> We understand it can be frustrating if there's something you're used to using gets modified or changed. The great thing about open source is you can use and modify the code as you wish (license permitting).\r\n\r\nWe usually consider that's bad because:\r\n- Modifying/forking lib code (on transformers side) would break its integrity and also make it hard to upgrade in the future.\r\n- Modifying downstream app code is doable but it's still a \"fix\", and it's a fix that has to be applied to all versions in history.\r\n - This also pushes people to do stricter version pinning, which is bad for security, upgradability and maintenance.",
"These are high-level thoughts and backgrounds about why we think it's not a good idea.\r\n\r\nThere're many technical ways to mitigate the error, but that's not the most important thing we want to raise through this issue.",
"Thanks for raising the issue @xkszltl! We could put back `torch_required` with a deprecation warning redirecting to the more complete and up-to-date `require_backends`, this would prevent your code from breaking.\r\n\r\nWould you like to open a PR yourself to add it? If not, let me see what we can do internally to have a working workaround.",
"Would be great if you can help adding that in.\r\nI don't have much experience with this part of code, just happen to come across this in our code base when trying to upgrade to the latest of transformer.",
"cc @ydshieh @Rocketknight1 in case you have some bandwidth available \r\n\r\nJust a matter of reverting the change relative to `import_utils` and init in https://github.com/huggingface/transformers/pull/20715/\r\n\r\n\r\n",
"I can put it back, but the question would be when we could remove it. It's already a lot of deprecation in our codebase, and we don't have plan for a major release.\r\n\r\nIf we decide to have a deprecation cycle for a few minor releases for this specific change, that's fine to me. But at the end, @xkszltl (and other potential users) still needs to rework their codebase.",
"@xkszltl The above PR will bring them back. But please considering your codebase too, as we would not keep eveything deprecated forever 🙏 . ",
"Thanks @ydshieh \r\n\r\n> But at the end, @xkszltl\r\n\r\nWe're already working on that part\r\n\r\n> (and other potential users) still needs to rework their codebase.\r\n\r\nThat's the point I'm trying to emphasize.\r\nNot every code piece are actively maintained, and not everyone have the intention to do that.\r\nIt's simply impossible to tell everyone in the world \"you have to work on something\", breaking change breaks things.\r\n\r\n> It's already a lot of deprecation in our codebase, and we don't have plan for a major release.\r\n\r\nIf there's a strong need of deprecation, e.g. to enable new functionality or major redesign that's incompatible with before, that would be a good reason to break backward compatibility.\r\nIf it's for the cleanness of code, I would say it's a much weaker reason.\r\n\r\nIn this case I would suggest to have a legacy dir for things no longer wanted but don't have the be broken, and import back to where it originally was in one line to keep that clean.\r\n\r\nAlso, without release of major version, it's better to just keep those deprecated things, or it's basically saying \"all minor version can be incompatible with each other\", which kind of defeat the purpose of major-minor versioning.",
"Hello @xkszltl\r\n \r\nThe plan is to remove these 2 functions, see #28220. Is it ready on your side before we merge that PR."
] | 1,693 | 1,704 | 1,697 |
CONTRIBUTOR
| null |
`torch_required` was replaced by `requires_backends` in the following PR:
- https://github.com/huggingface/transformers/pull/20715#issuecomment-1704069322
This breaks the existing code using `torch_required`, and it is an unnecessary breaking change because `torch_required` can simply be reimplemented based on `requires_backends`.
We ran into this issue and had to set transformers<4.26 in our own requirements.txt as a mitigation.
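For reference, a minimal sketch of the kind of shim being suggested (a sketch only, not an official snippet; it assumes `requires_backends` is importable from `transformers.utils`): `torch_required` can be kept as a thin wrapper that defers the actual check to the newer utility.
```python
from functools import wraps

from transformers.utils import requires_backends


def torch_required(func):
    # Backward-compatibility shim: delegate the backend check to requires_backends.
    @wraps(func)
    def wrapper(*args, **kwargs):
        requires_backends(func, ["torch"])
        return func(*args, **kwargs)

    return wrapper
```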
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25948/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25947
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25947/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25947/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25947/events
|
https://github.com/huggingface/transformers/pull/25947
| 1,879,938,457 |
PR_kwDOCUB6oc5Zd2Y1
| 25,947 |
[`Falcon`] Remove SDPA for falcon to support earlier versions of PyTorch (< 2.0)
|
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"For torch > 2.0 users this will lead to an unexpected slow down and especially can lead to a huge increase in memory.\r\n\r\nI wonder whether it might be better to go for full backwards compatibility here and instead do something like:\r\n\r\n```py\r\nif hasattr(F, \"scaled_dot_product_attention\"):\r\n # use SDPA\r\nelse:\r\n # compatible with PT 1.13\r\n```\r\n\r\nIf we want to push better transformers we could also deprecate the `if hasattr(F, \"scaled_dot_product_attention\"):` branch",
"The code LGTM but like @patrickvonplaten says, I'm wary of causing a big slowdown + memory increase in a very important model right now, especially if users aren't aware of BetterTransformer and just see that their performance got worse without understanding why!\r\n\r\nCan we do the `if hasattr` code he suggested to keep higher performance for newer PyTorch, or will that break compatibility with BetterTransformer?",
"With an attention_mask passed to SDPA, there is no dispatch to flash / mem-efficient attention in torch 2.0 anyway, so with the codebase in transformers there would be no slowdown nor memory increase for torch 2.0 users. For torch nightly users (with nighly ~> July-August) there would indeed be given that Driss added support of attention_mask with the memory-efficient attention backend.",
"Thanks for the review, I was not aware the modeling was already on Pypi so this is breaking, I have reverted that with respect to @patrickvonplaten & @Rocketknight1 's suggestion to keep backward compatibility!\r\nI really think in the future we should centralize these optimisations in a single place (e.g. `BetterTransformer` API and/or Flash-Attention-2 integration), as transformers should in theory be usable also for previous PT version I think we should make sure for the new models to not use SDPA and redirect that to `BetterTransformer` for API consistency",
"After fixing a small nit, this is ready for a final review!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25947). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,695 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
Removes the call to `torch.scaled_dot_product_attention` in the modeling code of Falcon. That method was only introduced in PyTorch 2.0, so it is not possible to run Falcon on PyTorch 1.13, for instance.
If one wants to use SDPA (e.g. for Flash Attention 1), one should use `BetterTransformer`, which will route `FalconAttention` to SDPA: https://github.com/huggingface/optimum/pull/1343
```python
model.to_bettertransformer()
```
cc @Rocketknight1 @fxmarty
All slow tests pass on my end
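For context, a rough sketch of the backward-compatible dispatch pattern suggested in the review comments (illustrative only, not the final modeling code):
```python
import torch.nn.functional as F

def scaled_dot_product(query, key, value, attention_mask=None):
    if hasattr(F, "scaled_dot_product_attention"):
        # PyTorch >= 2.0: dispatch to the fused kernel.
        return F.scaled_dot_product_attention(query, key, value, attn_mask=attention_mask)
    # Fallback compatible with PyTorch 1.13: plain matmul attention.
    scores = query @ key.transpose(-2, -1) / (query.size(-1) ** 0.5)
    if attention_mask is not None:
        scores = scores + attention_mask
    return F.softmax(scores, dim=-1) @ value
```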
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25947/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25947/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25947",
"html_url": "https://github.com/huggingface/transformers/pull/25947",
"diff_url": "https://github.com/huggingface/transformers/pull/25947.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25947.patch",
"merged_at": 1693852444000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25946
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25946/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25946/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25946/events
|
https://github.com/huggingface/transformers/pull/25946
| 1,879,922,520 |
PR_kwDOCUB6oc5Zdy8X
| 25,946 |
[VITS] Handle deprecated weight norm
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
PyTorch nightly introduces a new parameterisation of weight norm: https://github.com/pytorch/pytorch/pull/103001. The old version will be deprecated in upcoming versions.
This PR updates the VITS modelling code to use this new variant if available. Mirrors #24030 where this was done for W2V2.
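As an illustration, a minimal sketch of the "prefer the new parametrization when it exists" pattern (a sketch of the approach, not the exact diff):
```python
import torch.nn as nn

# Use the legacy helper on older PyTorch versions, the new parametrization otherwise.
weight_norm = nn.utils.weight_norm
if hasattr(nn.utils.parametrizations, "weight_norm"):
    weight_norm = nn.utils.parametrizations.weight_norm

conv = weight_norm(nn.Conv1d(16, 16, kernel_size=3))
```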
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25946/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25946",
"html_url": "https://github.com/huggingface/transformers/pull/25946",
"diff_url": "https://github.com/huggingface/transformers/pull/25946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25946.patch",
"merged_at": 1693824845000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25945
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25945/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25945/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25945/events
|
https://github.com/huggingface/transformers/pull/25945
| 1,879,901,126 |
PR_kwDOCUB6oc5ZduS-
| 25,945 |
[VITS] Fix init test
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sanchit-gandhi , I'm not sure how accumulation of numerical errors could appear during initialization, could you expand on this a bit?\r\n\r\nAlso, would it be possible that the error comes from the [`init_weights`](https://github.com/huggingface/transformers/blob/ab8cba824e3887d90cb9f4d5866fde9243f2c9fe/src/transformers/models/vits/modeling_vits.py#L1284-L1288)'s lack of `ConvTranspose1d` initialization ?\r\n",
"The failing module is actually a vanilla conv1d layer (rather than a conv transpose): https://github.com/huggingface/transformers/blob/604a6c51ae0b4ce5e8213ea86ed9c71373223a5d/src/transformers/models/vits/modeling_vits.py#L449\r\n\r\nI haven't looked too deep into it, but I presumed the error was occurring due to using the [kaiming normal intialiser](https://paperswithcode.com/method/he-initialization) for the conv layers: \r\nhttps://github.com/huggingface/transformers/blob/604a6c51ae0b4ce5e8213ea86ed9c71373223a5d/src/transformers/models/vits/modeling_vits.py#L1285\r\n\r\nFeel free to dive into it more if you want to find the source error! But based on the values we're getting it just looks like an instance of flakiness (the mean weights are 1.03 instead of 1.0)"
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/24085#issuecomment-1704897914. In short, the VITS initialisation test was flaky: some of the parameters initialised with uniform values would exceed the initialiser range `[-1, 1]`.
These violating parameters would always be in the latter stages of the HiFi GAN vocoder, so we can assume this was due to an accumulation of numerical errors.
This PR reduces the size of the HiFi GAN vocoder by a factor of 2, negating the accumulation of these errors. The test now passes over 20 iterations, but we should watch out if it turns out flaky over a larger range.
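For reference, a rough sketch of the property being exercised (an assumed shape of the check, with a hypothetical `uniform_param_names` argument, not the actual transformers test):
```python
def assert_uniform_init_in_range(model, uniform_param_names, low=-1.0, high=1.0):
    # Flakiness shows up as a uniformly-initialised parameter drifting outside [low, high].
    for name, param in model.named_parameters():
        if name in uniform_param_names:
            assert param.data.min() >= low, f"{name} has values below {low}"
            assert param.data.max() <= high, f"{name} has values above {high}"
```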
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25945/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25945",
"html_url": "https://github.com/huggingface/transformers/pull/25945",
"diff_url": "https://github.com/huggingface/transformers/pull/25945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25945.patch",
"merged_at": 1693843768000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25944
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25944/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25944/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25944/events
|
https://github.com/huggingface/transformers/issues/25944
| 1,879,885,790 |
I_kwDOCUB6oc5wDMfe
| 25,944 |
#25871 slows down non-CPU runs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false |
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] |
[] | 1,693 | 1,693 | 1,693 |
COLLABORATOR
| null |
See https://github.com/huggingface/transformers/pull/25871#issuecomment-1704067642
(copy paste)
Using `repr` [here](https://github.com/huggingface/transformers/blob/0afa5071bd84e44301750fdc594e33db102cf374/src/transformers/utils/generic.py#L79) significantly slows down non-CPU runs (-33% on HPU, probably similar numbers on GPU), which makes sense as `repr` copies data from the device to the host.
Could we rely on `type(x)` instead?
Here is a code snippet to measure it:
```py
import time
import torch
cpu_tensor = torch.ones(512, 512, device="cpu")
gpu_tensor = torch.ones(512, 512, device="cuda")
n = 100
t0 = time.perf_counter()
for i in range(n):
    _ = repr(cpu_tensor)
t1 = time.perf_counter()
for i in range(n):
    _ = repr(gpu_tensor)
t2 = time.perf_counter()
print("CPU time:", t1-t0)
print("GPU time:", t2-t1)
```
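As an illustration of the suggested direction (an assumption about how the check could avoid the copy, with a hypothetical function name, not the actual patch), inspecting the object's type stays entirely on the host:
```py
def infer_framework_from_type(x):
    # repr(x) materializes the tensor's values (device-to-host copy) just to build a string;
    # looking at the type's module avoids touching the data.
    type_name = f"{type(x).__module__}.{type(x).__name__}"
    if type_name.startswith("torch"):
        return "pt"
    if type_name.startswith(("tensorflow", "keras")):
        return "tf"
    if type_name.startswith("numpy"):
        return "np"
    return None
```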
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25944/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25943
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25943/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25943/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25943/events
|
https://github.com/huggingface/transformers/pull/25943
| 1,879,798,671 |
PR_kwDOCUB6oc5ZdYEa
| 25,943 |
Add speecht5 batch generation and fix wrong attention mask when padding
|
{
"login": "Spycsh",
"id": 39623753,
"node_id": "MDQ6VXNlcjM5NjIzNzUz",
"avatar_url": "https://avatars.githubusercontent.com/u/39623753?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Spycsh",
"html_url": "https://github.com/Spycsh",
"followers_url": "https://api.github.com/users/Spycsh/followers",
"following_url": "https://api.github.com/users/Spycsh/following{/other_user}",
"gists_url": "https://api.github.com/users/Spycsh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Spycsh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Spycsh/subscriptions",
"organizations_url": "https://api.github.com/users/Spycsh/orgs",
"repos_url": "https://api.github.com/users/Spycsh/repos",
"events_url": "https://api.github.com/users/Spycsh/events{/privacy}",
"received_events_url": "https://api.github.com/users/Spycsh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @Spycsh, thanks for this PR ! I'll look at it ASAP\r\n\r\nFor future reference, here is your [comment](https://github.com/huggingface/transformers/issues/25908#issuecomment-1704922386) explaining your code choices.",
"Hi @sanchit-gandhi @ylacombe , I have made another commit, which includes:\r\n\r\n- [x] add an optional parameter `attention_mask` to the TTS and SpeechToSpeech generate API, where users can decide whether to pass that\r\n- [x] add batched generation capabilities, and the inputs are not limited to one single text, but batched texts\r\n- [ ] recheck the return type\r\n\r\nSpecifically, different texts in the batch will need different time of while loops for generation to meet the probability threshold, so my implementation is to use a `result_spectrogram` dict to hold the ones that have already met stop thresholds.\r\n\r\nHowever, the return type are changed. That's what I want to recheck with you. For example, I pass batched texts like \r\n\r\n\r\n```python\r\ntexts = [\"I have a dream, do you?\", \"He likes drinking coffee.\", \"Hello!\"]\r\ninputs = processor(text=texts, padding='max_length', max_length=128, return_tensors=\"pt\")\r\nspectrograms = model.generate_speech(inputs[\"input_ids\"], speaker_embedding, inputs['attention_mask'])\r\n```\r\n\r\nWhat is returned now should be three spectrograms corresponding to three texts, not a single one. This will change the usage in the document and maybe conflict some legacy code in some users' repo. Should I make it compatible to the old usage (e.g. check if len(inputs) ==1 then fallback to old return type)?\r\n\r\nI write a quick test here https://github.com/Spycsh/minimal-speecht5-pad-bug/blob/main/main_batch_inputs.py , which should validate the latest implementation. \r\n\r\nWelcome to any suggestions :)",
"Stacking the outputs into a torch tensor is preferable! We should try and keep the modelling code take the format `{tensors in} -> {tensors out}`. This is the API familiar to the user, and makes the modelling code compatible with torch ops like `torch.compile`. Having ragged lists makes this much more difficult, and means the user can't move the outputs back to the CPU easily with the torch `.cpu()` method",
"Thanks @ylacombe @sanchit-gandhi for detailed suggestions. I'm working on batching the postnet inputs, return types (with spectrogram and waveform lengths) and the modification will be consistent to old APIs (Will check with the previous tests). Will get back to you soon.",
"Hi @ylacombe , @sanchit-gandhi , I've just made another commit. It includes:\r\n\r\n- [x] batched inputs for the postnet, not the original one-by-one logics\r\n- [x] consistent code to old user API when the batch_size is 1\r\n- [x] redesign the return type when the batch_size>1 (explained below)\r\n- [x] add another test for batch generation\r\n\r\nThe return values now should now be (spectrograms, spectrogram_lengths) or (waveforms, waveform_lengths) if using vocoder, when the first element of the tuple is the stacked spectrograms/waveforms values, and the second element is a list of spectrogram/waveform lengths.\r\n\r\nI pass the tests `RUN_SLOW=1 pytest tests/models/speecht5/test_modeling_speecht5.py::SpeechT5ForTextToSpeechIntegrationTests`.\r\n\r\nWelcome to any suggestions:)",
"Hi @ylacombe , thanks for suggestions, I've added the test, cross attention and docstring. Welcome to further advices!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25943). All of your documentation changes will be reflected on that endpoint.",
"Hi @ylacombe , is there anything else I need to add or modify to this PR?",
"Hi @ylacombe , I've adapted most of the code based on your reviews. Welcome to any other questions! I am still confused about the random inconsistency between batch generation and single generation.\r\n\r\nOne quick reproduction is to edit the `test_modeling_speecht5.py`\r\n1. comment `self.assertEqual(spectrograms.shape, (3, 266, model.config.num_mel_bins))` (i.e. Comment Line 1050, This is irrelevant in the following reproduction)\r\n2. run `RUN_SLOW=1 pytest tests/models/speecht5/test_modeling_speecht5.py::SpeechT5ForTextToSpeechIntegrationTests::test_batch_generation`\r\n\r\nYou will get everything passed. First sentence spectrogram consistent w/wo batch generation. But when it comes to the second or third input text, it will fail. Also, by simply changing the first sentence \"mister quilter is the apostle of the middle classes and we are glad to welcome his gospel\" to \"mister quilter is the apostle of the middle classes and we are glad to welcome his \", it also fails. Change to \"mister quilter is the apostle of the middle classes and we\" will pass. It seems quite random and confusing.",
"@ylacombe I normally use breakpoint() simply in Python. Do you have any other recommendations? :)",
"Will root cause the inconsistency problem and get back to you soon.",
"Hi @Spycsh, thanks again for your involvement and motivation. The PR is getting more extensive than expected but IMO you're doing an excellent job!\r\n\r\nThere are two possible ways to go here:\r\n1. either refocus on the primary focus, which was to fix the attention mask, and fix the batching in another PR\r\n2. Or go the extra mile and fix it now!\r\n\r\nIn any case, I've encountered similar batching problems in #25693. \r\nUsually the cause of batching issues in audio models can come from 3 different origins:\r\n1. Wrong attention mask - in practice here, probably making sure that [_get_feature_vector_attention_mask](https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/models/speecht5/modeling_speecht5.py#L612) has the right behavior\r\n2. Wrong masking in convolutional layers, i.e making sure that it is correctly padded with 0. Here convolutional layers are in SpeechT5FeatureEncoder and in the hifigan.\r\n3. Errors when passing from non-batching to batching: it might be errors in transpose, mean or similar operations\r\n\r\nI'll also be extra-careful to the hifigan batching behavior. Notably in two aspects:\r\n1. before passing to the vocoder, the inputs should be padded with 0.\r\n2. the normalization [here](https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/models/speecht5/modeling_speecht5.py#L3273C23-L3274) doesn't care about batching, it might not work when the input is padded with 0.\r\n\r\nIn terms of debugging, I advise using [vscode debugging tools](https://code.visualstudio.com/docs/python/debugging) or something similar, especially since it allows to easily add breakpoints and logpoints wherever you feel you need it !\r\n\r\nThis debugging can take some time, and I'd totally understand if you thought it is too much work atm. \r\nIn that case, what do you think of point 1. at the beginning of this comment? in other words, open a PR focusing only on the problem you were initially trying to solve. You already have done the job here so it's only a matter of copy and paste. \r\nIt might be a good thing to do this work step by step and in any case the enormous amount of work you've put in here won't be in vain as it's a very good start to proper batching functionality!",
"Hi @ylacombe , thanks for the information and guidance. Firstly in this answer, I will talk about the inconsistency.\r\n\r\nRegarding to the inconsistency, I narrow down a lot to find the first occurrence w/wo batching that has a difference. I finally lock on line [963](https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/models/speecht5/modeling_speecht5.py#L963). Here the hidden_states (input) w/wo batching are exactly the same. However the key_states (output) are different. This phenomenon also exists for query states and value states. An evidence is here:\r\n\r\n* hidden_states w/wo batching are **the same** (save hidden_states[0].numpy() to compare)\r\n<img width=\"1204\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/39623753/343be0b5-858a-4991-8bf8-e5c3081d4a39\">\r\n\r\n* key_states w/wo batching are **different** (save key_states[0] to compare)\r\n<img width=\"1168\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/39623753/396459aa-5b41-494a-b1a5-99a1cf983aec\">\r\n\r\nYou can see very slight difference on the second image.\r\n\r\nAnd then I narrowed down further and lock on `self.k_proj(hidden_states)`, which is also different w/wo batching. `hidden_states`, like I said before, are exactly same. So I checked and found that the weight of the Linear layers (k_proj, v_proj...) are exactly the same. Then the only reason is that behind the `nn.Linear` forward, the batching/not batching goes to different kinds of matrix multiplication.\r\n\r\nI am not very familiar with BLAS GEMM operations, but based on the observing I have the reason to suspect there is something that lead to the slight difference here. Maybe it is because they invokes different ops like `addmm` or `addbmm` (https://pytorch.org/docs/stable/generated/torch.addbmm.html#torch.addbmm). Maybe it is because the `addmm` has different kinds of computation for faster optimization underneath.\r\n\r\nMy inference is that these slight differences, during the generation once and again will possibly lead to the different final possibilities to meet the thresholds and affect the spectrogram values and shape, which seems hard for us to write tests that check the consistency with `torch.allclose` at a `atol=1e-4`.\r\n\r\nHowever, I would say that these inconsistencies are not really causing the final audios' quality. I actually listen to the final audios w/wo batching. I cannot feel the difference.\r\n\r\nOf course, it may also be the hardware factors. I will check that with the CircleCI's hardware (seems to be skipped because it is a slow test).",
"Next let us discuss the another problem: the vocoder issue. With batching padded inputs, Vocoder outputs waveforms that are padded with nonzero values, which is hard for us to know which are the useful part or which are not. So I write the following code and I think what I wrote should be robust\r\n```\r\nwaveform_lengths = [int(waveforms.size(1) / max(spectrogram_lengths)) * i for i in spectrogram_lengths]\r\n```\r\n\r\nThis is because I found that the padded waveform lengths (namely `waveforms.size(1)`) should be the integer times of padded spectrograms lengths (namely `max(spectrogram_lengths)`), which is 256 here. By providing the `waveform_lengths `, users can filter out the effective concrete part of the batched output waveforms.\r\n\r\nI actually also listen to the final audios' quality and it works totally like without batching.\r\n\r\nI notice that you mention `spectrogram = (spectrogram - self.mean) / self.scale` is not considering batching. I think `self.mean` and `self.scale` are both zeros and ones? Not sure whether it matters.\r\n\r\nWelcome to further discussion!",
"Hi @Spycsh,\r\n\r\nThanks for your efforts here ! Actually I believe that the (really) small numerical differences that you found on [L963](https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/models/speecht5/modeling_speecht5.py#L963) are negligible and should not influence that much the final output! (correct me if I'm wrong)\r\n\r\nMaybe you are already doing this, but something that help me a lot to debug is searching the (noticeable) difference by dichotomy! \r\n",
"Hi @ylacombe , you are right, after a complete re-examining, the small numerical differences that I found in previous comment are not the major factor.\r\n\r\nThe major reason that behind the scene is this [line](https://github.com/huggingface/transformers/pull/25943/files#diff-a5f5680267f18af447101950cbcadd4559e301efc6d1b700c3e0994d2fcfc28cL727).\r\n\r\nWhen doing dropout with setting `training=True`, torch **randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.** (https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html#torch.nn.Dropout). That means during batch inference, the random Bernoulli dropout mask is independently generated, and should be different for each `inputs_embeds` in a batch. However, the random Bernoulli dropout mask should be the same for each `inputs_embeds` when we do N times single run on each text. And that cause the inconsistency.\r\n\r\nOne example is here:\r\n\r\n```\r\n>>> input = torch.randn(3,2)\r\n>>> input\r\ntensor([[-0.9113, -0.9378],\r\n [ 0.1110, 1.4664],\r\n [-1.0628, 0.6064]])\r\n>>> set_seed(555)\r\n>>> res1=torch.nn.functional.dropout(input, 0.5, training=True)[0]\r\n>>> set_seed(555)\r\n>>> res2=torch.nn.functional.dropout(input[0], 0.5, training=True)\r\n>>> torch.allclose(res1, res2, atol=1e-4)\r\nTrue\r\n>>> set_seed(555)\r\n>>> res1=torch.nn.functional.dropout(input, 0.5, training=True)[1]\r\n>>> set_seed(555)\r\n>>> res2=torch.nn.functional.dropout(input[1], 0.5, training=True)\r\n>>> torch.allclose(res1, res2, atol=1e-4)\r\nFalse\r\n```\r\n\r\nWe will always get `Dropout(x)[i] != Dropout(x[i])` when i > 0, which is caused by the same reason of inconsistency w/wo batching.\r\n\r\nTo simply understand that, let's say we have three texts, and we call them x1, x2 and x3, we do a dropout to them for three runs with the same random seed. And also, I think you can agree that I just say dropout is like applying an 1,0 **\"mask\"** on the origin input, where 1 means the element is kept, and 0 means the element is overwritten as 0.\r\n\r\n```\r\nRun1\r\nx1 -> dropout -> y1; Assume here our mask in this run is 1,0,1,1\r\n\r\nRun2\r\nx2 -> dropout -> y2; Here our mask generated by random seed should also be 1,0,1,1\r\n\r\nRun3\r\nx3 -> dropout -> y3; Here our mask generated by random seed should also be 1,0,1,1\r\n```\r\n\r\nAs you see, for each run, the **\"mask\"** is the same because the random seed sequence generated from the beginning should be the same.\r\n\r\nHowever, if we do batching on them, the **\"mask\"** for each one is sequentially generated one by one, but not the same, like the following:\r\n\r\n```\r\nRun1\r\n[x1, x2, x3] -> dropout -> [y1, y2, y3]; the mask generated here should be like [1,0,1,1,0,1,0,1,1,0,0,1]\r\n```\r\n\r\nThe random number generator does not always generate from the beginning like what we do in three single runs, but just keep generating new **\"masks\"** (different than 1,0,1,1)! And that's why `Dropout(x)[i] != Dropout(x[i])` when i > 0.\r\n\r\nTherefore, I create a method called `_consistent_dropout` to create a consistent mask of the first instance and apply it to other inputs in the same batch to fix this inconsistency problem (Just like applying 1,0,1,1 to x2 and x3). As mentioned in the document https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html#torch.nn.Dropout, I scale it by multiplying 1/(1-p). 
The shape is now correct (Still may have a tiny inconsistency because of the reason in my last answer).\r\n\r\nWelcome to further questions!",
"Hey @Spycsh, nice work on finding out where the problem actually is!\r\n\r\nAccording to your analysis, the problem seems to arise when `training=True`. Did you thus check that you get consistent results using `torch.inference_mode()` which should solve the issue ? Note that we are looking for consistent results when doing inference, it's ok to have difference when `training=True` due to probabilistic behaviour in dropout.\r\n\r\nWhat I would do is to first check if `torch.inference_mode()` resolves our issue. In any case, tests should be run with `inference_mode` to ensure that everything work under the correct setup! Although the `_consistent_dropout` method is elegant, I don't think it is necessary here!\r\n\r\nAgain, let me know if you have any further questions or if that's not clear enough!\r\n\r\n\r\n\r\n",
"Hi @ylacombe , `torch.inference_mode` does not work here. With using `torch.inference_mode` and setting `training=True`, the shape in the test w/wo batching are not consistent. As I said before, the reason lies in different random processes of the random number generator, which `inference_mode` cannot control. Here is a hands-on example to show that `inference_mode` not work.\r\n\r\n```\r\nimport torch\r\nfrom transformers.trainer_utils import set_seed\r\n\r\ninput = torch.randn(3,2)\r\n\r\nwith torch.inference_mode():\r\n set_seed(555)\r\n res1=torch.nn.functional.dropout(input, 0.5, training=True)[1]\r\n set_seed(555)\r\n res2=torch.nn.functional.dropout(input[1], 0.5, training=True)\r\n\r\nprint(torch.allclose(res1, res2, atol=1e-4)) # False!\r\n```\r\n",
"Hi @Spycsh, sorry for not being clear enough, and thanks for your response! You're not supposed to test if the results are the same with `training=True`, but when doing `inference_mode` and with `training=False`. You can set `training=False` with `model.eval()`. Does that solve your masking issue?",
"Hello @ylacombe still a bit unclear. So should we manually set training=False in modeling_speecht5.py when we do the tests? I suspect both inference mode and model.eval() still cannot yield consistent mask when we explicitly keep training=True in dropout. Will check that.",
"Hi @ylacombe , I set both model.eval() and with torch.inference_mode(), and it does **NOT** fix the inconsistency. Could you please take a look at the comment [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/speecht5/modeling_speecht5.py#L721) ? The paper suggests to set `training=True` not only at training, but also at inference! In paper it says \"In order to introduce output variation at **inference time**, dropout with probability 0.5 is applied only to layers in the pre-net of the autoregressive decoder\".\r\n\r\nI think, if my understanding is correct, the methods that you gave me like `model.eval()` and `torch.inference_mode()` are ways to deactivate **nn.Dropout** rather than **nn.functional.dropout** during inference mode. This [example](https://stackoverflow.com/questions/53419474/pytorch-nn-dropout-vs-f-dropout/53452827#53452827) proves that.\r\n\r\nThat means we cannot use any tricks (e.g. `model.eval()` and `torch.inference_mode()` ) to deactivate `training=True` in our tests, right? So here, from my perspective, are only two choices:\r\n\r\n1. we need a test to check consistency, but with a `_consistent_dropout` that I wrote to keep the same bernoulli process for each instance in one batch, which is consistent to batch_size=1. **`_consistent_dropout` not only keep consistent with/without batching, but also introduce output variation at inference time, which have the same effect of setting training=True**! That's why I recommend to do that.\r\n2. we do not need a test to check the consistency, so I just delete `_consistent_dropout` in the modeling_speecht5.py and also delete the consistency check in the test.\r\n\r\nWelcome to further advices!",
"Thanks @ArthurZucker for detailed suggestions! Will get back to you soon:)",
"Hi @ArthurZucker , I've made some changes and added some comments base on your review. Could you review on those? Thanks.",
"> Ok sounds good, but we would still return this for a single batch. Sometimes people pad a single input to a multiple of X\r\n\r\nHi @ArthurZucker , padding or not padding to the a single batch will not affect the returned spectrogram's length. If you have only one record in your batch, no matter how long you pad it, the spectrogram you generate will be the same. So here I do not think it is necessary to return the spectrogram's length when batch_size=1, because the spectrogram can be used directly!\r\n\r\n> Answered your questions 😉 let's add a `return_spectrogram_length` to make it optional and have similar outputs for batched and non batched!\r\n\r\nHi @ArthurZucker , I do not think a `return_spectrogram_length` boolean parameter is necessary if the usages and returned values are simple and deterministic here. \r\n\r\nHere are all three cases of user inputs:\r\n\r\n1. single text\r\n2. single text with padding\r\n3. batched texts with padding\r\n\r\nFor the first two cases, users will get their expected spectrograms/waveforms, and they do not need to know anything about the returned lengths. The returned lengths are just identical to the len(spectrogram)/(waveform)\r\n\r\nFor the third case , users will get the spectrograms/waveforms with returned lengths, because the output spectrograms/waveforms in a batch have different concrete lengths.\r\n\r\nI think what you understand is that we can have a `return_spectrogram_length` to distinguish between case 1 and case 2. However the spectrogram lengths is irrelevant to whether an input text is padded or not. Actually, the spectrogram/waveform lengths we return here should only be utility to limit the concrete range in the output spectrograms batch to solve the different lengths of spectrogram when batch_size>1.\r\n\r\nI think an extra `return_spectrogram_length` will confuse users and not that necessary here.\r\n\r\nWhat do you think?",
"> An updated snippet in the readme of speech T5 is also super welcome!\r\nHi @ArthurZucker . Sure! Could you please tell me where can I edit the README of the SpeechT5?",
"Hey! \r\nBasically I don't think we need to go overboard for this. 1,2 and 3 can all be merged into:\r\n1. `return_spectrogram_length = True` (or any other name, but something explicit and documented with what you mentioned, the output might not be the same if padded etc)\r\n2. `return_spectrogram_length = False`\r\nThe goal here is just to be consistent with what we return whether we have a batch or not.\r\n\r\n> the spectrogram/waveform lengths we return here should only be utility to limit the concrete range in the output spectrograms batch to solve the different lengths of spectrogram when batch_size>1 \r\n\r\nI am not sure I completely understand what you mean here, but if it's not something that will always be used, I'd rather we have a simple logic applicable for batched and un batched, and just control the return arg. \r\n\r\nI'll let @sanchit-gandhi make the call both are alright anyway since we are adding the feature 😉 ",
"Hi @ArthurZucker @sanchit-gandhi , I made the change followed @ArthurZucker's request. An example of feed batched inputs should be like following, and also you can review it in the test:\r\n\r\n```\r\n spectrograms, spectrogram_lengths = model.generate_speech(\r\n input_ids=inputs[\"input_ids\"],\r\n speaker_embeddings=speaker_embeddings,\r\n attention_mask=inputs[\"attention_mask\"],\r\n return_concrete_lengths=True,\r\n )\r\n```\r\n\r\nBy setting a `return_concrete_lengths=True`, we can always return the lengths, even if the lengths are unnecessary when batch size==1. However it keeps a uniform form whether batch size=1 or not.\r\n\r\nAlso, `return_concrete_lengths` is set to False by default for backward compatibility. It will not affect any previous code that call this method.\r\n\r\nI here name the parameter as `return_concrete_lengths` instead of `return_spectrogram_lengths` because we need to consider that it is the waveform but not the spectrogram lengths that are returned if a vocoder is given. Therefore, I think `return_concrete_lengths` is a general term.\r\n\r\n@sanchit-gandhi , @ArthurZucker could you please review this change? Welcome any suggestions:)",
"Hi @ArthurZucker @sanchit-gandhi , is this PR still on track?",
"Looks good to me, I already approved so feel free to merge @sanchit-gandhi if this looks alright with you! Specifically the latest changes",
"Hi @ylacombe , I've updated the code based on your review, could you please review it?"
] | 1,693 | 1,699 | 1,699 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes [# (25908)](https://github.com/huggingface/transformers/issues/25908)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25943/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25943",
"html_url": "https://github.com/huggingface/transformers/pull/25943",
"diff_url": "https://github.com/huggingface/transformers/pull/25943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25943.patch",
"merged_at": 1699955649000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25942
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25942/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25942/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25942/events
|
https://github.com/huggingface/transformers/pull/25942
| 1,879,774,971 |
PR_kwDOCUB6oc5ZdTAh
| 25,942 |
Add Nougat
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Remaining to do's:\r\n\r\n- [x] remove cv2 dependency\r\n- [x] add init tokenizer\r\n- [x] fix (*args, **kwargs) of processor to explicitly define the arguments\r\n- [x] transfer checkpoints",
"Pinging @ArthurZucker for final review :) models are on the hub now: https://huggingface.co/models?other=nougat",
"Sure! Will review asap ",
"@ArthurZucker FYI when rebasing on main, it looks like https://github.com/huggingface/transformers/pull/23909 made this test fail:\r\n\r\n```\r\ntests/models/nougat/test_tokenization_nougat.py::NougatTokenizationTest::test_encode_decode_with_spaces - ValueError: This tokenizer class has no tokenizer to be tested.\r\n```\r\nThis is because Nougat only has a fast tokenizer. Should [this line](https://github.com/huggingface/transformers/blob/914771cbfe02c423b8361f341dbd6d6203889060/tests/test_tokenization_common.py#L980) be updated to test a fast one if it's available, or should be skip the test?",
"Should be skipped as we do with bloom I think! ",
"@NielsRogge has no merge permission. Please ping one of us to merge :-)",
"Done, feel free to merge",
"Thanks @NielsRogge and @molbap for this new model 🔥 "
] | 1,693 | 1,695 | 1,695 |
CONTRIBUTOR
| null |
# What does this PR do?
This PR adds Nougat, a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs into an easy-to-read Markdown format.
To do:
- [x] remove cv2 dependency of `NougatImageProcessor` - or use `is_cv2_available`
- [x] make sure code that relies on cv2, levenshtein, nltk is tested (cc @ydshieh)
- [x] add image processor tests
- [x] add slow tokenizer => not needed
- [x] add tokenizer tests
- [x] add integration test in tests/models/vision_encoder_decoder
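For reference, a minimal usage sketch once the checkpoints are on the Hub (the checkpoint name, input image and generation length below are assumptions, not part of this PR):
```
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel

processor = NougatProcessor.from_pretrained("facebook/nougat-base")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-base")

# a rendered page of a scientific PDF
image = Image.open("page.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

outputs = model.generate(pixel_values, max_new_tokens=512)
markdown = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(processor.post_process_generation(markdown, fix_markdown=True))
```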
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25942/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25942",
"html_url": "https://github.com/huggingface/transformers/pull/25942",
"diff_url": "https://github.com/huggingface/transformers/pull/25942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25942.patch",
"merged_at": 1695704765000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25941
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25941/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25941/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25941/events
|
https://github.com/huggingface/transformers/pull/25941
| 1,879,527,351 |
PR_kwDOCUB6oc5Zcd-N
| 25,941 |
Update README.md
|
{
"login": "NinoRisteski",
"id": 95188570,
"node_id": "U_kgDOBax2Wg",
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinoRisteski",
"html_url": "https://github.com/NinoRisteski",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions",
"organizations_url": "https://api.github.com/users/NinoRisteski/orgs",
"repos_url": "https://api.github.com/users/NinoRisteski/repos",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"received_events_url": "https://api.github.com/users/NinoRisteski/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25941). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
fixed a typo
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25941/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25941",
"html_url": "https://github.com/huggingface/transformers/pull/25941",
"diff_url": "https://github.com/huggingface/transformers/pull/25941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25941.patch",
"merged_at": 1693823302000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25940
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25940/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25940/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25940/events
|
https://github.com/huggingface/transformers/pull/25940
| 1,879,425,780 |
PR_kwDOCUB6oc5ZcIIT
| 25,940 |
zip_longest lists and tuples, batches can be sized differently
|
{
"login": "YoraiLevi",
"id": 50873841,
"node_id": "MDQ6VXNlcjUwODczODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YoraiLevi",
"html_url": "https://github.com/YoraiLevi",
"followers_url": "https://api.github.com/users/YoraiLevi/followers",
"following_url": "https://api.github.com/users/YoraiLevi/following{/other_user}",
"gists_url": "https://api.github.com/users/YoraiLevi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YoraiLevi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YoraiLevi/subscriptions",
"organizations_url": "https://api.github.com/users/YoraiLevi/orgs",
"repos_url": "https://api.github.com/users/YoraiLevi/repos",
"events_url": "https://api.github.com/users/YoraiLevi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YoraiLevi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @muellerzr @pacman100 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,697 | 1,697 |
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #25939

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [❌] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [❌] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [❌] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [❌] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25940/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25940",
"html_url": "https://github.com/huggingface/transformers/pull/25940",
"diff_url": "https://github.com/huggingface/transformers/pull/25940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25940.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/25939
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25939/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25939/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25939/events
|
https://github.com/huggingface/transformers/issues/25939
| 1,879,395,851 |
I_kwDOCUB6oc5wBU4L
| 25,939 |
nested_concat assumes lists are equal in size
|
{
"login": "YoraiLevi",
"id": 50873841,
"node_id": "MDQ6VXNlcjUwODczODQx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YoraiLevi",
"html_url": "https://github.com/YoraiLevi",
"followers_url": "https://api.github.com/users/YoraiLevi/followers",
"following_url": "https://api.github.com/users/YoraiLevi/following{/other_user}",
"gists_url": "https://api.github.com/users/YoraiLevi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YoraiLevi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YoraiLevi/subscriptions",
"organizations_url": "https://api.github.com/users/YoraiLevi/orgs",
"repos_url": "https://api.github.com/users/YoraiLevi/repos",
"events_url": "https://api.github.com/users/YoraiLevi/events{/privacy}",
"received_events_url": "https://api.github.com/users/YoraiLevi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Similarly,\r\n`len(batched_labels[0]['orig_size'])` is 7 and not 8 or any multiple of 2.\r\n\r\n\r\n",
"It's impossible to know where a bounding box originated from\r\n\r\n\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"hello, @YoraiLevi did you resolve this issue?",
"@rhajou i wrote a custom trainer and fixed the above issue locally for myself with my suggestion in my pull request, doubt it works now because there were a lot of changes in the repository since, You can see the custom trainer in my \"Intro to deep learning final project\" repo however i am pretty sure it's missing something for it to work."
] | 1,693 | 1,706 | 1,697 |
NONE
| null |
### System Info
I have stumbled upon a quirk while trying to figure out how to calculate custom metrics.
Using a DETR model for object detection with the provided Trainer and a dataset whose last batch is smaller, I am missing labels in the custom metric input.
The length of `batched_labels` in the metric matches the length of the (smaller) last batch and isn't merged like the other fields via https://github.com/huggingface/transformers/blob/0afa5071bd84e44301750fdc594e33db102cf374/src/transformers/trainer_pt_utils.py#L105, which is called on line
https://github.com/huggingface/transformers/blob/0afa5071bd84e44301750fdc594e33db102cf374/src/transformers/trainer.py#L3237
For equal-sized batches it works fine (total eval_dataset size is 16, automatically batched into 4s).


### Who can help?
@muellerz @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
setup similar to: https://huggingface.co/docs/transformers/main/en/tasks/object_detection#object-detection
```
import torch
from transformers import EvalPrediction
from transformers.models.detr.modeling_detr import DetrObjectDetectionOutput


def compute_metrics(eval_pred: EvalPrediction):
    (loss_dict, logits, pred_boxes, last_hidden_state, encoder_last_hidden_state), batched_labels = eval_pred
    outputs = DetrObjectDetectionOutput(
        logits=torch.from_numpy(logits),
        pred_boxes=torch.from_numpy(pred_boxes),
        last_hidden_state=None,
        decoder_hidden_states=None,
    )
    # one entry per batch of labels; the smaller last batch sets the list length
    number_of_image_ids_in_each_batch = [batched_label['image_id'].shape for batched_label in batched_labels]
    print(number_of_image_ids_in_each_batch)
```
trainer,
```
trainer = Trainer(
model=model,
args=training_args,
data_collator=collate_fn,
train_dataset=ds_train_augmented,
eval_dataset=ds_val_augmented,
tokenizer=image_processor,
compute_metrics=compute_metrics,
)
```
### Expected behavior
Pad it to match the longest? Append it to a growing list?
Anything but the current behavior, so that the data is retained (see the sketch below).
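For illustration only (this is a toy sketch, not the actual `nested_concat` implementation), pairing the accumulated list with a smaller last batch via `zip` silently drops entries, while `zip_longest` keeps them:
```
from itertools import zip_longest

import numpy as np


def toy_concat(accumulated, new_batch):
    # zip() stops at the shorter list, so trailing label entries are dropped
    return [np.concatenate([a, b]) for a, b in zip(accumulated, new_batch)]


def toy_concat_longest(accumulated, new_batch):
    # zip_longest() pads the shorter list instead of dropping data
    fill = np.array([])
    return [np.concatenate([a, b]) for a, b in zip_longest(accumulated, new_batch, fillvalue=fill)]


full = [np.ones(4)] * 4  # labels gathered so far, one entry per example
last = [np.ones(2)] * 2  # smaller final batch

print(len(toy_concat(full, last)))          # 2 -> labels lost
print(len(toy_concat_longest(full, last)))  # 4 -> everything retained
```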
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25939/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25939/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25938
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25938/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25938/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25938/events
|
https://github.com/huggingface/transformers/issues/25938
| 1,879,372,509 |
I_kwDOCUB6oc5wBPLd
| 25,938 |
Mask2Former HungarianMatcherMask2Former overflow on bad initial state when using 16-bit autocast
|
{
"login": "pjh4993",
"id": 12472082,
"node_id": "MDQ6VXNlcjEyNDcyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/12472082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjh4993",
"html_url": "https://github.com/pjh4993",
"followers_url": "https://api.github.com/users/pjh4993/followers",
"following_url": "https://api.github.com/users/pjh4993/following{/other_user}",
"gists_url": "https://api.github.com/users/pjh4993/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pjh4993/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pjh4993/subscriptions",
"organizations_url": "https://api.github.com/users/pjh4993/orgs",
"repos_url": "https://api.github.com/users/pjh4993/repos",
"events_url": "https://api.github.com/users/pjh4993/events{/privacy}",
"received_events_url": "https://api.github.com/users/pjh4993/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Regarding the script, it seems that you are forcing a bad initialisation, but the `Mask2FormerPreTrainedModel` 's `_init_weights` should be responsible of a correct initialisation, which is why I am not sure we should include such changes. Not familiar with the model / original loss might not use this, which can change expected results. Pinging @amyeroberts (which is the person you were tying to ping?) \r\n",
"Hi @pjh4993, thanks for raising this issue! \r\n\r\n@ArthurZucker's right that casting should be handled through officially supported routes e.g. `Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint, torch_dtype=torch.float16)`\r\n\r\nThat being said, your proposed change is a good one: it makes things more stable and equivalent to the previous functions logic. Would you like to open a PR with this change? This way you get the github contribution. ",
"@amyeroberts Thx. I will open PR till UTC+9:00 2023-09-10 12:00:00",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,697 | 1,697 |
NONE
| null |
### System Info
- `transformers` version: 4.29.0
- Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.2
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help?
@amy
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Import pair_wise_sigmoid_cross_entropy_loss from transformers.models.mask2former.modeling_mask2former
```
from transformers.models.mask2former.modeling_mask2former import pair_wise_sigmoid_cross_entropy_loss
```
2. initialize arbitrary input and labels.
```
inputs = torch.randn([16,121445]).cuda() - 2 # To simulate bad initialization
labels = torch.randn([2, 121445]).cuda()
```
3. Compute loss with 16-bit autocast
```
with torch.autocast("cuda", dtype=torch.half):
result = pair_wise_sigmoid_cross_entropy_loss(inputs, labels)
```
4. The result is inf everywhere
```
>>> result.isinf().all()
tensor(True, device='cuda:0')
```
I suggest normalizing `cross_entropy_loss_pos`, `cross_entropy_loss_neg` with `height_and_width` before computing matmul as follows:
```
def pair_wise_sigmoid_cross_entropy_loss(
inputs: torch.Tensor, labels: torch.Tensor
) -> torch.Tensor:
"""
A pair wise version of the cross entropy loss, see `sigmoid_cross_entropy_loss` for usage.
Args:
inputs (`torch.Tensor`):
A tensor representing a mask.
labels (`torch.Tensor`):
A tensor with the same shape as inputs.
Stores the binary classification labels for each element in inputs
(0 for the negative class and 1 for the positive class).
Returns:
loss (`torch.Tensor`): The computed loss between each pairs.
"""
height_and_width = inputs.shape[1]
criterion = torch.nn.BCEWithLogitsLoss(reduction="none")
# prevent overflow
cross_entropy_loss_pos = criterion(inputs, torch.ones_like(inputs)) / height_and_width
cross_entropy_loss_neg = criterion(inputs, torch.zeros_like(inputs)) / height_and_width
loss_pos = torch.matmul(cross_entropy_loss_pos, labels.T)
loss_neg = torch.matmul(cross_entropy_loss_neg, (1 - labels).T)
loss = loss_pos + loss_neg
return loss
```
This helps prevent overflow in the matmul computation at 16-bit precision or lower.
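A quick check of the two orderings under autocast (requires a CUDA device; the shapes mirror the reproduction above):
```
import torch

inputs = (torch.randn(16, 121445) - 2).cuda()             # simulate bad initialization
labels = torch.randint(0, 2, (2, 121445)).float().cuda()
criterion = torch.nn.BCEWithLogitsLoss(reduction="none")
height_and_width = inputs.shape[1]

with torch.autocast("cuda", dtype=torch.float16):
    pos = criterion(inputs, torch.ones_like(inputs))
    neg = criterion(inputs, torch.zeros_like(inputs))
    # matmul first, normalize after: the fp16 matmul overflows
    current = (torch.matmul(pos, labels.T) + torch.matmul(neg, (1 - labels).T)) / height_and_width
    # normalize first, matmul after: values stay within fp16 range
    proposed = torch.matmul(pos / height_and_width, labels.T) + torch.matmul(neg / height_and_width, (1 - labels).T)

print(current.isinf().any(), proposed.isinf().any())  # expected: True vs False
```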
### Expected behavior
A non-overflowing cross-entropy loss result.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25938/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25937
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25937/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25937/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25937/events
|
https://github.com/huggingface/transformers/issues/25937
| 1,879,293,277 |
I_kwDOCUB6oc5wA71d
| 25,937 |
RWKV produces erroneous output after fine-tuning and model.eval() enabled; inference during model.train() works correctly
|
{
"login": "LuciferianInk",
"id": 94832312,
"node_id": "U_kgDOBacGuA",
"avatar_url": "https://avatars.githubusercontent.com/u/94832312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LuciferianInk",
"html_url": "https://github.com/LuciferianInk",
"followers_url": "https://api.github.com/users/LuciferianInk/followers",
"following_url": "https://api.github.com/users/LuciferianInk/following{/other_user}",
"gists_url": "https://api.github.com/users/LuciferianInk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LuciferianInk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LuciferianInk/subscriptions",
"organizations_url": "https://api.github.com/users/LuciferianInk/orgs",
"repos_url": "https://api.github.com/users/LuciferianInk/repos",
"events_url": "https://api.github.com/users/LuciferianInk/events{/privacy}",
"received_events_url": "https://api.github.com/users/LuciferianInk/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @LuciferianInk 👋 \r\n\r\nCalling `model.eval()` turns off things like dropout and per-batch normalization and, as you wrote, should improve things at inference time. I'm afraid that without a short reproducer to load the fine-tuned model and reproduce the problem quickly, there is little we can do -- our bandwidth is limited, so we need your help too :)",
"Okay, thanks for the response. I might need a couple of days, but I'll try to put something together for you. I'll probably use Docker. Let me know if that's an issue.",
"Hi @gante,\r\n\r\nAs requested, I have published [a small project to reproduce this issue](https://github.com/0-5788719150923125/vtx/tree/main/examples/rwkv). It will load RWKV-v4 430m model, attach a LoRA adapter, and quickly run inference. You may choose to use the provided Docker configs, or not; both Docker or vanilla Python should work. Further instructions are in the README file.\r\n\r\nI did not recreate the training loop, because you didn't ask for it (nor am I certain that training was the problem). If you'd like to see the training code, I linked to it above.\r\n\r\nThank you for your time and attention to this matter. Please let me know if you need anything else from me.",
"Well, I've learned a few things, which make me lean towards this being a \"quirk in the model,\" rather than an actual problem with Transformers' inference.\r\n1. I was able to train `RWKV/rwkv-4-169m-pile` by using PEFT, without running into this issue at all. However, both 430m and 1b5 immediately run into it.\r\n2. I suspect the model is overfitting. Perhaps larger RWKV models are more sensitive to fine-tuning with less data, or data that contains certain kinds of patterns? Although, I have several gigabytes of training data... and less than 5% of it actually looks like the examples RWKV is overfitting on.\r\n3. I can negate this problem by setting PEFT's \"r\" argument to 1. I think this makes sense; with so few trainable parameters, the model is forced to learn more general representations, rather than memorizing the numbers and patterns you see above. Of course, the problem is... you can't encode very much information into such small weight matrices.\r\n4. I tried a full and regular fine-tuning on the 430m model, and the issue is not present there. Thus, LoRA is the problem.\r\n5. I still haven't found a great solution, but I'm sure I will continue to revisit this problem, until I've landed on something.\r\n\r\nNot sure if the issue is still worth tracking here, at this point. I really think I'm just fighting with the challenge of training an RNN, versus the ease of a transformer. I'll leave it to the maintainers to decide if they'd like to close the issue or not.",
"Thanks for sharing your insights! Might be interesting for @pacman100 who has worked on PEFT (no actionable items right now AFAIU)",
"Okay! I think we finally landed on a solution. It started with an explanation of the various RWKV modules from Google Bard:\r\n### Key module\r\n\r\n_The key module takes the input query and context as input and produces a representation that is used to retrieve the most relevant key-value pairs from the RWKV memory. This is done by transforming the input query and context into a common space, where they can be compared to the keys in the memory. The key module is typically implemented as a neural network, with parameters that are learned during training._\r\n\r\n### Value module\r\n\r\n_The value module takes the retrieved key-value pairs as input and produces a representation that is used to update the output query. This is done by transforming the key-value pairs into a common space, where they can be combined to produce an update to the output query. The value module is typically implemented as a neural network, with parameters that are learned during training._\r\n\r\n### Receptance module\r\n\r\n_The receptance module controls how much of the update produced by the value module is applied to the output query. This is done by multiplying the update by a scalar value, which is called the receptance. The receptance module is typically implemented as a single layer neural network, with parameters that are learned during training._\r\n\r\nLong story short, I spent some time experimenting with [asymmetric ranks and alpha](https://github.com/huggingface/peft/commit/1c0654b9a524863ba58d561c0a40c37ae808b637) on the different modules, and eventually landed on some settings that work. At this point, I'm tired of fighting with it, and ready to move on.\r\n\r\nI'll be sure to close this issue in a few days, after I'm positive the problem was resolved.",
"Well, there is no doubt that RWKV is more difficult to work with than a transformer, but I've finally landed on some functional settings. At the end of the day, it required a larger training set, less weight decay, SWA, and a lot of other optimizations. But mostly - avoid training the \"value\", \"output\", and \"head\" modules - and you'll have a better time.\r\n\r\nGoing to close this issue now.",
"Thanks for sharing, very insightful @LuciferianInk :)"
] | 1,693 | 1,696 | 1,696 |
NONE
| null |
### System Info
- `transformers` version: 4.32.1
- Platform: Linux-6.4.10-arch1-1-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True (Nvidia GTX 1070)
- Using distributed or parallel set-up in script?: False
### Who can help?
@ArthurZucker @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am not using the Huggingface Trainer. I am using a very basic training loop, originally forked from AITextGen, which uses LightningAI's automatic optimization. The bulk of that fine-tuning code [can be found here](https://github.com/LuciferianInk/aitextgen/blob/master/aitextgen/train.py). This code works correctly for the fine-tuning of other models, like GPT-Neo and Cerebras-GPT. Something about RWKV v4 (169m/430m) is different.
To reproduce, you would have to implement my training logic (which isn't terribly "custom" or "complicated" at all), then toggle between eval/train modes, while performing inference - to see the difference. Alternatively, perhaps you could train in your own way, and toggle between eval/train... just to let me know if the problem is with my training code? I don't think it is.
I have tried both LoRA and traditional fine-tuning. Both have the same results. I have tried all manner of learning rate adjustments, weight decay, batch size... but hyperparameters don't seem to fix this problem. Nor would I really expect it to; if the problem can be fixed by toggling between eval/train modes, then I would expect that the problem lies in the HF implementation. I spoke to BlinkDL about this (the creator of RWKV), and he said it sounds like a bug in the HF inference code.
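A minimal way to see the discrepancy, assuming the fine-tuned checkpoint has been saved to disk (the model path and generation settings below are placeholders):
```
import torch
from transformers import AutoTokenizer, RwkvForCausalLM

tokenizer = AutoTokenizer.from_pretrained("RWKV/rwkv-4-430m-pile")
model = RwkvForCausalLM.from_pretrained("path/to/finetuned-rwkv")  # placeholder path

prompt = "¶389988567908488489:> What is the point of this video?\n"
inputs = tokenizer(prompt, return_tensors="pt")

for mode in ("train", "eval"):
    getattr(model, mode)()  # toggles model.train() / model.eval()
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
    print(mode, tokenizer.decode(out[0], skip_special_tokens=True))
```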
### Expected behavior
RWKV is unable to produce coherent output after fine-tuning, when `self.model.eval()` is enabled. If the model is set to `self.model.train()`, then the output is as expected.
Take this sample data, which I've fine-tuned RWKV v4 430m on:
```
¶6064923396882833153:> I have been through everything.
¶251161711099248641:> Am with you...there buddy! lol Like that save? jk
¶1623339977514240393:> Nice, this gives me hope for the sake of being fully onboarded into our own reality. Lol
```
Within <1000 training steps, a fine-tuned model (with `self.model.train()` enabled) will be capable of producing output like this:
```
¶389988567908488489:> What is the point of this video?
¶747257279748767804:> Just be more careful
¶389988567908488489:> What is the point of this video?
¶747257279748767804:> The point is to make you think.
¶389988567908488489:> What is the point of this video?
¶747257279748767804:> Because it is a video. A video is a video.
```
However, that same model - with `self.model.eval()` enabled - will produce gibberish, like this:
```
¶ ¶ [")')\")", [ [ [ [3]3] ¶**A1\new!C$',!C$',!C$',!C$')!C$3\ndraw (((4.5+4@
¶ Which 'YNUMC" is (((78740833245160745 WCHAR) + "Enter " +
¶,iple!", [vi@ 1400! 0.post\n:> (((694509,632072,A"," - - -", - - -))) [r "'¶5,",
```
I would expect RWKV to perform better in `self.model.eval()` mode, not worse than `self.model.train()`. Clearly, the model is training correctly, and it is learning; something about eval mode completely breaks generation, though.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25937/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25936
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25936/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25936/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25936/events
|
https://github.com/huggingface/transformers/pull/25936
| 1,879,226,175 |
PR_kwDOCUB6oc5ZbeSE
| 25,936 |
Fix typos
|
{
"login": "omahs",
"id": 73983677,
"node_id": "MDQ6VXNlcjczOTgzNjc3",
"avatar_url": "https://avatars.githubusercontent.com/u/73983677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omahs",
"html_url": "https://github.com/omahs",
"followers_url": "https://api.github.com/users/omahs/followers",
"following_url": "https://api.github.com/users/omahs/following{/other_user}",
"gists_url": "https://api.github.com/users/omahs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omahs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omahs/subscriptions",
"organizations_url": "https://api.github.com/users/omahs/orgs",
"repos_url": "https://api.github.com/users/omahs/repos",
"events_url": "https://api.github.com/users/omahs/events{/privacy}",
"received_events_url": "https://api.github.com/users/omahs/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25936). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
Fix typos
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25936/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25936",
"html_url": "https://github.com/huggingface/transformers/pull/25936",
"diff_url": "https://github.com/huggingface/transformers/pull/25936.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25936.patch",
"merged_at": 1693822512000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25935
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25935/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25935/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25935/events
|
https://github.com/huggingface/transformers/issues/25935
| 1,879,150,714 |
I_kwDOCUB6oc5wAZB6
| 25,935 |
problem
|
{
"login": "Zlasejd",
"id": 73882900,
"node_id": "MDQ6VXNlcjczODgyOTAw",
"avatar_url": "https://avatars.githubusercontent.com/u/73882900?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zlasejd",
"html_url": "https://github.com/Zlasejd",
"followers_url": "https://api.github.com/users/Zlasejd/followers",
"following_url": "https://api.github.com/users/Zlasejd/following{/other_user}",
"gists_url": "https://api.github.com/users/Zlasejd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zlasejd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zlasejd/subscriptions",
"organizations_url": "https://api.github.com/users/Zlasejd/orgs",
"repos_url": "https://api.github.com/users/Zlasejd/repos",
"events_url": "https://api.github.com/users/Zlasejd/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zlasejd/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[] | 1,693 | 1,693 | 1,693 |
NONE
| null |
### System Info
File "/fsa/home/hqz_zhangjd/.conda/envs/newenv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 986, in forward
buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
RuntimeError: The expanded size of the tensor (693) must match the existing size (512) at non-singleton dimension 1. Target sizes: [16, 693]. Tensor sizes: [1, 512]
2%|▏ | 2738/146754 [12:51<11:16:06, 3.55it/s]
Every time I run the file 'run_mlm_wwm.py', it reports this error. The issue persists even after changing versions; we are currently using Torch 2.0.1 and Transformers 4.28.1. Can you help us solve it?
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
File "/fsa/home/hqz_zhangjd/.conda/envs/newenv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 986, in forward
buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length)
RuntimeError: The expanded size of the tensor (693) must match the existing size (512) at non-singleton dimension 1. Target sizes: [16, 693]. Tensor sizes: [1, 512]
2%|▏ | 2738/146754 [12:51<11:16:06, 3.55it/s]
Every time I run the file 'run_mlm_wwm.py', it reports this error. The issue persists even after changing versions; we are currently using Torch 2.0.1 and Transformers 4.28.1. Can you help us solve it?
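For context, the error means the batch contains sequences longer than the model's 512-position limit; a minimal illustration of truncating at tokenization time (the checkpoint name below is an assumption):
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
encoded = tokenizer(
    "a very long training sentence " * 100,
    truncation=True,
    max_length=512,  # keep inputs within the position-embedding size
)
print(len(encoded["input_ids"]))  # <= 512
```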
### Expected behavior
solve this problem.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25935/timeline
|
not_planned
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25934
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25934/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25934/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25934/events
|
https://github.com/huggingface/transformers/pull/25934
| 1,879,144,685 |
PR_kwDOCUB6oc5ZbOzu
| 25,934 |
Fix small typo README.md
|
{
"login": "zspo",
"id": 26846598,
"node_id": "MDQ6VXNlcjI2ODQ2NTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/26846598?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zspo",
"html_url": "https://github.com/zspo",
"followers_url": "https://api.github.com/users/zspo/followers",
"following_url": "https://api.github.com/users/zspo/following{/other_user}",
"gists_url": "https://api.github.com/users/zspo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zspo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zspo/subscriptions",
"organizations_url": "https://api.github.com/users/zspo/orgs",
"repos_url": "https://api.github.com/users/zspo/repos",
"events_url": "https://api.github.com/users/zspo/events{/privacy}",
"received_events_url": "https://api.github.com/users/zspo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25934). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
# What does this PR do?
1. update paper links in readme
2. fix other small bugs in .md
## Who can review?
Documentation: @stevhliu and @MKhalusova
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25934/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25934",
"html_url": "https://github.com/huggingface/transformers/pull/25934",
"diff_url": "https://github.com/huggingface/transformers/pull/25934.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25934.patch",
"merged_at": 1694005650000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25933
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25933/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25933/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25933/events
|
https://github.com/huggingface/transformers/pull/25933
| 1,879,085,441 |
PR_kwDOCUB6oc5ZbDZ6
| 25,933 |
PegasusX add _no_split_modules
|
{
"login": "andreeahedes",
"id": 53334746,
"node_id": "MDQ6VXNlcjUzMzM0NzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/53334746?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andreeahedes",
"html_url": "https://github.com/andreeahedes",
"followers_url": "https://api.github.com/users/andreeahedes/followers",
"following_url": "https://api.github.com/users/andreeahedes/following{/other_user}",
"gists_url": "https://api.github.com/users/andreeahedes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andreeahedes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andreeahedes/subscriptions",
"organizations_url": "https://api.github.com/users/andreeahedes/orgs",
"repos_url": "https://api.github.com/users/andreeahedes/repos",
"events_url": "https://api.github.com/users/andreeahedes/events{/privacy}",
"received_events_url": "https://api.github.com/users/andreeahedes/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@andreeahedes Thanks for adding this! Have you run the [accelerate tests](https://huggingface.co/docs/transformers/testing#run-accelerate-tests) for this model on 1 and 2 GPUs with these changes? ",
"Hi @amyeroberts , I ran `test_disk_offload`, `test_cpu_offload` and ` test_model_parallelism` on a machine with 2 GPUs.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25933). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,694 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
Add the `_no_split_modules` attribute to the PegasusX model to allow disk/CPU offloading and multi-GPU parallelism.
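With `_no_split_modules` defined, the model can be loaded with a device map; a minimal sketch (the checkpoint name is an assumption, and `accelerate` must be installed):
```
from transformers import PegasusXForConditionalGeneration

# shards the model across available GPUs and offloads to CPU/disk if needed
model = PegasusXForConditionalGeneration.from_pretrained(
    "google/pegasus-x-base", device_map="auto"
)
```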
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? -> Yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. ->https://github.com/huggingface/accelerate/issues/1900
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).-No
- [ ] Did you write any new necessary tests? No
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25933/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25933",
"html_url": "https://github.com/huggingface/transformers/pull/25933",
"diff_url": "https://github.com/huggingface/transformers/pull/25933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25933.patch",
"merged_at": 1693928075000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25932
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25932/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25932/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25932/events
|
https://github.com/huggingface/transformers/pull/25932
| 1,879,034,211 |
PR_kwDOCUB6oc5Za5yW
| 25,932 |
Add TFDebertaV2ForMultipleChoice
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @Rocketknight1 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25932). All of your documentation changes will be reflected on that endpoint.",
"@Rocketknight1 Not sure why this test fails.\r\n",
"That test is an issue with the CI rather than this PR, you can ignore it! Are you ready for me to merge now?",
"> That test is an issue with the CI rather than this PR, you can ignore it! Are you ready for me to merge now?\n\nYes ",
"Ugh, it won't let me merge, which means we'll need to rebase to get the tests working. Can you:\r\n\r\n1) [Pull the latest changes to your fork's main branch](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork)\r\n2) In your local working repo, `git checkout main` and `git pull` to ensure your local main branch is up to date\r\n3) In your local repo, `git checkout fix_issue_25537` and then `git pull` and `git rebase main` to rebase onto the latest `main` branch\r\n4) Finally, `git push --force` to upload the rebased branch to Github\r\n\r\nAfter that, tests should pass!",
"> Ugh, it won't let me merge, which means we'll need to rebase to get the tests working. Can you:\r\n> \r\n> 1. [Pull the latest changes to your fork's main branch](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork)\r\n> 2. In your local working repo, `git checkout main` and `git pull` to ensure your local main branch is up to date\r\n> 3. In your local repo, `git checkout fix_issue_25537` and then `git pull` and `git rebase main` to rebase onto the latest `main` branch\r\n> 4. Finally, `git push --force` to upload the rebased branch to Github\r\n> \r\n> After that, tests should pass!\r\n\r\nDone, Lets hope the test passes.",
"Looks like everything's passing now - sorry about the CI issues, and thanks for a very clean and useful PR!"
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #25537
Add TFDebertaV2ForMultipleChoice model
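For reference, a minimal usage sketch of the new head (the checkpoint name below is an assumption; `from_pt=True` may be needed if the repo only has PyTorch weights):
```
import tensorflow as tf
from transformers import AutoTokenizer, TFDebertaV2ForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-small")
model = TFDebertaV2ForMultipleChoice.from_pretrained("microsoft/deberta-v3-small")

prompt = "The sky is"
choices = ["blue today.", "a prime number."]
inputs = tokenizer([prompt, prompt], choices, return_tensors="tf", padding=True)
# multiple-choice heads expect inputs of shape [batch_size, num_choices, seq_len]
inputs = {k: tf.expand_dims(v, 0) for k, v in inputs.items()}
logits = model(**inputs).logits  # shape [1, 2]
```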
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25932/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25932",
"html_url": "https://github.com/huggingface/transformers/pull/25932",
"diff_url": "https://github.com/huggingface/transformers/pull/25932.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25932.patch",
"merged_at": 1693930386000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25931
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25931/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25931/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25931/events
|
https://github.com/huggingface/transformers/issues/25931
| 1,878,993,768 |
I_kwDOCUB6oc5v_yto
| 25,931 |
[Pytorch] Unexpected task example translation : text-generation instead of Translation in model card and Hub
|
{
"login": "SoyGema",
"id": 24204714,
"node_id": "MDQ6VXNlcjI0MjA0NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SoyGema",
"html_url": "https://github.com/SoyGema",
"followers_url": "https://api.github.com/users/SoyGema/followers",
"following_url": "https://api.github.com/users/SoyGema/following{/other_user}",
"gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions",
"organizations_url": "https://api.github.com/users/SoyGema/orgs",
"repos_url": "https://api.github.com/users/SoyGema/repos",
"events_url": "https://api.github.com/users/SoyGema/events{/privacy}",
"received_events_url": "https://api.github.com/users/SoyGema/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"Hi @SoyGema, thanks for raising this issue! \r\n\r\nIf you want to change the task that a model is mapped to, you can do so by clicking on the `Edit model card` button \r\n \r\n<img width=\"1028\" alt=\"Screenshot 2023-09-04 at 14 52 39\" src=\"https://github.com/huggingface/transformers/assets/22614925/a01e4c84-b836-42b5-959f-d6e48c589321\">\r\n\r\nand then selecting the desired pipeline_tag.\r\n\r\n<img width=\"1028\" alt=\"Screenshot 2023-09-04 at 14 53 26\" src=\"https://github.com/huggingface/transformers/assets/22614925/b1af2951-c513-4492-a965-b08af603a0e0\">\r\n\r\n\r\nWhy this is automapped to text2text generation when the [task is specified in the script](https://github.com/huggingface/transformers/blob/bfb1895e3346cb8a2bf2560c75d45e70edf46a47/examples/pytorch/translation/run_translation.py#L665), I'm not sure. However, this tag isn't technically incorrect - T5 is an encoder-decoder model and this is a text generation task. cc @Narsil @muellerzr do either of you know? \r\n\r\nWith regards to your questions about BLEU, this is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.\r\n",
"> Why this is automapped to text2text generation when the [task is specified in the script](https://github.com/huggingface/transformers/blob/bfb1895e3346cb8a2bf2560c75d45e70edf46a47/examples/pytorch/translation/run_translation.py#L665), I'm not sure. However, this tag isn't technically incorrect - T5 is an encoder-decoder model and this is a text generation task. cc @Narsil @muellerzr do either of you know?\r\n\r\nThe hub infers tasks automatically from the config.json@architectures when it's missing from the README.",
"Thanks for the support given in this issue. I consider this complete as the main challenge has been supported and some derivative as well. With that and the fact that I tend to own the issues I open, Im proceeding to close it. Feel free to reopen if necessary. Thanks so much!!🥹"
] | 1,693 | 1,694 | 1,694 |
CONTRIBUTOR
| null |
### System Info
Hello there!
Thanks for making [translation example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) with Pytorch.
🙏🙏 The documentation is amazing and the script is very well structured! 🙏🙏
**System Info**
```
- `transformers` version: 4.32.0.dev0
- Platform: macOS-13.4.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
### Who can help?
@patil-suraj
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
#### Context
Fine-tuning an **English-Hindi** translation model with [t5-small](https://huggingface.co/SoyGema/t5-small) and the [opus100](https://huggingface.co/datasets/opus100) dataset
Running the example [run_translation.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation.py) from transformers repository.
[Small modification]( https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/de9def88f08b2cc14b39c5c74b31b60d4166cab5/src/models/Pytorch/run_translation.py#L366) for making the dataset a little bit smaller for testing end-to-end
Checked recommendations from [README.md](https://github.com/huggingface/transformers/blob/0afa5071bd84e44301750fdc594e33db102cf374/examples/pytorch/translation/README.md?plain=1#L81) when using T5 family models
- [X] 1. Add `--source_prefix` flag
- [X] 2. Change 3 flags accordingly `--source_lang` , `--target_lang` and `--source_prefix`
```
python run_translation.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--source_lang en \
--target_lang hi \
--source_prefix "translate English to Hindi: " \
--dataset_name opus100 \
--dataset_config_name en-hi \
--output_dir=/tmp/english-hindi \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--num_train_epochs=3 \
--push_to_hub=True \
--predict_with_generate=True \
--report_to all \
--do_predict
```
Model trains correctly. It is also connected to W&B
Trace of the model card once the model is trained:
```
[INFO|modelcard.py:452] 2023-09-02 23:08:32,386 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Sequence-to-sequence Language Modeling', 'type': 'text2text-generation'}, 'dataset': {'name': 'opus100', 'type': 'opus100', 'config': 'en-hi', 'split': 'validation', 'args': 'en-hi'}}
```
The model is pushed to the HUB
### Expected behavior
- Correct task recognition and inference: somehow the task is uploaded to the Hub as **text-generation** and not as a **translation** task.
Inference shows text-generation as well, and the model card seems to point to that too (a sketch of a manual metadata workaround follows the screenshot below).
During my search, I visited/read the [forum](https://discuss.huggingface.co/t/trainer-vs-seq2seqtrainer/3145), but I think it refers to the BLEU generation metric and not the task (if I'm understanding it correctly). I've also checked the [Tasks docs](https://huggingface.co/docs/hub/models-tasks) - I think they explain how to add a task, not how to change it; please let me know if I should follow this path - and the [Troubleshoot](https://huggingface.co/docs/transformers/troubleshooting) page, but couldn't find anything.
<img width="1308" alt="text-generation-instead-translation" src="https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/assets/24204714/abdee474-3629-4f86-b669-f7b2f2e6209a">
_Tangential note_: I'm aware that the BLEU score is 0. I tried other languages and modified some of the logic in the [compute_metrics](https://github.com/SoyGema/The-Lord-of-The-Words-The-two-frameworks/blob/main/src/models/Pytorch/run_translation.py#L590) function, and also tried another language for which BLEU was computed correctly; however, that model was loaded as text-generation as well. If further experimentation confirms a hypothesis I have about this logic and BLEU (that it affects languages with alphabets distinct from Latin ones), I will let you know, but I ran those experiments to check whether the task issue was somehow related to it.
<img width="254" alt="Screenshot 2023-09-03 at 10 14 15" src="https://github.com/huggingface/transformers/assets/24204714/ce27020a-6565-46cd-ab50-9e69ad505104">
Any help with clarifying and pointing to the translation task would be much appreciated.
And if some change in the script or docs comes out of this, I'm happy to contribute.
Thanks for making transformers 🤖 , for the time dedicated to this issue 🕞 and have a nice day 🤗!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25931/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25930
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25930/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25930/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25930/events
|
https://github.com/huggingface/transformers/issues/25930
| 1,878,943,135 |
I_kwDOCUB6oc5v_mWf
| 25,930 |
forward pass consumes 2x memory than required in Attention module when `use_cache=True`
|
{
"login": "RahulSChand",
"id": 16897807,
"node_id": "MDQ6VXNlcjE2ODk3ODA3",
"avatar_url": "https://avatars.githubusercontent.com/u/16897807?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RahulSChand",
"html_url": "https://github.com/RahulSChand",
"followers_url": "https://api.github.com/users/RahulSChand/followers",
"following_url": "https://api.github.com/users/RahulSChand/following{/other_user}",
"gists_url": "https://api.github.com/users/RahulSChand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RahulSChand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RahulSChand/subscriptions",
"organizations_url": "https://api.github.com/users/RahulSChand/orgs",
"repos_url": "https://api.github.com/users/RahulSChand/repos",
"events_url": "https://api.github.com/users/RahulSChand/events{/privacy}",
"received_events_url": "https://api.github.com/users/RahulSChand/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"cc @ArthurZucker ",
"Hey! I am not entirely sure what you are asking for is possible / an issue. \r\nI tested locally with `use_cache = True` for a generation and did not see any improvement. The reason is that `key_states` are used afterwards (repeated for example when the `num_kv_heads` is different than the number of heads, while the **copy** is kept to store the past key values. We could store the full past key values ( which would increase the RAM throughout the execution vs when computing the states), but then again, I did not see any performance improvements when switching to using:\r\n```pytho\r\n if past_key_value is not None:\r\n # reuse k, v, self_attention\r\n past_key_value = (torch.cat([past_key_value[0], key_states], dim=2), torch.cat([past_key_value[1], value_states], dim=2))\r\n if use_cache:\r\n key_states = past_key_value[0]\r\n value_states = past_key_value[1]\r\n```\r\nwhere I remove the copy. \r\n\r\nFreeing the tensors should be automatically handle by python's garbage collector, especially when overwriting a variable. \r\n\r\nDo you have a reproducing snippet with expected RAM usage, with a given batch size etx? I tried with batch size of 64 and notices no differences. ",
"@ArthurZucker This is a simple example where kv cache takes 2x the memory than it needs to.\r\n\r\nThe total GPU memory should be = model size + cuda overhead + kv cache\r\n\r\nIn below example, kv_cache should take `1000(tokens)*4096*32(layers)*2(kv)*2(float16 is 2 bytes) / (1024*1024)` MB = `500MB`. but it takes 1GB gpu memory because we store 2 copies for all 32 layers. That is, if we are generating the 1000th token then we have two kv_cache, one of shape (2, 32, 1000, 4096) & another of size (2, 32, 999, 4096).\r\n\r\nYou can either confirm this by running `nvidia-smi` or printing `torch.cuda.max_alloacted()` in the forward pass of `LlamaModel`. You can also change the below code to only have two total tokens & then 1000 total tokens & see that the diff b/w the 2 cases is 1GB (it should ideally be only 500MB).\r\n\r\n\r\n\r\n```python\r\nimport torch\r\nfrom transformers import LlamaTokenizer, DataCollatorWithPadding, LlamaForCausalLM\r\n\r\npath = \"meta-llama/Llama-2-7b\"\r\ntokenizer = LlamaTokenizer.from_pretrained(path)\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\nmodel = LlamaForCausalLM.from_pretrained(path, low_cpu_mem_usage = True, device_map = \"auto\", torch_dtype = torch.float16)\r\nword = \"Hello \"*500\r\nword = [word]\r\n\r\ninputs = tokenizer(word, return_tensors=\"pt\").to(model.device)\r\n\r\ngenerated_ids = model.generate(\r\n **inputs,\r\n use_cache=True,\r\n max_new_tokens=500,\r\n temperature=0.2,\r\n top_p=0.95,\r\n do_sample=True,\r\n eos_token_id=tokenizer.eos_token_id,\r\n pad_token_id=tokenizer.eos_token_id, # model has no pad token\r\n)\r\n```",
"I could not really find conclusive evidence of this when running the script. The maximum allocated memory does not always take into account what is released and with `nvidia-smi` I was getting the correct memory usage. Given that we do `past_key_value = (key_states, value_states) if use_cache else None` I would not expect the tensor to be kept. The best would be to overwrite the tuple but we can't for now. We plan on refactoring the cache so this could be fixed by #26681",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,702 | 1,702 |
NONE
| null |
### Feature request
Currently the forward pass of the model consumes 2x the memory actually required in the attention module when `use_cache=True`.
For example during generation, in the attention module of llama `key_states` is of shape `(bsz, heads, 1, head_size)`
& `past_key_value[0]` is of shape `(bsz, heads, seq_length, head_size)`.
When we do `torch.cat(..)` in line 337 below, we end up with two copies: `past_key_value[0]` of shape `(bsz, heads, seq_length, head_size)` and the new `key_states` of shape `(bsz, heads, seq_length+1, head_size)`.
https://github.com/huggingface/transformers/blob/0afa5071bd84e44301750fdc594e33db102cf374/src/transformers/models/llama/modeling_llama.py#L335-L338
Ideally we want the tensor that `past_key_value[0]` points to to be freed, since it is no longer used (and will be replaced by the newly created `key_states`). In the current implementation, if `(bsz=1, heads=12, seq_length=1024, head_size=128)`, the memory consumed is `bsz*heads*seq_length*head_size*layers*2`, whereas it should ideally only use half of that. This can be achieved by simply freeing `past_key_value[0]` after the cat operation finishes.
This is particularly noticeable when you increase `bsz`. Freeing the memory allows a 2x larger batch size or maximum generated sequence length. There are some edge cases where you might use `past_key_value[0]` again, so maybe there could be a flag to switch this behaviour on/off.
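To make the effect concrete, here is a small standalone sketch (not the actual modeling code - the sizes are arbitrary and it needs a CUDA device) comparing the peak allocation when the old cache tensor stays referenced for the rest of the step versus being dropped right after the concatenation:

```python
import torch

MiB = 2**20
# One fp16 cache copy of this size is ~32 MiB (1 * 32 * 4096 * 128 * 2 bytes).
bsz, heads, seq_len, head_size = 1, 32, 4096, 128

def peak_memory(free_old_copy: bool) -> float:
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    old = torch.randn(bsz, heads, seq_len, head_size, device="cuda", dtype=torch.float16)
    new_token = torch.randn(bsz, heads, 1, head_size, device="cuda", dtype=torch.float16)
    key_states = torch.cat([old, new_token], dim=2)
    if free_old_copy:
        # Proposed behaviour: drop the previous cache as soon as the concatenation is done.
        del old
    # Stand-in for the rest of the forward pass (attention scores, MLP activations, ...),
    # intentionally kept alive until the end of the "step".
    activations = torch.randn(bsz, heads, seq_len, head_size, device="cuda", dtype=torch.float16)
    return torch.cuda.max_memory_allocated() / MiB

print("old copy kept alive :", peak_memory(False), "MiB peak")
print("old copy freed early:", peak_memory(True), "MiB peak")
```

The first variant should peak roughly one full cache copy higher than the second, which is the overhead this request is about.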
### Motivation
Reduce memory consumption by 2x when `use_cache=True`. This allows us to increase the batch size by 2x, or the maximum sequence length of generated tokens by 2x, with the same memory.
### Your contribution
I can contribute if this change is wanted. The change is small on the surface, since all it requires is freeing the tensors that `past_key_value[0]` & `past_key_value[1]` point to after line 338:
https://github.com/huggingface/transformers/blob/0afa5071bd84e44301750fdc594e33db102cf374/src/transformers/models/llama/modeling_llama.py#L335-L338
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25930/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/25929
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25929/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25929/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25929/events
|
https://github.com/huggingface/transformers/pull/25929
| 1,878,938,875 |
PR_kwDOCUB6oc5ZaoRu
| 25,929 |
Update autoclass_tutorial.md
|
{
"login": "NinoRisteski",
"id": 95188570,
"node_id": "U_kgDOBax2Wg",
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinoRisteski",
"html_url": "https://github.com/NinoRisteski",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions",
"organizations_url": "https://api.github.com/users/NinoRisteski/orgs",
"repos_url": "https://api.github.com/users/NinoRisteski/repos",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"received_events_url": "https://api.github.com/users/NinoRisteski/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25929). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
fixed typos
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25929/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25929",
"html_url": "https://github.com/huggingface/transformers/pull/25929",
"diff_url": "https://github.com/huggingface/transformers/pull/25929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25929.patch",
"merged_at": 1693822609000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25928
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25928/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25928/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25928/events
|
https://github.com/huggingface/transformers/pull/25928
| 1,878,938,555 |
PR_kwDOCUB6oc5ZaoOD
| 25,928 |
Update community.md
|
{
"login": "NinoRisteski",
"id": 95188570,
"node_id": "U_kgDOBax2Wg",
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinoRisteski",
"html_url": "https://github.com/NinoRisteski",
"followers_url": "https://api.github.com/users/NinoRisteski/followers",
"following_url": "https://api.github.com/users/NinoRisteski/following{/other_user}",
"gists_url": "https://api.github.com/users/NinoRisteski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NinoRisteski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NinoRisteski/subscriptions",
"organizations_url": "https://api.github.com/users/NinoRisteski/orgs",
"repos_url": "https://api.github.com/users/NinoRisteski/repos",
"events_url": "https://api.github.com/users/NinoRisteski/events{/privacy}",
"received_events_url": "https://api.github.com/users/NinoRisteski/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25928). All of your documentation changes will be reflected on that endpoint."
] | 1,693 | 1,693 | 1,693 |
CONTRIBUTOR
| null |
fixed a few typos
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25928/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25928",
"html_url": "https://github.com/huggingface/transformers/pull/25928",
"diff_url": "https://github.com/huggingface/transformers/pull/25928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25928.patch",
"merged_at": 1693822594000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25927
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25927/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25927/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25927/events
|
https://github.com/huggingface/transformers/pull/25927
| 1,878,928,931 |
PR_kwDOCUB6oc5Zamhv
| 25,927 |
[time series] Add PatchTST
|
{
"login": "psinthong",
"id": 4720928,
"node_id": "MDQ6VXNlcjQ3MjA5Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4720928?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/psinthong",
"html_url": "https://github.com/psinthong",
"followers_url": "https://api.github.com/users/psinthong/followers",
"following_url": "https://api.github.com/users/psinthong/following{/other_user}",
"gists_url": "https://api.github.com/users/psinthong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/psinthong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psinthong/subscriptions",
"organizations_url": "https://api.github.com/users/psinthong/orgs",
"repos_url": "https://api.github.com/users/psinthong/repos",
"events_url": "https://api.github.com/users/psinthong/events{/privacy}",
"received_events_url": "https://api.github.com/users/psinthong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@psinthong do you have a training script I can use?",
"@kashif, yes, I added an example training script on a dummy dataset for our forecasting model [here](https://github.com/namctin/transformers/blob/patchtst/src/transformers/models/patchtst/patchtst_forecast_test.py). ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_25927). All of your documentation changes will be reflected on that endpoint.",
"@ArthurZucker green!",
"@ArthurZucker the failing CI is due to the hub and i believe not related",
"Hi, it seems the attention in the PatchTST is not the vanilla attention. It uses `residual_attention` with `res_attention:bool=True`. https://github.com/yuqinie98/PatchTST/blob/204c21efe0b39603ad6e2ca640ef5896646ab1a9/PatchTST_supervised/layers/PatchTST_backbone.py#L16-L23",
"@Hannibal046 thats true that the official implementation had those two flags for learning the scale and RealFormer, but as far as I know those flags were always kept to `False` ",
"@kashif Hi, Thanks for replying! According to this [official script](https://github.com/yuqinie98/PatchTST/blob/main/PatchTST_supervised/scripts/PatchTST/traffic.sh) to launch a supervised PatchTST model for traffic dataset, the flag `res_attention` is set to the default value which is `True`.\r\n\r\nActually, there are two codebase in the official PatchTST repo, `PatchTST-self-supervised` and `PatchTST-supervised`. For the former, they didn't use `res_attention` but in the latter, the `res_attention` is the default and recommended setting.",
"@Hannibal046 can you check if the flag makes a big difference, they also didn't mention RealFormer in the paper either...",
"They indeed didn't mention RealFormer in the paper. I left an [issue](https://github.com/yuqinie98/PatchTST/issues/81) here and I think we should let the authors decide it is a bug or feature.",
"Hi, It seems that the current HF version of PatchTST doesn't include `Reversible Instance Normalization` as shown in the original repo:\r\nhttps://github.com/yuqinie98/PatchTST/blob/204c21efe0b39603ad6e2ca640ef5896646ab1a9/PatchTST_supervised/layers/PatchTST_backbone.py#L63-L65",
"@Hannibal046 I switched out the RevIN (instance norm) with the scaling heuristics originally from the DeepAR paper which essentially serves the same purpose... If I remember, PatchTST was not learning some affine transformation to the mu and std of the context window data so it should be the same... also as in the DeepAR paper, we should append the mean and sigma to the input encodings (which would be like the Non-stationary transformer paper) Note both the RevIN and NS-transformer paper didn't mention this prior, I believe related, work/technique... ",
"Thanks so much for the response! Now I understand it!",
"I'll review this one as well 😄 \r\n",
"> Most of the comments I did for the other PR also apply here, so no single letter variables, comments on the line above to not always split the line as it is not necessary, avoid sequential and dummy classes, as @NielsRogge suggested, use the config when args can be taken from the config, resolving all previous comments (answering or code changes). I can review again afterwards\r\n\r\nHi @ArthurZucker @NielsRogge , we have addressed all of your comments. Can you please review and let us know if there is anything that we have to resolve? ",
"@amyeroberts the failing tests are from the `test_run_semantic_segmentation` timing out",
"@amyeroberts all the `foward` have a dedicated docstring now, so do we need to include the `add_start_docstrings_to_model_forward` decorator?\r\n",
"@amyeroberts I have made the doc fixes here #27476",
"Hi @amyeroberts, thank you for your details comments and requests. We have addressed all your concerns including adding examples on the use of these model in the modeling page. Can you please kindly review? CC @kashif @vijaye12 .",
"@namctin As this PR was merged it's closed and can't be reopened. Could you open a new PR for the review? "
] | 1,693 | 1,700 | 1,699 |
CONTRIBUTOR
| null |
# What does this PR do?
Adding PatchTST model https://arxiv.org/abs/2211.14730
@kashif
## To-Do's:
- [x] Add generate method
- [x] Add additional integration test cases
- [x] Make pretrained dataset publicly available
- [ ] Make pretrained weights publicly available
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25927/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25927/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25927",
"html_url": "https://github.com/huggingface/transformers/pull/25927",
"diff_url": "https://github.com/huggingface/transformers/pull/25927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25927.patch",
"merged_at": 1699898792000
}
|
https://api.github.com/repos/huggingface/transformers/issues/25926
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/25926/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/25926/comments
|
https://api.github.com/repos/huggingface/transformers/issues/25926/events
|
https://github.com/huggingface/transformers/pull/25926
| 1,878,848,480 |
PR_kwDOCUB6oc5ZaXk2
| 25,926 |
Add support for Palm, Claude-2, Llama2, CodeLlama (100+LLMs)
|
{
"login": "ishaan-jaff",
"id": 29436595,
"node_id": "MDQ6VXNlcjI5NDM2NTk1",
"avatar_url": "https://avatars.githubusercontent.com/u/29436595?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ishaan-jaff",
"html_url": "https://github.com/ishaan-jaff",
"followers_url": "https://api.github.com/users/ishaan-jaff/followers",
"following_url": "https://api.github.com/users/ishaan-jaff/following{/other_user}",
"gists_url": "https://api.github.com/users/ishaan-jaff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ishaan-jaff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ishaan-jaff/subscriptions",
"organizations_url": "https://api.github.com/users/ishaan-jaff/orgs",
"repos_url": "https://api.github.com/users/ishaan-jaff/repos",
"events_url": "https://api.github.com/users/ishaan-jaff/events{/privacy}",
"received_events_url": "https://api.github.com/users/ishaan-jaff/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false | null |
[] |
[
"@stevhliu @LysandreJik can you please take a look at this PR when possible ? Happy to add more docs/tests if this initial commit looks good😊",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,693 | 1,697 | 1,697 |
NONE
| null |
# What does this PR do?
This PR adds support for the above-mentioned LLMs using LiteLLM: https://github.com/BerriAI/litellm/
Example:
```python
# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion(model="command-nightly", messages=messages)
# anthropic
response = completion(model="claude-2", messages=messages)
```
In addition, the LiteLLM client allows you to:
* A/B test LLMs in production
* Dynamically control each LLM's prompt, temperature, top_k, etc. in our UI (no need to re-deploy code)
* Logging to view inputs/outputs for each LLM
Here's a link to a live demo of litellm client: https://admin.litellm.ai/

<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerz and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/25926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/25926/timeline
| null | false |
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/25926",
"html_url": "https://github.com/huggingface/transformers/pull/25926",
"diff_url": "https://github.com/huggingface/transformers/pull/25926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/25926.patch",
"merged_at": null
}
|