| column | type | range / classes |
| --- | --- | --- |
| url | stringlengths | 62-66 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 76-80 |
| comments_url | stringlengths | 71-75 |
| events_url | stringlengths | 69-73 |
| html_url | stringlengths | 50-56 |
| id | int64 | 377M-2.15B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-29.2k |
| title | stringlengths | 1-487 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k ⌀ |
| author_association | stringclasses | 4 values |
| active_lock_reason | stringclasses | 2 values |
| body | stringlengths | 0-234k ⌀ |
| reactions | dict | |
| timeline_url | stringlengths | 71-75 |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |

(⌀ marks nullable columns.)
https://api.github.com/repos/huggingface/transformers/issues/26330
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26330/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26330/comments
https://api.github.com/repos/huggingface/transformers/issues/26330/events
https://github.com/huggingface/transformers/pull/26330
1,907,664,516
PR_kwDOCUB6oc5a7Rwh
26,330
feat: adding gradient_accumulate for run_clm_flax
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "thank you for your information", "Thanks for opening this PR @pphuc25! To expand a bit on what @ArthurZucker mentioned, the Flax examples scripts are designed to be extremely minimalistic, i.e. contain the minimum level of complexity to perform a training run with the library models. This helps make them easy to build on and expand for custom use cases, since there is very little code and complexity to comprehend. To this effect, gradient accumulation is a bit too advanced for the Flax examples, since it won't be leveraged by all users. If you're interested, you ca use Optax MultiSteps for storing mini-batch updates until the final update step: https://optax.readthedocs.io/en/latest/gradient_accumulation.html#splitting-updates-for-one-batch-over-multiple-steps. I believe this is the recommended way for implementing gradient accumulation in Flax, and is one I've tried before with good success.", "This explanation is well-done and provides valuable information. I appreciate it, @sanchit-gandhi. It was very informative for me. Thank you" ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? I've added gradient accumulation for Flax file training, recognizing that resource constraints can be a limitation. Gradient accumulation assists people in training models with large batch sizes, mitigating these limitations. I would like to cc dual @sanchit-gandhi and @ArthurZucker for review my PR
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26330/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26330", "html_url": "https://github.com/huggingface/transformers/pull/26330", "diff_url": "https://github.com/huggingface/transformers/pull/26330.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26330.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26329
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26329/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26329/comments
https://api.github.com/repos/huggingface/transformers/issues/26329/events
https://github.com/huggingface/transformers/pull/26329
1,907,616,752
PR_kwDOCUB6oc5a7HTy
26,329
chorse: change numpy to jax numpy
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Makes sense to me to use the built-in NumPy module, but I'm not too familiar with how robust JAX is compared to NumPy. What do you think @LysandreJik ?", "From a quick look this looks OK, ok for you @sanchit-gandhi ?", "This is not such a great idea! Let me explain why: any operations run as `numpy` code will be executed on the CPU. Any operation run as `jax.numpy` code will be executed on the **accelerator device** (GPU or TPU). Note that this behaviour in JAX is different to PyTorch: by default, JAX code will be run on the accelerator device (if available), whereas in PyTorch it will be the CPU (unless specified otherwise).\r\n\r\nThis is significant because JAX uses [*asynchronous dispatch*](https://jax.readthedocs.io/en/latest/async_dispatch.html) to coordinate code between the CPU and accelerator device. If the accelerator device is running `jax.numpy` code, the CPU is free to carry on and run subsequent `numpy` code. It is not blocked by the accelerator running `jax` code, and can in fact already execute any 'future' code. This is useful in ML applications, since the accelerator can stay busy running the modelling code (forward and backward pass), whilst the CPU can go ahead and prepare the next batch of data ahead of time.\r\n\r\nIf we change our data preparation to use `jax.numpy` code, then we execute it on the accelerator device. This means we have to wait until the accelerator is free before running the data preparation, so we loose our asynchronous dispatch advantage.\r\n\r\nAn easy rule to keep in mind when writing JAX code is that **only the modelling code should be in JAX**. Everything else should be in NumPy. This way, we ensure only the modelling code is executed on the accelerator device, and everything else on CPU, thus maximising the async dispatch potential.\r\n\r\nLet me know if you have any questions @pphuc25 - more than happy to answer!", "Hmm, that's sound great insight, this is the first time I hear about it, very helpful information for me, thank you so much" ]
1,695
1,695
1,695
CONTRIBUTOR
null
Hi, I just do a simple remove numpy and change to jax numpy, seen at now jax is more robust and can replace to numpy. I would like to cc @stevhliu to review my PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26329/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26329", "html_url": "https://github.com/huggingface/transformers/pull/26329", "diff_url": "https://github.com/huggingface/transformers/pull/26329.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26329.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26328
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26328/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26328/comments
https://api.github.com/repos/huggingface/transformers/issues/26328/events
https://github.com/huggingface/transformers/pull/26328
1,907,607,465
PR_kwDOCUB6oc5a7FRl
26,328
[Falcon] Set `use_cache=False` before creating `presents` which relies on `use_cache`
{ "login": "yundai424", "id": 43726198, "node_id": "MDQ6VXNlcjQzNzI2MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yundai424", "html_url": "https://github.com/yundai424", "followers_url": "https://api.github.com/users/yundai424/followers", "following_url": "https://api.github.com/users/yundai424/following{/other_user}", "gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}", "starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yundai424/subscriptions", "organizations_url": "https://api.github.com/users/yundai424/orgs", "repos_url": "https://api.github.com/users/yundai424/repos", "events_url": "https://api.github.com/users/yundai424/events{/privacy}", "received_events_url": "https://api.github.com/users/yundai424/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26328). All of your documentation changes will be reflected on that endpoint.", "Hi @yundai424 thanks a lot for iterating, in order to move forward with the PR could you merge your branch with main branch?", "Hi @younesbelkada are you referring to merging to HF main? ๐Ÿค” ", "Hi @yundai424 \r\nI meant to merge your local branch with HF's main branch, assuming the hf remote is tagged as `upstream` (you can also add it with `git remote add upstream https://github.com/huggingface/transformers.git`)\r\n\r\n```bash\r\ngit fetch upstream\r\ngit merge upstream/main\r\ngit push\r\n```", "oh cool i see what you mean.. merged, thanks! @younesbelkada " ]
1,695
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #26327 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - text models: @ArthurZucker and @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26328/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26328", "html_url": "https://github.com/huggingface/transformers/pull/26328", "diff_url": "https://github.com/huggingface/transformers/pull/26328.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26328.patch", "merged_at": 1696493907000 }
https://api.github.com/repos/huggingface/transformers/issues/26327
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26327/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26327/comments
https://api.github.com/repos/huggingface/transformers/issues/26327/events
https://github.com/huggingface/transformers/issues/26327
1,907,594,640
I_kwDOCUB6oc5xs5WQ
26,327
[Falcon] forward pass will fail if `use_cache` is automatically flipped to False
{ "login": "yundai424", "id": 43726198, "node_id": "MDQ6VXNlcjQzNzI2MTk4", "avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yundai424", "html_url": "https://github.com/yundai424", "followers_url": "https://api.github.com/users/yundai424/followers", "following_url": "https://api.github.com/users/yundai424/following{/other_user}", "gists_url": "https://api.github.com/users/yundai424/gists{/gist_id}", "starred_url": "https://api.github.com/users/yundai424/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yundai424/subscriptions", "organizations_url": "https://api.github.com/users/yundai424/orgs", "repos_url": "https://api.github.com/users/yundai424/repos", "events_url": "https://api.github.com/users/yundai424/events{/privacy}", "received_events_url": "https://api.github.com/users/yundai424/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks! I answered on the pR ๐Ÿ˜‰ " ]
1,695
1,696
1,696
CONTRIBUTOR
null
### System Info - `transformers` version: 4.33.2 - Platform: Linux-5.15.111.1-rolling-lts-linkedin-x86_64-with-glibc2.17 - Python version: 3.10.2 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0a0+gitf998869 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: 8 A100 GPUs - Using distributed or parallel set-up in script?: Nah just `torchrun` ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Full-parameter fine-tune a Falcon 180B using any kind of task, using activation checkpointing (`--gradient_checkpointing`). When doing `model = transformers.AutoModelForCausalLM.from_pretrained()`, don't set `use_cache=False` but leave it default. This will result in the `use_cache` flag being [flapped to False](https://github.com/huggingface/transformers/blob/39df4eca739b0870f73dbcfdfa09179e3135c75d/src/transformers/models/falcon/modeling_falcon.py#L951). But the `presents` used for cache is not reset to `None` and later on [this code branch](https://github.com/huggingface/transformers/blob/39df4eca739b0870f73dbcfdfa09179e3135c75d/src/transformers/models/falcon/modeling_falcon.py#L993) which should be exclusive for `use_cache=True` will be entered and then hit following error: ``` File "/home/jobuser/.local/lib/python3.10/site-packages/accelerate/utils/operations.py", line 636, in forward return model_forward(*args, **kwargs) File "/home/jobuser/.local/lib/python3.10/site-packages/accelerate/utils/operations.py", line 624, in __call__ return convert_to_fp32(self.model_forward(*args, **kwargs)) File "/home/jobuser/.local/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast return func(*args, **kwargs) File "/home/jobuser/.local/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 1031, in forward transformer_outputs = self.transformer( File "/home/jobuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/jobuser/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "/home/jobuser/.local/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 952, in forward presents = self._convert_cache_to_standard_format(presents, batch_size) File "/home/jobuser/.local/lib/python3.10/site-packages/transformers/models/falcon/modeling_falcon.py", line 705, in _convert_cache_to_standard_format batch_size_times_num_heads, kv_length, head_dim = past_key_value[0][0].shape IndexError: tuple index out of range ``` ### Expected behavior The `presents` tuple needs to be set to None along with `use_cache=False`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26327/timeline
completed
null
null
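A simplified, hypothetical sketch of the failure mode reported in the issue above and the fix it asks for (resetting `presents` together with `use_cache`); this is deliberately toy code, not the actual `modeling_falcon.py` implementation.

```python
def toy_forward(use_cache: bool, gradient_checkpointing: bool, training: bool):
    # `presents` is initialised from the *original* use_cache value...
    presents = () if use_cache else None

    if gradient_checkpointing and training and use_cache:
        # ...so when use_cache is flipped off afterwards, presents must be
        # reset as well, otherwise the later cache-conversion branch runs on
        # an empty tuple and fails with an IndexError.
        use_cache = False
        presents = None

    for layer in range(3):  # stand-in for the decoder layers
        if use_cache:
            presents = presents + ((f"key_{layer}", f"value_{layer}"),)

    if presents is not None:
        return presents[0][0]  # mimics the cache-format conversion indexing
    return None
```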
https://api.github.com/repos/huggingface/transformers/issues/26326
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26326/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26326/comments
https://api.github.com/repos/huggingface/transformers/issues/26326/events
https://github.com/huggingface/transformers/pull/26326
1,907,507,862
PR_kwDOCUB6oc5a6vUf
26,326
feat: adding num_proc to load_dataset
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26326). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? As the demand for LLM models continues to grow, the size of the data being downloaded can slow down the process. To address this issue, I've submitted a PR to incorporate the "num_proc" feature into the "load_dataset" function, making data loading faster and more efficient. I would like cc @sanchit-gandhi to review my PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26326/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26326", "html_url": "https://github.com/huggingface/transformers/pull/26326", "diff_url": "https://github.com/huggingface/transformers/pull/26326.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26326.patch", "merged_at": 1695403367000 }
https://api.github.com/repos/huggingface/transformers/issues/26325
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26325/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26325/comments
https://api.github.com/repos/huggingface/transformers/issues/26325/events
https://github.com/huggingface/transformers/issues/26325
1,907,456,086
I_kwDOCUB6oc5xsXhW
26,325
The efficiency of transformers running in thread
{ "login": "SeaSpring17", "id": 66456697, "node_id": "MDQ6VXNlcjY2NDU2Njk3", "avatar_url": "https://avatars.githubusercontent.com/u/66456697?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SeaSpring17", "html_url": "https://github.com/SeaSpring17", "followers_url": "https://api.github.com/users/SeaSpring17/followers", "following_url": "https://api.github.com/users/SeaSpring17/following{/other_user}", "gists_url": "https://api.github.com/users/SeaSpring17/gists{/gist_id}", "starred_url": "https://api.github.com/users/SeaSpring17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SeaSpring17/subscriptions", "organizations_url": "https://api.github.com/users/SeaSpring17/orgs", "repos_url": "https://api.github.com/users/SeaSpring17/repos", "events_url": "https://api.github.com/users/SeaSpring17/events{/privacy}", "received_events_url": "https://api.github.com/users/SeaSpring17/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! The `transformer` (and the `torch/tf/jax`) library was made to leverage CPU operation in a builtin-manner. It does not support threading. If you want acceleration, you should look into the [optimum library](https://github.com/huggingface/optimum), or torch compilation of model. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
### System Info transformers version: 4.29.1 platform: arm64 python version: 3.7.5 When using the model to get the generation in the Thread, the speed is only 1/4 of the main thread, while the CPU usage is the same. ```python model = AutoModelWithLMHead.from_pretrained(os.path.join(model_path, "opus-mt-en-zh")) tokenizer = AutoTokenizer.from_pretrained(os.path.join(model_path, "opus-mt-en-zh")) batch = self.toenizer.prepare_seq2seq_batch(src_texts=[text]) batch["input_ids"] = torch.Tensor(np.array(batch["input_ids"])[:, :512]).long() batch["attention_mask"] = torch.Tensor(np.array(batch["attention_mask"])[:, :512]).long() tt = time.time() ans = self.base.generate(**batch) print(f'generate:{time.time() - tt}') ``` In main: ![image](https://github.com/huggingface/transformers/assets/66456697/aed9e837-3efc-4027-9d0d-ab00510f74be) In thread: ![image](https://github.com/huggingface/transformers/assets/66456697/d50bacd3-7f00-4da1-a7be-ca31b8775ea4) @ArthurZucker and @younesbelkada ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python model = AutoModelWithLMHead.from_pretrained(os.path.join(model_path, "opus-mt-en-zh")) tokenizer = AutoTokenizer.from_pretrained(os.path.join(model_path, "opus-mt-en-zh")) text = "It's a challenge. We have to face it. God is challenging you. He's calling you a chump. This is the sentence to be translated" class MyModel_token(nn.Module): # ๅˆๅง‹ๅŒ–ๅฎšไน‰ def __init__(self, m, t): super(MyModel_token, self).__init__() self.base = m self.toenizer = t print("init success") # ๆญฃๅ‘ไผ ๆ’ญ def forward(self, text): print(f'input text: {text}') tt = time.time() batch = self.toenizer.prepare_seq2seq_batch(src_texts=[text]) print(f'toenizer.prepare_seq2seq_batch:{time.time()-tt}') tt = time.time() batch["input_ids"] = torch.Tensor(np.array(batch["input_ids"])[:, :512]).long() batch["attention_mask"] = torch.Tensor(np.array(batch["attention_mask"])[:, :512]).long() print(f'Tensor:{time.time()-tt}') tt = time.time() ans = self.base.generate(**batch) print(f'generate:{time.time() - tt}') tt = time.time() ans1 = self.toenizer.batch_decode(ans, skip_special_tokens=True) print(f'batch_decode:{time.time() - tt}') # print(f'ans: {ans}') # print(f'ans1: {ans1}') return ans1 myModel_token = MyModel_token(model, tokenizer) def get_translation(raw_s): start = time.time() ans = myModel_token(raw_s) print(f'ans:{ans}, all time:{time.time()-start}') return ans ## start = time.time() print("\nwith tokenizer") get_translation(text) print(f"all time:{time.time()-start:.4f}") print('\n'*3) p = Thread(target=get_translation, args=(text,)) p.start() ``` ### Expected behavior Can a thread run at the same speed as a main function with the same CPU usage
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26325/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26324
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26324/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26324/comments
https://api.github.com/repos/huggingface/transformers/issues/26324/events
https://github.com/huggingface/transformers/pull/26324
1,907,267,625
PR_kwDOCUB6oc5a57M4
26,324
Fix doctest CI
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
COLLABORATOR
null
# What does this PR do? The daily doctest CI is killed due to memory issue (from `PersimmonForCausalLM.forward`'s docstring). It's not only about GPU, even running with the 60G CPU memory, the job is still killed. If I run that test only - it could pass (with running on CPU), but the process is killed when the whole suite is run. (There might be some memory (leak) issue to check as a whole). This PR put `src/transformers/models/persimmon/modeling_persimmon.py` to `not_doctested.txt`. We probably better to further separate what are not doctested yet and what are ignored intentionally, so we won't forget to try to put them back to doctests.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26324/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26324", "html_url": "https://github.com/huggingface/transformers/pull/26324", "diff_url": "https://github.com/huggingface/transformers/pull/26324.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26324.patch", "merged_at": 1695365911000 }
https://api.github.com/repos/huggingface/transformers/issues/26323
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26323/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26323/comments
https://api.github.com/repos/huggingface/transformers/issues/26323/events
https://github.com/huggingface/transformers/issues/26323
1,906,931,966
I_kwDOCUB6oc5xqXj-
26,323
T5ForConditionalGeneration model runs on cuda:0 but not on cuda:1
{ "login": "Kushdesh", "id": 10446551, "node_id": "MDQ6VXNlcjEwNDQ2NTUx", "avatar_url": "https://avatars.githubusercontent.com/u/10446551?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kushdesh", "html_url": "https://github.com/Kushdesh", "followers_url": "https://api.github.com/users/Kushdesh/followers", "following_url": "https://api.github.com/users/Kushdesh/following{/other_user}", "gists_url": "https://api.github.com/users/Kushdesh/gists{/gist_id}", "starred_url": "https://api.github.com/users/Kushdesh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Kushdesh/subscriptions", "organizations_url": "https://api.github.com/users/Kushdesh/orgs", "repos_url": "https://api.github.com/users/Kushdesh/repos", "events_url": "https://api.github.com/users/Kushdesh/events{/privacy}", "received_events_url": "https://api.github.com/users/Kushdesh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, you should try using accelerate's device map, or change the `\"CUDA_VISIBLE_DEVICE\"` os env variable! You probably have `apex` and are using the fused layer norm. Doesn't seem to work out of the box. \r\ncc @SunMarc ", "@ArthurZucker Thanks for your suggestions\r\n@SunMarc \r\n\r\nI tried CUDA_VISIBLE_DEVICES by running 'CUDA_VISIBLE_DEVICES=1 python test.py '\r\nand also tried by adding following lines at the beginning of code\r\n```python\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] =\"1\"\r\n```\r\nBut it gives same error.\r\n\r\nI couldn't understand how to use device map to use 'cuda:1' for entire model. Do I have to map each layer of the model to cuda:1.\r\n\r\nI also tried to run the code by adding line `torch.cuda.set_device('cuda:1')` in the beginning. In this case the code run correctly. But it doesn't look correct way to handle this. \r\n\r\n\r\n", "Hi @Kushdesh , when you use \r\n```py\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] =\"1\"\r\n```\r\nyou also need to change the device to `device = 'cuda:0 if cuda.is_available() else 'cpu'` because the gpu 0 is not visible anymore.\r\n\r\nTo use device_map, you just need to add the device_map arg like that:\r\n`model = T5ForConditionalGeneration.from_pretrained(\"t5-small\", device_map={\"\":1})`\r\n\r\nLet me know if it works now. ", "Hi @SunMarc \r\nThanks for your response.\r\nchanging device to `device = 'cuda:0 if cuda.is_available() else 'cpu'` does works and it uses 2nd GPU.\r\n\r\nIn following code I setting evironment variable CUDA_VISIBLE_DEVICES to \"0,1\" and then use cuda:1 and it gives same error. Look like the error is not because of CUDA_VISIBLE_DEVICES setting but use of cuda:1\r\n\r\n```Python\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] =\"0,1\"\r\n\r\nimport torch\r\nfrom torch import cuda, tensor\r\nfrom transformers import T5ForConditionalGeneration\r\ndevice = 'cuda:1' if cuda.is_available() else 'cpu'\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\"t5-small\")\r\nmodel = model.to(device)\r\n\r\nids = tensor([[ 363, 19, 8, 792, 381, 13, 7634, 7, 58, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]).to(device, dtype=torch.long)\r\nmask = tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]).to(device, dtype=torch.long)\r\n\r\nres = model.generate(input_ids=ids, max_length=50, attention_mask=mask)\r\n```\r\n\r\n`model = T5ForConditionalGeneration.from_pretrained(\"t5-small\", device_map={\"\":1})` does the mapping, but it gives same error. I think @ArthurZucker mentioned **accelerate's device map** . How can I use accelerate in this case to use specific devices.\r\n", "By passing `device_map={\"\":1}`, you are already using accelerate under the hood. On my setup, it works as expected. Can you check if you have the same behavior with other models ? ", "@SunMarc \r\nI tried a different model and it works on cuda:1. Following is the code\r\n```Python\r\nimport torch\r\nfrom torch import cuda\r\ndevice = 'cuda:1' if cuda.is_available() else 'cpu'\r\n\r\nfrom transformers import EncoderDecoderModel, AutoTokenizer\r\nsentence_fuser = EncoderDecoderModel.from_pretrained(\"google/roberta2roberta_L-24_discofuse\", device_map={\"\":1})\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/roberta2roberta_L-24_discofuse\")\r\n\r\ninput_ids = tokenizer(\r\n\r\n \"This is the first sentence. 
This is the second sentence.\", add_special_tokens=False, return_tensors=\"pt\"\r\n\r\n).input_ids.to(device, dtype=torch.long)\r\n\r\noutputs = sentence_fuser.generate(input_ids)\r\n\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\n\r\nI tried to find at which line of the code accessing data on GPU produce error while running T5ForConditionalGeneration. Below is output of pdb `where` command and the last line gives position of https://github.com/NVIDIA/apex/blob/741bdf50825a97664db08574981962d66436d16a/apex/normalization/fused_layer_norm.py#L69\r\nBefore running this line I can print variables input_, weight_ mentioned on the last line and is on GPU. But executing this line if I tried to print input_ then it gives the error I mentioned in the first post.\r\n\r\n```\r\nres = model.generate(input_ids=ids, max_length=50, attention_mask=mask)\r\n /usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py(115)decorate_context()\r\n-> return func(*args, **kwargs)\r\n /home/qai/.local/lib/python3.10/site-packages/transformers/generation/utils.py(1490)generate()\r\n-> model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(\r\n /home/qai/.local/lib/python3.10/site-packages/transformers/generation/utils.py(660)_prepare_encoder_decoder_kwargs_for_generation()\r\n-> model_kwargs[\"encoder_outputs\"]: ModelOutput = encoder(**encoder_kwargs)\r\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1505)_wrapped_call_impl()\r\n-> return self._call_impl(*args, **kwargs)\r\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1514)_call_impl()\r\n-> return forward_call(*args, **kwargs)\r\n /home/qai/.local/lib/python3.10/site-packages/accelerate/hooks.py(165)new_forward()\r\n-> output = old_forward(*args, **kwargs)\r\n /home/qai/.local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py(1123)forward()\r\n-> layer_outputs = layer_module(\r\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1505)_wrapped_call_impl()\r\n-> return self._call_impl(*args, **kwargs)\r\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1514)_call_impl()\r\n-> return forward_call(*args, **kwargs)\r\n /home/qai/.local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py(695)forward()\r\n-> self_attention_outputs = self.layer[0](\r\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1505)_wrapped_call_impl()\r\n-> return self._call_impl(*args, **kwargs)\r\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1514)_call_impl()\r\n-> return forward_call(*args, **kwargs)\r\n /home/qai/.local/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py(601)forward()\r\n-> normed_hidden_states = self.layer_norm(hidden_states)\r\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1505)_wrapped_call_impl()\r\n-> return self._call_impl(*args, **kwargs)\r\n /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1514)_call_impl()\r\n-> return forward_call(*args, **kwargs)\r\n /usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py(386)forward()\r\n-> return fused_rms_norm_affine(input, self.weight, self.normalized_shape, self.eps)\r\n /usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py(189)fused_rms_norm_affine()\r\n-> return FusedRMSNormAffineFunction.apply(*args)\r\n /usr/local/lib/python3.10/dist-packages/torch/autograd/function.py(506)apply()\r\n-> return super().apply(*args, **kwargs) # type: ignore[misc]\r\n> 
/usr/local/lib/python3.10/dist-packages/apex/normalization/fused_layer_norm.py(70)forward()\r\n-> output, invvar = fused_layer_norm_cuda.rms_forward_affine(\r\n(Pdb) \r\n```\r\n", "@Kushdesh thanks for this in-depth investigation ! Looks like it is more of an issue on apex side. Unfortunately, I don't think that we will be able to solve it on our side. In the modeling script of t5t, we are just calling `self.layer_norm(hidden_states)` and since you have apex, it uses `fused_rms_norm_affine`. I suggest you to raise that issue on apex library. LMK if it makes sense. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,700
1,700
NONE
null
### System Info transformers : 4.33.2 torch: 2.1.0a0+29c30b1 Python: 3.10.12 GPUs: Two identical NVIDIA GeForce RTX 4090 GPUs OS: Ubuntu 22.04.3 LTS ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ```python import torch from torch import cuda, tensor from transformers import T5ForConditionalGeneration device = 'cuda:1' if cuda.is_available() else 'cpu' model = T5ForConditionalGeneration.from_pretrained("t5-small") model = model.to(device) ids = tensor([[ 363, 19, 8, 792, 381, 13, 7634, 7, 58, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]).to(device, dtype=torch.long) mask = tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]).to(device, dtype=torch.long) res = model.generate(input_ids=ids, max_length=50, attention_mask=mask) ``` gives following error: `RuntimeError: CUDA error: an illegal memory access was encountered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.` The code run perfectly if in above code we change 'cuda:1' to 'cuda:0'. I tried to find where the problem occurring and found that the error happens when program reaches and run following line of file https://github.com/NVIDIA/apex/blob/741bdf50825a97664db08574981962d66436d16a/apex/normalization/fused_layer_norm.py#L69 ```python output, invvar = fused_layer_norm_cuda.rms_forward_affine( input_, ctx.normalized_shape, weight_, ctx.eps) ``` ### Expected behavior The code should run with 'cuda:1'
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26323/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26323/timeline
completed
null
null
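The thread above ultimately traces the illegal-memory-access to apex's fused layer norm rather than to transformers itself, but the device-pinning pattern the maintainers suggest is easy to isolate. The sketch below only illustrates that pattern (placing every module of the model on GPU 1 via `device_map` and moving the inputs to the same device); it does not reproduce or fix the apex issue, and it assumes accelerate is installed and at least two GPUs are visible.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
# device_map={"": 1} asks accelerate to place every module of the model on GPU 1,
# instead of calling model.to("cuda:1") by hand.
model = T5ForConditionalGeneration.from_pretrained("t5-small", device_map={"": 1})

# Inputs must live on the same device the model was placed on.
inputs = tokenizer("translate English to German: Hello, how are you?", return_tensors="pt").to("cuda:1")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```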
https://api.github.com/repos/huggingface/transformers/issues/26322
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26322/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26322/comments
https://api.github.com/repos/huggingface/transformers/issues/26322/events
https://github.com/huggingface/transformers/pull/26322
1,906,869,433
PR_kwDOCUB6oc5a4kc8
26,322
Fix model integration ci
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false } ]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fixes the CI broken by #23909
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26322/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26322", "html_url": "https://github.com/huggingface/transformers/pull/26322", "diff_url": "https://github.com/huggingface/transformers/pull/26322.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26322.patch", "merged_at": 1696247747000 }
https://api.github.com/repos/huggingface/transformers/issues/26321
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26321/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26321/comments
https://api.github.com/repos/huggingface/transformers/issues/26321/events
https://github.com/huggingface/transformers/issues/26321
1,906,853,026
I_kwDOCUB6oc5xqESi
26,321
TensorBoard integration on huggingface
{ "login": "omermazig", "id": 95534441, "node_id": "U_kgDOBbG9aQ", "avatar_url": "https://avatars.githubusercontent.com/u/95534441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omermazig", "html_url": "https://github.com/omermazig", "followers_url": "https://api.github.com/users/omermazig/followers", "following_url": "https://api.github.com/users/omermazig/following{/other_user}", "gists_url": "https://api.github.com/users/omermazig/gists{/gist_id}", "starred_url": "https://api.github.com/users/omermazig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omermazig/subscriptions", "organizations_url": "https://api.github.com/users/omermazig/orgs", "repos_url": "https://api.github.com/users/omermazig/repos", "events_url": "https://api.github.com/users/omermazig/events{/privacy}", "received_events_url": "https://api.github.com/users/omermazig/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "cc @Rocketknight1 our TF expert!", "Hmmm indeed it seems like there is an issue with `runs` not being saved. I don't think that's related to TensorFlow but rather to the `Trainer`. \r\n\r\n@pacman100 @muellerzr could you please give this issue a look? Thank you!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This issue has gone stale even though it still needs to be addressed, so I'm commenting to bump it as instructed ", "Thanks, this will be solved with #27022 as long as you do `trainer.push_to_hub()`" ]
1,695
1,698
1,698
NONE
null
### System Info I'm running a somewhat changed version of [this](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) tutorial, and somewhere last month. As the tutorial explains, it used to be that when I ran `push_to_hub`, a TensorBoard was created and appeared under the model page on huggingface: ![image](https://github.com/huggingface/transformers/assets/95534441/af510d37-4641-4ad0-8669-7da18b64e90b) But lately I've noticed that it has stopped generating that tab. I have two runs from the 20 and 24 of Aug, where the tab appears in the former and doesn't appear in the latter: https://huggingface.co/omermazig/videomae-base-finetuned-kinetics-finetuned-nba-data-8-batch-5-epochs-dataset_10_classes https://huggingface.co/omermazig/videomae-base-kinetics-finetuned-nba-binary-data-2-batch-50-epochs-399-train-vids-multilabel You can see that the transformer's verion updated from `4.31.0` to `4.32.0`. All runs before these two had a `Training Metrics` tab, and all runs after them didn't. Is it possible that the integration was broken on `4.32.0`? How can I further investigate that? Thanks! ### Who can help? @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run this notebook: https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb ### Expected behavior Creation of a `Training Metrics` tab on huggingface
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26321/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26320
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26320/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26320/comments
https://api.github.com/repos/huggingface/transformers/issues/26320/events
https://github.com/huggingface/transformers/issues/26320
1,906,763,688
I_kwDOCUB6oc5xpueo
26,320
Text generation does not complete after tuning with QLora
{ "login": "50516017", "id": 23068536, "node_id": "MDQ6VXNlcjIzMDY4NTM2", "avatar_url": "https://avatars.githubusercontent.com/u/23068536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/50516017", "html_url": "https://github.com/50516017", "followers_url": "https://api.github.com/users/50516017/followers", "following_url": "https://api.github.com/users/50516017/following{/other_user}", "gists_url": "https://api.github.com/users/50516017/gists{/gist_id}", "starred_url": "https://api.github.com/users/50516017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/50516017/subscriptions", "organizations_url": "https://api.github.com/users/50516017/orgs", "repos_url": "https://api.github.com/users/50516017/repos", "events_url": "https://api.github.com/users/50516017/events{/privacy}", "received_events_url": "https://api.github.com/users/50516017/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "postscript\r\nWhen I ran it with the number of data set to 1/100 and the number of epochs halved, it worked fine, although it was slow. Is there a case where generation becomes slow due to overtraining?", "> Is there a case where generation becomes slow due to overtraining?\r\n\r\n@50516017 The text generation speed (measured in tokens per second, for outputs of similar lengths) should not change with fine-tuning :)" ]
1,695
1,697
1,695
NONE
null
### System Info I use Google colab V100 instance ` `transformers` version: 4.33.2 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.2 - Safetensors version: 0.3.3 - Accelerate version: 0.23.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.13.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.2 (gpu) - Jax version: 0.4.14 - JaxLib version: 0.4.14 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ` ### Who can help? @younesbelkada,@ArthurZucker,@gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I fine-tuned "elyza/ELYZA-japanese-Llama-2-7b-fast-instruct" using QLora. The train loss and eval loss were also output within a range that seemed to be OK. After fine-tuning, I tried to generate sentences based on the user's input using the code below, but model.generate did not complete even after more than 10 minutes. When I did model.generate without QLora, a response was returned immediately. Was my training method bad? Is there any way to find the cause? toraining code ``` model_name= "elyza/ELYZA-japanese-Llama-2-7b-fast-instruct" tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True) config = AutoConfig.from_pretrained(model_name,use_fast=True) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained( model_name, config=config, device_map="auto", #load_in_8bit=True, quantization_config=bnb_config ) tokenized_train = tokenize_dataset(train_data, tokenizer) tokenized_val = tokenize_dataset(val_data, tokenizer) collator = InstructCollator(tokenizer) loader = DataLoader(tokenized_train, collate_fn=collator, batch_size=8, shuffle=True) eval_steps = 10 save_steps = 30 logging_steps = 3 MICRO_BATCH_SIZE = 2 BATCH_SIZE = 32 def find_all_linear_names(model): cls = bnb.nn.Linear4bit # if args.bits == 4 else (bnb.nn.Linear8bitLt if args.bits == 8 else torch.nn.Linear) lora_module_names = set() for name, module in model.named_modules(): if isinstance(module, cls): names = name.split('.') lora_module_names.add(names[0] if len(names) == 1 else names[-1]) if 'lm_head' in lora_module_names: # needed for 16-bit lora_module_names.remove('lm_head') return list(lora_module_names) linear_name = find_all_linear_names(model) lora_config = LoraConfig( r= 8, lora_alpha=16, #target_modules=["query_key_value"], lora_dropout=0.05, bias="none", task_type=TaskType.CAUSAL_LM, target_modules = linear_name ) model = prepare_model_for_int8_training(model) model = get_peft_model(model, lora_config) trainer = transformers.Trainer( model = model, data_collator=collator, train_dataset=tokenized_train, eval_dataset=tokenized_val, args=transformers.TrainingArguments( num_train_epochs=7, learning_rate=3e-5, evaluation_strategy="steps", save_strategy="steps", eval_steps=eval_steps, #bf16=True, save_steps=save_steps, per_device_train_batch_size=MICRO_BATCH_SIZE, per_device_eval_batch_size=MICRO_BATCH_SIZE, gradient_accumulation_steps=BATCH_SIZE // MICRO_BATCH_SIZE, dataloader_num_workers=12, logging_steps=logging_steps, output_dir=f"{output_dir}/{learn_start_time}", report_to="wandb", save_total_limit=1, load_best_model_at_end=True, greater_is_better=False, metric_for_best_model="eval_loss", auto_find_batch_size=True ) ) model.config.use_cache = False trainer.train() model.config.use_cache = True ``` inference code ``` def generate(text,tokenizer,model,history=None): prompt_no_output = "instruction" token_ids = tokenizer.encode(prompt_no_output, add_special_tokens=False, return_tensors="pt") with torch.no_grad(): output_ids = model.generate( input_ids=token_ids.to(model.device), do_sample=True, max_new_tokens=2000, # temperature=temperature, top_p=0.95, repetition_penalty=1.0, pad_token_id=tokenizer.pad_token_id, bos_token_id=tokenizer.bos_token_id, eos_token_id=tokenizer.eos_token_id ) output = tokenizer.decode(output_ids.tolist()[0]) return "========\n" + tokenizer.decode(output_ids.tolist()[0]) def main(model_name,output_dir): LORA_WEIGHTS=output_dir MODEL_NAME = model_name device_map = "auto" bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) base_model = AutoModelForCausalLM.from_pretrained( MODEL_NAME, device_map=device_map, #load_in_8bit=True, quantization_config=bnb_config ) tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=True) model = PeftModel.from_pretrained( base_model, LORA_WEIGHTS, device_map="auto" ) path = "./sample_text_small.txt" base_model.eval() (generate("input text",tokenizer,)) if __name__ == "__main__": main("elyza/ELYZA-japanese-Llama-2-7b-fast-instruct","output checkpoint file") ``` If you need anything, such as tokenized data, fine-tuned data format, wandb loss graph, etc., we will provide it immediately. Please let me know how to solve it! ### Expected behavior model.generate runs for more than 10 minutes. It does not output any errors. GPU doesn't seem to be a memory issue with usage around 6GB
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26320/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26319
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26319/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26319/comments
https://api.github.com/repos/huggingface/transformers/issues/26319/events
https://github.com/huggingface/transformers/issues/26319
1,906,759,185
I_kwDOCUB6oc5xptYR
26,319
Registering Models in MLflow Callback
{ "login": "Tolga-Karahan", "id": 18248258, "node_id": "MDQ6VXNlcjE4MjQ4MjU4", "avatar_url": "https://avatars.githubusercontent.com/u/18248258?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tolga-Karahan", "html_url": "https://github.com/Tolga-Karahan", "followers_url": "https://api.github.com/users/Tolga-Karahan/followers", "following_url": "https://api.github.com/users/Tolga-Karahan/following{/other_user}", "gists_url": "https://api.github.com/users/Tolga-Karahan/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tolga-Karahan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tolga-Karahan/subscriptions", "organizations_url": "https://api.github.com/users/Tolga-Karahan/orgs", "repos_url": "https://api.github.com/users/Tolga-Karahan/repos", "events_url": "https://api.github.com/users/Tolga-Karahan/events{/privacy}", "received_events_url": "https://api.github.com/users/Tolga-Karahan/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "fyi @pacman100 and @muellerzr ", "A PR would be most welcome! Feel free to open one and try :)" ]
1,695
1,695
null
NONE
null
### Feature request We can add registering models functionality to MLflow callback so that we can use MLflow model registry with ๐Ÿค— models. To do that we can introduce an optional _registered_model_name field to the callback, and can register the model in case these field is not None. ``` class MLflowCallback(TrainerCallback): #... def setup(self, args, state, model): #... self._log_artifacts = os.getenv("HF_MLFLOW_LOG_ARTIFACTS", "FALSE").upper() in ENV_VARS_TRUE_VALUES self._registered_model_name = os.getenv("HF_REGISTERED_MODEL_NAME") # Suggested Change #... def on_save(self, args, state, control, **kwargs): if self._initialized and state.is_world_process_zero and self._log_artifacts: ckpt_dir = f"checkpoint-{state.global_step}" artifact_path = os.path.join(args.output_dir, ckpt_dir) logger.info(f"Logging checkpoint artifacts in {ckpt_dir}. This may take time.") self._ml_flow.pyfunc.log_model( ckpt_dir, artifacts={"model_path": artifact_path}, python_model=self._ml_flow.pyfunc.PythonModel(), registered_model_name=self._registered_model_name or None # Suggested Change ) ``` ### Motivation Model registry is one of the most useful features of MLflow, but current callback doesn't support it. It forces users to make a custom implementation to use this functionality. Instead, we can extend ๐Ÿค— MLflow callback to provide this feature. ### Your contribution I can implement this extension and create a PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26319/timeline
null
null
null
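The change proposed in issue #26319 above can already be approximated from user code by subclassing the existing `MLflowCallback`; a minimal sketch follows. The `HF_REGISTERED_MODEL_NAME` variable is only the naming suggested in the issue body (not an existing `transformers` feature), and the subclass name is invented for illustration, while the `log_model` call mirrors the one shown in the issue.

```python
import os

from transformers.integrations import MLflowCallback


class RegisteringMLflowCallback(MLflowCallback):
    """MLflowCallback variant that also registers logged checkpoints."""

    def setup(self, args, state, model):
        super().setup(args, state, model)
        # Hypothetical environment variable, taken from the issue's proposal.
        self._registered_model_name = os.getenv("HF_REGISTERED_MODEL_NAME")

    def on_save(self, args, state, control, **kwargs):
        if self._initialized and state.is_world_process_zero and self._log_artifacts:
            ckpt_dir = f"checkpoint-{state.global_step}"
            artifact_path = os.path.join(args.output_dir, ckpt_dir)
            self._ml_flow.pyfunc.log_model(
                ckpt_dir,
                artifacts={"model_path": artifact_path},
                python_model=self._ml_flow.pyfunc.PythonModel(),
                # Passing a name here is what makes MLflow register the model.
                registered_model_name=self._registered_model_name,
            )
```

A `Trainer` would then be built with `callbacks=[RegisteringMLflowCallback]` and the stock MLflow integration left out of `report_to`, so checkpoints are not logged twice.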
https://api.github.com/repos/huggingface/transformers/issues/26318
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26318/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26318/comments
https://api.github.com/repos/huggingface/transformers/issues/26318/events
https://github.com/huggingface/transformers/issues/26318
1,906,688,883
I_kwDOCUB6oc5xpcNz
26,318
T5 tokenizer adds whitespace after added token
{ "login": "harshil-shah", "id": 12370376, "node_id": "MDQ6VXNlcjEyMzcwMzc2", "avatar_url": "https://avatars.githubusercontent.com/u/12370376?v=4", "gravatar_id": "", "url": "https://api.github.com/users/harshil-shah", "html_url": "https://github.com/harshil-shah", "followers_url": "https://api.github.com/users/harshil-shah/followers", "following_url": "https://api.github.com/users/harshil-shah/following{/other_user}", "gists_url": "https://api.github.com/users/harshil-shah/gists{/gist_id}", "starred_url": "https://api.github.com/users/harshil-shah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/harshil-shah/subscriptions", "organizations_url": "https://api.github.com/users/harshil-shah/orgs", "repos_url": "https://api.github.com/users/harshil-shah/repos", "events_url": "https://api.github.com/users/harshil-shah/events{/privacy}", "received_events_url": "https://api.github.com/users/harshil-shah/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "Hey, this is the same as #25881 , the fix to `rust` has not been done yet and is more involved. I'll try to get to it! ", "Thank you!", "Linked the fix PR ๐Ÿ˜‰ ", "The PR is in a good state, should be mergeable this week. It uncovers more \"inconsistencies\" with slow and fast, but I'll document all of this there! ๐Ÿ˜‰ You can already do something like:\r\n```python \r\nfrom tokenizers.pre_tokenizers import Metaspace\r\n.... # tokenizer.from_pretrained etc\r\ntokenizer._tokenizer.pre_tokenizer = Metaspace(add_prefix_space = True, replacement='โ–', prepend_scheme = \"first\") \r\n```\r\n", "@ArthurZucker Even after following the step in your previous comment, it still seems to be producing incorrect output for certain inputs:\r\n```py\r\nfrom transformers import AutoTokenizer\r\ntok = AutoTokenizer.from_pretrained('t5-base', use_fast=True)\r\nprint(tok.encode(\"</s>test</s>\", add_special_tokens=False)) # Broken\r\n\r\nfrom tokenizers.pre_tokenizers import Metaspace\r\ntok._tokenizer.pre_tokenizer = Metaspace(add_prefix_space = True, replacement='โ–', prepend_scheme = \"first\")\r\nprint(tok.encode(\"</s>test</s>\", add_special_tokens=False)) # Should be fixed, but isn't\r\n```\r\nIn both cases, `[1, 794, 1]` is printed which corresponds to `['</s>', 'โ–test', '</s>']`... but it should be `[1, 4377, 1]` which corresponds to `['</s>', 'test', '</s>']`. This can be achieved with the slow tokenizer with legacy set to false:\r\n```py\r\nfrom transformers import AutoTokenizer\r\nslow = AutoTokenizer.from_pretrained('t5-base', use_fast=False, legacy=False)\r\nprint(slow.encode(\"</s>test</s>\", add_special_tokens=False)) # [1, 4377, 1]\r\n```\r\n\r\nI've also tested saving and loading the tokenizer again (see [here](https://huggingface.co/Xenova/t5-tokenizer-new)), but that has the same problem. I'm using `tokenizers==0.15.0` and `transformers==4.36.1` (latest).\r\n\r\n\r\nIt is worth noting that it does fix other problems, like `\"Hey </s>. how are you\"`:\r\n- Old (incorrect): `['โ–Hey', 'โ–', '</s>', 'โ–', '.', 'โ–how', 'โ–are', 'โ–you']`\r\n- New (correct): `['โ–Hey', 'โ–', '</s>', '.', 'โ–how', 'โ–are', 'โ–you']`\r\n", "Indeed. That's a different issue which also comes from the `extract_and_normalize` piece of code. I'll see if there is a quick fix thanks for reporting ", "Also note that the template processors usually use this: https://github.com/huggingface/transformers/blob/e6dcf8abd6f65bb4b6dfc1831b20d9ba49ce00e2/src/transformers/models/llama/tokenization_llama_fast.py#L160\r\nwith a prefix space before the sequence. ", "> Also note that the template processors usually use this:\r\n> \r\n> https://github.com/huggingface/transformers/blob/e6dcf8abd6f65bb4b6dfc1831b20d9ba49ce00e2/src/transformers/models/llama/tokenization_llama_fast.py#L160\r\n> \r\n> \r\n> with a prefix space before the sequence.\r\n\r\nEven with `add_special_tokens=False`? ๐Ÿ‘€ " ]
1,695
1,705
1,705
NONE
null
### System Info - `transformers` version: 4.33.2 - Platform: Linux-6.2.0-33-generic-x86_64-with-glibc2.35 - Python version: 3.11.5 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi, When adding a token to the T5 tokenizer and then tokenizing a string, it seems that the encoding step is inserting an unwanted space after the added token. ```python from transformers import AddedToken, T5TokenizerFast tokenizer = T5TokenizerFast.from_pretrained("google/flan-t5-small") tokenizer.add_tokens(["<"]) print(tokenizer.encode("<body>")) # [32100, 643, 3155, 1] print(tokenizer.decode(tokenizer.encode("<body>"))) # < body></s> print(tokenizer.convert_ids_to_tokens(tokenizer.encode("<body>"))) # ['<', 'โ–body', '>', '</s>'] ``` It's unclear why the model is using the token `"โ–body"` when `"body"` is also in the vocabulary? And even if `"body"` weren't in the vocabulary, I'd still expect `convert_ids_to_tokens` to give back something like `["<", "b", "o", "d", "y", ">", "</s>"]`. ### Expected behavior The following script should print `<body></s>`. ```python from transformers import AddedToken, T5TokenizerFast tokenizer = T5TokenizerFast.from_pretrained("google/flan-t5-small") tokenizer.add_tokens(["<"]) print(tokenizer.decode(tokenizer.encode("<body>"))) ``` I saw https://github.com/huggingface/transformers/pull/24565 but this doesn't seem to have solved it for this case?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26318/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26317
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26317/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26317/comments
https://api.github.com/repos/huggingface/transformers/issues/26317/events
https://github.com/huggingface/transformers/pull/26317
1,906,531,365
PR_kwDOCUB6oc5a3ZsT
26,317
[ViTMatte] Add resources
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? This PR adds a link to a demo notebook for ViTMatte. It also adds a figure to make the docs a bit less boring :)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26317/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26317", "html_url": "https://github.com/huggingface/transformers/pull/26317", "diff_url": "https://github.com/huggingface/transformers/pull/26317.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26317.patch", "merged_at": 1695704799000 }
https://api.github.com/repos/huggingface/transformers/issues/26316
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26316/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26316/comments
https://api.github.com/repos/huggingface/transformers/issues/26316/events
https://github.com/huggingface/transformers/issues/26316
1,906,420,828
I_kwDOCUB6oc5xoaxc
26,316
Streaming and assisted decoding strategy for pipeline OR clearer understanding of pipelines to manually apply tasks
{ "login": "gidzr", "id": 83053994, "node_id": "MDQ6VXNlcjgzMDUzOTk0", "avatar_url": "https://avatars.githubusercontent.com/u/83053994?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gidzr", "html_url": "https://github.com/gidzr", "followers_url": "https://api.github.com/users/gidzr/followers", "following_url": "https://api.github.com/users/gidzr/following{/other_user}", "gists_url": "https://api.github.com/users/gidzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/gidzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gidzr/subscriptions", "organizations_url": "https://api.github.com/users/gidzr/orgs", "repos_url": "https://api.github.com/users/gidzr/repos", "events_url": "https://api.github.com/users/gidzr/events{/privacy}", "received_events_url": "https://api.github.com/users/gidzr/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[]
1,695
1,695
null
NONE
null
### Feature request Currently looking to get a number of non-pipeline functions with pipelines - Decoder strategies - Assisted decoding - Streaming output etc **Alternately**, if I can adopt the non-pipeline transformer approach with tokenizer act more pipeline'ish, that'd also be good. Specifically, what are the underlying changes that happen when selecting a task for pipeline? Eg. if I select 'text-generation' as a pipeline, what's that actually doing (so can apply the same thing to the non-pipeline approach) If for example pipeline_task = "text-generation", does this autoconfigure the model and tokenizer config, like sampling, temperature, beams, top-k, top-p, max_new_tokens?... What does the pipeline task actually do to the underlying model? If I knew this, I wouldn't need the above features for pipelines. ### Motivation I want to have all the features from all the approaches available from the one approach ### Your contribution Nothing yet
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26316/timeline
null
null
null
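Issue #26316 above asks for streaming and assisted decoding in pipelines. Until there is dedicated pipeline support, both are available by calling `model.generate()` directly, which is essentially what a `text-generation` pipeline does after tokenizing the prompt (its generation defaults come from the model's `generation_config`). A rough sketch, with placeholder checkpoint names:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "gpt2"            # placeholder main model
assistant_id = "distilgpt2"  # placeholder draft model sharing the same tokenizer

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
assistant = AutoModelForCausalLM.from_pretrained(assistant_id)

inputs = tokenizer("Pipelines are a convenient way to", return_tensors="pt")

# TextStreamer prints tokens to stdout as soon as they are generated;
# assistant_model switches on assisted (speculative) decoding with the draft model.
streamer = TextStreamer(tokenizer, skip_prompt=True)
output_ids = model.generate(
    **inputs,
    max_new_tokens=40,
    streamer=streamer,
    assistant_model=assistant,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```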
https://api.github.com/repos/huggingface/transformers/issues/26315
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26315/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26315/comments
https://api.github.com/repos/huggingface/transformers/issues/26315/events
https://github.com/huggingface/transformers/issues/26315
1,906,418,561
I_kwDOCUB6oc5xoaOB
26,315
Adding OwlV2 checkpoint to Owl-vit model
{ "login": "flavourabbit", "id": 45381460, "node_id": "MDQ6VXNlcjQ1MzgxNDYw", "avatar_url": "https://avatars.githubusercontent.com/u/45381460?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flavourabbit", "html_url": "https://github.com/flavourabbit", "followers_url": "https://api.github.com/users/flavourabbit/followers", "following_url": "https://api.github.com/users/flavourabbit/following{/other_user}", "gists_url": "https://api.github.com/users/flavourabbit/gists{/gist_id}", "starred_url": "https://api.github.com/users/flavourabbit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flavourabbit/subscriptions", "organizations_url": "https://api.github.com/users/flavourabbit/orgs", "repos_url": "https://api.github.com/users/flavourabbit/repos", "events_url": "https://api.github.com/users/flavourabbit/events{/privacy}", "received_events_url": "https://api.github.com/users/flavourabbit/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[ { "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for informing us!\r\n\r\nWe're on it, cc @ydshieh ", "Hi folks, happy to share that OWLv2 is now available: https://huggingface.co/docs/transformers/main/en/model_doc/owlv2." ]
1,695
1,697
1,697
NONE
null
### Feature request Google scenic team has released checkpoint of OwlV2 in the below link https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit#pretrained-checkpoints Could you please integrate this to HFโ€™s transfomers? ### Motivation OwlV2 has better performance while having simliar backbone as Owl-vit. Please refer to performance comparison as following (https://arxiv.org/pdf/2306.09683.pdf) ### Your contribution I donโ€™t think I can be a main contributer especially I donโ€™t know model converting. However, I know A to Z of transformersโ€™ Owl-vit code so I can do some job (assign me part of the work!)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26315/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26315/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26314
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26314/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26314/comments
https://api.github.com/repos/huggingface/transformers/issues/26314/events
https://github.com/huggingface/transformers/issues/26314
1,906,367,642
I_kwDOCUB6oc5xoNya
26,314
Ray Release 2.7.0 breaks Trainer.hyperparameter_search
{ "login": "AphinityAT", "id": 65361600, "node_id": "MDQ6VXNlcjY1MzYxNjAw", "avatar_url": "https://avatars.githubusercontent.com/u/65361600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AphinityAT", "html_url": "https://github.com/AphinityAT", "followers_url": "https://api.github.com/users/AphinityAT/followers", "following_url": "https://api.github.com/users/AphinityAT/following{/other_user}", "gists_url": "https://api.github.com/users/AphinityAT/gists{/gist_id}", "starred_url": "https://api.github.com/users/AphinityAT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AphinityAT/subscriptions", "organizations_url": "https://api.github.com/users/AphinityAT/orgs", "repos_url": "https://api.github.com/users/AphinityAT/repos", "events_url": "https://api.github.com/users/AphinityAT/events{/privacy}", "received_events_url": "https://api.github.com/users/AphinityAT/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hello,\r\n\r\nI am encountering the same error. \r\n\r\n\r\nThanks\r\n\r\n", "The team told me that they are working on a foward fix, did not check yet but feel free to confirm! ", "@ArthurZucker just FYI this is linked to some failing `Trainer` tests currently as well, so once resolved those should be fixed too", "Hello! Just wondering if there have been any updates to this. ", "This problem still exists in Ray-2.8.0. Are there any solutions to fix this?", "Some one please fix it!!!", "Still facing this issue.", "cc @justinvyu ", "@jaanli Are you running with `transformers>=4.36.0` where https://github.com/huggingface/transformers/pull/26499 is included?" ]
1,695
1,707
1,702
NONE
null
### System Info When performing hyperparameter search with the ray backend, the following warning is raised and hyperparameter optimization is not performed: ``` File /ray/tune/trainable/util.py:315, in with_parameters(trainable, **kwargs) 310 if _detect_checkpoint_function(trainable, partial=True): 311 from ray.tune.trainable.function_trainable import ( 312 _CHECKPOINT_DIR_ARG_DEPRECATION_MSG, 313 ) --> 315 raise DeprecationWarning(_CHECKPOINT_DIR_ARG_DEPRECATION_MSG) 317 def inner(config): 318 fn_kwargs = {} DeprecationWarning: Accepting a `checkpoint_dir` argument in your training function is deprecated. Please use `ray.train.get_checkpoint()` to access your checkpoint as a `ray.train.Checkpoint` object instead. See below for an example: Before ------ from ray import tune def train_fn(config, checkpoint_dir=None): if checkpoint_dir: torch.load(os.path.join(checkpoint_dir, "checkpoint.pt")) ... tuner = tune.Tuner(train_fn) tuner.fit() After ----- from ray import train, tune def train_fn(config): checkpoint: train.Checkpoint = train.get_checkpoint() if checkpoint: with checkpoint.as_directory() as checkpoint_dir: torch.load(os.path.join(checkpoint_dir, "checkpoint.pt")) ... tuner = tune.Tuner(train_fn) tuner.fit() ``` Python: 3.9 Transformers: 4.33.2 Ray: 2.7.0 **Possible Reason:** The new Ray Release 2.7.0 changes the behavior of Checkpoints https://github.com/ray-project/ray/releases/tag/ray-2.7.0 **Temporary Solution:** Manually downgrade Ray to 2.6.3 ### Who can help? @richardliaw @amogkam ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Steps to reproduce the behavior: 1. Install Ray 2.7.0 2. Run any hyperparameter code: e.g. https://huggingface.co/docs/transformers/hpo_train or https://github.com/huggingface/notebooks/blob/main/examples/text_classification.ipynb ### Expected behavior Expected behiour is the execution of the hyperparameter search as described in the documentation or the text classification example
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26314/reactions", "total_count": 9, "+1": 9, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26314/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26313
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26313/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26313/comments
https://api.github.com/repos/huggingface/transformers/issues/26313/events
https://github.com/huggingface/transformers/pull/26313
1,906,359,975
PR_kwDOCUB6oc5a20W2
26,313
๐ŸŒ [i18n-KO] Translated `code_llama.md` to Korean
{ "login": "mjk0618", "id": 39152134, "node_id": "MDQ6VXNlcjM5MTUyMTM0", "avatar_url": "https://avatars.githubusercontent.com/u/39152134?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mjk0618", "html_url": "https://github.com/mjk0618", "followers_url": "https://api.github.com/users/mjk0618/followers", "following_url": "https://api.github.com/users/mjk0618/following{/other_user}", "gists_url": "https://api.github.com/users/mjk0618/gists{/gist_id}", "starred_url": "https://api.github.com/users/mjk0618/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mjk0618/subscriptions", "organizations_url": "https://api.github.com/users/mjk0618/orgs", "repos_url": "https://api.github.com/users/mjk0618/repos", "events_url": "https://api.github.com/users/mjk0618/events{/privacy}", "received_events_url": "https://api.github.com/users/mjk0618/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
CONTRIBUTOR
null
<!-- PR์˜ ์ œ๋ชฉ์€ "๐ŸŒ [i18n-KO] Translated `<your_file>.md` to Korean" ์œผ๋กœ ๋ถ€ํƒ๋“œ๋ฆฝ๋‹ˆ๋‹ค! --> # What does this PR do? Translated the `code_llama.md` file of the documentation to Korean. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) @bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @member1 @member2 ... --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @eunseojo -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26313/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26313", "html_url": "https://github.com/huggingface/transformers/pull/26313", "diff_url": "https://github.com/huggingface/transformers/pull/26313.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26313.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26312
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26312/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26312/comments
https://api.github.com/repos/huggingface/transformers/issues/26312/events
https://github.com/huggingface/transformers/issues/26312
1,906,358,421
I_kwDOCUB6oc5xoLiV
26,312
Order of `loss/gradient_accumulation_steps` and `backward(loss)`
{ "login": "haoxiangsnr", "id": 28479613, "node_id": "MDQ6VXNlcjI4NDc5NjEz", "avatar_url": "https://avatars.githubusercontent.com/u/28479613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haoxiangsnr", "html_url": "https://github.com/haoxiangsnr", "followers_url": "https://api.github.com/users/haoxiangsnr/followers", "following_url": "https://api.github.com/users/haoxiangsnr/following{/other_user}", "gists_url": "https://api.github.com/users/haoxiangsnr/gists{/gist_id}", "starred_url": "https://api.github.com/users/haoxiangsnr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haoxiangsnr/subscriptions", "organizations_url": "https://api.github.com/users/haoxiangsnr/orgs", "repos_url": "https://api.github.com/users/haoxiangsnr/repos", "events_url": "https://api.github.com/users/haoxiangsnr/events{/privacy}", "received_events_url": "https://api.github.com/users/haoxiangsnr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Accelerate will also divide the loss if `accelerator.backward(loss)` is used, so it should already be done in https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/trainer.py#L2785.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,700
1,700
NONE
null
### System Info Hi, @muellerz @pacman100 . I have a question about the gradient accumulation. Should we divide the `gradient_accumulation_steps` before `accelerator.backward(loss)`? https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/trainer.py#L2787 It seems that `self.accelerator.accumulate(model)` [here](https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/trainer.py#L1889C17-L1889C57) is just a wrapper for `no_sync()` and will not automatically help divide the loss by the `gradient_accumulation_steps`. ### Who can help? _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26312/timeline
completed
null
null
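For the ordering question in issue #26312 above, the key point is that gradients are additive across `backward()` calls, so dividing each micro-batch loss by the number of accumulation steps before `backward()` yields the same accumulated gradient as averaging afterwards; as one of the comments notes, `accelerator.backward(loss)` applies this scaling itself. A plain-PyTorch sketch of the pattern:

```python
import torch
from torch import nn

model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

grad_accum_steps = 4
optimizer.zero_grad()

for step in range(16):
    x, y = torch.randn(8, 4), torch.randn(8, 1)
    loss = loss_fn(model(x), y)

    # Scale each micro-batch loss so the accumulated gradient equals the
    # gradient of the mean loss over the whole effective batch.
    (loss / grad_accum_steps).backward()

    if (step + 1) % grad_accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```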
https://api.github.com/repos/huggingface/transformers/issues/26311
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26311/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26311/comments
https://api.github.com/repos/huggingface/transformers/issues/26311/events
https://github.com/huggingface/transformers/pull/26311
1,906,351,355
PR_kwDOCUB6oc5a2yfg
26,311
[i18n-DE] Complete first toc chapter
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "organizations_url": "https://api.github.com/users/flozi00/orgs", "repos_url": "https://api.github.com/users/flozi00/repos", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "received_events_url": "https://api.github.com/users/flozi00/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@ArthurZucker or who is the correct one to ping ?", "I just wondered why the doc builder job did not started as the last times\r\nCould you approve the workflows to start ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26311). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do ? This PR translates all the file of the first toc chapter to german It continues https://github.com/huggingface/transformers/issues/18564 - [x] add new model - [x] add new pipeline - [x] add tensorflow model - [x] llm tutorial - [x] peft - [x] run scripts - [x] transformers agents - [x] re-read at the live documentation preview from PR
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26311/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26311", "html_url": "https://github.com/huggingface/transformers/pull/26311", "diff_url": "https://github.com/huggingface/transformers/pull/26311.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26311.patch", "merged_at": 1695839585000 }
https://api.github.com/repos/huggingface/transformers/issues/26310
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26310/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26310/comments
https://api.github.com/repos/huggingface/transformers/issues/26310/events
https://github.com/huggingface/transformers/pull/26310
1,906,288,664
PR_kwDOCUB6oc5a2k6H
26,310
[WIP] Add ImageBind Model Implementation
{ "login": "dg845", "id": 58458699, "node_id": "MDQ6VXNlcjU4NDU4Njk5", "avatar_url": "https://avatars.githubusercontent.com/u/58458699?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dg845", "html_url": "https://github.com/dg845", "followers_url": "https://api.github.com/users/dg845/followers", "following_url": "https://api.github.com/users/dg845/following{/other_user}", "gists_url": "https://api.github.com/users/dg845/gists{/gist_id}", "starred_url": "https://api.github.com/users/dg845/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dg845/subscriptions", "organizations_url": "https://api.github.com/users/dg845/orgs", "repos_url": "https://api.github.com/users/dg845/repos", "events_url": "https://api.github.com/users/dg845/events{/privacy}", "received_events_url": "https://api.github.com/users/dg845/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Awesome @dg845! Let us know when you'd like for us to review this PR", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hey! Do you need some help on this integration ? ๐Ÿค— ", "Hi @ArthurZucker, unfortunately I haven't been able to find time to work on this PR recently, but should be able to work on it more in the near future. I don't think I've hit any blockers yet.", "Hi @dg845 - any update on progress with adding the model? Do you think you'll be able to finish the PR soon? It's an impactful model and we'd like to have in the library as soon as possible. If it's not something you'll have time for, would you be open to someone help to finish the PR - making sure of course you still get the contribution as you've already done a large part? ", "Hi @amyeroberts, I'm not sure if I will be able to finish it soon. I'm open to having someone else help finish the PR - I will also try to work on it/help out as much as I can.", "@dg845 just curious, what is left on your TO-DO list for this PR? Would be helpful to whoever is assisting.", "I believe the current TODOs are as follows:\r\n\r\n1. Test the checkpoint conversion script `convert_imagebind_original_pytorch_to_hf.py` to make sure there aren't any errors\r\n2. Use the checkpoint conversion script to create a small random test model (it looks like there might already be one at [dg845/imagebind-test-dev](https://huggingface.co/dg845/imagebind-test-dev) but not sure if it's error-free)\r\n3. Use the checkpoint conversion script to convert the [full ImageBind checkpoint](https://github.com/facebookresearch/ImageBind#imagebind-model)\r\n4. Fix the imports for the preprocessing code (e.g. `feature_extraction_imagebind.py`, `image_processing_imagebind.py`, `processing_imagebind.py`, `tokenization_imagebind.py`, etc.) if necessary\r\n5. Test the preprocessing code against the [reference implementation](https://github.com/facebookresearch/ImageBind) (e.g. make sure the tests in `test_image_processing_imagebind.py`, `test_processor_imagebind.py`, `test_tokenization_imagebind.py` are passing)\r\n6. Test the modeling code against the reference implementation (e.g. make sure the tests in `test_modeling_imagebind.py` are passing, using the test checkpoint from (2))\r\n7. Write integration tests (combining preprocessing code and modeling code) and make sure they pass (using the full checkpoint created in (3))\r\n8. Finish writing the docstrings and other documentation in the code itself\r\n9. Finish the documentation in `/docs/source/en/model_doc/imagebind.md`\r\n\r\nAs a note, I believe the [official ImageBind repo](https://github.com/facebookresearch/ImageBind/tree/main) doesn't explicitly specify how to preprocess IMU data (e.g. in [`imagebind/data.py`](https://github.com/facebookresearch/ImageBind/blob/main/imagebind/data.py)), and I'm not sure if there is extra preprocessing needed for depth and thermal data that's not in [load_and_transform_vision_data](https://github.com/facebookresearch/ImageBind/blob/c6a47d6dc2b53eced51d398c181d57049ca59286/imagebind/data.py#L78).\r\n\r\nFor IMU data preprocessing, I referred to the [IMU2Clip repo](https://github.com/facebookresearch/imu2clip), also from Facebook/Meta Research, as well as this issue in the ImageBind repo: https://github.com/facebookresearch/ImageBind/issues/66.\r\n\r\nFor depth and thermal data preprocessing, I referred to the [Omnivore repo](https://github.com/facebookresearch/omnivore) (which I believe is previous work by the same authors as ImageBind).\r\n\r\nIt's not obvious that either of these things is the right thing to do - might make sense to confirm with the authors that doing so is reasonable. I guess another possible path would be to only implement the text/image/audio portion of the model, but in my opinion this is less than ideal.\r\n\r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,708
null
CONTRIBUTOR
null
# What does this PR do? This PR adds the ImageBind model ([paper](https://arxiv.org/abs/2305.05665), [code](https://github.com/facebookresearch/ImageBind)), a multimodal model which can map six different modalities to the same shared representation space. As stated in their [blog post](https://ai.facebook.com/blog/imagebind-six-modalities-binding-ai/), > "[ImageBind is] the first AI model capable of binding information from six modalities. The [model](https://github.com/facebookresearch/ImageBind) learns a single embedding, or shared representation space, not just for text, image/video, and audio, but also for sensors that record depth (3D), thermal (infrared radiation), and inertial measurement units (IMU), which calculate motion and position." <img width="625" alt="imagebind_figure_2" src="https://github.com/huggingface/transformers/assets/58458699/2eb66af1-883b-4705-9a1b-cdc009bf82fc"> Fixes #23240. Based on a previous PR #23284. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @ArthurZucker @shehanmunasinghe
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26310/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26310", "html_url": "https://github.com/huggingface/transformers/pull/26310", "diff_url": "https://github.com/huggingface/transformers/pull/26310.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26310.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26309
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26309/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26309/comments
https://api.github.com/repos/huggingface/transformers/issues/26309/events
https://github.com/huggingface/transformers/issues/26309
1,906,143,152
I_kwDOCUB6oc5xnW-w
26,309
Issues with converting llama-70B-chat model with convert_llama_weights_to_hf.py
{ "login": "Braxtogoo", "id": 7225240, "node_id": "MDQ6VXNlcjcyMjUyNDA=", "avatar_url": "https://avatars.githubusercontent.com/u/7225240?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Braxtogoo", "html_url": "https://github.com/Braxtogoo", "followers_url": "https://api.github.com/users/Braxtogoo/followers", "following_url": "https://api.github.com/users/Braxtogoo/following{/other_user}", "gists_url": "https://api.github.com/users/Braxtogoo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Braxtogoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Braxtogoo/subscriptions", "organizations_url": "https://api.github.com/users/Braxtogoo/orgs", "repos_url": "https://api.github.com/users/Braxtogoo/repos", "events_url": "https://api.github.com/users/Braxtogoo/events{/privacy}", "received_events_url": "https://api.github.com/users/Braxtogoo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, could you confirm that the checkpoints you download are from https://huggingface.co/meta-llama ? ", "They came from Meta themselves, the files I have seem to match the checksums on the huggingface repo though I am missing some files like the tokenizer and such. Would I need to grab the whole thing off of huggingface again for it to work? If so I can and I'd be happy to report back with the results.", "Probably the `params.json` file. The weights were converted quite a few number of times without problem so that will be the first culprit ", "Looks like mine matches the one on huggingface, still downloaded it and retried. Same error.", "Downgrading to `4.33.1` beats this boss and lets you advance to the next level (at least in my case).", "Arf that's not normal! @Braxtogoo could you share the full traceback please? ๐Ÿ˜‰ \r\nThere were not many changes to the script, one most notable is from #25740 ! ", "Sorry for the delay.\r\n\r\n@tycrimm Downgrading appears to have the same unbeatable boss bug. :)", "@ArthurZucker Here's the trace back from my last attempt with the downgraded version. It appears to be the same.\r\n\r\nTraceback (most recent call last):\r\n File \"/pathToLlama2/./transformers-4.33.1/src/transformers/models/llama/convert_llama_weights_to_hf.py\", line 318, in <module>\r\n main()\r\n File \"/pathToLlama2/./transformers-4.33.1/src/transformers/models/llama/convert_llama_weights_to_hf.py\", line 306, in main\r\n write_model(\r\n File \"/pathToLlama2/./transformers-4.33.1/src/transformers/models/llama/convert_llama_weights_to_hf.py\", line 270, in write_model\r\n model = LlamaForCausalLM.from_pretrained(tmp_model_path, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/homePath/anaconda3/lib/python3.11/site-packages/transformers/modeling_utils.py\", line 2777, in from_pretrained\r\n ) = cls._load_pretrained_model(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/homePath/anaconda3/lib/python3.11/site-packages/transformers/modeling_utils.py\", line 3118, in _load_pretrained_model\r\n new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/homePath/anaconda3/lib/python3.11/site-packages/transformers/modeling_utils.py\", line 702, in _load_state_dict_into_meta_model\r\n set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)\r\n File \"/homePath/anaconda3/lib/python3.11/site-packages/accelerate/utils/modeling.py\", line 281, in set_module_tensor_to_device\r\n raise ValueError(\r\nValueError: Trying to set a tensor of shape torch.Size([1024, 8192]) in \"weight\" (which has shape torch.Size([8192, 8192])), this look incorrect.", "Hey, I'm reproducing right now, will let you know ๐Ÿ˜‰ ", "Here are the steps I used: (transformers==4.34.1)\r\n```bash \r\npip install transformers\r\nhuggingface-cli login\r\nhuggingface-cli download meta-llama/Llama-2-70b --local-dir=\"70B\" --cache-dir=\"./cache\"\r\nhuggingface-cli download meta-llama/Llama-2-70b-hf tokenizer.model --local-dir=\"70B\" --cache-dir=\"./cache\"\r\npython ./transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir absolute/path/to/moved/repo --model_size 70B --output_dir llama-2-70b-chat-hf --safe_serialization true\r\n```\r\nthis worked as expected. I'll close this as I can't reproduce and we have quite a lot of users who successfully converted the checkpoints. \r\nMight be a transformers version issue ๐Ÿ˜‰ ", "Maybe you can try 4-bit quantization." ]
1,695
1,699
1,698
NONE
null
### System Info Howdy. I'm trying to convert the new llama 70B chat model with convert_llama_weights_to_hf.py but I'm getting the following error during loading the checkpoint shards: `ValueError: Trying to set a tensor of shape torch.Size([1024, 8192]) in "weight" (which has shape torch.Size([8192, 8192])), this look incorrect.` I'm running the script on Ubuntu 22.04.1 LTS and python 3.11.4 with Transformers version 4.33.2. Thank you in advance. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction This was the command I ran. `python ./transformers/src/transformers/models/llama/convert_llama_weights_to_hf.py --input_dir . --model_size 70B --output_dir llama-2-70b-chat-hf --safe_serialization true` ### Expected behavior At the end of the process, I should have the model in a bin format. With config.json and tokenizer file.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26309/timeline
completed
null
null
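Once the conversion command from issue #26309 above completes, a quick sanity check is to reload the exported checkpoint from the output directory. A sketch with the path as a placeholder (loading a 70B model this way needs `accelerate` installed and enough GPU/CPU memory):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

output_dir = "llama-2-70b-chat-hf"  # placeholder: the --output_dir used for conversion

tokenizer = AutoTokenizer.from_pretrained(output_dir)
model = AutoModelForCausalLM.from_pretrained(
    output_dir,
    torch_dtype=torch.bfloat16,  # keep the dtype the converter saved
    device_map="auto",           # shards the weights across available devices
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```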
https://api.github.com/repos/huggingface/transformers/issues/26308
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26308/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26308/comments
https://api.github.com/repos/huggingface/transformers/issues/26308/events
https://github.com/huggingface/transformers/issues/26308
1,906,040,265
I_kwDOCUB6oc5xm93J
26,308
Speedup module imports
{ "login": "apoorvkh", "id": 7005565, "node_id": "MDQ6VXNlcjcwMDU1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/7005565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apoorvkh", "html_url": "https://github.com/apoorvkh", "followers_url": "https://api.github.com/users/apoorvkh/followers", "following_url": "https://api.github.com/users/apoorvkh/following{/other_user}", "gists_url": "https://api.github.com/users/apoorvkh/gists{/gist_id}", "starred_url": "https://api.github.com/users/apoorvkh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apoorvkh/subscriptions", "organizations_url": "https://api.github.com/users/apoorvkh/orgs", "repos_url": "https://api.github.com/users/apoorvkh/repos", "events_url": "https://api.github.com/users/apoorvkh/events{/privacy}", "received_events_url": "https://api.github.com/users/apoorvkh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Thanks for opening this issue, are you using `main`. There was a PR recently to fix this, see #26090 and #26106", "I am indeed using main (specifically, `transformers[deepspeed]` at commit 382ba67)!", "The code I mentioned above is run directly in the header of `trainer.py`. And, if I understand correctly, I think `accelerate` is not covered by the Lazy imports in #26090.\r\n\r\nhttps://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/trainer.py#L203-L217", "Cc @younesbelkada I think you mentioned that accelerate is the bottleneck that we canโ€™t get rid of no? ", "Hi @apoorvkh \r\nhttps://github.com/huggingface/accelerate/pull/1963 being merged in accelerate I think you can switch to accelerate main and see if it resolves your issue", "Hey, thanks! I think that commit (https://github.com/huggingface/accelerate/commit/5dec654aaea0c92d4ccb7ad389fc33adcbbf79fc) reduces the runtime for the import from 8-9 seconds to 3-4 seconds (on my machine). That is still not ideal but is certainly more tolerable.", "Thanks! \r\nHm I see ok, I am curious what module takes so much time for import, would you be able to run a quick benchmark with [`tuna`](https://github.com/nschloe/tuna) and share the results here?\r\n\r\n```bash\r\n# benchmark\r\npython -X importtime -c \"import transformers\" 2> transformers-import-profile.log\r\n\r\n# visualize\r\ntuna <path to log file>\r\n```", "For sure. That's a nice tool!\r\n\r\nReally quickly, I found that `from transformers import Trainer` was particularly taking 4 seconds to import -- whereas `import transformers` is actually faster (< 1 second).\r\n\r\nWe can see the result for `from transformers import Trainer` below:\r\n\r\n![image](https://github.com/huggingface/transformers/assets/7005565/f3f54bc3-69d0-4d92-8e14-f9bd2d43317b)\r\n\r\nAlso, for `from transformers import TrainingArguments`:\r\n\r\n![image](https://github.com/huggingface/transformers/assets/7005565/d06b349e-fd3c-4697-b3a6-2655cea31b2b)\r\n\r\nAnd we can compare to `import transformers`:\r\n\r\n![image](https://github.com/huggingface/transformers/assets/7005565/bb0ba6f5-16c1-489a-9c90-c5c609564ed2)\r\n\r\nSeems like `accelerate` is no longer the biggest culprit. A lot of time is also spent importing `torch`.\r\n\r\nMy point is that we sometimes just import these tools for typing purposes or in an interactive terminal for later use. From a developer perspective, it would be more convenient to have fast imports and move the time-consuming parts to the moment we actually want to init/use the modules (and are actually expecting to expend time). Thanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
CONTRIBUTOR
null
### Feature request Can we please consider importing the deepspeed module when needed, rather than in the [import header of `trainer.py`](https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/trainer.py#L217)? ### Motivation When `deepspeed` is installed, `from transformers import Trainer` takes a long time! On my system that's 9 seconds! ```python >>> import timeit; timeit.timeit("from transformers import Trainer") [2023-09-20 23:49:13,899] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) 8.906949461437762 ``` I believe this import is the culprit. As we can see, it takes 8.5 seconds of the load time. https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/trainer.py#L217 ```python >>> timeit.timeit("from accelerate.utils import DeepSpeedSchedulerWrapper") [2023-09-20 23:45:53,185] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect) 8.525534554384649 ``` This is quite cumbersome, because all scripts that import Trainer (e.g. even for typing) are impacted! ### Your contribution Happy to submit a PR. We could make this a class variable or just import it directly at both places it's used. https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/trainer.py#L2437-L2439 https://github.com/huggingface/transformers/blob/e3a4bd2bee212a2d0fd9f03b27fe7bfc1debe42d/src/transformers/trainer.py#L2508-L2514
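A minimal sketch of the lazy-import pattern suggested here — the helper name is hypothetical and this is not the actual `Trainer` code, but the deferred import is the one profiled above:

```python
# Hypothetical helper: the expensive accelerate/deepspeed import only runs when the
# scheduler wrapper is actually needed, so `from transformers import Trainer` stays fast.
def _uses_deepspeed_scheduler(lr_scheduler) -> bool:
    from accelerate.utils import DeepSpeedSchedulerWrapper  # deferred, heavy import

    return isinstance(lr_scheduler, DeepSpeedSchedulerWrapper)
```

Imports inside a function body are only executed on the first call, so typing-only users of `Trainer` never pay the multi-second import cost.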
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26308/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26307
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26307/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26307/comments
https://api.github.com/repos/huggingface/transformers/issues/26307/events
https://github.com/huggingface/transformers/pull/26307
1,906,016,226
PR_kwDOCUB6oc5a1psS
26,307
Move applying rotary embeddings inside LlamaRotaryEmbedding class
{ "login": "kunal-vaishnavi", "id": 115581922, "node_id": "U_kgDOBuOj4g", "avatar_url": "https://avatars.githubusercontent.com/u/115581922?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kunal-vaishnavi", "html_url": "https://github.com/kunal-vaishnavi", "followers_url": "https://api.github.com/users/kunal-vaishnavi/followers", "following_url": "https://api.github.com/users/kunal-vaishnavi/following{/other_user}", "gists_url": "https://api.github.com/users/kunal-vaishnavi/gists{/gist_id}", "starred_url": "https://api.github.com/users/kunal-vaishnavi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kunal-vaishnavi/subscriptions", "organizations_url": "https://api.github.com/users/kunal-vaishnavi/orgs", "repos_url": "https://api.github.com/users/kunal-vaishnavi/repos", "events_url": "https://api.github.com/users/kunal-vaishnavi/events{/privacy}", "received_events_url": "https://api.github.com/users/kunal-vaishnavi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thank you @kunal-vaishnavi, is it for later easier pattern matching on the rotary positional embedding? Could you share a given ONNX exported with the argument `export_modules_as_functions` set (e.g. https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer)?\r\n\r\nThere are a few other archs that do the same: persimmon, gptj, gpt_neox_japanese, gpt_neox, idefics, esm, codegen.\r\n\r\nI am wondering whether removing `rotate_half` and `apply_rotary_pos_emb` is not a breaking change given that those are not private functions, but @ArthurZucker will be a better judge.", "> Thank you @kunal-vaishnavi, is it for later easier pattern matching on the rotary positional embedding?\r\n\r\nYes, it is for easier pattern matching on the rotary positional embedding. Without representing the rotary embedding as a function, there is a lot of careful pattern matching to do.\r\n\r\n> Could you share a given ONNX exported with the argument `export_modules_as_functions` set (e.g. https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer)?\r\n\r\nThe `export_modules_as_functions` option will convert an `nn.Module` into an ONNX `FunctionProto` (see the [ONNX docs](https://onnx.ai/onnx/intro/python.html#functions) for more details on functions). In Netron, the function will look something like this at the top level.\r\n\r\n![TorchScript Export Modules as Functions Example](https://github.com/huggingface/transformers/assets/115581922/41dee37d-08e5-4fe2-a189-61c54e281d3d)\r\n\r\nIf you click on the \"f\" in the top right, you can open the function and examine its contents.\r\n\r\n![TorchScript Functions Example](https://github.com/huggingface/transformers/assets/115581922/1f963dd5-66df-4627-9ffc-21c802a53e1e)\r\n\r\n\r\n> There are a few other archs that do the same: persimmon, gptj, gpt_neox_japanese, gpt_neox, idefics, esm, codegen.\r\n\r\nYes, these changes can be made for the other model architectures as well to enable easier pattern matching after exporting to ONNX.\r\n", "It's a breaking change and unless you have a strong argument for this in terms of speed of exportation or runtime, it's gonna be a hard to have this one merged ๐Ÿ˜“ We can however try to have a deprecation cycle if this makes sense. But if it's just for easier pattern matching I'm not sure it's worth the effort wdyt? ", "Could you explain a bit more on why the proposed approach would be a breaking change? Is it with regards to the scope of the functions and that they would only be seen in the class now? From my understanding, the two methods are used to apply the rotary embeddings and should be called in the forward pass of the class for the rotations to be performed. Additionally, the methods are only called in the attention forward pass (which has access to the class through `self.rotary_emb`) and don't appear to be used elsewhere.\r\n\r\nAs a workaround, can we create another subclass similar to the scaling variants that inherit from `LlamaRotaryEmbedding`? The new class can have the two functions in it, and we can still leave the default rotary embedding class as `LlamaRotaryEmbedding` during execution. In the attention layer, there can be a check to see if the new class is being used. If yes, then we can call the forward pass to get the new query and key states. 
If no, then we can leave the current code as is.\r\n\r\nWith the subclass approach, we can maintain current behavior and the deprecation can also be done over time to switch from `LlamaRotaryEmbedding` to the new class as the default.", "The simplest fix is to keep both functions and also add them in the class. The breaking part is that we canโ€™t just remove it as itโ€™s a ยซย globalย ยป function. And the issue is that LlamaRotary is used in other models which would need the same changes. \r\n\r\nmy question still applies: is there a strong incentive? ", "> The simplest fix is to keep both functions and also add them in the class.\r\n\r\nAdding the functions to the class would not be enough. They need to be used in the forward pass to fully capture the operations with `export_modules_as_functions`. I was suggesting another class if modifying the forward pass to `LlamaRotaryEmbedding` is considered a breaking change.\r\n\r\n> And the issue is that LlamaRotary is used in other models which would need the same changes.\r\n\r\nThe `LlamaRotaryEmbedding` code is copied for other models and not imported (as seen [here](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers%20path%3A%2F%5Esrc%5C%2Ftransformers%5C%2Fmodels%5C%2F%2F%20llamarotaryembedding&type=code)) so changes to `LlamaRotaryEmbedding` wouldn't directly impact those models. A similar change could be made for those models to maintain consistency. Given the goal of this proposed PR is to make changes to enable optimizing the LLaMA-2 ONNX model and integrate with Optimum, those changes could be in another PR.\r\n\r\n> is there a strong incentive?\r\n\r\nUsing `export_modules_as_functions` speeds up the export to ONNX and decreases model size. This is because the functions are templated so only one copy is needed for reference. Without it, the current exported model has a copy of the nodes that comprise the function at each layer. Additionally, the ONNX model loads much faster in Netron and is much easier to view because there are less nodes at the top level.\r\n\r\n> it's gonna be a hard to have this one merged\r\n\r\nSince it seems unlikely that these PR changes will be approved, I will begin switching to another approach. With the new approach, however, I won't be able to integrate the ORTOptimizer class in Optimum with optimizing the LLaMA-2 ONNX model converted with Optimum.", "Sounds good. I added a logger warning for deprecation and made two model changes to satisfy the CI pipeline requirements. \r\n\r\n- OpenLLaMA: The changes from LLaMA seem fine to use so I copied them over. \r\n- Persimmon: The rotary embeddings calculations appear different so I removed the \"Copied from transformers.models.llama.modeling_llama\" references to satisfy the copies checker.", "I removed the \"copied from\" comments because the copy checker was failing. After further examination, it was failing because a space was needed at the end of the first line in the logger warning. I added the spaces and the \"copied from\" comments are now back.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26307). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,700
1,700
NONE
null
# What does this PR do? This PR moves `rotate_half` and `apply_rotary_pos_emb` inside the `LlamaRotaryEmbedding` base class. When all of the rotary embedding computations are in one class, the [export of LLaMA-2 to ONNX](https://github.com/pytorch/pytorch/pull/109759) can be done with the `export_modules_as_functions` option for the `LlamaRotaryEmbedding` class in `torch.onnx.export`. With all of the rotary embedding computations in one function in the exported ONNX model, it is easier to optimize the model and [integrate with Optimum](https://github.com/huggingface/optimum/pull/1289). ## Who can review? @fxmarty, @ArthurZucker, @younesbelkada
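For reference, a rough sketch of the kind of export call this change enables — the checkpoint is a tiny test model used as a placeholder, the output path and opset are arbitrary, and the exact arguments used for a real LLaMA-2 conversion may differ:

```python
import torch
from transformers import AutoModelForCausalLM
from transformers.models.llama.modeling_llama import LlamaRotaryEmbedding

model = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-LlamaForCausalLM")
model.eval()

dummy_input = torch.randint(0, model.config.vocab_size, (1, 8))

# export_modules_as_functions keeps each LlamaRotaryEmbedding as a single ONNX
# FunctionProto instead of inlining its ops at every decoder layer, which only
# works if all rotary computations live inside that module's forward pass.
torch.onnx.export(
    model,
    (dummy_input,),
    "llama-rotary-as-function.onnx",
    opset_version=17,
    export_modules_as_functions={LlamaRotaryEmbedding},
)
```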
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26307/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26307", "html_url": "https://github.com/huggingface/transformers/pull/26307", "diff_url": "https://github.com/huggingface/transformers/pull/26307.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26307.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26306
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26306/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26306/comments
https://api.github.com/repos/huggingface/transformers/issues/26306/events
https://github.com/huggingface/transformers/issues/26306
1,905,890,442
I_kwDOCUB6oc5xmZSK
26,306
[example/recipe request] CodeLlama supervised fine tuning for infilling
{ "login": "alexpeys", "id": 5770141, "node_id": "MDQ6VXNlcjU3NzAxNDE=", "avatar_url": "https://avatars.githubusercontent.com/u/5770141?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexpeys", "html_url": "https://github.com/alexpeys", "followers_url": "https://api.github.com/users/alexpeys/followers", "following_url": "https://api.github.com/users/alexpeys/following{/other_user}", "gists_url": "https://api.github.com/users/alexpeys/gists{/gist_id}", "starred_url": "https://api.github.com/users/alexpeys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexpeys/subscriptions", "organizations_url": "https://api.github.com/users/alexpeys/orgs", "repos_url": "https://api.github.com/users/alexpeys/repos", "events_url": "https://api.github.com/users/alexpeys/events{/privacy}", "received_events_url": "https://api.github.com/users/alexpeys/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! Feel free to share your code on the hub! It's the best way I believe to share something like this ๐Ÿ˜‰ You should even be able to write a `Community Blogpost` !\r\n\r\nRegarding the format, the tokenizer will format the prompt 1. if you give it `\"{prefix} <FILL_ME> {suffix}\"`. The target probably needs what you mentioned: `\"{middle} โ–<EOT>\"`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@alexpeys Hi... do you have any basic script or initial scribbled idea to achieve this? Am trying to finetune codellama for infilling task. It would be helpful. " ]
1,695
1,700
1,698
NONE
null
### Feature request I am interested in fine tuning CodeLlama 7/13B to do some infilling. I have data that I want to fine tune on of the form: {prefix} {suffix} {middle} I am trying to understand how to format my supervised fine tuning dataset. Should I be setting up my fine tuning data as: ``` prompt="_<PRE> {prefix} _<SUF> {suffix} _<MID>", completion="{middle} โ–<EOT>" or prompt="{prefix} <FILL_ME> {suffix}", completion="{middle} โ–<EOT>" or some secret third thing ``` The documentation https://huggingface.co/docs/transformers/main/model_doc/code_llama#transformers.CodeLlamaTokenizer seems to suggest that the second will work and the tokenizer will do all the token healing etc... as it does for generation, but I am not sure. ### Motivation Fine tuning for infilling is a little trickier than just standard PROMPT, RESPONSE fine tuning! ### Your contribution Happy to write a basic example script if someone wants to guide me through the correct way to structure the prompt given the CodeLlamaTokenizer.
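A hedged sketch of what the second option could look like in a data-collation step — whether the tokenizer fully handles the infilling special tokens during training is exactly the open question above, and the `<EOT>` terminator is approximated with the EOS token here, so treat this as illustrative rather than a confirmed recipe:

```python
from transformers import CodeLlamaTokenizer

tokenizer = CodeLlamaTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prefix = "def add(a, b):\n    "
suffix = "\n    return result"
middle = "result = a + b"

# Option 2: let the tokenizer expand <FILL_ME> into the <PRE>/<SUF>/<MID> layout.
prompt_ids = tokenizer(f"{prefix}<FILL_ME>{suffix}")["input_ids"]

# Target: the middle span; the issue suggests ending it with the <EOT> token
# (EOS is used here as a stand-in).
target_ids = tokenizer(middle, add_special_tokens=False)["input_ids"] + [tokenizer.eos_token_id]

input_ids = prompt_ids + target_ids
# Mask the prompt so the loss is only computed on the infilled middle.
labels = [-100] * len(prompt_ids) + target_ids
```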
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26306/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/26306/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26305
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26305/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26305/comments
https://api.github.com/repos/huggingface/transformers/issues/26305/events
https://github.com/huggingface/transformers/issues/26305
1,905,885,255
I_kwDOCUB6oc5xmYBH
26,305
Swin-B model performance lower than mmpretrain
{ "login": "zhaojun060708", "id": 26863591, "node_id": "MDQ6VXNlcjI2ODYzNTkx", "avatar_url": "https://avatars.githubusercontent.com/u/26863591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhaojun060708", "html_url": "https://github.com/zhaojun060708", "followers_url": "https://api.github.com/users/zhaojun060708/followers", "following_url": "https://api.github.com/users/zhaojun060708/following{/other_user}", "gists_url": "https://api.github.com/users/zhaojun060708/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhaojun060708/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhaojun060708/subscriptions", "organizations_url": "https://api.github.com/users/zhaojun060708/orgs", "repos_url": "https://api.github.com/users/zhaojun060708/repos", "events_url": "https://api.github.com/users/zhaojun060708/events{/privacy}", "received_events_url": "https://api.github.com/users/zhaojun060708/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi,\r\n\r\nWhich models are you comparing exactly? Did you verify logits match between those models on the same input image?", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
I tried to train on my own dataset with the Swin-B model using both Hugging Face's transformers and mmpretrain. With the same learning rate and the same augmentation pipeline, the accuracy of the transformers run is 4% lower than mmpretrain's. Has anyone run into the same issue?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26305/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26304
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26304/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26304/comments
https://api.github.com/repos/huggingface/transformers/issues/26304/events
https://github.com/huggingface/transformers/pull/26304
1,905,868,352
PR_kwDOCUB6oc5a1Kji
26,304
feat: Sequential beam search (a.k.a. Low-memory beam search)
{ "login": "Saibo-creator", "id": 53392976, "node_id": "MDQ6VXNlcjUzMzkyOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saibo-creator", "html_url": "https://github.com/Saibo-creator", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> repurpose\r\n\r\nHey @gante , sorry I just saw your message. I agree with your suggestion and I will update the code and add a test!\r\n", "Hello @gante ,\r\nI have reused the flag `low_memory` in the code and try to unify part of the code used in sequential beam search and sequential contrastive search. \r\nA test is also added.\r\n\r\nWhat makes me confused is the test case where I failed:\r\n```terminal\r\nself = <tests.models.xglm.test_modeling_xglm.XGLMModelTest testMethod=test_tf_from_pt_safetensors>\r\n\r\n @is_pt_tf_cross_test\r\n def test_tf_from_pt_safetensors(self):\r\n for model_class in self.all_model_classes:\r\n config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()\r\n \r\n tf_model_class_name = \"TF\" + model_class.__name__ # Add the \"TF\" at the beginning\r\n if not hasattr(transformers, tf_model_class_name):\r\n # transformers does not have this model in TF version yet\r\n return\r\n \r\n tf_model_class = getattr(transformers, tf_model_class_name)\r\n \r\n pt_model = model_class(config)\r\n \r\n with tempfile.TemporaryDirectory() as tmpdirname:\r\n pt_model.save_pretrained(tmpdirname, safe_serialization=True)\r\n tf_model_1 = tf_model_class.from_pretrained(tmpdirname, from_pt=True)\r\n \r\n pt_model.save_pretrained(tmpdirname, safe_serialization=False)\r\n tf_model_2 = tf_model_class.from_pretrained(tmpdirname, from_pt=True)\r\n \r\n # Check models are equal\r\n for p1, p2 in zip(tf_model_1.weights, tf_model_2.weights):\r\n> self.assertTrue(np.allclose(p1.numpy(), p2.numpy()))\r\nE AssertionError: False is not true\r\n\r\ntests/test_modeling_common.py:3246: AssertionError\r\n\r\n\r\nFAILED tests/models/speech_to_text/test_modeling_speech_to_text.py::Speech2TextModelTest::test_tf_from_pt_safetensors - AssertionError: False is not true\r\nFAILED tests/models/transfo_xl/test_modeling_transfo_xl.py::TransfoXLModelTest::test_tf_from_pt_safetensors - AssertionError: False is not true\r\nFAILED tests/models/xglm/test_modeling_xglm.py::XGLMModelTest::test_tf_from_pt_safetensors - AssertionError: False is not true\r\n\r\nExited with code exit status 255\r\n```\r\n\r\nThis doesn't seem to be directly related to my changes. Maybe you know what's wrong here ;D\r\n\r\n\r\n", "Hey @Saibo-creator!\r\n\r\nThe failing test is indeed unrelated :) We have skipped the test while we work on it, rebasing this branch should yield a green CI :)", "Hello @gante , I would like to have your opinions over these failure cases\r\n\r\n```terminal\r\nFAILED tests/models/blenderbot/test_modeling_blenderbot.py::BlenderbotModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[1, 51, 51, 51, 51], [1, 37, 37, 43, 43]] != [[1, 37, 37, 43, 43], [1, 74, 74, 74, 74]]\r\nFAILED tests/models/bloom/test_modeling_bloom.py::BloomModelTest::test_beam_search_low_memory - RuntimeError: Sizes of tensors must match except in dimension 2. 
Expected size 2 but got size 8 for tensor number 1 in the list.\r\nFAILED tests/models/codegen/test_modeling_codegen.py::CodeGenModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[135, 102, 167, 94, 172, 103, 209], [233, 32, 101, 213, 234, 172, 103]] != [[135, 102, 167, 94, 172, 103, 209], [233, 32, 101, 203, 147, 47, 164]]\r\nFAILED tests/models/clvp/test_modeling_clvp.py::ClvpDecoderTest::test_beam_search_low_memory - AssertionError: Lists differ: [[38, 235, 110, 144, 159], [93, 177, 267, 144, 55]] != [[38, 235, 110, 177, 144], [93, 177, 67, 235, 116]]\r\nFAILED tests/models/git/test_modeling_git.py::GitModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[56, 96, 46, 38, 92, 42, 1], [25, 63, 12, 22, 49, 91, 19]] != [[56, 96, 46, 38, 92, 42, 1], [25, 63, 12, 45, 72, 77, 69]]\r\nFAILED\r\nFAILED tests/models/gpt_neox/test_modeling_gpt_neox.py::GPTNeoXModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[6, 34, 87, 20, 93, 20, 93], [82, 10, 76, 68, 62, 48, 10]] != [[6, 34, 87, 20, 93, 20, 93], [82, 10, 76, 68, 62, 68, 62]]\r\nFAILED tests/models/gptj/test_modeling_gptj.py::GPTJModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[84, 2, 29, 55, 59, 47, 88], [50, 23, 76, 50, 76, 16, 20]] != [[84, 2, 29, 94, 76, 16, 20], [50, 23, 76, 50, 94, 76, 50]]\r\nFAILED\r\nFAILED tests/models/imagegpt/test_modeling_imagegpt.py::ImageGPTModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[57, 97, 6, 36, 77, 62, 18], [11, 23, 94, 85, 88, 81, 61]] != [[57, 97, 6, 36, 77, 62, 18], [11, 23, 94, 54, 9, 72, 18]]\r\nFAILED tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[83, 48, 56, 54, 56, 54, 56], [91, 84, 33, 72, 56, 54, 56]] != [[83, 48, 56, 54, 56, 54, 56], [91, 84, 33, 72, 56, 70, 78]]\r\nFAILED tests/models/mega/test_modeling_mega.py::MegaModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[53, 16, 33, 21, 21, 34, 70], [82, 45, 21, 21, 34, 34, 70]] != [[53, 16, 33, 21, 34, 34, 70], [82, 45, 21, 21, 34, 34, 70]]\r\nFAILED tests/models/mistral/test_modeling_mistral.py::MistralModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[52, 24, 42, 54, 4, 34, 21], [31, 71, 18, 5, 27, 28, 52]] != [[52, 24, 42, 54, 4, 34, 21], [31, 71, 18, 5, 73, 73, 73]]\r\nFAILED tests/models/mpt/test_modeling_mpt.py::MptModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[8, 13, 85, 19, 19, 19, 19], [35, 61, 88, 19, 19, 19, 19]] != [[8, 13, 85, 19, 19, 19, 19], [35, 61, 88, 92, 92, 92, 92]]\r\nFAILED tests/models/persimmon/test_modeling_persimmon.py::PersimmonModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[81, 83, 11, 58, 81, 28, 26], [54, 81, 18, 38, 18, 38, 18]] != [[81, 83, 11, 85, 58, 13, 38], [54, 81, 18, 38, 18, 38, 18]]\r\nFAILED tests/models/phi/test_modeling_phi.py::PhiModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[28, 14, 49, 19, 56, 62, 50], [9, 30, 60, 19, 19, 19, 57]] != [[28, 14, 49, 37, 88, 35, 80], [9, 30, 60, 19, 57, 9, 57]]\r\n```\r\n\r\nThe mixin tests reveal that a subset of models will generate different outputs over `sequential beam search` and `beam search`, though the difference is not very big.\r\n\r\nIt seems all these models are recent llm, such as `phi`, `llama`, `mistral` etc. 
While for other models, the outputs are identical.\r\n\r\nI have tried debugging locally, and I confirm that:\r\n- the beam search is deterministic across multi-runs\r\n- the sequential beam search is also deterministic across multi-runs\r\n\r\nSo indeed these models's outputs differ in the SBS vs BS\r\n\r\nDo you have any insights into what may have caused this difference?\r\n\r\nOn interesting question is : do these models give same outputs when running with different batch sizes ? Because SBS is just like unbatched BS.\r\n\r\nThank you!\r\n\r\n", "Relevant discussions:\r\n- https://discuss.huggingface.co/t/results-of-model-generate-are-different-for-different-batch-sizes-of-the-decode-only-model/34878\r\n- https://github.com/ggerganov/llama.cpp/issues/3014\r\n- https://github.com/huggingface/transformers/issues/23017, seems to be related to FP16!\r\n\r\n", "Hey @Saibo-creator ๐Ÿ‘‹ We might be seeing the same effect as described [here](https://github.com/huggingface/transformers/issues/25420#issuecomment-1775317535)\r\n\r\nIf you run the test multiple times on a single model, from the list of failing tests above, do you always get a failure? In other words, is the test stable or flaky? \r\n\r\nTo see whether a test is flaky, install\r\n```\r\npip install pytest-flakefinder\r\n```\r\n\r\nand then run\r\n```\r\npytest --flake-finder --flake-runs=100 tests/test_failing_test.py\r\n```\r\n\r\n๐Ÿ‘‰ Since the models in the mixin are randomly initialized, and the inputs are random as well, all runs are different. If the test is flaky, then the mismatch is probably due to the effect described in the threat at the beginning of this comment, and there are quick solutions for the test :)", "Hello @gante , thanks for your feedback, I tried running`pytest-flakefinder` and indeed the tests pass 7 out of 100 times and fails the other 93 times. \r\n\r\nI think it's indeed due to the effect you explained in the thread(btw, bravo for the detailed explanation!) \r\n\r\nWhat is your solution for it ? :)\r\n\r\nIn addition to the flaky failure case, there are some other failure cases, mostly due to model specific implementations, I dive into some of them:\r\n- bloom: special implementation of past key,value tensor shape\r\n- ctrl: TODO\r\n- ~fsmt~: old model with different cache format, won't fix\r\n- gpt_bigcode: due to non-standard implementation of past key values , easy to fix\r\n- reformer: old model with different cache format, won't fix\r\n- transfo_xl: TODO\r\n- xlnet: TODO\r\n- cpm: TODO\r\n\r\nI would like to know what is the general principle about these specific models? On the one hand, we could try to add if-else in the code to handle their specificity, but this also decreases the code quality a bit and our energy.\r\n\r\nLet me know which models do you think we must have them supported. :)\r\n\r\n(just saw this from https://github.com/huggingface/transformers/blob/main/tests/generation/test_utils.py#L1979)\r\n> 3. TODO (joao): A few models have different formats, skipping those until the cache refactor is complete\r\n> models_without_standard_cache = (\"bloom\", \"ctrl\", \"fsmt\", \"gptbigcode\", \"mega\", \"reformer\")\r\n \r\n\r\n\r\n\r\n\r\n", "Hey @Saibo-creator!\r\n\r\nRegarding the model-specific implementations, feel free to ignore them for now (including skipping the tests, as you pasted at the end of your comment) :)\r\n\r\nRegarding flakiness: you mentioned \"the tests pass 7 out of 100 times and fails the other 93 times\". Is this per model, or when you test all models at once? 
\r\n๐Ÿ‘‰ If it is the latter, for all models at once, then it means the per model failure rate is low -- we can simply add the `@is_flaky` test decorator, adding a comment pointing to the comment I linked. \r\n๐Ÿ‘‰ If it is the former, per model, then the failure rate is super high! There may be something else that we must uncover before merging :)", "Hey @gante \r\n\r\nI think it's the former case, per model... \r\n\r\nI need to dive deep into the problem to understand why..\r\n\r\nIf you are interested, here is what is happening, If I run `pytest --flake-finder --flake-runs=100 tests/models/llama/test_modeling_llama.py`,\r\n\r\n I get ` 93 failed, 8307 passed, 4700 skipped, 1613 warnings in 930.30s (0:15:30) ` \r\n\r\nand the failing cases are from `test_beam_search_low_memory` like:\r\n```\r\nFAILED tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[7, 68, 64, 28, 26, 37, 49], [52, 58, 92, 50, 7, 37, 49]] != [[7, 68, 64, 7, 37, 49, 37], [52, 58, 92, 71, 41, 7, 41]]\r\nFAILED tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[32, 79, 18, 50, 94, 48, 93], [52, 51, 3, 56, 24, 12, 24]] != [[32, 79, 18, 50, 94, 5, 25], [52, 51, 3, 56, 24, 12, 24]]\r\nFAILED tests/models/llama/test_modeling_llama.py::LlamaModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[39, 97, 28, 57, 0, 0, 0], [34, 69, 91, 34, 63, 0, 0]] != [[39, 97, 28, 57, 0, 0, 0], [34, 69, 91, 5, 58, 18, 14]]\r\n```\r\n\r\nSo this is 93/ 100 where the two outputs are different\r\n\r\n\r\nFor the other models such as `GPT2`, I also get som failure cases, though not frequent\r\nwith `pytest --flake-finder --flake-runs=10 tests/models/gpt2/test_modeling_gpt2.py`\r\n\r\n```\r\nFAILED tests/models/gpt2/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[29, 58, 3, 68, 68, 68, 68], [90, 65, 45, 45, 68, 68, 68]] != [[29, 58, 3, 68, 68, 68, 68], [90,...\r\nFAILED tests/models/gpt2/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[18, 32, 29, 29, 87, 87, 87], [40, 85, 64, 64, 30, 30, 30]] != [[18, 32, 29, 87, 87, 87, 87], [4...\r\nFAILED tests/models/gpt2/test_modeling_gpt2.py::GPT2ModelTest::test_beam_search_low_memory - AssertionError: Lists differ: [[98, 79, 92, 9, 9, 9, 9], [46, 16, 93, 9, 9, 9, 9]] != [[98, 79, 92, 9, 9, 9, 9], [46, 16, 93, 9...\r\n=============================== 3 failed, 907 passed, 510 skipped, 197 warnings in 470.60s (0:07:50) \r\n```\r\n\r\n\r\n\r\n", "@Saibo-creator sadly I have no easy advice here :( it could be a kernel-related numerical issue due to different shapes, like when using `past_key_values`, or it can be a subtle bug.\r\n\r\nI see two paths forward:\r\n1 - you are able to pin the source of the mismatch, and we can safely confirm that it is something like a numerical issue\r\n2 - you run a few benchmarks over your PR -- if the resulting metrics are similar with the low memory mode, then there shouldn't be a bug", "I spent some time investigating the reason for mismatch. I notice that if batch-size = 1, all tests pass without any failure(as shown in the latest commit)\r\n\r\nSo it seems this is not a numerical issue but more like something to do with batch processing. 
\r\n\r\nI will try to figure out if my implementation has a bug with batched input.", "I confirm that if we disable `use_cache`, the output are identical regardless of `batch_size`\r\nSo now we identified the origin was from the `key_value_cache`", "Hello @gante ๏ผŒ\r\n After a whole afternoon debugging, I found the bug, it's a tiny bug [in this line](https://github.com/epfl-dlab/transformers-GCD-PR/blob/fix_issue_22639/src/transformers/generation/utils.py#L4955). I somehow wrote `for i in range(0, full_batch_size, 1)` instead of `for i in range(0, full_batch_size, split_size)`. The former makes no sense at all so I guess it was just a typo, but hidden very well....\r\n\r\nI'm happy that it works now : )\r\n\r\n(The failure cases are irrelevant)", "@Saibo-creator any reason this hasn't been merged?", "or @gante ?", "Thanks for implementing it btw @Saibo-creator ", "I could really use this", "> @Saibo-creator any reason this hasn't been merged?\r\n\r\nHello Jules! Sorry I was a bit out of bandwidth last week, I'm fixing the small things pointed by Arthur now and will push immediately : )", "@ArthurZucker I updated code to incorporate your feedbacks. Let me know any other things to do ", "Is there a way the batch size of the beam-search could be made separate from `num_return_sequences`?", "For example, for RL, I would like to generate let's say 64 different outputs with a single input sequence. However, my hardware (multiple gpu or not) hardware can't handle more than 16 beams at once.", "From trying to run the code, it seems like I couldn't get those 64 outputs? Just 16?", "@JulesGM \r\n\r\n\r\n> For example, for RL, I would like to generate let's say 64 different outputs with a single input sequence. However, my hardware (multiple gpu or not) hardware can't handle more than 16 beams at once.\r\n\r\nIn your case, say you have a batch size = 1(single input), you want to do beam search of k=64 with num_return_sequences=64 and the maximum number of sequences your gpu can handle in parallel is 16.\r\nIf you do beam_search with `low_memory=True` then this method will divide your beam into 64 batches(each of 1 sequence), and run them sequentially. \r\nCurrently there is no argument to specify the `parallel_size`, it's set to be identical to the `batch_size`, this is something we could discuss if we want to add. It involves adding a new argument to the generation function.\r\n\r\n\r\n\r\n> From trying to run the code, it seems like I couldn't get those 64 outputs? Just 16?\r\n\r\nThat is something I don't expect to happen. If you set beam search k =64 and `num_return_seq`=64, you should get 64 outputs, regardless you have `low_memory` option True or False. Am I wrong ? Could you provide a reproducible example ?\r\n\r\n\r\n\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26304). All of your documentation changes will be reflected on that endpoint. 
The docs are available until 30 days after the last update.", "Merging as it's approved as it solves the original target use case.\r\n\r\n@JulesGM don't refrain from commenting if it doesn't solve your use case: we may still open a follow-up PR to further refine this ๐Ÿค— ", "@Saibo-creator thank you for iterating with us ๐Ÿ’› ", "> Merging as it's approved as it solves the original target use case.\r\n> \r\n> @JulesGM don't refrain from commenting if it doesn't solve your use case: we may still open a follow-up PR to further refine this ๐Ÿค—\r\n\r\n@JulesGM If you think it's necessary, we could consider adding more flexibility by allowing users to specify the `parallel_size` for example. It would be an easy add-on. ", "@Saibo-creator alternatively, a try/except loop could be tried (try with `batch_size` -> if it fails, try with `batch_size/2` -> ... -> try with batch_size = 1). I'd prefer that to an additional flag -- we already have too many flags in `generate` :)", "> @Saibo-creator alternatively, a try/except loop could be tried (try with `batch_size` -> if it fails, try with `batch_size/2` -> ... -> try with batch_size = 1). I'd prefer that to an additional flag -- we already have too many flags in `generate` :)\r\n\r\nThanks for the suggestion @gante ! I think this would be a good improvement to reduce work from user side. Should I open another PR to proceed ?" ]
1,695
1,706
1,705
CONTRIBUTOR
null
# What does this PR do? This is to address issue in https://github.com/huggingface/transformers/issues/22639 This PR is based on the idea from https://github.com/huggingface/transformers/issues/22639#issuecomment-1507525155 The original implementation of beam search effectively multiplies the batch size memory-wise and compute-wise by the batch size. If you have a batch size of 1 and a beam search of 8, model.forward sees 8 samples as effective batch size. This implementation is not necessary per se and can consume a lot of memory. The new implementation split the full_batch(num_beam x batch size) inputs into a list of reduced_batch(beam_search_batch_size), run them sequentially and concat them back to a single model_output object. It involves two helper function: - `def concat_model_outputs(objs: List[ModelOutput]) -> ModelOutput` - `def split_model_inputs( obj: Union[ModelOutput, Dict], split_size: int, full_batch_size: int ) -> List[Union[ModelOutput, Dict]]` The new implementation can be used in 4 decoding methods: `beam_search`, `beam_sample`, `group_beam_search` and `constrained_beam_search` The expected behavior is that it produces exactly the same output(logits) as the original implementation. I tested it with the following quick test and it works regardless of input ```python from transformers import GPT2Tokenizer, AutoModelForCausalLM import numpy as np from transformers import ( AutoTokenizer, AutoModelForSeq2SeqLM, LogitsProcessorList, MinLengthLogitsProcessor, BeamSearchScorer, ) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = AutoModelForCausalLM.from_pretrained("gpt2") tokenizer.pad_token_id = tokenizer.eos_token_id model_inputs = tokenizer('I enjoy walking with my cute dog', return_tensors='pt') # activate beam search and early_stopping beam_output = model.generate( **model_inputs, max_new_tokens=40, num_beams=5, early_stopping=True ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_output[0], skip_special_tokens=True)) beam_output_w_subbatch = model.generate( **model_inputs, max_new_tokens=40, num_beams=5, early_stopping=True, beam_search_batch_size=1 ) print("Output:\n" + 100 * '-') print(tokenizer.decode(beam_output_w_subbatch[0], skip_special_tokens=True)) assert (beam_output == beam_output_w_subbatch).all(), "Beam search results from sub batch and full batch are different" ``` TODO: only did it for pytorch models, if you think this PR is promising, I can do it for tensorflow too. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Fixes #22639 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). 
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante @ArthurZucker
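For the record, a small usage sketch of the flag that was ultimately reused for this feature (`low_memory`, as discussed in the review thread) rather than the `beam_search_batch_size` argument from the original description; the checkpoint is just an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("I enjoy walking with my cute dog", return_tensors="pt")

# With low_memory=True the num_beams * batch_size candidates are run through the
# model in sequential batch_size-sized chunks instead of all at once.
output = model.generate(**inputs, num_beams=8, max_new_tokens=40, low_memory=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```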
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26304/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26304", "html_url": "https://github.com/huggingface/transformers/pull/26304", "diff_url": "https://github.com/huggingface/transformers/pull/26304.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26304.patch", "merged_at": 1705664215000 }
https://api.github.com/repos/huggingface/transformers/issues/26303
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26303/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26303/comments
https://api.github.com/repos/huggingface/transformers/issues/26303/events
https://github.com/huggingface/transformers/issues/26303
1,905,740,112
I_kwDOCUB6oc5xl0lQ
26,303
resize_token_embeddings warning should provide more context and be easier to squelch
{ "login": "keturn", "id": 83819, "node_id": "MDQ6VXNlcjgzODE5", "avatar_url": "https://avatars.githubusercontent.com/u/83819?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keturn", "html_url": "https://github.com/keturn", "followers_url": "https://api.github.com/users/keturn/followers", "following_url": "https://api.github.com/users/keturn/following{/other_user}", "gists_url": "https://api.github.com/users/keturn/gists{/gist_id}", "starred_url": "https://api.github.com/users/keturn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keturn/subscriptions", "organizations_url": "https://api.github.com/users/keturn/orgs", "repos_url": "https://api.github.com/users/keturn/repos", "events_url": "https://api.github.com/users/keturn/events{/privacy}", "received_events_url": "https://api.github.com/users/keturn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! On 1. Sur this si something that can be added. \r\n2. No, we can use `logger.warning_once` however.\r\n\r\nWould you like to open a PR for both? ๐Ÿค— ", "That is something I could do! You should be able to find my application in the collection ๐Ÿค— has collected via Workable.", "I don't know if it's safe or not, but I added pad_to_multiple_of=32 on line 1039 of env/lib/python3.10/site-packages/diffusers/loaders.py, and the warning disappeared.\r\n\r\nfrom: `text_encoder.resize_token_embeddings(len(tokenizer) + len(tokens))` to `text_encoder.resize_token_embeddings(len(tokenizer) + len(tokens), pad_to_multiple_of=32)`", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,699
1,699
CONTRIBUTOR
null
### System Info - `transformers` version: 4.33.2 - Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35 - Python version: 3.11.5 - Huggingface_hub version: 0.17.2 - Safetensors version: 0.3.1 - Accelerate version: 0.23.0 - Accelerate config: - compute_environment: LOCAL_MACHINE - distributed_type: NO - mixed_precision: bf16 - use_cpu: False - debug: False - num_processes: 1 - machine_rank: 0 - num_machines: 1 - gpu_ids: cuda:0 - rdzv_backend: static - same_network: True - main_training_function: main - downcast_bf16: no - tpu_use_cluster: False - tpu_use_sudo: False - tpu_env: [] - PyTorch version (GPU?): 2.1.0+cu121 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction call ClipTextEncoder.resize_token_embeddings and it spews > You are resizing the embedding layer without providing a pad_to_multiple_of parameter. This means that the new embedding dimension will be None. This might induce some performance reduction as Tensor Cores will not be available. For more details about this, or help on choosing the correct value for resizing, refer to this guide: [โ€ฆ] added in #25088. ### Expected behavior wow that's a big message that happens at a high logging level (`WARNING`) on some code that gets called fairly frequently. A couple things would help: - I had no idea where this was coming from. It would be nice if it used a logger that provided more context (e.g. the class that was emitting the warning), or a `repr(self)`, or stack information. - Use [`warnings.warn`](https://docs.python.org/3/library/warnings.html#warnings.warn) instead of `logger.warning`. This allows it to be reported _once_ instead of on every call, makes it easier for an application author to filter these warnings from the end user, and there's an argument `stacklevel` which can be a more concise way of providing relevant caller information than feeding `stack_info` to a `logger`.
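A short sketch of the two workarounds that come up here and in the replies — passing `pad_to_multiple_of` so the warning never fires (64 is just an example value), or lowering the library log level; neither changes the underlying suggestion to use `warnings.warn`/`warning_once`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.utils import logging

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokenizer.add_tokens(["<my-new-token>"])

# Supplying pad_to_multiple_of gives the resized embedding a Tensor-Core-friendly
# dimension and avoids the warning quoted above.
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)

# Alternatively, silence the transformers logger that emits the message.
logging.set_verbosity_error()
```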
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26303/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26302
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26302/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26302/comments
https://api.github.com/repos/huggingface/transformers/issues/26302/events
https://github.com/huggingface/transformers/pull/26302
1,905,436,242
PR_kwDOCUB6oc5aztD9
26,302
[InternLM] Add support for InternLM
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@ArthurZucker Also, to answer the question, there is no `Dropout` anywhere in the LLaMA code or the InternLM code." ]
1,695
1,695
1,695
MEMBER
null
InternLM is based on the LLaMA code but adds a `config.bias` parameter. We can support those models by adding `config.bias` to LLaMA, and preserve backward compatibility by defaulting it to `False`
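Roughly, the change amounts to threading a configurable bias flag through the attention projections — a simplified sketch, with the attribute name taken from this PR description and defaulting to `False` so existing LLaMA checkpoints are unaffected:

```python
import torch.nn as nn


class LlamaAttentionSketch(nn.Module):
    # Simplified: real LLaMA attention sizes the k/v projections by the number of
    # key/value heads; only the bias handling matters for this illustration.
    def __init__(self, config):
        super().__init__()
        hidden = config.hidden_size
        bias = getattr(config, "bias", False)  # InternLM checkpoints set this to True
        self.q_proj = nn.Linear(hidden, hidden, bias=bias)
        self.k_proj = nn.Linear(hidden, hidden, bias=bias)
        self.v_proj = nn.Linear(hidden, hidden, bias=bias)
        self.o_proj = nn.Linear(hidden, hidden, bias=bias)
```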
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26302/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26302/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26302", "html_url": "https://github.com/huggingface/transformers/pull/26302", "diff_url": "https://github.com/huggingface/transformers/pull/26302.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26302.patch", "merged_at": 1695743540000 }
https://api.github.com/repos/huggingface/transformers/issues/26301
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26301/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26301/comments
https://api.github.com/repos/huggingface/transformers/issues/26301/events
https://github.com/huggingface/transformers/pull/26301
1,905,311,432
PR_kwDOCUB6oc5azSfw
26,301
update hf hub dependency to be compatible with the new tokenizers
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
COLLABORATOR
null
# What does this PR do? Fixes #26276: `tokenizers==0.14` requires `"huggingface-hub>=0.16.4,<1.0"`, so the dependency in setup.py was updated accordingly.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26301/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26301", "html_url": "https://github.com/huggingface/transformers/pull/26301", "diff_url": "https://github.com/huggingface/transformers/pull/26301.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26301.patch", "merged_at": 1695301056000 }
https://api.github.com/repos/huggingface/transformers/issues/26300
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26300/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26300/comments
https://api.github.com/repos/huggingface/transformers/issues/26300/events
https://github.com/huggingface/transformers/pull/26300
1,905,280,618
PR_kwDOCUB6oc5azLwT
26,300
Code-llama-nit
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'll add a few more update it seems that even when performing infilling the prefix tokens is removed, which should not be done (for both fast and slow): TODO for patch if needed" ]
1,695
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fixes #26156: the `fast` CodeLlama tokenizer expects the fill token to be != None.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26300/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26300", "html_url": "https://github.com/huggingface/transformers/pull/26300", "diff_url": "https://github.com/huggingface/transformers/pull/26300.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26300.patch", "merged_at": 1696264168000 }
https://api.github.com/repos/huggingface/transformers/issues/26299
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26299/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26299/comments
https://api.github.com/repos/huggingface/transformers/issues/26299/events
https://github.com/huggingface/transformers/pull/26299
1,905,253,446
PR_kwDOCUB6oc5azFzE
26,299
[Whisper Tokenizer] Make decoding faster after adding timestamps
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Yep it was significantly faster once scaled to large datasets and long sequence lengths (>5k samples with 256 seq len)" ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? Following the update to the Whisper tokenizer to handle encoding/decoding timestamps (#26054), there is one line in the decoding which takes extremely long: https://github.com/huggingface/transformers/blob/f94c9b3d863c1a95b44b5b3ea9ce3cbd27fc7609/src/transformers/models/whisper/tokenization_whisper.py#L614 Here we do an order `N * M` operation to filter out all the timestamp tokens, where `N` is the length of the token ids, and `M` the number of timestamp tokens (for each token, check whether itโ€™s in the timestamp token list). In practice, this is causing decoding to take **extremely** long for typical validation sets, e.g. LibriSpeech test clean took ~30 mins for the tokenizer to decode on a TPU v3 (which has lots of CPU power to run this operation). This PR switches the timestamp filtering to a regex string operation, which in a toy benchmark was a factor of > 2000 faster. Would love to hear from @ArthurZucker whether we're happy to sacrifice a bit of readability for this speed-up!
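An illustrative version of the regex idea (not necessarily the exact pattern that was merged): strip every `<|t.tt|>` timestamp token from the decoded string in one linear pass instead of checking each token against the timestamp-token list:

```python
import re

# Whisper timestamp tokens look like <|0.00|>, <|3.24|>, ..., <|30.00|>.
_TIMESTAMP_RE = re.compile(r"<\|\d+\.\d+\|>")


def strip_timestamp_tokens(text: str) -> str:
    # One regex substitution over the whole string, instead of an O(N * M)
    # per-token membership check against all timestamp ids.
    return _TIMESTAMP_RE.sub("", text)


print(strip_timestamp_tokens("<|0.00|> Hello world.<|2.40|><|2.40|> Bye.<|4.00|>"))
# -> " Hello world. Bye."
```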
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26299/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26299", "html_url": "https://github.com/huggingface/transformers/pull/26299", "diff_url": "https://github.com/huggingface/transformers/pull/26299.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26299.patch", "merged_at": 1695924147000 }
https://api.github.com/repos/huggingface/transformers/issues/26298
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26298/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26298/comments
https://api.github.com/repos/huggingface/transformers/issues/26298/events
https://github.com/huggingface/transformers/issues/26298
1,905,249,947
I_kwDOCUB6oc5xj86b
26,298
Synk_Dive
{ "login": "yasmws", "id": 57499139, "node_id": "MDQ6VXNlcjU3NDk5MTM5", "avatar_url": "https://avatars.githubusercontent.com/u/57499139?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yasmws", "html_url": "https://github.com/yasmws", "followers_url": "https://api.github.com/users/yasmws/followers", "following_url": "https://api.github.com/users/yasmws/following{/other_user}", "gists_url": "https://api.github.com/users/yasmws/gists{/gist_id}", "starred_url": "https://api.github.com/users/yasmws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yasmws/subscriptions", "organizations_url": "https://api.github.com/users/yasmws/orgs", "repos_url": "https://api.github.com/users/yasmws/repos", "events_url": "https://api.github.com/users/yasmws/events{/privacy}", "received_events_url": "https://api.github.com/users/yasmws/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,695
1,695
null
NONE
null
### Model description The Synk Dive project creates a multisensory harmony environment that offers a unique immersive experience, combining visual and textual inputs. Given a video as input, for example, the system generates a soundtrack that adapts to the emotions conveyed on screen. This soundtrack synchronizes with the different scenes throughout the video, resulting in an experience that integrates visual and auditory elements to intensify the user's immersion and emotional connection with the content. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26298/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26297
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26297/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26297/comments
https://api.github.com/repos/huggingface/transformers/issues/26297/events
https://github.com/huggingface/transformers/pull/26297
1,905,209,630
PR_kwDOCUB6oc5ay8H7
26,297
add build_inputs_with_special_tokens to LlamaFast
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "On our test update roadmap cc @ydshieh " ]
1,695
1,696
1,696
COLLABORATOR
null
# What does this PR do? Fixes #26287 where the ouptuts of `prepare_for_model` are different. There is a test for `prepare_for_model` but we don't really make sure that the outputs match.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26297/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26297", "html_url": "https://github.com/huggingface/transformers/pull/26297", "diff_url": "https://github.com/huggingface/transformers/pull/26297.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26297.patch", "merged_at": 1696264245000 }
https://api.github.com/repos/huggingface/transformers/issues/26296
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26296/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26296/comments
https://api.github.com/repos/huggingface/transformers/issues/26296/events
https://github.com/huggingface/transformers/pull/26296
1,905,202,048
PR_kwDOCUB6oc5ay6d5
26,296
More error message fixup, plus some linebreaks!
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Changes merged, gonna go ahead and merge the PR once CI is green." ]
1,695
1,695
1,695
MEMBER
null
Followup to #26291 - I missed one of the messages! This PR also adds some linebreaks so they display without messy long lines and wrapping.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26296/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26296", "html_url": "https://github.com/huggingface/transformers/pull/26296", "diff_url": "https://github.com/huggingface/transformers/pull/26296.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26296.patch", "merged_at": 1695314165000 }
https://api.github.com/repos/huggingface/transformers/issues/26295
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26295/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26295/comments
https://api.github.com/repos/huggingface/transformers/issues/26295/events
https://github.com/huggingface/transformers/pull/26295
1,905,137,843
PR_kwDOCUB6oc5aysnF
26,295
[wip: test doc builder fix-lt-html-regex]
{ "login": "mishig25", "id": 11827707, "node_id": "MDQ6VXNlcjExODI3NzA3", "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mishig25", "html_url": "https://github.com/mishig25", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "organizations_url": "https://api.github.com/users/mishig25/orgs", "repos_url": "https://api.github.com/users/mishig25/repos", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "received_events_url": "https://api.github.com/users/mishig25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
CONTRIBUTOR
null
testing https://github.com/huggingface/doc-builder/pull/398
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26295/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26295", "html_url": "https://github.com/huggingface/transformers/pull/26295", "diff_url": "https://github.com/huggingface/transformers/pull/26295.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26295.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26294
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26294/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26294/comments
https://api.github.com/repos/huggingface/transformers/issues/26294/events
https://github.com/huggingface/transformers/pull/26294
1,905,115,022
PR_kwDOCUB6oc5aynrv
26,294
add bbox input validation
{ "login": "jinhopark8345", "id": 60179569, "node_id": "MDQ6VXNlcjYwMTc5NTY5", "avatar_url": "https://avatars.githubusercontent.com/u/60179569?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jinhopark8345", "html_url": "https://github.com/jinhopark8345", "followers_url": "https://api.github.com/users/jinhopark8345/followers", "following_url": "https://api.github.com/users/jinhopark8345/following{/other_user}", "gists_url": "https://api.github.com/users/jinhopark8345/gists{/gist_id}", "starred_url": "https://api.github.com/users/jinhopark8345/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinhopark8345/subscriptions", "organizations_url": "https://api.github.com/users/jinhopark8345/orgs", "repos_url": "https://api.github.com/users/jinhopark8345/repos", "events_url": "https://api.github.com/users/jinhopark8345/events{/privacy}", "received_events_url": "https://api.github.com/users/jinhopark8345/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "All pass!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26294). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> `BrosModel` will only work if `bbox` is not `None` and currently there is no validation code for this case. Add input validation code to the beginning of `BrosModel.forward` method as @ydshieh suggested. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/pull/23190#issuecomment-1727796291 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ydshieh <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26294/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26294", "html_url": "https://github.com/huggingface/transformers/pull/26294", "diff_url": "https://github.com/huggingface/transformers/pull/26294.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26294.patch", "merged_at": 1695221316000 }
https://api.github.com/repos/huggingface/transformers/issues/26293
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26293/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26293/comments
https://api.github.com/repos/huggingface/transformers/issues/26293/events
https://github.com/huggingface/transformers/pull/26293
1,905,099,864
PR_kwDOCUB6oc5aykXc
26,293
[QUICK FIX LINK] Update trainer.py
{ "login": "SoyGema", "id": 24204714, "node_id": "MDQ6VXNlcjI0MjA0NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/24204714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SoyGema", "html_url": "https://github.com/SoyGema", "followers_url": "https://api.github.com/users/SoyGema/followers", "following_url": "https://api.github.com/users/SoyGema/following{/other_user}", "gists_url": "https://api.github.com/users/SoyGema/gists{/gist_id}", "starred_url": "https://api.github.com/users/SoyGema/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SoyGema/subscriptions", "organizations_url": "https://api.github.com/users/SoyGema/orgs", "repos_url": "https://api.github.com/users/SoyGema/repos", "events_url": "https://api.github.com/users/SoyGema/events{/privacy}", "received_events_url": "https://api.github.com/users/SoyGema/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @ArthurZucker ! It seems the point character broke things\r\nNow it sohuld work. ", "indeed sorry haha! merging ๐Ÿ˜‰ ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26293). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
Fix link # What does this PR do? Fix broken link. Change `https://huggingface.co/docs/transformers/model_doc/auto.` for `https://huggingface.co/docs/transformers/model_doc/auto` <img width="1093" alt="Captura de pantalla 2023-09-20 a las 16 10 06" src="https://github.com/huggingface/transformers/assets/24204714/c1d801bf-691e-4591-9071-1e3584e26ada"> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26293/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26293", "html_url": "https://github.com/huggingface/transformers/pull/26293", "diff_url": "https://github.com/huggingface/transformers/pull/26293.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26293.patch", "merged_at": 1695346410000 }
https://api.github.com/repos/huggingface/transformers/issues/26292
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26292/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26292/comments
https://api.github.com/repos/huggingface/transformers/issues/26292/events
https://github.com/huggingface/transformers/pull/26292
1,905,086,553
PR_kwDOCUB6oc5ayhe-
26,292
Fix FSMT weight sharing
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
MEMBER
null
The FSMT weight sharing needs to take into account whether `tie_word_embeddings` is `True` or not, given that the model has checkpoints that have it set to `True`, and others set to `False`.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26292/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26292", "html_url": "https://github.com/huggingface/transformers/pull/26292", "diff_url": "https://github.com/huggingface/transformers/pull/26292.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26292.patch", "merged_at": 1695300366000 }
https://api.github.com/repos/huggingface/transformers/issues/26291
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26291/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26291/comments
https://api.github.com/repos/huggingface/transformers/issues/26291/events
https://github.com/huggingface/transformers/pull/26291
1,905,059,316
PR_kwDOCUB6oc5aybqN
26,291
Rewrite for custom code warning messages
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
MEMBER
null
I was seeing these a lot while working on InternLM and it's time to fix them!
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26291/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26291", "html_url": "https://github.com/huggingface/transformers/pull/26291", "diff_url": "https://github.com/huggingface/transformers/pull/26291.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26291.patch", "merged_at": 1695219529000 }
https://api.github.com/repos/huggingface/transformers/issues/26290
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26290/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26290/comments
https://api.github.com/repos/huggingface/transformers/issues/26290/events
https://github.com/huggingface/transformers/pull/26290
1,904,802,848
PR_kwDOCUB6oc5axkSQ
26,290
Fix issue of canine forward requiring input_ids anyway
{ "login": "marcmk6", "id": 31750443, "node_id": "MDQ6VXNlcjMxNzUwNDQz", "avatar_url": "https://avatars.githubusercontent.com/u/31750443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcmk6", "html_url": "https://github.com/marcmk6", "followers_url": "https://api.github.com/users/marcmk6/followers", "following_url": "https://api.github.com/users/marcmk6/following{/other_user}", "gists_url": "https://api.github.com/users/marcmk6/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcmk6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcmk6/subscriptions", "organizations_url": "https://api.github.com/users/marcmk6/orgs", "repos_url": "https://api.github.com/users/marcmk6/repos", "events_url": "https://api.github.com/users/marcmk6/events{/privacy}", "received_events_url": "https://api.github.com/users/marcmk6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26290). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,696
1,696
CONTRIBUTOR
null
The current `forward` requires (the shape of) `input_ids` for deriving other variables whenever `input_ids` or `inputs_embeds` is provided. Change this to use the given one instead of `input_ids` all the time. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #26288 ## Who can review? @ArthurZucker and @younesbelkada <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
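For context, a minimal sketch of the usual shape-derivation pattern this fix follows (the standalone helper below is illustrative; the actual change lives inside `CanineModel.forward`):

```python
import torch

def get_input_shape(input_ids=None, inputs_embeds=None):
    # Derive (batch_size, seq_length) from whichever input was provided,
    # instead of always reading it from input_ids (which may be None).
    if input_ids is not None and inputs_embeds is not None:
        raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
    if input_ids is not None:
        return input_ids.size()
    if inputs_embeds is not None:
        return inputs_embeds.size()[:-1]
    raise ValueError("You have to specify either input_ids or inputs_embeds")

print(get_input_shape(inputs_embeds=torch.rand(1, 10, 768)))  # torch.Size([1, 10])
```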
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26290/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26290", "html_url": "https://github.com/huggingface/transformers/pull/26290", "diff_url": "https://github.com/huggingface/transformers/pull/26290.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26290.patch", "merged_at": 1696237601000 }
https://api.github.com/repos/huggingface/transformers/issues/26289
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26289/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26289/comments
https://api.github.com/repos/huggingface/transformers/issues/26289/events
https://github.com/huggingface/transformers/issues/26289
1,904,775,137
I_kwDOCUB6oc5xiI_h
26,289
Make the skip_batches_dataloader used in the Trainer customizable
{ "login": "Hubert-Bonisseur", "id": 48770768, "node_id": "MDQ6VXNlcjQ4NzcwNzY4", "avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hubert-Bonisseur", "html_url": "https://github.com/Hubert-Bonisseur", "followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers", "following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}", "gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions", "organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs", "repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos", "events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}", "received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[]
1,695
1,695
null
NONE
null
### Feature request In order to allow customizing the skip dataloader, I propose adding a customizable method, called `get_skip_dataloader` for example, which by default contains the current logic but can be changed by inheriting from the Trainer. ### Motivation I am using the [mosaicML streaming](https://github.com/mosaicml/streaming) framework to train models. In order to use the mid-epoch resumption feature of this framework, you have to: - use their StreamingDataLoader; this is already handled by overwriting `get_train_dataloader` - save the dataloader state dict. This can be handled with a callback. - load the dataloader state dict when resuming training. This can't be done currently because the function responsible for it (`skip_first_batches`, from Accelerate) is not customizable. I am able to overcome this problem by patching `skip_first_batches`, but an official way to do it would be much more comfortable. ### Your contribution I can provide a PR if you think this is a good idea
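A rough sketch of what the proposed hook could look like (the method name `get_skip_dataloader` comes from the proposal above and does not exist in the Trainer today; how a streaming subclass would use it is only an assumption):

```python
from accelerate import skip_first_batches
from transformers import Trainer

class ResumableTrainer(Trainer):
    # Hypothetical hook from the proposal above; not an existing Trainer method.
    def get_skip_dataloader(self, dataloader, num_batches_to_skip):
        # Default behaviour: defer to Accelerate, as the Trainer currently does
        # when resuming mid-epoch.
        return skip_first_batches(dataloader, num_batches_to_skip)

# A streaming-aware subclass could override get_skip_dataloader to restore the
# dataloader's own state (e.g. via its load_state_dict) instead of skipping
# batches one by one.
```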
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26289/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26288
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26288/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26288/comments
https://api.github.com/repos/huggingface/transformers/issues/26288/events
https://github.com/huggingface/transformers/issues/26288
1,904,718,285
I_kwDOCUB6oc5xh7HN
26,288
CANINE unexpectedly requires input_ids anyway
{ "login": "marcmk6", "id": 31750443, "node_id": "MDQ6VXNlcjMxNzUwNDQz", "avatar_url": "https://avatars.githubusercontent.com/u/31750443?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcmk6", "html_url": "https://github.com/marcmk6", "followers_url": "https://api.github.com/users/marcmk6/followers", "following_url": "https://api.github.com/users/marcmk6/following{/other_user}", "gists_url": "https://api.github.com/users/marcmk6/gists{/gist_id}", "starred_url": "https://api.github.com/users/marcmk6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marcmk6/subscriptions", "organizations_url": "https://api.github.com/users/marcmk6/orgs", "repos_url": "https://api.github.com/users/marcmk6/repos", "events_url": "https://api.github.com/users/marcmk6/events{/privacy}", "received_events_url": "https://api.github.com/users/marcmk6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,695
1,696
1,696
CONTRIBUTOR
null
### System Info - `transformers` version: 4.33.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.2 - Safetensors version: 0.3.3 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (False) - Tensorflow version (GPU?): 2.13.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.7.2 (cpu) - Jax version: 0.4.14 - JaxLib version: 0.4.14 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import CanineModel, BertModel import torch BERT_model = BertModel.from_pretrained('bert-base-uncased') canine_model = CanineModel.from_pretrained('google/canine-c') fake_input = torch.rand(1, 10, 768) _ = BERT_model.forward(inputs_embeds=fake_input) # no error _ = canine_model.forward(inputs_embeds=fake_input) # error ``` The error message ``` File /miniconda3/envs/tmp/lib/python3.10/site-packages/transformers/models/canine/modeling_canine.py:1172, in CanineModel.forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 1162 input_char_embeddings = self.char_embeddings( 1163 input_ids=input_ids, 1164 position_ids=position_ids, 1165 token_type_ids=token_type_ids, 1166 inputs_embeds=inputs_embeds, 1167 ) 1169 # Contextualize character embeddings using shallow Transformer. 1170 # We use a 3D attention mask for the local attention. 1171 # `input_char_encoding`: shape (batch_size, char_seq_len, char_dim) -> 1172 char_attention_mask = self._create_3d_attention_mask_from_input_mask(input_ids, attention_mask) 1173 init_chars_encoder_outputs = self.initial_char_encoder( 1174 input_char_embeddings, 1175 attention_mask=char_attention_mask, 1176 output_attentions=output_attentions, 1177 output_hidden_states=output_hidden_states, 1178 ) 1179 input_char_encoding = init_chars_encoder_outputs.last_hidden_state File /miniconda3/envs/tmp/lib/python3.10/site-packages/transformers/models/canine/modeling_canine.py:1042, in CanineModel._create_3d_attention_mask_from_input_mask(self, from_tensor, to_mask) 1031 def _create_3d_attention_mask_from_input_mask(self, from_tensor, to_mask): 1032 """ 1033 Create 3D attention mask from a 2D tensor mask. 1034 (...) 1040 float Tensor of shape [batch_size, from_seq_length, to_seq_length]. 1041 """ -> 1042 batch_size, from_seq_length = from_tensor.shape[0], from_tensor.shape[1] 1044 to_seq_length = to_mask.shape[1] 1046 to_mask = torch.reshape(to_mask, (batch_size, 1, to_seq_length)).float() AttributeError: 'NoneType' object has no attribute 'shape' ``` ### Expected behavior According to [doc](https://huggingface.co/docs/transformers/model_doc/canine#transformers.CanineModel.forward), the forward should work with either `input_ids` or `inputs_embeds` provided. But it turns out `input_ids` is used for deriving other variables in the code in all cases.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26288/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26287
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26287/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26287/comments
https://api.github.com/repos/huggingface/transformers/issues/26287/events
https://github.com/huggingface/transformers/issues/26287
1,904,690,361
I_kwDOCUB6oc5xh0S5
26,287
LLamaV2 tokenizer prepare_for_model produces different results for fast and slow tokenizers
{ "login": "zerogerc", "id": 9149195, "node_id": "MDQ6VXNlcjkxNDkxOTU=", "avatar_url": "https://avatars.githubusercontent.com/u/9149195?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zerogerc", "html_url": "https://github.com/zerogerc", "followers_url": "https://api.github.com/users/zerogerc/followers", "following_url": "https://api.github.com/users/zerogerc/following{/other_user}", "gists_url": "https://api.github.com/users/zerogerc/gists{/gist_id}", "starred_url": "https://api.github.com/users/zerogerc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zerogerc/subscriptions", "organizations_url": "https://api.github.com/users/zerogerc/orgs", "repos_url": "https://api.github.com/users/zerogerc/repos", "events_url": "https://api.github.com/users/zerogerc/events{/privacy}", "received_events_url": "https://api.github.com/users/zerogerc/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Yep, this is because `prepare_input_for_model` uses the `build_inputs_with_special_tokens`, which was never added for LlamaFast as using the template processor is the favoured way to go. I'll add it back for now, good catch! " ]
1,695
1,696
1,696
NONE
null
### System Info transformers==4.33.2 Hi, I noticed that LlamaV2 tokenizer produces different results depending on whether is_fast is enabled or not: ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Here is a code snippet: ``` tokenizer_fast = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=True) tokenizer_slow = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", use_fast=False) print(tokenizer_fast.prepare_for_model([100, 200, 300])["input_ids"]) >>> [100, 200, 300] print(tokenizer_slow.prepare_for_model([100, 200, 300])["input_ids"]) >>> [1, 100, 200, 300] ``` ### Expected behavior The results should be the same
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26287/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26286
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26286/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26286/comments
https://api.github.com/repos/huggingface/transformers/issues/26286/events
https://github.com/huggingface/transformers/issues/26286
1,904,678,234
I_kwDOCUB6oc5xhxVa
26,286
`align_to_words=True` in `QuestionAnsweringPipeline` can lead to duplicate answers
{ "login": "MichelBartels", "id": 17650521, "node_id": "MDQ6VXNlcjE3NjUwNTIx", "avatar_url": "https://avatars.githubusercontent.com/u/17650521?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichelBartels", "html_url": "https://github.com/MichelBartels", "followers_url": "https://api.github.com/users/MichelBartels/followers", "following_url": "https://api.github.com/users/MichelBartels/following{/other_user}", "gists_url": "https://api.github.com/users/MichelBartels/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichelBartels/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichelBartels/subscriptions", "organizations_url": "https://api.github.com/users/MichelBartels/orgs", "repos_url": "https://api.github.com/users/MichelBartels/repos", "events_url": "https://api.github.com/users/MichelBartels/events{/privacy}", "received_events_url": "https://api.github.com/users/MichelBartels/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "cc @Rocketknight1 if you can have a look! ๐Ÿค— ", "@michelbartels I managed to reproduce the bug, and I think your diagnosis of it is completely correct. Would you be interested in filing a PR with your solution? If you don't have time, that's fine! Just let us know and we'll put it on the list to get fixed internally.", "@Rocketknight1 Thanks for looking into this, I am afraid I currently don't have time to contribute a fix.", "@MichelBartels No problem! Thanks for the clean bug report anyway - I'll let you know when we have a fix.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@BCreativeS , the issue still needs to be addressed.", "This issue still needs to be addressed.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This issue still needs to be addressed.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "This issue still needs to be addressed.", "cc @Rocketknight1 if you can have a look! Would be nice ๐Ÿค— " ]
1,695
1,707
null
CONTRIBUTOR
null
### System Info - `transformers` version: 4.31.0 - Platform: macOS-13.4.1-arm64-arm-64bit - Python version: 3.11.4 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - Accelerate version: 0.21.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @Nars ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```py from transformers import pipeline answers = pipeline("question-answering", model="deepset/tinyroberta-squad2")( question="Who is the chancellor of Germany?", context="Angela Merkel was the chancellor of Germany.", top_k=10 ) print(answers[0]) # Returns {'score': 0.9961308836936951, 'start': 0, 'end': 13, 'answer': 'Angela Merkel'} print(answers[5]) # Returns {'score': 7.520078361267224e-05, 'start': 0, 'end': 13, 'answer': 'Angela Merkel'} ``` If `align_to_words` is set to `True` (which is the default), all start or end tokens that are contained in the same word are mapped to the same start and end character index (see [here](https://github.com/huggingface/transformers/blob/37c205eb5d5165b70d3100b599a2bcfc483944f5/src/transformers/pipelines/question_answering.py#L617-L620)). This is expected when using `align_to_words`. However, the top_k filtering happens before this step so duplicate answers can remain. ### Expected behavior Ideally, the mapping from token to word should happen at around [this point](https://github.com/huggingface/transformers/blob/37c205eb5d5165b70d3100b599a2bcfc483944f5/src/transformers/pipelines/question_answering.py#L139). You would have a start and end probability for each word. If there are multiple tokens in a word, their probabilities should be summed. This would make the probabilities more correct because every token in the word would affect the probability of selecting the word. If this is too slow, there should at least be a check for duplicates somewhere [here](https://github.com/huggingface/transformers/blob/37c205eb5d5165b70d3100b599a2bcfc483944f5/src/transformers/pipelines/question_answering.py#L604). This would mean that you are not guaranteed to get k answers when setting `top_k`, but only that you get at most k answers. A way to mitigate that somewhat (but not perfectly), would be to use a higher value than top_k when calling `select_starts_ends` [here](https://github.com/huggingface/transformers/blob/37c205eb5d5165b70d3100b599a2bcfc483944f5/src/transformers/pipelines/question_answering.py#L546-L548).
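Until the pipeline filters duplicates itself, a small post-processing step along these lines works as a stop-gap (the answer dicts follow the `score`/`start`/`end`/`answer` format shown in the reproduction above; the helper name is illustrative):

```python
def dedupe_answers(answers):
    # Keep only the first answer seen for each (start, end) character span;
    # the pipeline returns answers sorted by score, so the best one wins.
    seen = set()
    unique = []
    for answer in answers:
        key = (answer["start"], answer["end"])
        if key not in seen:
            seen.add(key)
            unique.append(answer)
    return unique

answers = [
    {"score": 0.99, "start": 0, "end": 13, "answer": "Angela Merkel"},
    {"score": 7.5e-05, "start": 0, "end": 13, "answer": "Angela Merkel"},
]
print(dedupe_answers(answers))  # only the first entry remains
```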
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26286/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26285
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26285/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26285/comments
https://api.github.com/repos/huggingface/transformers/issues/26285/events
https://github.com/huggingface/transformers/pull/26285
1,904,642,702
PR_kwDOCUB6oc5axBWE
26,285
Update tiny model information and pipeline tests
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@sanchit-gandhi \r\n\r\nThe `tests/models/vits/test_modeling_vits.py::VitsModelTest::test_pipeline_text_to_audio` fails (in this PR) with `TypeError: forward() got an unexpected keyword argument 'num_return_sequences'`. You can see it [here](https://app.circleci.com/pipelines/github/huggingface/transformers/73367/workflows/4b015711-c595-40be-a0c6-cdacde0f69d1/jobs/927702) or the artifact tab.\r\n\r\nI am not sure if `ViTsModel` should be in `MODEL_FOR_TEXT_TO_WAVEFORM_MAPPING_NAMES` (and therefore used by `TextToAudioPipelineTests` where `generate` is used.\r\n\r\nCould you check this to see if a fix is required?\r\n\r\nYou can checkout this branch `update_tiny` (if you use HF repo. locally) and run it\r\n\r\n```python\r\npython3 -m pytest -v tests/models/vits/test_modeling_vits.py::VitsModelTest::test_pipeline_text_to_audio\r\n```\r\n\r\n-------------------------------------------------------------------------------\r\n\r\nThe full error log\r\n\r\n```bash\r\nself = <tests.models.vits.test_modeling_vits.VitsModelTest testMethod=test_pipeline_text_to_audio>\r\n\r\n @is_pipeline_test\r\n @require_torch\r\n def test_pipeline_text_to_audio(self):\r\n> self.run_task_tests(task=\"text-to-audio\")\r\n\r\ntests/test_pipeline_mixin.py:413: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\ntests/test_pipeline_mixin.py:171: in run_task_tests\r\n self.run_model_pipeline_tests(\r\ntests/test_pipeline_mixin.py:209: in run_model_pipeline_tests\r\n self.run_pipeline_test(task, repo_name, model_architecture, tokenizer_name, processor_name, commit)\r\ntests/test_pipeline_mixin.py:298: in run_pipeline_test\r\n task_test.run_pipeline_test(pipeline, examples)\r\ntests/pipelines/test_pipelines_text_to_audio.py:185: in run_pipeline_test\r\n outputs = speech_generator([\"This is great !\", \"Something else\"], forward_params=forward_params)\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/text_to_audio.py:138: in __call__\r\n return super().__call__(text_inputs, **forward_params)\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/base.py:1121: in __call__\r\n outputs = list(final_iterator)\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py:124: in __next__\r\n item = next(self.iterator)\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py:125: in __next__\r\n processed = self.infer(item, **self.params)\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/base.py:1046: in forward\r\n model_outputs = self._forward(model_inputs, **forward_params)\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/transformers/pipelines/text_to_audio.py:114: in _forward\r\n output = self.model(**model_inputs, **kwargs)[0]\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nself = VitsModel(\r\n (text_encoder): VitsTextEncoder(\r\n (embed_tokens): Embedding(38, 16)\r\n (encoder): VitsEncoder(\r\n ... (dropout): Dropout(p=0.0, inplace=False)\r\n )\r\n (conv_proj): Conv1d(16, 32, kernel_size=(1,), stride=(1,))\r\n )\r\n)\r\nargs = ()\r\nkwargs = {'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,\r\n 1, 1, 1]]... 
8, 0, 19, 0, 18, 0, 8, 0, 19, 0, 37,\r\n 0, 25, 0, 7, 0, 26, 0, 33, 0]]), 'num_return_sequences': 2}\r\nforward_call = <bound method VitsModel.forward of VitsModel(\r\n (text_encoder): VitsTextEncoder(\r\n (embed_tokens): Embedding(38, 16)... (dropout): Dropout(p=0.0, inplace=False)\r\n )\r\n (conv_proj): Conv1d(16, 32, kernel_size=(1,), stride=(1,))\r\n )\r\n)>\r\n\r\n def _call_impl(self, *args, **kwargs):\r\n forward_call = (self._slow_forward if torch._C._get_tracing_state() else self.forward)\r\n # If we don't have any hooks, we want to skip the rest of the logic in\r\n # this function, and just call forward.\r\n if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n or _global_backward_pre_hooks or _global_backward_hooks\r\n or _global_forward_hooks or _global_forward_pre_hooks):\r\n> return forward_call(*args, **kwargs)\r\nE TypeError: forward() got an unexpected keyword argument 'num_return_sequences'\r\n\r\n../.pyenv/versions/3.8.12/lib/python3.8/site-packages/torch/nn/modules/module.py:1501: TypeError\r\n```\r\n\r\n", "Hey @ydshieh! The VITS model is registered under the correct mapping, and the pipeline class is correct in that it only calls `.generate` if the model is an auto-regressive model: https://github.com/huggingface/transformers/blob/9a30753485653697c7db79e12b0cb2b8872c94c6/src/transformers/pipelines/text_to_audio.py#L111-L114\r\n\r\nThe problem is in the testing code, namely that `num_return_sequences` and `do_sample` are always passed to the `forward_params` of the pipeline: https://github.com/huggingface/transformers/blob/9a30753485653697c7db79e12b0cb2b8872c94c6/tests/pipelines/test_pipelines_text_to_audio.py#L184\r\n\r\nWe can set the `forward_params` in the test based on whether the model can generate:\r\n```python\r\nforward_params = {\"num_return_sequences\": 2, \"do_sample\": True} if speech_generator.model.can_generate() else {}\r\n```\r\n\r\ncc @ylacombe as well for info", "@sanchit-gandhi Thank you for the information!" ]
1,695
1,695
1,695
COLLABORATOR
null
# What does this PR do? Update tiny model information and pipeline tests
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26285/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26285", "html_url": "https://github.com/huggingface/transformers/pull/26285", "diff_url": "https://github.com/huggingface/transformers/pull/26285.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26285.patch", "merged_at": 1695658093000 }
https://api.github.com/repos/huggingface/transformers/issues/26284
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26284/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26284/comments
https://api.github.com/repos/huggingface/transformers/issues/26284/events
https://github.com/huggingface/transformers/issues/26284
1,904,577,877
I_kwDOCUB6oc5xhY1V
26,284
GPU memory isn't freed while using trainer. GPU runs out of memory and throws OOM
{ "login": "Datta0", "id": 39181234, "node_id": "MDQ6VXNlcjM5MTgxMjM0", "avatar_url": "https://avatars.githubusercontent.com/u/39181234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Datta0", "html_url": "https://github.com/Datta0", "followers_url": "https://api.github.com/users/Datta0/followers", "following_url": "https://api.github.com/users/Datta0/following{/other_user}", "gists_url": "https://api.github.com/users/Datta0/gists{/gist_id}", "starred_url": "https://api.github.com/users/Datta0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Datta0/subscriptions", "organizations_url": "https://api.github.com/users/Datta0/orgs", "repos_url": "https://api.github.com/users/Datta0/repos", "events_url": "https://api.github.com/users/Datta0/events{/privacy}", "received_events_url": "https://api.github.com/users/Datta0/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false }
[ { "login": "muellerzr", "id": 7831895, "node_id": "MDQ6VXNlcjc4MzE4OTU=", "avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerzr", "html_url": "https://github.com/muellerzr", "followers_url": "https://api.github.com/users/muellerzr/followers", "following_url": "https://api.github.com/users/muellerzr/following{/other_user}", "gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions", "organizations_url": "https://api.github.com/users/muellerzr/orgs", "repos_url": "https://api.github.com/users/muellerzr/repos", "events_url": "https://api.github.com/users/muellerzr/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerzr/received_events", "type": "User", "site_admin": false } ]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Ah curious why this is closed. \r\n@muellerzr do I need to reopen this or is this fixed and hence closed?", "@Datta0 Is this solved? I am also curious about this result.", "@SangbumChoi I don't see the issue anymore. IDK what caused the issue. Also off late, I'm using [unsloth](https://github.com/unslothai/unsloth) for all my fine tunes. It is just couple of lines of change over HF code with tremendous improvements.", "@Datta0 Very interesting, also thanks for sharing the new repo!" ]
1,695
1,705
1,698
NONE
null
### System Info - OS : Ubuntu 22.04 - Cuda: 12.0 - Driver: 525.125.06 - Transformers Version: 4.33.2 - GPU: 1 x A100-40GB VRAM via PCIe ### Who can help? @muellerzr @pacman100 ( Trainer is what I'm looking at) GPU runs out of memory after a few steps ( generally around 180 to start with, then resume_from_checkpoint gets it to 250, then to 300.... and never goes past 500). Usage keeps on increasing every step. Model: [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) Loaded in BF16. Dataset: [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) Lora Config: Given below ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Finetune a **llama2-7b-hf** model using **A100 40GB** on the OpenOrca dataset using LoraConfig as follows. Here's my [notebook](https://colab.research.google.com/drive/10CdhOTWUbEFQCEgD5yflfXH-Jnnsa7wU?usp=sharing) for reference (no, I didn't run it on Colab, but I found Colab easier to share). ``` model = AutoModelForCausalLM.from_pretrained(model_name,torch_dtype = torch.bfloat16,device_map = 'auto', ) for param in model.parameters(): # Freezing original weights param.requires_grad = False lora_config = LoraConfig(r = 16, lora_alpha=64, target_modules=["q_proj","v_proj","o_proj","k_proj"], lora_dropout=0.1, bias = "none", task_type = "CAUSAL_LM" ) # Trainable params : 0.24% ~17 million ~ 68MB in FP32 trainer = Trainer( model = model, train_dataset = train_data, eval_dataset = eval_data, args = TrainingArguments(per_device_train_batch_size=2, gradient_accumulation_steps=4, learning_rate=2e-4, bf16=True, logging_steps=20, output_dir=output_dir, optim=f"paged_adamw_{opt_bits}bit", save_steps = 20, save_total_limit=3, disable_tqdm = False), data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False) ) model.config.use_cache = False trainer.train(resume_from_checkpoint=resume()) ``` ### Expected behavior GPU RAM should be freed once the execution of one batch/step is done, or at least when it is needed for the later batches. Note: If I explicitly add the following lines at the start of every step [here](https://github.com/huggingface/transformers/blob/37c205eb5d5165b70d3100b599a2bcfc483944f5/src/transformers/trainer.py#L1865) ``` gc.collect() trainer.accelerator.clear() torch.cuda.empty_cache() ``` it goes on for quite a lot more steps. GPU RAM usage goes down occasionally, making the finetune run longer, and it can be resumed later. Attaching the graph when I added "clear()" in transformers. <img width="1022" alt="image" src="https://github.com/huggingface/transformers/assets/39181234/20cd7c4f-1652-4f7c-afc6-7044207fb108">
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26284/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26283
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26283/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26283/comments
https://api.github.com/repos/huggingface/transformers/issues/26283/events
https://github.com/huggingface/transformers/issues/26283
1,904,562,620
I_kwDOCUB6oc5xhVG8
26,283
AutoModelForCausalLM.from_pretrained is killed by Loading checkpoint shards:
{ "login": "50516017", "id": 23068536, "node_id": "MDQ6VXNlcjIzMDY4NTM2", "avatar_url": "https://avatars.githubusercontent.com/u/23068536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/50516017", "html_url": "https://github.com/50516017", "followers_url": "https://api.github.com/users/50516017/followers", "following_url": "https://api.github.com/users/50516017/following{/other_user}", "gists_url": "https://api.github.com/users/50516017/gists{/gist_id}", "starred_url": "https://api.github.com/users/50516017/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/50516017/subscriptions", "organizations_url": "https://api.github.com/users/50516017/orgs", "repos_url": "https://api.github.com/users/50516017/repos", "events_url": "https://api.github.com/users/50516017/events{/privacy}", "received_events_url": "https://api.github.com/users/50516017/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Having similar lssue", "Hi @50516017 @jmanhype \r\nThis is because the first shard of the model is ~10GB, what I usually do is to push a smaller sharded version on the Hub.\r\nI have just pushed a version with smaller shards under: `ybelkada/japanese-novel-gpt-j-6b-sharded`: https://huggingface.co/ybelkada/japanese-novel-gpt-j-6b-sharded - can you try that out instead?\r\nIn the future, to create these repos you can run:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_id = \"AIBunCho/japanese-novel-gpt-j-6b\"\r\ntarget_model_id = \"yourusername/japanese-novel-gpt-j-6b-sharded\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True)\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\n\r\nmodel.push_to_hub(target_model_id, max_shard_size=\"2GB\")\r\ntokenizer.push_to_hub(target_model_id)\r\n```", "When this happened to me, I could fix it by just adding the arguments to load in half precision (`torch_dtype=torch.bfloat16`) and to initialize without duplicating the parameters in memory (`low_cpu_mem_usage=True`)." ]
1,695
1,698
1,695
NONE
null
### System Info `- `transformers` version: 4.34.0.dev0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3 - Accelerate version: 0.20.3 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>` ### Who can help? @younesbelkada,@ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I would like to fine tune AIBunCho/japanese-novel-gpt-j-6b using QLora. When I executed AutoModelForCausalLM.from_pretrained, it was killed by the python function and execution stopped. I was looking at the task manager and found that it was caused by CPU usage, but is it possible to load pretrained on the GPU? I have been able to fine tune other smaller models with Lora without any problems. code is ``` os.environ["CUDA_VISIBLE_DEVICES"] = "0" model_name = "AIBunCho/japanese-novel-gpt-j-6b" config = AutoConfig.from_pretrained(model_name,use_fast=False) bnb_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained( model_name, config=config, device_map="cuda", load_in_8bit=True, #quantization_config=bnb_config ) ``` ### Expected behavior ``` Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] files: {PosixPath('/usr/local/cuda/lib64/libcudart.so'), PosixPath('/usr/local/cuda/lib64/libcudart.so.11.0')}.. We'll flip a coin and try one of these, in order to fail forward. Either way, this might cause trouble in the future: If you get `CUDA error: invalid device function` errors, the above might be the cause and the solution is to make sure only one ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] in the paths that we search based on your env. warn(msg) CUDA exception! Error code: no CUDA-capable device is detected CUDA exception! Error code: initialization error CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so /home/shimizu/create_LLM/macin/work2/venv/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No GPU detected! Check your CUDA paths. Proceeding to load CPU-only library... warn(msg) CUDA SETUP: Detected CUDA version 118 CUDA SETUP: Loading binary /home/shimizu/create_LLM/macin/work2/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so... None Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]Killed ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26283/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26282
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26282/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26282/comments
https://api.github.com/repos/huggingface/transformers/issues/26282/events
https://github.com/huggingface/transformers/issues/26282
1,904,262,281
I_kwDOCUB6oc5xgLyJ
26,282
Why not use model_wrapped in trainer evaluation_loop?
{ "login": "tangzhiyi11", "id": 5955111, "node_id": "MDQ6VXNlcjU5NTUxMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/5955111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tangzhiyi11", "html_url": "https://github.com/tangzhiyi11", "followers_url": "https://api.github.com/users/tangzhiyi11/followers", "following_url": "https://api.github.com/users/tangzhiyi11/following{/other_user}", "gists_url": "https://api.github.com/users/tangzhiyi11/gists{/gist_id}", "starred_url": "https://api.github.com/users/tangzhiyi11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tangzhiyi11/subscriptions", "organizations_url": "https://api.github.com/users/tangzhiyi11/orgs", "repos_url": "https://api.github.com/users/tangzhiyi11/repos", "events_url": "https://api.github.com/users/tangzhiyi11/events{/privacy}", "received_events_url": "https://api.github.com/users/tangzhiyi11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "correcting the ping to @pacman100 ๐Ÿ˜‰ ", "> correcting the ping to @pacman100 ๐Ÿ˜‰\r\n\r\nthanks, i fix it ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
### System Info Transformers >= 4.30.2. ### Who can help? @muellerzr @pacman100 ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Why not use model_wrapped in trainer evaluation_loop? ![image](https://github.com/huggingface/transformers/assets/5955111/9aae5ff8-51d8-4953-ab1c-7b0e27af64d0) ### Expected behavior like #26281
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26282/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26281
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26281/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26281/comments
https://api.github.com/repos/huggingface/transformers/issues/26281/events
https://github.com/huggingface/transformers/pull/26281
1,904,261,749
PR_kwDOCUB6oc5avuod
26,281
use model_wrapped in evaluation_loop
{ "login": "tangzhiyi11", "id": 5955111, "node_id": "MDQ6VXNlcjU5NTUxMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/5955111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tangzhiyi11", "html_url": "https://github.com/tangzhiyi11", "followers_url": "https://api.github.com/users/tangzhiyi11/followers", "following_url": "https://api.github.com/users/tangzhiyi11/following{/other_user}", "gists_url": "https://api.github.com/users/tangzhiyi11/gists{/gist_id}", "starred_url": "https://api.github.com/users/tangzhiyi11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tangzhiyi11/subscriptions", "organizations_url": "https://api.github.com/users/tangzhiyi11/orgs", "repos_url": "https://api.github.com/users/tangzhiyi11/repos", "events_url": "https://api.github.com/users/tangzhiyi11/events{/privacy}", "received_events_url": "https://api.github.com/users/tangzhiyi11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> use model_wrapped in evaluation_loop #26282 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26281/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26281", "html_url": "https://github.com/huggingface/transformers/pull/26281", "diff_url": "https://github.com/huggingface/transformers/pull/26281.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26281.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26280
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26280/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26280/comments
https://api.github.com/repos/huggingface/transformers/issues/26280/events
https://github.com/huggingface/transformers/issues/26280
1,904,221,270
I_kwDOCUB6oc5xgBxW
26,280
stopping criteria for TextGenerationPipeline
{ "login": "geronimi73", "id": 141400217, "node_id": "U_kgDOCG2YmQ", "avatar_url": "https://avatars.githubusercontent.com/u/141400217?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geronimi73", "html_url": "https://github.com/geronimi73", "followers_url": "https://api.github.com/users/geronimi73/followers", "following_url": "https://api.github.com/users/geronimi73/following{/other_user}", "gists_url": "https://api.github.com/users/geronimi73/gists{/gist_id}", "starred_url": "https://api.github.com/users/geronimi73/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geronimi73/subscriptions", "organizations_url": "https://api.github.com/users/geronimi73/orgs", "repos_url": "https://api.github.com/users/geronimi73/repos", "events_url": "https://api.github.com/users/geronimi73/events{/privacy}", "received_events_url": "https://api.github.com/users/geronimi73/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! You can pass `generation_kwargs` to the pipeline, which are usually used for stopping criteria such as `max_length, max_new_tokens, max_time` . You can also pass some `stopping_criteria` argument to the generate function using `generation_kwargs = {stopping_criteria = StoppingCriteriaList: [MaxTimeCriteria(32)] }.\r\n\r\n```python\r\nfrom transformers import pipeline, StoppingCriteriaList, MaxTimeCriteria\r\n\r\n# Initialize the text generation pipeline\r\ngenerator = pipeline(\"text-generation\")\r\n\r\n# Define the stopping criteria using MaxTimeCriteria\r\nstopping_criteria = StoppingCriteriaList([MaxTimeCriteria(32)])\r\n\r\n# Define the generation_kwargs with stopping criteria\r\ngeneration_kwargs = {\r\n \"max_length\": 100, # Maximum length of the generated text\r\n \"max_new_tokens\": 10, # Maximum number of new tokens to generate\r\n \"generation_kwargs\": {\"stopping_criteria\": stopping_criteria} # Add stopping criteria to generation_kwargs\r\n}\r\n\r\n# Pass the generation_kwargs to the pipeline\r\ngenerated_text = generator(\r\n \"Hey! How are you able.\",\r\n **generation_kwargs\r\n)\r\n\r\n# Print the generated text\r\nprint(generated_text[0][\"generated_text\"])\r\n>>> Hey! How are you able. Do you have a job or do you have\r\n```", "This should probably be added to the documentation! ", "cc @MKhalusova who is currently working on the `generate` docs!", "The `max_length` and `max_new_tokens` are mentioned in nearly every doc on text generation:\r\n- https://huggingface.co/docs/transformers/llm_tutorial#generated-output-is-too-shortlong\r\n- https://huggingface.co/docs/transformers/generation_strategies#customize-text-generation\r\n- https://huggingface.co/docs/transformers/main_classes/text_generation\r\n- upcoming [LLM prompting doc](https://github.com/huggingface/transformers/pull/26274) as well\r\n\r\nHowever, I see a couple of issues: \r\n- the `MaxTimeCriteria` is much less discoverable, as it is only mentioned in the [Utilities for Generation](https://huggingface.co/docs/transformers/internal/generation_utils#transformers.StoppingCriteria). \r\n- A more critical issue from my point of view is the lack of connection between the pipeline documentation and the text generation docs. The [`TextGenerationPipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TextGenerationPipeline) API doc lists parameters that control how pipeline is instantiated, but doesn't link to text generation docs. \r\n\r\nI can address these. \r\n", "Is this a duplicate of #17562?\r\n\r\nIf so, it seems the only reason that older issue is still open is because of the missing documentation. If the documentation is fixed then both can be closed." ]
1,695
1,696
1,696
NONE
null
### Feature request Allow passing a stopping criteria or a stop string to TextGenerationPipeline ### Motivation It does not exist (at least, I have not found any way to do it), but it would be _very_ useful ### Your contribution none
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26280/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26279
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26279/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26279/comments
https://api.github.com/repos/huggingface/transformers/issues/26279/events
https://github.com/huggingface/transformers/pull/26279
1,904,206,123
PR_kwDOCUB6oc5aviho
26,279
Remove redundant code
{ "login": "hzhiyuan", "id": 41106865, "node_id": "MDQ6VXNlcjQxMTA2ODY1", "avatar_url": "https://avatars.githubusercontent.com/u/41106865?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hzhiyuan", "html_url": "https://github.com/hzhiyuan", "followers_url": "https://api.github.com/users/hzhiyuan/followers", "following_url": "https://api.github.com/users/hzhiyuan/following{/other_user}", "gists_url": "https://api.github.com/users/hzhiyuan/gists{/gist_id}", "starred_url": "https://api.github.com/users/hzhiyuan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hzhiyuan/subscriptions", "organizations_url": "https://api.github.com/users/hzhiyuan/orgs", "repos_url": "https://api.github.com/users/hzhiyuan/repos", "events_url": "https://api.github.com/users/hzhiyuan/events{/privacy}", "received_events_url": "https://api.github.com/users/hzhiyuan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,695
1,695
1,695
NONE
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Remove redundant code: The variable "has_default_max_length" will always be "False" for the 2 lines of code: 1. model_kwargs = generation_config.update(**kwargs) 2. has_default_max_length = kwargs.get("max_length") is None and generation_config.max_length is not None So I remove the code related to it. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @gante <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26279/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26279", "html_url": "https://github.com/huggingface/transformers/pull/26279", "diff_url": "https://github.com/huggingface/transformers/pull/26279.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26279.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26278
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26278/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26278/comments
https://api.github.com/repos/huggingface/transformers/issues/26278/events
https://github.com/huggingface/transformers/pull/26278
1,904,166,112
PR_kwDOCUB6oc5avZpK
26,278
fix name error when accelerate is not available
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Merge now to prevent further failure on main", "You should also link the PR that was failing, #26180 if I am not mistaken" ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? 1. Fixes the non-torch test failures `ERROR tests/fsdp/test_fsdp.py - NameError: name 'require_fsdp_version' is not defined`
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26278/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26278/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26278", "html_url": "https://github.com/huggingface/transformers/pull/26278", "diff_url": "https://github.com/huggingface/transformers/pull/26278.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26278.patch", "merged_at": 1695189776000 }
https://api.github.com/repos/huggingface/transformers/issues/26277
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26277/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26277/comments
https://api.github.com/repos/huggingface/transformers/issues/26277/events
https://github.com/huggingface/transformers/pull/26277
1,903,954,268
PR_kwDOCUB6oc5aurX6
26,277
Update bros checkpoint
{ "login": "jinhopark8345", "id": 60179569, "node_id": "MDQ6VXNlcjYwMTc5NTY5", "avatar_url": "https://avatars.githubusercontent.com/u/60179569?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jinhopark8345", "html_url": "https://github.com/jinhopark8345", "followers_url": "https://api.github.com/users/jinhopark8345/followers", "following_url": "https://api.github.com/users/jinhopark8345/following{/other_user}", "gists_url": "https://api.github.com/users/jinhopark8345/gists{/gist_id}", "starred_url": "https://api.github.com/users/jinhopark8345/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jinhopark8345/subscriptions", "organizations_url": "https://api.github.com/users/jinhopark8345/orgs", "repos_url": "https://api.github.com/users/jinhopark8345/repos", "events_url": "https://api.github.com/users/jinhopark8345/events{/privacy}", "received_events_url": "https://api.github.com/users/jinhopark8345/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26277). All of your documentation changes will be reflected on that endpoint.", "Is there a reason why we did not go with creating new repos with `naver-clova-ocr/bros-base-uncased-hf` instead of a custom repo? ", "> Is there a reason why we did not go with creating new repos with `naver-clova-ocr/bros-base-uncased-hf` instead of a custom repo?\r\n\r\nNo reason at all. I also considered that it would be more reliable to use checkpoint that huggingface controls.", "I'll ask for the models to be moved if that is okay with you? \r\n", "I am not sure if @jinhopark8345 can move repo to `naver-clova-ocr`. Even for us, we will need `naver-clova-ocr` to approve I think.\r\n\r\nSorry merged this PR without waiting core maintainers' approval.", "@ArthurZucker I can not move the repo to `naver-clova-ocr` as @ydshieh assumed. And I am sorry that I didn't mention about my concern earlier, @ydshieh. ", "I'll ask for the checkpoints to be move then ๐Ÿ˜‰ " ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Update [Bros](https://arxiv.org/abs/2108.04539) checkpoint. The `naver-clova-ocr/bros-base-uncased` checkpoint has `bbox_projection` layer in `BrosEmbeddings` class but this layer is moved to `BrosBboxEmbeddings` class. And users are expected to use `jinho8345/bros-base-uncased` checkpoint instead of `naver-clova-ocr/bros-base-uncased`. * `naver-clova-ocr/bros-base-uncased` : original pretrained checkpoint from naver-clova-ocr * `jinho8345/bros-base-uncased` : weights renamed version from `naver-clova-ocr/bros-base-uncased` using [conversion script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bros/convert_bros_to_pytorch.py) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - https://github.com/huggingface/transformers/pull/23190#issuecomment-1725882060 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ydshieh <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26277/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26277/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26277", "html_url": "https://github.com/huggingface/transformers/pull/26277", "diff_url": "https://github.com/huggingface/transformers/pull/26277.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26277.patch", "merged_at": 1695198127000 }
https://api.github.com/repos/huggingface/transformers/issues/26276
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26276/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26276/comments
https://api.github.com/repos/huggingface/transformers/issues/26276/events
https://github.com/huggingface/transformers/issues/26276
1,903,710,838
I_kwDOCUB6oc5xeFJ2
26,276
Transformers breaks when running setup.py
{ "login": "AdamLouly", "id": 27873459, "node_id": "MDQ6VXNlcjI3ODczNDU5", "avatar_url": "https://avatars.githubusercontent.com/u/27873459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AdamLouly", "html_url": "https://github.com/AdamLouly", "followers_url": "https://api.github.com/users/AdamLouly/followers", "following_url": "https://api.github.com/users/AdamLouly/following{/other_user}", "gists_url": "https://api.github.com/users/AdamLouly/gists{/gist_id}", "starred_url": "https://api.github.com/users/AdamLouly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AdamLouly/subscriptions", "organizations_url": "https://api.github.com/users/AdamLouly/orgs", "repos_url": "https://api.github.com/users/AdamLouly/repos", "events_url": "https://api.github.com/users/AdamLouly/events{/privacy}", "received_events_url": "https://api.github.com/users/AdamLouly/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "A quick fix: update `huggingface_hub` . On our side we should update the dependency requirement. Thanks for reporting! ", "@ArthurZucker have you tested re installing Transformers after this update?\r\nI tested and its still failing.\r\n\r\nhuggingface-hub>=0.16.4,<1.0 won't solve the issue: \r\nhuggingface_hub<0.17,>=0.16.4 is required by {'tokenizers'}\r\n\r\nbecause its installing huggingface_hub 0.17.4 and that will break it.\r\n", "I think tokenizers has to be updated at this point, since it's not using the latest dependency. cc @Narsil any reason why we limited to HF hub 0.17? ", "@ArthurZucker Any updates on this?", "Yep, this [PR](https://github.com/huggingface/tokenizers/pull/1344) was merged in tokenizers, we just need a release and this will be adressed" ]
1,695
1,696
1,695
CONTRIBUTOR
null
### System Info Running `python setup.py install` in the transformers folder throws this error: `error: huggingface-hub 0.17.2 is installed but huggingface_hub<0.17,>=0.16.4 is required by {'tokenizers'}` This started happening after merging the PR where line 175 was changed from "tokenizers>=0.11.1,!=0.11.3,<0.14", to "tokenizers>=0.14,<0.15", (https://github.com/huggingface/transformers/issues/23909). When I change it back manually, it works fine. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Running `python setup.py install` in the transformers folder ### Expected behavior error: huggingface-hub 0.17.2 is installed but huggingface_hub<0.17,>=0.16.4 is required by {'tokenizers'}
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26276/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26275
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26275/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26275/comments
https://api.github.com/repos/huggingface/transformers/issues/26275/events
https://github.com/huggingface/transformers/issues/26275
1,903,674,253
I_kwDOCUB6oc5xd8ON
26,275
Memory leak when acquiring CLIP embeddings
{ "login": "timgianitsos", "id": 14189758, "node_id": "MDQ6VXNlcjE0MTg5NzU4", "avatar_url": "https://avatars.githubusercontent.com/u/14189758?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timgianitsos", "html_url": "https://github.com/timgianitsos", "followers_url": "https://api.github.com/users/timgianitsos/followers", "following_url": "https://api.github.com/users/timgianitsos/following{/other_user}", "gists_url": "https://api.github.com/users/timgianitsos/gists{/gist_id}", "starred_url": "https://api.github.com/users/timgianitsos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timgianitsos/subscriptions", "organizations_url": "https://api.github.com/users/timgianitsos/orgs", "repos_url": "https://api.github.com/users/timgianitsos/repos", "events_url": "https://api.github.com/users/timgianitsos/events{/privacy}", "received_events_url": "https://api.github.com/users/timgianitsos/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "This does not seem like a memory leak, you're not using `torch.no_grad()`, which means that with every forward pass you're stacking up more gradients into your memory.", "Thanks @NielsRogge!\r\n\r\nWhy doesn't memory get reclaimed between multiple invocations to `get_embeddings`? Shouldn't it be garbage collected since the model is initialized within the function?", "It seems that you're doing multiple forward passes within your function, so gradients get accumulated for all batches.\r\n\r\nAre you only using the model for inference to get embeddings? If yes then you can use the `@torch.no_grad()` annotator above your function. It will make sure no gradients are accumulated.", "Yes I was able to use `@torch.inference_mode()` to fix the issue. Much appreciated!\r\n\r\nI'm just curious how it is possible for memory to not get deallocated between function calls.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "There weren't any global variables, so the reference to the model should disappear, and the object itself should be deallocated. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "To clarify, I replaced\r\n```python\r\nfor batch in tqdm(loader, desc=f'Loading image batches of size {batch_size}'):\r\n embeddings.append(encoder(**processor(images=batch, return_tensors='pt')).image_embeds)\r\n gc.collect()\r\n torch.cuda.empty_cache()\r\n gc.collect()\r\n```\r\nwith\r\n```python\r\nwith torch.inference_mode():\r\n for batch in tqdm(loader, desc=f'Loading image batches of size {batch_size}'):\r\n embeddings.append(encoder(**processor(images=batch, return_tensors='pt')).image_embeds)\r\n```\r\nand still got a memory leak. [EDIT I meant to say \"and *stopped* having a memory leak\"]", "> There weren't any global variables, so the reference to the model should disappear, and the object itself should be deallocated.\r\n\r\nIn the example script provided, `encoder` and `embeddings` are still being referenced so `gc.collect()` and `torch.cuda.empty_cache` won't do anything in the for-loop. Using `torch.no_grad` works because no gradients are when doing the forward pass of the model and, as a result, the `embedding` array being generated at each step of the for-loop is smaller. ", "Yes, using `torch.no_grad` and `torch.inference_mode` would indeed prevent the backward graph and intermediate tensors from being created in memory in the first place. But to address my original inquiry about why *if* they are created they *continue* to persist outside of the function scope, I think I found the answer after reviewing this with fresh eyes - I think it's because the `embeddings.grad_fn` attribute is a lingering reference to a closure that has the whole computational backward graph. Using `embeddings.detach()` on that last return statement should likely avoid the leak because it evaluates to a tensor with the same underlying data but without `.grad_fn`." ]
1,695
1,700
1,700
NONE
null
### System Info - `transformers` version: 4.31.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.17 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction This function is called twice, both times going into the `else` block. ```python def get_embeddings(data_partition_name, version): load_file = Path(f'{data_partition_name.lower()}-embeddings.pt') if load_file.exists(): print(f'Loading "{load_file}"...') pt = torch.load(load_file) embeddings = pt['embeddings'] classes = pt['classes'] else: encoder = CLIPVisionModelWithProjection.from_pretrained('openai/clip-vit-base-patch32') processor = AutoProcessor.from_pretrained('openai/clip-vit-base-patch32') data = pd.read_csv(f'{data_partition_name.lower()}-swings.csv') data_dir = Path('data') print(f'Generating embeddings for {data_partition_name} data...') extension = '*.jpeg' filenames = sorted((data_dir / data_partition_name).glob(extension)) batch_size = 30 loader = DataLoader( ImageDataset(filenames), batch_size=batch_size, num_workers=1, shuffle=False, ) embeddings = [] for batch in tqdm(loader, desc=f'Loading image batches of size {batch_size}'): embeddings.append(encoder(**processor(images=batch, return_tensors='pt')).image_embeds) gc.collect() torch.cuda.empty_cache() gc.collect() embeddings = torch.cat(embeddings) classes = torch.cat([ torch.tensor(data[data["name"] == jpg.name][version].values) for jpg in filenames ]) torch.save({'embeddings': embeddings, 'classes': classes}, load_file) return embeddings, classes ``` ### Expected behavior There seems to be a memory leak. I have about 2000 jpeg images which takes up more the 1 GB of space, and I am passing them to a HuggingFace model to produce embeddings. The problem is that the program crashes because of memory issues. Upon investigation with `htop`, it seems that the `Swp` usage keeps climbing higher and higher until it surpasses around 50GB in which case it crashes with `Killed: 9` and _no_ stack trace. This happens despite the fact that the memory utilization is not full. Why is my program allocating so much from swap? Is this a bug in HuggingFace? The function is called twice, once for train and once for test. - No memory/swp is reclaimed after `get_embeddings('train')` is run. That is, even when `get_embeddings('test')` is run, the swap remains very high at 40+GB of swap used, and then keeps climbing. - I am loading in batches with a PyTorch `DataLoader`, but the bug persists whether or not I use `DataLoader`. - The bug persists whether or not I use `pandas` to read in the csv file. - The bug persists whether or not I do the garbage collection step - I am not using a GPU The dataset and [csv files](https://github.com/zapan4/BadmintonProject/blob/master/train-swings.csv) are in a private repo currently, but they accord straightforwardly with the description.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26275/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26274
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26274/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26274/comments
https://api.github.com/repos/huggingface/transformers/issues/26274/events
https://github.com/huggingface/transformers/pull/26274
1,903,648,088
PR_kwDOCUB6oc5atpeS
26,274
[docs] LLM prompting guide
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "The first draft of the LLM prompting guide is ready for review, let me know if anything major is missing. cc @patrickvonplaten ", "Feel free to merge when satisfied with it!", "@LysandreJik I'm happy with it, but I think we should wait for @gante to review it once he's back from vacation. ", "Gently pinging @gante for a review :)" ]
1,695
1,697
1,697
CONTRIBUTOR
null
# What does this PR do? This PR addresses part 2.2 ("Prompting") of the issue [#24575](https://github.com/huggingface/transformers/issues/24575). It adds an LLM Prompting Guide to the docs that covers the following topics: * basics of prompting, * encoder-decoder models vs decoder-only models, * base vs instruct models, * basic prompts to solve common NLP tasks, * best practices for prompting, * advanced techniques like few-shot learning and chain-of-thought, * prompting vs fine-tuning Let me know if there's anything missing that has to be included.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26274/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26274", "html_url": "https://github.com/huggingface/transformers/pull/26274", "diff_url": "https://github.com/huggingface/transformers/pull/26274.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26274.patch", "merged_at": 1697114882000 }
https://api.github.com/repos/huggingface/transformers/issues/26273
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26273/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26273/comments
https://api.github.com/repos/huggingface/transformers/issues/26273/events
https://github.com/huggingface/transformers/issues/26273
1,903,568,794
I_kwDOCUB6oc5xdiea
26,273
Llama 2 tokenizer: apparition of the token id 29871
{ "login": "piegu", "id": 20000948, "node_id": "MDQ6VXNlcjIwMDAwOTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/20000948?v=4", "gravatar_id": "", "url": "https://api.github.com/users/piegu", "html_url": "https://github.com/piegu", "followers_url": "https://api.github.com/users/piegu/followers", "following_url": "https://api.github.com/users/piegu/following{/other_user}", "gists_url": "https://api.github.com/users/piegu/gists{/gist_id}", "starred_url": "https://api.github.com/users/piegu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/piegu/subscriptions", "organizations_url": "https://api.github.com/users/piegu/orgs", "repos_url": "https://api.github.com/users/piegu/repos", "events_url": "https://api.github.com/users/piegu/events{/privacy}", "received_events_url": "https://api.github.com/users/piegu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, could you try this on `transformers == 4.33`? Pretty sure the fixes to Llama have been merged", "Same (wrong) result with `transformers == 4.33`.", "Okay, this is actually expected, `29871` is the `SPIECE_UNDERLINE` token. If you encode each prompt individually, you are adding an underline to the prompt, then adding the special token. If you encode everything concatenated, you add the prefix token to the first token only. \r\n", "Hello! Same issue here. @ArthurZucker, can you clarify your comment? What is `SPIECE_UNDERLINE`?\r\n\r\n> you are adding an underline to the prompt\r\n\r\nDo you mean `tokenizer()` call does this automatically (if @piegu is explicitly doing this, I've missed it)? If so, I would at least expect `add_special_tokens==False` to fix this but it does not:\r\n\r\n```python3\r\n>>> tkzr = AutoTokenizer.from_pretrained(\"./llama-2\")\r\n>>> tkzr.encode(\"\\nhi\")\r\n[1, 29871, 13, 2918]\r\n>>> tkzr.encode(\"\\nhi\", add_special_tokens=False)\r\n[29871, 13, 2918]\r\n>>> tkzr.decode([29871, 13, 2918])\r\n'\\nhi'\r\n>>> tkzr.decode([13, 2918])\r\n'\\nhi'\r\n>>> tkzr.decode([29871])\r\n''\r\n```", "No the argument you might be looking for is `add_prefix_space` which we did not include for llama. The `SPIECE_UNDERLINE` is the prefix space added by sentencepiece. We can't de activate this easily but can add more support for this", "Got it, I'm following this now. Thanks!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,699
1,699
CONTRIBUTOR
null
### System Info transformers==4.31.0 meta-llama/Llama-2-7b-hf ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python # from https://github.com/philschmid/sagemaker-huggingface-llama-2-samples/blob/master/training/sagemaker-notebook.ipynb !pip install "transformers==4.31.0" YOUR_TOKEN = "hf_xxxxx" !huggingface-cli login --token $YOUR_TOKEN from transformers import AutoTokenizer # get tokenizer model_id = "meta-llama/Llama-2-7b-hf" tokenizer = AutoTokenizer.from_pretrained(model_id) tokenizer.pad_token = tokenizer.eos_token instruction = "### Instruction\nRepeat the context." context = "\n\n### Context\n```\nI like Paris.\n```" answer = "\n\n### Answer\nI like Paris." prompt = instruction + context + answer tokens_prompt = tokenizer(prompt, return_tensors="pt").input_ids[0] tokens_instruction = tokenizer(instruction, return_tensors="pt").input_ids[0] tokens_context = tokenizer(context, return_tensors="pt").input_ids[0] tokens_answer = tokenizer(answer, return_tensors="pt").input_ids[0] # from each number of tokens, we take out 1 token that is the <s> token # then, we sum all tokens and expect to get the same number of tokens as in the prompt num_tokens_of_sum = (len(tokens_instruction) - 1) + (len(tokens_context) - 1) + (len(tokens_answer) - 1) # we compare the number of tokens of the prompt (without the <s> token) with the sum calculated before print((len(tokens_prompt) - 1) - num_tokens_of_sum) # we get -2, which is wrong: it means there are 2 tokens in the sum of tokens that were not in the prompt tokens_prompt # tensor([ 1, 835, 2799, 4080, 13, 1123, 11666, 278, 3030, 29889, # 13, 13, 2277, 29937, 15228, 13, 28956, 13, 29902, 763, # 3681, 29889, 13, 28956, 13, 13, 2277, 29937, 673, 13, # 29902, 763, 3681, 29889]) tokens_instruction # tensor([ 1, 835, 2799, 4080, 13, 1123, 11666, 278, 3030, 29889]) tokens_context # tensor([ 1, 29871, 13, 13, 2277, 29937, 15228, 13, 28956, 13, # 29902, 763, 3681, 29889, 13, 28956]) tokens_answer # tensor([ 1, 29871, 13, 13, 2277, 29937, 673, 13, 29902, 763, # 3681, 29889]) # we can see that the token 29871 appears 2 times (tokens_context and tokens_answer) but it should not! ``` ### Expected behavior We expect that tokenizing each part of the prompt separately leads to the same result as tokenizing the full prompt, but it does not. Why does token ID 29871 appear?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26273/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26273/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26272
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26272/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26272/comments
https://api.github.com/repos/huggingface/transformers/issues/26272/events
https://github.com/huggingface/transformers/pull/26272
1,903,564,570
PR_kwDOCUB6oc5atXTd
26,272
[docs] LLM Prompting Guide
{ "login": "MKhalusova", "id": 1065417, "node_id": "MDQ6VXNlcjEwNjU0MTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1065417?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MKhalusova", "html_url": "https://github.com/MKhalusova", "followers_url": "https://api.github.com/users/MKhalusova/followers", "following_url": "https://api.github.com/users/MKhalusova/following{/other_user}", "gists_url": "https://api.github.com/users/MKhalusova/gists{/gist_id}", "starred_url": "https://api.github.com/users/MKhalusova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MKhalusova/subscriptions", "organizations_url": "https://api.github.com/users/MKhalusova/orgs", "repos_url": "https://api.github.com/users/MKhalusova/repos", "events_url": "https://api.github.com/users/MKhalusova/events{/privacy}", "received_events_url": "https://api.github.com/users/MKhalusova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? This PR adds an LLM Prompting Guide to the docs that covers the following topics: - basics of prompting, - encoder-decoder models vs decoder-only models, - base vs instruct models, - basic prompts to solve common NLP tasks, - best practices for prompting, - advanced techniques like few-shot learning and chain-of-thought - prompting vs fine-tuning Let me know if there's anything missing that should be included.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26272/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26272", "html_url": "https://github.com/huggingface/transformers/pull/26272", "diff_url": "https://github.com/huggingface/transformers/pull/26272.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26272.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26271
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26271/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26271/comments
https://api.github.com/repos/huggingface/transformers/issues/26271/events
https://github.com/huggingface/transformers/issues/26271
1,903,551,588
I_kwDOCUB6oc5xdeRk
26,271
Marian models broken with latest transformers
{ "login": "orendar", "id": 24236024, "node_id": "MDQ6VXNlcjI0MjM2MDI0", "avatar_url": "https://avatars.githubusercontent.com/u/24236024?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orendar", "html_url": "https://github.com/orendar", "followers_url": "https://api.github.com/users/orendar/followers", "following_url": "https://api.github.com/users/orendar/following{/other_user}", "gists_url": "https://api.github.com/users/orendar/gists{/gist_id}", "starred_url": "https://api.github.com/users/orendar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/orendar/subscriptions", "organizations_url": "https://api.github.com/users/orendar/orgs", "repos_url": "https://api.github.com/users/orendar/repos", "events_url": "https://api.github.com/users/orendar/events{/privacy}", "received_events_url": "https://api.github.com/users/orendar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, this is related to https://github.com/Helsinki-NLP/Tatoeba-Challenge/issues/35 and a duplicate of #26216. The conversion script was a bit faulty and models need a re-upload. I can try to open PRs for relevant models ", "That would be greatly appreciated, thank you! Right now I am running on transformers 4.28 but I am sure many others are interested in using the 1k+ translation models on latest version.", "I pushed a lot of PR on the hub, if you are still seeing this feel free to ping me. I think some of the big models were not updated yet, we'll see", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Hello, we have Marian-converted models that are experiencing this issue ([1] and [2]).\r\n\r\nCould anyone recommend a solution please?\r\n\r\nThese models are known to work with transformers versions greater than 4.26.1 up to 4.30.2, after which garbage is returned by both using the model's generate() method and the transformers.pipeline.\r\n\r\nMany thanks!\r\n\r\n[1]: https://huggingface.co/techiaith/mt-dspec-health-en-cy\r\n[2]: https://huggingface.co/techiaith/mt-dspec-health-en-cy\r\n\r\nEdit: Sorry, just found #26216 which seems to be the correct place for this!", "no worries, I think @LysandreJik is planning to finish updating the models that were missed in the great refactor!" ]
1,695
1,708
1,700
NONE
null
### System Info Transformers >= 4.31 (and maybe also older), including master. ### Who can help? @joaoga I noticed that you merged an update to Marian models. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Thank you for your great work :) I can confirm that pretrained Marian models are broken on transformers 4.31 and on master, and that they are working on 4.28. For reproduction see for example the following snippet, taken from Helsinki-NLP/opus-mt-tc-big-he-en model card - on 4.28 I get the expected output, on 4.31 and master I get random gibberish. I don't know if this also happens with other models and languages but I suspect so. ``` from transformers import MarianMTModel, MarianTokenizer import torch src_text = [ "היא שכחה לכתוב לו.", "אני רוצה לדעת מיד כשמשהו יקרה." ] model_name = "Helsinki-NLP/opus-mt-tc-big-he-en" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name, torch_dtype=torch.bfloat16).to('cuda') translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True).to('cuda')) for t in translated: print(tokenizer.decode(t, skip_special_tokens=True)) # expected output: # She forgot to write to him. # I want to know as soon as something happens. ``` ### Expected behavior See above
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26271/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26270
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26270/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26270/comments
https://api.github.com/repos/huggingface/transformers/issues/26270/events
https://github.com/huggingface/transformers/pull/26270
1,903,472,219
PR_kwDOCUB6oc5atDXj
26,270
[`PEFT`]ย introducing `adapter_kwargs` for loading adapters from different Hub location (`subfolder`, `revision`) than the base model
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think that I was not clear in the PR description, the PR is not a patch for Falcon but globally for users that want to load a base model from a specific revision and the adapter on the default revision (or another revision), which by luck occured for Falcon because of the patch introduced recently on main. \r\nCurrently on main it is not possible to load a base model from a revision, let's say `\"revision-1\"` and load the adapter from the default revision `\"main\"`, unless if users syncronises the revision between both repos - usually adapter weights are not stored in the same repository as the base model.\r\nLet me know if this is clearer and if this makes sense", "As seen offline, I would rather pack kwargs such as this one within a single `adapter_kwargs` dict to be consumed by a specific method, rather than adding a (potentially) large amount of kwargs as we add more functionality", "The PR should be ready for review! I have added a test, in the past when modifying those parts of the code it caused some issues [see here](https://huggingface.slack.com/archives/C01NE71C4F7/p1692864347980869) I made sure that test pass also with this PR." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? Currently on the main branch the script below fails: ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "andrewrreed/falcon-7b-guanaco-qlora-arr" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, load_in_4bit=True, ) print(model) ``` This is because for Falcon models the revision parameter gets overridden here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/falcon/modeling_falcon.py#L779 for BC and to make sure users will load transformers models. This introduced an error in the logic of loading adapters as the same `revision` argument was used all over the place. As this scenario might occur more often, I propose to introduce a new argument `adapter_revision` for users that want to load a base model and an adapter from different revisions. cc @ArthurZucker @LysandreJik
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26270/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26270", "html_url": "https://github.com/huggingface/transformers/pull/26270", "diff_url": "https://github.com/huggingface/transformers/pull/26270.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26270.patch", "merged_at": 1695892384000 }
https://api.github.com/repos/huggingface/transformers/issues/26269
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26269/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26269/comments
https://api.github.com/repos/huggingface/transformers/issues/26269/events
https://github.com/huggingface/transformers/issues/26269
1,903,460,074
I_kwDOCUB6oc5xdH7q
26,269
AugViT TensorFlow implementation
{ "login": "ushareng", "id": 34335028, "node_id": "MDQ6VXNlcjM0MzM1MDI4", "avatar_url": "https://avatars.githubusercontent.com/u/34335028?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ushareng", "html_url": "https://github.com/ushareng", "followers_url": "https://api.github.com/users/ushareng/followers", "following_url": "https://api.github.com/users/ushareng/following{/other_user}", "gists_url": "https://api.github.com/users/ushareng/gists{/gist_id}", "starred_url": "https://api.github.com/users/ushareng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ushareng/subscriptions", "organizations_url": "https://api.github.com/users/ushareng/orgs", "repos_url": "https://api.github.com/users/ushareng/repos", "events_url": "https://api.github.com/users/ushareng/events{/privacy}", "received_events_url": "https://api.github.com/users/ushareng/received_events", "type": "User", "site_admin": false }
[ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
open
false
null
[]
[]
1,695
1,695
null
NONE
null
### Model description I would like to contribute the AugViT TensorFlow implementation to the Transformers library. You can find the implementation in TensorFlow here: https://github.com/ushareng/AugViT I have created the model card here: https://huggingface.co/tensorgirl/TFaugvit/tree/main Kindly let me know if the above is correct. ### Open source status - [X] The model implementation is available - [x] The model weights are available ### Provide useful links for the implementation TensorFlow implementation of AugViT https://github.com/ushareng/AugViT
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26269/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26269/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26268
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26268/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26268/comments
https://api.github.com/repos/huggingface/transformers/issues/26268/events
https://github.com/huggingface/transformers/pull/26268
1,903,398,592
PR_kwDOCUB6oc5aszpA
26,268
[DO NOT MERGE] Test docker + FA-2
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing as FA2 is currently not supported on T4 GPUs (according to FA2 readme). As we use T4 GPUs in our workflow for slow tests we cannot test https://github.com/huggingface/transformers/pull/25598 automatically for now. \r\nI will test on an A100 if a PR touches a core component of FA2 modules manually", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26268). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? Simply tests if we can successfully build FA2 on docker Addresses: https://github.com/huggingface/transformers/pull/25598/files#r1324800667
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26268/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26268", "html_url": "https://github.com/huggingface/transformers/pull/26268", "diff_url": "https://github.com/huggingface/transformers/pull/26268.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26268.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26267
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26267/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26267/comments
https://api.github.com/repos/huggingface/transformers/issues/26267/events
https://github.com/huggingface/transformers/issues/26267
1,903,340,677
I_kwDOCUB6oc5xcqyF
26,267
ImportError: cannot import name 'DeciDiffusionForImageGeneration' from 'transformers'
{ "login": "me-suzy", "id": 2770489, "node_id": "MDQ6VXNlcjI3NzA0ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/2770489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/me-suzy", "html_url": "https://github.com/me-suzy", "followers_url": "https://api.github.com/users/me-suzy/followers", "following_url": "https://api.github.com/users/me-suzy/following{/other_user}", "gists_url": "https://api.github.com/users/me-suzy/gists{/gist_id}", "starred_url": "https://api.github.com/users/me-suzy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/me-suzy/subscriptions", "organizations_url": "https://api.github.com/users/me-suzy/orgs", "repos_url": "https://api.github.com/users/me-suzy/repos", "events_url": "https://api.github.com/users/me-suzy/events{/privacy}", "received_events_url": "https://api.github.com/users/me-suzy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! I think you should have a look at the `diffusers` repository instead, `StabelDiffusion` is a diffusion model. See the documentation [here](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview)", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
### System Info **I get this error:** ``` Traceback (most recent call last): File "E:\Carte\BB\test.py", line 12, in <module> from transformers import DeciDiffusionForImageGeneration, DeciDiffusionTokenizer ImportError: cannot import name 'DeciDiffusionForImageGeneration' from 'transformers' (C:\Users\ME\AppData\Roaming\Python\Python310\site-packages\transformers\__init__.py) >>> ``` **I had installed all these libraries:** pip install transformers --upgrade pip install transformers tokenizers datasets huggingface_hub --upgrade -q pip install accelerator --upgrade -q pip install --upgrade accelerate pip install -U accelerate But I still get that error. **This is my testing code:** ``` from transformers import StableDiffusionForImageGeneration, StableDiffusionTokenizer tokenizer = StableDiffusionTokenizer.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0") model = StableDiffusionForImageGeneration.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0") text = "love, home" # Replace "YOUR TEXT HERE" with your text. inputs = tokenizer(text, return_tensors="pt") image = model.generate(**inputs) ``` The problem, I believe, is that the file "StableDiffusionForImageGeneration" doesn't exist in `C:\Users\ME\AppData\Roaming\Python\Python310\site-packages\transformers\` And after many installs and updates, that file isn't there. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import StableDiffusionForImageGeneration, StableDiffusionTokenizer tokenizer = StableDiffusionTokenizer.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0") model = StableDiffusionForImageGeneration.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0") text = "love, home" # Replace "YOUR TEXT HERE" with your text. inputs = tokenizer(text, return_tensors="pt") image = model.generate(**inputs) ``` ### Expected behavior run the Python code
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26267/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26266
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26266/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26266/comments
https://api.github.com/repos/huggingface/transformers/issues/26266/events
https://github.com/huggingface/transformers/issues/26266
1,903,188,062
I_kwDOCUB6oc5xcFhe
26,266
Uninitialized token embeddings MBART when using device_map
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }, { "login": "SunMarc", "id": 57196510, "node_id": "MDQ6VXNlcjU3MTk2NTEw", "avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SunMarc", "html_url": "https://github.com/SunMarc", "followers_url": "https://api.github.com/users/SunMarc/followers", "following_url": "https://api.github.com/users/SunMarc/following{/other_user}", "gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}", "starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions", "organizations_url": "https://api.github.com/users/SunMarc/orgs", "repos_url": "https://api.github.com/users/SunMarc/repos", "events_url": "https://api.github.com/users/SunMarc/events{/privacy}", "received_events_url": "https://api.github.com/users/SunMarc/received_events", "type": "User", "site_admin": false } ]
[ "I've traced this to the definition of the ` find_tied_parameters` function from `accelerate`.\r\n\r\nIn `transformers` we have a control flow to identify tied parameters that does not work for `meta` tensors; if we identify a `meta` tensor, we then rely on the `find_tied_parameters` function from `accelerate`. There seems to be a discrepancy in the number of layers returned by these two methods here, depending on whether we'reusing a `device_map` or not:\r\n\r\nhttps://github.com/huggingface/transformers/blob/0ac3875011d32dc85e0e83970507e3afe8f0febb/src/transformers/modeling_utils.py#L3471-L3481\r\n\r\n@SunMarc, would you like to investigate what might be going on here?", "Yes @LysandreJik, I will check that is going on." ]
1,695
1,695
1,695
COLLABORATOR
null
### System Info Current master ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Loading a finetuned model's safetensors with device_map=auto, I get a warning that the tied embeddings are not initialized. > Some weights of MBartForConditionalGeneration were not initialized from the model checkpoint and are newly initialized: ['model.decoder.embed_tokens.weight', 'model.encoder.embed_tokens.weight'] I finetuned an MBART model with the trainer, use_safetensors was set to True. The model's vocabulary (embedding size) was extended, maybe that matters. ```python from transformers import AutoModelForSeq2SeqLM # Works: model = AutoModelForSeq2SeqLM.from_pretrained("BramVanroy/mbart_test") # Does not work (tied embeddings not loaded correctly - triggers a warning) model = AutoModelForSeq2SeqLM.from_pretrained("BramVanroy/mbart_test", device_map="auto") ``` It is not just the warning, it seems actually the case that the weights are not loaded correctly (random output). ### Expected behavior Correctly loaded safetensors
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26266/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26265
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26265/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26265/comments
https://api.github.com/repos/huggingface/transformers/issues/26265/events
https://github.com/huggingface/transformers/issues/26265
1,903,118,707
I_kwDOCUB6oc5xb0lz
26,265
๐ŸŒ [i18n-zh-hant] Translating docs to Traditional Chinese (zh-hant)
{ "login": "annahung31", "id": 39179888, "node_id": "MDQ6VXNlcjM5MTc5ODg4", "avatar_url": "https://avatars.githubusercontent.com/u/39179888?v=4", "gravatar_id": "", "url": "https://api.github.com/users/annahung31", "html_url": "https://github.com/annahung31", "followers_url": "https://api.github.com/users/annahung31/followers", "following_url": "https://api.github.com/users/annahung31/following{/other_user}", "gists_url": "https://api.github.com/users/annahung31/gists{/gist_id}", "starred_url": "https://api.github.com/users/annahung31/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/annahung31/subscriptions", "organizations_url": "https://api.github.com/users/annahung31/orgs", "repos_url": "https://api.github.com/users/annahung31/repos", "events_url": "https://api.github.com/users/annahung31/events{/privacy}", "received_events_url": "https://api.github.com/users/annahung31/received_events", "type": "User", "site_admin": false }
[ { "id": 2796628563, "node_id": "MDU6TGFiZWwyNzk2NjI4NTYz", "url": "https://api.github.com/repos/huggingface/transformers/labels/WIP", "name": "WIP", "color": "234C99", "default": false, "description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress" } ]
open
false
null
[]
[ "I'll first work on [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) !" ]
1,695
1,697
null
NONE
null
<!-- Note: Please search to see if an issue already exists for the language you are trying to translate. --> Hi! Let's bring the documentation to all the Traditional Chinese-speaking community 🌐 (currently 0 out of 267 complete) Who would want to translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list. Some notes: * Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗). * Please translate in a gender-neutral way. * Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source). * Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). * Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu and @MKhalusova for review. * 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/). ## Get Started section - [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) - [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) - [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md). ## Tutorial section - [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md) - [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md) - [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.md) - [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md) - [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md) - [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md) - [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md) - [ ] [peft.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/peft.md) <!-- Keep on adding more as you go 🔥 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26265/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26265/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26264
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26264/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26264/comments
https://api.github.com/repos/huggingface/transformers/issues/26264/events
https://github.com/huggingface/transformers/pull/26264
1,903,009,179
PR_kwDOCUB6oc5arfDI
26,264
Fix wrong vocab_size updating when calling `resize_token_embeddings` on an uninitialized model
{ "login": "Jingru", "id": 4298653, "node_id": "MDQ6VXNlcjQyOTg2NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4298653?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jingru", "html_url": "https://github.com/Jingru", "followers_url": "https://api.github.com/users/Jingru/followers", "following_url": "https://api.github.com/users/Jingru/following{/other_user}", "gists_url": "https://api.github.com/users/Jingru/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jingru/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jingru/subscriptions", "organizations_url": "https://api.github.com/users/Jingru/orgs", "repos_url": "https://api.github.com/users/Jingru/repos", "events_url": "https://api.github.com/users/Jingru/events{/privacy}", "received_events_url": "https://api.github.com/users/Jingru/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26264). All of your documentation changes will be reflected on that endpoint.", "Hey ๐Ÿ‘‹๐Ÿป I am not entirely sure I understand the use case here. Could you share a reproducing snippet? ", "Sorry for the missing information: this issue is found when deepspeed zero3 is enabled, which disables init_weight action.", "I'm noticing that `_get_resized_embeddings` function has already modified in main branch, which won't return old_embedding with zero3 enabled.\r\n\r\nThis may solve this problem in zero3 scenario. But theoretically, there are other ways to disable weight initialization like `no_init_wight` context manager. This patch is still necessary.", "Yeah I thought you might be dealing with deepspeed! Should be fixed on main. \r\nI am not sure we need the patch for such a specific case ", "Theoretically, there are other ways to disable model weight initialization. In such scenarios, this may still be an issue.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
# What does this PR do? `resize_token_embeddings` intends to update the model's vocab_size to the size of the first dimension of the newly-built embedding's weight. However, if this method is called on an uninitialized model (e.g. in a `no_init_weight` context) and the parameter `new_num_tokens` equals the old vocab_size, the internal function `_resize_token_embeddings` will return the old embedding directly, whose weight is not initialized. In this scenario, the model's vocab_size will be set to 0, which is unexpected. ## Who can review? @ArthurZucker and @younesbelkada
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26264/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26264", "html_url": "https://github.com/huggingface/transformers/pull/26264", "diff_url": "https://github.com/huggingface/transformers/pull/26264.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26264.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26263
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26263/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26263/comments
https://api.github.com/repos/huggingface/transformers/issues/26263/events
https://github.com/huggingface/transformers/pull/26263
1,902,987,321
PR_kwDOCUB6oc5araJG
26,263
add GitTokenizer
{ "login": "jpizarrom", "id": 111236, "node_id": "MDQ6VXNlcjExMTIzNg==", "avatar_url": "https://avatars.githubusercontent.com/u/111236?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jpizarrom", "html_url": "https://github.com/jpizarrom", "followers_url": "https://api.github.com/users/jpizarrom/followers", "following_url": "https://api.github.com/users/jpizarrom/following{/other_user}", "gists_url": "https://api.github.com/users/jpizarrom/gists{/gist_id}", "starred_url": "https://api.github.com/users/jpizarrom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jpizarrom/subscriptions", "organizations_url": "https://api.github.com/users/jpizarrom/orgs", "repos_url": "https://api.github.com/users/jpizarrom/repos", "events_url": "https://api.github.com/users/jpizarrom/events{/privacy}", "received_events_url": "https://api.github.com/users/jpizarrom/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Feel free to ask for a review when needed!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "> the tokenizer doesn't have the git-specific tokenization logic i.e. adding the `cls_token` to the start o\r\n\r\nMi intention was just to add the new default tokenizer first, and then open new PR to add the new logic in other PR.\r\n\r\nI just leave a message in https://github.com/huggingface/transformers/issues/21110#issuecomment-1820372090 as i will not be able to continue with this for now." ]
1,695
1,700
1,700
CONTRIBUTOR
null
# What does this PR do? This PR adds `GitTokenizer` Fixes part of https://github.com/huggingface/transformers/issues/21110 It was discussed with @amyeroberts in https://github.com/huggingface/transformers/pull/25509#issuecomment-1719288200 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @Narsil <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @muellerzr and @pacman100 Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam - Big Model Inference: @SunMarc - quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada Documentation: @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: See Models above and tag the person corresponding to the modality of the example. - TensorFlow: @Rocketknight1 -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26263/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26263", "html_url": "https://github.com/huggingface/transformers/pull/26263", "diff_url": "https://github.com/huggingface/transformers/pull/26263.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26263.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26262
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26262/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26262/comments
https://api.github.com/repos/huggingface/transformers/issues/26262/events
https://github.com/huggingface/transformers/pull/26262
1,902,891,529
PR_kwDOCUB6oc5arFLo
26,262
Updating `build-docker-images.yml` to build AMDGPU containers
{ "login": "mfuntowicz", "id": 2241520, "node_id": "MDQ6VXNlcjIyNDE1MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mfuntowicz", "html_url": "https://github.com/mfuntowicz", "followers_url": "https://api.github.com/users/mfuntowicz/followers", "following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}", "gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}", "starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions", "organizations_url": "https://api.github.com/users/mfuntowicz/orgs", "repos_url": "https://api.github.com/users/mfuntowicz/repos", "events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}", "received_events_url": "https://api.github.com/users/mfuntowicz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26262). All of your documentation changes will be reflected on that endpoint.", "I think this could be closed as the changes is already on `main`? cc @mfuntowicz ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,697
1,697
MEMBER
null
This PR adds the following: - [x] Job to build AMD specific docker images for the CI - [x] Update the GA docker dependencies to their latest version(s)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26262/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26262", "html_url": "https://github.com/huggingface/transformers/pull/26262", "diff_url": "https://github.com/huggingface/transformers/pull/26262.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26262.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26261
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26261/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26261/comments
https://api.github.com/repos/huggingface/transformers/issues/26261/events
https://github.com/huggingface/transformers/issues/26261
1,902,887,054
I_kwDOCUB6oc5xa8CO
26,261
Adding support for low end and older GPU
{ "login": "B0rner", "id": 5814570, "node_id": "MDQ6VXNlcjU4MTQ1NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/5814570?v=4", "gravatar_id": "", "url": "https://api.github.com/users/B0rner", "html_url": "https://github.com/B0rner", "followers_url": "https://api.github.com/users/B0rner/followers", "following_url": "https://api.github.com/users/B0rner/following{/other_user}", "gists_url": "https://api.github.com/users/B0rner/gists{/gist_id}", "starred_url": "https://api.github.com/users/B0rner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/B0rner/subscriptions", "organizations_url": "https://api.github.com/users/B0rner/orgs", "repos_url": "https://api.github.com/users/B0rner/repos", "events_url": "https://api.github.com/users/B0rner/events{/privacy}", "received_events_url": "https://api.github.com/users/B0rner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! This is very specific so most probably not planned. I hear your pain, but it's pretty much impossible to \"go back in time\" in the sense that the current state of the library relies on a lot of new packages. What you should be looking for is rather `quantisation` techniques that are compatible with your device in order to need less memory, quick and easy wins with using less precision etc etc ๐Ÿค— ", "Hey, thanks for your feedback.\r\nI really tried a lot to get the older technology to work, but without success. It was frustrating to be very close to the goal and then realize that Pytorch offers libraries for 1.10, which is compiled in such a way that it accepts cuda 10.2 but requires an NVIDA driver version, which is only available for new graphics cards. Then compatibility with 10.2 makes no sense, because GPUs that support the new driver can also use CUDA 12.x. Why the new cards should use CUDA 10.2?\r\n\r\n\r\nUnfortunately, I don't know enough about this topic to fully understand your recommendations in your post. ...and benefit from it.\r\n\r\nAn option for all older devices would be a Google TPU USB stick. But the experiments in Colab do not show that various models and pipelines can be pushed into a TPU ad hoc. It's all far more complicated. Especially for a beginner like me. For example, I couldn't get a stable diffusion model to run on a Colab TPU. :-/", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
### Feature request Using, training and processing models with the transformer pipeline is usually very computationally intensive. Using this pipeline in a world with torch 1.8 or before is a difficult / impossible goal. The question in this feature request: is it possible to set the pipeline to work with older versions of torch, including the dependencies on other packages (accelerate 0.19, ...)? By this I also mean whether it is possible to abstract the pipeline so that new models are also executable with old torch versions. Basically the transformer pipeline can be initialized with older versions of torch. But using current models this practically does not work. Problems arise e.g. from the fact that the model cannot run completely in the RAM of the GPU with old / cheap GPUs, and features like `pipe.enable_model_cpu_offload()` `pipe.enable_xformers_memory_efficient_attention()` require additional packages that are not compatible with old torch versions. However, updating the torch version with GPU support is often not possible since many older devices support CUDA 10.1 at best. ### Motivation There has been a big change at Nvidia from CUDA 10.1 to >=11.x. 11.x no longer works on many older or smaller graphics cards, including those GeForce cards that were installed in millions of laptops (GT 700m series). Since you can't easily swap these graphics cards in laptops, you're stuck with CUDA 10.1, although these devices have a compute capability that is sufficient. If you have to use CUDA 10.1, the latest version of torch on Windows is torch 1.8 (https://download.pytorch.org/whl/cu101/torch_stable.html). However, torch 1.8 is not compatible with xformers and is only compatible with an older version of accelerate. As a result, these users have to run the pipeline on the CPU, which takes an incredibly long time and is frustrating. So they need reasonably up-to-date hardware for practical use. This results in 2 aspects, which is why I am opening this ticket: * Sustainability: if there were any way to use older GPUs (especially in older laptops), these devices could still be used for testing and entry into the Hugging Face world. * Social limits: I think that whole populations are excluded when data models, pipelines, etc. only work when the latest and therefore often expensive technology is needed. It would be great if access to this kind of technology was not necessarily tied to expensive hardware, but also worked with used alternatives (in line with the aspect of sustainability). Especially to give young people and people with limited budgets an acceptable introduction to the world of transformer pipelines. I know that free online services are often offered as an alternative. But these services are not always available with a GPU, especially during the day (depending on the time zone). This means, for example, that a school with older laptops will not be able to work well with transformer libraries, either because of their equipment or because of the limitations of the free online services. Even if their equipment has a GPU that is too weak and is tied to an old torch version. The max matrix is: CUDA 10.1 with cudnn 8.0.4 with tensorflow 2.3.4, torch 1.8 and (probably) python 3.8. Or maybe it is possible to build torch 2.x for CUDA 10.1? There seems, for example, to be no Torch version available for CUDA 10.2. ### Your contribution Unfortunately, I do not have the skills to provide solid code changes. But I'm happy to support testing with different versions, etc.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26261/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26260
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26260/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26260/comments
https://api.github.com/repos/huggingface/transformers/issues/26260/events
https://github.com/huggingface/transformers/pull/26260
1,902,775,565
PR_kwDOCUB6oc5aqsmr
26,260
include changes from llama
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
COLLABORATOR
null
# What does this PR do? Fixes #26239, where we hit an edge case of the tokenization. When porting from CodeLlama, a condition in `_tokenize` was omitted, which leads to the unk token being merged with part of the input and then stripped.
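A hedged consistency check for this kind of regression (the checkpoint and the probe string are assumptions; the exact failing input lives in #26239): the slow tokenizer should agree with the fast one on text that exercises the unk-token path.
```python
from transformers import AutoTokenizer

# Hypothetical checkpoint, only for illustration.
slow = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=False)
fast = AutoTokenizer.from_pretrained("huggyllama/llama-7b", use_fast=True)

text = "Hello, world"  # replace with the input from #26239 that hits the unk path
print(slow.tokenize(text))
print(fast.tokenize(text))
assert slow.tokenize(text) == fast.tokenize(text), "slow/fast tokenization diverged"
```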
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26260/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26260", "html_url": "https://github.com/huggingface/transformers/pull/26260", "diff_url": "https://github.com/huggingface/transformers/pull/26260.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26260.patch", "merged_at": 1695223171000 }
https://api.github.com/repos/huggingface/transformers/issues/26259
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26259/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26259/comments
https://api.github.com/repos/huggingface/transformers/issues/26259/events
https://github.com/huggingface/transformers/pull/26259
1,902,748,250
PR_kwDOCUB6oc5aqmrz
26,259
DeepSpeed ZeRO-3 handling when resizing embedding layers
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? 1. PRs https://github.com/huggingface/transformers/pull/25394 and https://github.com/huggingface/transformers/pull/25732 resulted in 17 DeepSpeed slow tests failing. The reason is that those PRs might have missed the fact that the shape will be 0 when working with layers that have been initialized by DeepSpeed init for ZeRO-3. With this PR, all the failing tests pass.
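A hedged sketch of the pattern such a fix typically relies on (the helper name and call site are illustrative, not the exact code of this PR): under ZeRO-3, a partitioned parameter reports a local shape of 0, so the weights have to be gathered before reading sizes when resizing embedding layers.
```python
import deepspeed

def embedding_num_rows(embedding, zero3_enabled: bool) -> int:
    """Return the true vocab size of an embedding, even when ZeRO-3 has partitioned it."""
    if zero3_enabled:
        # Read-only gather: modifier_rank=None means no rank modifies the weights.
        with deepspeed.zero.GatheredParameters(embedding.weight, modifier_rank=None):
            return embedding.weight.shape[0]
    return embedding.weight.shape[0]
```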
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26259/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26259", "html_url": "https://github.com/huggingface/transformers/pull/26259", "diff_url": "https://github.com/huggingface/transformers/pull/26259.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26259.patch", "merged_at": 1695150296000 }
https://api.github.com/repos/huggingface/transformers/issues/26258
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26258/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26258/comments
https://api.github.com/repos/huggingface/transformers/issues/26258/events
https://github.com/huggingface/transformers/issues/26258
1,902,738,716
I_kwDOCUB6oc5xaX0c
26,258
Make `_fast_init` fast again (by surely skipping model weights init)!
{ "login": "poedator", "id": 24738311, "node_id": "MDQ6VXNlcjI0NzM4MzEx", "avatar_url": "https://avatars.githubusercontent.com/u/24738311?v=4", "gravatar_id": "", "url": "https://api.github.com/users/poedator", "html_url": "https://github.com/poedator", "followers_url": "https://api.github.com/users/poedator/followers", "following_url": "https://api.github.com/users/poedator/following{/other_user}", "gists_url": "https://api.github.com/users/poedator/gists{/gist_id}", "starred_url": "https://api.github.com/users/poedator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/poedator/subscriptions", "organizations_url": "https://api.github.com/users/poedator/orgs", "repos_url": "https://api.github.com/users/poedator/repos", "events_url": "https://api.github.com/users/poedator/events{/privacy}", "received_events_url": "https://api.github.com/users/poedator/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false }
[ { "login": "ArthurZucker", "id": 48595927, "node_id": "MDQ6VXNlcjQ4NTk1OTI3", "avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArthurZucker", "html_url": "https://github.com/ArthurZucker", "followers_url": "https://api.github.com/users/ArthurZucker/followers", "following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}", "gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}", "starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions", "organizations_url": "https://api.github.com/users/ArthurZucker/orgs", "repos_url": "https://api.github.com/users/ArthurZucker/repos", "events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}", "received_events_url": "https://api.github.com/users/ArthurZucker/received_events", "type": "User", "site_admin": false } ]
[ "> The `no_init_weights` context manager sets `_init_weights` global variable, but it gets ignored by model's code (tested on Llama_2_7B).\r\n\r\nInteresting, could you please describe how you tested this? This sounds like a bug.", "> Interesting, could you please describe how you tested this? This sounds like a bug.\r\n\r\nHi, @BenjaminBossan ,\r\n\r\nThis is how to test this slow loading issue:\r\nselect a model, large enough for the effect to be noticeable. I tested with `meta-llama/Llama-2-7b-hf`; load it as `AutoModel.from_pretrained()`, then delete - this fills the models cache.\r\n\r\nThen try some or all of these:\r\n- load it again and notice time passed before `Loading checkpoint shards:` progress bar appears. Normally it should be few seconds or less.\r\n- compare overal command run time with `Loading checkpoint shards: ` time. In my case it is 41s vs 2s. What takes the other 39s, if the model is cached on SSD already? \r\n- run AutoModel.from_pretrained() with profiler and see that `uniform` (i.e. weight init) process takes most of the time, though it is not needed for from_pretrained().\r\n- try loading model with disabled weight inits (using context manager, see notebook). In my case it reduced Llama2-7B loading time 10X (from 41s to 4s)\r\n\r\nSee my testing notebook as gist here: https://gist.github.com/poedator/792d6c7528a1bc5a84acb550268777ed\r\n", "Thanks for providing the context and notebook. I could replicate your results and also confirmed that the model produces the same output in both cases. This smells like a bug to me, maybe @ArthurZucker can take a look.", "definitely interesting, I'll have a look! ", "@poedator thanks a lot for the deep investigation - do you observe the same behaviour with `low_cpu_mem_usage=True` ? Looking at the gist it seems you are calling `from_pretrained` without any additional arguments - we should maybe start thinking of using that argument as default \r\nI also went through SpQr repository you have shared, I have seen some community interest to support it natively on the HF ecosystem, I did not had a deep look into the repository, I wanted to ask if you think that it is possible, design-wise to integrate that into transformers ? cc @SunMarc FYI", "@younesbelkada,\r\nWhatever is behind `low_cpu_mem_usage=True` may be a good basis for the solution. I knew about it but hesitated to use because it does more magic than that (at least this was my impression from reading the doc). Please see, how much of `low_cpu_mem_usage=True` functionality can be included into default options. Hopefully it is a small fix.\r\n\r\nThank you for your interest in supporting SpQR in the HF ecosystem. Let me discuss with my teammates the best way to do this, and then I will get back to you.", "One possible solution is mentioned here: https://github.com/huggingface/transformers/issues/18505 ", "I met the same issue. And I have another specific scenario, where I want to randomly initialize a large model for debug. 
So I just want a very fast initialization.\r\n\r\nI tried:\r\n```python\r\nconfig = AutoConfig.from_pretrained(model_name)\r\nmodel = AutoModelForCausalLM.from_config(config)\r\n```\r\n\r\nI found it is even slower than just loading the weights:\r\n```python\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name, _fast_init=True, low_cpu_mem_usage=True)\r\n```\r\n\r\nSo I wonder if there is a way to fast initialize a very large model (without any initialization algorithm) using `from_config`?\r\n\r\nThank you very much!\r\n\r\n", "Ouch sorry about that! Was off for a bit, and it's planned! Will try to open a draft PR asap", "Update ๐Ÿค— \r\nI'll tackle this as I can indeed reproduce and though we have the `low_cpu_mem_usage` flag that requires accelerate, this seems like a somewhat low-hanging fruit. We gotta make sure the weights that are missing from the state-dict are initialized ( non-persistant buffers etc). ", "On main branch of Transformers, I observe the following:\r\n1. `low_cpu_mem_usage` should resolve the issue coupled with `_fast_init ` which is True by default.\r\n2. `low_cpu_mem_usage` internally calls accelerate's `init_empty_weights` which sets the weights on meta device leading to `reset_parameters()` being a no-op. If `include_buffers=True`, it just directly uses `with torch.device(\"meta\")` context manager as suggested by Horace in the other linked issue.\r\n\r\n![Screenshot 2023-11-28 at 11 54 48โ€ฏAM](https://github.com/huggingface/transformers/assets/13534540/10f18422-8be5-48db-9fdb-4a601fcae43e)", "The goal is to still have fast init without accelerate" ]
1,695
1,702
1,702
CONTRIBUTOR
null
I observed that loading a pre-trained model takes rather long, even when loading cached models from a fast SSD. It is especially noticeable when dealing with LLMs with billions of weights. Apparently, the majority of the time is lost [in this section of the code](https://github.com/huggingface/transformers/blob/eb8489971ac1415f67b0abdd1584fde8b659ced9/src/transformers/modeling_utils.py#L2996): ``` # Instantiate model. init_contexts = [no_init_weights(_enable=_fast_init)] # (...) with ContextManagers(init_contexts): model = cls(config, *model_args, **model_kwargs) ``` The time spent on weight initialization (by `torch.nn.init.kaiming_uniform_()` and similar) is wasted, because the newly initialized weights will then be replaced by the loaded ones. The `no_init_weights` context manager sets the `_init_weights` global variable, but it gets ignored by the model's code (tested on Llama_2_7B). I recently discussed a similar issue with the PEFT team, but there it was easier to solve, because in PEFT the init code was dealing with a specific torch.nn layer; see https://github.com/huggingface/peft/issues/871 and the linked PRs by @BenjaminBossan. Here we need a model-scale solution. One option (not a perfectly elegant one) is to temporarily override methods like torch.nn.init.kaiming_uniform_(). It is used in [our SpQR repo](https://github.com/Vahe1994/SpQR/blob/39753b62378f0e036de181ea07b20951e0cd2359/modelutils.py#L15): ``` @contextmanager def suspend_nn_inits(): skip = lambda *args, **kwargs: None saved_inits = torch.nn.init.kaiming_uniform_, torch.nn.init.uniform_, torch.nn.init.normal_ # saving torch.nn.init.kaiming_uniform_ = torch.nn.init.uniform_ = torch.nn.init.normal_ = skip # replacing try: yield finally: torch.nn.init.kaiming_uniform_, torch.nn.init.uniform_, torch.nn.init.normal_ = saved_inits # restoring ``` but there may be better ways, using some native torch tools? I'd be glad to contribute a PR with the maintainers' blessing. Summoning @younesbelkada ### System Info A100-80G + SSD + mucho RAM and kernels. ### Who can help? @younesbelkada ### Reproduction Load a model and measure the timing for [this line](https://github.com/huggingface/transformers/blob/eb8489971ac1415f67b0abdd1584fde8b659ced9/src/transformers/modeling_utils.py#L2996). ### Expected behavior Faster loading.
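For reference, a hedged timing sketch of the comparison discussed in the thread; the model name is the one from the issue (it requires gated access) and the second call needs `accelerate` installed, so treat it as an illustration rather than a benchmark:
```python
import time
from transformers import AutoModelForCausalLM

def timed_load(**kwargs):
    start = time.perf_counter()
    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", **kwargs)
    print(kwargs or "default", f"-> {time.perf_counter() - start:.1f}s")
    return model

timed_load()                        # default path: random init happens, then gets overwritten
timed_load(low_cpu_mem_usage=True)  # meta-device init via accelerate: skips the wasted init
```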
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26258/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 2 }
https://api.github.com/repos/huggingface/transformers/issues/26258/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26257
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26257/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26257/comments
https://api.github.com/repos/huggingface/transformers/issues/26257/events
https://github.com/huggingface/transformers/pull/26257
1,902,715,743
PR_kwDOCUB6oc5aqfjr
26,257
Fix gated repo tests
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26257). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
Related to this [slack thread](https://huggingface.slack.com/archives/C01NE71C4F7/p1692855942544019) (private). Since the end of August (https://github.com/huggingface/transformers/commit/68fa9a5937ae7aa707f5ff2639aa36a37a0a9928), the gated repo tests were skipped because they were failing. This was due to a server-side change that now makes README files readable even on gated repos (since users have to be able to read a model card before requesting access). This PR fixes the tests - and unskips them - by trying to download [gated_file.txt](https://huggingface.co/hf-internal-testing/dummy-gated-model/blob/main/gated_file.txt) instead. cc @ydshieh
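A hedged sketch of the shape of the updated check (not the literal test code): without credentials, the README should now be readable, while the genuinely gated file must still be rejected.
```python
import pytest
from huggingface_hub import hf_hub_download
from huggingface_hub.utils import GatedRepoError

REPO = "hf-internal-testing/dummy-gated-model"

def test_gated_file_is_protected(tmp_path):
    # README.md is readable even without access, so it can no longer serve as the probe.
    hf_hub_download(REPO, "README.md", cache_dir=tmp_path, token=False)
    # The actual gated file must still be rejected for anonymous users.
    with pytest.raises(GatedRepoError):
        hf_hub_download(REPO, "gated_file.txt", cache_dir=tmp_path, token=False)
```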
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26257/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26257", "html_url": "https://github.com/huggingface/transformers/pull/26257", "diff_url": "https://github.com/huggingface/transformers/pull/26257.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26257.patch", "merged_at": 1695122712000 }
https://api.github.com/repos/huggingface/transformers/issues/26256
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26256/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26256/comments
https://api.github.com/repos/huggingface/transformers/issues/26256/events
https://github.com/huggingface/transformers/issues/26256
1,902,668,858
I_kwDOCUB6oc5xaGw6
26,256
A new category for recsys
{ "login": "Anindyadeep", "id": 58508471, "node_id": "MDQ6VXNlcjU4NTA4NDcx", "avatar_url": "https://avatars.githubusercontent.com/u/58508471?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Anindyadeep", "html_url": "https://github.com/Anindyadeep", "followers_url": "https://api.github.com/users/Anindyadeep/followers", "following_url": "https://api.github.com/users/Anindyadeep/following{/other_user}", "gists_url": "https://api.github.com/users/Anindyadeep/gists{/gist_id}", "starred_url": "https://api.github.com/users/Anindyadeep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Anindyadeep/subscriptions", "organizations_url": "https://api.github.com/users/Anindyadeep/orgs", "repos_url": "https://api.github.com/users/Anindyadeep/repos", "events_url": "https://api.github.com/users/Anindyadeep/events{/privacy}", "received_events_url": "https://api.github.com/users/Anindyadeep/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "@ArthurZucker, just came to check in here to know whether is this active or not or just to know any thoughts on this. \r\n\r\nThanks ", "Hello @Anindyadeep, for now I do not think we will add recommendation system capabilities to `transformers` unless there is a very large number of requests.\r\n\r\nHowever, we'd be more than happy in helping you or anyone else from the community integrate their recommendation system utility to our other tools and to the Hub; reading your code snippet, I understand this is the true value of what you offer.", "Ah I see @LysandreJik, thanks for the update. At least for now, my use case is been served. However, it would be great sometime, if this feature would be integrated. " ]
1,695
1,696
null
CONTRIBUTOR
null
### Feature request A new category in Hugging Face (both in datasets and models) for recommendation systems. ### Motivation Hugging Face has a rich ecosystem of diverse datasets and models. We have model types ranging from 1. Language models 2. Graph models 3. Vision models 4. Multimodal, etc., and the same goes for datasets. However, one significant category that is missing is recommendation systems. Recommendation systems are very important for enterprises, and they are one of the most interesting and dynamic fields in machine learning. We could support recsys in several ways: there are recsys for tabular data, tabular + NLP data, vision data, etc., and the same goes for models. While I have just started learning and doing research on recsys, I am seeing that there are no SOTA models present on Hugging Face. For example, I cannot write anything like the following right now, where I do not care much about the candidate generator but simply take a SOTA candidate generator and focus on my ranker model. ```python from transformers import RecSysVocab from transformers import CandidateGen # this can be a path to a csv or a matrix with the user's properties user_vocab_lookup_table = RecSysVocab.from_pretrained('/path/to/user.csv') item_vocab_lookup_table = RecSysVocab.from_pretrained('/path/to/item.csv') # build the candidate generator model candidate_gen = CandidateGen.from_pretrained('some-sota-candidate-gen') # now fit the model candidate_gen.find_top_k_similarity( user_id = "some user id", user_columns = [...], # a vector with that user's properties user_vocab_lookup_table = user_vocab_lookup_table, item_vocab_lookup_table = item_vocab_lookup_table ) ``` The above is very simple (and not accurate) pseudo-code, just to give a glimpse of the interface. However, it would be awesome to have something specifically for recommendation systems. ### Your contribution I am not sure whether this has been thought of before or not, but it would be awesome to work on this if the core maintainers and contributors are on the same page. RecSys is very diverse: some methods involve sequences, while others involve two-stage approaches (candidate generation + filtering). So I feel some discussion would be required on how to structure the modules and how to build those interfaces so that they match the existing patterns of Hugging Face. In terms of my contribution, I can help with these and am excited to contribute to this if I hear back from the community showing similar interest. Would love to contribute.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26256/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/transformers/issues/26256/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26255
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26255/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26255/comments
https://api.github.com/repos/huggingface/transformers/issues/26255/events
https://github.com/huggingface/transformers/issues/26255
1,902,665,654
I_kwDOCUB6oc5xaF-2
26,255
Zero-values in ViT attention mask (pixel mask)
{ "login": "martinaianaro99", "id": 65290936, "node_id": "MDQ6VXNlcjY1MjkwOTM2", "avatar_url": "https://avatars.githubusercontent.com/u/65290936?v=4", "gravatar_id": "", "url": "https://api.github.com/users/martinaianaro99", "html_url": "https://github.com/martinaianaro99", "followers_url": "https://api.github.com/users/martinaianaro99/followers", "following_url": "https://api.github.com/users/martinaianaro99/following{/other_user}", "gists_url": "https://api.github.com/users/martinaianaro99/gists{/gist_id}", "starred_url": "https://api.github.com/users/martinaianaro99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/martinaianaro99/subscriptions", "organizations_url": "https://api.github.com/users/martinaianaro99/orgs", "repos_url": "https://api.github.com/users/martinaianaro99/repos", "events_url": "https://api.github.com/users/martinaianaro99/events{/privacy}", "received_events_url": "https://api.github.com/users/martinaianaro99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey, the error does not seem to indicate and issue with padding but rather an issue with the `upsampling` function, and thus the shape of the pixel mask. I am not sure this is a bug in transformers as you are using your custom code. \r\nI don't have access to the full traceback so can't help you further. Can you share the full traceback? \r\n", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
### System Info - `transformers` version: 4.33.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.17.2 - Safetensors version: 0.3.3 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.13.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.7.2 (gpu) - Jax version: 0.4.14 - JaxLib version: 0.4.14 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The code loads a binary matrix from a file, converts it to a PyTorch tensor with values โ€‹โ€‹0 and 1, and assigns it to pixel_mask, which is then added to a dictionary called encoding. part of code of ViltDataset class: ```python pixel_mask_filename = "/content/drive/MyDrive/VILT/Pixel_Masks/" + str(pid) + ".npy" pixel_mask_external = np.load(pixel_mask_filename).astype(int) pixel_mask_external = torch.tensor(pixel_mask_external, dtype=torch.long) pixel_mask_external = torch.where(pixel_mask_external > 0, torch.tensor(1), torch.tensor(0)) pixel_mask=pixel_mask_external encoding["pixel_mask"]=pixel_mask ``` Processor: ```python processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm") encoding = self.processor(image, masked_sentence, padding="max_length", max_length=40, truncation=True, return_tensors="pt") ``` Pixel mask are extracted in this method: ```python def collate_fn(batch): batch = [item for item in batch if item is not None] input_ids = [item['input_ids'] for item in batch] attention_mask = [item['attention_mask'] for item in batch] token_type_ids = [item['token_type_ids'] for item in batch] labels = [item['labels'] for item in batch] pixel_values = [item['pixel_values'] for item in batch] pixel_mask = [item['pixel_mask'] for item in batch] # create new batch collated_batch = {} collated_batch['input_ids'] = torch.stack(input_ids) collated_batch['attention_mask'] = torch.stack(attention_mask) collated_batch['token_type_ids'] = torch.stack(token_type_ids) collated_batch['labels'] = torch.stack(labels) collated_batch['pixel_values'] = torch.stack(pixel_values) collated_batch['pixel_mask'] = torch.stack(pixel_mask) return collated_batch ``` ### Expected behavior I assign to variable pixel mask an array of values 0 or 1. Error is: ```python RuntimeError: Input and output sizes should be greater than 0, but got input (H: 12, W: 12) output (H: 0, W: 0). ``` If I assign to pixel mask other values like 1 or 2, it works. 
Complete error: ```python ----> 4 train_loss, time_t, memory_t = train_model(model, device, train_dataloader, val_dataloader,learning_rate=LEARNING_RATE, weight_decay=weight_decay, num_epochs=2) 10 frames [/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in interpolate(input, size, scale_factor, mode, align_corners, recompute_scale_factor, antialias) 3957 if antialias: 3958 return torch._C._nn._upsample_bilinear2d_aa(input, output_size, align_corners, scale_factors) -> 3959 return torch._C._nn.upsample_bilinear2d(input, output_size, align_corners, scale_factors) 3960 if input.dim() == 5 and mode == "trilinear": 3961 assert align_corners is not None RuntimeError: Input and output sizes should be greater than 0, but got input (H: 12, W: 12) output (H: 0, W: 0) ```
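A hedged sanity-check sketch for the externally loaded mask described above (the assumptions are that ViLT derives each sample's effective resolution from the mask's nonzero region, so an all-zero or wrongly valued mask can collapse the interpolation target to 0):
```python
import torch

def check_pixel_mask(pixel_mask: torch.Tensor) -> torch.Tensor:
    """Rough validation of an externally supplied ViLT pixel mask (illustrative only)."""
    values = set(pixel_mask.unique().tolist())
    assert values <= {0, 1}, f"mask must be binary, got values {values}"
    assert pixel_mask.sum() > 0, "an all-zero mask can yield an H: 0, W: 0 interpolation target"
    return pixel_mask
```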
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26255/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26254
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26254/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26254/comments
https://api.github.com/repos/huggingface/transformers/issues/26254/events
https://github.com/huggingface/transformers/issues/26254
1,902,627,322
I_kwDOCUB6oc5xZ8n6
26,254
Access to pre_tokenizer for PreTrainedTokenizer
{ "login": "GitMew", "id": 37484463, "node_id": "MDQ6VXNlcjM3NDg0NDYz", "avatar_url": "https://avatars.githubusercontent.com/u/37484463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GitMew", "html_url": "https://github.com/GitMew", "followers_url": "https://api.github.com/users/GitMew/followers", "following_url": "https://api.github.com/users/GitMew/following{/other_user}", "gists_url": "https://api.github.com/users/GitMew/gists{/gist_id}", "starred_url": "https://api.github.com/users/GitMew/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GitMew/subscriptions", "organizations_url": "https://api.github.com/users/GitMew/orgs", "repos_url": "https://api.github.com/users/GitMew/repos", "events_url": "https://api.github.com/users/GitMew/events{/privacy}", "received_events_url": "https://api.github.com/users/GitMew/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey! The equivalent of `pre-tokenizers` is not impelemted directly for `PretrainedTokenizers` (yet, something might be on it's way). The pre-tokenization is usually done in the `prepare_for_tokenization` function.", "@ArthurZucker Thanks for your reply! That's unfortunate. One would expect that the two classes derive from the same base class and that that base class offers pretokenisation (and postprocessing, while we're at it).\r\n\r\nI did see the `prepare_for_tokenization` function, but as far as I can see, it is supposed to output a string, not e.g. a list of strings to be tokenised separately, unless I violate its type signature. That seems like a bad idea, given that the `PreTrainedTokenizer.tokenize` function looks something like this, abstracted:\r\n\r\n```\r\n def tokenizer(text, **kwargs):\r\n text, kwargs = self.prepare_for_tokenization(text, **kwargs)\r\n ...\r\n tokens = self.tokens_trie.split(text)\r\n ...\r\n tokenized_text = []\r\n for token in tokens:\r\n ...\r\n tokenized_text.extend(self._tokenize(token))\r\n```\r\n...wherein I assume the `tokens_trie` is only used to isolate a small set of very special tokens, and `.split` expects a string. Do you have an example of how people would e.g. include `Whitespace()` in `prepare_for_tokenization` compatible with this?", "Usually the `Whitespace()` is done in this function, which is applied to all the inputs if the input is a batch of strings. \r\nA lot of sentencepiece models do this (see LlamaTokenizer) for example. It is sometimes done in the `tokenize`. \r\n I agree with you that the fast and slow lack consistency, and note this for futur improvements ๐Ÿค— Thanks for your input", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,699
1,699
NONE
null
### Feature request Give access to setting a `pre_tokenizer` for a `transformers.PreTrainedTokenizer`, similar to how this works for `PreTrainedTokenizerFast`. ### Motivation As far as I understand from [these docs](https://huggingface.co/docs/transformers/v4.33.2/en/main_classes/tokenizer), there are two interfaces for interacting with tokenizers in the HuggingFace ecosystem: `PreTrainedTokenizerFast` is a wrapper around Rust code, and `PreTrainedTokenizer` is supposed to be the slow Python equivalent. `PreTrainedTokenizerFast` has a property `backend_tokenizer` which is a `tokenizers.Tokenizer` object, which has a `pre_tokenizer` property and is built from a `tokenizers.models.Model` subclass (the thing that does the tokenization). You can instantiate a `PreTrainedTokenizerFast` from such a `Tokenizer` object with the constructor argument `tokenizer_object`. Meanwhile, none of this is accessible for a `PreTrainedTokenizer`. Here is my use-case: I have a function `tokenizeWord(w: str)` implemented entirely in Python to segment a single word into subwords. I would now like to 1. Build a `PreTrainedTokenizer` from this function, and 2. pre-tokenize sentences on punctuation and whitespace so that each word is sent to that function separately. I can do the first as follows (at least I think this is how it's supposed to be done): ``` class CustomTokenizer(PreTrainedTokenizer): def __init__(self, custom_tkz_algorithm, **kwargs): super().__init__(**kwargs) self.algorithm = custom_tkz_algorithm self.vocab = self.algorithm.get_vocab() self.reverse_vocab = {i: s for s,i in self.vocab.items()} # Assume that the vocabulary is injective (no duplicate IDs) @property def vocab_size(self) -> int: return len(self.vocab) def _convert_token_to_id(self, token): return self.vocab[token] def _convert_id_to_token(self, index: int) -> str: return self.reverse_vocab[index] def _tokenize(self, text, **kwargs) -> List[str]: """ Converts a string in a sequence of tokens (string), using the tokenizer. Split in words for word-based vocabulary or sub-words for sub-word-based vocabularies (BPE/SentencePieces/WordPieces). Do NOT take care of added tokens. """ return tokenizeWord(text) ``` but where does the pre-tokenizer come in? It doesn't even seem feasible to manually use the pre-tokenizers provided by `tokenizers.pre_tokenizers` (e.g. `Whitespace`, to name one) because those all provide Rust interfaces and hence the objects they output don't work with a simple string segmentation function. ### Your contribution None.
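As noted in the maintainers' replies above, slow tokenizers do their pre-tokenization inside `prepare_for_tokenization` or `tokenize`. A hedged sketch of one workaround (the regex stand-in for `Whitespace()` and the reuse of `tokenizeWord` and `CustomTokenizer` from this issue are assumptions): split the text into words inside `_tokenize` and segment each word separately.
```python
import re
from typing import List

# Rough stand-in for the fast tokenizers' Whitespace() pre-tokenizer (assumption).
_PRETOKENIZE = re.compile(r"\w+|[^\w\s]+")

class CustomTokenizerWithPretok(CustomTokenizer):
    def _tokenize(self, text: str, **kwargs) -> List[str]:
        subwords: List[str] = []
        for word in _PRETOKENIZE.findall(text):
            subwords.extend(tokenizeWord(word))  # per-word segmentation from the issue
        return subwords
```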
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26254/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26253
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26253/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26253/comments
https://api.github.com/repos/huggingface/transformers/issues/26253/events
https://github.com/huggingface/transformers/issues/26253
1,902,623,236
I_kwDOCUB6oc5xZ7oE
26,253
[Bug] whisper pipeline inference bug on transformers master branch
{ "login": "WeichenXu123", "id": 19235986, "node_id": "MDQ6VXNlcjE5MjM1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/19235986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WeichenXu123", "html_url": "https://github.com/WeichenXu123", "followers_url": "https://api.github.com/users/WeichenXu123/followers", "following_url": "https://api.github.com/users/WeichenXu123/following{/other_user}", "gists_url": "https://api.github.com/users/WeichenXu123/gists{/gist_id}", "starred_url": "https://api.github.com/users/WeichenXu123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WeichenXu123/subscriptions", "organizations_url": "https://api.github.com/users/WeichenXu123/orgs", "repos_url": "https://api.github.com/users/WeichenXu123/repos", "events_url": "https://api.github.com/users/WeichenXu123/events{/privacy}", "received_events_url": "https://api.github.com/users/WeichenXu123/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "CC @BenWilson2\r\n\r\nThis causes MLflow CI failure https://github.com/mlflow-automation/mlflow/actions/runs/6223078500/job/16896947205#step:13:7352", "This is the commit and line that is problematic. \r\n\r\nhttps://github.com/huggingface/transformers/commit/95fe0f5d806ff1b981f1870f290a4d9aaa53a5d4#diff-ccc8e98fcaa81fdf6317a652438a309bcede0bbe336774288c2fcf91d9f11082R551\r\n\r\nThe `stride[0]` object is a tuple, not an int. Should this be `stride[0][0]`?", "CC @xenova @ArthurZucker", "Thanks for the ping. My hunch is that this is due to `batch_size` being larger than 1. Just to confirm, does the same thing happen if you remove that argument?", "> Thanks for the ping. My hunch is that this is due to `batch_size` being larger than 1. Just to confirm, does the same thing happen if you remove that argument?\r\n\r\nYes It only happens when batch > 1", "Hi, I have the same issue (it was not happening before) is there any solution / workaround? This is how i use the pipeline:\r\n`return self.model(filepath,\r\n return_timestamps=\"word\", \r\n chunk_length_s=30,\r\n batch_size=32,\r\n ignore_warning=True)`\r\nThanks", "cc @sanchit-gandhi ", "Thanks for the comprehensive issues description @WeichenXu123! Opened a PR here: #26699. Will discuss how to fix this best with @xenova and let you know when merged!", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "@josebruzzoni\r\n\r\nI've had the same issue.\r\n@WeichenXu123 's replies were very helpful, thanks man!\r\n\r\nFirst try setting batch size to 1 if that's not a problem.\r\n\r\nSecond, you can try going into the location that the error message says in the 3rd from last row.\r\nFor me it says \"\"/home/nofreewill/.local/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 552, in _forward\"\r\nSo I opened it, went to line 552 and changed according to @WeichenXu123 's suggestion:\r\ngenerate_kwargs[\"num_frames\"] = _stride[0]_ // self.feature_extractor.hop_length\r\ngenerate_kwargs[\"num_frames\"] = **stride[0][0]** // self.feature_extractor.hop_length\r\n\r\nAnd it works now with batch size > 1 as well", "also have the same issue, any update on this @sanchit-gandhi ?", "@timnlupo\r\nhave you tried my solution?\r\n\r\nYou only need to write:\r\n[0]\r\nin the file where I said in order to try this out.\r\n\r\nJust open the file with the location that the error message says and go to the line that it mentions and write\r\nstride[0][0] instead of stride[0]", "Another workaround in addition to `batch_size=1` is to disable chunking: `chunk_length_s=0`, but this is probably not feasable for the most.\r\nWhen I read the discussion on the aforementioned PR i see that the maintainers considered this relation already.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.", "Any updates ? @sanchit-gandhi ? 
Thanks!", "Good news: this was fixed by https://github.com/huggingface/transformers/pull/28114 ๐Ÿฅณ ", "So what is the solution to using batch_size to 1 ? this makes the whole process much slower...." ]
1,695
1,705
1,703
NONE
null
### System Info OS: ubuntu 20.04 transformer version: master branch. `pip install git+https://github.com/huggingface/transformers` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction Run following code: ```python import transformers from packaging.version import Version import pathlib def whisper_pipeline(): task = "automatic-speech-recognition" architecture = "openai/whisper-tiny" model = transformers.WhisperForConditionalGeneration.from_pretrained(architecture) tokenizer = transformers.WhisperTokenizer.from_pretrained(architecture) feature_extractor = transformers.WhisperFeatureExtractor.from_pretrained(architecture) if Version(transformers.__version__) > Version("4.30.2"): model.generation_config.alignment_heads = [[2, 2], [3, 0], [3, 2], [3, 3], [3, 4], [3, 5]] return transformers.pipeline( task=task, model=model, tokenizer=tokenizer, feature_extractor=feature_extractor ) def raw_audio_file(): # The dataset file comes from https://github.com/mlflow/mlflow/blob/master/tests/datasets/apollo11_launch.wav datasets_path = "/path/to/apollo11_launch.wav" return pathlib.Path(datasets_path).read_bytes() inference_config = { "return_timestamps": "word", "chunk_length_s": 60, "batch_size": 16, } whisper = whisper_pipeline() raw_audio_file_data = raw_audio_file() prediction = whisper(raw_audio_file_data, return_timestamps="word", chunk_length_s=60, batch_size=16) ``` The last line raises error like: ``` >>> prediction = whisper(raw_audio_file_data, return_timestamps="word", chunk_length_s=60, batch_size=16) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/weichen.xu/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 356, in __call__ return super().__call__(inputs, **kwargs) File "/home/weichen.xu/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1132, in __call__ return next( File "/home/weichen.xu/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__ item = next(self.iterator) File "/home/weichen.xu/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 266, in __next__ processed = self.infer(next(self.iterator), **self.params) File "/home/weichen.xu/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1046, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/weichen.xu/miniconda3/envs/py310/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 551, in _forward generate_kwargs["num_frames"] = stride[0] // self.feature_extractor.hop_length TypeError: unsupported operand type(s) for //: 'tuple' and 'int' >>> ``` Note this error only happens on transformer github master branch. For released version, above code works well. ### Expected behavior My example code should not raise error.
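Until the fix lands, two hedged workarounds drawn from the comments above (both trade speed for avoiding the crash, since it only triggers with chunking and `batch_size > 1`); `whisper` and `raw_audio_file_data` are the objects defined in the reproduction:
```python
# Workaround 1: keep chunking but drop to a single batch.
prediction = whisper(raw_audio_file_data, return_timestamps="word", chunk_length_s=60, batch_size=1)

# Workaround 2: disable chunking entirely.
prediction = whisper(raw_audio_file_data, return_timestamps="word", chunk_length_s=0)
```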
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26253/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26253/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26252
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26252/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26252/comments
https://api.github.com/repos/huggingface/transformers/issues/26252/events
https://github.com/huggingface/transformers/pull/26252
1,902,620,581
PR_kwDOCUB6oc5aqKya
26,252
fix deepspeed available detection
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Could you show us how to see the following?\r\n\r\n> Having tests/deepspeed & [get_env](https://github.com/huggingface/transformers/blob/eb8489971ac1415f67b0abdd1584fde8b659ced9/src/transformers/testing_utils.py#L1344) that adds tests/ in the path then makes is_deepspeed_availebl() returns True", "@ydshieh Reproduction (maybe add relevant `breakpoint()` and print `sys.path` as well):\r\n\r\n```\r\ndocker run --rm -it --gpus all nvcr.io/nvidia/pytorch:23.08-py3 /bin/bash\r\npip list | grep apex # apex is here!\r\ngit clone https://github.com/huggingface/transformers.git\r\ncd transformers\r\npip install -e .[dev-torch]\r\npytest tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_apex -s -vvvvv\r\n```", "I tried to do it in a simple way\r\n\r\n```python\r\nimport os\r\n# given by `get_env`\r\nos.environ[\"PYTHONPATH\"] = \"/transformers/src:/transformers/tests:\"\r\n# shows `/transformers/src:/transformers/tests:`\r\nprint(os.environ[\"PYTHONPATH\"])\r\nfrom transformers.deepspeed import is_deepspeed_available\r\nprint(is_deepspeed_available())\r\n```\r\n\r\nand it prints `False`. Therefore, I am not very certain about the issue. Even with `tests/`, still `False`", "@ydshieh You would need to use: `PYTHONPATH=/path/to/transformers/tests python -c \"from transformers.deepspeed import is_deepspeed_available; print(is_deepspeed_available())\"` to reproduce the issue.\r\n\r\nI believe setting `PYTHONPATH` within a python script does not change `sys.path` (or somthing like this).", "For the record, to reproduce (inside our docker env)\r\n\r\n```\r\nPYTHONPATH=/transformers/tests python3 -c \"from transformers.deepspeed import is_deepspeed_available; print(is_deepspeed_available())\"\r\n```" ]
1,695
1,695
1,695
COLLABORATOR
null
As per the title, make the DeepSpeed availability check more robust, as in https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/utils/imports.py#L72. Having a `tests/deepspeed` directory, together with [`get_env`](https://github.com/huggingface/transformers/blob/eb8489971ac1415f67b0abdd1584fde8b659ced9/src/transformers/testing_utils.py#L1344) adding `tests/` to the path, makes `is_deepspeed_available()` return `True` although it should not. In turn, trainer.py [tries to import](https://github.com/huggingface/transformers/blob/eb8489971ac1415f67b0abdd1584fde8b659ced9/src/transformers/trainer.py#L217) `DeepSpeedSchedulerWrapper`, which is [not imported in accelerate](https://github.com/huggingface/accelerate/blob/69e4c3c54da3201eda288b500d138761e7a5221c/src/accelerate/utils/__init__.py#L92) since accelerate rightfully detects that DeepSpeed is not available. This issue makes the test `tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_apex` fail when APEX is installed but DeepSpeed is not.
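A hedged sketch of the kind of check the accelerate link above uses (the exact function body here is illustrative): require an installed distribution, not just an importable module or a stray `deepspeed` directory on `sys.path`.
```python
import importlib.metadata
import importlib.util

def is_deepspeed_available() -> bool:
    # A folder named "deepspeed" on sys.path (e.g. tests/deepspeed) can satisfy find_spec,
    # so also require that a real "deepspeed" distribution is installed.
    if importlib.util.find_spec("deepspeed") is None:
        return False
    try:
        importlib.metadata.metadata("deepspeed")
        return True
    except importlib.metadata.PackageNotFoundError:
        return False
```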
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26252/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26252", "html_url": "https://github.com/huggingface/transformers/pull/26252", "diff_url": "https://github.com/huggingface/transformers/pull/26252.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26252.patch", "merged_at": 1695220815000 }
https://api.github.com/repos/huggingface/transformers/issues/26251
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26251/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26251/comments
https://api.github.com/repos/huggingface/transformers/issues/26251/events
https://github.com/huggingface/transformers/issues/26251
1,902,523,813
I_kwDOCUB6oc5xZjWl
26,251
When I use dataset streaming loading, continuing training will always wait.
{ "login": "Sakurakdx", "id": 48399040, "node_id": "MDQ6VXNlcjQ4Mzk5MDQw", "avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Sakurakdx", "html_url": "https://github.com/Sakurakdx", "followers_url": "https://api.github.com/users/Sakurakdx/followers", "following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}", "gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}", "starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions", "organizations_url": "https://api.github.com/users/Sakurakdx/orgs", "repos_url": "https://api.github.com/users/Sakurakdx/repos", "events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}", "received_events_url": "https://api.github.com/users/Sakurakdx/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Because your dataset is an iterable, you have to go through the samples in order. For now, it not possible to shortcut this process. This issue has been mentioned a couple times and I think the Huggingface team might be working on something.\r\nI personally use [MosaicML streaming](https://github.com/mosaicml/streaming) to address this issue.", "> Because your dataset is an iterable, you have to go through the samples in order. For now, it not possible to shortcut this process. This issue has been mentioned a couple times and I think the Huggingface team might be working on something. I personally use [MosaicML streaming](https://github.com/mosaicml/streaming) to address this issue.\r\n\r\nfine, thanks for your suggests", "@Hubert-Bonisseur Trying to approach the problem the same way and switching to the MosaicML streaming. Do you just use Streaming as an IterableDataset into Trainer? Wondering if there are specific modifications needed to resume_from_checkpoint with streaming. ", "If you use the Trainer, you need do to a couple changes: \r\n- You need to overwrite the get_train_dataloader to [use the StreamingDataLoader](https://github.com/mosaicml/streaming/issues/421) from mosaicML streaming\r\n- You need to save the state_dict of the dataloader at each save (use on_save callback)\r\n- The `skip_first_batches` of accelerate that works with standard Dataloader no longer does. As a result I have opted to load the StreamingDataloader state_dict in `get_train_dataloader` and I have patched skip_first_batches like this:\r\n```python\r\nimport accelerate\r\ndef skip_first_batches(dataloader, num_batches=0):\r\n return dataloader\r\naccelerate.skip_first_batches = skip_first_batches\r\n```\r\nIt is dirty but it works, I should probably open an issue to propose making the `skip_first_batches` method customizable\r\n", "Thank you this is super helpful!", "@Hubert-Bonisseur qq, when using the StreamingDataLoader with Trainer, do you prepare it with accelerate? The `accelerate.prepare()` wraps it into a `DataLoaderDispatcher` object where I can't call `save_dict()`. It seems to run when I don't call accelerate, but wondering if that will lead to bugs downstream. ", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.2 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction def get_train_dataset(self): dataset = datasets.load_dataset(self.config.train_files)["train"] data_size = len(dataset) iterable_dataset = dataset.to_iterable_dataset(num_shards=64) # faster iterable_dataset = iterable_dataset.shuffle(seed=self.config.seed, buffer_size=10000) return iterable_dataset bash train.sh --per_device_train_batch_size 100 --per_device_eval_batch_size 3 2 --dataloader_num_workers 24 --resume_from_checkpoint True ### Expected behavior I use `iterable_dataset` from `datasets` to load data in streaming mode. When I use the trainer to resume training, it spends a lot of time processing the data. Because the amount of data is very large, it takes a very long time before training actually starts. What can I do to avoid or speed up this process?
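One hedged mitigation based on what `Trainer` already exposes (the flag is real, but whether it is acceptable depends on the use case, since the resumed run will not replay the exact same data order): `ignore_data_skip` resumes immediately instead of fast-forwarding the stream to the previous position.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=100,
    dataloader_num_workers=24,
    ignore_data_skip=True,  # resume without the long catch-up pass over the iterable dataset
)
```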
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26251/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26250
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26250/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26250/comments
https://api.github.com/repos/huggingface/transformers/issues/26250/events
https://github.com/huggingface/transformers/pull/26250
1,902,486,497
PR_kwDOCUB6oc5apuUi
26,250
Keypoints 0.0 are confusing ../transformers/models/detr/image_processing_detr.py which are fixed
{ "login": "hackpk", "id": 22720175, "node_id": "MDQ6VXNlcjIyNzIwMTc1", "avatar_url": "https://avatars.githubusercontent.com/u/22720175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hackpk", "html_url": "https://github.com/hackpk", "followers_url": "https://api.github.com/users/hackpk/followers", "following_url": "https://api.github.com/users/hackpk/following{/other_user}", "gists_url": "https://api.github.com/users/hackpk/gists{/gist_id}", "starred_url": "https://api.github.com/users/hackpk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hackpk/subscriptions", "organizations_url": "https://api.github.com/users/hackpk/orgs", "repos_url": "https://api.github.com/users/hackpk/repos", "events_url": "https://api.github.com/users/hackpk/events{/privacy}", "received_events_url": "https://api.github.com/users/hackpk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi @hackpk, I am making the first review here :) ", "Hi @hackpk , :) \r\n\r\nI opened a discussion in issue #26126 trying to replicate the error. Let's discuss it there, then I continue with the review here.\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26250). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? This PR fixes the keypoints 0.0 issue in transformers/models/detr/image_processing_detr.py, as suggested by @duckheada. To fix this, I had to edit this file in the transformers library: .conda/lib/python3.9/site-packages/transformers/models/detr/image_processing_detr.py. I changed this, as suggested by @duckheada: ```python if annotations and "keypoints" in annotations[0]: keypoints = [obj["keypoints"] for obj in annotations] print("keypoints", keypoints) #TODO: remove keypoints = np.asarray(keypoints, dtype=np.float32) num_keypoints = keypoints.shape[0] keypoints = keypoints.reshape((-1, 3)) if num_keypoints else keypoints new_target["keypoints"] = keypoints[keep] ``` To this: ```python if annotations and "keypoints" in annotations[0]: keypoints = [obj["keypoints"] for obj in annotations] # Apply the keep mask here to filter the relevant annotations keypoints = [keypoints[i] for i in range(len(keypoints)) if keep[i]] # converting the filtered keypoints list to a numpy array and reshape it keypoints = np.asarray(keypoints, dtype=np.float32) num_keypoints = keypoints.shape[0] keypoints = keypoints.reshape((-1, 3)) if num_keypoints else keypoints new_target["keypoints"] = keypoints # We no longer apply keep mask here ``` Why? To ensure that the filtering applied to the keypoints respects their original structure (number of keypoints per annotation). When you reshape keypoints with keypoints.reshape((-1, 3)), you lose the information about which keypoints belong to which annotation. Here is what needed to be done (at least in my little hack-ish workaround): before reshaping the keypoints array, I had to apply the keep mask to retain only the annotations I was interested in; only after this could I reshape the keypoints array and apply further operations. So I applied the keep mask on the keypoints list before converting it into a numpy array and reshaping it. This ensures that I only keep the keypoints corresponding to the bounding boxes that satisfy the condition in the keep mask. Fixes #26126 ## Who can review? @ArthurZucker, @younesbelkada, and @amyeroberts Please let me know if I need to do anything else for this issue.
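A hedged verification sketch for the change (annotation values, image size, and checkpoint are illustrative; it assumes the processor accepts COCO-style `keypoints` fields as the snippets above imply): preprocess a dummy annotation and confirm the keypoints come back grouped per kept annotation.
```python
from PIL import Image
from transformers import DetrImageProcessor

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
image = Image.new("RGB", (640, 480))
target = {
    "image_id": 0,
    "annotations": [
        {"bbox": [10, 10, 100, 100], "category_id": 1, "area": 10000.0, "iscrowd": 0,
         "keypoints": [20.0, 20.0, 2.0] * 17, "num_keypoints": 17},
    ],
}
encoding = processor(images=image, annotations=target, return_tensors="pt")
print(encoding["labels"][0]["keypoints"].shape)  # expect (17, 3) for one kept annotation
```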
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26250/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26250", "html_url": "https://github.com/huggingface/transformers/pull/26250", "diff_url": "https://github.com/huggingface/transformers/pull/26250.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26250.patch", "merged_at": 1701682153000 }
https://api.github.com/repos/huggingface/transformers/issues/26249
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26249/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26249/comments
https://api.github.com/repos/huggingface/transformers/issues/26249/events
https://github.com/huggingface/transformers/issues/26249
1,902,455,466
I_kwDOCUB6oc5xZSqq
26,249
run_summarization.py t5 model output inconsistent results
{ "login": "menghongtao", "id": 8115797, "node_id": "MDQ6VXNlcjgxMTU3OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8115797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/menghongtao", "html_url": "https://github.com/menghongtao", "followers_url": "https://api.github.com/users/menghongtao/followers", "following_url": "https://api.github.com/users/menghongtao/following{/other_user}", "gists_url": "https://api.github.com/users/menghongtao/gists{/gist_id}", "starred_url": "https://api.github.com/users/menghongtao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/menghongtao/subscriptions", "organizations_url": "https://api.github.com/users/menghongtao/orgs", "repos_url": "https://api.github.com/users/menghongtao/repos", "events_url": "https://api.github.com/users/menghongtao/events{/privacy}", "received_events_url": "https://api.github.com/users/menghongtao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey ๐Ÿค— thanks for opening an issue! We try to keep the github issues for bugs/feature requests. \r\nCould you ask your question on the [forum](https://discuss.huggingface.co/) instead? I'm sure the community will be of help!\r\nEven better, ask the author of this model in the community tab [here](https://huggingface.co/santiviquez/t5-small-finetuned-samsum-en/discussions)", "Hi Arthur, thanks for your reply, I have ask this quention on huggingface discussion.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
NONE
null
I am working on T5 finetune model , and I found the [t5-small-finetuned-samsum-en](https://huggingface.co/santiviquez/t5-small-finetuned-samsum-en) model on huggingface and the rouge metric on https://paperswithcode.com/sota/summarization-on-samsum It shows: (I have delete other models except t5-small-finetuned-samsum-en) Rank | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-LSUM | gen_len | loss | Details | Year | Tags -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- 12 | t5-small-finetuned-samsum-en | 40.039 | 15.85 | 31.808 | 36.089 | 18.107 | 2.192 | ย  | 2022 But when I use transformers examples code run_summarization.py run the model I have download from huggingface [t5-small-finetuned-samsum-en](https://huggingface.co/santiviquez/t5-small-finetuned-samsum-en) , The result is not consistent with above. My result is: my ROUGE-1 is 3.34 but the result above is 40.039, It is very different. Here is my result: (The value should *100) ***** eval metrics ***** eval_gen_len = 12.2066 eval_loss = 7.7219 eval_rouge1_high_fmeasure = 0.0383 eval_rouge1_high_precision = 0.2029 eval_rouge1_high_recall = 0.0225 eval_rouge1_low_fmeasure = 0.0334 eval_rouge1_low_precision = 0.1792 eval_rouge1_low_recall = 0.0194 eval_rouge1_mid_fmeasure = 0.0358 eval_rouge1_mid_precision = 0.1909 eval_rouge1_mid_recall = 0.0209 eval_rouge2_high_fmeasure = 0.0021 eval_rouge2_high_precision = 0.0114 eval_rouge2_high_recall = 0.0012 eval_rouge2_low_fmeasure = 0.0011 eval_rouge2_low_precision = 0.006 eval_rouge2_low_recall = 0.0006 eval_rouge2_mid_fmeasure = 0.0016 eval_rouge2_mid_precision = 0.0086 eval_rouge2_mid_recall = 0.0009 eval_rougeL_high_fmeasure = 0.0334 eval_rougeL_high_precision = 0.1803 eval_rougeL_high_recall = 0.0197 eval_rougeL_low_fmeasure = 0.0293 eval_rougeL_low_precision = 0.1587 eval_rougeL_low_recall = 0.0171 eval_rougeL_mid_fmeasure = 0.0314 eval_rougeL_mid_precision = 0.1688 eval_rougeL_mid_recall = 0.0184 eval_rougeLsum_high_fmeasure = 0.0362 eval_rougeLsum_high_precision = 0.1951 eval_rougeLsum_high_recall = 0.0213 eval_rougeLsum_low_fmeasure = 0.0317 eval_rougeLsum_low_precision = 0.1708 eval_rougeLsum_low_recall = 0.0184 eval_rougeLsum_mid_fmeasure = 0.034 eval_rougeLsum_mid_precision = 0.1832 eval_rougeLsum_mid_recall = 0.0199 eval_runtime = 0:00:54.57 eval_samples = 818 eval_samples_per_second = 14.988 eval_steps_per_second = 14.988 And I have not change the run_summarization.py script , My shell script is: python ./run_summarization_.py \ --model_name_or_path $T5_DIR \ --do_eval \ --source_prefix "summarize: " \ --output_dir ./output/huggingface-summarization \ --dataset_name $samsum_path \ --dataset_config "3.0.0" \ --per_device_train_batch_size=1 \ --per_device_eval_batch_size=1 \ --overwrite_output_dir \ --predict_with_generate \ Can anyone help me on this issue? Thanks a lot.
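One way to narrow down a discrepancy like this is to score the checkpoint independently of `run_summarization.py`. The snippet below is a rough sanity check, not the official evaluation setup: the 8-sample slice and `max_length=64` are arbitrary choices, the `datasets` and `evaluate` packages are assumed to be installed, and loading `samsum` additionally requires `py7zr`.

```python
# Rough, independent sanity check of the checkpoint (not the official eval setup).
from datasets import load_dataset
from transformers import pipeline
import evaluate

summarizer = pipeline("summarization", model="santiviquez/t5-small-finetuned-samsum-en")
sample = load_dataset("samsum", split="test[:8]")

inputs = ["summarize: " + dialogue for dialogue in sample["dialogue"]]
preds = [out["summary_text"] for out in summarizer(inputs, max_length=64)]

rouge = evaluate.load("rouge")
print(rouge.compute(predictions=preds, references=sample["summary"]))
```

If the scores from a direct check like this look reasonable, the problem is more likely in how `--model_name_or_path` or the dataset is being passed to the script than in the checkpoint itself.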
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26249/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26248
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26248/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26248/comments
https://api.github.com/repos/huggingface/transformers/issues/26248/events
https://github.com/huggingface/transformers/pull/26248
1,902,372,501
PR_kwDOCUB6oc5apWEq
26,248
[`Trainer`] Refactor trainer + bnb logic
{ "login": "younesbelkada", "id": 49240599, "node_id": "MDQ6VXNlcjQ5MjQwNTk5", "avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4", "gravatar_id": "", "url": "https://api.github.com/users/younesbelkada", "html_url": "https://github.com/younesbelkada", "followers_url": "https://api.github.com/users/younesbelkada/followers", "following_url": "https://api.github.com/users/younesbelkada/following{/other_user}", "gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}", "starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions", "organizations_url": "https://api.github.com/users/younesbelkada/orgs", "repos_url": "https://api.github.com/users/younesbelkada/repos", "events_url": "https://api.github.com/users/younesbelkada/events{/privacy}", "received_events_url": "https://api.github.com/users/younesbelkada/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/26207 Some users try to perform pure fine-tuning on quantized models; this is simply not supported. Before this PR, we only emitted a logger.info message in that case, which is not a strong enough warning for users to understand why training does not work. cc @SunMarc @ArthurZucker
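For context, the supported route for training on top of a quantized checkpoint is to attach PEFT adapters so that only the adapter weights are trained. A rough sketch, assuming the `peft` and `bitsandbytes` packages are installed; the model name and target modules are illustrative, not prescribed by this PR:

```python
# Sketch of the supported workflow: quantized base weights stay frozen and only
# lightweight LoRA adapters are trained.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
lora_config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter parameters require gradients
```

A model prepared this way can then be handed to `Trainer` as usual, whereas passing the bare quantized model is the situation this PR now guards against.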
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26248/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26248", "html_url": "https://github.com/huggingface/transformers/pull/26248", "diff_url": "https://github.com/huggingface/transformers/pull/26248.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26248.patch", "merged_at": 1695224340000 }
https://api.github.com/repos/huggingface/transformers/issues/26247
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26247/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26247/comments
https://api.github.com/repos/huggingface/transformers/issues/26247/events
https://github.com/huggingface/transformers/pull/26247
1,902,340,836
PR_kwDOCUB6oc5apPXg
26,247
[Time series] Add PatchTSMixer
{ "login": "ajati", "id": 41211350, "node_id": "MDQ6VXNlcjQxMjExMzUw", "avatar_url": "https://avatars.githubusercontent.com/u/41211350?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ajati", "html_url": "https://github.com/ajati", "followers_url": "https://api.github.com/users/ajati/followers", "following_url": "https://api.github.com/users/ajati/following{/other_user}", "gists_url": "https://api.github.com/users/ajati/gists{/gist_id}", "starred_url": "https://api.github.com/users/ajati/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ajati/subscriptions", "organizations_url": "https://api.github.com/users/ajati/orgs", "repos_url": "https://api.github.com/users/ajati/repos", "events_url": "https://api.github.com/users/ajati/events{/privacy}", "received_events_url": "https://api.github.com/users/ajati/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26247). All of your documentation changes will be reflected on that endpoint.", "depends on #25927 ", "I am not sure I understand should I review this ? It's all red, while the other PR does not seem ready either (a lot of recent commits + red as well) ? \r\n", "ok @ArthurZucker let me have a look! will ping you when ready", "> Main comments:\r\n> \r\n> * updating docstrings to HF format\r\n> * use clear variable names everywhere\r\n> * use copied from wherever possible, especially after [[time series] Add PatchTSTย #25927](https://github.com/huggingface/transformers/pull/25927) is merged\r\n> * do not use `context_values` if we already use the term `past_values` for all our time series models (keep in mind we can't change this afterwards)\r\n\r\nHi @NielsRogge - Thanks for your inputs. We have completed all the suggested changes. Requesting your review. Feel free to let us know if any issue is missed out. Only pending step is \"copied from PatchTST\" which we will bring it back as soon as PatchTST is merged.. CC: @kashif @ajati @namctin ", "Hi @patrickvonplaten @NielsRogge - We have completed most of the requested code changes. Requesting your review and approval if things are in place. \r\n\r\nAll test-cases are passing now. \r\n\r\nCopied from PatchTST is still pending and will be done as soon as the PatchTST branch is merged.\r\n\r\nCC: @kashif @namctin @ajati ", "I'll review ๐Ÿ˜‰ ", "> I don't see a conversion script so I suppose the idea was to match 1-1 the original structure but let's avoid this if possible. The code would be a lot cleaner if we have 1 class for each mode mixing. Keep it simple and isolate what can be done in a single class, then use a mapping to get the properclass. Otherwise there is too much granularity, and too many if else controlled by config arguments. It's hard to follow and not really transformers like. It's fine when it's one layer but here it's for all forward all layers. `PatchTSMixerLinearHead` Is fine with me but there are ways to make it a bit readable\r\n> \r\n> Moreover depending on the mode it's most often just the input / output shape that changes. So for this we should control the linear with this arg see for example this function\r\n> \r\n> ```python\r\n> if self.mode in [\"common_channel\", \"mix_channel\"]:\r\n> self.base_pt_block = nn.Sequential(\r\n> nn.Dropout(head_dropout),\r\n> nn.Linear(num_features, patch_len),\r\n> )\r\n> else:\r\n> self.base_pt_block = nn.Sequential(\r\n> nn.Dropout(head_dropout),\r\n> nn.Linear(num_features, patch_len * num_input_channels),\r\n> )\r\n> ```\r\n> \r\n> Sorry for the time sensitivity and the late review! Open to discussions if these models are super good, would be a pity not to make them into a reference implementation!\r\n\r\nYes. We removed \"flatten\" mode which is just a baseline and not used mostly. Now, we do not have else-if block in every forward call. ", "Hi @ArthurZucker - We have addressed most of your comments and also the comments from other reviewers (@NielsRogge @patrickvonplaten)\r\n\r\n\r\nTODO: CopiedFrom from PatchTST pending after its merge.\r\n\r\nCC: @kashif @ajati @namctin ", "> Looks a lot better thanks all for bearing with me ๐Ÿค— In principle looks good to me, left a few nits and will have to wait for other PR to be merged. I'll let @amyeroberts handle the last review and merge as I'm off for next week! Good work!\r\n\r\nSure. Thank you @ArthurZucker for your comments. 
We will resolve the newer comments soon and will wait for the other PR to get merged.", "> Main comments:\r\n> \r\n> * updating docstrings to HF format\r\n> * use clear variable names everywhere\r\n> * use copied from wherever possible, especially after [[time series] Add PatchTSTย #25927](https://github.com/huggingface/transformers/pull/25927) is merged\r\n> * do not use `context_values` if we already use the term `past_values` for all our time series models (keep in mind we can't change this afterwards)\r\n\r\nAll corrections completed", "@amyeroberts ready for review", "@kashif @ajati Holding off on a review until PatchTST has been merged. There's quite a few comments I would make e.g. removing the `PatchTSTTranspose` which have been [applied to PatchTST](https://github.com/huggingface/transformers/pull/25927#discussion_r1387015192) and will be reflected there after rebasing, running `make fix-copies` and updating the code. ", "All the suggested changes from PatchTST, copied from are resolved in this branch too.. Ready for review and testcases passing now.", "@amyeroberts - Greetings! We have enabled all the changes suggested in PatchTST and also the corrections suggested in this PR from the past reviewers. Requesting your review and approval.", "Hi @ArthurZucker -Thanks for approving this PR. We have resolved all the final changes you mentioned. Please review and help with merge, if all good.\r\n\r\nPS: change with respect to adding segment names to docstring is the only one comment pending. When we add segment names in docstring following the syntax ssuggested - it getting auto removed during make fix-ups. Other than this - all other comments are resolved.", "@vijaye12 thanks for bearing with us in this long review! and congrats for the merge! ๐Ÿš€ " ]
1,695
1,701
1,701
CONTRIBUTOR
null
[PatchTSMixer](https://arxiv.org/pdf/2306.09364.pdf) ([KDD 2023](https://dl.acm.org/doi/abs/10.1145/3580305.3599533)) is a lightweight time-series modeling approach based on the MLP-Mixer architecture. In this Hugging Face implementation, we provide PatchTSMixer's capabilities to facilitate lightweight mixing across patches, channels, and hidden features for effective multivariate time-series modeling. It also supports various attention mechanisms, ranging from simple gated attention to more complex self-attention blocks, which can be customized accordingly. The model can be pretrained and subsequently used for various downstream tasks such as forecasting, classification, and regression. @kashif Done: ~~TODOs~~ - [x] Add generate method - [x] Make pretrained dataset publicly available - [x] Make pretrained weights publicly available
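To give a feel for the intended usage, here is a rough forecasting sketch. The class and argument names (`PatchTSMixerConfig`, `PatchTSMixerForPrediction`, `past_values`, `prediction_outputs`) are assumed from this PR discussion and may differ slightly from the merged API; the shapes and channel count are arbitrary.

```python
import torch
from transformers import PatchTSMixerConfig, PatchTSMixerForPrediction

# Randomly initialized toy model: 3 input channels, 64-step context, 16-step horizon.
config = PatchTSMixerConfig(context_length=64, prediction_length=16, num_input_channels=3)
model = PatchTSMixerForPrediction(config)

past_values = torch.randn(2, 64, 3)        # (batch, context_length, num_input_channels)
outputs = model(past_values=past_values)
print(outputs.prediction_outputs.shape)    # expected: (2, 16, 3)
```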
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26247/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26247", "html_url": "https://github.com/huggingface/transformers/pull/26247", "diff_url": "https://github.com/huggingface/transformers/pull/26247.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26247.patch", "merged_at": 1701786695000 }
https://api.github.com/repos/huggingface/transformers/issues/26246
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26246/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26246/comments
https://api.github.com/repos/huggingface/transformers/issues/26246/events
https://github.com/huggingface/transformers/pull/26246
1,902,322,303
PR_kwDOCUB6oc5apLVT
26,246
🌐 [i18n-KO] Translated `debugging.md` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26246). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? @kj021 translated the `debugging.md` file of the documentation to Korean. I made sure hanging suggestions were resolved. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 Moved from https://github.com/huggingface/transformers/pull/24869 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [ ] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> <!-- Team OSSCA, may you please review this PR? @bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @Sunmin0520, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @stevhliu -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26246/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26246/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26246", "html_url": "https://github.com/huggingface/transformers/pull/26246", "diff_url": "https://github.com/huggingface/transformers/pull/26246.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26246.patch", "merged_at": 1695847664000 }
https://api.github.com/repos/huggingface/transformers/issues/26245
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26245/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26245/comments
https://api.github.com/repos/huggingface/transformers/issues/26245/events
https://github.com/huggingface/transformers/pull/26245
1,902,303,694
PR_kwDOCUB6oc5apHHW
26,245
🌐 [i18n-KO] Translated `big_models.md` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26245). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,697
1,697
CONTRIBUTOR
null
# What does this PR do? @bolizabeth translated the `big_models.md` file of the documentation to Korean. I made sure hanging suggestions were resolved. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 Moved from https://github.com/huggingface/transformers/pull/24985 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> <!-- Team OSSCA, may you please review this PR? @bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @Sunmin0520, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @stevhliu -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26245/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26245/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26245", "html_url": "https://github.com/huggingface/transformers/pull/26245", "diff_url": "https://github.com/huggingface/transformers/pull/26245.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26245.patch", "merged_at": 1697148013000 }
https://api.github.com/repos/huggingface/transformers/issues/26244
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26244/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26244/comments
https://api.github.com/repos/huggingface/transformers/issues/26244/events
https://github.com/huggingface/transformers/pull/26244
1,902,291,549
PR_kwDOCUB6oc5apEd-
26,244
🌐 [i18n-KO] Translated `perf_train_gpu_many.md` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26244). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? @hyunhp translated the `perf_train_gpu_many.md` file of the documentation to Korean. I made sure hanging suggestions were resolved. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 Moved from https://github.com/huggingface/transformers/pull/24983 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> <!-- Team OSSCA, may you please review this PR? @bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @Sunmin0520, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @stevhliu -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26244/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26244/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26244", "html_url": "https://github.com/huggingface/transformers/pull/26244", "diff_url": "https://github.com/huggingface/transformers/pull/26244.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26244.patch", "merged_at": 1695847876000 }
https://api.github.com/repos/huggingface/transformers/issues/26243
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26243/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26243/comments
https://api.github.com/repos/huggingface/transformers/issues/26243/events
https://github.com/huggingface/transformers/pull/26243
1,902,275,046
PR_kwDOCUB6oc5apA7o
26,243
🌐 [i18n-KO] Translated `tokenizer_summary.md` to Korean
{ "login": "wonhyeongseo", "id": 29195190, "node_id": "MDQ6VXNlcjI5MTk1MTkw", "avatar_url": "https://avatars.githubusercontent.com/u/29195190?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wonhyeongseo", "html_url": "https://github.com/wonhyeongseo", "followers_url": "https://api.github.com/users/wonhyeongseo/followers", "following_url": "https://api.github.com/users/wonhyeongseo/following{/other_user}", "gists_url": "https://api.github.com/users/wonhyeongseo/gists{/gist_id}", "starred_url": "https://api.github.com/users/wonhyeongseo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wonhyeongseo/subscriptions", "organizations_url": "https://api.github.com/users/wonhyeongseo/orgs", "repos_url": "https://api.github.com/users/wonhyeongseo/repos", "events_url": "https://api.github.com/users/wonhyeongseo/events{/privacy}", "received_events_url": "https://api.github.com/users/wonhyeongseo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26243). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,696
1,696
CONTRIBUTOR
null
# What does this PR do? @HanNayeoniee translated the `tokenizer_summary.md` file of the documentation to Korean. I made sure hanging suggestions were resolved. Thank you in advance for your review. Part of https://github.com/huggingface/transformers/issues/20179 Moved from https://github.com/huggingface/transformers/pull/25023 ## Before reviewing - [x] Check for missing / redundant translations (๋ฒˆ์—ญ ๋ˆ„๋ฝ/์ค‘๋ณต ๊ฒ€์‚ฌ) - [x] Grammar Check (๋งž์ถค๋ฒ• ๊ฒ€์‚ฌ) - [x] Review or Add new terms to glossary (์šฉ์–ด ํ™•์ธ ๋ฐ ์ถ”๊ฐ€) - [x] Check Inline TOC (e.g. `[[lowercased-header]]`) - [x] Check live-preview for gotchas (live-preview๋กœ ์ •์ƒ์ž‘๋™ ํ™•์ธ) ## Who can review? (Initial) <!-- 1. ์œ„ ์ฒดํฌ๊ฐ€ ๋ชจ๋‘ ์™„๋ฃŒ๋œ ๋’ค์—, ์ด ์•„๋ž˜์— ๋ฆฌ๋ทฐ๋ฅผ ์š”์ฒญํ•  ํŒ€์›๋“ค์„ ๋ฉ˜์…˜ํ•ด์ฃผ์„ธ์š”! --> <!-- Team OSSCA, may you please review this PR? @bolizabeth, @nuatmochoi, @heuristicwave, @mjk0618, @keonju2, @harheem, @HongB1, @junejae, @54data, @Sunmin0520, @seank021, @augustinLib, @sronger, @TaeYupNoh, @kj021, @eenzeenee --> Team PseudoLab, may you please review this PR? @0525hhgus, @KIHOON71, @sim-so, @gabrielwithappy, @HanNayeoniee, @wonhyeongseo, @jungnerd ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? (Final) <!-- 2. ํŒ€์›๋“ค๊ณผ ๋ฆฌ๋ทฐ๊ฐ€ ๋๋‚œ ํ›„์—๋งŒ ํ—ˆ๊น…ํŽ˜์ด์Šค ์ง์›๋“ค์—๊ฒŒ ๋ฆฌ๋ทฐ ์š”์ฒญํ•˜๋Š” ์•„๋ž˜ ์ฃผ์„์„ ๋…ธ์ถœํ•ด์ฃผ์„ธ์š”! --> <!-- May you please review this PR? @sgugger, @ArthurZucker, @stevhliu -->
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26243/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26243/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26243", "html_url": "https://github.com/huggingface/transformers/pull/26243", "diff_url": "https://github.com/huggingface/transformers/pull/26243.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26243.patch", "merged_at": 1696265733000 }
https://api.github.com/repos/huggingface/transformers/issues/26242
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26242/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26242/comments
https://api.github.com/repos/huggingface/transformers/issues/26242/events
https://github.com/huggingface/transformers/issues/26242
1,902,199,616
I_kwDOCUB6oc5xYUNA
26,242
WhisperForCTC
{ "login": "DavraYoung", "id": 33338429, "node_id": "MDQ6VXNlcjMzMzM4NDI5", "avatar_url": "https://avatars.githubusercontent.com/u/33338429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavraYoung", "html_url": "https://github.com/DavraYoung", "followers_url": "https://api.github.com/users/DavraYoung/followers", "following_url": "https://api.github.com/users/DavraYoung/following{/other_user}", "gists_url": "https://api.github.com/users/DavraYoung/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavraYoung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavraYoung/subscriptions", "organizations_url": "https://api.github.com/users/DavraYoung/orgs", "repos_url": "https://api.github.com/users/DavraYoung/repos", "events_url": "https://api.github.com/users/DavraYoung/events{/privacy}", "received_events_url": "https://api.github.com/users/DavraYoung/received_events", "type": "User", "site_admin": false }
[ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
open
false
null
[]
[ "Regarding the accuracy and if the model actually work: I was able to achieve decent accuracy (WER: 7% on my test dataset with only 10% of my dataset(100k audios), starting from WhisperForConditionalGeneration Encoder checkpoint).", "Adding as a feature request FYI @sanchit-gandhi ", "That's very cool @DavraYoung! Did you build the CTC tokenizer yourself as well? And how did `WhisperForCTC` compare to `WhisperForConditionalGeneration` when fine-tuned on your dataset? We could run very fast training by freezing the entire encoder block and only fine-tuning the CTC head ๐Ÿ‘€", "@sanchit-gandhi hi, \r\nRegarding tokenizer: I used wav2vec2 tokenizer with custom vocab(Latin lowercase alphabet + ' , like in wav2vec2 finetuning tutorial.\r\n\r\nRegarding performance:\r\nI cannot directly compare the models right now, since I trained WhisperForConditionalGeneration on slightly different dataset(some entries are not present) and some entries from my current validation dataset were present in training data of WhisperForConditionalGeneration.\r\n\r\nRegarding the actual performance on the unseen dataset, I think, WhisperForConditionalGeneration is much better than any CTC based model, especially when given previous context/prompt, but it requires good dataset with long audios with enough previous text context. CTC head based models on the other hand does not require diverse lengths dataset and may operate on smaller audios, like in my current dataset. Thats why I was investigating CTC head with WhisperEncoder\r\n\r\nIf its needed I can spend some time on training whisper-ctc-medium-960h librispeech English model with Wav2vec2-base-960h tokenizer vocab.", "After playing around with such model, I found issues with degraded performance on validation dataset.\r\n\r\nMy training setup:\r\n`openai/whisper-large` encoder + new ctc head.\r\n400 hours of uzbek short audios.\r\nModified encoder with partial positional encodings\r\n\r\n\r\nI couldnt make freezed version to converge. So I switched to unfreezed version training.\r\nModel was showing good results on 128 Testing samples.\r\nAfter training for 2 epochs I ran the model on validation dataset and unfortunately it has shown worse results that wav2vec2forCtc on the same 2048 clips dataset.\r\n\r\nI modified the positional embeddings in encoder:\r\n```python\r\ninputs_embeds = inputs_embeds.permute(0, 2, 1)\r\nembed_pos = self.embed_positions.weight\r\n# reduce embed_pos to the same shape as inputs_embeds\r\nembed_pos = embed_pos[: inputs_embeds.shape[1], :]\r\n```\r\nprobably they could cause the issue.\r\n\r\nDuring training I observe overall good loss: 0.03-0.07 (3-7%), but I repeatedly see loss jumping to 20-40%. Checked audios with that sample entries, they are fine. It seems like model fails to understand certain features.\r\n\r\n\r\nHere is the list of my models trained on the same dataset(except phone versions). 
\r\nPhone versions were trained on top of general model with random audio sample rate reduction + 10 hours of real phone data\r\n| Model Name | Average CER | Average WER |\r\n|---------------------------------------|-------------|-------------|\r\n| general-wav2vec2-1b-overfited-11 | 0.004 | 0.029 |\r\n| general-wav2vec2-2b-10-07-2023 | 0.008 | 0.054 |\r\n| general-medium-wav2vec2-conformer | 0.014 | 0.091 |\r\n| mixed-wav2vec-2b-01-09-2023 | 0.014 | 0.086 |\r\n| general-wav2vec2-medium-14-09-2023 | 0.023 | 0.139 |\r\n| general-wav2vec2-small-13-09-2023 | 0.056 | 0.305 |\r\n| general-wav2vec2-100m-11-09-2023 | 0.169 | 0.723 |\r\n| general-whisper-large-ctc-18-09-2023 | 0.178 | 0.197 |\r\n| phone-whisper-large-ctc-18-09-2023 | 0.187 | 0.249 |\r\n| general-whisper-medium-ctc-18-09-2023 | 0.191 | 0.265 |\r\n| general-whisper-ultra-ctc-18-09-2023 | 0.19 | 0.26 |\r\n\r\n\r\n", "Hey @DavraYoung! Thanks for explaining a bit more about how you're using this model. My only concern with adding this to the Transformers' library is that it's a bit of an 'un-official' implementation? That is to say, there are no official pre-trained weights available for the model. \r\n\r\nHaving pre-trained weights and reference code are general pre-requisites for adding a model to Transformers. My view is that Whisper for CTC is a nice extension of the Whisper Transformers code to a different modelling paradigm. However, it's not one that is natively supported in the original Whisper library, or contains official weights on the Hub. Hence, it could be a nice example that you share in a standalone repo of your own? Or showcased on the Hub with your fine-tuned weights?\r\n\r\nRegarding CTC vs Enc-Dec, there are some nice comparisons in the ESB paper: https://arxiv.org/abs/2210.13352\r\n\r\nNotably, CTC performs worse than Enc-Dec with punctuation and casing, and makes more frequent spelling errors. It's these aspects of robustness that make Whisper Enc-Dec such an appealing model for ASR, so I think promoting the Enc-Dec architecture is sensible here.\r\n\r\nIt's hard to comment on your training results without any information / statistics on the training data or task, but they seem to suggest that Wav2Vec2 is a more promising approach for encoder only CTC decoding.", "@DavraYoung \r\nThanks for sharing great work!\r\ncan I use utilize your codes to compare with other CTC based models,\r\nlike HuBERT / WavLM / XLS-R / MMS ?\r\n\r\nas @sanchit-gandhi mentioned, comparing the results with other CTC based models will give the WhisperEncoderForCTC a fairer comparison!", "@cjw414 yes, no problem.\r\nI would also recommend to change the encoder embeddings to work properly with 'longest' padding.\r\n```python\r\n# reduce embed_pos to the same shape as inputs_embeds\r\nembed_pos = embed_pos[: inputs_embeds.shape[1], :]\r\n```\r\nhttps://github.com/huggingface/transformers/blob/e469be340673d1f6931eb22562efd2be7f5a5b8d/src/transformers/models/whisper/modeling_whisper.py#L902\r\n\r\nOtherwise you will need to use WhisperFeatureExtractor with padded audio length to 30s, which may impact training speed, if your average audio length is short.", "thx, but I had my personal experience of Whisper not functioning well if it given short audios without padding.\r\nmaybe I did something wrong, but I might just start with paddding with 30s\r\n\r\nWill notice you if I get some findings!", "I also found that in my Distil-Whisper experiments padding to 30s worked better than non-padding! 
Likely because the model is pre-trained on such a vast amount of data that it ends up working so strongly on padded inputs", "> I also found that in my Distil-Whisper experiments padding to 30s worked better than non-padding! Likely because the model is pre-trained on such a vast amount of data that it ends up working so strongly on padded inputs\r\n\r\nyup probably this might be one of the obstacles for Whisper from slow inference speed\r\nbtw, I really liked your distil-whisper!!" ]
1,695
1,704
null
NONE
null
### Feature request Request to add WhisperForCTC model. ### Motivation it would be cool if we had custom WhisperForCTC with Whisper encoder and ctc head, just like Wav2vec2, but since whisper is based on mel spectograms, I think it may bring better results. ### Your contribution Here is my implementation, I mostly copied from Wav2vec2ForCTC NOTE: there is TODO that needs to be resolved, I didnt test that part, since whisper operates with transposed hidden_states shape ```python _HIDDEN_STATES_START_POSITION = 2 class ExtendedWhisperConfig(WhisperConfig): def __init__( self, ctc_loss_reduction: str = "mean", final_dropout: float = 0.0, ctc_zero_infinity: bool = False, **kwargs, ): super().__init__(**kwargs) self.ctc_loss_reduction = ctc_loss_reduction self.final_dropout = final_dropout self.ctc_zero_infinity = ctc_zero_infinity class WhisperEncoderForCTC(WhisperPreTrainedModel): config_class = ExtendedWhisperConfig def __init__(self, config): super().__init__(config) self.encoder = WhisperEncoder(config) self.dropout = nn.Dropout(config.final_dropout) if config.vocab_size is None: raise ValueError( f"You are trying to instantiate {self.__class__} with a configuration that " "does not define the vocabulary size of the language model head. Please " "instantiate the model as follows: `WhisperEncoderForCTC.from_pretrained(..., vocab_size=vocab_size)`. " "or define `vocab_size` of your model's configuration." ) output_hidden_size = ( config.output_hidden_size if hasattr(config, "add_adapter") and config.add_adapter else config.hidden_size ) self.lm_head = nn.Linear(output_hidden_size, config.vocab_size) # Initialize weights and apply final processing self.post_init() def freeze_base_model(self): """ Calling this function will disable the gradient computation for the base model so that its parameters will not be updated during training. Only the classification head will be updated. """ for param in self.encoder.parameters(): param.requires_grad = False def forward( self, input_features: Optional[torch.Tensor], attention_mask: Optional[torch.Tensor] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None, labels: Optional[torch.Tensor] = None, ) -> Union[Tuple, CausalLMOutput]: r""" labels (`torch.LongTensor` of shape `(batch_size, target_length)`, *optional*): Labels for connectionist temporal classification. Note that `target_length` has to be smaller or equal to the sequence length of the output logits. Indices are selected in `[-100, 0, ..., config.vocab_size - 1]`. All labels set to `-100` are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size - 1]`. 
""" return_dict = ( return_dict if return_dict is not None else self.config.use_return_dict ) outputs = self.encoder( input_features, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) hidden_states = outputs[0] hidden_states = self.dropout(hidden_states) logits = self.lm_head(hidden_states) loss = None if labels is not None: if labels.max() >= self.config.vocab_size: raise ValueError( f"Label values must be <= vocab_size: {self.config.vocab_size}" ) attention_mask = ( attention_mask if attention_mask is not None else torch.ones_like(input_features.transpose(1, 2), dtype=torch.long) ) # TODO: check if this is correct input_lengths = self._get_feat_extract_output_lengths( attention_mask.sum(-1) ).to(torch.long) # assuming that padded tokens are filled with -100 # when not being attended to labels_mask = labels >= 0 target_lengths = labels_mask.sum(-1) flattened_targets = labels.masked_select(labels_mask) # ctc_loss doesn't support fp16 log_probs = nn.functional.log_softmax( logits, dim=-1, dtype=torch.float32 ).transpose(0, 1) with torch.backends.cudnn.flags(enabled=False): loss = nn.functional.ctc_loss( log_probs, flattened_targets, input_lengths, target_lengths, blank=self.config.pad_token_id, reduction=self.config.ctc_loss_reduction, zero_infinity=self.config.ctc_zero_infinity, ) if not return_dict: output = (logits,) + outputs[_HIDDEN_STATES_START_POSITION:] return ((loss,) + output) if loss is not None else output return CausalLMOutput( loss=loss, logits=logits, hidden_states=outputs.hidden_states, attentions=outputs.attentions, ) ```
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26242/timeline
null
null
null
https://api.github.com/repos/huggingface/transformers/issues/26241
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26241/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26241/comments
https://api.github.com/repos/huggingface/transformers/issues/26241/events
https://github.com/huggingface/transformers/issues/26241
1,902,189,761
I_kwDOCUB6oc5xYRzB
26,241
WhisperFeatureExtractor padding='longest' cause whisper model to fail.
{ "login": "DavraYoung", "id": 33338429, "node_id": "MDQ6VXNlcjMzMzM4NDI5", "avatar_url": "https://avatars.githubusercontent.com/u/33338429?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DavraYoung", "html_url": "https://github.com/DavraYoung", "followers_url": "https://api.github.com/users/DavraYoung/followers", "following_url": "https://api.github.com/users/DavraYoung/following{/other_user}", "gists_url": "https://api.github.com/users/DavraYoung/gists{/gist_id}", "starred_url": "https://api.github.com/users/DavraYoung/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DavraYoung/subscriptions", "organizations_url": "https://api.github.com/users/DavraYoung/orgs", "repos_url": "https://api.github.com/users/DavraYoung/repos", "events_url": "https://api.github.com/users/DavraYoung/events{/privacy}", "received_events_url": "https://api.github.com/users/DavraYoung/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hey @DavraYoung! The behaviour you've encountered here is the way the Whisper model gets around dealing with padded/truncated inputs: all input audios are padded/truncated to 30 seconds, regardless of their length, before being converted to log-mel spectrogram inputs. The model is then trained **without** an attention mask. Instead, it learns to ignore the padded inputs from the spectrogram inputs directly.\r\n\r\nAt inference time, we have to match the paradigm the model was trained on, i.e. always pad/truncate audios to 30 seconds. This is why the feature extractor and positional embeddings always expect log-mel spectrograms with a sequence length of 1500, which corresponds to 30 seconds of audio input.\r\n\r\nYou'll find that the OpenAI Whisper implementation also forces the inputs to always be 30 seconds. The Transformers' implementation thus matches this for strict one-to-one equivalence.\r\n\r\nIf you're interested in passing shorter log-mels, you can set the corresponding attribute in the feature extractor, and slice the positional embeddings to the required length.\r\n\r\nHere's a codesnippet on how you can achieve this, slicing to a sequence length of 500 (corresponding to 10 seconds of audio input): https://github.com/sanchit-gandhi/codesnippets/blob/main/whisper-reduce-context.ipynb\r\n\r\nThere's a justification for why we don't slice on-the-fly here: https://github.com/huggingface/transformers/issues/25744#issuecomment-1703112076", "Hey @DavraYoung - did the above explanation help with tackling your issue?", "Hi @sanchit-gandhi\nIf you mean WhisperCTC model implementation, then no, it didn't help.\nThough I tried training it only with padding=\"longest\" and with modified Encoder. But I think it should not affect the accuracy much\n\nI will have time to come back to the experiments with this model in 2 weeks", "Is it ok if we close the issue given that we're keeping the Whisper input context length fixed? We can continue to discuss Whisper CTC on the other dedicated issue thread!" ]
1,695
1,696
1,696
NONE
null
### System Info Hi, I have found huge memory consumption in my WhisperForAudioClassification model even when I supplied short audios. It turns out WhisperFeatureExtractor always pads features to 30s chunks, even if my audio is only 200ms long (I was doing per-word speaker embeddings). Then I tried specifying padding='longest', which should not pad the features, but it turns out WhisperEncoder does not support a dynamic number of embeddings, causing it to fail: How I solved the problem: ```python # reduce embed_pos to the same shape as inputs_embeds embed_pos = embed_pos[: inputs_embeds.shape[1], :] ``` https://github.com/huggingface/transformers/blob/e469be340673d1f6931eb22562efd2be7f5a5b8d/src/transformers/models/whisper/modeling_whisper.py#L902 ### Who can help? @sanchit-gandhi ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction How to reproduce the issue: ```python from transformers import WhisperForConditionalGeneration, WhisperFeatureExtractor print("np_audio.shape=", np_audio.shape) # np_audio.shape= (16000,) 16k samples in 1s feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") features = feature_extractor(np_audio, sampling_rate=16000, return_tensors="pt", padding="longest").input_features model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small") print("features.shape=", features.shape) # features.shape= torch.Size([1, 80, 100]). batch_size=1, feature_size=80, seq_len=100 model(features) ``` In my case I supplied 1000ms of audio, which has 16k samples, and the model threw an error: ```log RuntimeError: The size of tensor a (50) must match the size of tensor b (1500) at non-singleton dimension 1 File C:\projects\stt\venv\lib\site-packages\transformers\models\whisper\modeling_whisper.py:902, in WhisperEncoder.forward(self, input_features, attention_mask, head_mask, output_attentions, output_hidden_states, return_dict) 899 inputs_embeds = inputs_embeds.permute(0, 2, 1) 900 embed_pos = self.embed_positions.weight --> 902 hidden_states = inputs_embeds + embed_pos 903 hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) 905 encoder_states = () if output_hidden_states else None ``` ### Expected behavior I expect the model to produce output without an error, at least when working with a classification head.
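For reference, here is a minimal sketch of the supported default path discussed in the replies, where the feature extractor pads to 30 s (3000 mel frames, 1500 encoder positions) and no shape mismatch occurs. The 1-second silent array is a stand-in for real audio.

```python
import numpy as np
from transformers import WhisperFeatureExtractor, WhisperForConditionalGeneration

np_audio = np.zeros(16000, dtype=np.float32)  # 1 s of silence, illustrative only
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
# Default behaviour: pad/truncate to 30 s, so the mel spectrogram is always (80, 3000)
features = feature_extractor(np_audio, sampling_rate=16000, return_tensors="pt").input_features
print(features.shape)  # torch.Size([1, 80, 3000])

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
generated = model.generate(features, max_new_tokens=16)  # no size mismatch with default padding
```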
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26241/timeline
not_planned
null
null
https://api.github.com/repos/huggingface/transformers/issues/26240
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26240/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26240/comments
https://api.github.com/repos/huggingface/transformers/issues/26240/events
https://github.com/huggingface/transformers/pull/26240
1,902,185,276
PR_kwDOCUB6oc5aotvE
26,240
Fixed unclosed p tags
{ "login": "HanSeokhyeon", "id": 38755868, "node_id": "MDQ6VXNlcjM4NzU1ODY4", "avatar_url": "https://avatars.githubusercontent.com/u/38755868?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HanSeokhyeon", "html_url": "https://github.com/HanSeokhyeon", "followers_url": "https://api.github.com/users/HanSeokhyeon/followers", "following_url": "https://api.github.com/users/HanSeokhyeon/following{/other_user}", "gists_url": "https://api.github.com/users/HanSeokhyeon/gists{/gist_id}", "starred_url": "https://api.github.com/users/HanSeokhyeon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HanSeokhyeon/subscriptions", "organizations_url": "https://api.github.com/users/HanSeokhyeon/orgs", "repos_url": "https://api.github.com/users/HanSeokhyeon/repos", "events_url": "https://api.github.com/users/HanSeokhyeon/events{/privacy}", "received_events_url": "https://api.github.com/users/HanSeokhyeon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26240). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? Fixed unclosed p tags ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26240/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26240/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26240", "html_url": "https://github.com/huggingface/transformers/pull/26240", "diff_url": "https://github.com/huggingface/transformers/pull/26240.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26240.patch", "merged_at": 1695407968000 }
https://api.github.com/repos/huggingface/transformers/issues/26239
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26239/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26239/comments
https://api.github.com/repos/huggingface/transformers/issues/26239/events
https://github.com/huggingface/transformers/issues/26239
1,902,114,655
I_kwDOCUB6oc5xX_df
26,239
CodeLlama Tokenizer encoding bug
{ "login": "jcao-ai", "id": 8946363, "node_id": "MDQ6VXNlcjg5NDYzNjM=", "avatar_url": "https://avatars.githubusercontent.com/u/8946363?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcao-ai", "html_url": "https://github.com/jcao-ai", "followers_url": "https://api.github.com/users/jcao-ai/followers", "following_url": "https://api.github.com/users/jcao-ai/following{/other_user}", "gists_url": "https://api.github.com/users/jcao-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcao-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcao-ai/subscriptions", "organizations_url": "https://api.github.com/users/jcao-ai/orgs", "repos_url": "https://api.github.com/users/jcao-ai/repos", "events_url": "https://api.github.com/users/jcao-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/jcao-ai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "can confirm the inconsistency cc @pacman100 @ArthurZucker, minor differences in tokenization may not significantly impact the final results though.", "I'll have a look, LlamaTokenizer does not seem to suffer from this so should be quick to fix! " ]
1,695
1,695
1,695
NONE
null
### System Info transformers: 4.33.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction There is one more space than in the original text, and there is a mismatch in the `non-fast` version: ```python >>> from transformers import AutoTokenizer >>> txt = '''[INST] # Python\ndef fibonacci(n): </s><s>[/INST]''' >>> t_fast = AutoTokenizer.from_pretrained('/models/CodeLlama-34b-Instruct-hf', use_fast=True) Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. >>> t_slow = AutoTokenizer.from_pretrained('/models/CodeLlama-34b-Instruct-hf', use_fast=False) Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. >>> >>> t_fast.decode(t_fast.encode(txt)) '<s> [INST] # Python\ndef fibonacci(n): </s><s> [/INST]' >>> >>> t_slow.decode(t_slow.encode(txt)) '<s> [INST] # Python\ndef fibonacci(n): </s><s>/INST]' ``` ### Expected behavior The decoded text should be the same as the original text.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26239/timeline
completed
null
null
https://api.github.com/repos/huggingface/transformers/issues/26238
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26238/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26238/comments
https://api.github.com/repos/huggingface/transformers/issues/26238/events
https://github.com/huggingface/transformers/pull/26238
1,902,045,166
PR_kwDOCUB6oc5aoP0a
26,238
fixing + testing center_crop, so it can accept odd sizes
{ "login": "rafaelpadilla", "id": 31217453, "node_id": "MDQ6VXNlcjMxMjE3NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/31217453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafaelpadilla", "html_url": "https://github.com/rafaelpadilla", "followers_url": "https://api.github.com/users/rafaelpadilla/followers", "following_url": "https://api.github.com/users/rafaelpadilla/following{/other_user}", "gists_url": "https://api.github.com/users/rafaelpadilla/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafaelpadilla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafaelpadilla/subscriptions", "organizations_url": "https://api.github.com/users/rafaelpadilla/orgs", "repos_url": "https://api.github.com/users/rafaelpadilla/repos", "events_url": "https://api.github.com/users/rafaelpadilla/events{/privacy}", "received_events_url": "https://api.github.com/users/rafaelpadilla/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26238). All of your documentation changes will be reflected on that endpoint.", "> Thanks for fixing!\r\n> \r\n> This is likely going to result in changes in outputs for some users, in particular for CLIP which is our most used vision model. As it's fixing logic I think this OK but we should be aware of this for any future issues. @rafaelpadilla could you run the slow model tests for CLIP and any other model which uses center_crop in its image processor before merging?\r\n\r\nI identified [these 48](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers%20Processor(BaseImageProcessor)%3A&type=code) XXXImageProcessors that inherit from [BaseImageProcessor](https://github.com/huggingface/transformers/blob/main/src/transformers/image_processing_utils.py#L540), which may potentially use the modified `center_crop`. - CLIP is included there.\r\n\r\nRunning the tests for all of them `RUN_SLOW=1 pytest tests/models/xxxxx/image_processing_xxxxx.py`. Will put here the results. :crossed_fingers: ", "@rafaelpadilla You only need to run the slow tests for models whose image processors use `center_crop` in their `preprocess` method - [which should be ~20 models](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+path%3Amodels%2F*%2Fimage_processing_*.py+self.center_crop%28&type=code). ", "> @rafaelpadilla You only need to run the slow tests for models whose image processors use `center_crop` in their `preprocess` method - [which should be ~20 models](https://github.com/search?q=repo%3Ahuggingface%2Ftransformers+path%3Amodels%2F*%2Fimage_processing_*.py+self.center_crop%28&type=code).\r\n\r\n:heavy_check_mark: These are not affected by the changes:\r\n```\r\nRUN_SLOW=1 pytest tests/models/clip/*\r\nRUN_SLOW=1 pytest tests/models/beit/*\r\nRUN_SLOW=1 pytest tests/models/flava/*\r\nRUN_SLOW=1 pytest tests/models/owlvit/*\r\nRUN_SLOW=1 pytest tests/models/bridgetower/*\r\nRUN_SLOW=1 pytest tests/models/videomae/*\r\nRUN_SLOW=1 pytest tests/models/poolformer/*\r\nRUN_SLOW=1 pytest tests/models/vit_hybrid/*\r\nRUN_SLOW=1 pytest tests/models/deit/*\r\nRUN_SLOW=1 pytest tests/models/chinese_clip/*\r\nRUN_SLOW=1 pytest tests/models/efficientnet/*\r\nRUN_SLOW=1 pytest tests/models/perceiver/*\r\n```\r\n\r\n:x: However, these models are affected by the fixes in the `center_crop()` and will have their tests failed:\r\n```\r\nRUN_SLOW=1 pytest tests/models/bit/* \r\nRUN_SLOW=1 pytest tests/models/levit/* \r\nRUN_SLOW=1 pytest tests/models/vivit/*\r\nRUN_SLOW=1 pytest tests/models/mobilenet_v1/*\r\nRUN_SLOW=1 pytest tests/models/mobilevit/*\r\nRUN_SLOW=1 pytest tests/models/mobilenet_v2/*\r\nRUN_SLOW=1 pytest tests/models/efficientformer/*\r\n```\r\n\r\nI'm glad we ran these tests! They've highlighted a significant trade-off: while the modifications in this PR address issue #22505, they also result in failing tests in 7 models. \r\nMaybe we should accept the different results in our `center_crop` and not merge this PR. :thinking: ", "If I understand correctly, the results will be different only for image with odd number as size (or size diff)?\r\nDo you know what kind of failures we have for `bit, mobilevit` etc. for this PR?\r\n ", "> If I understand correctly, the results will be different only for image with odd number as size (or size diff)? Do you know what kind of failures we have for `bit, mobilevit` etc. for this PR?\r\n\r\nYep. 
The results will be different if `image_height - crop_height` is odd or if `image_width - crop_width` is odd. This causes the crop of the image slightly different (~1 pixel shifted), resulting in these tests failures. \r\n\r\nDetails for each case -> Note that all are related to this small shift.\r\n\r\n**bit**\r\n(top, bottm, left, right) was originally `(0, 448, 74, 522)` and now is `(0, 448, 75, 523)`\r\n`tests.models.bit.test_modeling_bit::BitModelIntegrationTest.test_inference_image_classification_head()`\r\n\r\n**levit**\r\n(top, bottm, left, right) was originally `(16, 240, 58, 282)` and now is `(16, 240, 59, 283)`\r\n`tests.models.levit.test_modeling_levit.py::LevitModelIntegrationTest::test_inference_image_classification_head()`\r\n\r\n**vivit**\r\n(top, bottm, left, right) was originally `(16, 240, 115, 339)` and now is `(16, 240, 116, 340)` (for the first frame, for instance)\r\n`tests.models.vivit.test_modeling_vivit.py::VivitModelIntegrationTest::test_inference_for_video_classification()`\r\n\r\nmobilenet_v1\r\n(top, bottm, left, right) was originally (16, 240, 58, 282) and now is (16, 240, 59, 283)\r\ntests.models.mobilenet_v1.test_modeling_mobilenet_v1.py::MobileNetV1ModelIntegrationTest::test_inference_image_classification_head\r\n\r\n**mobilevit**\r\n(top, bottm, left, right) was originally `(16, 528, 106, 618)` and now is `(16, 528, 107, 619)`\r\n`tests.models.mobilevit.test_modeling_mobilevit.py::MobileViTModelIntegrationTest::test_inference_semantic_segmentation()`\r\n\r\n**mobilenet_v2**\r\n(top, bottm, left, right) was originally `(16, 529, 106, 619)` and now is `(16, 529, 107, 620)`\r\n`tests.models.mobilenet_v2.test_modeling_mobilenet_v2.py::MobileNetV2ModelIntegrationTest::test_inference_semantic_segmentation()`\r\n\r\n**efficientformer**\r\n(top, bottm, left, right) was originally `(16, 240, 58, 282)` and now is `(16, 240, 59, 283)`\r\n`tests.models.efficientformer.test_modeling_efficientformer.py::EfficientFormerModelIntegrationTest::test_inference_image_classification_head_with_teacher()`\r\n\r\nAnother possibility is to adapt the tests, so they can pass and fix #22505 .\r\nHowever, we may have backward compatibility problems as this new version will produce different results than the previous one.", "Thanks for the detailed information @rafaelpadilla . For me, we can change the tests to make them pass. For the backward compatibility, it is indeed tricky, especially CLIP is used everywhere now. Let's see what core maintainers @amyeroberts and @ArthurZucker say (especially Arthur has deal with some major changes in tokenizers recently).", "@ydshieh @rafaelpadilla In general, I wouldn't be in favour of adding in this change because of how many users' models and outputs this would affect, in particular for CLIP which is one of our most popular models. The outputs compared with OpenAI are different, but they're not \"wrong\". \r\n\r\nAlthough at the moment we're slightly out-of-step with torchvision and its transformations, we don't have a 1:1 correspondence with the library. Most people using vision models in large training pipelines will also not be using the image processors. \r\n\r\nFor reference, as noted in #22608, CLIP originally had the same cropping as torchvision, but this was (I believe accidentally) changed with #17628. \r\n\r\n@ArthurZucker WDYT? ", "Yeah let's keep it BC but allow people to switch to the new behaviour as it does not affect everyone. 
Tokenizer is specific and I am no longer allowed to break it ๐Ÿ˜… ", "> Yeah let's keep it BC but allow people to switch to the new behaviour as it does not affect everyone. Tokenizer is specific and I am no longer allowed to break it ๐Ÿ˜…\r\n\r\nCould you share (or point to the place) how you did in tokenizers to `allow people to switch to the new behaviour` (I remember you have used something like `legacy` flag ..?)", "Yes, [here](https://github.com/ArthurZucker/transformers/blob/72e9bd23250811083e8b2a37fd6143779d85cc51/src/transformers/models/llama/tokenization_llama.py#L154) it's set to None, and we raise a warning. I don't think we have to for CLIP, but just expose that a new behavior is available", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,701
1,701
CONTRIBUTOR
null
# What does this PR do? Fixes #22505 Our `center_crop` function does not give proper results if `orig_height - crop_height` is odd or if `orig_width - crop_width` is odd. It seems that our code [here](https://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py#L461C66-L461C88) and [here](https://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py#L464) alerts about this problem. This problem explains why: * Our `test_center_crop` fails [here](https://github.com/huggingface/transformers/blob/e469be340673d1f6931eb22562efd2be7f5a5b8d/tests/test_image_transforms.py#L329) if we try image sizes with odd height or width. e.g. changing this line [here](https://github.com/huggingface/transformers/blob/e469be340673d1f6931eb22562efd2be7f5a5b8d/tests/test_image_transforms.py#L317) to `image = np.random.randint(0, 256, (3, 223, 223))` will make the test fail. * Our results are different than OpenAI image features, as reported in issue #22505 This PR fixes this problem and includes pytests comparing the output of our `center_crop` function to `torchvision.transforms.CenterCrop` with different image sizes and expected output sizes. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @ArthurZucker and @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26238/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26238", "html_url": "https://github.com/huggingface/transformers/pull/26238", "diff_url": "https://github.com/huggingface/transformers/pull/26238.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26238.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26237
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26237/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26237/comments
https://api.github.com/repos/huggingface/transformers/issues/26237/events
https://github.com/huggingface/transformers/pull/26237
1,901,892,507
PR_kwDOCUB6oc5antlh
26,237
Fix the GitHub user mention in issue templates to the correct user
{ "login": "muellerz", "id": 1146450, "node_id": "MDQ6VXNlcjExNDY0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/1146450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muellerz", "html_url": "https://github.com/muellerz", "followers_url": "https://api.github.com/users/muellerz/followers", "following_url": "https://api.github.com/users/muellerz/following{/other_user}", "gists_url": "https://api.github.com/users/muellerz/gists{/gist_id}", "starred_url": "https://api.github.com/users/muellerz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muellerz/subscriptions", "organizations_url": "https://api.github.com/users/muellerz/orgs", "repos_url": "https://api.github.com/users/muellerz/repos", "events_url": "https://api.github.com/users/muellerz/events{/privacy}", "received_events_url": "https://api.github.com/users/muellerz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "It's all good :D. I got mentioned occasionally before, which I ignored, but checked that he was later tagged correctly. I don't really use Github so I just deleted the mails without thought, so no real mess there. It did get slightly annoying and I just had the spare time to fix it. And I'm sure having the intended user mentioned is in your interest, too.\r\n\r\nAll the best for the future ๐Ÿ‘ ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26237). All of your documentation changes will be reflected on that endpoint." ]
1,695
1,695
1,695
CONTRIBUTOR
null
# What does this PR do? This fixes a wrongly suggested mention in the issue templates, which pointed to a GitHub user not acquainted with this project, so that the correct contributor is tagged. #### Personal Note I can definitely see that his and my username seem switched given our real names, and I'm sorry for any confusion. That username was given to me by a teacher in school who didn't want to spell my full last name, and it stuck. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Documentation: @stevhliu and @MKhalusova
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26237/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26237", "html_url": "https://github.com/huggingface/transformers/pull/26237", "diff_url": "https://github.com/huggingface/transformers/pull/26237.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26237.patch", "merged_at": 1695080943000 }
https://api.github.com/repos/huggingface/transformers/issues/26236
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26236/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26236/comments
https://api.github.com/repos/huggingface/transformers/issues/26236/events
https://github.com/huggingface/transformers/pull/26236
1,901,823,691
PR_kwDOCUB6oc5anefb
26,236
Fixing tokenizer when `transformers` is installed without `tokenizers`
{ "login": "urialon", "id": 15002544, "node_id": "MDQ6VXNlcjE1MDAyNTQ0", "avatar_url": "https://avatars.githubusercontent.com/u/15002544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/urialon", "html_url": "https://github.com/urialon", "followers_url": "https://api.github.com/users/urialon/followers", "following_url": "https://api.github.com/users/urialon/following{/other_user}", "gists_url": "https://api.github.com/users/urialon/gists{/gist_id}", "starred_url": "https://api.github.com/users/urialon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/urialon/subscriptions", "organizations_url": "https://api.github.com/users/urialon/orgs", "repos_url": "https://api.github.com/users/urialon/repos", "events_url": "https://api.github.com/users/urialon/events{/privacy}", "received_events_url": "https://api.github.com/users/urialon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks @ArthurZucker , I added a `__repr__` function and `repr=True` to the `dataclass` definition.", "Hi @ArthurZucker ,\r\nI reverted the change to the `__repr__` function and to the `dataclass` decorator.\r\n\r\nI'm an not sure that the `__repr__` behavior is correct. When I load a Bart tokenizer and print it:\r\n```\r\n>>> tokenizer\r\nBartTokenizerFast(name_or_path='facebook/bart-large-mnli', vocab_size=50265, model_max_length=1024, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': AddedToken(\"<mask>\", rstrip=False, lstrip=True, single_word=False, normalized=False)}, clean_up_tokenization_spaces=True)\r\n```\r\n\r\nMost of the special tokens are strings rather than `AddedToken`s. \r\n\r\nSo, I am keeping only the `__str__` function as it fixes an existing bug.\r\nFixing the `__repr__` function is related but not necessarily coupled with this PR.\r\n\r\nThanks,\r\nUri", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26236). All of your documentation changes will be reflected on that endpoint.", "Thanks for contributing this fix back to upstream, @urialon!" ]
1,695
1,696
1,695
CONTRIBUTOR
null
# What does this PR do? This PR fixes the tokenization of the `<s>` and `</s>` tokens, when `transformers` is installed but `tokenizers` is not installed, fixing the string representation of the `AddedToken` class. Using `transformers` _without_ `tokenizers` installed results in the following problem: ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained('facebook/bart-large-mnli') print(tokenizer.convert_tokens_to_ids(['<s>', 'a', '</s>'])) print(tokenizer.encode('a')) ``` Prints: ``` >>> [0, 102, 2] >>> [50265, 102, 50266] ``` In other words, the tokenizer knows that the correct IDs for `<s>` and `</s>` are `0` and `2`, but when encoding an arbitrary string, it adds the new IDs `50265` and `50266` (which are not known to the model!). Using this solution, the tokenizer does *not* add additional token IDs 50265 and up, because it recognizes them as existing already in IDs 0-3. Then, encoding a string using a tokenizer results in adding `0` and `2` as the `<s>` and `</s>` tokens. The two lines of: ``` print(tokenizer.convert_tokens_to_ids(['<s>', 'a', '</s>'])) print(tokenizer.encode('a')) ``` result in the same output of `[0, 102, 2]`, as expected. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @ydshieh @hvaara @amyeroberts
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26236/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26236", "html_url": "https://github.com/huggingface/transformers/pull/26236", "diff_url": "https://github.com/huggingface/transformers/pull/26236.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26236.patch", "merged_at": 1695808684000 }
https://api.github.com/repos/huggingface/transformers/issues/26235
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26235/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26235/comments
https://api.github.com/repos/huggingface/transformers/issues/26235/events
https://github.com/huggingface/transformers/pull/26235
1,901,691,012
PR_kwDOCUB6oc5anBTs
26,235
Fix some docstring in image processors
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/followers", "following_url": "https://api.github.com/users/ydshieh/following{/other_user}", "gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}", "starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions", "organizations_url": "https://api.github.com/users/ydshieh/orgs", "repos_url": "https://api.github.com/users/ydshieh/repos", "events_url": "https://api.github.com/users/ydshieh/events{/privacy}", "received_events_url": "https://api.github.com/users/ydshieh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,695
1,695
1,695
COLLABORATOR
null
# What does this PR do? As Amy mentioned to me in one of my PRs, this fixes some docstrings in the image processors.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26235/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26235", "html_url": "https://github.com/huggingface/transformers/pull/26235", "diff_url": "https://github.com/huggingface/transformers/pull/26235.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26235.patch", "merged_at": 1695101742000 }
https://api.github.com/repos/huggingface/transformers/issues/26234
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26234/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26234/comments
https://api.github.com/repos/huggingface/transformers/issues/26234/events
https://github.com/huggingface/transformers/pull/26234
1,901,604,877
PR_kwDOCUB6oc5amujc
26,234
Add tokenizer kwargs to fill mask pipeline.
{ "login": "nmcahill", "id": 26336484, "node_id": "MDQ6VXNlcjI2MzM2NDg0", "avatar_url": "https://avatars.githubusercontent.com/u/26336484?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nmcahill", "html_url": "https://github.com/nmcahill", "followers_url": "https://api.github.com/users/nmcahill/followers", "following_url": "https://api.github.com/users/nmcahill/following{/other_user}", "gists_url": "https://api.github.com/users/nmcahill/gists{/gist_id}", "starred_url": "https://api.github.com/users/nmcahill/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nmcahill/subscriptions", "organizations_url": "https://api.github.com/users/nmcahill/orgs", "repos_url": "https://api.github.com/users/nmcahill/repos", "events_url": "https://api.github.com/users/nmcahill/events{/privacy}", "received_events_url": "https://api.github.com/users/nmcahill/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Thanks. Okay I will look into that. First time contributing to HF. ", "Try to install the styling package with `pip install -U \"transformers[quality]\"` ๐Ÿ˜‰ ", "Anything else you can think of? I can't seem to get the setup and quality checks to pass...", "Hey @nmcahill, the recommended tool to run here is `make fixup` which takes care of everything under the hood and does so quite fast.\r\n\r\nI've taken the liberty to add code wrappers around your example and run `make fixup` on your PR directly so that we may merge this PR and include it in today's release. Thank you for your contribution!", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26234). All of your documentation changes will be reflected on that endpoint.", "Thanks @LysandreJik! ", "The test `tests/pipelines/test_pipelines_fill_mask.py::FillMaskPipelineTests::test_large_model_pt` started to fail since this PR being merged. It is the new test block added in this PR\r\n\r\nhttps://github.com/huggingface/transformers/blob/3e68944cc482e915b27b24eec1f87e40be522ec0/tests/pipelines/test_pipelines_fill_mask.py#L219-L229\r\n\r\nThe results we get is shown at the end. \r\n\r\n@nmcahill Could you check if the `tokenizer_kwargs={\"truncation\": True},` does its job here? \r\n\r\nThank you in advance.\r\n\r\n\r\n```\r\n(Pdb) nested_simplify(outputs, decimals=6),\r\n([{'score': 0.281868, 'token': 6, 'token_str': ',', 'sequence': 'My name is,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipisc\r\ning elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lo\r\nrem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum d\r\nolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit am\r\net, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consect\r\netur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipis\r\ncing elit,Lorem'}, {'score': 0.095431, 'token': 46686, 'token_str': ':,', 'sequence': 'My name is:,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit am\r\net, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consect\r\netur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipis\r\ncing elit,Lorem 
ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,L\r\norem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum \r\ndolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem ipsum dolor sit amet, consectetur adipiscing elit,Lorem'}],)\r\n\r\n```", "FYI: both `test_large_model_pt` and `test_large_model_tf` are failing", "What's strange is that I expect to see: `\"My name is <mask>Lorem ipsum dolor sit amet,...\"` which would be the expected output of ` \"My name is <mask>\" + \"Lorem ipsum dolor sit amet, consectetur adipiscing elit,\" * 100, ` but instead i see \"My name is,Lorem ipsum dolor sit amet,\". \r\n\r\nI know these tests passed when my pr was approved. I will need to get some time at night to dig into this... ", "It might simply be that the terminal output is not showing the word `<mask>` though. \r\n\r\nOther than that oddity, though, the fact that it is returning scores at all instead of failing with long input text is the true point of this unit test... I am not sure why the tokens and scores have changed since I tested this locally, but I'm tempted to change the unit test to check for results at all rather than checking for a particular set of tokens/scores. Would that work for everyone?", "> I am not sure why the tokens and scores have changed\r\n\r\nHi @nmcahill Thank you for the response ๐Ÿค— \r\n\r\nThis might be the hardware difference, we use GPU T4. As long as `sequence` is the correct one (i.e. being truncated here), we can adjust the values in other fields.\r\n\r\nMy main concern here is that it looks we pass `tokenizer_kwargs={\"truncation\": True}` but it doesn't seem have the effect in this test.\r\n\r\nTake your time on this, but if you could not allocate some time in the following weeks, let me know ๐Ÿ™ ", "So the behavior if truncation is set to False and the input string is very\r\nlong would be that the model.forward will throw an error. The truncation\r\nhappens between the tokenizer and and the model so the bit that is actually\r\ntruncated is the vector called โ€œinput_idsโ€ in model inputs, the input\r\nstring never gets truncated so no need to check that visually.\r\n\r\nTo prove it to yourself that the truncation=True works, try setting it to\r\nfalse and seeing if the model.forward fails.\r\n\r\nIf it doesnโ€™t fail with Truncation=False then Iโ€™ll definitely try fixing\r\nit. But as far as I can tell, I think this is probably working as expected.\r\n\r\n\r\n\r\nOn Wed, Dec 6, 2023 at 1:53 AM Yih-Dar ***@***.***> wrote:\r\n\r\n> I am not sure why the tokens and scores have changed\r\n>\r\n> Hi @nmcahill <https://github.com/nmcahill> Thank you for the response ๐Ÿค—\r\n>\r\n> This might be the hardware difference, we use GPU T4. As long as sequence\r\n> is the correct one (i.e. 
being truncated here), we can adjust the values in\r\n> other fields.\r\n>\r\n> My main concern here is that it looks we pass tokenizer_kwargs={\"truncation\":\r\n> True} but it doesn't seem have the effect in this test.\r\n>\r\n> Take your time on this, but if you could not allocate some time in the\r\n> following weeks, let me know ๐Ÿ™\r\n>\r\n> โ€”\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/pull/26234#issuecomment-1842451624>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AGI5ZZEMOKMT5EWHDKEVILTYIAXBLAVCNFSM6AAAAAA45GOORWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQNBSGQ2TCNRSGQ>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n", "OK, got it. I was confused as I saw you added the expected output as `\"sequence\": \"My name is grouped\"` which led me to think the `sequence` is truncated. But this is not the case as you mentioned above." ]
1,695
1,701
1,696
CONTRIBUTOR
null
This PR addresses #25994 by adding `tokenizer_kwargs` as an input preprocessing parameter to the fill-mask pipeline. Attn: @BramVanroy @Narsil.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26234/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26234/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26234", "html_url": "https://github.com/huggingface/transformers/pull/26234", "diff_url": "https://github.com/huggingface/transformers/pull/26234.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26234.patch", "merged_at": 1696321510000 }
https://api.github.com/repos/huggingface/transformers/issues/26233
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26233/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26233/comments
https://api.github.com/repos/huggingface/transformers/issues/26233/events
https://github.com/huggingface/transformers/pull/26233
1,901,597,090
PR_kwDOCUB6oc5ams2W
26,233
docs: rewrite some document
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "@MKhalusova can you help explain why my code fail the test case, this seem like no problem with my view.", "Can you provide more context on what this PR is trying to achieve? \r\nAlso, I may not be the right person to ping for a review.", "I do document, there's something like missing punctuation so I add it, and also add new extension code to reduce reduntant, but this fails so at now I return extension to origin", "I would like cc @amyeroberts to review my code" ]
1,695
1,696
1,696
CONTRIBUTOR
null
Just editing some documentation to make it easier to read. I would like to cc @MKhalusova to review my changes.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26233/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26233", "html_url": "https://github.com/huggingface/transformers/pull/26233", "diff_url": "https://github.com/huggingface/transformers/pull/26233.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26233.patch", "merged_at": null }
https://api.github.com/repos/huggingface/transformers/issues/26232
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26232/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26232/comments
https://api.github.com/repos/huggingface/transformers/issues/26232/events
https://github.com/huggingface/transformers/pull/26232
1,901,529,818
PR_kwDOCUB6oc5ameHK
26,232
docs: change assert to raise and some small docs
{ "login": "pphuc25", "id": 81808312, "node_id": "MDQ6VXNlcjgxODA4MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/81808312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pphuc25", "html_url": "https://github.com/pphuc25", "followers_url": "https://api.github.com/users/pphuc25/followers", "following_url": "https://api.github.com/users/pphuc25/following{/other_user}", "gists_url": "https://api.github.com/users/pphuc25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pphuc25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pphuc25/subscriptions", "organizations_url": "https://api.github.com/users/pphuc25/orgs", "repos_url": "https://api.github.com/users/pphuc25/repos", "events_url": "https://api.github.com/users/pphuc25/events{/privacy}", "received_events_url": "https://api.github.com/users/pphuc25/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi, in general we like to use `raise` statements instead of `assert` because the latter are considered more like debugging statements that can be toggled on or off in certain setups.", "So to catch more debugging information, I switch from using assert to raise. I believe the code I am working on is correct, @stevhliu.", "Hi @stevhliu, can you help me, I cannot run this command", "Yes, it looks like you're missing `black` which can be fixed by installing it first:\r\n\r\n```bash\r\npip install -e \".[quality]\"\r\n```\r\n\r\nThen you should be able to run the `make style` command :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26232). All of your documentation changes will be reflected on that endpoint.", "Hi @LysandreJik, I have reverted them back. However, in run_mlm_no_trainer the logging do not have eval_loss, this seem the important one so user can know how well loss of their model is, I think we should keep add the eval_loss logging in that file." ]
1,695
1,695
1,695
CONTRIBUTOR
null
Hi, this PR improves code readability by replacing `assert` statements with `raise` statements, and adds some documentation and `logger.info` statements. I would like to cc @stevhliu to review my PR.
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26232/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26232", "html_url": "https://github.com/huggingface/transformers/pull/26232", "diff_url": "https://github.com/huggingface/transformers/pull/26232.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26232.patch", "merged_at": 1695888860000 }
https://api.github.com/repos/huggingface/transformers/issues/26231
https://api.github.com/repos/huggingface/transformers
https://api.github.com/repos/huggingface/transformers/issues/26231/labels{/name}
https://api.github.com/repos/huggingface/transformers/issues/26231/comments
https://api.github.com/repos/huggingface/transformers/issues/26231/events
https://github.com/huggingface/transformers/pull/26231
1,901,511,698
PR_kwDOCUB6oc5amaNx
26,231
Skip unfixable failing tests on RoCm
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26231). All of your documentation changes will be reflected on that endpoint.", "This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored." ]
1,695
1,698
1,698
COLLABORATOR
null
A bug in nn.Conv2d in PyTorch 2.0.1 on RoCm systems makes the output of TVLT diverge significantly from CPU / Nvidia GPUs. The issue is fixed on nightly and on the 2.1.0 RC. Some slow tests may be affected as well, hence marking this as a draft for now. Adding the relevant tensors / code to reproduce the issue just in case. [conv2d_mi210.zip](https://github.com/huggingface/transformers/files/12652047/conv2d_mi210.zip)
{ "url": "https://api.github.com/repos/huggingface/transformers/issues/26231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/transformers/issues/26231/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/transformers/pulls/26231", "html_url": "https://github.com/huggingface/transformers/pull/26231", "diff_url": "https://github.com/huggingface/transformers/pull/26231.diff", "patch_url": "https://github.com/huggingface/transformers/pull/26231.patch", "merged_at": null }